Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!
About Shape Security
We are security and web specialists, pioneers, evangelists, and elite researchers. We believe in the power of the Internet to be a positive force;
our mission is to protect every website and mobile app from cybercriminals. Shape’s founders fought cybercrime at the Pentagon, Google, and other leading security organizations.
We are backed by some of the most prominent leaders and investors in the technology industry, including Kleiner Perkins and Google Ventures, among others.
Come be a part of our unparalleled team that is responsible for making the Internet a safer place for everyone.
As a software engineer on Shape’s data systems team, Shape Scalable Data Processing (SSDP), you will solve unique and challenging problems in our cloud-based data system, which handles petabyte-scale data collection, stream processing, storage, query, and visualization in near real-time.
At Shape, you will work with exceptional team members, use cutting-edge technologies, and solve world-class engineering challenges.
Responsibilities

Design and develop Shape’s cloud-based data system to process petabyte-scale data in near real-time.
Research new technologies for more efficient and scalable data collection, processing, storage, and retrieval.
Design and implement tools and methodologies to ensure data integrity and build out our data system health monitoring solutions.
Collaborate with the product team, other engineering teams, and the operations team to design, implement, and maintain multi-functional data systems for Shape’s growing core business needs.
Qualifications

Bachelor's degree (or above) in Computer Science or a related field, or equivalent experience.
Proficient in one of the following languages: Java, C++, or Go.
Proficient in SQL.
Communicate effectively, with clarity and concision, in both written and verbal forms.
Proficient in Python.
Working experience with one of the public cloud platforms: GCP, AWS, or Azure.
Working experience with pub/sub technologies such as GCP Pub/Sub, Kafka, etc.
Working experience with streaming (or micro-batching) data processing platforms like GCP Dataflow, Apache Flink, Apache Spark, etc.
Working experience with both relational databases and NoSQL databases.
Working experience with data processing programming models such as Apache Beam.
Working experience with data warehouse technologies like GCP BigQuery, AWS Redshift, Azure Cosmos DB, Snowflake, Apache Druid, Clickhouse, etc.
Expertise in petabyte-scale data processing frameworks.
This job description is intended to be a general representation of the responsibilities and requirements of the role. It may not be all-inclusive, and responsibilities and requirements are subject to change.