Data Platform Engineer
Obsessed with data? At scale? Do you have years of experience with Hadoop, Spark, BigQuery, Redshift, and more? We're looking for talented Data Platform Engineers to help us change the way the world works with data.
Who you are:
- Breadth: you have an unparalleled ability to draw on a wide range of technologies to build the right solutions for your users. You're well versed in Big Data platforms including Hadoop, Spark, BigQuery, and Redshift.
- Scale: you have significant experience designing, deploying, and scaling big data systems operating at terabytes per day.
- Polish: you are well versed in monitoring and automation.
- Experience: you have 5+ years of industry experience.
What we look for:
- Polymaths: we believe the best teams are composed of individuals who have demonstrated excellence in a variety of technical areas, love sharing knowledge with their team, and actively seek out environments where they are not the smartest person in the room.
- Context seekers: we love the 5 Whys technique. Why? The more you ask, the more context you have about our market, our company, our mission, and our product. Why (do we care)? With more context, you can make better-informed decisions on your own. Why (is this important)? It means less process, less management overhead, and most importantly, more personal growth for you.
- Impact thinkers: the biggest challenge we face each day is how best to invest our time. The best people we've worked with have a remarkable ability to take a step back and identify how they, and the company, can have the largest impact. They are the ones who view the wheel as already well designed, and instead passionately pursue the innovations that most profoundly affect our users.
- Foundation builders: every team, when designing technology, tools, or process, must decide for which horizon to build. Do you design with the next few months, years, or decades in mind? We believe great engineers build for horizon n+1, while designing for n+2. Great engineers find that incredible balance between simplicity and extensibility, initial and recurring time investments.
What we offer:
- Stock options
- Caltrain shuttle
- Free lunches & snacks
- Fancy espresso machine
How we build:
- Environment: we run everything in Docker images on Kubernetes, within Google Kubernetes Engine (GKE). Why? Because after getting over the initial learning curve, we've found Kubernetes to be incredibly powerful, with a vibrant community that is driving the product forward at an incredible pace.
- Testing & Deploy: our continuous deployment pipeline gates all changes on 100% test pass rate. Why? We believe technical debt is a slippery slope and people always underestimate the impact of shortcuts. It really is better to do things right from the beginning.
- Services & APIs: we like both TDD and DDD (Doc Driven Development). We like API Driven Development (acronym TBD) even more. Why? In an early-stage company things change very quickly. Leveraging environments like Kubernetes lets us easily spin up services that expose clean APIs for particular purposes. We like a microservice approach for its modularity, speed of development, and maintainability. Each of our services exposes REST and/or gRPC APIs that adhere to a well-thought-out spec, ensuring that if we need to change implementation strategies, the work is well contained.
- Comms: Slack. It's really just that good.
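For a concrete flavor of the Environment bullet above, a service deployment on Kubernetes might look like the following minimal manifest. All names, labels, and image paths here are illustrative, not taken from our actual configuration:

```yaml
# Illustrative only: a minimal Deployment for a hypothetical
# "ingest" service; names and the image path are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingest
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingest
  template:
    metadata:
      labels:
        app: ingest
    spec:
      containers:
        - name: ingest
          image: gcr.io/example-project/ingest:1.0.0
          ports:
            - containerPort: 8080
```

`kubectl apply -f deployment.yaml` rolls it out, and Kubernetes handles scheduling the three replicas.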
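The gate described under Testing & Deploy can be sketched roughly as follows. This is a simplified illustration, not our actual pipeline: the command lists are stand-ins, and the real pipeline gates on the full test suite.

```python
import subprocess
import sys

def gate_and_deploy(test_cmd, deploy_cmd):
    """Run the test suite; deploy only on a 100% pass (exit code 0)."""
    result = subprocess.run(test_cmd)
    if result.returncode != 0:
        print("tests failed: deploy blocked", file=sys.stderr)
        return False
    # Suite passed; run the deploy step and surface any failure loudly.
    subprocess.run(deploy_cmd, check=True)
    return True
```

Because every change flows through the same gate, a red suite blocks everyone's deploys, which keeps the incentive to fix tests immediately.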
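As a sketch of the kind of small, single-purpose service described under Services & APIs, here is a minimal REST endpoint using only the Python standard library; the route and payload are invented for illustration, and a real service would be generated from its API spec:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny REST sketch: GET /health returns a JSON status."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

In a container, the port above would be the one exposed via `containerPort`, and consumers would only ever see the JSON contract, never the implementation.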