Data Engineer

Remote, USA / Software Engineering
At Density, we build one of the most advanced people-sensing systems in the world. The product and infrastructure are nuanced and one-of-a-kind. Building this product for scale has been an exercise in patience, creativity, remarkable engineering, laser physics, global logistics, and grit. The team is thoughtful, driven, and world-class.

We build systems that are real-time, accurate, and anonymous by design. Our systems help today’s largest companies understand how their buildings get used. We have counted hundreds of millions of people.   

Counting people in real time is rare and particularly hard to achieve. It lets a user walk into a room, pass beneath our sensor, and see the room’s occupancy increment 700 ms later.

Today alone, Density will ingest over 1 million events. In the coming year, our sensor network is on track to grow tenfold. The overall system load is exploding. Maintaining our low-latency standards requires an increasingly thoughtful system.

We’re architecting infrastructure where annual, unscheduled downtime is measured in minutes. We’re building intelligent redundancies so missed events are a rarity. We’re assembling an exceptional engineering team to support always-on, intelligible analytics generated on the fly.

As a Data Engineer at Density, you will join a growing team of data engineers tasked with building state-of-the-art IoT data pipelines and big-data processing infrastructure.

In this role you will:

    • Support the data and infrastructure needs of data scientists to help develop new product solutions. 
    • Collaborate on defining the data engineering tech stack and infrastructure.
    • Work with software and hardware developers to establish good data engineering practices and processes across the organization.

The ideal candidate will have:

    • 2+ years’ experience as a data engineer.
    • Python - Extensive knowledge of Python programming for data engineering.
    • ETL - Experience implementing and monitoring production data pipelines.
    • SQL - A working knowledge of SQL.
    • Linux command line - Working knowledge of the Linux command line and security.
    • AWS cloud experience - EC2, S3, and IAM configuration and automation.
    • You have an awareness of your weak spots and a genuine desire to improve. 
    • You’re looking for a long-term role with a company that has long-term ambition. 
    • You can balance a demanding workload, discern priorities, and communicate tradeoffs effectively.

The icing on the cake:

    • Experience with big data tools including Apache Spark (PySpark) and Apache Kafka.
    • Working knowledge of Apache Airflow.
    • Experience with Python remote kernels (with Spyder or PyCharm).
    • Knowledge of real-time streaming tools like Kafka Streams or Apache Flink.
    • Experience with image processing or complex sensor data.
    • Familiarity with C++.

What we bring:

    • A team hailing from places like Apple, Meraki, HashiCorp, Stanford, NASA, and beyond.
    • $100M from investors such as Kleiner Perkins, Founders Fund, and Upfront Ventures.
    • A work environment full of fun, smart, dedicated, and kind teammates.
    • Our principles - Be Humble, Seek Feedback, and Solve the Fundamental Problem.
    • Excellent health benefits spanning medical, dental, vision, mental, reproductive, and active health. Mandatory PTO and more.

We are looking for candidates who are strong in several areas and have exposure to, or interest in, the others. Most of all, we are looking for candidates who see themselves as a meaningful addition to Density’s team and culture.