Data & Visualization Engineer

Remote (NYC) / Engineering
Polywork is building a new kind of professional social network. Whereas traditional professional networks label you with a single job title, Polywork lets people share what they actually do in a timeline.

Check out a quick read from our CEO and Founder, Peter Johnston, on Why We Started Polywork, and see what our users are saying about us!

As a Data & Visualization Engineer, you will be responsible for deriving insights and building data infrastructure for one of the most innovative future-of-work platforms.

We expect candidates to be self-driven quick learners with a solid understanding of, and demonstrable experience with, industry-standard data infrastructure and visualization technologies. The candidate will own our data sources. For the right candidate, there will be opportunities to grow into future leadership roles as we expand our data engineering and ML teams.

This is a full-time, fully remote position (although we’re based out of New York). You will report directly to our VP of Engineering and collaborate across teams such as Machine Learning Research, Business, and Growth and Marketing.

You Will:

    • Own initiatives to quantify the success of our product and of our business and marketing efforts
    • Own reporting and dashboards for different product-focused groups within the company
    • Build tooling as needed for data infrastructure, ETL, and visualization
    • Work with our machine learning team as needed

Must Have:

    • Solid experience with Python, shell scripting, Docker, and related development tools, and a track record of writing high-quality code
    • Experience with visualization tools and libraries such as Seaborn, Vega, and Altair
    • Experience with AWS and GCP technologies and tooling around them
    • Comfort using SQL and NoSQL databases, and, in particular, the ability to quickly write queries to answer business questions
    • Experience with Postgres is a must.
    • Working knowledge of AWS Lambda, Kubeflow, Spark, Kafka/Kinesis, and Heroku
    • Experience building and optimizing data pipelines, architectures, and datasets for ETL workloads
    • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
    • Strong organizational skills with a bias for execution

Nice to Have:

    • Experience with tools like Heap and Domo

Benefits:

    • Remote work 
    • Choose your own hardware setup
    • Paid lunch on Fridays (even remote)
    • Home Office Stipend
    • Learning & Development Stipend
    • Competitive health, dental, & vision benefits 
    • Parental leave 
    • Flexible PTO and sick time - if you need time off, take it!
    • Wellness Gym/Spa reimbursement 
    • 401K
Join us as we strive to make the world more productive by connecting people to more possibilities.