Data Engineer (Contractor)
Amsterdam / Poland
Technology – Software Development / Contractor / Remote
Be yourself at Protolabs
Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. We are an equal opportunity employer, and we are committed to building a diverse team whose members feel valued in the workplace. So if you feel you don't meet every single requirement but are still intrigued, we encourage you to apply anyway! You can help make our company even better. We do not discriminate based on race, color, national origin, sexual orientation, gender, age, mental or physical ability, or any way you represent yourself. We strongly believe diversity makes for more successful teams.
Why Protolabs?
We are the leaders in digital manufacturing. We hire doers, makers, and creative thinkers who tackle their roles with an entrepreneurial spirit. Our culture is centered around meaningful work that brings new and innovative products to market at unprecedented speeds. We are a diverse team from all walks of life, and we take pride in our people, who are smart, genuine, humble, and passionate about what we do. It’s our people who fuel our creativity and make our culture feel like home.
We’re looking for a Data Engineer (Contractor) to join our Data & Analytics Engineering team.
You’ll play a key role in one of our most exciting projects: migrating legacy pipelines across two regions into a unified modern data stack. In this role, you’ll work closely with Analytics Engineers, supporting ingestion through a mix of custom Python jobs and Airbyte (Cloud and OSS), and helping us strengthen our foundation in Snowflake and dbt. You won’t just build pipelines — you’ll help design resilient, scalable systems that empower teams across the business with reliable data.
We’re a cross-functional team working under the Engineering organization, with teammates in Amsterdam and the US, collaborating daily across time zones. If you're someone who enjoys solving complex data challenges with a team-first mindset and takes pride in building data systems that make a difference — we’d love to hear from you.
Our tech stack: Snowflake, dbt, Airbyte, AWS, Prefect, DOMO.
What you’ll do
- Drive the migration of legacy pipelines into Snowflake and dbt, following modern data architecture best practices
- Develop and maintain Python-based Prefect flows for custom ETL, reverse-ETL, and lightweight ML jobs
- Use Airbyte (Cloud and OSS) to ingest data from internal and external sources
- Collaborate cross-functionally to ensure a scalable and reliable platform:
  - Partner with Analytics Engineers on early-stage dbt model development
  - Work with Platform teams to monitor and maintain self-hosted data tools (e.g., Airbyte OSS, DataHub)
  - Coordinate with upstream data producers to securely and efficiently ingest from transactional systems
- Manage and optimize Snowflake resources, including warehouses, RBAC, and usage monitoring
What it takes
- 3+ years of experience in Data Engineering
- Proficiency in Python, with a focus on writing clear, maintainable code
- Good understanding of cloud infrastructure, especially AWS (e.g. S3, IAM, RDS, EC2)
- Familiarity with the modern data stack, including tools like Snowflake, dbt, Airbyte, and Prefect
- A collaborative, team-oriented mindset — someone who’s eager to contribute, support teammates, and help us accelerate our migration projects
- A thoughtful, proactive problem-solver who communicates clearly and approaches technical challenges with patience and care
Nice to have
- Experience with dbt in multi-project (dbt mesh) setups
- Strong knowledge of Snowflake features and governance best practices
- Familiarity with DataOps, maintaining data tools on Kubernetes, and setting up monitoring (Datadog) for data services
- Exposure to event-driven architectures or handling large-scale data volumes (future use cases)