Data Engineer (East Coast Remote)

United States
Data & Analytics /
Full-time /
Remote
About Empassion

Empassion is a Management Services Organization (MSO) focused on improving quality of care and reducing costs for an often-neglected “advanced illness / end-of-life” patient population, which represents 4 percent of the Medicare population but 25 percent of its costs. The burden falls hardest on families, who are left with minimal options and less time with their loved ones. Empassion expands access to tech-enabled, proactive care while delivering superior outcomes for patients, their communities, the healthcare system, families, and society.
$145,000 - $160,000 a year

The Opportunity

Join our high-impact Data & Analytics team to shape a modern, flexible analytics platform that powers Empassion’s mission. As a Data Engineer, you’ll collaborate with analysts and cross-functional partners—Growth, Product, Operations, and Finance—to turn complex data into actionable insights. Using tools like SQL, dbt, and Looker, you’ll build pipelines, models, and dashboards that decode patient care journeys and amplify our value to partners. This is a chance to influence both internal strategy and external impact from day one.


What You’ll Do

🌟 Partner with teams across the business and external partners to understand data needs and deliver reliable pipelines and models that solve real problems.
📂 Build and maintain scalable ingestion and egress pipelines in Airflow and dbt Cloud, ensuring high-quality, automated data flows across cloud environments.
⚙️ Implement unit tests and monitoring to guarantee data integrity and reproducibility.
🗂 Model, transform, and structure healthcare datasets into usable formats that power data science models and other reporting marts.
🚀 Enhance and scale data models with SQL and dbt, ensuring precision and adaptability for new partnerships.
🐍 Write Python code for Apache Airflow DAGs, components, and utilities that orchestrate and monitor data workflows. Build complex pipelines that enable flexible scheduling, conditional logic, and smooth integration across multiple data sources (see the sketch after this list).
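For a concrete flavor of that orchestration work, here is a minimal sketch of a daily ingestion DAG using the TaskFlow API from a recent Airflow 2.x release. The DAG name, file path, and staging table are hypothetical illustrations, not Empassion's actual pipelines:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["ingestion"])
def partner_eligibility_ingest():
    """Hypothetical daily pipeline: fetch a partner eligibility file, then load it to staging."""

    @task
    def extract() -> str:
        # Fetch the latest partner file and return its location (stubbed path for illustration).
        return "gs://example-bucket/eligibility/latest.csv"

    @task
    def load_to_staging(uri: str) -> None:
        # Load the extracted file into a warehouse staging table (stubbed for illustration).
        print(f"Loading {uri} into staging.partner_eligibility")

    # Dependency: extract runs first; its output feeds the load step.
    load_to_staging(extract())


partner_eligibility_ingest()
```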


What You’ll Bring

1–4 years of experience in data engineering or analytics engineering, with a proven ability to build pipelines and scalable workflows.
Strong SQL skills for querying large, complex datasets.
Proficiency in Python for data engineering tasks (transformations, APIs, automation).
Experience with cloud data warehouses and storage (GCP preferred: BigQuery, Cloud Storage, Composer; AWS/Azure equivalents acceptable); a short illustrative sketch of this stack follows the list.
Hands-on experience with dbt or similar data modeling tools.
Comfort working in collaborative dev/staging/prod environments, partnering with Product and Tech to safely test, launch, and anticipate the impact of new changes.
Curiosity about operational workflows and a drive to partner with non-technical teams, ensuring data and reporting align with how the business actually runs. You're not just a spec-taker; you're part of the solution.
A proactive, problem-solving mindset and ability to thrive in fast-paced, iterative environments.
Strong communication skills to collaborate with analysts, engineers, and business stakeholders.
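To make the stack above concrete, here is a minimal, hypothetical sketch of a routine task in this kind of environment: loading a partner file from Cloud Storage into a BigQuery staging table with the google-cloud-bigquery client. The project, bucket, and table names are invented for illustration and do not describe Empassion's actual infrastructure.

```python
from google.cloud import bigquery

# Hypothetical project, bucket, and table names, for illustration only.
client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,             # assumes partner files ship with a header row
    autodetect=True,                 # infer the schema here; production pipelines would pin one
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/eligibility/2024-01-01.csv",
    "example-project.staging.partner_eligibility",
    job_config=job_config,
)
load_job.result()  # block until the load finishes; raises on failure
print(f"Loaded {load_job.output_rows} rows into staging.partner_eligibility")
```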


Bonus Points

Knowledge of healthcare data (claims, ADT feeds, eligibility files).
Familiarity with Git/GitHub for version control.
Early-stage startup experience (seed/Series A), especially at mission-driven companies.
Experience building semantic layers and data models in Looker (LookML).

Ready to Make a Difference?

If you’re driven by data, healthcare, and impact, apply and let’s talk!