Senior Data Engineer

Bangalore / Testing / Full-Time / On-site

Role Overview

We are looking for a Senior Data Engineer to play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate will have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support both real-time and batch processing needs.

What you'll be doing:

    • Design, develop, and optimize DBT models to support scalable data transformations.
    • Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect.
    • Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks.
    • Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures.
    • Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications.
    • Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution (a minimal Airflow sketch follows this list).
    • Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows.
    • Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance.
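
To make the orchestration expectation concrete, here is a minimal sketch (not our production code) of an Airflow DAG that runs and tests DBT models on a schedule. The DAG id, schedule, and project path are hypothetical placeholders, and the BashOperator approach assumes the dbt CLI is installed on the Airflow workers:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical path; adjust to wherever the DBT project is deployed.
    DBT_PROJECT_DIR = "/opt/dbt/analytics"

    with DAG(
        dag_id="dbt_daily_transformations",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Build all models in the project.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command=f"dbt run --project-dir {DBT_PROJECT_DIR}",
        )

        # Run the schema and data tests defined alongside the models;
        # a test failure fails the DAG run and surfaces in monitoring.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command=f"dbt test --project-dir {DBT_PROJECT_DIR}",
        )

        dbt_run >> dbt_test

Keeping run and test as separate tasks makes failures visible per stage and lets retries target only the step that failed.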

What you'll have:

    • 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning.
    • 5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning.
    • 5+ years of programming experience, especially in building custom DBT macros, scripts, and APIs, and in working with AWS services using boto3 (see the sketch after this list).
    • 3+ years of experience with orchestration tools such as Apache Airflow and Prefect for scheduling DBT jobs.
    • Hands-on experience with modern cloud data platforms such as Snowflake, Redshift, Databricks, or BigQuery.
    • Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
    • Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform).
    • Ability to communicate timelines effectively and deliver the MVPs committed for each sprint.
    • Strong analytical and problem-solving skills, with the ability to work across cross-functional teams.
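
As an illustration of the boto3 and event-driven experience listed above, here is a minimal sketch of an AWS Lambda handler that reacts to an S3 object-created event and enqueues the file's location on SQS for downstream ingestion. The queue URL and message fields are hypothetical:

    import json

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    # Hypothetical queue URL for downstream ingestion workers.
    QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/ingest-events"


    def handler(event, context):
        """Lambda entry point for S3 object-created notifications."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Fetch only object metadata, not the full payload, to keep
            # the Lambda lightweight; transformation happens later in DBT.
            head = s3.head_object(Bucket=bucket, Key=key)

            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=json.dumps(
                    {"bucket": bucket, "key": key, "size": head["ContentLength"]}
                ),
            )

Restricting the Lambda to metadata extraction and hand-off like this leaves the heavy transformation work to DBT models downstream.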

Nice to haves:

    • Experience in hardware manufacturing data processing.
    • Contributions to open-source data engineering tools.
    • Knowledge of Tableau or other BI tools for data visualization.
    • Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization.