Azure Data Engineer

Mexico City, Mexico / Lingaro / Payroll / Remote
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data.

Responsibilities

    • Implement data ingestion pipelines from diverse data sources using Azure Data Factory, Azure Databricks, and other ETL tools.
    • Develop scalable, reusable self-service frameworks for data ingestion and processing.
    • Design, build, and administer SQL Server databases in the Azure cloud environment.
    • Model data and integrate it across various systems.
    • Assess and implement best practices for data manipulation.
    • Create and maintain Azure Data Factory pipelines.
    • Integrate end-to-end data pipelines that move data from source to target repositories, ensuring data quality and consistency.

Requirements

    • Over 3 years of Azure development experience.
    • Proficiency in cloud-based solutions.
    • Comprehensive understanding of and practical experience with Git.
    • Sound familiarity with Microsoft ETL tools, including Azure Databricks, Azure Data Factory, Azure Data Lake, and SSIS.
    • Hands-on experience with both structured and unstructured data.
    • Proficiency with ARM templates.
    • Proficiency in working with JSON.
    • Good grasp of Azure DevOps or Jira.
    • Knowledge of SQL.
    • Effective communication skills, including the ability to provide customers with technical insights and interpret data for them.
    • Ability to work independently with a strong sense of ownership for assigned tasks.
    • Ability to collaborate effectively in cross-functional and cross-cultural team settings.

Nice to have

    • Proficiency in data analysis programming, particularly PySpark and Spark SQL, or a willingness to learn.
    • Understanding of Azure Analysis Services (AAS).
    • Familiarity with Power BI.