Senior Data Engineer
Warsaw
Consulting – Polska Team /
Permanent contract & B2B /
Remote
170 zł - 200 zł a day
We are looking for data engineers with strong data literacy and proficiency in data
platform architecture patterns. Our data platform leverages MS Azure services across
ingestion, staging, processing and serving. We are now entering a phase of scaling our
Databricks solution, so we are looking for candidates with relevant skills in this technology.
What you will do:
Design and build data solutions: ingest, transform, and publish data from sources to
destinations using Azure Databricks.
Implement processes and systems to monitor data quality, ensuring production data is always
accurate and available for key stakeholders and business processes.
Perform the data analysis required to troubleshoot data-related issues and assist in their
resolution.
Develop and maintain scalable data pipelines in Databricks that integrate with enterprise data
streams. Build data pipelines that feed business intelligence tools and expose data to end users
through Power BI.
Design and implement Data Architectures.
The position is ideal for you if you have:
3+ years of experience as a Data Engineer
Experience with Azure Databricks and Power BI
Fluent in English (written and spoken)
By joining Margo you can expect:
Ability to work in an international consulting company on ambitious projects,
Permanent contract or B2B cooperation,
Benefits such as medical care and sports card,
Co-financing of training, certification exams and postgraduate studies,
Internal training and the possibility of using our know-how,
Possibility to use our library free of charge,
Individual approach and development opportunities (career path planning, the ability to change
projects and positions, the possibility to get involved in outside-project activities with
additional remuneration),
Possibility to influence the shape of the company, openness to your ideas and willingness to
implement them,
Excellent working atmosphere, integration events.
○ Missions :
■ support our dev team to develop our data pipelines & data
transformation & data exposure services
■ Actively contribute to the continuous improvement of our dev
patterns
■ Document developments & unit tests
■ Improve platform observability capabilities
○ Key skills :
■ Proficient in SQL, Python and PySpark
■ At least 2 years of experience in Databricks, with proven
experience deploying Unity Catalog solutions
■ Proven experience working in Azure, and more specifically with
the following services within Azure: Data Factory, SQL Database, Data Lake, Logic Apps
■ Skilled in data modeling, whatever the supporting tool.
■ Comfortable with business-specific discussions.