Senior Data Engineer

Budapest
Data Engineering /
Full-time /
Hybrid
Founded in 1999 in Vienna, the Qualysoft Group is a manufacturer-independent IT consulting and services company that supports its international customers in boosting their competitiveness and economic efficiency through innovative IT solutions.

It focuses on financial services providers, telecommunications companies, the automotive industry and energy service providers. Over 400 employees in 6 subsidiaries work together to deliver state-of-the-art solutions for our clients.

We are looking for new colleagues to join Qualysoft teams on diverse projects that offer continuous learning opportunities. Our common goal is to provide honesty, professional development and a stable background while working with the latest technologies. We look forward to your application for the position below!

As a Data Engineer within the Architecture & Modernization team, you will be instrumental in building and maintaining the data infrastructure for the OneRiskStore project.

This role will involve hands-on development, data pipeline creation, and close collaboration with stakeholders across the organization. It requires a self-starter with strong execution skills and the ability to work independently.
You will be expected to not only execute on the current strategy but also contribute to its future evolution. We value diversity of thought and are committed to building a team that reflects the diversity of our global community.

Responsibilities

    • Develop and maintain data pipelines and ETL (Extract, Transform, Load) processes
    • Work with structured and unstructured data to ensure it is accessible and usable
    • Optimize data systems for performance and scalability
    • Implement data quality and data governance standards
    • Collaborate with stakeholders across technology and business units to understand their data needs and translate them into technical solutions, providing data-driven insights
    • Contribute to the documentation and knowledge sharing within the team by creating and maintaining technical documentation and training materials
    • Participate in code reviews and contribute to the improvement of development processes
    • Contribute to the broader data architecture community through knowledge sharing and presentations

Requirements

    • 8+ years of experience in data engineering or a related field
    • Proficiency in Python
    • Experience with data processing frameworks such as Apache Spark or Hadoop
    • Knowledge of database systems (SQL and NoSQL)
    • Experience working with Snowflake and Databricks
    • Familiarity with cloud platforms (AWS, Azure) and their data services
    • Understanding of data modeling and data architecture principles
    • Experience with data warehousing concepts and technologies
    • Experience with message queues and streaming platforms (e.g., Kafka)
    • Experience with version control systems (e.g., Git)
    • Experience using Jupyter notebooks for data exploration, analysis, and visualization
    • Excellent communication and collaboration skills
    • Ability to work independently and as part of a geographically distributed team

Advantages

    • Familiarity with data visualization tools (e.g., Tableau, Power BI)
    • Knowledge of data governance and security best practices (e.g., data access control, data masking)
    • Experience with Agile methodologies
    • Familiarity with data catalog and metadata management tools (e.g., Collibra)
    • Familiarity with CI/CD pipelines and DevOps practices
Why we think you will love working here:

With us you count as a person, and our doors are always open.
We live the Qualysoft Team Spirit and stand for transparency!

Fresh perspectives and new ideas are always welcome, because standing still is not an option at Qualysoft.