Data Engineer (Junior)

Data Excellence – Data Engineering
Full Time
Hello there! We’re Zopa, the Feel Good Money company.
In 2005, we built the first-ever peer-to-peer lending company to give people access to simpler, better-value loans and investments. Since then, we’ve helped hundreds of thousands of customers take the stress out of money by building our business on honesty, transparency and trust.
It works so well that we want to give our customers access to other great products and tools, empowering them to better manage their money. That’s why, in December 2018, we launched a different type of bank, allowing us to bring a greater range of smart finance products to even more people.

Job Title: Data Engineer
Salary: Competitive
Location: London Bridge
Start Date: Tuesday 6th August 2019
Closing Date: Tuesday 3rd September 2019

What you'll be doing:

    • Liaise with data scientists, data analysts and decision makers such as product owners and business analysts to gather requirements
    • Build a scalable, reliable, maintainable, secure and high-performance data warehouse and data lake combination powering the analytical needs of the whole company
    • Implement various data pipelines using cloud-based big data technologies
    • Produce clear and concise documentation where required
    • Work in a dynamic agile team
    • Peer-review other team members’ code

Tech stack:

    • Data Pipeline: AWS S3, Lambda, Glue, Kinesis and Redshift
    • ETL: Custom ETL/ELT in Python
    • SQL: Postgres/SQL Server/Redshift
    • Python: Pandas, NumPy and boto3 AWS SDK
    • Big Data Technologies: Spark, Athena/Presto, Glue/Hive Data Catalog
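As a flavour of the custom Python ETL/ELT work listed above, here is a minimal sketch of a pandas transform step. The bucket, column names and cleaning rules are hypothetical illustrations, not Zopa’s actual schema; in production the extract and load steps would go through the boto3 AWS SDK against S3.

```python
import io

import pandas as pd


def transform_loans(raw_csv: str) -> pd.DataFrame:
    """Clean a raw loans extract: parse dates, drop incomplete rows, derive a column."""
    df = pd.read_csv(io.StringIO(raw_csv), parse_dates=["issued_at"])
    df = df.dropna(subset=["amount"])       # discard records missing an amount
    df["amount"] = df["amount"].astype(float)
    df["year"] = df["issued_at"].dt.year    # derived column, e.g. for S3 partitioning
    return df


# In production the extract step would pull the object from S3 via boto3, e.g.:
#   body = boto3.client("s3").get_object(Bucket=..., Key=...)["Body"].read().decode()
raw = (
    "loan_id,amount,issued_at\n"
    "1,1000,2019-01-15\n"
    "2,,2019-02-01\n"
    "3,2500,2019-03-10\n"
)
clean = transform_loans(raw)
```

The same pattern scales up when the transform runs inside Lambda or a Glue job: pure functions over DataFrames keep the logic unit-testable independently of AWS.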

Essential skills:

    • At least one year’s professional experience in data engineering, software engineering, or a related domain
    • Strong understanding of Python data structures and algorithms
    • Strong understanding of at least one other programming language
    • Proficient in writing complex SQL queries
    • Strong unit test and debugging skills
    • Experience working with CI/CD
    • Experience building data pipelines and ETL/ELT processes in Python
    • Experience with different AWS services
    • Experience with Apigee
    • Experience with Docker and Kubernetes
    • Experience with DynamoDB
    • Experience with Serverless Frameworks
    • Understanding of software delivery life cycle and version control using Git
    • Demonstrable passion for projects related to data analytics
    • Hold a Master’s degree (or equivalent) in Computing, Software Engineering, or a data-related subject.
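For the SQL proficiency listed above, the kind of query in question can be sketched with Python’s built-in sqlite3 module. The repayments table and the business question (latest repayment per loan, via a correlated subquery) are illustrative assumptions, not Zopa’s actual schema.

```python
import sqlite3

# Hypothetical repayments table -- illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repayments (loan_id INTEGER, paid_on TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO repayments VALUES (?, ?, ?)",
    [
        (1, "2019-01-01", 100.0),
        (1, "2019-02-01", 50.0),
        (2, "2019-01-15", 200.0),
    ],
)

# Correlated subquery: the most recent repayment per loan.
latest = conn.execute(
    """
    SELECT r.loan_id, r.paid_on, r.amount
    FROM repayments AS r
    WHERE r.paid_on = (
        SELECT MAX(paid_on) FROM repayments WHERE loan_id = r.loan_id
    )
    ORDER BY r.loan_id
    """
).fetchall()
```

The same query shape carries over to Postgres, SQL Server and Redshift, where window functions offer an alternative formulation.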

Bonus points for:

    • Experience working with PySpark
    • Dimensional data modelling with the Kimball methodology (star/snowflake schemas)
    • Tableau
    • Agile

    • Excellent interpersonal, relationship-building and influencing skills
    • Passion for learning open-source and cutting-edge technologies
    • Ability to communicate with both technical and non-technical stakeholders
    • Passion for elegant and intuitive data structures and analytical interfaces
We are committed to equality of opportunity for all staff, and applications are encouraged from individuals regardless of age, disability, sex, gender, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage and civil partnership.