Senior Software Engineer (Data)
Our vision at DueDil is to be the fuel of a more informed and connected economy. We help customers with high exposure to SMEs find, verify and monitor business opportunities and risks. We do this by using billions of data points on companies to build a digital map of the economy, and integrating that information into our customers’ workflows. We work with hundreds of customers, from High Street Banks to tech start-ups. Our products help power the economy by impacting the way hundreds of thousands of SMEs access everything from bank accounts to loans, insurance and payments.
At DueDil, we’re driven by three core values. The characteristics that define a member of our team are grit, authenticity and team spirit. These values factor into the way we hire, promote and reward every member of our team. Ultimately, we’re looking for people who take ownership and hold themselves accountable to ambitious goals. In return, we pledge whatever support you need to chase down these goals and a collaborative environment where growth is rewarded.
Critical to achieving DueDil's vision is our ability to combine multiple disparate data sources from different providers into a unified view of companies and the people who run them. This requires us to develop web crawlers, automated matching algorithms, machine learning models and complex ETL processes to tie all these components together. As a Senior Software Engineer, you'll be expected to enhance and expand our data processing toolset to support our international expansion effort, while maintaining the quality and reliability of our existing data products and services.
This will mean dealing with challenges such as order-of-magnitude increases in data volumes, assessing the quality of data from multiple suppliers, and building pipelines to match these datasets and extract valuable insight from them. You will be working in a team of experienced Software Engineers and Data Scientists, building next-generation tools and transforming the Fintech industry.
We are looking for:
- A proven track record of leading complex ETL and data infrastructure projects
- Demonstrable experience working with high-volume, heterogeneous data using distributed systems such as Hadoop or Spark
- Expert knowledge of one or more of the following languages: Python, Scala, Java
- Strong understanding of data structures and algorithms
- Deep knowledge of data modeling, data access, and data storage techniques
- Familiarity with Unix systems, common command line tools (e.g. grep, awk) and source control tools (e.g. git)
- Familiarity with machine learning and statistics is a plus
You should apply if you:
- Want to develop data-focussed products with visible and immediate impact
- Are passionate about simple, resilient and maintainable code
- Are able to identify and evaluate technical solutions to nebulous challenges
Our Tech Stack
How to apply:
- Your CV
- A link to your GitHub profile (if applicable)