Senior Data Engineer

Bangalore / Data Engineering
⚡️ About Merkle Science
We are known industry-wide for our predictive crypto risk & intelligence platform's best-in-class user experience and unmatched customizability. Our solutions provide next-generation crypto threat detection, risk management, and compliance for businesses, banks, and government agencies.
Our growing solution suite includes transaction screening and wallet monitoring, crypto crime investigation tools, enhanced due diligence and entity reporting, and crypto compliance and investigation training. 
Backed by leading venture capital firms Darrow Holdings, Kraken Ventures, Uncorrelated Ventures, Digital Currency Group, Fenbushi Capital, Kenetic, Lunex Ventures and the Singapore Government-supported deep technology fund, SGInnovate, we enable businesses to scale and mature so that a full range of individuals, entities, and services may transact with crypto safely.
🌟 Team and Role
Merkle Science envisions a world powered by crypto. We are creating the infrastructure necessary to ensure the safe and healthy growth of this market, now at a $2 trillion market cap. We are trailblazers and disruptors pushing the boundaries of innovation so that a full range of individuals and corporations can transact with crypto safely. We are a global company with offices in Singapore, London, Bangalore, and New York.

⚡️ What does the product do?

We boast the best user experience among our peers and unmatched customizability.

At a high level, we have two products. One is an investigation tool that helps users track the flow of funds after a scam or hack occurs. It is used mostly by intelligence agencies and government bodies.

The other is a compliance tool that helps crypto companies, financial institutions, regulators, and banks monitor the funds flowing through their platforms and stop bad actors from misusing crypto, for example through darknet transactions or fraud. We also provide intelligence services and reports.

Did we mention that we are working on some super innovative and bold products that we cannot share details about here? 

💥 What will you do?

1. Create and maintain optimal data pipeline architecture for our workloads. This includes building highly resilient architecture for both streaming and batch ETL processes.
2. Have a good understanding of the data structures public blockchains use to store data. A significant portion of our data pipelines parses blockchain data and stores it in our data warehouses.
3. Design and build efficient, reliable data warehouses that meet business data needs.
4. Assemble large, complex data sets that meet functional and non-functional business requirements. This includes expanding the scope of our data-mining efforts by building data pipelines that crawl data from the dark web, the open web, and third-party data sources.
5. Collaborate with analytics and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
6. Work closely with a team of frontend and backend engineers, product managers, and analysts to surface data in our products.
7. Implement algorithms to transform raw data into useful information.
8. Build, manage, and deploy AI/ML workflows.

🙋 What are we looking for? 

1. 3+ years of relevant experience as a Big Data Engineer.
2. Experience with a breadth of tools and data processing paradigms, from transactional data to event streams and batch processing, building robust and scalable pipelines with open-source tools such as Apache Airflow, Apache Beam, Apache Spark, and Apache Kafka or Pub/Sub.
3. Experience in building and maintaining highly resilient and available OLAP data warehouses and OLTP databases.
4. Strong software engineering background in at least one modern programming language.
5. Experience using data visualization tools (e.g., Redash, Tableau).
6. Experience with cloud platforms (GCP or AWS), including Kubernetes and Docker, is a plus.
7. Willingness to evangelize data engineering best practices within and beyond the team.
8. An analytical mindset with business acumen.
9. Exceptional interpersonal and communication skills.
10. A self-starter who thrives under a high level of autonomy.

👀 What process do we follow?

Short answer: under two weeks.

1. Application: We keep it simple. You can apply directly through our job portal. All we ask for is a resume; additional portfolio links such as GitHub, Medium, or a personal website are welcome.
2. Screening: We will screen your profile and get back to you with a decision within a week.
3. Interviews: We will have two rounds of interviews. Round one (30 minutes) focuses on getting to know each other better and identifying whether this could work for both of us. Round two (60 minutes) is a technical round where we review your prior experience and discuss how you would build systems to solve a problem we introduce on the call.
4. Meet the Team: Culture fit is essential for both you and us, so we always go the extra mile: you will meet two other colleagues on the team you would be working with. Here you can ask about the stack, the culture, and anything else you would want to know when considering a new role.
5. Offer Rollout: If all looks well, we will open a bottle of champagne.
❤️ Well Being, Compensation and Benefits
We care about your well-being. Along with excellent health insurance, we offer flexible time off, numerous learning and development initiatives through which Merkle Science invests in your growth, and working hours we have yet to hear a single complaint about. We respect your weekends too. We regularly host team-building sessions and encourage open discussions around mental well-being.
On the compensation front, we admire talent and believe in rewarding people for their contributions. Our compensation is best in class, and the whole process will be transparent from the first minute we speak with you.