Software Engineer, Data (Contract)

Mexico City /
Tinder – Engineering /
Contractor
Tinder brings people together. With tens of millions of users, hundreds of millions of downloads, 2 billion swipes per day, hundreds of thousands of requests per second, 20 million matches per day and a presence in every country on earth, our reach is expansive—and rapidly growing. 

At Tinder we strive to be at the cutting edge of technology and innovation, building platforms that scale and keep pace with our growing business needs. Tinder is looking for an engineer who's eager to design and implement the next generation of our data platform. We are a data-driven company, and our business constantly generates new needs for actionable information, creating exciting opportunities to build high-value products that serve them. This role involves working with large-scale data in near-real-time, batch, and lambda-architecture frameworks.

What You'll Do:

    • Collaborate with engineers to conceptualize, plan, and execute medium-sized data engineering initiatives, working with a variety of stakeholders
    • Work with internal engineering and product teams to identify opportunities for improvements or feature enhancements to our data platforms and to improve overall engineering efficiency
    • Work on pipelines ingesting data at rates greater than 1 GB per second
    • Design and build data platforms and frameworks for processing high volumes of data, in real time as well as batch, that will be used across engineering teams
    • Drive self-serve capabilities for the engineering teams to easily onboard their critical data workloads onto our platforms
    • Coordinate with engineering, analytics, and other teams to assess the cost and value of existing and potential projects
    • Research and evaluate new technologies in the big data space to guide our continuous improvement
    • Collaborate with multifunctional engineers across the company to help tune the performance of large data applications to drive cost savings
    • Work on initiatives to ensure stability, performance and reliability of our data infrastructure

What You'll Need:

    • 2+ years of experience in data engineering on distributed frameworks such as Spark, Flink, Kafka Streams, Storm, or Hadoop
    • Proven experience with ETL job orchestration, preferably with a feature-rich tool such as Airflow, Argo, or Rundeck
    • Strong proficiency in at least one of the following programming languages: Java, Scala, or Python
    • Experience with a variety of databases and NoSQL systems (Redshift, Databricks, Snowflake, Druid, DynamoDB, Cassandra, Kafka, Elasticsearch, Hive, etc.)

Nice to Have:

    • Experience building real-time data pipelines using Flink or Spark
    • Experience with distributed systems and microservices
    • Experience with Kubernetes and building Docker images
    • Experience with any public cloud environment - AWS, Azure, GCP
#LI-Remote