Software Engineer - Data (Big Data and Data Pipelines) - Delhi (preferred) / Bangalore

Bangalore / Delhi
Engineering / Full Time / Hybrid
What is Findem:

Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics in one place. Only 3D data connects people and company data over time, making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition that no one else can offer. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. By enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai.

Experience: 4-7 years

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing data pipelines, data lakes, and Big Data processing solutions using Big Data and ETL technologies.

Location: Delhi (preferred) / Bangalore (based in one of these locations or ready to relocate to one of them)
Hybrid: 3 days onsite

Role and Responsibilities

    • Build data pipelines, Big Data processing solutions, and data lake infrastructure using a range of Big Data and ETL technologies (see the illustrative sketch after this list)
    • Assemble and process large, complex data sets that meet functional and non-functional business requirements
    • Run ETL from a wide variety of sources such as MongoDB, S3, server-to-server transfers, and Kafka, and process data using SQL and Big Data technologies
    • Build analytical tools that provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
    • Build interactive, ad-hoc, self-serve query tools for analytics use cases
    • Design data models and schemas for performance, scalability, and functional requirements
    • Build processes supporting data transformation, metadata management, dependency tracking, and workflow management
    • Research, experiment with, and prototype new tools and technologies, and drive them to successful adoption
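
To give a flavor of the pipeline work described above, here is a minimal, hypothetical sketch of an Airflow DAG (assuming Airflow 2.4+) that lands raw events from S3 and transforms them with a Spark job. The bucket names, paths, and job script are illustrative placeholders, not Findem's actual stack.

```python
# Hypothetical example: a daily Airflow DAG that stages raw JSON events
# from S3 and converts them to Parquet with a Spark job.
# All names (buckets, scripts, prefixes) are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 2,                      # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # one run per logical day
    catchup=False,
    default_args=default_args,
) as dag:
    # Step 1: copy the day's raw JSON events into a staging prefix.
    land_raw = BashOperator(
        task_id="land_raw_events",
        bash_command=(
            "aws s3 sync s3://example-raw/events/{{ ds }}/ "
            "s3://example-staging/events/{{ ds }}/"
        ),
    )

    # Step 2: run a PySpark job that cleans the JSON and writes Parquet.
    transform = BashOperator(
        task_id="transform_to_parquet",
        bash_command=(
            "spark-submit jobs/events_to_parquet.py "
            "--date {{ ds }} "
            "--in s3://example-staging/events/{{ ds }}/ "
            "--out s3://example-lake/events/"
        ),
    )

    land_raw >> transform              # enforce ordering between steps
```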

Must-have Skills

    • Strong programming skills in Python or Scala
    • Hands-on experience with Big Data technologies such as Spark, Hadoop, Athena/Presto, Redshift, and Kafka
    • Experience with common file formats such as Parquet, JSON, Avro, and ORC (see the sketch after this list)
    • Experience with workflow management tools such as Airflow
    • Experience with batch processing, streaming, and message queues
    • Familiarity with visualization tools such as Redash, Tableau, or Kibana
    • Experience working with structured and unstructured data sets
    • Strong problem-solving skills
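
As a concrete example of the file-format work above, here is a minimal, hypothetical PySpark snippet that reads raw JSON and rewrites it as date-partitioned Parquet. The S3 paths and column names are assumptions for illustration only.

```python
# Hypothetical example: convert raw JSON events to partitioned Parquet
# with PySpark. Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json_to_parquet").getOrCreate()

# Read newline-delimited JSON; Spark infers the schema from the data.
events = spark.read.json("s3a://example-staging/events/2024-01-01/")

# Light cleanup: drop duplicates and derive a date column for partitioning.
cleaned = (
    events.dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
)

# Write columnar Parquet partitioned by date, which keeps scans cheap
# for engines like Athena/Presto.
(
    cleaned.write.mode("append")
    .partitionBy("event_date")
    .parquet("s3a://example-lake/events/")
)
```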

Good-to-have Skills

    • Exposure to NoSQL databases such as MongoDB
    • Exposure to cloud platforms such as AWS and GCP
    • Exposure to microservices architecture
    • Exposure to machine learning techniques

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity

As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status, or any other legally protected characteristic.