Data Engineer (Remote)

Greater Chicago Area /
Engineering – Data Science /
Full-time
Location - Remote, North America

About Our Team:
KAR Data Services is looking to expand our enterprise data delivery team as we continue to grow our data platform in support of a mission of digital transformation in automotive wholesale markets. The data engineering team is responsible for the ingestion and persistence of data for an array of data products that support KAR Global’s automotive wholesale business.
 
KAR Data Services has a polyglot data model built on many cutting-edge data platforms, including AWS Redshift for data warehousing, Elasticsearch for location-based search, and Postgres for transactional data and product delivery. Our delivery framework consists of Python/Docker on ECS, Spark on EMR, and Jenkins for CI/CD, with a roadmap focused on strategic data integrations and products and on expanding platform capabilities to include Snowflake and Informatica.
 
About Our Candidate:
 
This candidate should be a self-starter who is interested in learning new systems and environments and passionate about developing quality, supportable data service solutions for internal and external customers. We highly value a natural curiosity about data and technology that drives results through quality, repeatable, and sustainable database and code development. The candidate should be highly dynamic and excited by opportunities to learn many different products and data domains and how they drive business outcomes and value for our customers.
 
What You Will Be Doing:
Members of the data engineering team participate daily in Agile sprint ceremonies to help design, plan, build, test, and support KAR Data Services’ data products and platforms, consisting of Python ETL pipelines and Postgres, Redshift, DynamoDB, Elasticsearch, and Snowflake databases. Our team works in a shared services delivery model supporting seven lines of business, including front-end customer-facing products, B2B portals, mobile applications, business analytics, and data science initiatives.

Responsibilities include:

    • Work with product, data science, analytics, and engineering teams to learn project data needs and define project scope.
    • Design and plan data services solutions on the enterprise data platform.
    • Build and deliver Python/Docker feed framework data pipeline jobs and services.
    • Contribute to the data engineering team's delivery framework, including building reusable code, implementing industry best practices, and maintaining a common delivery framework.
    • Monitor, maintain, document, and resolve incidents for scheduled production data jobs supporting internal and external customers' data needs.

What You Need to Be Successful:

    • 5+ years of experience with Postgres SQL development, including functions, stored procedures, and indexing, or equivalent (required).
    • Experience with production data management in a high-availability product delivery ODS/RDBMS environment or equivalent (required).
    • Experience planning and designing maintainable data schemas (required).
    • Experience with Python, Docker, and data warehouse environments (preferred).
    • Experience using GitHub / Jenkins (CI/CD) / Artifactory / PyPI or comparable delivery stacks (preferred).
    • Experience with Postgres, Elasticsearch, AWS EMR, and AWS ECS (preferred).
    • Experience with AWS Redshift, MPP databases, or DynamoDB (preferred).
    • Experience with Kinesis/Kafka (preferred).
    • Experience working with large enterprise data lakes / Snowflake (preferred).