Data Engineer, P2

San Francisco Bay Area / Data Science / Full-time
Pattern Ag is an agricultural technology company leveraging advances in genomics, data science, and microbiology to help farmers improve the productivity, profitability, and sustainability of their land. By applying genomic analysis to farm soil, we unlock a wide range of predictive insights that help farmers improve their soil health and optimize their spend on seed, crop protection, and fertility inputs. Combining these insights with the latest in digital sensing, data science, and machine learning, we are ushering in the era of predictive agriculture, powering on-farm decision-making for millions of acres and billions of dollars in crop spend.

Our team is seeking a data engineer with experience automating ETL of diverse datasets in a cloud environment. The Data Science Team at Pattern Ag is responsible for developing, optimizing, and validating soil metagenomic insights in agricultural settings across the Midwest. The team focuses on analytic insights that help drive increasingly strategic input decisions through a deeper understanding of pathogen loads and soil health properties. If you have a keen eye for detail, a deep understanding of data, and an interest in working on a highly collaborative team to increase the sustainability of modern agriculture, then we look forward to meeting you!

Remote work is possible.

This role is for you if

    • You love working with diverse data sets, from time series to geospatial
    • You're excited about contributing to the development of cutting-edge analytics and building the sandbox for novel insights
    • You are energized by working with diverse stakeholders from laboratory researchers to data scientists
    • You want to be a part of a highly collaborative team

Your responsibilities will include

    • Participate in the design and implementation of cloud-based data processing and transformations
    • Partner with R&D teams to understand use cases and supply clean and consistent data models
    • Develop data models with a focus on user workflows, particularly data science and analyst use cases
    • Write, configure, deploy, and maintain the tools needed to deliver data to internal teams
    • Build, deploy, and maintain ETLs to integrate data into third-party systems
    • Document data models and develop data libraries
    • Automate batch-based data integrations in a GCP environment

Required skills and/or experience

    • Bachelor’s degree in computer science or a related field
    • Minimum 2 years of experience as a data engineer or in a related role in an Agile environment
    • Experience designing strategies for datasets, including warehousing systems and networks
    • Experience designing and constructing data models with a focus on user workflows
    • A project portfolio with clearly attributable individual design and implementation contributions
    • Experience across multiple tiers of an application, including databases, network, and containers
    • Fluency in Python and proficiency in SQL and database technologies
    • Proficiency with Git version control and GitHub
    • A strong appreciation for data integrity

Bonus points

    • Experience automating inside the GCP framework
    • Experience building quick insight dashboards

Thank you for considering Pattern Ag. We look forward to reviewing your application!