Data Engineer (Mid Level)

Pardon Ventures – Optimism / Full Time
About Optimism
Optimism is a digital publisher working to build a brighter web. We conceive, launch, and
operate high-quality digital brands that spark curiosity, spread positivity, and improve the
lives of our readers. With an email-first approach, our hope is to transform the inbox into a
healthy alternative to social media feeds, a place where you can curate the news, information,
and entertainment you truly want.

Our brands populate a variety of categories: Lifestyle, Games, Wonder, and Travel, among
others. This distributed approach helps us reach 3 million subscribers across our network and
serve more than 30 million web sessions each month. And we’re growing with each new brand we launch.

Optimism Data Engineer Overview
Data is at the heart of everything we do. As a Data Engineer at Optimism, you will be
responsible for the entire data lifecycle. You will work within the engineering team and report
directly to both the Principal Engineer and the Head of Engineering. Your responsibilities will include:


    • Handling data delivery by writing/maintaining serverless Go applications
    • Working with engineering to update/extend the API stack for data delivery
    • Maintaining our existing extract/transform/load (ETL) pipelines written with Scio (Apache Beam + Google Cloud Dataflow + Scala)
    • Building new data pipelines by working with others in the engineering and business insights teams, e.g. ingesting new data sources, bringing real-time data from the APIs into BigQuery, or transforming existing data sources into new structures and tables
    • Processing data from disparate sources and joining it into highly available, standardized structures
    • Leveraging the data stack to handle requests from stakeholders/business insights/revenue operations
    • Using CI tooling to manage software and pipeline deployments
    • Using Google Cloud Platform (GCP) logs to monitor data delivery, troubleshoot behavior, and understand application history over time
    • Continually improving data quality, often by working with stakeholders/business insights/revenue operations to understand what they need from the data, e.g. filling in the gaps of a bigger picture or finding ways to make irregular data regular
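For a flavor of the data-delivery work above, here is a minimal sketch of a serverless-style HTTP endpoint in Go, exercised in-process with the standard library's `httptest` package. The payload shape (`event`) and all names are hypothetical illustrations, not Optimism's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// event is a hypothetical payload shape; the real schema is not
// specified in this posting.
type event struct {
	Source string `json:"source"`
	Count  int    `json:"count"`
}

// deliverHandler decodes an incoming JSON event and returns a small JSON
// acknowledgment -- a stand-in for a serverless data-delivery endpoint.
func deliverHandler(w http.ResponseWriter, r *http.Request) {
	var e event
	if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]any{"ok": true, "source": e.Source})
}

// ackFor posts a JSON body to an in-process test server running
// deliverHandler and returns the HTTP status plus the echoed source field.
func ackFor(body string) (int, string) {
	srv := httptest.NewServer(http.HandlerFunc(deliverHandler))
	defer srv.Close()

	resp, err := http.Post(srv.URL, "application/json", strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var ack map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&ack); err != nil {
		panic(err)
	}
	s, _ := ack["source"].(string)
	return resp.StatusCode, s
}

func main() {
	status, source := ackFor(`{"source":"newsletter","count":3}`)
	fmt.Println(status, source)
}
```

In production this handler would run behind a serverless runtime rather than `httptest`; the in-process server here just makes the sketch self-contained.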


You will need...

    • Expert-level SQL and at least one programming language
    • Experience with BigQuery or an equivalent data warehouse
    • Experience with event-driven architecture and data pipelines
    • Experience in the cloud (we use both AWS and GCP)
    • Experience with Git
    • Experience working remotely
    • Comfort with independence and working asynchronously

    • We will only consider candidates who are US citizens, whether by birth or naturalization.

You will hit the ground running if you have...

    • Experience writing software in functional programming languages like Scala
    • Experience with Google Cloud Dataflow / Apache Beam
$110,000 - $150,000 a year