Principal Data Engineer

Salt Lake City
Varo is on a mission to redefine banking so it's easy for everyone to make smart choices with their money. Our app offers bank accounts and high-yield savings accounts that don’t cost a thing, tools to help you manage your money and save automatically, and invitation-only personal loans at competitive rates. By contrast, traditional banks charge fees, offer next-to-nothing savings rates, and don’t work with their customers’ best interests in mind.

Varo is distinct from other fintechs: With preliminary approval for a bank charter from the Office of the Comptroller of the Currency (OCC), we're on our way to becoming the first mobile-centric national bank in the country. Our unique team combines the best people in tech and banking, and we’re wildly passionate about keeping our customers happy by helping them manage and grow their money. Based in San Francisco and privately held, Varo has raised $178M to date, led by Warburg Pincus and The Rise Fund / TPG Growth.


As a Principal Data Engineer, you will play a senior role in implementing solutions to ingest data into, process data within, and expose data from a data lake that enables our data warehouse, data marts, reporting, and data analysts and scientists to use and explore data in an automated or self-service fashion.

As a technical leader, you will take ownership of the data architecture for processing and analyzing data across the Platform.


    • Develop and maintain Varo’s data strategy in terms of capabilities, architecture, and control mechanisms that support the company’s intention to become a bank
    • Design, build, and maintain big data workflows and pipelines to process records into and out of Varo’s data lake
    • Provide technical leadership in the area of data systems development, including data ingestion, data curation, data storage, high-throughput data processing, and analytics
    • Collaborate to actively gain buy-in from stakeholders at all levels on technology direction
    • Work with business partners on requirements clarification and results from rollout efforts
    • Participate in developing and enforcing data security & access control policies
    • Architect effective controls for a resilient data ingestion process
    • Support application data integration design and build efforts, including real-time capabilities
    • Apply proficiency in Amazon AWS big data technologies, including S3, RDS, Redshift, Elasticsearch, Lambda, and AWS Glue
    • Conduct code reviews in accordance with team processes and standards


    • Bachelor's degree in Computer Science, MIS, Engineering or related field, or relevant work experience
    • 10+ years of experience with ETL, data modeling, data warehousing, and data pipelines
    • 5+ years’ experience working within the AWS big data/Hadoop ecosystem (EMR preferred)
    • Experience developing extract-load-transform (ELT) tooling
    • Experience with downstream consumption patterns (reports, dashboards, APIs) is a plus
    • Experience with AWS Glue, Redshift, and RDS is a plus
    • Experience with Hadoop, HDFS, Hive, Python, REST/SOAP APIs, Spark 2, and Oozie workflows is a plus