Senior Data Engineer - (Remote - Latin America)
Remote (Latam Only)
Tesorio empowers the Fortune 5,000,000 to run as if they had a Fortune 500 finance team. We plug into a company's cash flow data sources and leverage data science to surface the most impactful actions they can take to optimize their cash flow. Then, we let them implement those actions with the click of a button.
We listen to our customers, and empower them to share their best practices and wishlist with us to make our product better every day. We are developing machine learning algorithms to understand business cash needs, predictive algorithms to forecast future cash flow, and a sleek UI/UX to make our products enjoyable to work with.
We're looking for talented data engineers to take ownership of the data pipelines and workflows required to drive the data science, machine learning, and analytics that power our platform. You'll be joining an early-stage company with a small, tight-knit team, backed by top-tier VCs (including First Round, Floodgate, Fuel, and Y Combinator). You'll work closely with the entire engineering team, our Head of Product, and the co-founders. Learn more at tesorio.com.
- You must be comfortable owning the end-to-end data engineering pipeline as part of a small team
- You can independently execute on data engineering projects from concept to completion
- You're looking to have a large impact on the success of the business
- You have strong opinions, but you hold them loosely
- You're always learning
- You love being a crucial part of a team that is building and shipping magical products that will help thousands of companies
- You enjoy the dynamic and fast-paced nature of a startup
What you’ll do day-to-day
- Take ownership of the current and future state of the data architecture
- Collaborate with engineers, data scientists, and product managers to understand their data requirements and translate them into designs
- Build scalable data frameworks and data pipelines required to support data science, analytics, machine learning, and product use cases
- Own the quality, efficiency, automation, and observability of data pipelines and data transformations
- Partner with different teams within the company to drive data-driven projects and deliver data artifacts that integrate seamlessly into our product experience
- Tackle a wide variety of technical problems and contribute daily to improving system health and the code base
The ideal candidate
- Bachelor's degree in Computer Science, Engineering, or related discipline
- 5+ years of work experience (including 3+ years in data engineering)
- Proficiency in SQL and one high-level programming language (preferably Python)
- Experience with one or more data orchestration and big data processing/transformation frameworks (for example: Airflow, dbt)
- Experience with one or more SQL stores and data warehouses (for example: Postgres, Snowflake) is required
- Experience with Docker and Kubernetes
- Experience with other related technologies (for example: NoSQL data stores, pandas) is a plus
- Resourceful and agile, remaining positive in the face of problems
- Empathetic towards colleagues and users
- Excited about the challenge of working in a fast-paced environment with a small, talented team
Note: we currently cannot sponsor visas.