Platform Engineer

Remote / Product – Engineering
About the Role

Our core belief is that computer vision is a foundational technology that is going to transform nearly every industry. This is an opportunity to shape how millions of developers will experience and use it for the first time. Your contribution will have a massive impact.

Roboflow is hiring our fourth full-time engineer on the machine learning team to work on our application’s interface with developers and their code. This role is all about building capabilities for Roboflow’s REST API and downstream SDKs. 

Core skills - you should be familiar with many of these concepts and technologies and have built projects with some of them:

● Backend: Node.js, Docker, Python, Flask, pip, REST
● AWS: EC2, ECR, ECS, Lambda, CloudWatch, Batch, S3
● Google Cloud: Cloud Run, Cloud Functions, Cloud Storage, Pub/Sub, GCE
● Frontend: Firebase, React, jQuery

We anticipate this role being focused 90% on backend engineering and 10% on high-level machine learning concepts.

If you care about writing great code and clean interfaces that make sense to developers, this is the role for you!

You certainly don't need to be an expert in all of these areas, but you should be excited to learn new skills as you need them. We also hope you'll bring new knowledge and experiences you can share to help level up the rest of the team. The tools above span the code you'd be working in today; your opinions on new technologies and practices we adopt for future tasks will be highly valued.

What We Need from You

On the machine learning team, we build and maintain Roboflow's training, search, and deployment services, along with the APIs and SDKs that connect those machine learning services to the Roboflow web application. From time to time we also help deliver on enterprise contracts and build open source projects and sample projects.

In the beginning, you will be tackling projects in close collaboration with your fellow product team members. As you progress in your knowledge of Roboflow’s mission and tools, you will have a wide degree of freedom to advocate for and drive your own projects. If you need a rigid list of tasks spelled out in a multi-month roadmap, this role probably won't be a good fit.

Example Projects

To give you an idea of what it will be like to work here, here is a sampling of a few projects you might work on in your first few months:

● Polish the generate, export, and train REST APIs by adding progress codes
● Implement the generate, export, and train APIs in the Roboflow pip package
● Add a REST API capability to assign images to an annotator
● Split out REST API logging
● Write a method in the Roboflow pip package to log in programmatically
● Debug API errors reported by customers by tracing failing calls through the backend systems they touch, for example by analyzing customer feedback and backend logs
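To make the first bullet concrete, here is a minimal sketch of what a progress-code endpoint could look like, assuming a Flask service with an in-memory job store. The route, payload shape, and job IDs are hypothetical illustrations, not Roboflow's actual API:

```python
# Illustrative sketch only: a minimal Flask endpoint that reports a
# progress code for a long-running export job. Route names, the job
# store, and field names are hypothetical, not Roboflow's actual API.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory job store; a real service would back this with
# a database or a job queue.
JOBS = {"abc123": {"progress": 42, "state": "exporting"}}

@app.route("/export/<job_id>/status")
def export_status(job_id):
    # Return a 404 for unknown jobs, otherwise the current progress code.
    job = JOBS.get(job_id)
    if job is None:
        return jsonify({"error": "job not found"}), 404
    return jsonify({"progress": job["progress"], "state": job["state"]})
```

A client polling this endpoint can then surface export progress in a UI or CLI instead of waiting on an opaque request.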


Our goal is to build the world's best computer vision infrastructure so our users don't have to. This means we handle a lot of challenging complexities like seamlessly ingesting dozens of data formats, processing millions of images per day, and deploying auto-scaling machine learning infrastructure that can handle our customers' most demanding training and deployment needs.

Our core app sits atop Firebase with assistance from auto-scaling groups of Docker containers (for jobs like archiving datasets and training models). We also lean heavily on serverless infrastructure so we can gracefully handle the bursty traffic involved in manipulating datasets that range anywhere from one hundred to one million images.

Our machine learning infrastructure runs in AWS, with a few deployments spanning into GCP. We train and deploy various state-of-the-art models across a variety of machine learning frameworks. All of our machine learning applications are closely integrated with the core Roboflow web application.

Our REST API is written in Node.js; it powers our pip package and nascent CLI.
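As a sketch of the kind of plumbing a pip package wrapping a REST API implies, here is a minimal hypothetical client. The class name, base URL, and bearer-token auth scheme are made-up illustrations, not Roboflow's actual SDK:

```python
# Illustrative sketch only: a pip-installable client wrapping REST calls.
# The class name, base URL, and auth header are hypothetical, not
# Roboflow's actual SDK.
import json
import urllib.request

class RoboClient:
    """Hypothetical minimal API client."""

    def __init__(self, api_key: str, base_url: str = "https://api.example.com"):
        self.api_key = api_key
        self.base_url = base_url.rstrip("/")

    def _build_request(self, path: str) -> urllib.request.Request:
        # Attach the (hypothetical) bearer-token auth header to every call.
        return urllib.request.Request(
            f"{self.base_url}{path}",
            headers={"Authorization": f"Bearer {self.api_key}"},
        )

    def get(self, path: str) -> dict:
        # Issue the request and decode the JSON body.
        with urllib.request.urlopen(self._build_request(path)) as resp:
            return json.load(resp)
```

Keeping request construction separate from request execution, as above, makes the auth and URL logic easy to unit-test without hitting the network.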

We also maintain a library of Colab notebooks our customers can use to train common computer vision models, a directory of public datasets, and a web of format specifications. We see building and supporting mini-projects like these that are helpful to the community at large as part of our role in democratizing computer vision.