MLOps Engineer III/IV

Remote / Analytics and Machine Learning – Quantitative Tools / Full Time
Beacon Biosignals is on a mission to revolutionize precision medicine for the brain. We are the leading at-home EEG platform supporting clinical development of novel therapeutics for neurological, psychiatric, and sleep disorders. Our FDA 510(k)-cleared Dreem EEG headband and AI algorithms enable quantitative biomarker discovery and implementation. Beacon’s Clinico-EEG database contains EEG data from nearly 100,000 patients, and our cloud-native analytics platform powers large-scale RWD/RWE retrospective and predictive studies. Beacon Biosignals is changing the way that patients are treated for any disorder that affects brain physiology. 

Beacon Biosignals is seeking a talented MLOps Engineer to join Beacon's Quantitative Tools Team and accelerate research & development for neurobiomarker discovery, powering novel therapies for sleep disorders, neurologic conditions, and psychiatric disease. In this role, you'll build and maintain tooling that enables algorithm development teams to train, evaluate, deploy, and monitor advanced machine learning and signal processing algorithms for analyzing large-scale EEG data. This role is part of the research software engineering function within the Quantitative Tools Team and reports to the VP of Analytics and Machine Learning at Beacon.

At Beacon, we've found that cultural and scientific impact is driven most by those who lead by example. As such, we're always seeking new contributors whose work demonstrates an avid curiosity, a bias towards simplicity, an eye for composability, a self-service mindset, and - most of all - a deep empathy towards colleagues, stakeholders, users, and patients. We believe a diverse team builds more robust systems and achieves higher impact.

Beacon's robust asynchronous work practices ensure a first-class remote work experience, but we also have in-person office hubs in Boston, New York, and Paris.

What success looks like:

    • Build high-quality, composable tooling that enables algorithms engineers to experiment faster and more efficiently.
    • Develop distributed machine learning pipelines and tooling that support training and evaluation of deep learning models on large datasets.
    • Build tooling that allows algorithm development teams to deploy their models into production, and monitor those models as new data flows in.
    • Develop human annotation workflows that enable efficient, high-volume collection of labels for model training and evaluation.
    • Automate performance analysis and reporting for both internal audiences (e.g. algorithms teams, executives) and external audiences (e.g. pharma clients, FDA).
    • Develop tooling for efficient dataset curation, versioning, and access to enable reproducible research.
    • Collaborate closely with teams of engineers, data scientists, and neuroscientists to understand workflow pain points and inefficiencies that can be resolved with better tooling and processes.

What you will bring:

    • You're comfortable building cloud infrastructure for training and evaluating models on large datasets.
    • You have experience setting up and maintaining the infrastructure required for serving ML models, and have worked with teams to deploy their models into production.
    • You have experience with scientific computing in Julia or Python, and can write efficient, maintainable, composable packages in at least one of these languages.
    • You have basic knowledge of machine learning, signal processing, statistics, and/or optimization.
    • You're excited to work with large-scale time series biosignal data (e.g. EKG, actigraphy, EEG, polysomnography, etc.).
    • You're excited to build efficient human-in-the-loop labeling workflows to enable large-scale training data collection.
    • You're familiar with or excited to work in an environment that makes heavy use of Julia, Python, Docker, K8s, Helm, workflow orchestration tools (e.g. Argo, Airflow, Ray), Kafka, SQL, GraphQL, and GitHub Actions.
    • You have excellent verbal/written communication and presentation skills.
    • You're comfortable working in a highly asynchronous hybrid environment, and have demonstrated success doing so in the past.
    • Approximate experience: PhD + 1-4 years, Master's + 3-7 years, Bachelor's + 4-9 years, or other comparable experience.