Machine Learning Engineer
San Francisco, CA
Engineering – Data Science
Grand Rounds' vision is to create a path to great health and health care, for everyone, everywhere. Founded in 2011, the company provides an employer-based technology solution that connects members and their families to high-quality care. With Grand Rounds, employers get a personalized, high-performance network at scale, while their employees get the tools and support needed to navigate their care on their own terms.
Named Glassdoor’s Best Place to Work in 2019 and Rock Health’s 2018 Fastest Growing Company, Grand Rounds helps restore individual health and quality of life, and offers employers lower health care spend and higher employee productivity. For more information, please visit www.grandrounds.com.
Data Scientists at Grand Rounds work on problems that are core to the company’s mission. Major challenges include developing systems and models to identify the highest-quality doctors in the country, as well as methodologies to uncover the subtle differences in each physician’s clinical expertise. Additionally, patient-level modeling allows us to understand the specific healthcare needs of every person. With a high-fidelity understanding of both patients and physicians, we are able to match patients to appropriate, high-quality care and understand the health of our patient populations.
Our growing team of machine learning engineers sits on the Data Science team and works closely with Data Engineering, Platform, and Analytics to build out search and analytics platforms powered extensively by machine learning technologies. This role involves managing the full platform and lifecycle of production-grade online machine learning, developing batch and stream processing pipelines in Spark, and ensuring deep integration with the next-generation data platform being developed by our data engineers.
Example projects include:
Building modules for our “Match Engine” ecosystem: This collection of services powers our provider matching backend, providing its distributed runtime and deployment services. As data scientists across our team are constantly developing new models and features that will help patients immediately, we aim to publish these into our own integrated platform. You will help solidify, grow, and lead the evolution of the end-to-end systems architecture. Because this is a user-facing, real-time prediction environment, we take seriously instrumentation, optimization of both models and orchestration code, and the construction and integration of online and offline experimentation frameworks.
Laying the groundwork for our next generation analytics platform. This multi-purpose Spark-based ecosystem will enrich our view of providers and patients through machine learning driven inference and population health statistics. By deeply understanding the modeling and inference patterns that have long been used across our team you will help to simplify and improve our prototype-to-production processes at scale. You will work at the intersection of our Data Engineering and Data Science teams to ensure robust modeling and data delivery systems.
- Excellent verbal communication skills, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators
- BS with 8+ years, MS with 6+ years, or PhD with 3+ years of experience. Degree(s) should be in a technical discipline such as Computer Science, Engineering, Statistics, Physics, Math, or a quantitative social science
- Strong fundamentals in machine learning and statistics
- Production engineering experience is highly desired, including developing and maintaining high-availability search and/or machine learning services
- Experience with distributed systems components, including SQL and NoSQL databases, caching layers (Redis, Memcached), compute frameworks (Spark, Hadoop), queueing (Kafka, Kinesis, SNS), and cloud-based databases (BigQuery, Athena, Redshift)
- Experience with workflow management solutions such as Airflow, Azkaban, or Luigi, and with scalable ETL for batch and stream processing workloads in Spark or Hadoop
- Required: SQL, Python, Linux shell scripting
- Desired: Scala, Java, or Ruby
- Experience with production-ready machine learning packages such as scikit-learn, TensorFlow, PyTorch, or SparkML
- Frequent user of cloud computing platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform
- Double bonus points: previous work on medical applications and/or with claims data
This is a full-time position located in San Francisco, CA.
Grand Rounds is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Grand Rounds considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.