Senior QA Engineer (KC-QA-20250604)

Celara / Contract / Remote
The Company is building an AI‑driven care‑coordination platform that relies on a React + Vite front end, a TypeScript back end with supporting Python services, and extensive large‑language‑model (LLM) workflows. We already maintain a robust Vitest suite for the server side, and we have designed a custom LLM‑aware test runner that automates the validation of model responses. Your mission is to bring the same rigor to our React client while continuing to evolve our AI‑centric quality strategy. Because many features hinge on generative output, testing often requires novel, out‑of‑the‑box thinking rather than simple yes/no assertions.

What you’ll do

    • Roughly half of your time will be dedicated to exploratory testing of new UI flows, identifying edge cases, stress-testing prompt variations, and documenting reproducible issues with clear context. 
    • The remaining time will focus on translating those insights into automated regression tests, including unit and integration tests for React components, API-level tests using Vitest, end-to-end scenarios that validate our LLM workflows, and enhancements to our custom test runner to keep generative outputs within acceptable bounds (see the sketch after this list).
    • You’ll work closely with Product, Design, and Engineering to embed testability into every story and help maintain a stable CI pipeline, contributing to safe, high-quality releases without slowing down development.
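
To make the automation half of the role concrete, here is a minimal sketch of the kind of component regression test you would own. It assumes Vitest running with a jsdom environment plus React Testing Library; the CarePlanSummary component is a hypothetical stand-in defined inline so the example is self-contained, not a real part of our codebase.

    // Minimal sketch of a Vitest + React Testing Library regression test.
    import { describe, it, expect } from 'vitest';
    import { render, screen } from '@testing-library/react';
    import React from 'react';

    // Hypothetical component: in a real test this would be imported from the app.
    function CarePlanSummary({ caregiver, openTasks }: { caregiver: string; openTasks: number }) {
      return (
        <section>
          <h2>{caregiver}</h2>
          <p>{openTasks} open tasks</p>
        </section>
      );
    }

    describe('CarePlanSummary', () => {
      it('shows the caregiver name and open-task count', () => {
        render(<CarePlanSummary caregiver="Alex" openTasks={3} />);

        // getByText throws if the element is missing, so these assertions
        // double as existence checks without extra matcher libraries.
        expect(screen.getByText('Alex')).toBeDefined();
        expect(screen.getByText(/3 open tasks/i)).toBeDefined();
      });
    });

Tests like this assert on what the user sees rather than on component internals, which is what keeps the regression suite stable as the UI evolves.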

What makes you a great fit

    • You have professional experience validating modern React applications and Node/TypeScript APIs, and you write clean test code with tools such as React Testing Library, Vitest, Playwright, or Cypress. 
    • You approach quality through a risk lens, crafting concise plans that focus coverage where it matters to users. Because our product leans heavily on LLMs, you’re comfortable reasoning about nondeterministic outputs, inventing creative test strategies, and refining heuristics in our custom runner to spot subtle prompt regressions; a sketch of this style of check follows this list.
    • You’re equally at home inspecting a DOM tree in DevTools and reviewing a pull request to suggest a more testable design. You communicate clearly and constructively in a collaborative team environment.
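
Because exact-string assertions break down against generative output, checks like the hedged sketch below assert properties of a response instead. The checkCareSummary helper, its keywords, and its length budget are illustrative assumptions for this posting, not our actual rules.

    import { describe, it, expect } from 'vitest';

    // Hypothetical heuristic: flags structural and safety problems in a generated
    // care summary instead of comparing it to a fixed expected string.
    function checkCareSummary(output: string): string[] {
      const problems: string[] = [];
      if (output.trim().length === 0) problems.push('empty response');
      if (output.length > 1200) problems.push('response exceeds length budget');
      if (!/medication|appointment|task/i.test(output)) {
        problems.push('missing any care-related keyword');
      }
      if (/as an ai language model/i.test(output)) {
        problems.push('model disclaimer leaked into user-facing text');
      }
      return problems;
    }

    describe('care summary heuristics', () => {
      it('flags an output that leaks a model disclaimer', () => {
        const output = 'As an AI language model, I cannot schedule appointments.';
        expect(checkCareSummary(output)).toContain('model disclaimer leaked into user-facing text');
      });

      it('accepts a short, on-topic summary', () => {
        const output = 'Two appointments this week; medication refill due Friday.';
        expect(checkCareSummary(output)).toHaveLength(0);
      });
    });
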

Bonus points

    • Experience with Python-based testing frameworks such as pytest or hypothesis, and familiarity with testing workflows involving LLM prompts or generative AI outputs.
    • Knowledge of CI/CD tools such as GitHub Actions, accessibility testing practices, and prior work in healthcare or HIPAA-compliant environments are also welcome, though not required.

Why join us

    • You’ll define the gold standard for quality at a mission‑driven startup improving the lives of caregivers and families. 
    • Your work lands in production every sprint, immediately enhancing the reliability and safety of AI features thousands of people depend on each day.