QA Engineer
Miami, Florida / Hampton, Virginia / Remote
Luminexis.AI – Corporate / Full-time / Hybrid
Luminexis.AI builds human-centered AI systems that accelerate clarity, streamline decision-making, and solve complex problems at scale. We work with organizations that need more than automation—they need intelligent infrastructure that moves as fast as they do.
Where AI Starts for Companies That Can’t Afford to Fail
Luminexis.AI delivers AI strategy and execution for organizations operating in high-stakes environments—defense, healthcare, insurance, and logistics. Through our flagship Discovery Package and the upcoming Illuminate™ platform, we turn AI overwhelm into clarity, capability, and measurable results. We’re a fast-moving, client-first, technology-driven team looking for proven talent to help us scale from high-trust engagements to platform-driven growth.
The Opportunity
We are seeking a detail-oriented QA Engineer to join our Quality Assurance team within the Product & Engineering organization. This role requires a skilled QA professional with 6-8 years of quality assurance experience who can ensure the reliability, performance, and accuracy of both traditional software applications and cutting-edge AI/ML systems. The ideal candidate will develop comprehensive testing strategies, implement automated testing frameworks, and collaborate with development teams to deliver high-quality solutions that meet rigorous quality standards across diverse client environments.
Key Responsibilities
Software Application Testing & Quality Assurance
- Design and execute comprehensive test plans for web applications, mobile apps, and enterprise software systems including functional, integration, regression, and user acceptance testing
- Develop and maintain automated test suites using tools like Selenium, Cypress, TestNG, and Jest to ensure consistent quality across multiple releases
- Perform API testing using Postman, REST Assured, or similar tools to validate data integrity, error handling, and performance characteristics
- Conduct cross-browser and cross-platform compatibility testing to ensure consistent user experience across different environments
- Execute performance testing and load testing using JMeter or LoadRunner to identify bottlenecks and scalability limitations
AI/ML Application Testing & Validation
- Develop specialized testing frameworks for machine learning models including data quality validation, model accuracy assessment, and bias detection
- Create test datasets and implement data pipeline testing to ensure AI systems receive clean, representative training and inference data
- Perform model performance testing including accuracy metrics, precision/recall analysis, and robustness testing against edge cases and adversarial inputs
- Validate AI model outputs for consistency, explainability, and adherence to business requirements and ethical AI principles
- Test AI/ML integration points including real-time inference APIs, batch processing systems, and model versioning workflows
Test Automation & Framework Development
- Build and maintain scalable test automation frameworks that support both traditional software and AI application testing requirements
- Implement continuous integration testing pipelines using Jenkins, GitLab CI, or GitHub Actions to provide rapid feedback on code quality
- Develop custom testing tools and utilities for specialized AI testing scenarios including model drift detection and data validation
- Create comprehensive test documentation including test cases, testing procedures, and quality metrics reporting
- Establish testing standards and best practices for both software applications and AI/ML systems across development teams
Quality Process Management & Collaboration
- Collaborate with development teams, product managers, and data scientists to understand requirements and define comprehensive testing strategies
- Participate in agile ceremonies including sprint planning, daily standups, and retrospectives to ensure quality considerations are integrated throughout the development lifecycle
- Manage defect tracking and resolution processes using tools like Jira, ensuring timely communication and resolution of quality issues
- Conduct root cause analysis for production defects and implement preventive measures to avoid recurring issues
- Provide quality metrics and testing reports to stakeholders including test coverage, defect density, and release readiness assessments
Qualifications
QA Engineering & Testing Expertise
- 6-8 years of hands-on quality assurance experience with both manual and automated testing methodologies
- Proficiency in test automation tools and frameworks including Selenium WebDriver, Cypress, TestNG, JUnit, or Pytest
- Strong experience with API testing tools (Postman, REST Assured, SoapUI) and database testing including SQL query validation
- Knowledge of performance testing tools (JMeter, LoadRunner, Gatling) and mobile testing frameworks (Appium, Espresso, XCTest)
- Experience with CI/CD pipelines and integrating automated tests into continuous deployment workflows
AI/ML Testing & Data Quality Knowledge
- Understanding of machine learning concepts, model evaluation metrics, and statistical analysis for AI system validation
- Experience with data quality assessment, data profiling, and test data management for AI/ML applications
- Knowledge of AI/ML frameworks (TensorFlow, PyTorch, scikit-learn) and ability to understand model behavior and limitations
- Familiarity with AI ethics, bias detection, and fairness testing methodologies for responsible AI deployment
- Experience with big data testing tools and techniques for validating large-scale data processing systems
Technical Skills & Development Capabilities
- Strong programming skills in languages such as Python, Java, JavaScript, or C# for test automation development
- Experience with version control systems (Git), test management tools, and defect tracking systems
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerized testing environments using Docker and Kubernetes
- Understanding of database systems, SQL queries, and data validation techniques for backend system testing
- Familiarity with security testing principles and tools for identifying vulnerabilities and compliance issues
Communication & Analytical Abilities
- Excellent analytical and problem-solving skills with attention to detail and ability to identify edge cases and potential failure scenarios
- Strong written and verbal communication skills for creating clear test documentation and collaborating with cross-functional teams
- Ability to translate business requirements into comprehensive testing strategies and provide risk assessment for release decisions
- Bachelor's degree in Computer Science or a related field
Deliverables
Quality Delivery & Testing Excellence
- Achieve 95%+ test coverage across assigned projects with comprehensive test automation that reduces manual testing effort by 60%
- Identify and prevent critical defects from reaching production environments, maintaining a defect escape rate below 5%
- Implement testing frameworks that support both software and AI applications with standardized approaches across development teams
- Deliver testing results and quality assessments that enable confident release decisions and meet client quality expectations
AI/ML Testing Innovation & Impact
- Develop specialized AI testing methodologies that become standard practice across the organization's AI/ML projects
- Validate that AI model accuracy and performance metrics meet or exceed business requirements by 10-15%
- Implement bias detection and fairness testing protocols that ensure ethical AI deployment and regulatory compliance
- Create reusable AI testing frameworks and tools that reduce AI project testing time by 25-30%
Process Improvement & Team Collaboration
- Contribute to quality process improvements that increase overall team productivity and reduce time-to-market by 20-40%
- Mentor junior QA engineers and contribute to team knowledge sharing through documentation and training sessions
- Maintain strong working relationships with development teams resulting in proactive quality discussions and early defect prevention
Professional Development & Technical Leadership
- Lead 2-3 quality improvement initiatives annually that enhance testing capabilities and organizational quality standards
- Contribute to thought leadership through technical presentations, blog posts, or participation in quality assurance communities
At Luminexis.AI, we believe the best teams win—by hiring elite talent, putting the customer first, driving innovation, and acting with urgency and integrity.
Nothing in this job description restricts management’s right to assign or reassign duties and responsibilities to this job at any time. This description reflects management’s assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned. This job description is subject to change at any time.