Systems Architect (Singapore, Indonesia, or India)

IMACS / Remote (Singapore, Indonesia, or India)
The Institute for Health Modeling and Climate Solutions (IMACS) is a global center of excellence, hosted by Malaria No More, with the mission to empower the world’s most climate-vulnerable countries with the tools, data, and expertise needed to predict, prevent, and respond to climate-sensitive health threats.  

IMACS is redefining how climate intelligence is operationalized in public health by building and scaling AI-powered digital public goods that integrate and model climate and health data. Through the application of machine learning, interoperable platforms, and next-generation early warning systems, IMACS enables real-time risk detection and proactive responses at scale. IMACS supports countries through co-designed implementation pathways: orchestrating data cooperation, strengthening national health and climate information systems with tailored innovations, training frontline actors and policymakers, and institutionalizing their use through clear SOPs and sustainability guidelines. By unlocking the value of climate and health data, IMACS helps transform fragmented information into strategic, actionable knowledge, enabling smarter decisions, better preparedness, and more resilient health systems in the era of climate disruption.

Backed by the Patrick J. McGovern Foundation, we are building a Central Data & Analytics Hub (CDAH) to advance IMACS’ climate health AI foundation model and related digital public goods, as well as a training program, to equip public health professionals with the knowledge and tools required to make data-informed decisions at the intersection of climate and health.  

CDAH Platform Overview

The CDAH will be a cloud-native, open-source “operating system” for integrated climate and health intelligence, built on five pillars:

    • AI R&D environment: Ingests multi-modal climate, environmental, epidemiological and socio-demographic data into a unified data lake & feature store; supports Kubeflow/PyTorch/TensorFlow pipelines with MLflow registry, automated benchmarking, architecture search, transfer learning and uncertainty-aware modeling. 
    • Digital tool marketplace & public goods registry: User-facing portal for dashboards, mobile apps and alerting platforms; structured backend registry of pre-trained model packages, microservices, ETL scripts, governance adapters, metadata and version history. 
    • Systems integration & deployment layer: Middleware adapters and Kafka messaging to plug AI services into DHIS2, HMIS, IDSR and similar platforms; Terraform/Ansible IaC, identity management, end-to-end encryption and compliance with data-governance standards. 
    • Training environment: Web portal and virtual bootcamp infrastructure hosting open-access modules, instructor-led sessions, hands-on Jupyter labs, code templates and certification tracks on climate-health AI workflows and interoperability. 
    • Real-world evaluation sandbox: Controlled simulation environment replicating public-health workflows, climate variability and institutional constraints; structured feedback loops for piloting, validating and refining tools prior to full-scale rollout. 
    • AI R&D environment:Ingests multi-modal climate, environmental, epidemiological and socio-demographic data into a unified data lake & feature store; supports Kubeflow/PyTorch/TensorFlow pipelines with MLflow registry, automated benchmarking, architecture search, transfer learning and uncertainty-aware modeling. 
    • Digital tool marketplace & public goods registry: User-facing portal for dashboards, mobile apps and alerting platforms; structured backend registry of pre-trained model packages, microservices, ETL scripts, governance adapters, metadata and version history. 
    • Systems integration & deployment layer: Middleware adapters and Kafka messaging to plug AI services into DHIS2, HMIS, IDSR and similar platforms; Terraform/Ansible IaC, identity management, end-to-end encryption and compliance with data-governance standards. 
    • Training environment: Web portal and virtual bootcamp infrastructure hosting open-access modules, instructor-led sessions, hands-on Jupyter labs, code templates and certification tracks on climate-health AI workflows and interoperability. 
    • Real-world evaluation sandbox: Controlled simulation environment replicating public-health workflows, climate variability and institutional constraints; structured feedback loops for piloting, validating and refining tools prior to full-scale rollout. 
    •  
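As a hedged illustration of the systems-integration pillar, the sketch below shows one way a middleware adapter could consume model-generated alerts from a Kafka topic and forward them to a DHIS2 instance through its standard Web API. The topic name, message shape, and endpoint URL are assumptions for illustration, not CDAH specifications; production adapters would add schema validation, retries, and the identity-management and encryption controls described above.

    # Minimal sketch: Kafka -> DHIS2 middleware adapter (Python).
    # Assumed, not specified in this posting: a topic named "climate-health.alerts",
    # alert messages shaped like {"dataElement", "period", "orgUnit", "value"},
    # and a DHIS2 instance exposing the standard /api/dataValueSets endpoint.
    import json
    import os

    import requests
    from kafka import KafkaConsumer  # pip install kafka-python

    DHIS2_URL = os.environ["DHIS2_URL"]  # e.g. https://dhis2.example.org (placeholder)
    DHIS2_AUTH = (os.environ["DHIS2_USER"], os.environ["DHIS2_PASSWORD"])

    consumer = KafkaConsumer(
        "climate-health.alerts",                              # hypothetical topic name
        bootstrap_servers=os.environ.get("KAFKA_BROKERS", "localhost:9092"),
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        group_id="dhis2-adapter",
    )

    for message in consumer:
        alert = message.value
        payload = {
            "dataValues": [
                {
                    "dataElement": alert["dataElement"],      # DHIS2 data element UID
                    "period": alert["period"],                # e.g. "202501"
                    "orgUnit": alert["orgUnit"],              # DHIS2 organisation unit UID
                    "value": alert["value"],
                }
            ]
        }
        # Post the value set to DHIS2 over HTTPS with basic auth.
        response = requests.post(
            f"{DHIS2_URL}/api/dataValueSets", json=payload, auth=DHIS2_AUTH, timeout=30
        )
        response.raise_for_status()

In production such an adapter would run as a containerized microservice behind the platform’s identity-management layer, with TLS to both the Kafka brokers and DHIS2.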

What You’ll Do

    • Cloud infrastructure and security: Design, deploy and operate the CDAH’s secure, scalable, cloud-native backbone using Kubernetes, Docker, Terraform and Ansible; implement identity management, encryption, and compliance frameworks. 
    • DevSecOps & CI/CD leadership: Lead all infrastructure CI/CD (Terraform/Ansible), security scans, vulnerability testing, and platform health monitoring. 
    • Middleware and systems integration: Define and build REST/Kafka-based middleware adapters to embed AI services into DHIS2, HMIS, IDSR and other national health systems, ensuring data governance, authentication and encryption. 
    • Sandbox orchestration: Provision, monitor, and tear down real-world evaluation sandboxes end-to-end (see the sketch after this list). 
    • Front-end architecture and development: Architect and build the CDAH web portal front end: select frameworks, implement UI/UX components, integrate with backend APIs and data services, and ensure high availability, performance, accessibility and seamless user experience. Incorporate basic UX analytics (usage metrics, A/B tests) and iterate on portal design. 
    • Platform integration and collaboration: Work closely with the Data Engineer and AI/ML Engineer to weave data and model services into a cohesive, end-to-end platform. 
    • Learning Management System development and operations: Configure and operate the virtual bootcamp environment (user authentication, content management, lab provisioning and analytics dashboards), provisioning ephemeral compute, monitoring performance and responding to live support requests. 
    • DevOps blueprints and open source: Publish Terraform modules, Helm charts, Dockerfiles, and integration blueprints to the public-goods registry, ensuring reusability, security, scalability and compliance. 
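As a hedged sketch of the sandbox-orchestration responsibility above, the example below provisions an isolated, quota-bounded Kubernetes namespace for a pilot and tears it down afterwards, using the official Kubernetes Python client. The namespace name, label, and quota values are illustrative assumptions rather than CDAH conventions.

    # Minimal sketch: provision and tear down an evaluation sandbox as a
    # quota-bounded Kubernetes namespace. Names, labels, and quota values are
    # illustrative assumptions, not CDAH conventions.
    from kubernetes import client, config  # pip install kubernetes


    def provision_sandbox(name: str, cpu: str = "8", memory: str = "32Gi") -> None:
        """Create an isolated namespace with a resource quota for a pilot."""
        config.load_kube_config()  # use load_incluster_config() when running in-cluster
        core = client.CoreV1Api()
        core.create_namespace(
            client.V1Namespace(
                metadata=client.V1ObjectMeta(name=name, labels={"cdah-sandbox": "true"})
            )
        )
        core.create_namespaced_resource_quota(
            namespace=name,
            body=client.V1ResourceQuota(
                metadata=client.V1ObjectMeta(name="sandbox-quota"),
                spec=client.V1ResourceQuotaSpec(
                    hard={"requests.cpu": cpu, "requests.memory": memory, "pods": "20"}
                ),
            ),
        )


    def teardown_sandbox(name: str) -> None:
        """Delete the namespace; Kubernetes garbage-collects everything inside it."""
        config.load_kube_config()
        client.CoreV1Api().delete_namespace(name=name)


    if __name__ == "__main__":
        provision_sandbox("pilot-dengue-early-warning")  # hypothetical pilot name
        # ... run the evaluation workloads and collect structured feedback ...
        teardown_sandbox("pilot-dengue-early-warning")

In practice this lifecycle (create, bound with quotas, monitor, delete) would be driven from the Terraform/Ansible pipelines described above rather than run ad hoc.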

What We’re Looking For

    • Deep technical expertise: 8+ years in cloud architecture or SRE/DevOps, with a strong track record in secure, large-scale Kubernetes and Terraform deployments. 
    • MLOps & Cloud proficiency:Expertise in Kubernetes, Docker, CI/CD (GitOps), and multi-cloud (AWS, Azure, GCP) environments. 
    • API & microservices: Proven ability to design, implement, and secure RESTful APIs and event-driven architectures. 
    • Consulting acumen: Exceptional stakeholder management, technical storytelling, and client-facing presentation skills, ideally honed at a top-tier consulting firm or tech organization. 
    • Autonomous delivery: Demonstrated capacity to own complex projects end-to-end, navigate ambiguity, and deliver production-ready solutions with minimal oversight. 

Preferred Qualifications

    • Prior engagement in global health, One Health, or climate-health data initiatives. 
    • Familiarity with data-governance frameworks (e.g., GDPR, HIPAA) and cybersecurity best practices. 
    • Experience designing and delivering technical training or bootcamps. 
    • Contributions to open-source digital public goods or curated registries. 

Why You’ll Love This Role

    • High-impact mission: Your work will directly strengthen early warning systems and resilience in climate-vulnerable regions. 
    • Technical leadership: Shape the CDAH's cloud-native strategy and run critical platform components. 
    • Innovation-friendly environment: Experiment with state-of-the-art container orchestration, IaC and event-driven integration patterns. 
    • Global collaboration: Engage a diverse network of public-health experts, policymakers, and open-source communities. 

How to Apply

Please submit your résumé, a brief cover letter outlining your most relevant cloud architecture projects or consulting engagements, and links to GitHub repos or demos.

Engagement: Contract through June 30, 2026 (extension subject to funding)