Technical Product Owner

Anywhere in Europe
Data Engineering / Contract / Remote
Who We Are
Massive Rocket is a high-growth Braze & Snowflake agency that has made significant strides in connecting digital marketing teams with product and engineering units. Founded just 5 years ago, we have experienced swift growth and are now at a crucial juncture, aspiring to reach $100M in revenue. Our focus is on delivering human experiences at scale, leveraging the latest in web, mobile, cloud, data, and AI technologies. We pride ourselves on innovation and the delivery of cutting-edge digital solutions.

Every role at Massive Rocket is entrepreneurial. Successful people here think beyond their own role: they understand the roles around them and their goals, and they contribute to the success and growth of their team, customers, and partners.

What We Offer
πŸš€ Fast-moving environment – you will never stop learning and growing
❀️ Supportive and positive work culture with an emphasis on our values
🌍 International presence – work with team members in Europe, the US, and around the globe
πŸͺ 100% remote forever
πŸ§—πŸΌβ€β™‚οΈ Career progression paths and opportunities for promotion/advancement
πŸ• Organized team events and outings

What We’re Looking For
We are looking for a Technical Product Owner (TPO) to drive the vision, strategy, and execution of our Kafka-based Data Engineering platform. You will act as the bridge between business stakeholders and engineering teams, owning the roadmap for our real-time data streaming stack. Your role is to ensure our Kafka pipelines and data engineering initiatives deliver scalable, reliable, and compliant event-driven solutions that drive business outcomes.

This role combines technical depth (Kafka, event-driven architectures, data governance) with product ownership (roadmap definition, prioritization, stakeholder alignment).

Responsibilities

1) Product Ownership
Define and own the product vision and roadmap for Kafka/Data Engineering capabilities. Translate business needs into technical requirements and prioritized backlogs. Ensure clear acceptance criteria and measurable KPIs.

2) Stakeholder Management
Collaborate with Data, Analytics, Marketing, and Product teams to define event models, SLAs, and integration needs. Align stakeholders on priorities, trade-offs, and delivery timelines.

3) Technical Strategy
Shape the architecture and evolution of Kafka-based pipelines (topics, partitions, retention, compaction, Connect/Debezium, Streams/ksqlDB). Partner with engineers to ensure scalable, secure, and cost-efficient solutions.
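
By way of illustration only, the sketch below shows the kind of topic-level choices this role helps make, using Python's confluent-kafka AdminClient; the broker address, topic name, and every setting are assumptions for the example, not our actual configuration.

    from confluent_kafka.admin import AdminClient, NewTopic

    # Hypothetical broker and topic names -- illustrative only.
    admin = AdminClient({"bootstrap.servers": "localhost:9092"})

    topic = NewTopic(
        "user-profile-events",          # hypothetical topic name
        num_partitions=12,              # sized for expected consumer parallelism
        replication_factor=3,           # survives the loss of two brokers
        config={
            "cleanup.policy": "compact,delete",            # keep latest value per key, bound total size
            "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # delete segments older than 7 days
            "min.insync.replicas": "2",                    # durability floor for acks=all producers
        },
    )

    # create_topics is asynchronous; wait on the returned futures to surface errors.
    for name, future in admin.create_topics([topic]).items():
        future.result()
        print(f"created topic {name}")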

4) Governance & Compliance
Drive schema governance (Avro/Protobuf), data quality enforcement, and regulatory compliance (GDPR/CCPA, PII handling). Ensure monitoring, observability, and incident management practices are in place.
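
As a hedged sketch of what schema governance can look like in practice, the example below pins a subject to BACKWARD compatibility and registers an Avro schema via the confluent-kafka Schema Registry client; the registry URL, subject name, and record fields are hypothetical.

    from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

    # Hypothetical registry URL and subject name -- illustrative only.
    sr = SchemaRegistryClient({"url": "http://localhost:8081"})
    subject = "user-profile-events-value"

    # The map field carries a default, so older readers keep working (BACKWARD compatible).
    avro_schema = """
    {
      "type": "record",
      "name": "UserProfileEvent",
      "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "email_hash", "type": "string"},
        {"name": "consent_flags", "type": {"type": "map", "values": "boolean"}, "default": {}}
      ]
    }
    """

    sr.set_compatibility(subject, "BACKWARD")  # reject breaking changes at registration time
    schema_id = sr.register_schema(subject, Schema(avro_schema, "AVRO"))
    print(f"registered schema id {schema_id}")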

5) Delivery Management
Own backlog grooming, sprint planning, and delivery tracking. Ensure throughput, latency, and consumer lag targets are met. Manage risks, dependencies, and SLAs.
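
For context, consumer lag is commonly measured as the gap between a group's committed offsets and the log-end offsets; the minimal confluent-kafka sketch below illustrates the idea, with the broker, group, topic, and partition count all assumed for the example.

    from confluent_kafka import Consumer, TopicPartition

    # Hypothetical group/topic/broker -- illustrative only.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "profile-enrichment",
        "enable.auto.commit": False,
    })

    partitions = [TopicPartition("user-profile-events", p) for p in range(12)]
    for tp, committed in zip(partitions, consumer.committed(partitions, timeout=10)):
        _, high = consumer.get_watermark_offsets(tp, timeout=10)  # log-end offset
        lag = high - (committed.offset if committed.offset >= 0 else 0)
        print(f"partition {tp.partition}: lag={lag}")

    consumer.close()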

6) Optimization & Innovation
Continuously evaluate and introduce improvements in reliability, cost-efficiency, and latency. Assess new tools, frameworks, and best practices in data streaming and event-driven systems.


Required Skills and Qualifications:

β€’ Product Ownership & Agile Delivery
- 4+ years of proven experience as a Product Owner or Technical Product Owner in data engineering or streaming domains.
- Demonstrated ability to own a product vision and roadmap, align it with business goals, and communicate it effectively to technical and non-technical stakeholders.
- Hands-on experience in backlog management, user story writing, prioritization (MoSCoW, WSJF, RICE), and defining acceptance criteria.
- Strong experience with Agile/Scrum/Kanban frameworks, backlog grooming, and sprint ceremonies.

β€’ Stakeholder Engagement & Business Value
- Skilled at gathering and refining requirements from diverse stakeholders (data, analytics, product, marketing).
- Ability to translate business outcomes into technical user stories for Kafka/data engineering teams.
- Experience in balancing trade-offs (cost, reliability, time-to-market, compliance) and negotiating priorities.
- Comfortable defining KPIs, SLAs, and success metrics for platform capabilities.

β€’ Technical Acumen
- Strong understanding of event-driven architectures, Kafka ecosystem (Confluent, Kafka Connect, Schema Registry, Streams/ksqlDB).
- Familiarity with data governance, compliance, and security requirements (GDPR/CCPA, PII handling, encryption, RBAC/ACLs).
- Working knowledge of cloud platforms (AWS/Azure/GCP), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform/Helm).
- Ability to engage in technical discussions on latency, throughput, consumer lag, schema evolution, and cost optimization.

β€’ Leadership & Communication
- Excellent written and verbal communication skills (English C1 or higher).
- Proven ability to present technical concepts in business language and influence decisions.
- Experience in cross-functional leadership within engineering squads or consulting/client-facing roles.

Preferred Qualifications:
β€’ Experience with platform product ownership (internal developer platforms, streaming data services).
β€’ Familiarity with observability and reliability practices (Prometheus, Grafana, Datadog, incident response).
β€’ Exposure to Customer Data Platforms (e.g., mParticle) or tag management systems (e.g., Tealium).
β€’ Background in data engineering/software development (Python, Kotlin, Spark/Flink) is a strong plus.

If you're ready to launch your career to new heights at a company fueled by passion and innovation, we want to hear from you!