Business Intelligence
Ninja Van is on a mission to dominate e-commerce logistics in Southeast Asia. We are one of the fastest-growing startups in the region: founded in mid-2014, we have already won over 10,000 merchants and deliver more than 8 million parcels a month across six countries. In January 2018, we raised one of the largest Series C rounds ever seen in Southeast Asia, and we are well positioned for our next phase of rapid growth.
At our core, we are a technology company disrupting a massive industry with cutting-edge software and operational concepts. Powered by algorithm-based optimisation, dynamic routing, end-to-end tracking and a data-driven approach, we provide best-in-class delivery services that delight both shippers and end customers. But we are just getting started: we have plenty of room for improvement and many ideas that will further shape the industry.
Role & Responsibilities
- Design, develop and maintain Ninja Van’s infrastructure for streaming, processing and storing data (see the Kafka consumer sketch after this list).
- Build tools for effective maintenance and monitoring of the data infrastructure.
- Contribute to key data pipeline architecture decisions and lead the implementation of major initiatives.
- Work closely with stakeholders to develop scalable and performant solutions for their data requirements, including extraction, transformation and loading of data from a range of data sources.
- Develop the team’s data capabilities - share knowledge, enforce best practices and encourage data-driven decisions.
- Develop Ninja Van’s data retention policies and backup strategies, and ensure that the firm’s data is stored redundantly and securely.
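For a flavour of the streaming side of the role, here is a minimal sketch of a consumer on a Kafka-based real-time pipeline. The broker address, topic name and consumer group are illustrative assumptions, not our actual configuration, and the sketch assumes a recent Kafka client (2.0+):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ParcelEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "parcel-events-etl");       // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("parcel-events")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // A real pipeline would transform the event and load it into a
                    // store such as Cassandra or Elasticsearch; we just log it here.
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

In production, this consume-transform-load loop would sit behind proper error handling, offset management and monitoring, which is exactly the tooling this role builds.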
Requirements
- Solid Computer Science fundamentals, excellent problem-solving skills and a strong understanding of distributed computing principles.
- At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.
- Expert SQL knowledge and deep experience working with relational and NoSQL databases (e.g. HBase, Cassandra).
- Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, MapReduce.
- Experience with stream-processing systems (e.g. Storm, Spark Streaming), big data querying tools (e.g. Pig, Hive) and data serialization frameworks (e.g. Protobuf, Thrift, Avro); a short Spark Streaming sketch follows this list.
- Bachelor’s or Master’s degree in Computer Science or related field from a top university.
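To make the stream-processing requirement concrete, here is a minimal Spark Streaming job in Java, following the classic word-count pattern from the Spark documentation. The socket source on localhost:9999 and the local master are purely illustrative:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

public class StreamingWordCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Illustrative source: a plain-text socket stream (e.g. `nc -lk 9999`).
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

        // Split each line into words, then count occurrences per 5-second batch.
        JavaPairDStream<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);

        counts.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
```

The same pattern of transformations over an unbounded stream applies whether the source is a socket, Kafka or HDFS.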
Tech Stack
- Data storage: Percona XtraDB Cluster, Elasticsearch, Apache Cassandra
- In-memory data grid: Hazelcast (see the sketch after this list)
- Real-time data pipeline: Apache Kafka
- Backend web service stack: Play (Java 8), Go, Node.js
- Web frontend: AngularJS, React
- Mobile: Android SDK, React Native
- Containerization: Docker on Kubernetes
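As a quick illustration of where the in-memory data grid fits, here is a minimal Hazelcast sketch in Java. The map name, key and value are hypothetical:

```java
import java.util.Map;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class GridCacheDemo {
    public static void main(String[] args) {
        // Start an embedded Hazelcast member; a real deployment would
        // join an existing cluster rather than run standalone.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map is shared and partitioned across all grid members.
        Map<String, String> parcelStatus = hz.getMap("parcel-status"); // hypothetical map name
        parcelStatus.put("NV-12345", "IN_TRANSIT");
        System.out.println(parcelStatus.get("NV-12345"));

        hz.shutdown();
    }
}
```

Because the map lives in the grid rather than in any single JVM, every service instance sees the same hot data without a round trip to the primary datastore.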
Submit a job application