Hadoop Software Engineer
Arm Treasure Data is changing the data management landscape: we provide an industry-leading, cloud-based Big Data Analytics platform on which our many customers process an astounding amount of data every day. If you enjoy using both halves of your brain to scope and frame undefined problems using intuition, common sense, relevant data, and a strong academic knowledge of computer science fundamentals, we may have a match here!
We’re looking for an experienced Software Engineer who is excited to jump between our APIs, middleware, and backend. In this position you’re likely to work on a number of different challenges across our entire stack, designing and implementing or improving key features and components of our technology: a robust and scalable data platform leveraging adapted open-source query engines (Hadoop/Hive, Presto) hosted in the public cloud. You can also expect to have direct contact with our customers at times, hearing their feedback firsthand and working with our Product team to incorporate their input into our product, and to operate parts of our systems, especially the ones you build yourself.
At Arm Treasure Data, we put a great deal of emphasis on collaboration and maintaining an open work environment, regardless of location. We believe employees should not just work but enjoy doing it - appreciating and valuing working alongside your co-workers goes a long way towards that goal and we strongly believe in ensuring that’s always the case.
If this sounds like the kind of opportunity you’ve been looking for, then we’re going to need your resume, of course, but more importantly, include a short note giving us a sense of why you are absolutely the right person for this job and how you will meet and exceed the objectives outlined below.
Things you will do
- Design and implement/improve (Java, Ruby, and Python are our languages of the trade) our middleware (Hadoop, Hive, etc.) and/or backends in cooperation with the Product team to continue supporting our data-heavy analysis use cases.
- Develop software for operation automation and monitoring.
- Analyze performance and suggest or implement improvements across a wide span of networks and middleware/backend applications.
- Operationalize (as in making it easier to observe, monitor, and operate) as well as perform operations on our systems and applications hosted on public cloud providers.
- Contribute your input on product improvements to stay ahead of industry trends and standards.
- Help translate business and product requirements into technical specifications and designs.
- Communicate effectively with technical and non-technical colleagues across time zones and teams.
- Help train and mentor other Software Engineers.
Your background and skills will include
- A BS or MS in Computer Science or a related field.
- A solid understanding of computer science (data structures, algorithms, etc.).
- Around 3 years of professional experience as a Software Engineer.
- Experience operating a Hadoop cluster or similar large Java-based systems.
- Excellent Java programming experience and experience working with and tuning the JVM.
- Industry experience running services on public cloud IaaS providers, specifically using compute, storage, relational databases, and load balancers to achieve service redundancy and robustness.
- Demonstrated ability to work collaboratively in cross-functional teams, with a strong track record of delivering as part of a team rather than individually.
- A strong foundation in systems engineering on Unix-like systems.
- A habit of mentoring and knowledge sharing within your team, creating a collaborative learning environment.
- Able to work with a distributed team.
- Ability to handle stressful situations with rigor and composure.
- Self-motivation and a commitment to on-time delivery.
We would be thrilled if you
- Have had experience building and managing data-centric services that support a large user base.
- Are knowledgeable about MySQL, PostgreSQL, Presto, or other open-source distributed databases/engines.
- Are familiar with security best practices.
- Have hands-on experience with software packaging, such as Debian packaging or similar.
- Take equal pride in optimizing as well as building systems and are able to share a success story around the former.
- Own or are actively contributing to any open-source project.
- Have experience designing and developing APIs, middleware, and/or backends to support data-heavy analysis systems.
- Are articulate and personable, with strong spoken and written communication skills.
You can expect a work environment where the team is collaborative and open to your ideas, while we keep our collective eye on supporting our customers’ needs.
Our team is committed to technical innovation in our product and in the world through customer collaboration, open-source projects, and by continuing to make our product an integral part of our customers’ growth and success.
We are an equal opportunity employer dedicated to building an inclusive and diverse workforce.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Arm Treasure Data provides an end-to-end, fully managed cloud service (data acquisition, storage, and analysis) for Big Data that is trusted and simple. As the original developers of Fluentd, an advanced open-source log collector specifically designed to solve the big data log collection problem, Arm Treasure Data solves the problem for companies that need to manage their big data.
Agencies and recruiters: we cannot consider your candidate(s) without a contract in place. Any resumes received without an active agreement will be considered gratis referrals to us. Thank you for your understanding and cooperation!