Sr. Processing Backend Engineer

Remote
Engineering
Full Time
Why we do what we do:
Logikcull’s mission is to democratize discovery. The costs and risks associated with complex data projects like e-discovery, responding to FOIA requests, and conducting internal investigations are skyrocketing as the amount of data increases. Logikcull is transforming these tasks so they can be completed in minutes, by anyone, anywhere. As a result, our customers, modern legal teams ranging from solo attorneys to massive law firms, Fortune 500 companies, and leading non-profit organizations, can find and use important information quickly and focus on the work that matters, like pursuing a better democracy or saving the Earth.

Who we are:
Logikcull.com is instant discovery for modern legal teams. Its secure, cloud-based solution helps law firms and organizations of all sizes solve the expensive, complex, and risky challenges associated with eDiscovery, internal investigations, and open records response. With Logikcull, you can start a discovery project in five seconds, from anywhere, at any time, on any device.

What we need:
We are looking for a strong data processing engineer who enjoys being challenged with new ways of implementing data transformations. You should be proficient in Ruby and Java, with a dose of Kotlin, and enjoy working in all three, since each will be part of your day-to-day work. We are looking for a team player who can have spirited debates about architecture without ego. You will need to be flexible and work well in an ever-changing environment. Finally, you will thrive here if you enjoy owning your projects from beginning to end. Bonus points if you have experience with highly parallel job processing, job orchestration frameworks, and handling the complexities of data deduplication in map-reduce architectures.

Our current stack includes: MySQL, Redis, Java, Kotlin, Ruby, MongoDB, and Elasticsearch, all running in AWS.

What you’ll be doing:

    • You will work on our processing pipeline, which handles billions of documents and terabytes of data
    • You will work on parts of the core pipeline, building data transformation, security, and OCR utilities that make data searchable
    • You’ll be responsible for writing clean, modular, maintainable code within our codebase
    • You will take a design or proposal and carry it through to a thoughtful, polished end result with good test coverage
    • You will work with our amazing Customer Success team to make our customers love us, building requested features and fixing reported issues
    • You will review code written by other engineers and provide useful and honest feedback
    • You will help architect solutions that scale our proprietary data processing platform to 10x, 100x, and beyond
    • You will participate in on-call rotations and work effectively and collaboratively during site outages

What we need from you:

    • 5+ years of development experience; a computer science degree is a plus
    • You have 2+ years of experience with Java/Kotlin on the JVM and enjoy working with the JVM ecosystem
    • You have 2+ years of experience with Ruby or Python in a production environment
    • You know how to make these languages work and take joy in making them perform
    • You have a good understanding of highly distributed processing systems
    • You are comfortable navigating through complex data structures and algorithms, and have a strong desire to produce optimal code for speed and efficiency
    • You are familiar with the basics of scalable software design and architecture
    • You have excellent communication skills and the willingness to share your expertise
    • You are open minded and enjoy learning from others 
    • You are pragmatic and sensible in your approach to problem solving
    • You are able to think critically and gather data to constructively support your position
    • You thrive in fast-moving environments without the need for constant supervision
    • Logikcull’s mission and values speak to you, and you feel they could inspire you to do your best work
    • Your gif game is strong and you know just the right clip in any situation
    • [Bonus] You have experience with batch processing and scheduling of remote jobs in parallel
    • [Bonus] You have strong experience with map-reduce concepts and deduplication of incoming data
    • [Bonus] You have experience with data transformation pipelines and job scheduling frameworks (e.g., Airflow, Conductor, Spark)