Big Data Engineer - Chicago

Schaumburg, IL
Development
Full-time
If you have experience with big data solutions, you have a wealth of opportunities to grow your career. How about an environment with extraordinarily talented peers? How about a company offering a range of both big data products and professional services to expand your skills? How about an organization working on cutting-edge projects that many of the biggest companies simply can't execute on their own? Are you looking to join a startup where you can directly contribute to building one of the next great high-tech companies? Then Kogentix, now hiring Junior-to-Mid-Level Big Data Engineers, may just be a perfect fit.


Responsibilities

    • Develops distributed applications to solve large-scale processing problems, using languages such as Java, Scala, and Shell
    • Implements, troubleshoots, and optimizes solutions based on modern big data technologies such as Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models
    • Implements data architecture, including batch and real-time data ingress from a broad variety of external systems, data transformations to prepare data for analytics processing, and data egress to make analytics results available to visualization systems, applications, or external data stores (a minimal pipeline sketch follows this list)
    • Supports documentation, change control, and QA processes consistent with enterprise requirements
    • Establishes strong teamwork with client technical resources and clearly communicates project status, technical issues and resolution options, and operational requirements to client stakeholders
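
The pipeline responsibilities above follow a common pattern: batch or real-time ingress, transformation for analytics, and egress to downstream consumers. Below is a minimal batch sketch of that pattern using Apache Spark in Scala; the input/output paths and the event schema (timestamp, eventType) are hypothetical placeholders for illustration, not a description of Kogentix systems.

```scala
// Minimal batch ingress -> transform -> egress sketch with Apache Spark.
// Paths and schema are assumptions for illustration; a production pipeline
// would add schema validation, error handling, and incremental loading.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyEventCounts")
      .getOrCreate()

    // Ingress: read raw JSON events landed by an upstream system
    val events = spark.read.json("hdfs:///data/raw/events.json")

    // Transform: aggregate events per type per day for analytics use
    val dailyCounts = events
      .withColumn("day", to_date(col("timestamp")))
      .groupBy("day", "eventType")
      .agg(count("*").alias("eventCount"))

    // Egress: publish results where visualization tools can read them
    dailyCounts.write
      .mode("overwrite")
      .partitionBy("day")
      .parquet("hdfs:///data/out/daily_counts")

    spark.stop()
  }
}
```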

Qualifications

    • 2-3 years of overall experience, with a minimum of 1 year on Hadoop or a closely related technology
    • Very strong server-side Java experience, especially in open-source, data-intensive, distributed environments
    • Expertise in the Hadoop framework and Java programming (e.g., Spark, MapReduce, Pig, Hive, Kafka, Storm), including performance tuning
    • Experience implementing complex projects at considerable data scale (TB/PB)
    • Good understanding of algorithms, data structures, and performance optimization techniques
    • Experience with agile development methodologies like Scrum
    • Self-motivated, with the ability to drive technical discussions
    • Organized, detail-oriented, and able to work both independently and in a team
    • Excellent problem solver, analytical thinker, and quick learner
    • Strong verbal and written communication skills
    • Broad understanding of most of the following, with depth of expertise and experience in at least one:
        ◦ Hadoop security (Kerberos, Ranger, Knox)
        ◦ Amazon EMR and related technologies (e.g., DynamoDB, Kinesis, S3)
        ◦ Data mining, statistical modeling techniques, and quantitative analysis
        ◦ Data architecture, master data management, and governance
        ◦ Kafka
        ◦ Search technologies such as Elasticsearch
        ◦ NoSQL databases such as Cassandra and MongoDB
    • Certifications a plus: Amazon, Cloudera, Spark
    • Master's or Bachelor's degree in Computer Science, with a focus on distributed computing