Big Data/Hadoop Engineer

India - Bangalore
Public Cloud - Offerings and Delivery - Cloud Data Services / Hybrid
We are hiring a Big Data / Hadoop Engineer.

Experience - 3+ years
Location - Bangalore (3 days WFO)
Notice Period - Less than 30 days only

- Big Data Development: Design, build, and maintain robust, scalable data pipelines for ETL/ELT processes using tools within the Hadoop ecosystem.
- Programming & Scripting: Develop complex data processing applications in Spark (with Python, Scala, or Java) and implement efficient jobs using Hive and Pig.
- Hadoop Ecosystem Mastery: Work extensively with core Hadoop components, including HDFS, YARN, MapReduce, Hive, HBase, Pig, Sqoop, and Flume.
- Performance Tuning: Optimize large-scale data processing jobs for speed and efficiency, troubleshooting performance bottlenecks across the cluster.
- Data Modeling: Collaborate with data scientists and analysts to design and implement optimized data models for the data lake/warehouse (e.g., Hive schemas, HBase column families).
- Workflow Management: Implement and manage job orchestration and scheduling using Oozie or similar workflow managers.
- Operational Support: Monitor, manage, and provide L2/L3 support for production Big Data platforms, ensuring high availability and reliability.
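As a rough illustration of the ETL/ELT pipeline work listed above, the sketch below shows the shape of a typical extract-and-aggregate step. It is written in plain Python as a stand-in for a Spark job, and all record fields and values are hypothetical, not taken from any actual pipeline in this role:

```python
# Minimal ETL sketch: clean raw records, then aggregate per key
# (analogous to a Spark filter + groupBy/sum). All data is made up.

raw_events = [
    {"user_id": "u1", "amount": "120.50", "status": "ok"},
    {"user_id": "u2", "amount": "bad",    "status": "ok"},      # malformed amount
    {"user_id": "u1", "amount": "30.00",  "status": "failed"},  # failed event
]

def parse_amount(value):
    """Extract step: coerce a raw string to float, or None if malformed."""
    try:
        return float(value)
    except ValueError:
        return None

def transform(events):
    """Transform step: keep successful events with valid amounts,
    then total the amount per user."""
    totals = {}
    for event in events:
        amount = parse_amount(event["amount"])
        if amount is None or event["status"] != "ok":
            continue  # drop malformed or failed rows
        totals[event["user_id"]] = totals.get(event["user_id"], 0.0) + amount
    return totals

print(transform(raw_events))  # → {'u1': 120.5}
```

In a real Spark job the same filter-and-aggregate logic would run distributed over HDFS data; the point here is only the pipeline shape, not the execution engine.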
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.