Unidev is looking for a Senior Hadoop Developer to design, develop, and modify general computer applications software or specialized utility programs. At Unidev, you will work with a team of talented individuals who share a strong passion for their work.
Job responsibilities and duties include:
- Implement data analytics processing algorithms on Big Data batch and stream processing frameworks (e.g., Hadoop MapReduce, Spark, Kafka) using languages such as Python and Scala.
- Perform data acquisition, preparation, and analysis, leveraging a variety of programming techniques in Spark with Scala.
- Work on complex issues where analysis of situations and data requires an in-depth evaluation of variable factors.
- Load data from different datasets and determine which file format is most efficient for each task.
- Source large volumes of data from diverse data platforms into the Hadoop platform.
- Install, configure, and maintain the enterprise Hadoop environment.
- Build distributed, reliable, and scalable data pipelines to ingest and process data in real-time.
- Fetch impression streams, transaction behavior data, clickstream data, and other unstructured data.
- Define Hadoop job flows and manage Hadoop jobs with a scheduler such as Oozie.
- Review and manage Hadoop log files.
- Design and implement Hive schemas and HBase column family schemas within HDFS.
- Assign schemas and create Hive tables with suitable formats and compression techniques.
- Develop efficient Pig and Hive scripts that join datasets using a variety of techniques.
- Apply HDFS file formats and structures such as Parquet and Avro to speed up analytics.
- Fine-tune Hadoop applications for high performance and throughput.
- Troubleshoot and debug runtime issues across the Hadoop ecosystem.
- Develop and document technical design specifications.
- Design and develop data integration solutions (batch and real-time) to support enterprise data platforms including Hadoop, RDBMS, and NoSQL.
- Implement Spark Streaming architecture and integrate it with JMS queues via custom receivers.
- Develop and deploy API services using Java and Spring.
- Create Hive and HBase data source connections in Spring.
- Implement multi-threading in Java/Scala.
- Lead technical meetings as required, conveying ideas clearly and tailoring communication to the audience (technical and non-technical).
- Mentor Big Data Developers on best practices and strategic development.
Qualified applicants must also have experience with the following:
- Hadoop/Big Data Ecosystem and Architecture
- Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, and MapReduce
- Programming experience in Java, Scala, Python, and Shell Scripting
- SQL and Data modeling
Required:
Master’s degree in Computer Science, Applied Computer Science, Engineering, or any related field of study, plus at least two (2) years of experience in the job offered or in any related position(s).
What can we offer?
- 100% premium-paid medical insurance for employee-only coverage
- 100% premium-paid dental insurance for employee-only coverage
- Paid Time Off – 15 days/year for 0-5 years of service, 20 days/year after 5 years of service
- Paid holidays
- 401k plan
- Educational assistance, with select certifications and professional memberships paid
- Premium Only Section 125 Plan for dependent insurance
- Employee life insurance, AD&D, long-term disability, and long-term care insurance provided
- Group vision coverage and supplemental life insurance available to employees and dependents
- A healthy change of pace from your average “cube farm.” Our staff is our company’s pride, and we do our best to provide a rewarding and creative environment.
- A focus on you as an employee of our company and an expert in your field!
Employment is contingent upon the successful completion of a background check.
Unidev is an equal opportunity employer.
Position is located in St. Louis, MO. Hybrid work schedules offered.
Must be legally authorized to work in the United States without sponsorship.
To apply, submit your resume to Human Resources at recruiter@unidev.com.