About Ambrosia Infotech Inc

Ambrosia Infotech Inc is a staffing and recruiting company.


Backend Java Developer (Hadoop)

2016-03-16 | San Francisco, CA | Market Rate | 12-Month Contract


Description

As part of our technology transformation, we have embarked on a journey to enable a data-driven decision culture at StubHub and have begun transitioning to, and innovating on, a Hadoop-ecosystem-based data platform that meets both our online and offline use-case challenges. We are looking for a SENIOR BIG DATA PLATFORM ENGINEER WITH JAVA EXPERTISE to help build our next-generation data platform. This highly motivated individual must be a self-starter with relevant hands-on experience. The candidate is expected to work in an agile environment with competing priorities and to learn new technologies as part of the delivery. This is an excellent opportunity for the right individual to have a significant impact on the organization. Specific responsibilities include:
NOTE: This position does NOT fall into any of the following categories:
• Analytics
• Data Warehousing, BI, ETL
• The data-consumer side of Big Data
• Typical Java engineering
• Maintenance engineering
Primary Skills:
• Language: Java (required), Python (desired)
• Framework: Spring
• Enterprise software development exposure (Eclipse, GitHub, test-driven development, and server-side framework programming)
• Big Data (internals): Hadoop, HDFS, Spark, Hive, Pig, Oozie, ZooKeeper, HBase, MongoDB
• Search: Elasticsearch, SolrCloud
• Database: Oracle or similar
Experience:
• Enterprise Software Design, Development and Testing.
• Outstanding hands-on object-oriented experience.
• Prior platform development.
• Open-source contribution is a huge plus.
• Big Data:
• Data Science, Machine Learning, Text Mining, and Natural Language Processing frameworks.
• Enterprise data processing: extract meaningful data from structured (RDBMS), text, and unstructured sources.
• Experience in writing, scheduling, debugging Pig, Hive, and Spark jobs at scale.
• Build high-volume, real-time data ingestion frameworks and automate ingestion of various data sources into Hadoop.
• Research, develop, optimize, and innovate on frameworks and related components for enterprise-scale data analysis and computation.
• Develop data validation frameworks and proactive monitoring solutions to detect data ingestion failures in the big data platform and apply appropriate remedies.
• Collaborate with people working on various technologies and ensure consistency for the data exposed through these different channels.
• Own the end-to-end development life cycle, delivering high-quality solutions/code, and evangelize test-driven development (tests, code coverage, etc.).
• 8+ years of demonstrable experience in requirements analysis, design, development, and testing of distributed, enterprise-class applications/platforms, with particular attention to scalability and high performance.
• Knowledge of and experience with RDBMS, object-relational (O-R) mapping, and distributed caching technologies.




