About Kaizen Technologies Inc.

Kaizen Technologies Inc. is a staffing and recruiting company.


Big Data Engineer

2018-08-10 | Pleasanton, CA | DOE | 6-Month Contract


Description

• Participate in technical planning and requirements gathering phases, including design, coding, testing, troubleshooting, and documentation of big-data-oriented software applications. Responsible for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business's operational and analytics databases, and for troubleshooting any existing issues.
• Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
• Define and build large-scale, near-real-time streaming data processing pipelines that enable faster, better, data-informed decision-making within the business.
• Work within a team of industry experts on cutting-edge big data technologies to develop solutions for deployment at massive scale.
• Keep up with industry trends and best practices in new and improved data engineering strategies that drive departmental performance, improve data governance across the business, promote informed decision-making, and ultimately improve overall business performance.
Required Qualifications:
• Bachelor's degree with a minimum of 6+ years' relevant experience, or equivalent.
• 6+ years of experience with Big Data & Analytics solutions such as Hadoop, MapReduce, Pig, Hive, Spark, Storm, AWS (EMR, Redshift, S3, etc.) or Azure (HDInsight, Data Lake design), and other technologies.
• 3+ years of experience with large-scale, fault-tolerant systems requiring scalability and high data throughput, using tools such as Kafka and Spark and NoSQL platforms such as HBase and MongoDB.
• 4+ years of experience building and managing hosted big data architectures, with toolkit familiarity in Hadoop with Oozie, Sqoop, Pig, Hive, HBase, Avro, Parquet, Spark, and NiFi.
• 2+ years of experience deploying Big Data solutions (Hadoop ecosystem) on cloud technologies such as Amazon AWS, Azure, etc.
• Experience working with Team Foundation Server, JIRA, GitHub, and other code management toolsets.
• Strong hands-on knowledge of solution languages such as Java, Scala, and Python.
• Operations support with a solid understanding of release, incident, and change management
• A self-starter and team player, capable of working with a team of strategists, architects, co-developers, and business/data analysts.

