Big Data Engineers

Wellington-based - Long-Term Project

Position Filled

Oozie, Big Data, Hadoop, IntelliJ, Scala… If these words are super familiar and you use them on a regular basis, we really should be talking.

Our client is in need of Big Data Engineers who have actual professional experience with Scala and Spark.

This is a long-term role that will fully utilise your skills and experience, offering exposure to a wide range of tasks, including:

  • Developing Spark applications to move raw data from the Data Lake to refined data layers within the Hadoop platform
  • Fine-tuning Spark applications
  • Troubleshooting and debugging Hadoop ecosystem components at runtime
  • Understanding data mappings, i.e. input-output transformations
  • Defining jobs and workflows in Oozie
  • Data analysis and troubleshooting
  • Creating Spark data pipelines to ingest and process semi-structured data
  • Maintaining and managing log files
  • Monitoring Spark jobs and taking the necessary actions when they fail
  • Managing Hive databases
  • Developing Hive scripts for data processing
  • Maintaining data security within the Big Data platform
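As a flavour of the first few bullets, a minimal Spark application in Scala that moves semi-structured raw data into a refined layer might look something like this sketch. All paths, column names and the table name are illustrative assumptions, not taken from the role:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Minimal sketch: ingest raw JSON from a data-lake path and write a
// refined Parquet layer. Paths and names below are hypothetical.
object RawToRefined {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("raw-to-refined")
      .enableHiveSupport()
      .getOrCreate()

    // Read semi-structured input from the raw zone of the Data Lake
    val raw = spark.read.json("/data/lake/raw/events/")

    // Example input-output transformation: drop malformed rows and
    // project only the columns the refined layer needs
    val refined = raw
      .filter(raw("event_id").isNotNull)
      .select("event_id", "event_time", "payload")

    // Persist to the refined layer as a Hive-managed Parquet table
    refined.write
      .mode(SaveMode.Overwrite)
      .format("parquet")
      .saveAsTable("refined.events")

    spark.stop()
  }
}
```

In practice the same pattern is packaged as a Maven project, tuned via executor and shuffle settings, and monitored for failures, which is where the remaining bullets come in.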

Essentially, we are looking for Engineers that have:

  • 10+ years of experience in building data warehouses
  • A minimum of 3 years of experience in developing Spark applications in Scala using the Spark SQL, DataFrame, Dataset and Spark Streaming APIs with functional programming
  • Experience in developing Maven-based projects in IntelliJ
  • Experience in scheduling jobs using Oozie
  • Experience in tuning Spark applications to use cluster resources optimally
  • Experience in troubleshooting Spark applications
  • A good understanding of the different Hadoop file formats and when to use each
  • Expert-level experience in writing complex SQL queries
  • Expert-level knowledge of Linux commands and shell scripting
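On the Oozie scheduling point, a workflow definition typically wraps the Spark application in a spark action. A minimal sketch follows; the workflow name, class, jar path and master setting are illustrative assumptions:

```xml
<workflow-app name="raw-to-refined-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="spark-etl"/>
  <action name="spark-etl">
    <spark xmlns="uri:oozie:spark-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <master>yarn-cluster</master>
      <name>raw-to-refined</name>
      <class>RawToRefined</class>
      <jar>${nameNode}/apps/etl/raw-to-refined.jar</jar>
    </spark>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Spark ETL failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

A workflow like this is usually paired with an Oozie coordinator that triggers it on a schedule or on data availability.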

A clear understanding of each component of the Hadoop ecosystem (HBase, Pig, Hive, Sqoop, Flume, YARN, etc.) would be a bonus, as would experience with:

  • Data modelling
  • VSTS & Jenkins
  • Administering a MapR cluster
  • Configuring Zeppelin notebooks

Get involved with this exciting project - register your interest via the link below.

E3 Recruit. We Get People.

According to E3 Recruit Limited recruitment procedures, successful applicants will be required to undergo a variety of screening processes including drug screening and criminal conviction checks.
Only applicants who are legally able to work in the country in which this role is based will be considered.
