Senior Data Engineer

Cary, North Carolina, United States

Overview

For over 25 years, Epic Games has been making award-winning games and game engine technology that empowers others to create visually stunning games and 3D content that bring environments to life like never before. Epic's award-winning Unreal Engine technology not only gives game developers the ability to build high-fidelity, interactive experiences for PC, console, mobile, and VR, but is also embraced by content creators across a variety of industries, such as media and entertainment, automotive, and architectural design. As we continue to build our Engine technology and develop remarkable games, we strive to build teams of world-class talent.


We think of “Epic” as the collective effort of smart, talented, passionate people who are dedicated to building the highest-quality experiences possible for our developer and player communities. If you’d like to be part of something Epic while creating amazing games or incredible technology used across a multitude of industries, we’d love to hear from you!


The Senior Data Engineer will help build Epic’s data collection systems, processing pipelines, and data warehouses, and will help architect, build, and maintain an optimized, highly available data pipeline for deeper analysis and reporting by a team of analysts.

The person in this role will be responsible for the following:

  • Assisting in the architecture, design, and implementation of big data platforms operating on Amazon Web Services
  • Tuning the performance of large compute clusters
  • Working with both batch and near-real-time data sources
  • Creating SaaS applications
  • Streamlining maintenance tasks as they pertain to the platform

The ideal candidate will have a mix of the qualifications below:

  • Skilled in managing multiple Hadoop clusters via the command line
  • Experience building or integrating open source applications (e.g., Lipstick, Inviso, Genie) into Hadoop environments
  • Knowledge of Apache Spark, Apache Hive, and Apache Pig
  • Expertise in one or more programming languages (Java or Python)
  • Familiarity with basic database administration tasks on a major RDBMS platform such as Oracle
  • Experience with enterprise scheduler software (Control-M, Automic/UC4, Tivoli Workload Scheduler, etc.)
  • Expertise using SQL
  • Experience working with complex data types (JSON, XML)
  • Comfort with software development practices such as unit testing, integration testing, and performance tuning
  • Experience with Amazon Web Services, specifically Elastic MapReduce (EMR) and EC2

This is going to be Epic!

#LI2