Big Data Developer (Food Delivery sphere)

Company Background

Our client is one of the leading online and mobile food ordering companies in the US.

Project Description

Our client is enhancing its big data ETL/analytics platform with streaming pipelines to generate complex insights in real time. The platform is currently built around a few core technologies:


  • AWS EMR, Hive, Cassandra and S3 for data storage;
  • Apache Spark, Spark Streaming and Python for data processing;
  • Presto query engine;
  • Azkaban for workflow management;


Our team is responsible for designing, implementing, monitoring and maintaining data pipelines powering fundamental datasets used across multiple business departments.

Job Requirements
  • Write batch and streaming data processing pipelines, ETL processes, automated workflows;
  • Work with high volumes of data and distributed systems using technologies such as Spark, Spark Streaming, Hive, AWS EMR, AWS S3, Azkaban, Presto, etc.;
  • Analyze data to measure the impact of data schema changes and use the findings to iterate on improvements;
  • Take data-oriented projects from start to finish, including requirements, scope, architecture, development, releases and maintenance;
  • Build auditing, testing and monitoring tools for data pipelines and data job executions.
Write to us.
We will definitely respond!