BA Hiring week 25th-29th October 2021

Scala Developer (Media Measurement sphere)

Company Background

Our client is a company pioneering the future of cross-platform media measurement, arming organizations with the insights they need to make decisions with confidence. Central to this aim are its people, who work together to simplify the complex on behalf of clients and partners.


It is a trusted partner for planning, transacting and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top and theatrical viewership intelligence with advanced audience insights, its platform allows media buyers and sellers to quantify audiences' multiscreen behavior and make business decisions with confidence.

Project Description

You’ll be responsible for building a next-generation data delivery platform. The API drives business growth by providing access to television ratings metrics for a broad range of clients and products, including industry-leading ad agencies, national television networks and others. As a member of this fast-moving team you’ll have a large impact on the evolution and adoption of the API as well as on the success of the business. It’s worth mentioning that the company processes and stores dozens of petabytes of data coming from the Web, and its current infrastructure handles 15 billion requests per day.


The service consists of two applications connected by a queue. The first is an API web service that accepts user requests (with a JSON payload), validates them and pushes them to the job queue. The second is the data engine, implemented as a long-running Spark application deployed on an EMR cluster. The data engine pulls job requests from the queue, builds Spark jobs from the JSON payloads and runs them on the cluster. When a job is done, the result is pushed back to the web service via the queue and returned to the client. It is important to note that all the data the engine uses is already preprocessed, pre-aggregated and stored as Parquet files.
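The two-application flow above can be sketched in Scala. This is a minimal illustration only, with in-memory queues standing in for the real message broker and a stub worker standing in for the Spark data engine; all names (JobRequest, DeliveryPlatformSketch, etc.) are hypothetical, not the client's actual API.

```scala
import scala.collection.mutable

// Illustrative request/result shapes; the real payloads are JSON.
final case class JobRequest(id: String, metric: String, dateRange: (String, String))
final case class JobResult(id: String, status: String, payload: String)

object DeliveryPlatformSketch {
  val jobQueue    = mutable.Queue.empty[JobRequest]
  val resultQueue = mutable.Queue.empty[JobResult]

  // API side: validate the incoming request, then enqueue it as a job.
  def submit(req: JobRequest): Either[String, String] =
    if (req.metric.isEmpty) Left("metric must not be empty")
    else { jobQueue.enqueue(req); Right(req.id) }

  // Data-engine side: pull a job, "run" it, push the result back.
  // In the real system this step would build and run a Spark job over
  // pre-aggregated Parquet files on the EMR cluster.
  def processOne(): Option[JobResult] =
    if (jobQueue.isEmpty) None
    else {
      val req = jobQueue.dequeue()
      val res = JobResult(req.id, "done", s"ratings for ${req.metric}")
      resultQueue.enqueue(res)
      Some(res)
    }
}
```

Decoupling the web service from the engine via a queue lets the API stay responsive while long-running Spark jobs complete asynchronously.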

What You’ll Do
  • Work within an agile team to develop new endpoints and enhancements to the API;
  • Recommend and implement creative solutions for improving query response times for large data sets;
  • Increase scalability and maintainability to support rapid usage growth;
  • Collaborate openly with stakeholders and clients to continuously improve the product and increase adoption;
Technologies: Big Data, Apache Spark
Job Requirements
  • Experience in the design and development of web-based APIs;
  • Experience with functional languages like Scala or Haskell is preferred (but not required);
  • Experience building, deploying and managing applications in AWS is preferred;
  • Experience using Apache Spark is preferred;
  • Strong SQL skills are nice to have;
  • Strong communication skills (written and verbal) along with a track record of success delivering large software projects;
  • Verbal and written English (at least Intermediate level);
  • Demonstrated knowledge of commonly used software engineering concepts, practices, and procedures;
What We Offer
  • 100+ technical and non-technical trainings to develop and expand your horizons;
  • Internal meetings and conferences (we like sharing our expertise);
  • English courses (both with Russian-speaking teachers and native speakers);
  • Free medical insurance in 80+ private clinics;
  • Sports activities (soccer, basketball, hockey, tennis, swimming, etc.) and corporate events (corporate parties, team building, happy hours, canoeing, etc.);
  • Free hot lunches in the office;
Write to us!
We will definitely reply!