Company Background

Our client is the market leader in contact center robotic process automation (RPA). Their patented platform automates currently manual management processes and enables contact centers to reduce costs and increase employee engagement in industries including financial services, telecommunications, insurance, and healthcare.

The company offers the only contact center automation solution that increases business call centers' efficiency while improving both agent engagement and the customer experience. Their technology acts as an automated manager for contact center agents, with rules triggered in real time by their customers' actual service-level conditions. The result is increased productivity and a highly measurable return on investment, typically a 2x payback in the first year and a 3-5x return in subsequent years.

The company powers over 1 billion automated actions annually and has saved its customers over $140 million in the past two years.

Project Description

The project aims to create a data integration platform that will become a crucial driver of the company's further growth by giving multiple consumers human-friendly, timely access to trustworthy data.

The platform will feed both OLAP and OLTP systems as well as the Data Science group.

You’ll do

Work directly with the client's CTO and Chief Architect to:

  • provide a comprehensive data-oriented description of the company's domain;
  • assess existing data flows and data management practices;
  • architect and design the data integration platform solution.

Beyond that, you will:

  • build and lead a team of data engineers;
  • collaborate closely with different groups of stakeholders to identify their needs and constraints in order to organically expand the capabilities of the data integration platform.

Technologies

The technology stack of the data integration platform is open for discussion.

So far, the client's data center runs on bare-metal Hadoop, MariaDB, and MS SQL Server, using NiFi and Flink for data processing, ActiveMQ for streaming, and Arcadia Data for reporting.
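
By way of illustration only, and not as part of the role description, the sketch below shows a minimal Apache Flink DataStream job in Java that keeps a running call count per queue; the class name, queue names, and event values are hypothetical stand-ins for data that would, in practice, arrive from ActiveMQ or NiFi.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CallVolumeSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Hypothetical (queue, count) events; in the client's stack these
            // would be consumed from ActiveMQ or delivered via NiFi.
            DataStream<Tuple2<String, Integer>> events = env.fromElements(
                    Tuple2.of("sales", 1),
                    Tuple2.of("support", 1),
                    Tuple2.of("sales", 1));

            // Running total of calls per queue, printed to stdout.
            events.keyBy(value -> value.f0)
                  .sum(1)
                  .print();

            env.execute("call-volume-sketch");
        }
    }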


Job Requirements

Required skills

  • Practical experience designing complex, high-performance data processing and analytics pipelines;
  • Proficiency in Apache Flink or Spark;
  • Proficiency in SQL and one of the following programming languages: Scala, Python, or Java;
  • Strong analytical and problem-solving skills;
  • Ability to work with cross-functional teams in a highly collaborative manner;
  • Excellent communication and presentation skills;

Preferred Skills

  • Background in data mining and statistical analysis;
Write to us, and we will be sure to reply!
Apply via: linkedin.com hh.ru
