Job Description

Posted on:

October 27, 2024

We create the software of the future. Join us! We are looking for an experienced data engineer to participate in knowledge-intensive international projects.

Tech Stack: Scala, Apache Spark, ClickHouse, HDFS, Apache Airflow, PostgreSQL, Apache Kafka, Apache Hive, Apache Iceberg

Required Skills and Experience:

  • Proficiency in Scala, or proficiency in Java with a willingness to quickly master Scala basics
  • Basic Linux command line skills
  • Experience with Spark and a solid understanding of its working principles and potential issues
  • Knowledge of database fundamentals and strong SQL expertise

Nice to Have:

  • Experience with Spark's DataFrame/Dataset API in Scala
  • Experience building and orchestrating ETL processes for Big Data
  • Optimizing Spark queries and configuring resource consumption
  • In-depth understanding of Spark working principles and configuration parameters
  • Experience with Zeppelin or Jupyter
  • Familiarity with ClickHouse or other analytical or NoSQL databases
  • Experience with Apache Airflow
  • Knowledge of Hadoop/HDFS, Parquet files, and Hive
  • Ability to work with GitLab CI
  • Basic knowledge of Python
  • English language proficiency at B1 level or higher

Responsibilities:

  • Building and maintaining ETL processes and solving a variety of business tasks involving the processing of large volumes of data with Spark and Scala
  • Optimizing data processing speed and system resource consumption
  • Identifying and resolving errors and anomalies in the resulting data
  • Writing medium-complexity SQL queries to analyze large volumes of data according to business requirements

We Offer:

  • Various projects for international clients on a modern technology stack
  • A friendly team and enjoyable working environment
  • Regular assessments and salary reviews
