Our toolchain democratizes access to data and makes it painless to run experiments that establish cause and effect. The team focuses on the complete data life cycle, ensuring that any data leaving Kiwi.com is of the highest quality.

If you are interested in taking our data loads and collectors to the next level, covering both batch and real-time processing of our data routines, you are the ONE we are looking for! If you love to experiment and build on top of technologies like Airflow, custom Python apps, or anything else from the open-source world, come and see us!

What are examples of work that Data Engineers from the Data Provisioning & Engineering team have done at Kiwi.com?

Data workflow management: To manage our data loads for the Analytics world, we use Apache Airflow. Airflow schedules data-related workflows with a code-as-configuration model and a web front end; with it we drive the data routines that feed our data provisioning customers.
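The code-as-configuration idea can be sketched without Airflow itself: tasks and their dependencies are declared as plain Python data and executed in dependency order. This is a toy runner with made-up task names, not Airflow's real DAG/operator API or any actual Kiwi.com pipeline:

```python
# Toy sketch of "code-as-configuration" workflow scheduling (illustrative
# only; real Airflow uses airflow.DAG and operator classes).
from graphlib import TopologicalSorter

def extract():
    return ["prg-bcn", "brq-lhr"]          # pretend source rows

def transform(rows):
    return [r.upper() for r in rows]       # pretend cleaning step

def load(rows):
    print(f"loaded {len(rows)} rows")      # pretend sink
    return len(rows)

# The "DAG" is plain data: each task names its upstream dependencies.
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
funcs = {"extract": extract, "transform": transform, "load": load}

def run():
    """Execute tasks in topological order, passing results downstream."""
    results, order = {}, []
    for task in TopologicalSorter(deps).static_order():
        upstream = [results[u] for u in sorted(deps[task])]
        results[task] = funcs[task](*upstream)
        order.append(task)
    return order

print(run())  # tasks execute as extract -> transform -> load
```

In real Airflow the same wiring is expressed with operators and `>>` dependencies, plus scheduling, retries, and the web UI on top.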

Real-time streaming infrastructure: To enable our analytics teams to move quickly, getting accurate data with minimal delay is a core focus of Data Provisioning & Engineering. Currently, we are building out real-time infrastructure that allows for easy development of streaming applications, including anomaly detection and forecasting.
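One building block of such streaming applications can be sketched in a few lines: flagging anomalies as values far from a rolling mean. The window size, threshold, and data here are hypothetical, illustrating the idea rather than the production pipeline:

```python
# Toy streaming anomaly detector: flag values more than n_sigma standard
# deviations from a rolling mean (illustrative only).
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=5, n_sigma=3.0):
    """Yield (value, is_anomaly) pairs using a rolling mean/stdev window."""
    recent = deque(maxlen=window)
    for x in stream:
        if len(recent) >= 2:
            mu, sigma = mean(recent), stdev(recent)
            is_anomaly = sigma > 0 and abs(x - mu) > n_sigma * sigma
        else:
            is_anomaly = False   # too little history to judge
        yield x, is_anomaly
        recent.append(x)

values = [10, 11, 10, 12, 11, 100, 11, 10]
flags = [x for x, bad in detect_anomalies(values) if bad]
print(flags)  # the spike at 100 is flagged
```

A production system would run this kind of logic over a stream from Kafka or similar, with more robust statistics, but the per-event shape is the same.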

Interactive dimensional analysis: Our data analysts have a strong need to query data and compute aggregates on various dimensional cuts on a “yesterday was too late” timescale. To address this, we are building a query tool stack that lets users interactively slice and dice large datasets.
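A “dimensional cut” is just an aggregate grouped by a chosen combination of dimensions. This tiny example (hypothetical table and columns, in-memory SQLite rather than our actual query stack) shows the same slice-and-dice pattern at toy scale:

```python
# Illustrative only: aggregating a fact table along chosen dimensions
# with GROUP BY (hypothetical schema, not Kiwi.com's real data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (market TEXT, cabin TEXT, price REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [("CZ", "economy", 120.0), ("CZ", "business", 540.0),
     ("ES", "economy", 95.0), ("ES", "economy", 110.0)],
)

def slice_by(*dims):
    """Total price aggregated along any combination of dimensions."""
    cols = ", ".join(dims)
    return conn.execute(
        f"SELECT {cols}, SUM(price) FROM bookings "
        f"GROUP BY {cols} ORDER BY {cols}"
    ).fetchall()

print(slice_by("market"))           # coarse cut: totals per market
print(slice_by("market", "cabin"))  # finer cut: market x cabin
```

An interactive query tool does the same thing over billions of rows, with the user picking the dimensions instead of hard-coding them.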

What will you do?

  • Develop, monitor, and support our data workflow management environments and ELT/ETL tooling as a service, as well as decommission services and tools that are no longer used, handling the data associated with them appropriately
  • Provide continuous support for data workflow management and ETL jobs across our data infrastructure services; maintain and share all relevant information on the current infrastructure and tools
  • Educate colleagues on the current tooling and data used within the data provisioning stack, making access easier for anyone in the company
  • Regularly update and clearly communicate the team's achievements and the progress of its main projects

What do we expect?

  • 2+ years of full-time, industry experience
  • Experience with and interest in technologies like Airflow, Postgres, Redis/Kafka, or Presto
  • Working knowledge of relational databases and query authoring (SQL)
  • Experience with batch and real-time data processing routines
  • Strong coding skills in Python (preferred) or Ruby
  • Rigor in high code quality, automated testing, and other engineering best practices
  • Experience operating robust distributed systems in the cloud (AWS, Google Cloud) is a strong plus
  • BS/MS in Computer Science or a related field (ideal)

Things we offer that you might love about us

  • Meal vouchers, flexible benefits scheme (contribution to leisure time activities), and a free Multisport card
  • Quarterly financial bonuses dependent on overall company performance
  • 3 sick days per year and optional VIP Medical Care
  • Annual flight credit vouchers
  • A very friendly work environment where dogs are welcome, offering free refreshments, a gym, fitness courses, a sauna, and relaxing zones
  • We are a great team of young, passionate, and fun-loving people you will love working with. We look forward to seeing you at our team-building events and parties.

 

Recruiter

Monika Kavická

Your manager to be

Michael Štencl

Brno, Czech Republic | Full-Time
Barcelona, Spain | Full-Time
Belgrade, Serbia | Full-Time