
Tips for Apache Flink on Kafka

Apache Flink’s mission is simple: compute over streams of data, which is why combining it with Apache Kafka makes a lot of sense. Kafka is the solid, reliable, scalable backbone, and Flink is the engine on top. In 10 minutes you’ll learn the basics of running Flink over Kafka: starting by defining the types of connectors, we’ll explore how to work with various data formats, use pre-defined schemas where appropriate, and store the pipeline output in standard or compacted topics as needed. Finally, we’ll extend the picture by bringing in additional sources of data and demonstrating how to join them in streaming mode. If you’re into streaming and want to understand how to define data pipelines using Flink, one of the fastest-growing engines on the market, this session is for you.
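As a taste of what the session covers, here is a minimal Flink SQL sketch of that shape of pipeline: a Kafka source table with a declared schema and format, and a compacted output written via the `upsert-kafka` connector, which keeps the latest row per key just like a log-compacted topic. Topic names, field names, and the bootstrap server address are hypothetical; the connector options follow the standard Flink Kafka connector documentation.

```sql
-- Source: a regular Kafka topic read as an append-only stream.
-- Topic/field names and 'localhost:9092' are illustrative assumptions.
CREATE TABLE orders (
  order_id STRING,
  customer_id STRING,
  amount DECIMAL(10, 2),
  order_time TIMESTAMP(3),
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'   -- could also be 'avro' or 'avro-confluent' with a schema registry
);

-- Sink: 'upsert-kafka' emits one latest value per primary key,
-- which pairs naturally with a log-compacted Kafka topic.
CREATE TABLE customer_totals (
  customer_id STRING,
  total_amount DECIMAL(10, 2),
  PRIMARY KEY (customer_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'customer-totals',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

-- A continuously updating aggregation over the stream.
INSERT INTO customer_totals
SELECT customer_id, SUM(amount)
FROM orders
GROUP BY customer_id;
```

Joining in a second source (as the talk demonstrates) is then a matter of declaring another Kafka-backed table and writing an ordinary `JOIN` in the `INSERT` query.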

Presenter

Olena Babenko

Olena is a senior software and data engineer, born in Ukraine and now living in Finland. For most of her career she has worked at large companies such as Yandex and Zalando, so she knows first-hand that the problem of "too much data" is real. This is why she has been a big fan of Kafka and Flink streaming for almost four years. She believes that sharing knowledge is a win-win for both the audience and the speaker.