Data Pipelines Made Simple with Apache Kafka

Watch on demand

Recording Time: 30:25

Presenter: Ewen Cheslack-Postava, Engineer, Apache Kafka Committer, Confluent

In streaming workloads, data produced at the source often isn't useful further down the pipeline, or it requires some transformation to get it into usable shape. Similarly, where sensitive data is concerned, filtering of topics helps ensure that data doesn't end up somewhere it shouldn't.
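As one illustration of that kind of filtering, Kafka Connect ships with built-in transforms such as `ReplaceField` that can strip fields from each record before it reaches a sink. A minimal sketch of a connector configuration (the connector name, topic, and field names here are hypothetical):

```json
{
  "name": "jdbc-sink-masked",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "customers",

    "transforms": "dropSensitive",
    "transforms.dropSensitive.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.dropSensitive.exclude": "ssn,credit_card"
  }
}
```

With this in place, the `ssn` and `credit_card` fields are removed from every record's value before the sink connector ever sees them; the exact configuration key for exclusion has varied across Kafka versions, so check the documentation for the release you run.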

The newest release of Apache Kafka now offers the ability to apply transformations to individual messages, making it possible to implement fine-grained transformations customized to your unique needs. In this session we’ll talk about the new single message transform capabilities, how to use them to implement things like data masking and advanced partitioning, and when you’ll need to use more complex tools like the Kafka Streams API instead.
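The data masking use case mentioned above can often be handled with the built-in `MaskField` transform, which replaces a field's value with a type-appropriate null equivalent. A hedged sketch of the relevant connector configuration fragment (field and transform names are illustrative):

```json
{
  "transforms": "maskPii",
  "transforms.maskPii.type": "org.apache.kafka.connect.transforms.MaskField$Value",
  "transforms.maskPii.fields": "email,phone"
}
```

When a single message transform isn't enough, such as joins, aggregations, or anything requiring state across messages, that's the point at which the Kafka Streams API becomes the better fit, as the session discusses.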