
Online Talk: Demystifying Stream Processing with Apache Kafka


Demystifying Stream Processing with Apache Kafka - 4 out of 6

Thursday, November 17th, 2016

10:00am PT | 1:00pm ET | 7:00pm CET

Recording Time: 49:22

At its core, stream processing is simple: read data in, process it, and maybe emit some data out. The core features needed for stream processing include scalability and parallelism through data partitioning, fault tolerance and event-processing order guarantees, support for stateful stream processing, and handy primitives such as windowing.
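Two of the primitives named above, windowing and per-window state, can be sketched in a few lines of plain Java. This is a conceptual illustration only, not the Kafka Streams API; the class and method names are invented for this example:

```java
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowSketch {
    // Assigns an event timestamp (ms) to the start of its tumbling window.
    static long windowStart(long timestampMs, long windowSizeMs) {
        return timestampMs - (timestampMs % windowSizeMs);
    }

    // Counts events per window, keeping the per-window state in a sorted map.
    static Map<Long, Long> countPerWindow(long[] eventTimestamps, long windowSizeMs) {
        Map<Long, Long> counts = new TreeMap<>();
        for (long ts : eventTimestamps) {
            counts.merge(windowStart(ts, windowSizeMs), 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        long minute = 60_000L; // 1-minute tumbling windows
        long[] events = {5_000L, 30_000L, 61_000L, 119_000L, 125_000L};
        // Events at 5s and 30s land in window 0; 61s and 119s in window 60000; 125s in window 120000.
        System.out.println(countPerWindow(events, minute)); // {0=2, 60000=2, 120000=1}
    }
}
```

In a real stream processor the input is unbounded and partitioned across workers, and the window state must survive failures; Kafka Streams handles that partitioning and fault tolerance for you, which is exactly what the talk covers.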

This talk will introduce Kafka Streams and help you understand how to map practical data problems to stream processing and how to write applications that process streams of data at scale using Kafka Streams. It will also cover what stream processing is, why you should care about it, where Apache Kafka and Kafka Streams fit in, the hard parts of stream processing, and how Kafka Streams solves those problems, along with a concrete example of how these ideas tie together in Kafka Streams and in the big picture of your data center.

This is talk 4 out of 6 from the Kafka Talk Series.

Presenter

Neha Narkhede

Neha Narkhede is a co-founder of Confluent, the company behind the popular Apache Kafka messaging system. Before founding Confluent, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn's streaming infrastructure built on top of Apache Kafka and Apache Samza. She is one of the initial authors of Apache Kafka and a committer and PMC member on the project.