Confluent 3.2 brings the latest enhancements for running Apache Kafka at scale.
In this weekly tech talk series, we’ll discuss and demonstrate the latest advancements available in Apache Kafka, and show how to apply them to deploy a production-ready streaming platform at scale with Confluent.
Fill out the form to be sent the recordings.
Speaker: Clarke Patterson, Senior Director, Product Marketing
With the introduction of the Connect and Streams APIs in 2016, Apache Kafka is becoming the de facto solution for anyone looking to build a streaming platform. The community continues to add capabilities to make it the complete solution for streaming data.
Join us as we review the latest additions in Apache Kafka 0.10.2. In addition, we’ll cover what’s new in Confluent Enterprise 3.2 that makes it possible to run Kafka at scale.
Speaker: Nick Dearden, Director, Engineering and Product
It’s 3 am. Do you know how your Kafka cluster is doing?
With over 150 metrics to think about, operating a Kafka cluster can be daunting, particularly as a deployment grows. Confluent Control Center is the only complete monitoring and administration product for Apache Kafka, designed specifically to make the Kafka operator’s life easier.
Join Confluent as we cover how Control Center is used to simplify deployment and operations and to ensure message delivery.
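For a sense of how Control Center’s stream monitoring plugs into existing applications, here is a minimal sketch of a producer configured with Confluent’s monitoring interceptor. The broker address and topic name are illustrative, and the example assumes the Confluent monitoring interceptor jar is on the classpath.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MonitoredProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The monitoring interceptor reports delivery statistics that
        // Control Center uses to verify end-to-end message delivery.
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
                "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "hello")); // illustrative topic
        }
    }
}
```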
Speaker: Ewen Cheslack-Postava, Engineer, Confluent
In streaming workloads, data produced at the source is often not useful further down the pipeline, or it requires some transformation to get it into usable shape. Similarly, where sensitive data is concerned, filtering topics helps ensure that the wrong data doesn’t get to the wrong place.
The newest release of Apache Kafka now offers the ability to apply transformations to individual messages, making it possible to implement finer-grained transformations customized to your unique needs. In this session we’ll talk about the new single message transform capabilities, how to use them to implement things like data masking and advanced partitioning, and when you’ll need to use more complex tools like the Kafka Streams API instead.
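To make the mechanism concrete, here is a minimal, hypothetical sketch of a custom single message transform that masks one field of a record’s value. The class name and the `field.name` configuration key are inventions for illustration (Kafka Connect also ships built-in transforms such as MaskField), so treat this as a sketch rather than production code.

```java
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.transforms.Transformation;

public class MaskFieldValue<R extends ConnectRecord<R>> implements Transformation<R> {

    private String fieldName;

    @Override
    public void configure(Map<String, ?> configs) {
        // Name of the field to mask, taken from the connector configuration.
        fieldName = (String) configs.get("field.name");
    }

    @Override
    public R apply(R record) {
        Object value = record.value();
        if (value instanceof Struct) {
            Struct struct = (Struct) value;
            // Copy the value and blank out the sensitive field.
            Struct masked = new Struct(struct.schema());
            for (Field f : struct.schema().fields()) {
                masked.put(f, f.name().equals(fieldName) ? "****" : struct.get(f));
            }
            return record.newRecord(record.topic(), record.kafkaPartition(),
                    record.keySchema(), record.key(),
                    record.valueSchema(), masked, record.timestamp());
        }
        return record; // non-struct values pass through untouched
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("field.name", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Field to mask");
    }

    @Override
    public void close() {
    }
}
```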
Speaker: Michael Noll, Product Manager, Confluent
For many industries, the need to group related events based on a period of activity or inactivity is key. Advertising businesses and content producers are just two examples of where session windows can be used to better understand user behavior.
While such sessionization has been possible in Apache Kafka up to this point, implementing it has been rather complex and required low-level APIs. In the most recent release of Kafka, however, new capabilities have been added that make session windows much easier to implement.
In this online talk, we’ll introduce the concept of a session window, talk about common use cases, and walk through how Apache Kafka can be used for session-oriented use cases.
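As a flavor of what this looks like in practice, here is a minimal sketch of counting events per user session with the Kafka Streams DSL from the 0.10.2 era. The topic name, application id, and 30-minute inactivity gap are assumptions, and the exact API has evolved in later releases.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SessionizedClicks {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "clickstream-sessions"); // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        // Records are keyed by user id; the topic name is illustrative.
        KStream<String, String> clicks = builder.stream("user-clicks");

        // Count events per user session, where a session closes after
        // 30 minutes of inactivity for that user.
        KTable<Windowed<String>, Long> sessionCounts = clicks
                .groupByKey()
                .count(SessionWindows.with(TimeUnit.MINUTES.toMillis(30)), "clicks-per-session");

        sessionCounts.toStream().print();

        new KafkaStreams(builder, props).start();
    }
}
```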
Speaker: Gwen Shapira, Product Manager, Confluent
With the rapid increase of Apache Kafka use within organizations, issues of data governance and data quality take center stage. When more and more disparate departments and teams depend on the data in Apache Kafka, it’s important to provide a way to make sure "bad data" does not make its way into critical topics. Every organization that uses Kafka at large scale realizes it needs a way to deliver these guarantees.
In this talk, Kafka committer Gwen Shapira will review the benefits of a schema registry for large-scale Kafka deployments and give a high-level overview of how the Confluent Schema Registry is being used in enterprise architectures across industries.
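As a rough illustration of how a schema registry enforces these guarantees at produce time, here is a hedged sketch of a Java producer using Confluent’s Avro serializer; the schema, topic name, and registry URL are all illustrative.

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // The Avro serializer registers and validates schemas against the
        // Schema Registry, so records that violate the schema are rejected
        // before they ever reach the topic.
        props.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // Illustrative Avro schema for the value.
        String schemaJson = "{\"type\":\"record\",\"name\":\"Payment\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"amount\",\"type\":\"double\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        GenericRecord payment = new GenericData.Record(schema);
        payment.put("id", "txn-1");
        payment.put("amount", 42.0);

        try (KafkaProducer<Object, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("payments", "txn-1", payment)); // illustrative topic
        }
    }
}
```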