
Announcing Tutorials for Apache Kafka

We’re excited to announce Tutorials for Apache Kafka®, a new area of our website for learning event streaming. Kafka Tutorials is a collection of common event streaming use cases, with each tutorial featuring an example scenario and several complete code solutions. It’s the fastest way to learn how to use Kafka with confidence.

We’re building this because we know that event streaming is a radically different way of thinking. It causes us to rethink the way we architect our programs and systems. Although it has heaps of benefits (immutability, information sharing, and fault tolerance, to name a few), it can be surprisingly difficult for newcomers to learn.

It doesn’t need to be that way.

For beginners, Kafka Tutorials reveals the “shape” of the problems that event streaming can solve, making it easier to recognize the kinds of situations where you might reach for it. Moreover, by following each tutorial’s steps, you reliably go from zero to working code.

For the experienced, it’s a crucial reference guide that makes your work easier. Easily look up how to join a stream and a table together when you’re rusty, or quickly recall how to merge discrete streams together. Over time, we’ll introduce more advanced material that makes use of the entire stack.
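To make that concrete, here is one way to express both patterns with Kafka Streams. This is a minimal sketch rather than a tutorial from the site: the topic names (“orders”, “customers”, “web-orders”, “mobile-orders”), the plain string values, and the local broker address are all illustrative assumptions.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class JoinAndMergeExample {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // A stream of order events and a table holding the latest state of each customer,
        // both keyed by customer ID (hypothetical topics).
        KStream<String, String> orders = builder.stream("orders");
        KTable<String, String> customers = builder.table("customers");

        // Stream-table join: enrich each order with the current customer record that
        // shares its key.
        orders.join(customers, (order, customer) -> order + " placed by " + customer)
              .to("enriched-orders");

        // Merging discrete streams: combine two order sources into a single stream.
        KStream<String, String> webOrders = builder.stream("web-orders");
        KStream<String, String> mobileOrders = builder.stream("mobile-orders");
        webOrders.merge(mobileOrders).to("all-orders");

        // Assumes a broker running locally; adjust for your environment.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-and-merge-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Each tutorial on the site walks through the same kind of operation step by step, with complete, runnable solutions you can copy and adapt.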

Although it’s early, we’re building Kafka Tutorials for the long term. That’s why we’ve intelligently engineered the site to use a unique flavor of literate programming. Each tutorial that you see on a page is backed by a single data structure. We’ve built programs that understand this shared structure—namely one to render the page, and another to test the content of the page. That means that when we make changes to each tutorial, they are automatically validated on a continuous integration system to ensure that we’re giving you actual code that works.

[Image: the data structure backing each tutorial]
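For illustration only, here is a sketch of that idea. This is not the site’s actual format (which is defined in the GitHub repository mentioned below); it just shows how a single shared structure can drive both the rendering and the testing of a tutorial.

```java
import java.util.List;

public class LiterateTutorialSketch {

    // Hypothetical shape: each step pairs the prose a reader sees with the code behind it
    // and the output that code is expected to produce.
    record Step(String prose, String code, String expectedOutput) {}

    record Tutorial(String title, String scenario, List<Step> steps) {}

    // Program #1: render the shared structure into a page.
    static String renderPage(Tutorial tutorial) {
        StringBuilder page = new StringBuilder(tutorial.title()).append("\n\n");
        for (Step step : tutorial.steps()) {
            page.append(step.prose()).append("\n\n").append(step.code()).append("\n\n");
        }
        return page.toString();
    }

    // Program #2: validate the same structure on CI by running each step's code and
    // comparing the result to what the page promises.
    static boolean validate(Tutorial tutorial) {
        return tutorial.steps().stream()
                .allMatch(step -> run(step.code()).equals(step.expectedOutput()));
    }

    // Placeholder: a real harness would execute the step against a Kafka installation.
    private static String run(String code) {
        return "";
    }
}
```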

Lastly, Kafka Tutorials is a community-driven site. Its source code is available on GitHub. If you have a great idea for a new tutorial or can make an existing one better, we’d love your contributions.

Happy learning!

Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided.

Michael Drogalis is Confluent’s stream processing product lead, responsible for the direction and strategy behind all things compute related. Before joining Confluent, Michael served as the CEO of Distributed Masonry, a software startup that built a streaming-native data warehouse. He is also the author of several popular open source projects, most notably the Onyx Platform.
