In this post, the second in the Kafka Producer and Consumer Internals Series, we follow our brave hero—a well-formed produce request—as it makes its way to the broker to be processed and have its data stored on the cluster.
The beauty of Kafka as a technology is that it can do a lot with little effort on your part. In effect, it’s a black box. But what if you need to see into the black box to debug something? This post shows what the producer does behind the scenes to help prepare your raw event data for the broker.
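To set the stage, here is a minimal sketch of the kind of producer code that kicks all of this off. The broker address (localhost:9092) and the topic name ("events") are assumptions for illustration; the point is that a single send() call hides the serialization, partitioning, and batching work the producer performs before anything reaches the broker.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        // Minimal configuration: where to find the cluster and how to serialize keys/values.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // One send() call: behind the scenes the producer serializes the record,
            // picks a partition, and batches it into a produce request for the broker.
            producer.send(new ProducerRecord<>("events", "user-123", "page_view"));
        }
    }
}
```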