Welcome to the first edition of Log Compaction, a monthly digest of highlights in the Apache Kafka and stream processing community. Today’s edition covers the highlights from July and early August 2015. Got a newsworthy item? Let us know.
In this post, the second in the Kafka Producer and Consumer Internals Series, we follow our brave hero—a well-formed produce request—as it makes its way to the broker to be processed and have its data stored on the cluster.
The beauty of Kafka as a technology is that it can do a lot with little effort on your part. In effect, it’s a black box. But what if you need to see into the black box to debug something? This post shows what the producer does behind the scenes to help prepare your raw event data for the broker.
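To make that preparation a little more concrete, here is a minimal sketch of a Java producer; the broker address, topic name, and record contents are assumptions for illustration. It shows the pieces the producer handles behind the scenes: serializing keys and values to bytes, choosing a partition from the key, batching records, and reporting back where the broker stored them.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and topic ("events") are hypothetical for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The producer serializes keys and values to bytes before batching them.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Records headed for the same partition are accumulated into batches
        // before being sent to the broker as a produce request.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // By default, the record key determines the target partition.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("events", "user-42", "page_view");
            producer.send(record, (metadata, exception) -> {
                if (exception == null) {
                    System.out.printf("Stored at partition %d, offset %d%n",
                        metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```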