
Presentation

Understanding Kafka Produce and Fetch API calls for high-throughput applications

Kafka Summit Europe 2021

The data team at Cloudflare uses Kafka to process tens of petabytes of data per day. All of this data is moved using the two foundational Kafka API calls: Produce (API key 0) and Fetch (API key 1). Understanding the structure of these calls (and of the underlying RecordSet structure) is key to building high-throughput clients.
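For reference, the two calls are identified by fixed API key values in the Kafka protocol. The snippet below is a minimal, illustrative sketch of those identifiers; it is not code from Cloudflare or from any Kafka client library.

```java
// Minimal sketch of the two foundational Kafka API keys discussed in the talk.
// The numeric values 0 and 1 are fixed by the Kafka protocol specification.
public enum FoundationalApiKey {
    PRODUCE((short) 0), // writes RecordSets to partition leaders
    FETCH((short) 1);   // reads RecordSets back from brokers

    public final short id;

    FoundationalApiKey(short id) {
        this.id = id;
    }
}
```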

The talk describes the basics of the Kafka wire protocol (API keys, correlation IDs) and the structure of the Produce and Fetch calls. It shows how the asynchronous nature of the wire protocol can combine with the structure of the Produce and Fetch calls to increase latency and reduce client throughput, and offers a solution based on synchronous single-partition calls.
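To make the wire-protocol basics concrete, the sketch below lays out the common request header fields (API key, API version, correlation ID, client ID) and one plausible way to encode them. The field order follows the Kafka protocol documentation for the v1 request header; the class itself is an illustration, not code from any real client.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Rough sketch of the common Kafka request header: every request carries an
// API key, an API version, and a client-chosen correlation ID that the broker
// echoes back so asynchronous responses can be matched to their requests.
final class RequestHeaderSketch {
    final short apiKey;        // 0 = Produce, 1 = Fetch
    final short apiVersion;    // negotiated with the broker
    final int correlationId;   // echoed back in the response
    final String clientId;     // free-form client identifier

    RequestHeaderSketch(short apiKey, short apiVersion, int correlationId, String clientId) {
        this.apiKey = apiKey;
        this.apiVersion = apiVersion;
        this.correlationId = correlationId;
        this.clientId = clientId;
    }

    // Encodes the fields in wire order: INT16, INT16, INT32, then a
    // length-prefixed (INT16) string for the client id.
    ByteBuffer encode() {
        byte[] id = clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + id.length);
        buf.putShort(apiKey);
        buf.putShort(apiVersion);
        buf.putInt(correlationId);
        buf.putShort((short) id.length);
        buf.put(id);
        buf.flip();
        return buf;
    }
}
```

Because responses can arrive out of order, the echoed correlation ID is what lets a client pair each response with its request; the synchronous single-partition approach described in the talk trades some of that pipelining away in exchange for more predictable per-partition latency.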

The RecordSet structure, which is used to encode and store sets (batches) of records, is described, and its implications for Fetch requests are discussed. The relationship between Fetch API calls and "consume" operations is covered, as is the impact of aligning offsets to RecordSet boundaries.
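The toy example below illustrates the offset-alignment point: brokers return whole RecordSets, so a Fetch whose starting offset falls inside a batch hands back the entire batch, and the client must discard the leading records it did not ask for. The Batch record and recordsToSkip helper are hypothetical names used only for this sketch.

```java
import java.util.List;

// Toy illustration (not real client code) of offset alignment to RecordSet
// boundaries: a fetch offset in the middle of a batch means client-side skipping.
final class BatchAlignmentSketch {

    // Minimal stand-in for a RecordSet: its base offset and record count.
    record Batch(long baseOffset, int recordCount) {
        long nextOffset() {
            return baseOffset + recordCount;
        }
    }

    // Number of records the consumer must skip when fetching from `fetchOffset`,
    // assuming batches are contiguous and sorted by base offset.
    static int recordsToSkip(List<Batch> batches, long fetchOffset) {
        for (Batch b : batches) {
            if (fetchOffset >= b.baseOffset() && fetchOffset < b.nextOffset()) {
                return (int) (fetchOffset - b.baseOffset());
            }
        }
        return 0; // offset is aligned to a batch boundary (or out of range)
    }

    public static void main(String[] args) {
        List<Batch> batches = List.of(new Batch(0, 100), new Batch(100, 100));
        // Fetching from offset 150 lands mid-batch: the whole second batch
        // (offsets 100..199) comes back and 50 records are skipped client-side.
        System.out.println(recordsToSkip(batches, 150)); // prints 50
    }
}
```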
