Queues for Kafka is now in General Availability (GA) on Confluent Cloud and is coming soon to Confluent Platform, coinciding with the Apache Kafka® 4.2 release. This milestone brings production-ready queue semantics and elastic consumer scaling natively to Kafka through KIP-932, enabling organizations to consolidate their messaging infrastructures while gaining elastic consumer scaling and per-message processing controls.
Today, many organizations maintain two separate messaging systems: Apache Kafka for high-throughput streaming and traditional message queues for task distribution and job processing. This dual-estate approach creates operational overhead through duplicate security protocols, separate monitoring stacks, and fractured data governance.
Teams also face a fundamental tradeoff: Traditional queues offer per-message acknowledgment and simple task distribution but lack Kafka's durability and scalability. Meanwhile, Kafka's consumer groups provide strong ordering and reliable streaming but struggle with bursty workloads due to the strict 1:1 partition-to-consumer mapping and head-of-line blocking.
Queues for Kafka eliminates this tradeoff by bringing queue semantics directly into Kafka. Organizations can now run both streaming and queuing workloads on a single platform, reducing total cost of ownership (TCO) while maintaining the operational simplicity developers expect from traditional message queues.
Queues for Kafka introduces two main capabilities to Kafka:
The new share consumer API and share groups enable multiple consumers to cooperatively process messages from the same topic regardless of the number of partitions on that topic.
Per-message semantics enable developers to code in Kafka like it’s a task queue.
Unlike traditional Kafka consumer groups where each partition maps to exactly one consumer, share groups allow you to scale consumers elastically beyond partition count. The broker manages acquisition locks on individual records, allowing consumers to acknowledge, release, reject, or renew messages independently.
Queues for Kafka ships with several key capabilities.
Infrastructure consolidation: Run queue and stream workloads on the same Kafka cluster, eliminating duplicate infrastructure.
Elastic scaling without partition constraints: Add consumers to a share group instantly without rebalancing or over-provisioning partitions, avoiding head-of-line blocking in the process.
Per-message processing controls: Individually acknowledge successful processing, release messages for retry, reject and route unprocessable records for later handling, and renew locks for long-running tasks.
Kafka's proven durability: Apply long-term retention, replayability, and zero message loss to queuing workloads.
Queues for Kafka is built on KIP-932, a native Kafka improvement that implements queue semantics at the broker level. When a share consumer fetches records, the broker acquires them for the consumer under a time-limited lock (30 seconds by default). The consumer can then acknowledge the record (i.e., mark it as processed), release it (i.e., make it available for retry), reject it (i.e., mark it as unprocessable), or renew the lock (i.e., extend the processing time).
If no action occurs, the lock automatically expires and the record becomes available to other consumers.
This acquisition lock mechanism enables parallel consumption from individual partitions while maintaining processing guarantees. The system tracks delivery attempts per record, with a configurable limit (five by default), and future plans call for first-class dead letter queue (DLQ) functionality to route bad records for further processing.
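The lock duration and delivery-attempt limit described above map to group-level configurations defined in KIP-932. The names and defaults below are taken from the KIP and shown as a sketch; verify them against the documentation for your broker version:

```properties
# Acquisition lock duration: how long a fetched record stays locked to one
# consumer before it expires and becomes available to others (default 30s)
group.share.record.lock.duration.ms=30000

# Maximum delivery attempts before a record is no longer redelivered (default 5)
group.share.delivery.count.limit=5
```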
The share consumer API uses the KafkaShareConsumer class, providing a familiar interface for developers already using Kafka consumers. Share consumers can be added to existing topics, even those already consumed by traditional consumer groups, making adoption additive rather than disruptive. And if you’re using the KafkaConsumer class with minimal configurations, you can likely use the new KafkaShareConsumer as a drop-in replacement that enables elastic scaling of consumers.
Here’s a simple code example using the new KafkaShareConsumer with the default "implicit" mode that allows Queues for Kafka to manage acknowledgment (ack) decisions automatically (i.e., implicitly) for you. This example shows how you can use the KafkaShareConsumer as a drop-in replacement for KafkaConsumer.
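The sketch below illustrates that drop-in pattern, assuming Apache Kafka 4.2+ Java clients. The bootstrap address, share group name ("order-processors"), and topic ("orders") are illustrative placeholders; because implicit mode is the default, no acknowledgment-related configuration is required.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ImplicitShareConsumerExample {

    // Build the consumer configuration. group.id names the share group;
    // the bootstrap address is an illustrative placeholder.
    static Properties config(String bootstrapServers, String shareGroupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, shareGroupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return props;
    }

    public static void main(String[] args) {
        try (KafkaShareConsumer<String, String> consumer =
                 new KafkaShareConsumer<>(config("localhost:9092", "order-processors"))) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // In the default implicit mode, fetched records are acknowledged
                // automatically; the poll/process loop is identical to KafkaConsumer.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("Processing %s%n", record.value());
                }
            }
        }
    }
}
```

Note that the only structural change from a typical KafkaConsumer loop is the class name; the share group is created on first use, and additional consumers can join it beyond the topic's partition count.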
Here’s a more involved example using pseudo-code to show some of the flexibility and configurations of the new share consumer. In this example, messages are explicitly acknowledged with three different code paths:
Happy path: ack
Failure: nack (i.e., reject)
Long-running task: Renew the acquisition lock
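A Java sketch of those code paths follows. The topic ("tasks"), share group name, and the process() helper are illustrative assumptions; explicit mode is enabled via the share.acknowledgement.mode setting from KIP-932. For the long-running path, the acquisition lock renewal is noted in a comment rather than shown, since the exact renewal call should be checked against the current client documentation.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ExplicitShareConsumerExample {

    enum Outcome { SUCCESS, RETRIABLE_FAILURE, PERMANENT_FAILURE }

    // Map a processing outcome to a per-message acknowledgment decision:
    // ACCEPT marks the record processed, RELEASE returns it to the share
    // group for redelivery, and REJECT marks it unprocessable (nack).
    static AcknowledgeType decide(Outcome outcome) {
        switch (outcome) {
            case SUCCESS:           return AcknowledgeType.ACCEPT;   // happy path: ack
            case RETRIABLE_FAILURE: return AcknowledgeType.RELEASE;  // retry later
            default:                return AcknowledgeType.REJECT;   // failure: nack
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "task-workers");            // share group name
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Opt out of the default implicit mode to control each acknowledgment.
        props.put("share.acknowledgement.mode", "explicit");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("tasks")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // For a genuinely long-running task, renew the acquisition lock
                    // before it expires so the record is not redelivered mid-flight.
                    Outcome outcome = process(record.value());
                    consumer.acknowledge(record, decide(outcome));
                }
                consumer.commitSync(); // commit this batch of acknowledgments
            }
        }
    }

    static Outcome process(String value) {
        // Business logic placeholder: succeed unless the payload is empty.
        return (value == null || value.isEmpty()) ? Outcome.PERMANENT_FAILURE : Outcome.SUCCESS;
    }
}
```

Separating the acknowledgment decision into decide() keeps the retry policy testable in isolation; released records count toward the delivery-attempt limit, after which they are no longer redelivered.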
The GA release delivers Queues for Kafka on both Confluent Cloud and Confluent Platform, providing the best user experience of any Kafka distribution.
Confluent Cloud provides a fully managed Queues for Kafka experience with zero configuration. Create a share consumer, specify your share group, and start processing—no broker setup required. The Cloud Console includes a dedicated share groups user interface (UI) that provides deep visibility and management of share consumers and share groups, while the Confluent CLI offers seamless command-line operations with group configuration support. Confluent Cloud also provides queue-specific metrics through the Metrics API, including share lag (similar in concept to queue depth) for auto-scaling decisions. The REST API enables programmatic management for automation workflows. And the first share fetch request per connection is logged in Confluent Cloud’s audit logs.
Confluent Platform delivers share groups with full integration into Control Center, providing UI visibility that isn’t available in open source Kafka. Deploy and manage clusters using Ansible Playbooks for Confluent Platform or Confluent for Kubernetes, with broker-level configuration control for advanced tuning. Monitor share groups through JMX metrics and Control Center, including the new share lag metric available only in Confluent.
Queues for Kafka introduces a new consumer paradigm for Kafka users with the share consumer API. Choosing between the share consumer API and traditional Consumer API depends on your workload characteristics:
Use share consumer API (share groups) for operational and application workloads that require per-message acknowledgment, parallel partition consumption, and elastic scaling beyond partition count. This includes command invocation, service communication, task execution, work queues, job processing, and event-driven workflows with back pressure. Share groups prioritize elastic scaling over strict ordering guarantees.
Use Consumer API (Consumer Groups) for analytical and integration workloads that require strict per-partition ordering and traditional streaming patterns. This includes data pipelines, ETL workloads, and stream processing applications in which ordered data is critical. Consumer Groups maintain the 1:1 partition-to-consumer mapping that guarantees order.
The key decision factors are ordering requirements, scaling needs, and message processing model. Share groups sacrifice ordering for elastic scaling and per-message control, while Consumer Groups maintain strict ordering with partition-based scaling limits.
Queues for Kafka represents a fundamental expansion of Apache Kafka's capabilities, enabling organizations to consolidate messaging infrastructure while delivering the operational simplicity developers expect from traditional queues. The differentiated experiences on Confluent Cloud and Confluent Platform ensure that you can run Queues for Kafka in the deployment model that fits your requirements.
Queues for Kafka is available on Confluent Cloud for Enterprise and Dedicated clusters, and it ships with Confluent Platform 8.2. Apache Kafka 4.2+ Java clients are currently supported. Support for additional Confluent Cloud cluster types and non-Java clients is targeted for the second half of 2026.
Start with Queues for Kafka on Confluent Cloud or download Confluent Platform 8.2. Explore the share groups UI, download and run the qfk-demo (YouTube overview), and join the Apache Kafka community discussion about KIP-932 in the Confluent Community #queues-for-kafka Slack channel. Review the documentation for Confluent Cloud and Confluent Platform for detailed setup guidance.
And if you haven’t done so already, sign up for a free trial of Confluent Cloud. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CLOUDBLOG60 for an additional $60 worth of free usage.
The GA release is just the beginning. Near-term enhancements include non-Java client support and additional cluster type support on Confluent Cloud in 2026. We're actively developing DLQ support (KIP-1191, targeting Apache Kafka 4.4 in 2026) to automatically copy undeliverable records onto dedicated DLQ topics.
Future roadmap items include exactly-once semantics through delivery acknowledgment within Kafka transactions, exponential backoff for redelivery attempts, and key-based ordering, a significant enhancement that combines partition sharing with per-key ordering guarantees.
Apache®, Apache Kafka®, and Kafka® are registered trademarks of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of these marks.