Today, we're excited to announce the release of Confluent Platform 8.2, which builds on Apache Kafka® 4.2!
This release extends and simplifies what you can do with Apache Kafka and Apache Flink®, whether that’s handling task queues natively with Apache Kafka 4.2, processing streams easily with Flink SQL, or managing cluster migration, upgrades, or disaster recovery without the usual operational pain.
The release highlights are below, and additional details about the features are in the release notes.
New key capabilities:
Run streaming and task-queue workloads side by side with Queues for Kafka, with native queue semantics and elastic consumer scaling built in.
Simplify stream processing with Flink SQL, now in General Availability (GA). Filter, join, aggregate, and transform data streams in Confluent Platform for Apache Flink® using DDL statements, changelogs, and shared compute pools, providing a declarative way to manage Kafka topics directly through the Confluent CLI or the Control Center UI.
Reduce operational complexity with Flink ease-of-use enhancements, including multi-Kubernetes cluster support, a new savepoint management UI, and native support for Red Hat Enterprise Linux 10 (RHEL 10) and Red Hat OpenShift environments.
Kafka has always been the backbone of real-time data—the engine that teams trust to move massive volumes of events reliably and at scale. But if you've ever tried to build task-processing workloads on it, you know the friction. Scaling consumers meant scaling partitions, and a single slow message could block everything behind it. Per-message acknowledgment simply wasn't there. Teams worked around it, built on top of it, or lived with it.
With Queues for Kafka now in GA on Confluent Platform, that changes. For the first time, Kafka natively supports queue semantics—elastic consumer scaling, per-message acknowledgment, and task-level processing controls—opening a whole new class of workloads that teams can build directly on Kafka. No new systems to operate. Just more of what you already trust, doing more than it ever could before.
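As a sketch of what queue-style consumption looks like in the Java client, the loop below uses the `KafkaShareConsumer` API introduced by KIP-932. The broker address, topic, and group name are illustrative assumptions, and the exact configuration keys may differ in your client version; consult the client Javadoc before relying on this:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;

public class TaskQueueWorker {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "payment-tasks");             // interpreted as a share group
        props.put("share.acknowledgement.mode", "explicit"); // per-message acks (assumed config key)
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("payments"));        // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    try {
                        process(record);                    // your task logic
                        // Done with this message; it will not be redelivered.
                        consumer.acknowledge(record, AcknowledgeType.ACCEPT);
                    } catch (Exception e) {
                        // Release the message so another consumer in the share group can retry it.
                        consumer.acknowledge(record, AcknowledgeType.RELEASE);
                    }
                }
                consumer.commitSync();                      // commit the acknowledgments
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) { /* task logic */ }
}
```

Note how this differs from a classic consumer: any number of `TaskQueueWorker` instances can share the same topic without repartitioning, and a slow message only holds up the one worker processing it.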
Overview of Queues for Kafka components: Share consumers with share groups and per-message semantics
Processing data is just as important as moving it, and in version 8.2, Flink SQL is now GA in Confluent Platform for Apache Flink®. While Flink SQL has been part of the ecosystem for a while, this GA milestone marks a shift in how you interact with your data. We’re moving beyond just running Flink to providing a deeply integrated, declarative experience that allows you to manage your entire stream processing estate through the Confluent CLI or Control Center.
This release matures the developer experience and expands where and how you can run these workloads:
Declarative Stream Management: With GA, Confluent Platform for Apache Flink® delivers a simplified, fully supported SQL experience. New support for CREATE TABLE, DROP TABLE, and ALTER TABLE DDL statements means you can manage Kafka topics, schemas, and watermarking strategies using standard SQL syntax, with changelog support for append, upsert, and retract modes. Your data analyst and data engineer communities can now easily query, aggregate, and join streaming data for real-time insights and analytics.
Resource Efficiency With Shared Pools: We’ve introduced "shared" compute pools for SQL to allow faster query submission and to optimize infrastructure costs. You can now run multiple SQL statements within a single Flink session cluster, maximizing CPU and memory utilization across your applications.
Operational Flexibility: Confluent Platform for Apache Flink® is the only Flink service offering multi-Kubernetes cluster support (GA) today, enabling you to manage Kubernetes clusters across on-premises and cloud service provider environments from a single Confluent Manager for Apache Flink (CMF) instance. This unified approach significantly reduces the complexity and operational overhead of managing large-scale Flink deployments. Confluent Platform for Apache Flink® is now OpenShift-certified, ensuring a smooth deployment path for organizations standardized on Red Hat’s ecosystem or running in diverse environments.
Cloud-Native Operations On-Prem: You can now manage Flink applications across multiple clusters and handle lifecycle tasks such as creating savepoints directly through the CMF interface.
By treating Flink SQL as a first-class citizen, we’re making it possible for anyone who knows SQL to build high-performance, stateful streaming applications without needing to be a Java or Scala expert.
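To make that concrete, here is roughly what the declarative workflow looks like. The table name, schema, and watermark column are illustrative assumptions, not taken from the release notes; in Confluent Platform for Apache Flink®, a CREATE TABLE like this is what manages the backing Kafka topic and schema:

```sql
-- Declare a table backed by a Kafka topic; the DDL manages topic, schema, and watermark.
CREATE TABLE orders (
    order_id   STRING,
    amount     DECIMAL(10, 2),
    order_time TIMESTAMP(3),
    WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
);

-- A continuous windowed aggregation any SQL-literate engineer can write:
SELECT window_start, window_end, SUM(amount) AS revenue
FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

The second statement runs continuously, emitting per-minute revenue as events arrive, with no Java or Scala job to package and deploy.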
Confluent is deprecating CMF version 1.x. If you use this version, you should migrate to CMF version 2.2 or later, which is backward compatible. Support for patching and bug fixes ends on May 25, 2026, and CMF version 1.1 will reach end of life in September 2026.
Confluent Private Cloud debuted at Current in New Orleans as our premium offering for organizations that need cloud-like agility within their own data centers. While 8.2 is a massive milestone, it’s also the foundation for where we’re heading: a future where “self-managed” doesn't mean high overhead.
With the release of Confluent Private Cloud Gateway 1.2 (the Gateway), we’re introducing features that make it significantly easier to manage complex migrations and secure external access:
Seamless Migration and Client Failover With Intelligent Fencing and Unfencing: You now have granular control over client traffic during migration, maintenance, and disaster recovery. By "fencing" specific clients or groups at the Gateway level, you can ensure data integrity during cutovers without dropping the entire cluster.
Protocol-Level Auth Swapping: The Gateway now supports Salted Challenge Response Authentication Mechanism (SCRAM) auth swap, allowing clients to authenticate with the Gateway using one set of credentials while the Gateway communicates with the backend brokers using another. This is a game-changer for rotating secrets or migrating clusters without forcing every application team to update their configs simultaneously.
Expanded Non-Java Client Support: We’ve officially extended non-Java client support via the Gateway. Whether your teams are building in Python, Go, or .NET, they can now leverage the same high-availability routing and security abstraction that was previously reserved for Java-based microservices.
These updates are specifically designed to remove the management tax often associated with basic managed Kafka services. Decoupling the application from the underlying broker infrastructure gives you the freedom to upgrade, move, or reconfigure your clusters without the typical operational friction that slows down large-scale deployments. For more details, check out the release notes.
Looking ahead, we’re leaning further into fleet management and multi-tenancy. Our goal is to make managing hundreds of clusters feel like managing one, providing a cloud-like experience wherever your data lives. Stay tuned for what’s coming next.
Strengthen data governance with cross-context authorization checks, preventing unauthorized schema access during subject lookups while providing granular exemptions for administrative principals.
Starting in Confluent Platform 8.2, Confluent Cloud Schema Registry uses Avro version 1.12, which introduces strict namespace validation by default to ensure data integrity and compatibility.
Build more resilient pipelines with native Dead Letter Queue (DLQ) support, which allows you to automatically redirect malformed or unprocessable records to a separate topic instead of halting your entire stream.
Gain finer operational control over your streams applications with anchored punctuation for predictable task scheduling, explicit file system permissions for local state directories, and improved monitoring via application-id tags in state metrics.
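As a sketch of what the native DLQ support looks like in practice: KIP-1034 adds a dead-letter-queue configuration to Kafka Streams mirroring the one long available in Kafka Connect. The exact property name should be verified against your release's documentation, and the topic name below is an illustrative assumption:

```properties
# Route records that fail processing to a dead letter topic
# instead of halting the stream task (assumed KIP-1034 config key).
errors.deadletterqueue.topic.name=orders-dlq
```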
Starting with Confluent Platform 8.0, Confluent Control Center packages are hosted in a separate repository under the name confluent-control-center-next-gen, beginning with Confluent Control Center version 2.0. Control Center is now shipped independently of Confluent Platform releases, so you can pick up new operational capabilities without waiting for a full platform upgrade. With the Control Center 2.5 release, you’ll also get:
A Queues for Kafka (share groups) UI to visualize and manage queue-based consumption patterns alongside traditional consumer groups
Flink SQL UI (GA) in CMF, making it easier to manage Flink SQL statements and jobs directly from the Control Center
Shared compute pools for SQL UI in CMF
Multi-Kubernetes support UI in CMF, allowing management of disparate Kubernetes clusters on-prem or using cloud service provider Kubernetes, all through one CMF instance
Savepoint management UI in CMF, allowing customers to create, list, read, detach, and delete savepoints using the UI
A redesigned landing page built to monitor 100+ Kafka clusters from a single instance, giving large fleets a clearer, more scalable operational view
For more information, see the support plans and compatibility documentation.
Confluent for Kubernetes 3.2.0 continues to provide a declarative control plane for managing Confluent Platform on Kubernetes, supporting up to version 8.2.x. Confluent Ansible 8.2.0 introduces new capabilities for modern environments, including deployment on RHEL 10 hosts, secure connection management via AWS Systems Manager, and support for deployments that are compatible with Federal Information Processing Standards (FIPS) 140-3. For complete details, see the Confluent for Kubernetes release notes and Confluent Ansible release notes.
Scaling Monitoring Beyond Health+ With Unified Stream Manager
As we continue to modernize the operational experience of Confluent Platform, we’re transitioning from localized monitoring to a more holistic, fleet-wide approach. Unified Stream Manager (USM) is now the standard for monitoring, managing, and governing your data infrastructure across all your Kafka environments.
With this shift, Confluent Health+ is now deprecated and entering its end-of-life process. While Health+ remains operational for existing users, it's scheduled to be retired at the end of 2026. As of version 8.1, Health+ is no longer available for new deployments.
Why should you move to USM?
USM isn't just a replacement; it’s an upgrade for handling large-scale environments. It provides a single pane of glass across your entire estate, moving beyond simple health checks to offer:
Unified Governance and Observability: Gain end-to-end visibility with catalog and lineage, plus centralized governance policies that use Cloud Schema Registry for data contracts and encryption rules.
Hybrid Monitoring in One Place: Use integrated dashboards in the Confluent Cloud Console to monitor Kafka clusters across cloud and on-prem from a single, consistent experience.
Secure, Simplified Operations: Connect via a secure agent and private networking architecture while reducing manual effort to keep your clusters optimized, compliant, and resilient at scale.
If you’re currently running Health+, now is the time to start your migration to USM. Detailed steps and architectural benefits are in the USM documentation.
Confluent Platform 8.2 is based on Apache Kafka 4.2. Following the removal of legacy client support in version 8.0, Java clients must now be on version 2.1.0 or newer. This requirement stems from KIP-896, which removed older protocol API versions. Most clients released before November 2018 are affected; see the release notes for details.
You can use your current support plan until you’re ready to upgrade. Confluent Professional Services is available to assist with migrating to compliant clients. For upgrade steps and compatibility details, refer to the Apache Kafka 4.2 upgrade guide.
For more details about Apache Kafka 4.2, read this blog post or check out the release video below.
Download Confluent Platform 8.2 today—the only cloud-native and comprehensive platform for data in motion, built by the original co-creators of Apache Kafka. Before you upgrade to Confluent Platform 8.2, review the Confluent Platform upgrade guide and the Kafka 4.2 upgrade guide for detailed, step-by-step upgrade instructions, rolling upgrade considerations, and information about breaking changes and compatibility issues.
For those looking to try the new queuing features, check out the updated KafkaShareConsumer API in the Java client. If you want to get Flink up and running, see the new Quick Start Guide that walks through installing CMF using Helm.
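Getting CMF onto a cluster via Helm looks roughly like the following. The repository URL, chart name, and namespace are taken from the CMF documentation as best understood here and should be checked against the Quick Start Guide before use:

```shell
# Add Confluent's Helm repository (URL as documented for Confluent packages)
helm repo add confluentinc https://packages.confluent.io/helm
helm repo update

# Install Confluent Manager for Apache Flink into its own namespace
helm upgrade --install cmf confluentinc/confluent-manager-for-apache-flink \
  --namespace cmf --create-namespace
```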
Ready to get started with Confluent Private Cloud? All Confluent Private Cloud features require a valid license to function correctly. Reach out to your Confluent account team today.
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Flink®, Apache Kafka®, Kafka®, Flink®, and the Kafka and Flink logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by the use of these marks. All other trademarks are the property of their respective owners.