The ideal data streaming platform empowers you to handle every type of data movement with confidence. In our first Confluent Cloud launch of 2026, we’re delivering on the promise of Apache Kafka® for any workload, at any scale.
Whether you’re an enterprise needing to consolidate queuing workloads with data streaming, a retailer preparing for massive Black Friday–level traffic spikes, or a startup looking to power artificial intelligence (AI) applications with real-time data, Confluent is the platform to satisfy your data needs. With additional scalability, cost savings, and analytics features, we’re making it easier than ever to bring every workload home to Confluent Cloud.
Check out the Kafka Copy Paste migration demo video to see how you can migrate to Confluent Cloud and start taking advantage of these new features.
We’re excited to announce the general availability of Queues for Kafka (KIP-932) on Confluent Cloud, coinciding with the release of Apache Kafka 4.2.
Historically, organizations have often maintained two separate technology estates: a modern data streaming platform for real-time workloads and a legacy queuing system for task distribution. This split led to infrastructure sprawl and fragmented governance. By introducing native queue semantics to Kafka, Queues for Kafka eliminates the need to manage these separate platforms, allowing organizations to consolidate their messaging infrastructure onto a single, governed data streaming platform.
As a result, organizations leveraging Queues for Kafka can reduce total cost of ownership (TCO) while maintaining the durability and scalability they’ve come to expect from Kafka and Confluent.
The secret sauce that brings queues to Kafka is a pair of new abstractions: share groups, which enable elastic scaling, and share consumers, which provide queue semantics. Unlike traditional consumer groups, which are restricted by a strict 1:1 partition-to-consumer mapping, share groups allow multiple consumers to cooperatively process messages from the same topic regardless of the number of partitions. This lets consumers scale elastically to meet the demands of bursty, parallel workloads.
We recommend that you use share groups for operational and application workloads that require per-message acknowledgment, parallel partition consumption, and elastic scaling beyond partition count—including command invocation, service communication, task execution, work queues, and job processing.
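To make the contrast concrete, here is a small Python sketch (a toy model for illustration only, not the actual Kafka client API) of why share groups scale past the partition count: a classic consumer group idles any consumer beyond the number of partitions, while a share group hands the next available message to any free consumer.

```python
from collections import defaultdict

def consumer_group_assignment(partitions, consumers):
    """Classic consumer groups: strict 1:1 partition-to-consumer mapping.
    Consumers beyond the partition count receive no assignment."""
    return {c: [p] for c, p in zip(consumers, partitions)}

def share_group_dispatch(messages, consumers):
    """Share groups (toy model): any free consumer may take the next
    available message, regardless of its source partition."""
    work = defaultdict(list)
    for i, msg in enumerate(messages):
        work[consumers[i % len(consumers)]].append(msg)
    return dict(work)

partitions = ["p0", "p1"]
consumers = ["c0", "c1", "c2", "c3"]
messages = [f"m{i}" for i in range(8)]

assigned = consumer_group_assignment(partitions, consumers)
shared = share_group_dispatch(messages, consumers)

print(len(assigned))  # 2 -- only as many active consumers as partitions
print(len(shared))    # 4 -- all four consumers process messages in parallel
```

With two partitions, the classic model caps useful parallelism at two consumers; the share-group model keeps all four busy.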
Queues for Kafka on Confluent Cloud takes this further with a dedicated share group user interface (UI) directly in the Confluent Cloud console, providing deep visibility into consumer health and group management that isn't available in open source distributions. Confluent Cloud also provides critical, queue-specific metrics through our Metrics API, allowing you to make autoscaling decisions. Combined with programmatic management via the Confluent CLI and REST API, you get a production-ready queuing service that’s fully integrated into our data streaming platform.
Consolidate workloads from traditional messaging systems onto a single, unified Kafka platform
Improve scalability with share groups that solve Kafka's traditional partition-based scaling limitations by allowing dynamic consumer scaling independent of partition assignments
Unlock new use cases, including work distribution and task queue processing patterns that were previously difficult or impossible with traditional Kafka consumer groups
Enhance observability with real-time monitoring of consumer health and queue-specific metrics through the Metrics API
Queues for Kafka is available for Enterprise and Dedicated clusters on Confluent Cloud, with support for Standard clusters coming in the second half of 2026. Read the documentation, download and run the qfk-demo, and join the Apache Kafka community discussion about KIP-932. For questions, reach out to your account team.
We’re thrilled to introduce Kafka Copy Paste (KCP), a free open source CLI tool designed to automate migration from hosted Kafka (and eventually open source Kafka) to Confluent Cloud. This new migration tooling streamlines the entire process by eliminating much of the manual work traditionally involved, cutting migration times from months to days and helping you achieve a stress-free, near-zero-downtime migration.
Moving workloads from hosted Kafka to a fully managed, cloud‑native data streaming platform can unlock major cost savings and agility. With KCP, that process is easier than ever. KCP orchestrates the full journey from hosted Kafka to Confluent Cloud, including:
Discovery and Planning: KCP scans your existing Kafka environments to detect cluster configuration, gather costs based on actual usage, and provide accurate inputs for Confluent’s TCO calculator.
Provisioning Infrastructure: KCP generates pre-filled Terraform scripts to automatically provision the equivalent Confluent Cloud clusters, networking, and necessary migration infrastructure.
Data Migration: KCP enables end-to-end automation for replicating data via secure external Cluster Linking and automates the conversion and migration of associated components, such as access control lists (ACLs), connectors, and Schema Registry data.
Client Migration: Coming soon, KCP will make use of Confluent Cloud Gateway, a cloud-native Kafka proxy solution, to simplify client cutovers during migration.
Explore KCP on GitHub, try out the migration workshop to see it in action, and check out the deep-dive blog post to walk through the key steps of migration with KCP.
Confluent Cloud is introducing new capabilities for Kafka clusters to help you scale confidently while balancing cost efficiency and predictability.
Operators can now configure a capacity limit on elastic Confluent Units for Kafka (eCKUs) across all serverless cluster types (Basic, Standard, Enterprise, and Freight) for better cost control. Teams can experiment and onboard without the risk of exceeding budget.
Enterprise clusters on all major clouds can now autoscale up to 32 eCKUs, delivering more than 7.5 GB/s of combined throughput—more than 3x the previous capacity. All clusters retain exceptionally fast scaling (in seconds) for up to 10 eCKUs. Scaling beyond this threshold shifts to an on-demand model that may take up to 20 minutes per eCKU. If your workload requires rapid expansion at these higher volumes, contact your account team to enable faster scaling.
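A quick back-of-the-envelope sketch using the numbers above (the figures come from this announcement; actual scaling behavior and throughput depend on your workload and configuration):

```python
FAST_SCALE_LIMIT_ECKUS = 10      # scales in seconds up to this point
MAX_ECKUS = 32                   # new Enterprise cluster ceiling
COMBINED_THROUGHPUT_GBPS = 7.5   # at 32 eCKUs, per this announcement
ON_DEMAND_MINUTES_PER_ECKU = 20  # worst case beyond the fast-scale limit

def worst_case_scale_minutes(target_eckus, current_eckus=1):
    """Upper-bound estimate: eCKUs up to the fast-scale limit arrive in
    seconds (counted here as ~0 minutes); each eCKU beyond the limit may
    take up to 20 minutes under the on-demand model."""
    on_demand = max(0, target_eckus - max(current_eckus, FAST_SCALE_LIMIT_ECKUS))
    return on_demand * ON_DEMAND_MINUTES_PER_ECKU

print(worst_case_scale_minutes(32))                 # 440 minutes, worst case
print(worst_case_scale_minutes(8))                  # 0 -- within fast scaling
print(COMBINED_THROUGHPUT_GBPS / MAX_ECKUS * 1000)  # ~234 MB/s per eCKU
```

In other words, plan capacity ahead for the largest spikes: scaling from 10 to 32 eCKUs could take several hours in the worst case unless faster scaling is enabled for your account.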
We’re extending client quotas to Enterprise and Freight clusters, matching the functionality that previously existed only in Dedicated clusters. Client quotas enable you to enforce precise ingress and egress throughput limits on specific principals, making it possible to safely consolidate diverse workloads onto shared resources to optimize costs.
By establishing these guardrails, you prevent “noisy neighbor” applications from monopolizing throughput or degrading overall cluster performance. The result is a cost-effective, multi-tenant environment where every application maintains predictable performance regardless of traffic spikes from others.
Fetch-from-follower (KIP-392) is now available for Enterprise clusters in addition to Freight clusters. For Enterprise clusters using Private Network Interface (PNI), you can configure your clients to consume from the closest replica in the same availability zone (AZ) rather than a leader replica in a different AZ, cutting out cross-AZ traffic and slashing egress charges on Amazon Web Services (AWS) networking bills.
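Fetch-from-follower keys off the consumer’s `client.rack` setting, which must match the availability-zone ID where the client runs. A minimal consumer-configuration sketch (the zone ID is illustrative; substitute your own AZ):

```properties
# Consumer config: prefer the replica in the client's own AZ (KIP-392)
client.rack=use1-az1
```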
Last year, we introduced Confluent Intelligence, our fully managed service for building real-time, context-rich, trustworthy AI. Today, we’re expanding Confluent Intelligence with new capabilities across Streaming Agents, built-in ML functions, and Model Context Protocol (MCP) server support. These features enable you to connect existing agents, detect anomalies more accurately, use more vector stores for retrieval-augmented generation (RAG), secure your networking, and standardize how agents access real-time data on Confluent Cloud.
Learn about the latest in Confluent Intelligence in the deep-dive blog post or check out the demo video:
Real-world data is rarely perfect, and system health often relies on the interplay between multiple metrics. That’s why we’re introducing multivariate anomaly detection for built-in ML functions in Early Access.
Unlike traditional tools that monitor metrics in isolation, multivariate anomaly detection treats multiple correlated metrics as a single unified vector. By using median absolute deviation (MAD), it identifies the “true normal” of your system, automatically ignoring outliers and capturing complex issues that individual metrics checks would miss. This helps you cut through the noise, making it easier to detect and resolve critical issues before they impact your customers.
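To illustrate the core idea, here is a generic, simplified sketch of MAD-based robust scoring (not Confluent’s implementation): each observation is a vector of correlated metrics, deviations are measured from the componentwise median in MAD units, and an observation is flagged when any metric deviates strongly.

```python
import statistics

def mad_anomaly_scores(vectors):
    """Robust z-scores via median absolute deviation (MAD).

    Each observation is a tuple of correlated metrics; its score is the
    largest per-metric deviation from the median, in MAD units. Medians
    and MADs ignore outliers, capturing the "true normal" of the system."""
    dims = len(vectors[0])
    medians = [statistics.median(v[d] for v in vectors) for d in range(dims)]
    mads = [statistics.median(abs(v[d] - medians[d]) for v in vectors) or 1e-9
            for d in range(dims)]
    return [max(abs(v[d] - medians[d]) / mads[d] for d in range(dims))
            for v in vectors]

# (cpu_util, latency_ms): the last observation is jointly anomalous
obs = [(0.31, 10.0), (0.33, 11.0), (0.30, 10.0), (0.32, 9.0), (0.90, 60.0)]
scores = mad_anomaly_scores(obs)
print([round(s, 1) for s in scores])  # the final score dwarfs the rest
```

Because the median and MAD are barely affected by the outlier itself, the anomalous point stands out sharply, whereas a mean-and-standard-deviation baseline would be dragged toward it.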
Sign up for Early Access.
Streaming Agents on Confluent Cloud enables you to build, test, deploy, and orchestrate event-driven agents using fully managed Apache Flink® and Kafka on a unified platform. With the new Agent2Agent (A2A) integration (Open Preview) for Streaming Agents, you can use A2A directly from Flink to communicate, collaborate, and orchestrate tasks with external agents on any A2A-capable platform (e.g., LangChain, CrewAI, SAP, Salesforce) while leveraging Confluent’s reliability, replayability, and observability for inter-agent communication.
By wiring A2A calls into Streaming Agents, you effectively turn existing external systems and agents into event-driven participants that act on the live state of the business instead of on stale, batch-based signals.
To supply agents with the right data and security, we’ve expanded our private networking and vector search capabilities.
For enterprises with strict security requirements, we’re excited to announce that Private Link on AWS and Azure for model inference, external tables, and vector search is now generally available. This allows Confluent Intelligence (including Streaming Agents) to reach external databases, vector stores, and REST APIs over private VPC-to-VPC connections. You can safely enrich real-time streams with sensitive customer or operational data while meeting rigorous compliance standards.
To further ground your agents in proprietary, domain-specific knowledge, we’ve expanded our vector search ecosystem to include Azure Cosmos DB and Amazon S3 Vectors as first-class providers within Confluent’s external tables and search fabric—alongside MongoDB, Elastic, Couchbase, Postgres, SQL Server, Oracle, and more.
Vector search for Cosmos DB and Amazon S3 Vectors allows you to retrieve the most relevant results in real time directly within your Flink SQL pipelines, enabling seamless, production-grade RAG that’s grounded in relevant context.
While teams want a standardized way for AI agents to tap into Confluent Cloud via Anthropic’s MCP, they may currently struggle with self-managed, bespoke deployments of the open source server. Confluent now provides official support for the open source MCP server, allowing any MCP client to securely access and manage real-time data across Confluent Cloud—including Kafka, Flink, and Tableflow—using natural language.
This gives enterprises a production-ready, vendor-backed MCP server and simplifies the operational overhead of multi-agent systems.
We’re committed to making Tableflow accessible across more clouds and environments. By expanding our ecosystem support, we’re ensuring that every team, regardless of their preferred cloud or architecture, can turn real-time streams into actionable tables with a single click.
That’s why we’re pleased to announce that Tableflow on Microsoft Azure is generally available for Confluent Cloud. You can now materialize Kafka topics as Apache Iceberg™ or Delta tables in Azure Data Lake Storage Gen2, making your streaming data readily accessible for analytics and AI use cases. Reach out to your account team to get started.
If you want the same topics‑to‑tables experience but need to keep all data within your own environment, WarpStream Tableflow offers a zero-access, Bring Your Own Cloud (BYOC)–native architecture. This ensures that your sensitive raw data is processed and stored exclusively within your VPC and never leaves your environment. WarpStream Tableflow allows you to ingest from any Kafka-compatible source—including Confluent Platform, hosted Kafka, and open source Kafka—to create fully managed Iceberg tables. Learn more in the deep-dive blog post.
To enhance the security of data in transit, we’re introducing mutual TLS (mTLS) authentication for Confluent Cloud Freight clusters, providing two-way authentication between clients and clusters before any data is exchanged. Sign up for the Limited Availability program.
We’re excited to introduce several new fully managed connectors for Confluent Cloud:
We’re bringing AI-assisted troubleshooting for fully managed connectors directly into the Confluent Cloud UI. Powered by Azure OpenAI, the feature provides clear summaries of connector failures along with step-by-step remediation recommendations, helping you resolve issues faster without waiting for support.
We’re extending support for custom single message transforms (SMTs) to Azure in addition to AWS, enabling you to upload and execute your own custom SMT code on fully managed connectors for clusters using private networking setups.
To help organizations scale securely, we’re introducing pool-level role-based access control (RBAC), designed to enforce the principle of least privilege across your Flink workloads. With pool-level RBAC, admins can assign granular permissions, such as the new FlinkDeveloper role, scoped specifically to individual compute pools rather than the entire environment. This ensures that users and services have access only to the workspaces and statements relevant to their specific roles, thus reducing security risk.
And to support your most demanding workloads, we’re significantly expanding capacity limits to 1,000 CFUs per compute pool (Limited Availability), a 20x increase from the previous limits. You can now run larger workloads and handle massive spikes in data throughput within a single pool. Whether you want to consolidate more jobs into fewer resources or simply need to ensure that your infrastructure can withstand heavy traffic, this increased ceiling ensures that your compute power can grow in lockstep with your stream processing needs. Sign up for the Limited Availability program.
Confluent Cloud now supports the Streams Rebalance Protocol (KIP‑1071), enabling a broker-driven rebalancing system that provides significantly faster, more stable, and more observable rebalances for Kafka Streams applications. The Streams Rebalance Protocol:
Speeds up failover and scaling by reducing rebalance times by more than 50%, minimizing downtime when you deploy new code or when an instance fails
Enhances pipeline stability through centralized broker coordination that eliminates the risk of catastrophic rebalance storms and cascading failures
Deepens observability by using dedicated broker-side metrics in the Confluent Cloud UI and Metrics API that provide direct insight into rebalance performance and group health
The Streams Rebalance Protocol is available now on Dedicated and Enterprise clusters. Basic and Standard cluster availability will follow in the coming weeks. To get started, simply:
Ensure that your application uses Kafka Streams client library version 4.2 or newer.
Set the group protocol property in your Kafka Streams application configuration.
That’s it. Confluent Cloud handles the rest.
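Per KIP-1071, the setting in question is the `group.protocol` client property, shown here as a properties fragment (check the Kafka Streams 4.2 documentation for your client’s exact configuration surface):

```properties
# Kafka Streams application config: opt in to the Streams Rebalance Protocol
group.protocol=streams
```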
Start Building With New Confluent Cloud Features
If you’re new to Confluent, sign up for a free trial of Confluent Cloud and create your first cluster to explore the new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CLOUDBLOG60 for an additional $60 worth of free usage.
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionalities described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, Apache Flink®, Flink®, Apache Iceberg™, Iceberg™, and the respective logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.