
New in Confluent Cloud: Data in Motion for AI in Action

Written By
  • Hannah Miao, Senior Product Marketing Manager, Confluent

The most innovative companies know that artificial intelligence (AI) problems are fundamentally data problems. To produce the right results, AI applications, ranging from real-time fraud detection to autonomous agents, must share a common foundation: trustworthy data in motion. Today in New Orleans at Current, we’re rolling out powerful new products for Confluent Cloud that are designed to solve this challenge at its core—providing the definitive data streaming platform to fuel your mission-critical applications and bring your AI initiatives to life.

Join us on November 20 for the Streaming Agents webinar and demo to see some of these new features in action.

Build Real-Time Production AI Systems With Confluent Intelligence

We’re thrilled to introduce Confluent Intelligence, our vision for bringing real-time, trustworthy data to production AI systems through the power of Kafka and Flink. Built on our cloud-native data streaming platform, Confluent Intelligence is designed to solve the complex data problems at the heart of AI.

While AI agents and applications require fresh context to observe, reason, and act continuously, building these systems today often means managing fragmented infrastructure, which ultimately slows innovation. Confluent Intelligence eliminates this challenge by unifying historical replay, continuous processing, and real-time serving on a single platform, ensuring AI systems are grounded in trustworthy, contextualized data.

Streaming Agents (Phase 2 - Open Preview)

Streaming Agents enables you to build, deploy, and orchestrate event-driven AI agents natively on Confluent Cloud for Apache Flink®, unifying data processing and AI reasoning in a single, powerful framework. Unlike traditional agent frameworks that are disconnected from data, Streaming Agents lives in your event streams, giving agents the freshest and most accurate view of your business in motion. This allows them to observe, decide, and act in real time to power intelligent, context-aware automation.

Streaming Agents lets you build a real-time context engine that feeds AI applications fresh, relevant, contextual data.

We first introduced Streaming Agents last quarter, complete with capabilities such as model inference, real-time embeddings, tool calling with Model Context Protocol (MCP), external tables and vector search, and more—all while using familiar Flink APIs. Our next phase takes this further with powerful new capabilities that make it even easier to build and run Streaming Agents:

  • Agent Definition (Open Preview): Define and build applications with Streaming Agents in just a few lines of code, and easily test and reuse agents to focus on creating differentiated agent workflows rather than low-level boilerplate code. Enable more sophisticated tasks with dynamic, iterative tool calling where large language models (LLMs) evaluate and adapt outputs for better outcomes.  

  • Observability and Debugging (Open Preview): Capture every agent interaction, including input events, agent-to-agent communications, tool invocations, and model decisions, in an immutable log for end-to-end visibility, auditability, security, and compliance. Trace what happened, debug issues, reliably test changes, and recover from failures, transforming agents from opaque systems into fully transparent ones so you can iterate faster, safely.

  • Real-Time Context Engine (Early Access): Deliver structured, real-time context to Streaming Agents with built-in security and governance, ensuring that agents can make the best-informed decision at the moment it’s needed.

To learn more about Phase 2 of Streaming Agents, check out the demo video, read Faster, Smarter, More Context-Aware: What’s New in Streaming Agents, and register for the upcoming tutorial webinar.

Real-Time Context Engine

Good AI needs good context. A key challenge in building AI systems is securely and reliably serving fresh, relevant context to agents at scale. Real-Time Context Engine, available today in Early Access, directly addresses this challenge. It provides a fully managed serving layer that materializes streaming data into an in-memory, low-latency cache. It continuously builds, refines, and serves this structured context to any AI app or agent through MCP. Real-Time Context Engine abstracts away all Kafka and Flink complexity, allowing developers to access fresh context instantly. When upstream definitions change, the platform automatically reprocesses affected data, ensuring downstream AI systems stay consistent without manual rebuilds or drift.

With authentication, RBAC, and audit logging provided out of the box, it’s the easiest and most secure way to expose trustworthy, real-time context to your AI agents without managing any backend infrastructure. Get started through the Early Access program.
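As a rough illustration of what consuming MCP-served context can look like, here’s a minimal sketch using the open source MCP Python SDK. The endpoint URL, tool name, and arguments are hypothetical placeholders, not the product’s actual interface.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical MCP endpoint; a real deployment supplies its own URL and auth.
CONTEXT_ENDPOINT = "https://context.example.com/mcp"

async def main() -> None:
    # Open a streamable HTTP transport to the MCP server
    async with streamablehttp_client(CONTEXT_ENDPOINT) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover which context tools the server exposes
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool name and arguments
            result = await session.call_tool(
                "lookup_customer_context", {"customer_id": "cust-42"}
            )
            print(result.content)

asyncio.run(main())
```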

Confluent Welcomes Airy!

We’re thrilled to welcome the Airy leadership team to Confluent, taking real-time AI in Apache Flink to the next level. Airy’s AI agent framework and developer-friendly tooling will strengthen the capabilities of our Flink functions, making it easier to infuse real-time intelligence into modern applications.

More Languages and Observability in Confluent Cloud for Apache Flink®

Python User-Defined Functions (Early Access)

Stream processing often calls for logic that goes beyond what’s possible with built-in SQL operators. That’s where user-defined functions (UDFs) come in, letting developers extend Flink SQL with their own logic. Last year, we introduced Java UDFs on Confluent Cloud for Apache Flink®, and we’re now excited to mark another milestone: Python UDFs on AWS, with Early Access coming this quarter.

This release makes it possible to write scalar UDFs in Python and run them directly within Flink SQL, bringing the versatility of Python into real-time stream processing. By tapping into Python’s rich ecosystem of libraries, teams can unlock more advanced data transformations to power use cases from AI/machine learning (ML) to Internet of Things (IoT) signal processing. With both Java and Python support, we’re making Confluent Cloud for Apache Flink® accessible to a broader range of developers and data teams. To participate in Early Access, reach out to your account team or sign up here.
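As a sketch of what a scalar Python UDF looks like, here’s the open source PyFlink shape of the idea; the Early Access registration flow on Confluent Cloud may differ, and the mask_email function and customers table are illustrative only.

```python
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.udf import udf

@udf(result_type=DataTypes.STRING())
def mask_email(email: str) -> str:
    # Keep the first character of the local part and mask the rest
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}" if domain else "***"

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.create_temporary_function("mask_email", mask_email)

# Call the Python function directly from Flink SQL
# (assumes a `customers` table with an `email` column is already registered)
t_env.execute_sql("SELECT mask_email(email) AS masked_email FROM customers")
```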

Observability and Error Handling

Understanding what’s happening inside your stream processing pipelines shouldn’t be guesswork. Confluent Cloud for Apache Flink® now offers a richer suite of observability tools that make it easier to monitor, debug, and optimize jobs in real time.

Enhanced Flink operational logs bring visibility into Flink statement behavior directly in the Confluent Cloud Console. Developers can monitor statement life cycle transitions, scaling status, and runtime errors or warnings, while built-in search, filtering, and visualizations help teams quickly spot and resolve issues. For custom code, Flink UDF logs let you emit and view your own Log4j logs alongside operational logs in the same interface.

We’re also introducing the Flink SQL Query Profiler, a dynamic, real-time visual dashboard that simplifies performance tuning and bottleneck identification by providing metrics across statements, tasks, and operators, as well as an intuitive job graph visualization. Finally, custom error handling gives you control over deserialization behavior. You can choose to fail, ignore, or log bad records to a Dead Letter Queue (DLQ) table for later analysis.

Flink SQL Query Profiler in Confluent Cloud for Apache Flink®
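For intuition about the DLQ option, here’s the classic hand-rolled version of the pattern with the confluent-kafka Python client, which the managed error-handling feature replaces; the broker address, topic names, and JSON payload shape are placeholders.

```python
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "orders-app",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    try:
        order = json.loads(msg.value())  # deserialization can fail on bad records
        print("processing", order)       # stand-in for real business logic
    except (json.JSONDecodeError, UnicodeDecodeError) as exc:
        # "Log to DLQ": route the bad record, with error metadata, to a side topic
        producer.produce(
            "orders.dlq",
            value=msg.value(),
            headers=[("error.reason", str(exc).encode())],
        )
        producer.flush()
```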

Expanding Tableflow Functionality for Confluent Cloud

We’ve listened closely to what customers need to bring their data lakehouse architectures to life with real-time data. Tableflow is now ready for production use cases with its enterprise-grade features—making it easier than ever to deliver high-quality Kafka data streams into your analytics and AI systems with full governance, reliability, and security.

Support for Delta Lake and Unity Catalog

We’re excited to announce the General Availability of Tableflow support for Delta Lake tables and Databricks Unity Catalog, making it easy to feed high-quality Kafka data streams into Databricks and other compatible analytical engines. Tableflow also publishes all metadata to Unity Catalog, making your real-time data instantly discoverable and queryable within your Databricks lakehouse environment.

Tableflow support for Delta Lake and Unity Catalog establishes a powerful, unified governance model, ensuring consistent access controls and schema management across both your streaming and analytics platforms.

Tableflow now supports Delta Lake in addition to Apache Iceberg™️ tables.
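To make that concrete, once Tableflow has registered a topic as a Delta table in Unity Catalog, it can be queried like any other governed table. Here’s a minimal PySpark sketch, with hypothetical catalog, schema, and table names:

```python
from pyspark.sql import SparkSession

# In a Databricks notebook, `spark` already exists; elsewhere, create a session.
spark = SparkSession.builder.getOrCreate()

# Hypothetical three-level Unity Catalog name: catalog.schema.table
df = spark.sql("SELECT * FROM main.streaming.orders LIMIT 10")
df.show()
```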

Upserts, DLQ, and BYOK Encryption

To help you maintain accurate and reliable analytical tables, Tableflow now supports upserts and DLQ. Upsert tables enable you to insert, update, and delete rows efficiently, reducing redundancy and keeping data current for real-time analytics. With DLQ support, malformed or problematic records are automatically routed to a dedicated queue with rich metadata, giving you visibility into data quality issues while keeping your pipelines running smoothly.
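Here’s a sketch of the key-based convention behind upsert tables, assuming the record key serves as the primary key and a null value (tombstone) signals a delete; the broker and topic names are placeholders:

```python
from confluent_kafka import Producer

p = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

p.produce("customers", key=b"cust-42", value=b'{"tier": "silver"}')  # insert row
p.produce("customers", key=b"cust-42", value=b'{"tier": "gold"}')    # update: latest value per key wins
p.produce("customers", key=b"cust-42", value=None)                   # delete: tombstone removes the row
p.flush()
```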

To meet stringent security and compliance requirements, Tableflow now also supports Bring Your Own Key (BYOK) encryption. You can extend self-managed encryption keys from your Kafka clusters to the analytical tables that Tableflow creates, ensuring end-to-end data protection across both Confluent-managed and customer-managed storage environments.

Tableflow on Microsoft Azure (Early Access)

We’re thrilled to announce the availability of Tableflow on Microsoft Azure, now in Early Access. You can now materialize Kafka topics as open tables in the Azure ecosystem, making your streaming data readily accessible for analytics and AI use cases. Reach out to your account team to get started.

To learn more about these Tableflow updates, read this deep-dive blog post: New in Tableflow: Delta Lake, Unity Catalog, Azure Early Availability (EA), and More Enterprise-Grade Features.

Easy Private Apache Kafka® Migrations Powered by Cluster Linking

We’re excited to announce Private Kafka Migrations powered by Cluster Linking, coming soon to Confluent Cloud. Many organizations run their most sensitive, business-critical workloads in private networks, making migration difficult due to security and compliance requirements. Until now, migrating from a privately networked external Kafka cluster could require exposing public endpoints, setting up VPNs, deploying intermediary infrastructure, and using external tools like MirrorMaker 2.

With Private Kafka Migrations, you can directly link self-managed or hosted Kafka clusters to Confluent Cloud over private networking—using options such as AWS PrivateLink, Azure Private Link, Google Private Service Connect, or virtual private cloud (VPC)/virtual private network (VNet) peering—without exposing brokers to the public internet or managing one-off workarounds.

By extending Cluster Linking—our Kafka-native replication protocol—to private-to-private connectivity, Confluent delivers secure, fully managed data replication. Data is replicated byte for byte with offsets preserved automatically, ensuring a smooth cutover for applications and avoiding the risks of downtime or data loss. This gives platform and security teams a migration solution that aligns with approved enterprise networking patterns.

We’ll also soon extend Cluster Linking between Confluent clusters to support Freight clusters. This new capability will enable organizations already using Freight clusters to implement robust disaster recovery strategies for their mission-critical applications. With this release, Cluster Linking is available across all three major clouds, supports both public and private networking types, and can be used to replicate data across Enterprise, Dedicated, and Freight clusters—making it simple to unify data across global and hybrid environments.

Move to Fully Managed Connectors With Connector Migration Utility

We’re pleased to introduce Connector Migration Utility, a free and open source self-service tool that makes it easy to move from self-managed Kafka connectors to fully managed connectors in Confluent Cloud.

With just a few CLI commands, this tool automates every step of the migration journey:

  • Scans existing connector environments to extract details about the connector setup

  • Assesses the feasibility of migration, categorizing each connector as "Fully Supported," "Partially Supported," or "Unavailable"

  • Generates fully managed connector configurations and copies relevant properties and SMT mappings

  • Prepares fully managed connectors in the cloud and allows you to provision them with minimal manual setup

  • Validates performance by comparing operational metrics of self-managed connectors with those of fully managed connectors

Connector Migration Utility automates every step of the migration process.

By removing manual effort and complexity, the Connector Migration Utility helps teams migrate in under an hour and take advantage of the latest features and automated upgrades available with fully managed connectors. With Connector Migration Utility, teams can spend less time on infrastructure and more time on innovation.

Unlock Dynamic Scaling With Queues for Kafka (KIP-932)

We’re excited to introduce the Apache Kafka feature Queues for Kafka (KIP-932), now available in Early Access on Confluent Cloud for Dedicated clusters. This new capability extends Kafka beyond ordered, real-time streaming by adding powerful, native queue-like semantics through the concept of “share groups.”

Unlike traditional consumer groups, where partitions are tied to specific consumers, share groups allow consumers to cooperatively process messages without being bound to partitions. This decoupling unlocks the same simplicity and elasticity as traditional queuing systems while still giving you Kafka’s proven durability, replayability, and reliability.

With Queues for Kafka, you can:

  • Consolidate workloads from traditional queuing systems onto a single, unified platform by using Kafka to support additional queuing use cases

  • Improve scalability and flexibility by decoupling consumers from partitions, allowing you to dynamically scale consumption to match load fluctuations

  • Handle high-throughput workloads more efficiently, including use cases like notification systems and AI-powered fraud detection that require dynamic scaling via share groups

To participate in the Early Access for Queues for Kafka, reach out to your account team or sign up here.

Additional New Features and Updates

mTLS Authentication for Enterprise Clusters (Limited Availability)

To enhance security, we’re releasing mutual TLS (mTLS) authentication for Enterprise clusters in Limited Availability in November. mTLS verifies both the client and the cluster before any data is exchanged, enhancing the security of data in transit. You can now also define granular access control by using RBAC and ACLs to manage client permissions for Confluent clusters based on client certificate metadata.
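For a sense of what mTLS looks like from the client side, here’s a minimal confluent-kafka Python configuration sketch; the endpoint and certificate paths are placeholders, and the actual Confluent Cloud setup steps may differ.

```python
from confluent_kafka import Producer

# Two-way TLS: the client verifies the broker via the CA bundle, and the broker
# verifies the client via the presented certificate and key. Paths are placeholders.
conf = {
    "bootstrap.servers": "pkc-XXXXX.region.provider.confluent.cloud:9092",
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/pki/ca.pem",               # CA chain to trust
    "ssl.certificate.location": "/etc/pki/client.pem",  # client certificate
    "ssl.key.location": "/etc/pki/client.key",          # client private key
}

producer = Producer(conf)
producer.produce("audit-events", value=b"hello over mTLS")
producer.flush()
```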

Connector Updates

Fully Managed Connectors on Confluent Cloud

We’re excited to introduce several new fully managed connectors on Confluent Cloud:

New fully managed connectors on Confluent Cloud include the Neo4j Sink connector and the InfluxDB 3 Sink connector, both available in November.

New Connect with Confluent Integrations to Fuel Agentic AI and Analytics

Our Connect with Confluent (CwC) partner program extends our global data streaming ecosystem, making it easier than ever for enterprises to harness real-time data across the tools and platforms that run their businesses and power AI and analytics use cases. In Q3 2025, we welcomed a diverse range of new partner system integrations into the CwC technology partner program—spanning machine learning, real-time analytics, operational databases, workflow orchestration, and much more.

Check out the CwC Q3 announcement blog post to learn more about these connectors.

Confluent Hub 2.0

We’ve completely redesigned our connector marketplace for Apache Kafka with Confluent Hub 2.0. Finding the right connector can be a time-consuming challenge, so we've transformed the user experience and also added a learning center with built-in tutorials and configuration guides.

Confluent Hub 2.0 makes it easy to find the right connector for your streaming use case.

Egress Private Link for Custom Connectors on AWS

We’re pleased to announce support for custom connectors in AWS PrivateLink environments for Enterprise and Dedicated clusters. Follow our quick start guide to create your custom connector.

Google Cloud IAM and Microsoft Entra ID Support for Fully Managed Connectors

We’re introducing native authentication for our fully managed connectors with both Microsoft Entra ID and Google Cloud IAM, eliminating the risk and operational overhead of managing static credentials. This enables seamless cross-cloud access, letting connectors in any cloud securely access Microsoft Azure and Google Cloud resources using native Entra ID or Google IAM roles.

Strengthening Kafka Streams Operations and Observability 

We’re also expanding visibility into your Kafka Streams applications with application health metrics. You can now instantly assess application health, drill into the state of each individual thread, pinpoint bottlenecks, and correlate insights with your own observability tools. The user interface also surfaces metrics from RocksDB state stores for richer context and faster troubleshooting. These insights give your teams the operational confidence to keep real-time applications running at peak performance.

As your real-time applications built with Kafka Streams mature into mission-critical services, the Kafka Streams Add-On, available in November, helps you overcome day 2 operational challenges with confidence. It brings the Kafka Streams client under Confluent’s standard software support terms, guaranteeing reliability with contractual service level agreements, a predictable three-year maintenance life cycle, proactive fixes, and access to subject matter experts for deep, code-level investigation and resolution. 

This add-on makes it easier to mitigate operational risk and ensure the performance of your most vital streaming applications. In 2026, it will unlock premium operational capabilities for even deeper observability. You’ll gain access to application topology visualizations, detailed task assignment insights, and clear visibility into application rebalance progress and reasons, giving you powerful new ways to troubleshoot and stabilize your apps.

Beyond Compliance: Enabling Trust and Transparency

We’re excited to strengthen Confluent’s commitment to security and compliance with the launch of six core Trust Principles that guide how we design, build, and operate our platform. To demonstrate this commitment, we’ve signed the Cybersecurity and Infrastructure Security Agency (CISA) Secure by Design pledge and released four new white papers that provide deep insight into how we secure our data streaming products and services. Together, these principles, practices, and resources give customers the transparency and confidence they need to innovate securely on Confluent. Learn more in the deep-dive blog post, Beyond Compliance: Confluent's Commitment to Trust and Transparency.

Start Building With New Confluent Cloud Features

If you’re new to Confluent, sign up for a free trial of Confluent Cloud and create your first cluster to explore the new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CCBLOG60 for an additional $60 worth of free usage.


The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.

Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache®, Apache Kafka®, Apache Flink®, Flink®, Apache Iceberg™️, Iceberg™️, and the respective logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

  • Hannah is a product marketer focused on driving adoption of Confluent Cloud. Prior to Confluent, she focused on growth of advertising products at TikTok and container services at AWS.
