Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
This blog announces the general availability of the next generation of Control Center for Confluent Platform.
The Confluent Cloud Q2 2025 release adds Tableflow support for Delta Lake tables, Flink Snapshot Queries, maximum eCKU configuration for elastically scaling clusters, and more!
Announcing the launch of the 2025 Data Streaming Report, highlighting key findings from the report, including the role of data streaming platforms in driving AI success.
This post introduces the VISTA Framework, a structured approach to prioritizing AI opportunities. Inspired by project management models such as RICE (Reach, Impact, Confidence, and Effort), VISTA focuses on four dimensions: Business Value, Implementation Speed, Scalability, and Tolerance for Risk.
In this blog, Confluent's Chief Product Officer, Shaun Clowes, explores strategies for fostering effective async collaboration: reducing burnout, boosting productivity, and making distributed work actually work.
For AI agents to transform enterprises with autonomous problem-solving, adaptive workflows, and scalability, they need event-driven architecture (EDA) powered by streaming data.
Just as some problems are too big for one person to solve, some tasks are too complex for a single artificial intelligence (AI) agent to handle. Instead, the best approach is to decompose problems into smaller, specialized units so that multiple agents can work together as a team.
By combining Google A2A’s structured protocol with Kafka’s powerful event streaming capabilities, we can shift from brittle, point-to-point integrations to a dynamic ecosystem where agents publish insights, subscribe to context, and coordinate in real time. Observability, auditability, and...
This blog post demonstrates using Tableflow to easily transform Kafka topics into queryable Iceberg tables. It uses UK Environment Agency sensor data as a data source, and shows how to use Tableflow with standard SQL to explore and understand the data.
Learn how Confluent champion Tejal Bhatt helps customers on their data streaming journey as part of the Dedicated Solutions Engineering team.
Building multi-agent systems at scale requires something most AI platforms overlook: real-time, observable, fault-tolerant communication, and governance. That's why we build on the Confluent data streaming platform…
Managing exponentially growing data efficiently calls for a multipronged approach built around left-shifted (early-in-the-pipeline) governance and stream processing.
The guide covers Kafka consumer offsets, the challenges of manual offset control, and the improvements introduced by KIP-1094. Key enhancements include accurately tracking the next offset and leader epoch, which ensures consistent data processing and improves reliability and performance.
Learn how data streaming unlocks shift-left data integration—a key enabler for adopting and building next-generation technologies like generative AI—for the government agency.