Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Build event-driven agents on Apache Flink® with Streaming Agents on Confluent Cloud—fresh context, MCP tool calling, real-time embeddings, and enterprise governance.
Apache Kafka 4.2.0 is here. Explore production-ready share groups, Kafka Streams rebalance GA, new metrics, security enhancements, and upgrade details.
Master MongoDB Atlas Connectors on Confluent Cloud. Go beyond basic setup with architectural best practices for Source and Sink. Learn to ensure idempotency, handle CDC events, tune performance, and configure precision filtering for resilient, high-velocity data pipelines.
Learn how to design future-proof architectures for hybrid and multicloud environments, balancing portability, resilience, and long-term flexibility, and using Kafka to implement continuous availability.
Kafka client failover is hard. This post proposes a gateway-orchestrated pattern: use Confluent Cloud Gateway plus Cluster Linking to reroute traffic, reverse replication, and enable one-click failover/failback with minimal RTO.
Streaming data integration supports enriched, reusable, canonical streams that can be transformed, shared, or replicated to different destinations, not just one.
Learn the difference between cloud API keys and resource-specific API keys in Confluent Cloud, plus best practices for service accounts and production security.
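As a rough sketch of how a resource-specific API key is actually consumed by a client (rather than a cloud API key, which targets the management APIs), the Java configuration below authenticates a producer to a single Kafka cluster over SASL/PLAIN. The bootstrap endpoint, key, and secret are placeholders, not real values.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class ClusterApiKeyExample {
    public static void main(String[] args) {
        // Placeholder values: replace with your cluster endpoint and a
        // *resource-specific* API key scoped to that Kafka cluster.
        String bootstrap = "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092";
        String apiKey = "<CLUSTER_API_KEY>";
        String apiSecret = "<CLUSTER_API_SECRET>";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Confluent Cloud Kafka endpoints authenticate with SASL/PLAIN over TLS;
        // the API key is the username and the API secret is the password.
        props.put("security.protocol", "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"" + apiKey + "\" password=\"" + apiSecret + "\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) calls are authorized only against this cluster,
            // according to the ACLs or role bindings granted to the key's principal.
        }
    }
}
```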
Learn how the Confluent Trust Center helps security and compliance teams accelerate due diligence, simplify audits, and gain confidence through transparency.
A global investment bank and Confluent used Apache Kafka to deliver sub-5ms p99 end-to-end latency with strict durability. Through disciplined architecture, monitoring, and tuning, they scaled from 100k to 1.6M msgs/s (<5KB messages), preserving ordering and keeping tail latency transparent.
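The post details the bank's exact settings; as an illustrative sketch only (these values are not the bank's configuration), the standard producer options below show the kind of durability/latency/ordering trade-offs such tuning balances.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class LowLatencyProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Durability: wait for acknowledgement from all in-sync replicas and
        // enable idempotence so retries cannot duplicate records.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // Latency: send immediately rather than waiting to fill batches.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "0");
        // Ordering: with idempotence enabled, per-partition order is preserved
        // even with up to five in-flight requests per connection.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        return props;
    }
}
```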
ZooKeeper is going away. Learn why hosted Kafka migrations are accelerating—and how Confluent Cloud simplifies your move with KRaft and KCP.
Learn how to build a custom Kafka connector, an essential skill for anyone working with Apache Kafka® in real-time data streaming environments that span a wide variety of data sources and sinks.
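For a sense of what building a connector involves, here is a minimal, hypothetical skeleton on the standard Kafka Connect API; the class names and the no-op task are illustrative placeholders, not the article's example.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Minimal skeleton: the Connect framework instantiates the connector,
// calls start() with the user's configuration, then asks for task configs.
public class ExampleSourceConnector extends SourceConnector {

    private Map<String, String> configProps;

    @Override
    public void start(Map<String, String> props) {
        configProps = props; // validate and keep the connector configuration
    }

    @Override
    public Class<? extends Task> taskClass() {
        return ExampleSourceTask.class; // the class that actually polls data
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // Split work across at most maxTasks tasks; here every task gets the same config.
        return Collections.nCopies(maxTasks, configProps);
    }

    @Override
    public void stop() { }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // declare the connector's config options here
    }

    @Override
    public String version() {
        return "0.1.0";
    }

    // Placeholder task: a real task opens a connection to the external system
    // in start() and returns SourceRecords from poll().
    public static class ExampleSourceTask extends SourceTask {
        @Override public void start(Map<String, String> props) { }

        @Override public List<SourceRecord> poll() throws InterruptedException {
            Thread.sleep(1000); // no data source in this sketch
            return null;
        }

        @Override public void stop() { }

        @Override public String version() { return "0.1.0"; }
    }
}
```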
Manual schema management in Apache Kafka® leads to rising costs, compatibility risks, and engineering overhead. See how Confluent lowers your total cost of ownership for Kafka with Schema Registry and more.
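As a hedged sketch of what centralized schema handling looks like in client code, the configuration below wires a producer to Schema Registry via the Confluent Avro serializer so schemas are registered and compatibility-checked automatically; the endpoints and credentials are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class SchemaRegistryProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092");
        // Serialize values as Avro and resolve schemas through Schema Registry,
        // instead of hand-managing schema versions in application code.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // Schema Registry endpoint and credentials (placeholders).
        props.put("schema.registry.url", "https://psrc-xxxxx.us-east-1.aws.confluent.cloud");
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", "<SR_API_KEY>:<SR_API_SECRET>");
        return props;
    }
}
```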
Apache Kafka® cluster rebalancing seems routine, but it drives hidden costs in time, resources, and cloud spend. Learn how Confluent helps reduce your Kafka total cost of ownership.
Audit logging in Confluent Cloud can seem boring—until you need precise insights in a crisis. Learn how to easily filter audit logs for your serverless Apache Kafka® environment and improve your data security.
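As an illustrative sketch (not the post's exact approach), the consumer below reads the Confluent Cloud audit log topic and applies a naive filter; the bootstrap endpoint is a placeholder, authentication settings are omitted, and the string match stands in for proper parsing of the CloudEvents-formatted records.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AuditLogFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The audit log cluster has its own bootstrap endpoint and API key;
        // SASL settings for that key are omitted from this sketch.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<AUDIT_LOG_BOOTSTRAP>");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "audit-log-filter");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("confluent-audit-log-events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Naive string match; a real filter would parse the JSON payload
                    // and inspect fields such as the method name and principal.
                    if (record.value().contains("kafka.CreateTopics")) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }
}
```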
Manual Apache Kafka® monitoring and tool sprawl drive hidden costs in time, complexity, and cloud spend. Learn how Confluent lowers total cost of ownership for Kafka with integrated monitoring.