Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Most AI projects fail not because of bad models, but because of bad data. Siloed, stale, and locked in batch pipelines, enterprise data isn’t AI-ready. This post breaks down the data liberation problem and how streaming solves it—freeing real-time data so AI can actually deliver value.
The concept of “shift left” in building data pipelines involves applying stream governance close to the source of events. Let’s discuss some tools (like Terraform and Gradle) and practices used by data streaming engineers to build and maintain those data contracts.
Airy helps developers build copilots as a new interface to explore and work with streaming data – turning natural language into Flink jobs that act as agents.
This article explores how event-driven design—a proven approach in microservices—can address the chaos, creating scalable, efficient multi-agent systems. If you’re leading teams toward the future of AI, understanding these patterns is critical. We’ll demonstrate how they can be implemented.
Real-time data streaming and GenAI are advancing Singapore's Smart Nation vision. As AI adoption grows, challenges from data silos to legacy infrastructure can slow progress, but Confluent, through IMDA's Tech Acceleration Lab, is helping organizations overcome these hurdles and develop smarter citizen services.
Learn how Flink lets developers connect real-time data to external models through remote inference, enabling seamless coordination between data processing and AI/ML workflows.
Learn how to use the recently launched Provisioned Mode for Lambda's Kafka event source mapping (ESM) to build high-throughput Kafka applications with Confluent Cloud's Kafka platform. This blog also walks through a sample scenario for activating and testing Provisioned Mode for ESM, and outlines best practices.
Learn how Confluent Champion Suguna motivates her team of engineers to solve complex problems for customers—while challenging herself to keep growing as a manager.
Confluent has achieved FedRAMP Ready status for Confluent Cloud for Government. This is a key milestone in delivering secure data streaming services to government agencies and demonstrates our commitment to rigorous security standards. This designation is a full...
An expanded partnership between Confluent and Databricks will dramatically simplify the integration between analytical and operational systems, so enterprises spend less time fussing over siloed data and governance and more time creating value for their customers.
Meet the early-stage startups newly joining the Confluent for Startups AI Accelerator Program. This 10-week virtual program is designed to support the next generation of AI innovators building real-time generative AI (GenAI) applications.
We built an AI-powered tool to automate LinkedIn post creation for podcasts, using Kafka, Flink, and OpenAI models. With an event-driven design, it’s scalable, modular, and future-proof. Learn how this system works and explore the code on GitHub in our latest blog.
FLIP-304 lets you customize and enrich your Flink failure messaging: assign types to failures, emit custom metrics per type, and expose your failure data to other tools.
Discover how to unlock the full potential of data streaming with Confluent's "Ultimate Data Streaming Guide." This comprehensive resource maps the journey to becoming a Data Streaming Organization (DSO), with best practices, industry success stories, and insights to scale your data streaming strategy.