Move beyond batch processing with continuous, real-time data ingestion, transformation, and analysis. If you’ve got questions about event stream processing, we’ve got your answers below.
Shoe retailer NewLimits is struggling with decentralized data processing challenges and needs a manageable, cost-effective stream processing solution for an important upcoming launch. Join developer Ada and architect Jax as they learn why Apache Kafka and Apache Flink are better together.
In this webinar, you'll learn about the new open preview of Confluent Cloud for Apache Flink®, a serverless Flink service for processing data in flight. Discover how to filter, join, and enrich data streams with Flink for high-performance stream processing at any scale.
Hasan Jilani, Confluent
Martijn Visser, Confluent
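The webinar above describes filtering, joining, and enriching data streams with Flink. As a rough illustration of what those three operations mean, here is a minimal pure-Python sketch using generators; the data, field names, and helper functions are invented for this example, and real Flink jobs express the same pattern as SQL or DataStream operators, not Python generators.

```python
# Hypothetical order and customer data, invented for illustration.
orders = [
    {"order_id": 1, "customer_id": "a", "amount": 250},
    {"order_id": 2, "customer_id": "b", "amount": 40},
    {"order_id": 3, "customer_id": "a", "amount": 90},
]
customers = {
    "a": {"name": "Ada", "tier": "gold"},
    "b": {"name": "Jax", "tier": "silver"},
}

def filter_stream(events, predicate):
    """Filter: drop events that fail the predicate."""
    return (e for e in events if predicate(e))

def enrich_stream(events, table, key):
    """Join/enrich: merge each event with its row from a reference table."""
    for e in events:
        yield {**e, **table.get(e[key], {})}

# Keep only large orders, then attach customer context to each one.
large = filter_stream(orders, lambda o: o["amount"] >= 50)
enriched = list(enrich_stream(large, customers, "customer_id"))
# Each surviving order now carries the customer's name and tier.
```

In a streaming engine these operators run continuously over unbounded input rather than over a finite list, which is the point of processing data "in flight."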
In this ebook, you’ll learn about the strategic potential of an event streaming platform for enterprises of all kinds. Event streaming can address business challenges such as improving customer experience, reducing costs, mitigating risk, and providing a single source of truth across the business. It can be a game changer.
Learn how event-driven architecture and stream processing tools such as Apache Kafka can help you build business-critical systems that unlock modern, innovative use cases.
Discover a technical playbook for data engineers, architects, and developers on implementing a Shift Left approach with data streaming to optimize lakehouse and warehouse data ingestion.
See how BMW, Michelin, and Siemens use Apache Kafka for event-driven systems, and how your manufacturing organization can run event-driven microservices with data streaming.
Discover the latest Apache Flink developments and major Confluent announcements from Kafka Summit 2023 in 451 Research’s Market Insight Report.
In this paper, we introduce the Dual Streaming Model. The model presents the result of an operator as a stream of successive updates, which induces a duality of results and streams.
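The abstract above describes presenting an operator's result as a stream of successive updates. One way to picture that idea (this sketch is my own illustration, not code from the paper) is a keyed count whose output is a changelog: every input event yields a new `(key, count)` record that revises the previous result for that key.

```python
from collections import defaultdict

def counting_operator(events):
    """Keyed count whose result is emitted as a stream of updates:
    each output record supersedes the prior count for its key."""
    counts = defaultdict(int)
    for key in events:
        counts[key] += 1
        yield (key, counts[key])

updates = list(counting_operator(["kafka", "flink", "kafka"]))
# The result table and the update stream carry the same information:
# replaying the stream reconstructs the table, which is the duality.
```

Here the final counts (`kafka: 2`, `flink: 1`) can be recovered by taking the last update per key, illustrating how results and streams are two views of the same data.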
This paper presents Apache Kafka’s core design for stream processing, which relies on its persistent log architecture as the storage and inter-processor communication layers to achieve correctness guarantees.
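To make the persistent-log idea above concrete, here is a minimal sketch, invented for illustration and not Kafka's actual implementation: producers append to an ordered log, and each consumer reads from its own offset, so the same log serves both as durable storage and as the communication channel between processing stages.

```python
class Log:
    """Toy append-only log; offsets are simply list indices."""

    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

    def read(self, offset):
        """Return records from `offset` onward; reads are replayable."""
        return self.records[offset:]

log = Log()
log.append("evt-1")
log.append("evt-2")

# Two downstream processors read independently at their own offsets;
# neither read consumes or mutates the log.
from_start = log.read(0)
from_middle = log.read(1)
```

Because the log is persistent and replayable, a processor can recover after failure by re-reading from its last committed offset, which is the basis of the correctness guarantees the paper discusses.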
Businesses are discovering that real-time data at scale lets them create new business opportunities and make their existing operations more efficient. Learn how real-time data streaming can revolutionize your business.