Real-Time Data at Massive Scale

Know and respond to every single event in your organization in real time. Confluent's Apache Kafka-based streaming platform acts like a central nervous system, uniting your organization around a single source of truth.


Data Access Turned Inside Out

Today, your data is locked up in systems that don't talk to each other. You want a single, real-time source of truth about your business, one that reaches the very endpoints of your organization: a central nervous system that lets you react as events arrive, turning inaccessible systems inside out.

Learn More >

Be Ready for What’s Next

What is fast without reliable? Confluent Platform adds administration, data management, and operations tools, plus robust testing, to your Kafka environment. You can even have your Kafka environment hosted in the cloud, by the original creators of Apache Kafka®.

Learn More >


Support Trillions of Messages

Whether your priority is cost or performance, an architecture built on a highly scalable, distributed streaming platform like Confluent grows quickly with your business and the expanding data pipelines that come with it, improving performance while cutting costs.

Learn More >

The Streaming Platform

A streaming platform combines the data distribution strength of a publish-subscribe model with a storage layer and a processing layer. This makes it much easier to create data pipelines and connect them to all of your systems.

Learn more about Confluent Platform >
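To make those three layers concrete, here is a rough sketch using Kafka's Streams API (the topic names "pageviews" and "pageview-counts" are placeholders, not from this page): the topics supply publish-subscribe distribution and durable storage, while the topology below is the processing layer. Building the topology needs no running cluster, so you can inspect the pipeline before deploying it.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class TopologySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Subscribe to an input topic (publish-subscribe + storage)...
        KStream<String, String> pageviews = builder.stream("pageviews");

        // ...process each event as it arrives (processing layer)...
        pageviews
            .groupByKey()
            .count()
            .toStream()
            // ...and publish the results back to another topic.
            .to("pageview-counts");

        // describe() works without a broker, so the pipeline can be
        // reviewed before any deployment.
        Topology topology = builder.build();
        System.out.println(topology.describe());
    }
}
```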
Built for Developers

Your Platform for Real-time Streaming Apps and Pipelines

Modern applications rely on real-time data to personalize experiences, recognize fraudulent behavior, and deliver live dashboards. As a consumer, it’s great; as a developer, it’s revolutionary. A streaming platform provides a single, massively scalable solution for building real-time applications and delivering data wherever you need it, reliably and with low latency.

Get started with Confluent >
# It’s easy to get up and running with all things Confluent
# once you’ve downloaded and set up the Confluent CLI.
$ confluent start
starting zookeeper
zookeeper is [UP]
starting kafka
kafka is [UP]
starting schema-registry
schema-registry is [UP]



A Single Solution for Streaming

Confluent and Apache Kafka® make it easy to work with continuous streams of data. Confluent provides a better-packaged distribution of Kafka and adds developer tools to get you started quickly.

  • Java
  • C/C++
  • Python
  • Go
  • REST Proxy
Get started with Confluent >
// You can use Kafka’s Streams API or KSQL to process your stream,
// without any additional clusters, and do powerful things like
// computing a top-five chart of songs played. We’ll start by joining
// a stream of songs that are played with an ID (playEvents) and
// enriching that with songTable to get their name.

final KStream songPlays =
    playEvents.leftJoin(songTable, (value1, song) -> song,
        Serdes.Long(), playEventSerde);

// Next, we create a state store, 'songPlayCounts', to track song play counts.

final KTable songPlayCounts =
    songPlays.groupBy((songId, song) -> song, keySongSerde, valueSongSerde)
        .count(SONG_PLAY_COUNT_STORE);

final TopFiveSerde topFiveSerde = new TopFiveSerde();

// Lastly, we’ll compute the top-five chart for each genre. The results
// of this computation continuously update the state, which we can then query.

songPlayCounts.groupBy(
        (song, plays) -> KeyValue.pair(song.getGenre().toLowerCase(),
            new SongPlayCount(song.getId(), plays)),
        Serdes.String(), songPlayCountSerde)


Stream Processing, Made Simple

Your data is rarely in the exact shape you’d like it to be. That’s why we aggregate, filter, and join it with such painstaking care. In a real-time environment, these actions are executed continuously by a streaming engine like KSQL or Kafka’s Streams API. Confluent Platform gives you an end-to-end solution that minimizes complexity: build your application, deploy it as you like, and start capturing the value of your data with millisecond latency, strong guarantees, and proven reliability.

  • Java
  • KSQL
Get started with stream processing >
# Kafka’s Connect API makes it simple to manage sources and sinks.
# For example, this Elasticsearch sink needs only a short,
# friendly config file.

# User-defined connector instance name.
name=elasticsearch-sink
# The class implementing the connector.
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
# Maximum number of tasks to run for this connector instance.
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=http://localhost:9200
type.name=kafka-connect

# Then once we’re set, we can run the Elasticsearch connector in one line:
$ confluent load elasticsearch

Connect all the things

Data, Where You Need It

Whether you’re feeding a search index, a caching layer, or a machine learning pipeline, Confluent makes it simple to build real-time data pipelines that get data where you need it, reliably. Confluent Platform includes pre-built, certified Kafka Connectors for popular data stores and systems.

  • Amazon S3
  • Elasticsearch
  • HDFS
  • MySQL
  • +More
Get started with data integration >

Real-Time in the Real World with Confluent Customers

View All Customer Stories
