DRIVEN BY DATA
Data Access Turned Inside Out
Today, your data is locked up in systems that don't talk to each other. You want a single, real-time source of truth about your business that reaches the very endpoints of your organization: a central nervous system that lets you react as events arrive, turning inaccessible systems inside out.
Be Ready for What’s Next
What is fast without reliable? Confluent Platform adds administration, data management, and operations tools, plus robust testing, to your Kafka environment. You can even have your Kafka environment hosted in the cloud, by the original creators of Apache Kafka®.
Support Trillions of Messages
Whether your priority is cost or performance, an architecture based on a highly scalable, distributed streaming platform like Confluent grows with your business and the expanding data pipelines that come with it, improving performance while delivering cost savings.
STORE, PROCESS, PUBLISH + SUBSCRIBE
The Streaming Platform
A streaming platform combines the data distribution strength of a publish/subscribe model with a storage layer and a processing layer. This makes it much easier to create data pipelines and connect them to all of your systems.
Built for Developers
Your Platform for real-time streaming apps and pipelines
Modern applications rely on real-time data to personalize experiences, recognize fraudulent behavior, and deliver live dashboards. As a consumer, it's great; as a developer, it's revolutionary. A streaming platform provides a single, massively scalable solution for building real-time applications and reliably delivering data wherever you need it.
# It's easy to get up and running with all things Confluent
# once you've downloaded and set up the confluent CLI.
$ confluent start
starting zookeeper
zookeeper is [UP]
starting kafka
kafka is [UP]
starting schema-registry
schema-registry is [UP]
POWERFUL AND EASY TO USE
A Single Solution for Streaming
Confluent and Apache Kafka® make it easy to work with continuous streams of data. Confluent provides a better-packaged distribution of Kafka and adds open source developer tools to get you started quickly.
- REST Proxy
// You can use Kafka's Streams API or KSQL to easily process our stream
// without any additional clusters to do some powerful things, like get a
// top-5 chart of songs played. We'll start by joining a stream of songs
// that are played, keyed by ID (playEvents), and enriching it with
// songTable to get each song's name.
final KStream songPlays =
    playEvents.leftJoin(songTable,
        (value1, song) -> song,
        Serdes.Long(), playEventSerde);

// Next, we create a state store, 'songPlayCounts', to track song play counts.
final KTable songPlayCounts =
    songPlays.groupBy((songId, song) -> song, keySongSerde, valueSongSerde)
             .count(SONG_PLAY_COUNT_STORE);

final TopFiveSerde topFiveSerde = new TopFiveSerde();

// Lastly, we'll compute the top-five charts for each genre. The results of
// this computation continuously update the state, which we can then query.
songPlayCounts
    .groupBy((song, plays) ->
            KeyValue.pair(song.getGenre().toLowerCase(),
                          new SongPlayCount(song.getId(), plays)),
        Serdes.String(), songPlayCountSerde)
WHERE STREAM MEETS TABLE
Stream Processing, Made Simple
Your data is rarely in the exact shape you'd like it to be. That's why we aggregate, filter, and join it with such painstaking care. In a real-time environment, these actions are continuously executed with a streaming engine like KSQL or Kafka's Streams API. Confluent Platform gives you an end-to-end solution that minimizes complexity: you can build your application, deploy as you like, and start capturing the value of your data with millisecond latency, strong guarantees, and proven reliability.
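As a sketch of what this looks like in KSQL, a single continuous query can filter a stream as events arrive; the `payments` stream and its columns below are hypothetical, chosen only to illustrate the idea:

```sql
-- Continuously filter a stream of payment events into a new stream,
-- keeping only the large transactions worth a closer look.
-- ('payments' and its columns are illustrative placeholders.)
CREATE STREAM suspicious_payments AS
  SELECT user_id, amount, card_country
  FROM payments
  WHERE amount > 10000;
```

The query runs indefinitely: every new event on `payments` is evaluated against the predicate, and matches are published to `suspicious_payments` with no separate processing cluster to operate.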
# Kafka's Connect API makes it simple to manage sources and sinks.
# For example, this Elasticsearch sink needs only a short, friendly
# config file.

# User-defined connector instance name
name=elasticsearch-sink
# The class implementing the connector
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
# Maximum number of tasks to run for this connector instance
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=http://localhost:9200
type.name=kafka-connect

# Then, once we're set, we can run the Elasticsearch connector in one line:
$ confluent load elasticsearch
CONNECT ALL THE THINGS
Data, Where You Need It
Whether you're feeding a search index, a caching layer, or a machine learning pipeline, Confluent makes it simple to build real-time data pipelines that reliably get data where you need it. Confluent Platform includes pre-built, certified Kafka Connectors for popular data stores and systems.
- Amazon S3
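An S3 sink, for instance, follows the same config-file pattern as the Elasticsearch sink shown earlier. This is only a sketch: the bucket, region, and topic names are placeholders you would replace with your own.

```properties
# Hypothetical config sketch for an S3 sink connector instance;
# bucket, region, and topic names are placeholders.
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=test-s3-sink
s3.bucket.name=my-kafka-archive
s3.region=us-west-2
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=1000
```

Once loaded, the connector continuously drains the configured topic into S3, committing a new object after every `flush.size` records.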
Join leading teams building with Confluent
"Working with the Apache Kafka experts at Confluent gives us the confidence to scale the data pipeline, improving business operations and overall reliability."
WE CAN'T WAIT TO SEE WHAT YOU STREAM UP