Confluent Platform

Announcing the Confluent Platform 1.0

Neha Narkhede


We are very excited to announce the general availability of Confluent Platform 1.0, a stream data platform powered by Apache Kafka that enables high-throughput, scalable, reliable, and low-latency stream data management.

The Confluent Platform


This platform enables you to manage the barrage of data generated by your systems and makes it all available in real time, throughout your organization. At its core lies Apache Kafka, surrounded by additional components and tools that help you put Kafka to use effectively in your organization.

Confluent Platform 1.0 includes:

  1. Apache Kafka: The Confluent Platform has Apache Kafka at its core and ships with a Kafka distribution that is fully compatible with the latest open source release of Kafka, with a few critical patches applied on top of it.
  2. Schema Management Layer: All Confluent Platform components use a schema management layer that ensures data compatibility across applications and over time as data evolves.
  3. Improved Client Experience: Confluent Platform provides Java as well as REST clients that integrate with our schema management layer. The Java clients use a serializer that ships with the Confluent Platform to integrate with the schema management layer, as shown in the sketch after this list. Non-Java clients can use the REST APIs to send either Avro or binary data to Kafka.
  4. Kafka to Hadoop ETL: Confluent Platform also includes Camus, which provides a simple, automatic and reliable way of loading data into Hadoop.
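To make the client experience concrete, here is a minimal sketch of a Java producer that sends Avro data through the schema management layer using the Confluent Avro serializer. The topic name, the PageView schema, the class name, and the broker and schema registry addresses are illustrative assumptions, not part of the announcement:

```java
// Minimal sketch: a Java producer sending Avro records via the
// Confluent Avro serializer and schema management layer.
// Topic, schema, and endpoint addresses are illustrative assumptions.
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // The Confluent serializer registers and looks up schemas
        // against the schema management layer on your behalf.
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // A small Avro schema for a hypothetical page-view event.
        String schemaString = "{\"type\":\"record\",\"name\":\"PageView\","
            + "\"fields\":[{\"name\":\"user\",\"type\":\"string\"},"
            + "{\"name\":\"page\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(schemaString);

        GenericRecord pageView = new GenericData.Record(schema);
        pageView.put("user", "alice");
        pageView.put("page", "/index.html");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // Send one record to the (illustrative) "page_views" topic.
            producer.send(new ProducerRecord<>("page_views", "alice", pageView));
        }
    }
}
```

Non-Java applications can achieve the same effect through the REST APIs, posting Avro or binary payloads to a topic without linking against a Kafka client library.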

How Do I Get It?

Today, we are also posting a two-part guide that walks you through the motivation and steps for using such a stream data platform in your organization. You can also learn more about the details of the Confluent Platform or give it a spin.

We have a public mailing list and forum to discuss these tools and answer any questions, and we’d love to hear feedback from you.

What’s Next?

Before founding Confluent, we spent five years at LinkedIn turning all of its data into real-time streams. Every click, search, recommendation, profile view, machine metric, error and log entry was available centrally in real time. Part of that process was building a powerful set of in-house tools around our open source efforts in Apache Kafka that comprised LinkedIn's stream data platform. We got to witness the impact of this transition and want to make this possible for every company in the world. We think the rise of real-time streams represents a huge paradigm shift for how companies can use their data. Kafka's impressive adoption is evidence for this, but there is a lot left to do.

Today’s announcement is just the first step towards realizing this stream data platform vision. We look forward to building the rest of it in the months and years ahead. Stay tuned for more announcements from us.
