Announcing the latest updates to Confluent’s cloud-native data streaming platform: Kora Engine, Data Quality Rules, Custom Connectors, Stream Sharing, and more.
Today I’d like to give a tour of the internals of Confluent’s Apache Kafka® service. Powering this is a next-generation engine, Kora. Kora is a cloud data service that serves up the Kafka protocol for our thousands of customers and their tens of thousands of clusters.
I'm excited to share our intent to acquire Immerok! Together, we’ll build a cloud-native service for Apache Flink that delivers the same simplicity, security, and scalability that you expect from Confluent for Kafka.
I am very excited to announce the 0.10 release of Apache Kafka and the 3.0 release of Confluent Platform. This release marks the availability of Kafka […]
Last week, Confluent hosted Kafka Summit, the first ever conference to focus on Apache Kafka and stream processing. It was exciting to see the stream processing community coming together in […]
Apache Kafka is designed to be highly performant, reliable, scalable, and fault tolerant. At the same time, the performance and reliability of a Kafka cluster are highly dependent on the […]
This post was written by guest blogger Rajiv Kurian from SignalFx. SignalFx is a member of the Confluent partner program. Rajiv is a software engineer with over five years of experience building […]
The Apache Kafka community was crazy-busy last month. We released a technical preview of Kafka Streams and then voted on a release plan for Kafka 0.10.0. We accelerated the discussion […]
I’m really excited to announce a major new feature in Apache Kafka v0.10: Kafka’s Streams API. The Streams API, available as a Java library that is part of the official […]
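To make that concrete, here is a rough sketch of what an application using the Streams API can look like. It is not taken from the post: the broker address and topic names are placeholders chosen for illustration, and it uses the present-day StreamsBuilder classes rather than the original 0.10 API.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-demo");        // names the app's consumer group and state
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("input-topic");   // placeholder topic names
        lines.mapValues(v -> v.toUpperCase())                            // simple per-record transformation
             .to("output-topic");                                        // write results back to Kafka

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();                                                 // runs until the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because it is just a library, this runs as an ordinary Java application; there is no separate processing cluster to operate.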
It was another productive month in the Apache Kafka community. Many of the KIPs that were under active discussion in the last Log Compaction were implemented, reviewed, and merged into […]
A few months ago, we announced the major release of Apache Kafka 0.9, which added several new features like security, Kafka Connect, and the new Java consumer, along with critical bug […]
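As a quick illustration of the new Java consumer mentioned above, here is a hedged sketch rather than code from the release notes: the broker address, group id, and topic name are placeholders, and it uses the poll(Duration) signature from later client versions.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "demo-group");                 // consumer group used for offset tracking
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));  // group-managed subscription
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```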
For a long time, a substantial portion of companies’ data processing ran as big batch jobs: CSV files dumped out of databases, log files collected at the […]
Welcome to the February 2016 edition of Log Compaction, a monthly digest of highlights in the Apache Kafka and stream processing community. Got a newsworthy item? Let us know.
TLS, Kerberos, SASL, and Authorizer in Apache Kafka 0.9 – Enabling New Encryption, Authorization, and Authentication Features. Apache Kafka is frequently used to store critical data, making it one of […]
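For a sense of what the client side of these features looks like, here is a hedged sketch of the properties a producer or consumer might set to talk to a cluster secured with TLS and SASL/Kerberos; the hostname, service name, and truststore path are placeholders, not values from the post.

```java
import java.util.Properties;

public class SecureClientConfig {
    // Illustrative client-side settings for a TLS + SASL/Kerberos secured cluster;
    // hostname, port, service name, and truststore path are placeholders.
    public static Properties secureProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093");  // assumed TLS listener port
        props.put("security.protocol", "SASL_SSL");                  // encrypt traffic and authenticate the client
        props.put("sasl.kerberos.service.name", "kafka");            // Kerberos service principal of the brokers
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        return props;                                                // pass to a KafkaProducer or KafkaConsumer
    }
}
```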
When Apache Kafka® was originally created, it shipped with a Scala producer and consumer client. Over time we came to realize many of the limitations of these APIs. For example, […]
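For contrast, here is a rough sketch of the Java producer that replaced the old Scala client. It is not taken from the post; the broker address, topic, and record contents are placeholders.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("acks", "all");                          // wait for full replication before acknowledging
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; the callback reports the partition and offset, or an error
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello"),
                    (RecordMetadata metadata, Exception e) -> {
                        if (e != null) {
                            e.printStackTrace();
                        } else {
                            System.out.printf("wrote to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        }  // close() flushes any buffered records
    }
}
```

Unlike the old Scala client, this API batches and retries sends asynchronously under the hood, which is part of what the post refers to when discussing the limitations of the original clients.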
When we released Apache Kafka 0.9.0.0, we talked about all of the big new features we added: the new consumer, Kafka Connect, security features, and much more. What we didn’t […]
Happy 2016! Wishing you a wonderful, highly scalable, and very reliable year. Log Compaction is a monthly digest of highlights in the Apache Kafka and stream processing community. Got a newsworthy item? Let us […]