Happy 2017! Wishing you a wonderful year full of fast and scalable data streams.
Many things have happened since we last shared the state of Apache Kafka™ and the streams ecosystem. Let’s take a look!
Most importantly – we did a bug fix release. Apache Kafka 0.10.1.1 fixes some critical issues found in the 0.10.1.0 release. There is a pretty substantial list of fixes, so if you are running 0.10.1.0, we recommend upgrading to avoid running into issues we already resolved.
Kafka Summit! If you haven’t heard – last year was so successful that we are doing two events this year: New York on May 8th and San Francisco on August 28th. The call for papers closes soon, so please submit your talk proposals this week!
There are many KIPs (Kafka Improvement Proposals) being discussed on the Kafka developer mailing list, and many of them propose substantial improvements:
- KIP-48 proposes adding delegation tokens to Kafka’s long list of authentication mechanisms, and KIP-84 adds support for the SASL/SCRAM mechanisms as well.
- KIP-66 adds single message transforms to Kafka Connect, which will allow lightweight processing of individual events as they are streamed in and out of Kafka by connectors. This is useful when you want to remove a sensitive field from records, add a timestamp or UUID, or route different events to different topics.
- KIP-99 adds global tables to the Streams API in Kafka. This will allow loading small dimension tables, unpartitioned, into the local state of each Streams API node, which means you can enrich a data stream with multiple dimensions without an expensive re-partitioning step for each join. This is similar to a broadcast join in parallel queries on a traditional data warehouse.
- KIP-101 proposes a modification to the message format in order to solve a few known problems that, in rare cases, can cause inconsistencies between replicas. Both the description of the problems and the proposed solution will be of interest to anyone who enjoys diving into distributed systems.
- KIP-103 proposes a new configuration that will allow separating internal from external client traffic. This will be useful for the many SREs who want to run internal traffic on a different network, and for container and cloud deployments where internal and external traffic have different configurations and costs.
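To make the KIP-66 idea concrete, a connector configuration might chain a couple of transforms – one adding an ingest timestamp, one dropping a sensitive field. This is only a sketch based on the proposal under discussion; the connector name, field names, and transform classes here are illustrative, and the final syntax depends on how the KIP lands:

```properties
# Hypothetical Connect worker/connector config sketch (KIP-66 still in discussion)
name=example-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector

# Apply two single message transforms, in order, to each record's value
transforms=addTs,dropSsn
transforms.addTs.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.addTs.timestamp.field=ingest_ts
transforms.dropSsn.type=org.apache.kafka.connect.transforms.ReplaceField$Value
transforms.dropSsn.blacklist=ssn
```

The appeal is that this kind of light, per-record massaging happens inside the Connect worker, without standing up a separate stream processing job.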
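The broadcast-join idea behind KIP-99 can be illustrated without the Streams API at all. In the sketch below – which is conceptual, not the proposed API; the class, `regionByUser` table, and `enrich` method are all hypothetical – every processing node holds a full local copy of a small, unpartitioned dimension table, so enriching a stream record is a local lookup with no repartitioning:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of a global-table (broadcast) join: the small dimension
// table is fully replicated to each node, so joins never shuffle the stream.
public class GlobalTableSketch {
    // Hypothetical replicated dimension table: user -> region.
    static final Map<String, String> regionByUser = new HashMap<>();
    static {
        regionByUser.put("alice", "EU");
        regionByUser.put("bob", "US");
    }

    // Enrich one stream event by joining against the local table copy.
    public static String enrich(String userId, String event) {
        return event + " @" + regionByUser.getOrDefault(userId, "unknown");
    }

    public static void main(String[] args) {
        System.out.println(enrich("alice", "login"));  // login @EU
        System.out.println(enrich("carol", "click"));  // click @unknown
    }
}
```

Contrast this with a partitioned table join, where the stream must first be re-keyed and shuffled so that matching records land on the same node – the cost KIP-99 is designed to avoid for small tables.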
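And a rough sketch of what KIP-103's traffic separation might look like in a broker configuration – again, the property names come from the proposal and could change before it is adopted:

```properties
# Hypothetical broker config sketch based on the KIP-103 proposal:
# bind one listener for internal (replication, trusted) traffic and one
# for external clients, each with its own security protocol.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
```

With something like this, replication stays on the cheap internal network while external clients come in over a separately secured (and separately billed, in the cloud) endpoint.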
In addition to the many KIPs, there are some interesting releases, blogs and presentations I’d recommend checking out:
- Apache Spark 2.1.0 was released. The highlight for the stream processing community is the addition of event-time watermarks to Structured Streaming.
- It is a tradition to begin the new year with some predictions! For example, what do you think is the future of streams in financial services?
- Apache Flink® published a review of the Flink community’s activities over the last year.
- And Datanami reviewed 2016 for the entire Big Data industry.
- This presentation gives a nice overview of stream processing use cases, with more detail than most.
- A useful and funny presentation on the use of stream processing to handle billing in cloud environments.
- And in case you were wondering why managing data for microservices is so challenging – Ben Stopford explains the Data Dichotomy.
- A great discussion of why an embedded database is a must for stream processing, including a look at how this is done in Flink, Kafka Streams and Samza.