We are pleased to announce the release of KSQL 0.4, aka the January 2018 release of KSQL. As usual, this release is a mix of new features and stability improvements.
Let’s take a look at what is new in this release.
We’ve updated the PRINT TOPIC command to output the contents of any Kafka topic in the Kafka cluster, not just the topics that are already mapped to KSQL streams and tables. This provides a simple way to “peek” at your topics for data discovery and exploration. Check out the PRINT TOPIC documentation for more information.
Example output (a hypothetical session, assuming a JSON-formatted topic named pageviews exists in the cluster):
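```
ksql> PRINT 'pageviews' FROM BEGINNING;
Format:JSON
{"ROWTIME":1516297983000,"ROWKEY":"1","viewtime":1,"userid":"User_5","pageid":"Page_12"}
{"ROWTIME":1516297984500,"ROWKEY":"2","viewtime":2,"userid":"User_9","pageid":"Page_24"}
```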
The SHOW TOPICS command has been enhanced to include the number of active consumers as well as the number of active consumer groups that are reading each topic.
Consumer groups are a feature of Apache Kafka that enables multiple consumer processes to divide up the work of consuming a Kafka topic. You can learn more about them in the Kafka Consumer JavaDocs, and of course you should read the SHOW TOPICS documentation for more information.
Example output (illustrative; the topic names and counts below are made up):
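```
ksql> SHOW TOPICS;

 Kafka Topic | Registered | Partitions | Partition Replicas | Consumers | Consumer Groups
-------------------------------------------------------------------------------------------
 pageviews   | true       | 1          | 1                  | 4         | 2
 users       | true       | 1          | 1                  | 1         | 1
-------------------------------------------------------------------------------------------
```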
We added two new aggregation functions, TOPK and TOPKDISTINCT.
The TOPK function allows you to select the top K values for a given key in a given window. It is a more general form of the MAX aggregate function: MAX is effectively the special case where K=1.
For example, if you want to compute the 5 highest-value orders per zip code per hour, you can now run a query along these lines (a sketch, assuming an orders stream with order_value and zipcode columns):
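```sql
-- Sketch only: the stream and column names are assumptions.
SELECT zipcode, TOPK(order_value, 5)
FROM orders
WINDOW TUMBLING (SIZE 1 HOUR)
GROUP BY zipcode;
```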
The TOPKDISTINCT function is similar to the TOPK function, except that it outputs the top K distinct values for a given key in a given window.
For example, to compute the 5 most recent distinct view times for each page, you can run a query like this (again assuming a pageviews stream with pageid and viewtime columns):
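```sql
-- Sketch only: the stream and column names are assumptions.
SELECT pageid, TOPKDISTINCT(viewtime, 5)
FROM pageviews
GROUP BY pageid;
```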
In the December 2017 release, we added JMX metrics that give insight into what is happening inside your KSQL servers. These metrics include the number of messages processed, the total throughput, the throughput distribution, the error rate, and more.
We also now ship binary tarballs for each release in addition to Docker images. This was a much-requested feature from users who don’t use Docker and who no longer need to build KSQL from source to get the latest release. Of course, if you still want to build a development version of KSQL from source, you can continue to do so.
Finally, we have continued to invest in improving our test coverage. In particular, we added fully distributed system tests for KSQL that stand up pools of KSQL servers and Kafka clusters, then verify correctness under rolling bounces and other failures of individual server nodes. This is a big step toward making KSQL ready for prime-time production use.
If you have enjoyed this article, you might want to continue with the following resources to learn more about KSQL:
If you are interested in contributing to KSQL, we encourage you to get involved by sharing your feedback via the KSQL issue tracker, voting on existing issues by giving your +1, or opening pull requests. Use the #ksql channel in our public Confluent Slack community to ask questions, discuss use cases or help fellow KSQL users.