Streams and Tables: Two Sides of the Same Coin

Matthias J. Sax, Guozhang Wang

We are happy to announce that our paper Streams and Tables: Two Sides of the Same Coin has been published and is available for free download. The paper was presented at the Twelfth International Workshop on Real-Time Business Intelligence and Analytics (BIRTE), held in conjunction with the 44th International Conference on Very Large Data Bases (VLDB) in Rio de Janeiro, Brazil, in August of this year.

The BIRTE workshop attracted many participants and hosted keynote, research, industry, and demo sessions, as well as a panel discussion about data stream processing.

Paper summary

The paper is joint work between Confluent and Humboldt-Universität zu Berlin. It describes the Dual Streaming Model, which is the foundation of Kafka Streams’ and KSQL’s stream processing semantics:

In this paper, we introduce the Dual Streaming Model to reason about physical and logical order in data stream processing. This model presents the result of an operator as a stream of successive updates, which induces a duality of results and streams. As such, it provides a natural way to cope with inconsistencies between the physical and logical order of streaming data in a continuous manner, without explicit buffering and reordering. We further discuss the trade-offs and challenges faced when implementing this model in terms of correctness, latency, and processing cost. A case study based on Apache Kafka illustrates the effectiveness of our model in the light of real-world requirements.

The Dual Streaming Model builds on the so-called stream-table duality, which allows you to unify data streams and relational tables into a holistic data processing model. Thus, data streams and continuously updating tables are the two core abstractions in the model. Additionally, the Dual Streaming Model decouples the handling of late-arriving (i.e., out-of-order) data from latency concerns, and it opens up a design space for the user between processing cost, accepted latency, and result completeness that no other model offers.

Figure 1. Design space
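
To make the stream-table duality concrete, here is a minimal Kafka Streams sketch. The topic names and the click-counting use case are illustrative assumptions rather than examples from the paper: a stream of events is aggregated into a continuously updating table, and the table’s changelog is read back out as a stream of successive updates.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class StreamTableDualityExample {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // A stream of click events, keyed by user ID (hypothetical topic name).
        final KStream<String, String> clicks = builder.stream("user-clicks");

        // Aggregating the stream yields a continuously updating table:
        // every input record produces a new revision of the per-user count.
        final KTable<String, Long> clickCounts = clicks
            .groupByKey()
            .count();

        // Reading the table back as a stream exposes its changelog, i.e., the
        // succession of updates, which is the other side of the duality.
        clickCounts
            .toStream()
            .to("user-click-counts", Produced.with(Serdes.String(), Serdes.Long()));

        final Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-table-duality-example");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), config).start();
    }
}
```

In this sketch, a record that arrives late simply triggers another revision of the affected row, illustrating how the model copes with out-of-order data in a continuous manner, without explicit buffering or reordering.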

The wide adoption and growth of Kafka Streams and KSQL among enterprises show that the Dual Streaming Model solves real-world problems across all types of industries. We are elated to share our paper for free so you can become the stream processing expert in your company and take your business to the next level.

Happy reading! 🙂
