
Presentation

Reusing Kafka Data Structure Between Projects

Kafka Summit Europe 2021

Cut application delivery time by reusing Kafka data structures between projects! Expecting boundaries and data definitions to remain consistent between source and consuming projects can be a constant source of surprise, producing a Kafka spiderweb. Duplicate datasets with mutations often bring monetary, opportunity, and reputation costs, along with errors, inconsistencies, tech debt, a reactive approach to data problems, audit issues, data ownership confusion, and data that is not fit for purpose. Solving this requires moving toward uniform datasets, and that move requires an Enterprise Domain-Driven Design approach combined with Event Storming. Doing this architecture work upfront allows for real-time, ubiquitous, distributed data implementations instead of a Kafka spiderweb. This talk will present the design, along with a demo illustrating the approach.
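As a rough sketch of the "uniform dataset" idea, the snippet below shows one way a domain team might publish events against a single canonical Avro schema, registered in Confluent Schema Registry so that consuming projects reuse the same contract instead of redefining the data shape. The topic name, the `OrderPlaced` schema, and the broker and registry URLs are illustrative assumptions, not from the talk itself.

```java
// A minimal sketch, assuming Kafka, Confluent Schema Registry, and the
// kafka-avro-serializer dependency are available. All names and URLs below
// are hypothetical placeholders.
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderPlacedProducer {

    // One schema, owned by the originating domain, registered once and reused
    // by every consuming project rather than each team duplicating the shape.
    private static final Schema ORDER_PLACED = SchemaBuilder
            .record("OrderPlaced").namespace("com.example.orders")
            .fields()
            .requiredString("orderId")
            .requiredString("customerId")
            .requiredDouble("totalAmount")
            .endRecord();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumption
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // Schema Registry enforces the shared contract: incompatible schema
        // mutations are rejected at produce time instead of surfacing as
        // breakage in downstream consumers.
        props.put("schema.registry.url", "http://localhost:8081"); // assumption

        GenericRecord event = new GenericData.Record(ORDER_PLACED);
        event.put("orderId", "o-1001");
        event.put("customerId", "c-42");
        event.put("totalAmount", 99.95);

        try (KafkaProducer<String, GenericRecord> producer =
                     new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(
                    "orders.order-placed",
                    event.get("orderId").toString(),
                    event));
        }
    }
}
```

Because every project resolves the same registered schema, the contract lives in one place; consuming teams deserialize against it directly rather than maintaining drifting local copies of the data definition.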
