Many Sources, Many Sinks, One Stream

Current 2022

The concept of the Data Mesh is making headway in enterprise data design, fueled by its core principles: contextual data domains, local governance, and decentralized integration. Kafka makes the data mesh scalable and resilient through event sourcing and replication. But how do you join multiple data domains on a single node in your mesh, when they all need to stay consistent with the same data changes, without going back to a central data store?

In this session we’ll introduce the concept of the Canonical Stream: an ordered, declarative event stream of information about a thing that exists in the real world, with its own context and governance. The Canon is technology agnostic and data-context agnostic: events on the Canon provide updates about the thing itself, and must be consumed and interpreted differently in each data domain.
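
To make that idea concrete, here is a minimal sketch (ours, not the speakers’ definition) of one canonical event and two domain-specific readings of it. The event shape, field names, and domains are hypothetical, chosen only for illustration:

    // A single ordered, declarative event about "the thing itself":
    // a customer moving to a new country. It carries no consumer context.
    public class CanonicalEventDemo {

        // Hypothetical canonical event; field names are assumptions.
        record CustomerMoved(String customerId, String newCountry, long sequence) {}

        public static void main(String[] args) {
            var event = new CustomerMoved("c-42", "DE", 1017L);

            // The billing domain interprets the same event as a
            // tax-jurisdiction change...
            System.out.println("billing: recalculate VAT for " + event.customerId()
                    + " under jurisdiction " + event.newCountry());

            // ...while the shipping domain interprets it as a routing change.
            System.out.println("shipping: reroute deliveries for " + event.customerId()
                    + " to region " + event.newCountry());
        }
    }

The point of the sketch is that the event describes only the real-world change; the billing and shipping meanings live entirely on the consuming side, inside each domain’s own context.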

You’ll learn:

  • When it’s appropriate to use a Canonical Stream, and when it isn’t
  • How to build your Kafka applications in layers to maintain data context boundaries (see the sketch after this list)
  • How to manage schemas, transformations, and business logic in a decentralized, locally governed way
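
As a rough illustration of that layering (a sketch under assumed topic names, not the session’s reference architecture), a Kafka Streams topology might merge source-aligned ingest topics into one canonical stream, then derive locally governed domain views from it. A real implementation would use each domain’s own managed schemas rather than plain strings:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class LayeredTopologySketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            // Layer 1: source-aligned ingest topics, one per upstream
            // system (topic names are hypothetical).
            KStream<String, String> crm = builder.stream("ingest.crm.customers");
            KStream<String, String> web = builder.stream("ingest.web.signups");

            // Layer 2: the canonical stream. merge() only interleaves the
            // inputs; Kafka orders records per key within a partition, so a
            // real Canon must be keyed by the thing's identity to stay ordered.
            KStream<String, String> canon = crm.merge(web);
            canon.to("customer.canon");

            // Layer 3: domain views. Each domain applies its own locally
            // governed transformation and writes inside its own boundary.
            canon.mapValues(v -> "billing-view:" + v).to("billing.customers");
            canon.mapValues(v -> "support-view:" + v).to("support.customers");

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "layered-canon-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            new KafkaStreams(builder.build(), props).start();
        }
    }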
