
Presentation

Designing a Data Mesh with Kafka

Kafka Summit London 2023

“Data Mesh’s objective is to create a foundation for getting value from analytical data and historical facts at scale” [Dehghani, Data Mesh founder]

If the central concern of a Data Mesh is enabling analytics, then how is Kafka relevant?

In this talk we will describe how we applied the Data Mesh’s founding principles to our operational plane, which is based on Kafka. As a consequence, we have gained value from these principles well beyond analytics. One example is treating data as-a-product, i.e. data that is discoverable, addressable, trustworthy and self-describing.
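To make the as-a-product idea concrete, here is a minimal sketch of publishing a self-describing event on Kafka. This is not Saxo Bank’s actual code; the topic, schema, and endpoint names are invented placeholders. The point is that binding each record to a schema registered in Schema Registry gives consumers a discoverable, versioned contract rather than tribal knowledge.

```python
# Minimal "data as-a-product" sketch: the event is self-describing because
# its Avro schema (with doc strings) is registered in Schema Registry under
# the topic's subject. All names here are illustrative placeholders.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer

SCHEMA = """
{
  "type": "record",
  "name": "TradeSettled",
  "namespace": "com.example.posttrade",
  "doc": "One settled trade. The doc fields make the product self-describing.",
  "fields": [
    {"name": "trade_id",   "type": "string"},
    {"name": "instrument", "type": "string"},
    {"name": "amount",     "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
producer = SerializingProducer({
    "bootstrap.servers": "broker:9092",
    "key.serializer": StringSerializer("utf_8"),
    # Registering the schema ties every record to a versioned contract;
    # compatibility checks on evolution keep the product trustworthy.
    "value.serializer": AvroSerializer(registry, SCHEMA),
})

producer.produce(
    topic="posttrade.trade-settled.v1",  # addressable: the topic name is the address
    key="trade-42",
    value={"trade_id": "trade-42", "instrument": "EURUSD", "amount": 1000000.0},
)
producer.flush()
```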

We will then describe our implementation, with deep dives into Cluster Linking, Connectors and Single Message Transforms (SMTs). Finally, we will discuss how dramatically this has simplified our analytical plane and the experience of its consuming personas.
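The deep dives belong to the talk itself, but as a rough illustration of the Cluster Linking mechanics: a link is created on the destination cluster, and mirror topics then replicate byte-for-byte (offsets preserved) from the source. The sketch below drives the Confluent CLI; the link, cluster and topic names are invented, and flag spellings vary across CLI versions, so treat it as a shape rather than a recipe.

```python
# Hypothetical Cluster Linking setup via the Confluent CLI, driven from
# Python. Names and IDs are placeholders; check `confluent kafka link create
# --help` for the exact flags in your CLI version.
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# 1. On the destination cluster, create a link pointing at the source.
run("confluent", "kafka", "link", "create", "ops-to-analytics",
    "--source-cluster", "lkc-source123",
    "--source-bootstrap-server", "pkc-source.example.com:9092")

# 2. Mirror a topic over the link. The mirror is read-only on the
#    destination and stays byte-for-byte identical to the source topic.
run("confluent", "kafka", "mirror", "create", "posttrade.trade-settled.v1",
    "--link", "ops-to-analytics")
```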

Agenda
• Saxo Bank’s implementation of the Data Mesh
• Cluster Linking - Why? How?
• Data lake connectors – configuration and auto-deployment strategy (see the sketch after this list)
• Mapping Kafka to the data lake infrastructure
• Enabling analytics, data warehousing and production support
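For the connector agenda item, the sketch below shows one plausible auto-deployment shape, not necessarily the strategy presented in the talk: connector configs live in source control and are pushed idempotently to the Kafka Connect REST API, with an SMT stamping each record with its owning data product. The S3 sink connector, the InsertField SMT, and the PUT endpoint are real; the topic, bucket, and Connect URL are placeholders.

```python
# Sketch of idempotent connector deployment against the Kafka Connect
# REST API. PUT /connectors/<name>/config creates the connector if it is
# missing and updates it otherwise, which suits a CI-driven pipeline.
import requests

name = "datalake-sink-trade-settled"
config = {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "posttrade.trade-settled.v1",
    "s3.bucket.name": "example-data-lake",          # placeholder bucket
    "s3.region": "eu-west-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
    "flush.size": "1000",
    # Single Message Transform: stamp each record with its owning data
    # product so the lake can map files back to the mesh.
    "transforms": "tagProduct",
    "transforms.tagProduct.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.tagProduct.static.field": "data_product",
    "transforms.tagProduct.static.value": "posttrade.trade-settled",
}

resp = requests.put(
    f"http://connect:8083/connectors/{name}/config",  # placeholder Connect URL
    json=config,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```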
