์‹ค์‹œ๊ฐ„ ์›€์ง์ด๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ๊ฐ€์ ธ๋‹ค ์ค„ ๊ฐ€์น˜, Data in Motion Tour์—์„œ ํ™•์ธํ•˜์„ธ์š”!

Enabling product personalisation using Apache Kafka, Apache Pinot and Trino

Kafka Summit London 2022

Our core banking platform has been built using domain-driven design and microservices, and whilst this provides many well-known advantages, it also presents some challenges. Data encapsulation means each application has its own data store, so it becomes impossible to query the state of a customer's relationship in its totality in order to offer the right products. This challenge becomes even harder if we want to personalise products based on aggregate values of a customer's behaviour over potentially long periods of time.

In this session, we describe how we overcame this problem to enable dynamic charging and rewards based on customer behaviour in a banking scenario. We cover:

• How we guarantee consistency between our event stream and our OLTP databases using the Outbox pattern.
• The design decisions faced when considering the schema designs in Pinot, and how we balanced flexibility and latency using Trino.
• Two patterns for enriching the event stream using Kafka Streams, and how we dealt with late-arriving events and transactions.
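The first point, the Outbox pattern, can be sketched briefly. The idea is that the business-state update and the outgoing event are written in the same database transaction, so the event stream can never diverge from the OLTP state; a separate relay (e.g. a CDC tool such as Debezium, or a polling job) then publishes outbox rows to Kafka. The sketch below uses Python with an in-memory SQLite database as a stand-in for the OLTP store; all table, column, and topic names are illustrative, not taken from the talk.

```python
# Minimal sketch of the Outbox pattern. SQLite stands in for the OLTP
# database; table/column/topic names are hypothetical.
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE outbox (event_id TEXT PRIMARY KEY, topic TEXT, payload TEXT)")

def apply_transaction(account_id: str, amount: int) -> None:
    """Update business state and record the event atomically.

    Because both writes share one DB transaction, either both happen or
    neither does -- the event stream stays consistent with the OLTP state.
    """
    with conn:  # one atomic transaction for both statements
        conn.execute(
            "INSERT INTO accounts (id, balance) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET balance = balance + excluded.balance",
            (account_id, amount),
        )
        conn.execute(
            "INSERT INTO outbox (event_id, topic, payload) VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "transactions",
             json.dumps({"account": account_id, "amount": amount})),
        )

apply_transaction("acc-1", 100)
# A separate relay process would read new outbox rows in insertion order,
# publish each to its Kafka topic, and mark (or delete) the row once sent.
```

A failed business write rolls back the outbox insert too, which is the consistency guarantee the pattern provides; the trade-off is at-least-once delivery from the relay, so downstream consumers still need to deduplicate on the event ID.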
