

Enabling product personalisation using Apache Kafka, Apache Pinot and Trino

Kafka Summit London 2022

Our core banking platform has been built using domain-driven design and microservices, and whilst this provides many well-known advantages, it also presents some challenges. Data encapsulation means each application has its own data store, so no single store can be queried for the full state of a customer’s relationship when deciding which products to offer. This challenge becomes even harder if we want to personalise products based on aggregate values of a customer’s behaviour over potentially long periods of time.

In this session, we describe how we overcame this problem to enable dynamic charging and rewards based on customer behaviour in a banking scenario. We cover:

- How we guarantee consistency between our event stream and our OLTP databases using the Outbox pattern.
- The design decisions faced when considering the schema designs in Pinot, and how we balanced flexibility and latency using Trino.
- Two patterns for enriching the event stream using Kafka Streams, and how we dealt with late-arriving events and transactions.
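The Outbox pattern mentioned above guarantees consistency by writing the state change and the outgoing event in the same database transaction; a separate relay then reads the outbox and publishes to Kafka. A minimal sketch of the transactional side, using SQLite and hypothetical table and event names (the talk's actual schema is not shown here):

```python
import json
import sqlite3
import uuid

# Hypothetical schema: a business table plus an outbox table polled by a relay.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER);
CREATE TABLE outbox (event_id TEXT PRIMARY KEY, topic TEXT, payload TEXT);
""")
conn.execute("INSERT INTO accounts VALUES ('acc-1', 100)")
conn.commit()

def debit(conn, account_id, amount):
    """Apply the state change and record the event atomically."""
    with conn:  # one transaction: both writes commit, or neither does
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
        conn.execute(
            "INSERT INTO outbox VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "account-events",
             json.dumps({"type": "Debited",
                         "account": account_id,
                         "amount": amount})),
        )

debit(conn, "acc-1", 30)
# A separate relay process would read the outbox and publish to Kafka,
# deleting (or marking) rows only after the broker acknowledges them.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE id = 'acc-1'").fetchone()[0]
pending = conn.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]
print(balance, pending)  # 70 1
```

Because the event row commits atomically with the balance update, the stream can never reflect a change the OLTP database did not record, and vice versa.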
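On late-arriving events: Kafka Streams handles these with windowed aggregations that keep a window open for a configurable grace period after its end, dropping events that arrive later still. A toy in-memory sketch of that behaviour (the window and grace values are illustrative, not the talk's configuration):

```python
from collections import defaultdict

WINDOW = 60  # window size in seconds (illustrative)
GRACE = 30   # how long after window end late events are still accepted

class WindowedSum:
    """Toy event-time aggregation mirroring a windowed aggregate
    with a grace period; values here are transaction amounts."""

    def __init__(self):
        self.windows = defaultdict(int)  # window start -> running sum
        self.max_seen = 0                # high-water mark of event time

    def add(self, event_time, amount):
        self.max_seen = max(self.max_seen, event_time)
        start = (event_time // WINDOW) * WINDOW
        if start + WINDOW + GRACE <= self.max_seen:
            return False  # too late: window already closed, event dropped
        self.windows[start] += amount
        return True

agg = WindowedSum()
agg.add(10, 5)         # lands in window [0, 60)
agg.add(130, 2)        # advances event time well past 0 + WINDOW + GRACE
late = agg.add(50, 7)  # late event for window [0, 60): rejected
print(agg.windows[0], late)  # 5 False
```

A real deployment would also need a policy for the rejected events (a dead-letter topic, or reconciliation against the OLTP store); the sketch simply drops them.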
