์‹ค์‹œ๊ฐ„ ์›€์ง์ด๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ๊ฐ€์ ธ๋‹ค ์ค„ ๊ฐ€์น˜, Data in Motion Tour์—์„œ ํ™•์ธํ•˜์„ธ์š”!

Use ksqlDB to migrate core-banking processing from batch to streaming

Core banking systems are batch-oriented, typically running heavy overnight batch cycles before business opens each morning. In this talk I will explain some of the common interface points between core-banking infrastructure and event streaming systems. Then I will focus on how to do stream processing on core-banking-shaped data using ksqlDB, showing how to perform common operations with various ksqlDB functions. The key features are Avro record keys and multi-key joins (ksqlDB 0.15), schema management, and state store planning.
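As a rough illustration of that style of processing, the sketch below joins a transaction stream to an account table on a shared Avro record (struct) key, so each transaction is enriched as it arrives rather than in an overnight batch. The topic names (core.txns, core.accounts) and field names are hypothetical, not taken from the talk; the statements are a minimal sketch assuming ksqlDB 0.15 or later with Schema Registry available.

-- Hypothetical transactions stream, keyed on an Avro record (struct) key
CREATE STREAM txns (
    txn_key STRUCT<branch_id VARCHAR, account_id VARCHAR> KEY,
    amount DECIMAL(12, 2),
    txn_type VARCHAR
) WITH (
    KAFKA_TOPIC = 'core.txns',
    KEY_FORMAT = 'AVRO',
    VALUE_FORMAT = 'AVRO'
);

-- Hypothetical reference table of accounts, sharing the same struct key
CREATE TABLE accounts (
    acct_key STRUCT<branch_id VARCHAR, account_id VARCHAR> PRIMARY KEY,
    status VARCHAR,
    currency VARCHAR
) WITH (
    KAFKA_TOPIC = 'core.accounts',
    KEY_FORMAT = 'AVRO',
    VALUE_FORMAT = 'AVRO'
);

-- Enrich each transaction as it arrives, rather than in the overnight batch;
-- the join is on the full struct key (branch_id + account_id)
CREATE STREAM enriched_txns AS
    SELECT t.txn_key, t.amount, t.txn_type, a.status, a.currency
    FROM txns t
    JOIN accounts a ON t.txn_key = a.acct_key
    EMIT CHANGES;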


Speaker

Mark Teehan

Mark Teehan is a Sales Engineer at Confluent, covering APAC and based in Singapore. His focus is on enterprise connectivity to Apache Kafka: how organisations can set up secure, scalable, scripted streaming data pipelines across the enterprise.