
3 Kafka patterns to deliver Streaming Machine Learning models

This talk highlights the main technical challenges Radicalbit faced while building a real-time serving engine for streaming machine learning algorithms. It describes how Kafka was used to tie together two ML technologies: River, an open-source suite of streaming machine learning algorithms, and Seldon Core, a DevOps-driven MLOps platform. In particular, the talk focuses on how Kafka was used to (1) build a dynamic model serving framework based on Kafka Streams joins and the broadcast pattern, (2) implement a user feedback topic through which online models keep learning while they generate predictions, and (3) design a prediction bus, a bidirectional Kafka topic over which predictions flow at scale and which lets Seldon Core deployments on Kubernetes communicate with Kafka Streams. The talk concludes by explaining how this architecture improved serving performance.
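
As a rough illustration of the first pattern, the sketch below shows how a Kafka Streams KStream-GlobalKTable join can broadcast model specifications to every application instance, so models can be swapped at runtime by publishing a new spec to a compacted topic. The topic names (model-specs, features, predictions), the String serdes, and the score() stub are assumptions made for the example, not the actual Radicalbit implementation.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class DynamicServingTopology {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dynamic-model-serving");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Model specs published to a compacted topic are replicated to every
        // application instance via a GlobalKTable -- the "broadcast" side.
        GlobalKTable<String, String> models = builder.globalTable(
                "model-specs",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Feature records to score, keyed by the id of the model that should score them.
        KStream<String, String> features = builder.stream(
                "features",
                Consumed.with(Serdes.String(), Serdes.String()));

        // KStream-GlobalKTable join: each record is paired with the latest
        // version of its model, so a model can be replaced at runtime by
        // publishing a new spec -- no redeploy of the topology.
        KStream<String, String> predictions = features.join(
                models,
                (featureKey, featureValue) -> featureKey,          // key selector into the model table
                (featureValue, modelSpec) -> score(modelSpec, featureValue));

        predictions.to("predictions", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder: in the architecture described in the talk, scoring is
    // delegated to the streaming ML runtime; here it is stubbed for illustration.
    private static String score(String modelSpec, String featureValue) {
        return "scored(" + featureValue + ") with " + modelSpec;
    }
}
```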
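
The feedback pattern can be sketched in a similar way: a windowed KStream-KStream join pairs each prediction with the user-given label that shares its key and forwards the labeled example to a training topic that the online model consumes. Topic names, the five-minute window, and the serdes are again illustrative assumptions.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

import java.time.Duration;

public class FeedbackTopology {

    // Builds a topology fragment that turns predictions plus user feedback
    // into labeled training records for an online model to learn from.
    static void addFeedbackLoop(StreamsBuilder builder) {
        // Predictions and feedback are assumed to share the same key (a prediction id).
        KStream<String, String> predictions = builder.stream(
                "predictions", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> feedback = builder.stream(
                "user-feedback", Consumed.with(Serdes.String(), Serdes.String()));

        // Windowed join: feedback arriving within 5 minutes of its prediction
        // is attached to it, producing a labeled training example.
        KStream<String, String> trainingRecords = predictions.join(
                feedback,
                (prediction, label) -> prediction + "|label=" + label,
                JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        // An online learner (e.g. a River model running in the serving engine)
        // consumes this topic and updates itself on each labeled example.
        trainingRecords.to("training-records", Produced.with(Serdes.String(), Serdes.String()));
    }
}
```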

Presenter

Andrea Spina

Andrea is currently CTO at Radicalbit, Milan. His work has focused on streaming technologies, machine learning, and performance engineering. Andrea co-authored the "flink-jpmml" project, and he loves to spread the word about managing the end-to-end lifecycle of machine learning and streaming applications. He co-authored the paper "Benchmarking Data Flow Systems for Scalable Machine Learning" with the DIMA Group at TU Berlin.