
Online Talk

Show Me How: Build Streaming Data Pipelines for Cloud Databases

Available On-demand

Data pipelines do the heavy lifting of integrating, transforming, and preparing data for downstream systems in operational use cases. But as real-time data streaming becomes business critical, legacy databases and ETL pipelines hold organizations back.

This Show Me How walks through the story of a bank that uses an Oracle database to store customer information and RabbitMQ as the message broker for credit card transaction events. The bank's goal is to analyze credit card transactions in real time, flag the fraudulent ones, and push them to MongoDB, the new cloud database that powers its in-app mobile notifications.

During this session, we'll show you step by step how to:

  • Connect data sources to Confluent Cloud using Confluent’s fully managed Oracle CDC and RabbitMQ source connectors, then load the aggregated, transformed data into MongoDB Atlas with a fully managed sink connector.
  • Process and enrich data in flight using ksqlDB, merging multiple data streams into a unified view of customers and their credit card activity to flag fraudulent transactions (see the sketch after this list).
  • Govern your data pipelines using Schema Registry and Stream Lineage.
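
To make the processing step concrete, here is a minimal ksqlDB sketch of the kind of statements the session builds. It assumes the source connectors are already provisioned and landing data on Kafka topics; the topic names, column names, and the flat amount threshold are all illustrative placeholders, not the exact queries from the demo.

  -- Raw credit card transaction events, landed on a Kafka topic by the
  -- RabbitMQ source connector (topic name and columns assumed).
  CREATE STREAM transactions (
    card_number   VARCHAR,
    amount        DECIMAL(12, 2),
    txn_timestamp VARCHAR
  ) WITH (
    KAFKA_TOPIC  = 'transactions',
    VALUE_FORMAT = 'AVRO'
  );

  -- Customer records captured from Oracle by the CDC source connector,
  -- modeled as a table so each card number maps to its latest record.
  CREATE TABLE customers (
    card_number   VARCHAR PRIMARY KEY,
    customer_name VARCHAR,
    email         VARCHAR
  ) WITH (
    KAFKA_TOPIC  = 'customers',
    VALUE_FORMAT = 'AVRO'
  );

  -- Unified view: enrich each transaction with its customer record,
  -- then flag suspicious activity. The flat amount threshold is a
  -- stand-in for real fraud-detection logic.
  CREATE STREAM fraudulent_transactions WITH (
    KAFKA_TOPIC = 'fraudulent_transactions'
  ) AS
  SELECT
    t.card_number,
    c.customer_name,
    c.email,
    t.amount,
    t.txn_timestamp
  FROM transactions t
  JOIN customers c ON t.card_number = c.card_number
  WHERE t.amount > 5000
  EMIT CHANGES;

The fully managed MongoDB Atlas sink connector would then be pointed at the resulting fraudulent_transactions topic to power the bank's in-app notifications.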

We’ll have a Q&A to answer any of your questions. Register today and learn to build your own streaming data pipelines.

Presenter

Maygol Kananizadeh

Senior Developer Adoption Manager, Confluent

Watch Now
