
Online Talk

How to Build Streaming Pipelines for Cloud Databases

Available On-demand

Data pipelines do much of the heavy lifting in organizations, integrating, transforming, and preparing data for use in downstream systems and operational use cases. Yet despite being critical to the data value stream, data pipelines have fundamentally not evolved in the last few decades.

This webinar walks through the story of a bank that uses an Oracle database to store sensitive customer information and RabbitMQ as the message broker for credit card transaction events. Their goal: perform real-time analysis on credit card transactions to flag fraudulent activity and push suspicious-activity flags to MongoDB Atlas, the modern cloud-native database that powers their in-app mobile notifications.

To illustrate this use case, expect a live demo of:

  • Confluent’s fully managed connectors for Oracle CDC Source and RabbitMQ Source to stream the data in real time
  • ksqlDB to merge the two data sources into a unified view of customers and their credit card activity, and to flag fraudulent transactions (a sketch follows this list)
  • The fully managed MongoDB Atlas sink connector to load the aggregated and transformed data into MongoDB Atlas
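
To make the ksqlDB step more concrete, here is a minimal sketch of what the merge-and-flag logic could look like. The topic names, column names, and fraud rule below are illustrative assumptions, not the exact schemas or logic shown in the demo.

    -- Customers captured from Oracle by the Oracle CDC Source connector
    -- (topic and column names are assumed for illustration).
    CREATE TABLE customers (
      customer_id VARCHAR PRIMARY KEY,
      first_name  VARCHAR,
      last_name   VARCHAR,
      email       VARCHAR
    ) WITH (KAFKA_TOPIC = 'oracle.customers', VALUE_FORMAT = 'JSON');

    -- Credit card transaction events arriving via the RabbitMQ Source connector.
    CREATE STREAM transactions (
      transaction_id VARCHAR,
      customer_id    VARCHAR,
      amount         DOUBLE,
      merchant       VARCHAR,
      card_country   VARCHAR
    ) WITH (KAFKA_TOPIC = 'rabbitmq.transactions', VALUE_FORMAT = 'JSON');

    -- Unified view: enrich each transaction with customer details and apply a
    -- simple rule-based fraud flag (a placeholder for the demo's actual logic).
    CREATE STREAM flagged_transactions WITH (KAFKA_TOPIC = 'flagged_transactions') AS
      SELECT
        t.transaction_id,
        t.customer_id AS customer_id,
        c.first_name,
        c.last_name,
        t.amount,
        t.merchant,
        CASE WHEN t.amount > 5000 OR t.card_country <> 'US'
             THEN TRUE ELSE FALSE END AS suspicious
      FROM transactions t
      LEFT JOIN customers c ON t.customer_id = c.customer_id
      EMIT CHANGES;

The MongoDB Atlas sink connector would then read the flagged_transactions topic and load the enriched, flagged records into an Atlas collection.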

Along with the demo and customer use case, you’ll also learn about the challenges of batch-based data pipelines and the benefits of streaming data pipelines for powering modern data flows.

Learn to build your own streaming data pipelines that push data to multiple downstream systems, including MongoDB, to power real-time operational use cases. Register today!
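
For the final step of the pipeline, a sink connector loads the flagged records into MongoDB Atlas. Below is a rough sketch of how that sink could be defined from ksqlDB, assuming connector management is available in your environment; the property names and placeholder values are illustrative and should be checked against the MongoDB Atlas Sink connector documentation.

    -- Illustrative sink definition; property names and values are assumptions.
    CREATE SINK CONNECTOR mongodb_atlas_sink WITH (
      'connector.class'     = 'MongoDbAtlasSink',
      'input.data.format'   = 'JSON',
      'topics'              = 'flagged_transactions',
      'connection.host'     = '<atlas-cluster-host>',
      'connection.user'     = '<atlas-user>',
      'connection.password' = '<atlas-password>',
      'database'            = 'payments',
      'collection'          = 'flagged_transactions',
      'tasks.max'           = '1'
    );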


Presenters

Bharath Chari

Team Lead, Solutions Marketing, Confluent

Maygol Kananizadeh

Senior Developer Adoption Manager, Confluent

Watch Now
