Presentation

Comparing three data ingestion approaches where Apache Kafka integrates with a distributed graph database in real time

Kafka Summit Europe 2021

Using Kafka to stream data into TigerGraph, a distributed graph database, is a common pattern in our customers’ data architectures. We have seen this integration at three different layers of TigerGraph’s data flow architecture, and across key use cases such as customer 360, entity resolution, fraud detection, machine learning, and recommendation engines. First, TigerGraph’s internal data ingestion architecture relies on Kafka as an internal component. Second, TigerGraph has a built-in Kafka Loader that can connect directly to an external Kafka cluster for data streaming. Third, users can connect other cloud data sources to TigerGraph’s cloud database solutions by routing them through an external Kafka cluster and the same built-in Kafka Loader. In this session, we will present the high-level architecture of the three approaches and demo the data streaming process.
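For the second and third approaches, the producer side is ordinary Kafka: an application writes records to a topic, and TigerGraph’s built-in Kafka Loader consumes that topic into a loading job. The sketch below is a minimal Python producer under assumed settings not taken from the talk: a broker at localhost:9092, a hypothetical topic name person_events, and a JSON record shape chosen purely for illustration; the corresponding loading-job mapping on the TigerGraph side is out of scope here.

```python
# Minimal sketch: produce JSON records to a Kafka topic that a
# TigerGraph Kafka Loader loading job could consume downstream.
# Assumptions (not from the session): broker at localhost:9092,
# topic "person_events", and an illustrative message schema.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})


def delivery_report(err, msg):
    # Report per-record delivery success or failure.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")


events = [
    {"person_id": "p1", "name": "Alice", "account_id": "a42"},
    {"person_id": "p2", "name": "Bob", "account_id": "a43"},
]

for event in events:
    producer.produce(
        topic="person_events",
        key=event["person_id"],          # key by entity id to keep ordering per person
        value=json.dumps(event),
        callback=delivery_report,
    )

producer.flush()  # block until all buffered messages are delivered
```

Keying each record by the entity identifier keeps updates for the same vertex in order within a partition, which is generally desirable when the consumer is loading them into a graph.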
