Stream, process, connect and govern data in edge locations, and replicate data across hybrid cloud architectures to power a range of operational and analytical use cases.
Data streaming at the edge allows applications to handle large volumes of heterogeneous data in environments with unreliable network connectivity. This unlocks a range of operational and analytical use cases, from predictive maintenance in IIoT settings to on-board (e.g., ship or train) booking systems.
Confluent empowers organizations to easily deploy and manage data streaming in edge locations, enabling them to:
Deliver real-time use cases in remote locations
Elastically scale streaming data pipelines
Create a bridge to a unified hybrid edge architecture
This use case leverages the following building blocks in Confluent:

Data is ingested into Confluent Platform via connectors, Kafka producers, or directly from event-driven applications.
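As a minimal sketch of the producer path, the Java snippet below publishes a sensor reading to a local Confluent Platform broker at the edge. The broker address, topic name, key, and payload are illustrative assumptions, not part of the source material.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EdgeSensorProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed address of the local edge broker; replace with your Confluent Platform listener
        props.put("bootstrap.servers", "edge-broker:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical topic and payload representing an IIoT sensor reading
            ProducerRecord<String, String> record = new ProducerRecord<>(
                "edge.sensor.readings", "machine-42", "{\"temperature\": 71.3}");
            producer.send(record);
            producer.flush();
        }
    }
}
```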
Data is processed on the edge device with Apache Flink® or the Kafka Streams API for Confluent Platform, before being synced (if necessary) to Confluent Cloud via Cluster Linking.
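The sketch below shows one way such edge-side processing could look with the Kafka Streams API: it filters raw readings so that only anomalies need to cross the unreliable uplink. The application ID, broker address, topic names, and anomaly flag are assumptions for illustration only.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EdgeFilterApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "edge-filter-app");
        // Assumed address of the local edge broker
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "edge-broker:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Keep only readings flagged as anomalous, reducing the volume replicated upstream
        KStream<String, String> readings = builder.stream("edge.sensor.readings");
        readings
            .filter((machineId, payload) -> payload.contains("\"anomaly\":true"))
            .to("edge.sensor.anomalies");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The filtered topic can then be replicated to Confluent Cloud with Cluster Linking when connectivity allows, keeping raw high-volume data local to the edge cluster.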
Confluent Health+ provides intelligent alerts and monitoring, and enables proactive support to ensure cluster health and minimize business disruptions.