
Online Talk

No More Silos: Integrating Databases into Apache Kafka®

Watch Now

Companies new and old alike are recognizing the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka® streaming platform. With Apache Kafka, developers can integrate multiple sources and systems, enabling low-latency analytics, event-driven architectures, and the population of multiple downstream systems.

In this talk, we’ll look at one of the most common integration requirements – connecting databases to Apache Kafka. We’ll consider the concept that all data is a stream of events, including the data residing within a database. We’ll look at why we’d want to stream data from a database, including driving applications in Apache Kafka from upstream events. We’ll discuss the different methods for connecting databases to Apache Kafka, and the pros and cons of each. Techniques including change data capture (CDC) and Kafka Connect will be covered, along with an exploration of the power of KSQL, the streaming SQL engine for Apache Kafka, for performing transformations such as joins on the inbound data.
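As a rough illustration of the CDC approach discussed in the talk, a log-based connector such as Debezium can be deployed on Kafka Connect to stream changes out of a database. The sketch below shows a minimal MySQL source connector configuration; the hostnames, credentials, and table names are placeholders, and the exact property set varies by connector version, so treat this as an outline rather than a copy-paste recipe.

```json
{
  "name": "mysql-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "cdc_user",
    "database.password": "cdc_password",
    "database.server.id": "42",
    "database.server.name": "shop",
    "table.whitelist": "shop.customers,shop.orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.shop"
  }
}
```

Posted to the Kafka Connect REST API, a configuration along these lines yields one Kafka topic per captured table (e.g. `shop.shop.customers`), with every insert, update, and delete published as an event.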

Watch now to learn:

  • Why databases are just a materialized view of a stream of events
  • The best ways to integrate databases with Apache Kafka
  • Anti-patterns to be aware of
  • The power of KSQL for transforming streams of data in Apache Kafka
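To give a flavour of the KSQL transformations mentioned above, the sketch below joins a stream of orders against a table of customers materialized from a database changelog. The topic names, columns, and key are illustrative assumptions, not part of the talk itself.

```sql
-- Register the raw orders topic as a stream (topic/columns are assumed names)
CREATE STREAM orders_raw (order_id VARCHAR, customer_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Register the CDC-fed customers topic as a table, keyed on the customer id
CREATE TABLE customers (id VARCHAR, name VARCHAR, country VARCHAR)
  WITH (KAFKA_TOPIC='customers', VALUE_FORMAT='JSON', KEY='id');

-- Enrich each order event with customer attributes via a stream-table join
CREATE STREAM orders_enriched AS
  SELECT o.order_id, o.amount, c.name, c.country
  FROM orders_raw o
  LEFT JOIN customers c ON o.customer_id = c.id;
```

The resulting `orders_enriched` stream is itself a Kafka topic, so downstream systems consume the joined data without ever querying the source database directly.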

Robin works on the DevRel team at Confluent. His data engineering career began with building data warehouses on mainframes in COBOL, moved on to developing Oracle analytics solutions, and has most recently led him to the Kafka ecosystem and the cutting edge of data streaming. In his spare time he enjoys good beer and fried breakfasts, though not at the same time.