Steps to Building a Streaming ETL Pipeline with Apache Kafka and KSQL

Register Now

Tuesday, July 30

10:30am Bangkok / 11:30am Singapore / 12:30pm Tokyo / 1:30pm Sydney

Moderated by Mark Teehan, Sales Engineer at Confluent Asia Pacific, this online talk is part two of a three-part series called Streaming ETL - The New Data Integration.

Mark will run a video talk by Robin Moffatt, Developer Advocate at Confluent, which covers building a streaming data pipeline using nothing but your bare hands, the Kafka Connect API, and KSQL. The video shows how to stream data in from MySQL, transform it with KSQL, and stream it out to Elasticsearch. Options for integrating databases with Kafka using CDC and Kafka Connect will also be covered.
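
For a flavour of what the video covers, here is a minimal KSQL sketch of the transform stage of such a pipeline. It assumes a CDC source connector (such as Debezium) has already been configured in Kafka Connect to stream a MySQL table into a Kafka topic; the topic, stream, and column names below are illustrative, not taken from the video.

```sql
-- Minimal sketch of the transform stage (all names are illustrative
-- assumptions). A CDC connector such as Debezium is assumed to be
-- streaming a MySQL table into the 'mysql.demo.orders' topic as Avro.

-- Register the CDC topic as a KSQL stream; with Avro, the schema is
-- read from the Schema Registry, so no column list is needed.
CREATE STREAM orders_raw WITH (
  KAFKA_TOPIC = 'mysql.demo.orders',
  VALUE_FORMAT = 'AVRO'
);

-- Continuously transform the data with SQL: filter rows and project
-- columns into a new stream backed by its own Kafka topic.
CREATE STREAM orders_enriched AS
  SELECT ORDER_ID,
         CUSTOMER_ID,
         ORDER_TOTAL
  FROM orders_raw
  WHERE ORDER_TOTAL > 100;
```

The topic that KSQL creates for `orders_enriched` can then be streamed out with the Kafka Connect Elasticsearch sink connector, completing the MySQL-to-KSQL-to-Elasticsearch pipeline described above.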

Mark will then be available for Q&A. Register now for this 60-minute session.

Also check out the next talk in the series:

Part 3: Streaming Transformations - Putting the T in Streaming ETL

Robin works on the DevRel team at Confluent. His career in data engineering began with building data warehouses on mainframes in COBOL, moved on to developing Oracle analytics solutions, and has most recently led him to the Kafka ecosystem and the cutting edge of data streaming. In his spare time he enjoys a good beer and fried breakfasts, although not at the same time.

Mark Teehan is a Sales Engineer at Confluent covering APAC, based in Singapore. His focus is enterprise connectivity to Apache Kafka: how organisations can set up secure, scalable, scripted streaming data pipelines across the enterprise.