
Online Talk

On Track with Apache Kafka: Building a Streaming ETL Solution with Rail Data

Register Now

Available On-Demand

As data engineers, we frequently need to build scalable systems that work with data from a variety of sources, arriving at different rates, sizes, and formats. This talk takes an in-depth look at how Apache Kafka can provide a common platform on which to build data infrastructure driving both real-time analytics and event-driven applications.

Using a public feed of railway data, the talk will show how to ingest data from message queues such as ActiveMQ with Kafka Connect, as well as from static sources such as S3 and REST endpoints. We'll then see how to use stream processing to transform the data into a form suitable for analytics in tools such as Elasticsearch and Neo4j. The same data will be used to drive a real-time notification service through Telegram.
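To give a flavour of the first step, ingesting from ActiveMQ with Kafka Connect typically comes down to a small connector configuration. The sketch below assumes Confluent's ActiveMQ source connector; the broker URL, credentials, JMS destination, and topic names are all placeholders, not the actual feed used in the talk:

```json
{
  "name": "activemq-rail-source",
  "config": {
    "connector.class": "io.confluent.connect.activemq.ActiveMQSourceConnector",
    "activemq.url": "tcp://rail-feed.example.net:61616",
    "activemq.username": "feed-user",
    "activemq.password": "feed-password",
    "jms.destination.name": "RAIL_MOVEMENTS",
    "jms.destination.type": "topic",
    "kafka.topic": "rail-movements"
  }
}
```

A configuration like this would normally be submitted to the Kafka Connect REST API, after which Connect streams each JMS message from the broker into the named Kafka topic, where it becomes available to stream processors and sink connectors downstream.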

If you're wondering how to build your next scalable data platform, how to reconcile the impedance mismatch between stream and batch, and how to wrangle streams of data—this talk is for you!

Robin works on the DevRel team at Confluent. His career in data engineering began with building data warehouses on mainframes in COBOL, moved on to developing Oracle analytics solutions, and has most recently led him to the Kafka ecosystem and the cutting edge of data streaming. In his spare time he enjoys a good beer and eating fried breakfasts, although generally not at the same time.