Analytic pipelines that run purely on batch processing systems can suffer from hours of data lag, undermining the accuracy of analysis and decision-making. Join us for a demo to learn how easy it is to integrate your Apache Kafka streams into Apache Druid (incubating) to gain real-time insights into your data.
In this online talk, you’ll hear about ingesting your Kafka streams into Imply’s scalable analytic engine and gaining real-time insights via a modern user interface.
Watch now to learn more.
Confluent Platform, developed by the creators of Kafka, enables the ingestion and processing of massive amounts of real-time event data. Imply, the complete analytics stack built on Druid, can ingest, store, query, and visualize streaming data from Confluent Platform, enabling end-to-end real-time analytics. Together, Confluent and Imply provide low-latency data delivery, data transformation, and data querying capabilities to power a range of use cases.
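As a rough illustration of the Kafka-to-Druid integration described above, Druid consumes a Kafka topic through a supervisor spec submitted to its overlord API. The sketch below is a minimal, hypothetical spec; the datasource name (`events`), topic name, broker address, and field names are assumptions for illustration, not values from the talk.

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": {
      "dataSource": "events",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["user", "action"] },
      "granularitySpec": {
        "segmentGranularity": "hour",
        "queryGranularity": "minute"
      }
    },
    "ioConfig": {
      "topic": "events",
      "consumerProperties": { "bootstrap.servers": "localhost:9092" },
      "inputFormat": { "type": "json" }
    },
    "tuningConfig": { "type": "kafka" }
  }
}
```

Submitting a spec like this to Druid's supervisor endpoint (`POST /druid/indexer/v1/supervisor`) starts ingestion tasks that read from the Kafka topic continuously, making events queryable within seconds rather than after a batch window.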
Rachel Pedreschi, Senior Director, Solutions Engineering, Imply.io
Rachel Pedreschi is the Field Engineering Director at Imply Data. A "Big Data Geek-ette," Rachel is no stranger to the world of big data, fast data, and everything in between. She is a Vertica-, Informix-, and Redbrick-certified DBA on top of her work with Apache Cassandra, Apache Ignite, and Apache Druid, and she has more than 20 years of high-performance database experience. Rachel holds an MBA from San Francisco State University and a BA in Mathematics from the University of California, Santa Cruz.
Josh Treichel, Partner Solutions Architect, Confluent
Josh Treichel is a Partner Solutions Architect at Confluent. As a software engineer, he has spent more than 10 years building, integrating, and supporting complex systems. He previously worked on Confluent's Customer Operations team, supporting some of the largest Kafka/Confluent deployments in the world.