
Online Talk

Streaming meets Iceberg: Unlock AI-Ready Tables with Confluent Tableflow and Snowflake

Register Now

Thursday, August 14, 2025

10 AM PST | 1 PM EST | 10 AM BST | 10:30 AM IST | 1 PM SGT | 3 PM AEST

Your analytics and AI are only as good as the data powering them. High-quality, reliable, and reusable data products, seamlessly available as streams and AI-ready tables, can help.

With Kafka dominating operational data, and open table formats such as Iceberg and Delta Lake increasingly powering high-scale analytics and AI, the time is ripe to rethink how we create and access high-quality data and maximize its value across the entire operational and analytical landscape.

Join this demo webinar to:

  • Explore why unifying operational and analytical data is critical to winning in the age of AI
  • Learn how to get more value out of Snowflake, faster, using high-quality streaming data products that are readily available as Iceberg tables via Confluent Tableflow
  • Dive into how Confluent and Snowflake work together to dramatically simplify analytics and AI workflows using Confluent Cloud for Apache Flink, Snowflake Open Catalog, Snowflake Cortex AI, and more (a minimal query sketch follows this list)
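As a preview of the pattern the session covers, here is a minimal sketch of querying a Tableflow-materialized Iceberg table from Snowflake using the Python connector. The credentials, warehouse, database, and the "orders_iceberg" table and its columns are placeholder assumptions, not details from this webinar; the point is only that once Tableflow publishes a Kafka topic as an Iceberg table registered in Snowflake Open Catalog, it can be queried like any other Snowflake table.

    # Minimal sketch. Assumptions: a Kafka topic has already been published by
    # Confluent Tableflow as an Iceberg table named "orders_iceberg" and made
    # visible to Snowflake via Snowflake Open Catalog; credentials, warehouse,
    # database, schema, table, and column names below are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<your_account>",
        user="<your_user>",
        password="<your_password>",
        warehouse="ANALYTICS_WH",
        database="STREAMING_DB",
        schema="PUBLIC",
    )

    try:
        cur = conn.cursor()
        # Once registered, the Iceberg table is queried like any other table.
        cur.execute(
            """
            SELECT order_id, amount, event_time
            FROM orders_iceberg
            WHERE event_time >= DATEADD('hour', -1, CURRENT_TIMESTAMP())
            ORDER BY event_time DESC
            LIMIT 10
            """
        )
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()

Because the table lives in an open catalog, the same data product stays readable by other Iceberg-aware engines as well, which is the unification of operational and analytical data the session focuses on.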

Sean is a Senior Director, Product Management - AI Strategy at Confluent, where he works on AI strategy and thought leadership. Sean has been an academic, a startup founder, and a Googler. He has published work on topics ranging from AI to quantum computing, and he also hosts the popular engineering podcasts Software Engineering Daily and Software Huddle.

Jeremy Ber is a Solutions Engineer at Confluent, specializing in stream processing and helping customers shift left. He has worked in the streaming data space for 10 years, making stream processing more accessible through demos, talks, and workshops. Jeremy has been a software engineer, a data engineer, and an Amazonian, and now works at Confluent. He loves to teach and to distill complex concepts into digestible ideas that anyone can understand.

Vino is a Developer Advocate at Snowflake, focusing on data engineering and LLM workloads. She started as a software engineer at NetApp, working on data management applications for NetApp data centers back when on-prem data centers were still a cool thing. She then hopped onto the cloud and big data wagon and landed on the data teams at Nike and Apple, where she worked mainly on batch processing workloads as a data engineer, built custom NLP models as an ML engineer, and even touched on MLOps for model deployments.