“It really is an awesome time for this community,” said Jay Kreps, CEO of Confluent, in his opening keynote to over 1,500 live attendees (and 2k+ virtual viewers) from 50+ countries at Kafka Summit London. “There is incredible adoption of Apache Kafka® across all industries and use cases that I never could have imagined when I was first working on the early Kafka code at LinkedIn.”
Kreps went on to state that he believes this widespread adoption of real-time data and streaming technologies is being driven by a perfect storm of business trends and pressures. On the trends side, he listed the growing popularity of IoT, the proliferation of cloud systems, and data-hungry AI/ML applications as reasons why streaming has become essential for quickly processing enormous volumes of data. Equally important, he continued, is the pressure businesses are feeling to create more engaging customer experiences and drive efficiency throughout their organizations.
The action-packed keynote was full of exciting highlights, including:
A rundown of enhancements coming to Kafka over the next year and beyond (including tiered storage, simplified protocol, queues for Kafka, and more)
The debut of Confluent’s Kora Engine, the Apache Kafka engine built for the cloud. Kora is the engine that makes Confluent Cloud a cloud-native, 10x Kafka service, bringing GBps+ elastic scaling, guaranteed reliability, and predictable low latency to 30K+ clusters worldwide. Watch Kreps explain it in the keynote.
A reveal of Confluent’s Early Access (EA) program for its new fully managed Apache Flink® stream processing service
The launch of additional Confluent features including Data Quality Rules, Custom Connectors, Stream Sharing, and more. Read the blog for the full scoop.
A lively Q&A with Michelin and Flutter, who shared their streaming success stories.
In a shoutout to the Kafka community, Kreps acknowledged the team effort required to get Kafka to where it is today. “[Kafka] is something that we have all built together,” he said. And with 150k+ organizations now using Kafka, and 69k+ Kafka meetup attendees yearly, it’s safe to say that the project is a global success!
Watch the full keynote here.
Check out Confluent’s free Apache Flink courses to learn Flink fundamentals and start processing real-time stateful data streams:
Course: Apache Flink® 101
With over 50 live breakout sessions and 15 lightning talks happening at Kafka Summit, there’s no way we could recap them all! But here are a few highlights from some of the amazing presentations hosted over a very busy 48 hours.
Ela Demir, Big Data & AI Engineer at Vodafone, gave a crash course on Apache Flink. In her 10-min lightning session, she covered:
The origins of Apache Flink
Why the industry is shifting away from other processing technologies in favor of Flink on Kafka (Hint: Flink can process tens of millions of Kafka events per second with latencies of mere milliseconds; see the sketch below)
How Netflix is using Flink to process 3 trillion events daily
She concluded her session by stating: "Have no fear, Apache Flink is here."
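To make that claim concrete, here is a minimal sketch of a Flink job reading from Kafka, in the spirit of what Demir described. The broker address, topic name, and consumer group are illustrative assumptions, not details from her talk:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkOnKafkaSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // KafkaSource is Flink's unified Kafka connector (flink-connector-kafka).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")       // assumption: local broker
                .setTopics("events")                         // assumption: topic name
                .setGroupId("flink-demo")                    // assumption: group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Subscribe, apply a trivial stateless filter, and print the results.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           .filter(value -> !value.isEmpty())
           .print();

        env.execute("flink-on-kafka-sketch");
    }
}
```

The same pipeline scales to the throughput figures Demir cited by raising the job's parallelism, with Flink distributing Kafka partitions across its task slots.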
It was standing-room only for anyone who came late to the highly anticipated session, “Apples and Oranges – Comparing Kafka Streams and Flink.” Bill Bejeck, DevX Engineer at Confluent, addressed a jam-packed room of summit-goers, stating: "When I was coming up with this presentation, I thought I was going to be able to make a hard recommendation. But there's really quite a lot of overlap between the two technologies."
The 45-min presentation covered all the noteworthy differences between Apache Flink and Kafka Streams, the two dominant stream processing technologies available today. It also gave attendees guidelines for matching their event streaming requirements to the framework that best fits their needs.
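As a rough illustration of the overlap Bejeck described, here is the same stateless filter from the Flink sketch above expressed as a Kafka Streams topology. Topic names, application ID, and broker address are again illustrative assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsOverlapSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-demo");      // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");   // assumption: topic
        events.filter((key, value) -> value != null && !value.isEmpty())
              .to("filtered-events");                                // assumption: topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The practical difference is less about what you can express and more about operations: Kafka Streams runs as a library inside your own application, while Flink jobs run on a separately managed cluster.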
In an exclusive mini-summit held for tech executives attending Kafka Summit, attendees listened in as speakers from Confluent, 10x Banking, and Michelin shared the latest data streaming insights, trends, and success stories.
Highlights from the presentations included:
Greg DeMichillie, VP of Product & Solutions Marketing at Confluent, described how adopting a data streaming platform can drive success across an entire data strategy and future-proof organizations against shifting trends. "You have to build an environment that can be flexible and adapt to whatever comes next, including generative AI," he said.
The results of the 2023 Data Streaming Report, which surveyed over 2k IT and engineering leaders from various industries around the world, revealed encouraging metrics about the real-world business value and ROI of data streaming adoption. The biggest takeaway: 76% of organizations see 2x–5x returns on their data streaming initiatives.
A panel discussion featuring Olivier Jauze (CTO at Michelin) and Stuart Coleman (Director of Data & Analytics at 10x Banking) showcased massive success stories of real-time streaming adoption and partnership with Confluent. Michelin estimates ~35% cost savings from using Confluent Cloud, compared to on-premises operations. And 10x Banking has been able to supercharge their customer experiences with real-time data.
Day two started out strong with Saxo Bank’s session, Designing a Data Mesh With Kafka, presented by Paul Makkar and Rahul Gulati. Makkar, Director of Datahub, kicked off the presentation by stating that data mesh “means different things to different people,” and that it has often been seen as a foundation for getting value from analytical data at scale.
Gulati, Principal Data Platform Engineer, then outlined how Saxo Bank has applied data mesh principles to its Kafka-based operational plane. Diving into topics like Cluster Linking, data lake connectors, and mapping Kafka to data lake infrastructure, Gulati made the case for data mesh use cases that go beyond simply enabling analytics.
The result of applying data mesh to its operational data? Saxo Bank can now treat its data as a product, making it more discoverable, addressable, and trustworthy.
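As one hedged illustration of the “data as a product” idea, the sketch below uses Kafka’s AdminClient to create a topic whose name follows a hypothetical domain.data-product.version convention. The naming scheme, partition count, and configs are assumptions for illustration, not Saxo Bank’s actual setup:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class DataProductTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical naming convention: <domain>.<data-product>.<version>
            NewTopic topic = new NewTopic("trading.trade-executions.v1", 6, (short) 3)
                    .configs(Map.of(
                            "cleanup.policy", "delete",
                            "retention.ms", "604800000")); // 7 days, illustrative
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```

Consistent, versioned, addressable names like this are one small mechanism behind the discoverability and addressability the speakers described.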
If your analysts already know about version control, testing, and documentation…why not toss data streaming and Kafka into the mix? That’s the question Amy Chen, Partner Engineering Manager at dbt Labs, posed in her session, Upleveling Analytics With Kafka.
Chen shared how she built her first end-to-end analytics pipeline (with the help of Confluent Cloud and Snowflake Snowpipe) and her thoughts on when it makes sense for analysts to dive into Kafka to address their own analytics needs.
“Kafka is actually for analysts, but not in the way that data engineers and data architects are traditionally using it,” said Chen. “I think people who have Kafka in their pipelines should know Kafka enough to understand the dependencies they’re building their dashboards on,” she concluded.
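In that spirit, here is a minimal sketch of the kind of “know enough Kafka” an analyst might need: a throwaway consumer that peeks at a topic feeding a dashboard. The broker address and topic name are assumptions for illustration:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "analyst-peek");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views")); // assumption: topic behind a dashboard
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        r.partition(), r.offset(), r.value());
            }
        }
    }
}
```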
Thank you to everyone who joined us at Kafka Summit 2023! This event is truly a testament to the thriving Kafka community and its commitment to ongoing innovation. We hope to see you at our next event, Current 2023 | The Next Generation of Kafka Summit, in September for two full days of content (100+ sessions) and networking that covers everything data streaming.