Kafka Summit New York City was the largest gathering of the Apache Kafka community outside Silicon Valley in history. Over 500 “Kafkateers” from 400+ companies and 20 countries converged in midtown Manhattan to hear the latest developments and use cases from their peers in financial services, media, ad tech and more. If you missed it, we’d love to see you at the next one in San Francisco on August 28th. The call for papers remains open through May 15th and of course, there’s still plenty of time to register. To give you a sense of what to expect, here’s a brief recap of Kafka Summit NY.
Kafka Summit NY was all about the emergence of a new trend: the event streaming platform. What is an event streaming platform, you ask? With Kafka at the core, it’s an entirely new perspective on data in the enterprise environment. Part messaging system, part Hadoop made fast, part fast ETL and scalable data integration, an event streaming platform is a new way to stream, store and process data across the business. Jay Kreps went into this at length in his keynote. You can find a recording of that keynote on our Facebook page here.
At Confluent, our vision is putting an event streaming platform at the center of every company. Until now, Confluent Platform has been our means for delivering on that vision – as an on-premises offering, which was difficult for the cloud-first developer or the operations-starved organization to adopt. To address this need, during her keynote Neha Narkhede announced Confluent Cloud, Apache Kafka as a Service backed by the experts in Kafka. This exciting new offering will make it easy for anyone to implement and scale the leading event streaming platform as a service without fear of cloud provider lock-in, while continuing to leverage the same robust and growing open source ecosystem that surrounds Kafka. You can join us in an online talk on June 1st where we discuss this new service in detail.
During Kafka Summit, the Apache Kafka community also had the opportunity to learn about exactly-once semantics, delivered through two major Kafka Improvement Proposals (KIP-98 and KIP-129, specifically) that will ship in Kafka 0.11 in June. These features open up a whole new set of use cases for Kafka, in particular those where duplicate-free, guaranteed delivery is paramount, and they enable Kafka to become a replacement for traditional messaging systems.
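As a rough sketch of what this looks like in practice once 0.11 is available: KIP-98 introduces an idempotent, transactional producer that can be turned on with a couple of configuration settings. The `transactional.id` value below is a placeholder you would choose per producer instance, not a prescribed name.

```properties
# Producer settings for exactly-once delivery (Kafka 0.11+).
# Deduplicates retried sends broker-side.
enable.idempotence=true
# A stable, unique id that lets the broker fence zombie producer
# instances and tie writes into atomic transactions.
# "payments-producer-1" is an illustrative value.
transactional.id=payments-producer-1
# Required for idempotence: wait for the full in-sync replica set.
acks=all
```

With these set, the producer API exposes transactional calls (initTransactions, beginTransaction, commitTransaction) so that a batch of writes across partitions either all become visible to consumers or none do.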
Beyond this recurrent theme, there were numerous user testimonials throughout the event, including a keynote by Ferd Scheepers, Chief Information Architect at ING, who walked us through how Kafka is helping ING transform into an “IT company that also happens to be in the business of banking.” Ferd’s dynamic talk gave some excellent insight into the role Kafka plays in helping ING completely modernize their data strategy and deliver an amazing customer experience.
Many other sessions elaborated on this further and covered a range of use cases across a wide range of industries. For example, the New York Times uses Kafka as the single source of truth for all content at the publisher, storing in Kafka every piece of content created since 1851, and BNY Mellon uses Kafka on AWS as the basis for message exchange across their microservices. All presentations will soon be posted on the Kafka Summit website, so check back frequently for content updates.
Finally, the event would not have been possible without the sponsorship and participation of the companies invested in furthering Kafka's mission. To our sponsors: thank you! We hope you found Kafka Summit as valuable as we did.
We now turn our attention to Kafka Summit San Francisco, which promises another amazing event full of technical deep dives, customer use cases, and interaction with the Kafka community. We look forward to seeing you there – be sure to register!