It might seem a little strange, in the middle of the holiday season and still technically 2018, for me to be talking about 2019 Kafka Summit events. But as you may already know, the CFPs are open, and we’ve got some details that you might find useful.
First of all, ICYMI, there will be three events next year: our usual fall event in San Francisco, and two in the spring: one in New York, and one in London. That’s a total of three Kafka Summits in one year, a new personal best for the community. I would like to formally invite you (again) to submit your talks to whichever of the two spring events is appropriate for you:
Note: The Kafka Summit San Francisco CFP will open later next year.
Because we want to make Kafka Summit a place where the most advanced thinking about stream processing takes place, we’ve reconfigured the track definitions a little bit to keep things current. These are the four tracks you’ll find at the London and NYC events:
Want to dive deep into Kafka’s internals, or just get a refresher on the basics of the platform? The Core Kafka track focuses on the present and future of Kafka as a distributed messaging system. As the platform matures, you have to keep your eye on the leading edge and the fundamentals alike. This track is here to help.
Talks in the Stream Processing track cover the APIs, frameworks, tools, and techniques in the Kafka ecosystem used to perform real-time computations over streaming data. If you’re building an event-driven system—whether on KSQL, Kafka Streams, or something else—you need the experience shared in this track.
Streaming applications are unlike the systems we have built before, and we’re all learning this new style of development. Event-driven thinking requires deep changes to the request/response paradigm that has informed application architectures for decades, and a new approach to state as well. In the Event-Driven Development track, practitioners will share their own experiences and the architectures they have employed to build streaming applications, bringing event-driven thinking to the forefront of every developer’s mind.
Stream processing and event-driven development are being applied to use cases across every industry you can imagine. In the Use Cases track, we’ll share stories of actual streaming platform implementations from a variety of verticals. Talks in this track will define the business problems faced by each organization, tell the story of what part streaming played in the solution, and reveal the outcomes realized as a result.
You may have really important and interesting Kafka knowledge to share, but you might be new to this whole conference business. If this is you, I want you to submit your talk. Everybody gets started sometime, and we have room for newcomers who have knowledge to share.
To make it easier, we are holding online office hours, where you can ask questions about crafting a strong submission, bounce ideas around, and more. Members of the program committee and our developer relations team will be there to help. To take part, join the #summit-office-hours channel in our Slack community at one of these two times:
With over 30 sessions to attend at each summit, every talk is carefully chosen through the collaboration of this outstanding program committee. You’ll see some new names here, as well as some faithful contributors who have been working for years now to make the Kafka Summit more and more awesome:
Now, go mark your calendar with the dates of the summits, and if you want to speak there, make sure you get your paper in before the deadline! I look forward to seeing many of you at these two events.