
It’s Time for Kafka Summit NYC!

Written By Gwen Shapira

The team at Confluent, along with the Apache Kafka™ community, is excited that the day is finally here – it’s time for Kafka Summit NYC!

We’ll be at the Midtown Hilton all day today hearing how leading companies like ING, Bank of New York Mellon, Yelp, Uber, Goldman Sachs, The New York Times, Target, and many, many more are using Kafka and streaming data to change how their business operates.

I’m delighted to welcome more than 500 attendees and our 19 sponsors, who have demonstrated their commitment to the success of the Kafka community. Our Kafka Summit NYC sponsors are Attunity, Azul Systems, Capital One, Confluent, Couchbase, DataDog, Eventador.io, Heroku, Kinetica, Lightbend, LinkedIn, MemSQL, Microsoft, MongoDB, SQLstream, Striim and Wavefront. We also want to thank the Apache Software Foundation, O’Reilly Media, and of course the conference organizers and committee who made all of this possible.

The conference got off to a great start last night with the Kafka Summit hackathon. This year’s theme was microservices, and notably, microservices implemented with the Kafka Streams API or with Kafka producer and consumer clients for programming languages such as Java, C/C++, Python, .NET or Go. The goal was to combine the power of stream processing with the agility and composability of microservices. 

Congratulations to our winners!

  • First place winner: Adam Abrams, Jonathan David (Axonite)
    They implemented a custom storage engine for the Kafka Streams API, based on Lucene, to index incoming events, and then wrote a Kafka Streams application that uses this new storage engine. Among other things, the application exposes the latest top-N documents to other applications and microservices via a REST API and Kafka’s interactive queries.
  • Second place: Jeroen van Disseldorp, Daniel Mulder, Joris Meijer (Axual)
    They modeled a power grid, which they monitored in real-time via events collected into Kafka, and implemented a Kafka Streams application that balances power generation and power consumption in that grid.
  • Third place: Vishwa Teja, Pradeepraj Chandrasekaran (Egen Solutions)
    They implemented an IoT use case where sensors send CO2 measurements to Kafka in real-time, followed by a Kafka Streams application that analyzes these measurements in order to compute CO2 hot spots as well as to send text and audio alerts to mobile phones (via Twilio) when CO2 levels reach critical thresholds.
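The threshold-alerting pattern at the heart of the third-place project can be sketched in plain Python, independent of Kafka. The sensor names, threshold value, and message format below are illustrative assumptions, not details from the winning entry:

```python
from dataclasses import dataclass

# Illustrative critical threshold; the actual value used by the team is unknown.
CO2_ALERT_PPM = 1000.0

@dataclass
class Reading:
    sensor_id: str
    co2_ppm: float

def process(readings):
    """Consume a stream of CO2 readings and yield an alert message for
    every reading that crosses the critical threshold."""
    for r in readings:
        if r.co2_ppm >= CO2_ALERT_PPM:
            yield f"ALERT {r.sensor_id}: CO2 at {r.co2_ppm} ppm"

# In the real application the readings would arrive through a Kafka
# consumer (or a Kafka Streams topology) and alerts would be delivered
# via Twilio; here we simply simulate the stream with a list.
stream = [Reading("lobby", 420.0), Reading("lab", 1250.0)]
alerts = list(process(stream))
```

In a Kafka Streams version, `process` would become a `filter` plus `mapValues` over the measurements topic, with the alert records written to an output topic that a notification service consumes.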

Be sure to attend today’s keynotes from Jay Kreps, Neha Narkhede, and Ferd Scheepers, the Chief Information Architect at ING, a leading global bank based in the Netherlands with operations across the world. Ferd is responsible for defining ING’s information architecture strategy and for leading ING’s transformation into a truly event-driven bank, with Kafka as an integral component. That transformation, backed by ING’s 800 million euro investment in digital, aims to provide customers with relevant, accurate, secure, and actionable information, wherever they happen to be and through whatever digital channel they choose to conduct their business.

For those who are in attendance, we hope you have a great show. If you haven’t picked which talks to attend, now would be a great time to check the schedule and choose: https://kafka-summit.org/kafka-summit-ny/schedule/. Please use the Kafka Summit app to communicate with the committee organizers and provide your feedback, and feel free to join the conversation live with the #KafkaSummit hashtag on Twitter. Don’t forget to participate in the mobile app game as well – you could win a Nintendo Switch!

Don’t forget to stop by the Confluent booth with all your Kafka-related questions – we have experts ready to help!

For those who aren’t able to attend, we’ll be live streaming the keynote sessions on Facebook Live. Go to the Confluent Facebook page to view the keynotes live starting at 9:00 am ET today. We’ll also be posting all the session videos online for you in the coming weeks. Follow us on Twitter @confluentinc and/or sign up for our blog to get notified when we publish the recordings.

Here’s to a great conference!




  • Gwen Shapira is a Software Engineer at Confluent. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time, reliable data processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, an author of books including “Kafka: The Definitive Guide”, and a frequent presenter at data-related conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects.
