Conferences

Confluent is proud to participate in the following conferences, trade shows and meetups.

Kafka Summit London

Discover the World of Streaming Data

As streaming platforms become central to data strategies, companies both small and large are re-thinking their architecture with real-time context at the forefront. Monoliths are evolving into microservices. Data centers are moving to the cloud. What was once a ‘batch’ mindset is quickly being replaced with stream processing as the demands of the business impose more and more real-time requirements on developers and architects.

This revolution is transforming industries.

What started at companies like LinkedIn, Uber, Netflix and Yelp has made its way to countless others in a variety of sectors. Today, thousands of companies across the globe build their businesses on top of Apache Kafka®. The developers responsible for this revolution need a place to share their experiences on this journey.

Kafka Summit is the premier event for data architects, engineers, devops professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

Welcome to Kafka Summit London!

Event Details

Meetup: KSQL and Stream All Things

Register

18:00 Doors open
18:10 - 18:55 KSQL - Streaming SQL for Apache Kafka® - Matthias J. Sax
18:55 - 19:35 Stream All Things - Patterns of Modern Data Integration - Gwen Shapira
19:35 - 20:00 Pizza, drinks, networking and additional Q&A

Speaker: Matthias J. Sax, Software Engineer, Confluent
Session: KSQL - Streaming SQL for Apache Kafka®

Abstract: This talk is about KSQL, an open source streaming SQL engine for Apache Kafka®. KSQL aims to make stream processing available to everybody without the need to write Java or Scala code. Streaming SQL makes it easy to get started with a wide range of stream processing applications such as real-time ETL, sessionization, monitoring and alerting, or fraud detection. We will give a general introduction to KSQL, covering its SQL dialect, core concepts, and architecture, including some technical deep dives into how it works under the hood.
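
As a rough illustration of the kind of continuous query described above (KSQL itself expresses this in SQL rather than Java), here is a minimal, hypothetical Kafka Streams sketch that flags unusually large payments in real time. The topic names, message format and threshold are invented for this example, and an approximate KSQL equivalent is shown in a comment.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class FraudFilterSketch {
        public static void main(String[] args) {
            // Roughly equivalent KSQL (assuming a "payments" stream with an "amount" column):
            //   CREATE STREAM suspicious_payments AS
            //     SELECT * FROM payments WHERE amount > 10000;
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-filter-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Hypothetical topic "payments": key = account id, value = amount as a plain string.
            KStream<String, String> payments = builder.stream("payments");
            payments
                .filter((accountId, amount) -> Double.parseDouble(amount) > 10_000)
                .to("suspicious-payments");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }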

About Matthias: Matthias is an Apache Kafka® committer working as a Software Engineer at Confluent. His main focus is Kafka's Streams API and stream processing with KSQL. Prior to Confluent, he was a PhD student at Humboldt-University of Berlin, conducting research on data stream processing systems. Matthias is also a committer at Apache Flink and Apache Storm.

Speaker: Gwen Shapira, Principal Data Architect, Confluent
Session: Stream All Things - Patterns of Modern Data Integration

Abstract: Up to 80% of the time in every project is spent on data integration: getting the data you want the way you want it. This problem remains challenging despite 40 years of attempts to solve it. We want a reliable, low-latency system that can handle varied data from a wide range of data management systems. We want a solution that is easy to manage and easy to scale. Is it too much to ask?

In this presentation, we'll discuss the basic challenges of data integration and introduce design and architecture patterns that are used to tackle these challenges. We will explore how these patterns can be implemented using Apache Kafka® and share pragmatic solutions that many engineering organizations have used to build fast, scalable and manageable data pipelines.

About Gwen: Gwen Shapira is a principal data architect at Confluent, where she helps customers achieve success with their Apache Kafka implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

Register

CGI Know-how Day

For 16 years, the CGI Know-how Day has offered you a unique platform for exchanging know-how and networking in a pleasant atmosphere. We look forward to continuing this tradition.

This year's CGI Know-how Day takes place on Thursday, 26 April 2018, from 15:30 at the KAMEHA SUITE in Frankfurt.

Once again this year, we want to take a closer look at the impact of digital transformation on your company, on the working world in general, and on each of us individually:

    How will robotics and artificial intelligence change our working lives?
    What are the current and future megatrends of digital transformation?
    How can you and your entire company turn these changes to your advantage?

Our CGI specialists and partners would like to discuss these and other questions with you and, under the motto "Digital Transformation - Quo Vadis?", point out possible approaches.

We are delighted that Wolfgang Bosbach, lateral thinker and former chairman of the Internal Affairs Committee of the German Bundestag, has agreed to be our keynote speaker; his talk, "Germany in Times of Globalization and Digitalization", promises to be highly engaging.

Participation in the CGI Know-how Day is free of charge. As the number of participants is limited, we recommend registering early.

We look forward to welcoming you to the CGI Know-how Day.

Register

Full Stack Hack

MongoDB, Confluent and Nearform present Full Stack Hack, a one-day hackathon.

Full Stack Hack is a hackathon where you get the chance to build a full-stack application in one day using the top technology stack for today's modern web applications: MongoDB, Kafka and Node.js. On the 27th of April, 80 people will form teams of 3-5 to compete for the title of top Full Stack Hack team.

Register

Interop ITX

Speaker: Gwen Shapira, Confluent
Session: Metrics Are Not Enough: Monitoring Apache Kafka®
May 4, 10:00 am - 10:50 am

When you are running systems in production, clearly you want to make sure they are up and running at all times. But in a distributed system such as Apache Kafka… what does "up and running" even mean?

Experienced Apache Kafka users know what is important to monitor, which alerts are critical and how to respond to them. They don't just collect metrics - they go the extra mile and use additional tools to validate availability and performance on both the Kafka cluster and their entire data pipelines.

In this presentation we'll discuss best practices for monitoring Apache Kafka. We'll look at which metrics are critical to alert on, which are useful in troubleshooting, and which may actually be misleading. We'll review a few "worst practices" - common mistakes that you should avoid. We'll then look at what metrics don't tell you - and how to cover those essential gaps.
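
To make the "go beyond broker metrics" point concrete, here is a minimal, hypothetical sketch (not from the talk) that uses the Kafka Java AdminClient from a recent Kafka release to compute consumer-group lag, one of the pipeline-level signals the abstract alludes to; the group id is invented for the example.

    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class ConsumerLagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Committed offsets for a hypothetical consumer group.
                Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("orders-processor")
                         .partitionsToOffsetAndMetadata().get();

                // Latest (log-end) offsets for the same partitions.
                Map<TopicPartition, ListOffsetsResultInfo> endOffsets =
                    admin.listOffsets(committed.keySet().stream()
                            .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest())))
                         .all().get();

                // Lag per partition = log-end offset minus committed offset.
                committed.forEach((tp, offset) -> {
                    long lag = endOffsets.get(tp).offset() - offset.offset();
                    System.out.printf("%s lag=%d%n", tp, lag);
                });
            }
        }
    }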

Gwen is a systems architect at Confluent. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen is the coauthor of "Kafka: The Definitive Guide" and "Hadoop Application Architectures," and a frequent presenter at industry conferences. Gwen is a PMC member on the Apache Kafka project and a committer on Apache Sqoop. When Gwen isn't building data pipelines or thinking up new features, you can find her pedaling on her bike exploring the roads and trails of California, and beyond.

Session Details
Event Details

Data-Driven Strategies in Financial Services

5:15 pm - 6:00 pm Networking
6:00 pm - 6:10 pm Welcome (Mike Marzo, Managing Director, BNY Mellon)
6:10 pm - 6:40 pm JPMorgan (Adam Goldberg, Managing Director, JPMorgan Chase)
6:40 pm - 7:10 pm BNY Mellon (Aditya Anand, Director IaaS Storefront, BNY Mellon)
7:10 pm - 7:40 pm Confluent (Chris Matta, Lead Solutions Architect, Financial Services)
*Beer, Wine and Appetizers will be provided.

Streaming data is redefining competition in financial services. Those that capitalize on it are creating new, powerful customer experiences, designing for regulatory uncertainty and lowering risk in real time.

Join Confluent on May 3rd to hear BNY Mellon and JPMorgan Chase & Co. share their data-driven strategies for using Apache Kafka to create digital nervous systems that connect disconnected, siloed systems at astonishing scale. This event is valuable to anyone in the financial services space who would like to learn more about Apache Kafka use cases, features and roadmap.

Register

How to Create a Data Streaming Application and Derive Value from the Data in Real Time

Join our two-part online lecture series on accelerating your data flows with KSQL and dive into the heart of the tool. Our experts will explain the KSQL engine architecture and show you how to design and deploy interactive, continuous queries.

With these two webinars, you'll get a deep understanding of KSQL as well as tips for using it effectively for streaming ETL, anomaly detection, application development, real-time analytics and more! This lecture series is ideal for KSQL beginners or those who have not yet explored its real-time stream processing capabilities.

Speaker: Aurélien Goujet, Southern Europe Director of Solution Engineering, Confluent
Session: Part I: Discover the Confluent and KSQL platform
May 3, 11:00 am - 12:00 pm
Speaker: Florent Ramière, Technical Account Manager, Confluent
Session: Part II: Live development of a microservice with KSQL
May 15, 11:00 am - 12:00 pm
Register Now

Elastic Stack in a Day

Speaker: Paolo Castagna, Account Executive, Confluent
Session: Elastic & Kafka a lovely couple
14:25 - 14:50

Paolo's technical background is in "big data", and he has experienced first-hand the enormous change the software industry is going through, from batch/big data to stream processing/fast data.

Event Details

Codemotion Amsterdam

Speaker: Kai Wähner, Technology Evangelist, Confluent
Session: How to Leverage the Apache Kafka® Ecosystem to Productionize Machine Learning
May 8, 15:10 - 15:50

This talk shows how to productionize machine learning models in mission-critical and scalable real-time applications by leveraging Apache Kafka® as a streaming platform. The talk discusses the relationship between machine learning frameworks such as TensorFlow, DeepLearning4J or H2O and the Apache Kafka ecosystem. A live demo shows how to build a machine learning environment leveraging different Kafka components: Kafka messaging and Kafka Connect for data movement, Kafka Streams for model deployment and inference in real time, and KSQL for real-time analytics of predictions, accuracy and alerts.
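
As a sketch of the "model deployment and inference" idea described above, the following hypothetical Java snippet embeds a stand-in Model interface (in practice a model trained with TensorFlow, DeepLearning4J or H2O) inside a Kafka Streams topology and scores each event as it arrives. The topic names and the dummy scorer are assumptions for illustration, not details of the talk's demo.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class ModelInferenceSketch {

        /** Stand-in for a model trained offline (TensorFlow, DL4J, H2O, ...). */
        interface Model {
            double score(String features);
        }

        public static void main(String[] args) {
            // Hypothetical: in a real application the model would be loaded from disk
            // or fetched from a model registry; a dummy scorer stands in for it here.
            Model model = features -> Math.min(1.0, features.length() / 100.0);

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "model-inference-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("events");   // raw feature records
            events.mapValues(features -> String.valueOf(model.score(features)))
                  .to("predictions");                                    // scored output, e.g. for KSQL analytics

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }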

Kai Wähner works as a Technology Evangelist at Confluent. Kai's main areas of expertise lie within the fields of Big Data Analytics, Machine Learning, Integration, Microservices, Internet of Things, Stream Processing and Blockchain. He is a regular speaker at international conferences such as JavaOne, O'Reilly Software Architecture or ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog.

Event Details

Apache Kafka® Delivers a Single Source of Truth for The New York Times

Speaker: Boerge Svingen, Director of Engineering, The New York Times

Abstract: With 3.2 million paid print and digital subscriptions last year, how did The New York Times remain a leader in an evolving industry that once relied on print? It fundamentally changed its infrastructure at the core to keep up with the new expectations of the digital age and its consumers. Now every piece of content ever published by The New York Times throughout the past 166 years and counting is stored in Apache Kafka®.

Join The New York Times' Director of Engineering Boerge Svingen to learn how America's innovative news giant transformed the way it sources content while still maintaining searchability, accuracy and accessibility through a variety of applications and services—all through the power of a real-time streaming platform.

In this talk, Boerge will:

    Provide an overview of what the publishing infrastructure used to look like
    Deep dive into the log-based architecture of The New York Times’ Publishing Pipeline
    Explain the schema, monolog and skinny log used for storing articles
    Share challenges and lessons learned
    Answer live questions submitted by the audience

About Boerge: Boerge Svingen was a founder of Fast Search & Transfer (alltheweb.com, FAST ESP). He was later a founder and CTO of Open AdExchange, doing contextual advertising for online news. He is now working on search and backend platforms at the New York Times.

Register

Gluecon

Speaker: Gwen Shapira, Principal Data Architect, Confluent
Session: Kafka and the Service Mesh
May 16, 1:30 pm - 2:10 pm

A service mesh is an infrastructure layer for microservices communication. It abstracts the underlying network details and provides discovery, routing and a variety of other functionality. Apache Kafka® is a distributed streaming platform with pub/sub APIs - also often used to provide an abstract communication layer for microservices. In this talk, we'll discuss the similarities and differences between the communication layer provided by a service mesh and by Apache Kafka. We'll discuss the different paradigms they help implement - streaming vs. request/response - and how to decide which paradigm fits different requirements. We'll then discuss a few ways to combine them and to use Apache Kafka within a service-mesh architecture. We'll conclude with thoughts on how Apache Kafka and its ecosystem can evolve to provide some of the functionality available in service mesh implementations... and vice versa.

Event Details

JFrog swampUP

Speaker: Viktor Gamov, Solutions Architect, Confluent
Speaker: Baruch Sadogursky, Developer Advocate, JFrog
Session: Fight Crime with Kafka Streams and the Bintray Firehose API
Room: 1
May 18, 5:15 pm - 6:00 pm

Abstract: Can you find the malicious-activity needle in the haystack of events on one of the busiest distribution hubs in the world? Processing the streaming events from the Bintray Firehose API with Kafka Streams can give you the superpower to do that. In this session, we will show a real-life example of using KSQL to process and parse huge amounts of data and spot a worrying trend that might be a sign of malicious activity.
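
A minimal, hypothetical sketch of the general pattern behind this kind of detection (counting events per key over a short time window with Kafka Streams and flagging outliers) follows; the topic names, the choice of key and the threshold are assumptions for illustration, not details of the actual Bintray Firehose pipeline.

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class DownloadSpikeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "download-spike-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Hypothetical topic "downloads": one record per download event, keyed by client IP.
            KStream<String, String> downloads = builder.stream("downloads");

            downloads
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))     // events per IP per 5-minute window
                .count()
                .toStream()
                .filter((windowedIp, count) -> count > 1_000)          // arbitrary "too many downloads" threshold
                .map((windowedIp, count) -> KeyValue.pair(windowedIp.key(), "count=" + count))
                .to("suspicious-activity");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }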

About Viktor: Viktor Gamov is a Solutions Architect at Confluent, the company behind the popular Apache Kafka® streaming platform. Viktor has comprehensive knowledge and expertise in enterprise application architecture leveraging open source technologies, and enjoys helping organizations build low-latency, scalable and highly available distributed systems.

He is a professional conference speaker on distributed systems, Java and JavaScript topics, and a regular at prestigious events including swampUP, JavaOne, Devoxx, OSCON and QCon. He also blogs and produces the podcasts "Razbor Poletov" (in Russian) and DevRelRad.io.

About Baruch: Baruch Sadogursky (a.k.a JBaruch) is the Developer Advocate at JFrog. For a living he hangs out with JFrog’s tech leaders, writes code around the JFrog Platform and its ecosystem and then speaks and blogs about it all. He has been doing this for the last dozen years or so, and enjoys every minute of it.

Event Details

DataXDay

Session: Kafka Beyond the Brokers: Stream Processing and Monitoring
Speaker: Florent Ramière, Technical Account Manager, Confluent

Florent is a technical account manager at Confluent. His job is to sit with customers and help them succeed with Apache Kafka®, so he knows a thing or two about Kafka.

Event Details

Strata Data London

Speaker: Michael Noll, Product Manager, Confluent
Session: Unlocking the World of Stream Processing with KSQL, the Streaming SQL Engine for Apache Kafka®
May 23, 14:05 – 14:45

We introduce KSQL, the open source streaming SQL engine for Apache Kafka. KSQL makes it easy to get started with a wide range of real-time use cases such as monitoring application behavior and infrastructure, detecting anomalies and fraudulent activities in data feeds, and real-time ETL. We cover how to get up and running with KSQL and also explore the under-the-hood details of how it all works.

Michael Noll is a product manager at Confluent, the company founded by the creators of Apache Kafka. Previously, Michael was the technical lead of DNS operator Verisign’s big data platform, where he grew the Hadoop, Kafka, and Storm-based infrastructure from zero to petabyte-sized production clusters spanning multiple data centers—one of the largest big data infrastructures in Europe at the time. He is a well-known tech blogger in the big data community. In his spare time, Michael serves as a technical reviewer for publishers such as Manning and is a frequent speaker at international conferences, including Strata, ApacheCon, and ACM SIGIR. Michael holds a PhD in computer science.

Session Details
Event Details

Elastic Stack in a Day (Milan)

Speaker: Paolo Castagna, Account Executive, Confluent
Session: Elastic & Kafka a lovely couple
14:25 - 14:50

Paolo's technical background is in "big data", and he has experienced first-hand the enormous change the software industry is going through, from batch/big data to stream processing/fast data.

Event Details

Building IoT 2018

Speaker: Kai Wähner, Technology Evangelist, Confluent
Session: Process IoT Data with Apache Kafka, KSQL and Machine Learning
June 5, 10:45 - 11:25

IoT devices generate large amounts of data that must be continuously processed and analyzed. Apache Kafka is a highly scalable open source streaming platform for reading, storing, processing and routing large amounts of data from thousands of IoT devices. KSQL is an open source streaming SQL engine built natively on Apache Kafka that makes stream processing available to anyone using simple SQL commands.

Using a healthcare scenario, this talk shows how Kafka and KSQL can help continuously perform health checks on patients. A live demo shows how machine learning models - trained with frameworks such as TensorFlow, DeepLearning4J or H2O - can be deployed in a time-critical and scalable real-time application.
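
For the ingestion side of such a pipeline, here is a minimal, hypothetical Java producer sketch showing how a device gateway could publish patient vital-sign readings into a Kafka topic that downstream Kafka Streams or KSQL jobs would then process; the topic name, key and message format are invented for this example.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class VitalSignsProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical record: key = patient id, value = "heartRate,oxygenSaturation".
                producer.send(new ProducerRecord<>("patient-vitals", "patient-42", "87,96"));
                producer.flush();
            }
        }
    }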

Previous Knowledge
Knowledge of distributed systems and architectures is helpful. Experience with machine learning is helpful, but not mandatory.

Learning Objectives

    Apache Kafka is a streaming platform for reading, storing, processing and forwarding large volumes of data from thousands of IoT devices.
    KSQL allows continuous data integration and analysis without external big data clusters and without writing source code.
    Machine learning models can be easily trained and used in the Apache Kafka environment.

Kai works as a Technology Evangelist at Confluent. His core areas of expertise are Big Data Analytics, Machine Learning/Deep Learning, Messaging, Integration, Microservices, Stream Processing, Internet of Things and Blockchain.

Session Details
Event Details

Velocity Conference

Event Details
Speaker: Gwen Shapira, Principal Data Architect, Confluent
Speaker: Xavier Léauté, Software Engineer, Confluent
Session: Metrics Are Not Enough: Monitoring Apache Kafka®
Room: LL21 A/B
June 13, 11:25 am – 12:05 pm

Prerequisite knowledge: Some knowledge of Apache Kafka is important.

Abstract: When you are running systems in production, clearly you want to make sure they are up and running at all times. But in a distributed system such as Apache Kafka… what does “up and running” even mean?

Experienced Apache Kafka users know what is important to monitor, which alerts are critical and how to respond to them. They don’t just collect metrics – they go the extra mile and use additional tools to validate availability and performance on both the Kafka cluster and their entire data pipelines.

In this presentation we'll discuss best practices for monitoring Apache Kafka. We'll look at which metrics are critical to alert on, which are useful in troubleshooting, and which may actually be misleading. We'll review a few "worst practices" - common mistakes that you should avoid. We'll then look at what metrics don't tell you - and how to cover those essential gaps.

About Gwen: Gwen Shapira is a system architect at Confluent, where she helps customers achieve success with their Apache Kafka implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

About Xavier: One of the first engineers on the Confluent team, Xavier is responsible for analytics infrastructure, including real-time analytics in Kafka Streams. He was previously a quantitative researcher at BlackRock. Prior to that, he held various research and analytics roles at Barclays Global Investors and MSCI. He holds an MEng in Operations Research from Cornell University and a Masters in Engineering from École Centrale Paris.

Session Details
Event Details

Big Data Analytics London

We showcase Big Data use cases and techniques that drive the greatest business value. With an emphasis on real-life implementation of Big Data technologies, this practical business forum will provide bold vision from leading innovators across the data-driven spectrum. Join hundreds of C-suite executives, business strategists, data scientists and analytics professionals to leverage the opportunity to harness your data for competitive advantage.

Event Details

AWS Public Sector Summit

On June 20-21, 2018, global leaders from government, education, and nonprofit organizations will come together for the ninth annual AWS Public Sector Summit in Washington, DC. The move to the cloud is unlike any other technology shift in our lifetime. Don’t miss this opportunity to learn how to use the cloud for complex, innovative, and mission-critical projects. With over 100 breakout sessions led by visionaries, experts, and peers, you’ll take home new strategies and tactics for shaping culture, building new skillsets, saving costs, and achieving your mission. Also, check back soon for an opportunity to register for technical bootcamps and workshops on the Summit Pre-day.

Event Details

Scala Days New York

Speaker: Neha Narkhede, Co-founder and CTO, Confluent
Session: Journey to a Real-Time Enterprise
June 20, 9:00 am - 10:00 am

There is a monumental shift happening in how data powers a company's core business. This shift is about moving away from batch processing and toward real-time data. Apache Kafka® was built with the vision to help companies traverse this change and become the central nervous system that makes data available in real time to all the applications that need to use it.

This talk explains how companies are using the concepts of events and streams to transform their business to meet the demands of this digital future and how Apache Kafka serves as the foundation to streaming data applications. You will learn how KSQL, Connect, and the Streams API with Apache Kafka capture the entire scope of what it means to put real time into practice.

Neha Narkhede is co-founder and CTO at Confluent, the company behind the popular Apache Kafka streaming platform. Prior to founding Confluent, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn's streaming infrastructure built on top of Apache Kafka. She is one of the initial authors of Apache Kafka and a committer and PMC member on the project.

Session Details
Event Details

OSCON Open Source Convention

OSCON has been ground zero of the open source movement. Now in its 20th year, OSCON continues to be a catalyst for innovation and success for companies.

Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: Tutorial
Event Details

Google Cloud Next

Google Cloud Next ’18 is your chance to unlock new opportunities for your business, uplevel your skills, and uncover what’s next for Cloud.

Event Details

Strata Data East

How do you drive business results with data?

Every year thousands of top data scientists, analysts, engineers, and executives converge at Strata Data Conference—the largest gathering of its kind. It's where technologists and decision makers turn data and algorithms into business advantage.

Event Details

SpringOne

Speaker: Neha Narkhede, Co-founder and CTO, Confluent

Neha is the co-founder of Confluent and one of the initial authors of Apache Kafka®. She’s an expert on modern, stream-based data processing.

Event Details

Kafka Summit San Francisco

Discover the World of Streaming Data

As streaming platforms become central to data strategies, companies both small and large are re-thinking their architecture with real-time context at the forefront. Monoliths are evolving into microservices. Data centers are moving to the cloud. What was once a ‘batch’ mindset is quickly being replaced with stream processing as the demands of the business impose more and more real-time requirements on developers and architects.

This revolution is transforming industries. What started at companies like LinkedIn, Uber, Netflix and Yelp has made its way to countless others in a variety of sectors. Today, thousands of companies across the globe build their businesses on top of Apache Kafka®. The developers responsible for this revolution need a place to share their experiences on this journey.

Kafka Summit is the premier event for data architects, engineers, devops professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

Welcome to Kafka Summit San Francisco!

Event Details

Big Data London

Speaker: Jay Kreps, Co-founder and CEO, Confluent
Room: Keynote Theater
13 November, 09:30

Jay Kreps is the CEO of Confluent, Inc., a company backing the popular Apache Kafka® messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects, including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system) and Apache Samza (a stream processing system).

Event Details

Ready to Talk to Us?

Have someone from Confluent contact you.

Contact Us