Conferences

Confluent is proud to participate in the following conferences, trade shows, and meetups.


Apache Kafka

January 25, 2017
PAYBACK GmbH, München, DE

Details
Jay Kreps, co-creator of Kafka, engineer, and CEO of Confluent, will be in Munich on January 25. He will speak about Kafka's product strategy and roadmap. In addition, Michael Noll, Product Manager, will present on Kafka Streams, and PAYBACK will share their experiences with it.

Meetup: Seattle Apache Kafka

January 25, 2017
Seattle, WA

Details
David Tucker, Director, Partner Engineering & Alliances, Confluent
State of the Streaming Platform 2017: An Overview of Apache Kafka and the Confluent Platform

In the past few years Apache Kafka has emerged as the world's most popular real-time data streaming platform. In this talk, we introduce some key additions to the Apache project from 2016: Kafka Connect and Kafka Streams.
Kafka Connect is the community’s framework for scalable, fault-tolerant data import and export into your streaming platform. By standardizing the most common design patterns seen in large Kafka deployments, Kafka Connect dramatically simplifies the development of ETL pipelines and the integration of disparate data systems across the Kafka backbone. We’ll discuss the Kafka Connect architecture and how you can publish or subscribe to Kafka topics by simply configuring a standard Connector.
Kafka Streams provides a natural DSL for writing stream processing applications and a light-weight deployment model that integrates with any execution framework. As such, it is the most convenient yet scalable option to analyze, transform, or otherwise process data that is streaming through Kafka. We'll round out the discussion with a brief demonstration of a data pipeline illustrating all of these components along with the latest monitoring and alerting capabilities of the Confluent Enterprise offering.
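The abstract's point that Connect needs configuration rather than code can be sketched with a standalone-mode connector file. The file path and topic name below are hypothetical; FileStreamSource is the stock connector that ships with Apache Kafka:

```properties
# connect-file-source.properties (standalone mode; hypothetical file and topic names)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/var/log/app/events.log
topic=app-events
```

Running `bin/connect-standalone.sh` with a worker config and this file would then stream each new line of the file into the `app-events` topic, with no code written.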


Kafka Meetup Utrecht

January 26, 2017
Utrecht, NL

Details
Jay Kreps, CEO of Confluent
TBD

Andrew Stevenson, Datamountaineer
Kafka Connect, a scalable distributed service for simplifying the loading of data into Kafka and the unloading of data to datastores.

Roel Reijerse, Eneco Energy Trade
Automated repeatable deployments of Kafka stream topologies on Kubernetes

London Dev Community Meetup

January 31, 2017
London, UK

Details
Ben Stopford, Engineer at Confluent
Microservices, The Data Dichotomy: Rethinking Data & Services with Streams

When we build Microservices we generally don't think too much about data. We focus on clean abstractions and singular responsibilities. But data systems are built to make data as accessible as possible, something that sits in stark contrast to the encapsulated abstractions most services aim for. This talk explores that dichotomy: data systems are about exposing data, while services are really about hiding it. These are two forces that inevitably compete in serious service-based systems. To address this, we need to treat shared data as a first-class citizen. We'll look at how a distributed log is often used as a mechanism for holding data that is shared between services and, importantly, how stateful stream processors allow data operations to be embedded right in each service. The result is a very different way to architect and build Microservice applications.

Meetup: Chicago Area Kafka Enthusiasts
Jeremy Custenborder, System Engineer, Confluent
Introducing Kafka Streams

Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.

Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.

In this session, we talk about how Apache Kafka helps you to radically simplify your data processing architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault-tolerance, which are typically associated exclusively with cluster technologies. Notably, we introduce Kafka’s Streams API, its abstractions for streams and tables, and its recently introduced Interactive Queries functionality. As we will see, Kafka makes such architectures equally viable for small, medium, and large-scale use cases.


Spark Summit East 2017

February 07, 2017 to February 09, 2017
Boston, MA

Details
Speaker: Ewen Cheslack-Postava

Session: Building Realtime Data Pipelines with Kafka Connect and Spark Streaming

Wednesday, February 8, 2:40 PM – 3:10 PM

Spark Streaming makes it easy to build scalable, robust stream processing applications — but only once you’ve made your data accessible to the framework. If your data is already in one of Spark Streaming’s well-supported message queuing systems, this is easy. If not, an ad hoc solution to import data may work for a single application, but trying to scale that approach to complex data pipelines integrating dozens of data sources and sinks with multi-stage processing quickly breaks down. Spark Streaming solves the real-time data processing problem, but to build large-scale data pipelines we need to combine it with another tool that addresses data integration challenges.

The Apache Kafka project recently introduced a new tool, Kafka Connect, to make data import/export to and from Kafka easier. This talk will first describe some data pipeline anti-patterns we have observed and motivate the need for a tool designed specifically to bridge the gap between other data systems and stream processing frameworks. We will introduce Kafka Connect, starting with basic usage, its data model, and how a variety of systems can map to this model. Next, we’ll explain how building a tool specifically designed around Kafka allows for stronger guarantees, better scalability, and simpler operationalization compared to other general purpose data copying tools. Finally, we’ll describe how combining Kafka Connect and Spark Streaming, and the resulting separation of concerns, allows you to manage the complexity of building, maintaining, and monitoring large-scale data pipelines.


Meetup: Minneapolis Apache Kafka

February 07, 2017
Minneapolis, MN

Details
Ben Stopford, Systems Engineer at Confluent
Introducing Kafka Streams

Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.

Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.

In this session, we talk about how Apache Kafka helps you to radically simplify your data processing architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault-tolerance, which are typically associated exclusively with cluster technologies. Notably, we introduce Kafka’s Streams API, its abstractions for streams and tables, and its recently introduced Interactive Queries functionality. As we will see, Kafka makes such architectures equally viable for small, medium, and large-scale use cases.


Elastic{ON}

March 07, 2017 to March 09, 2017
San Francisco, CA

Details

QCon London: Workshops

March 09, 2017 to March 10, 2017
London, UK

Details
Ian Wrigley, Director, Education Services at Confluent
Real-Time Streaming Data Pipelines with Apache Kafka

In this workshop, we will show how Kafka Connect and Kafka Streams can be used together to build a real-world, real-time data pipeline. Using Kafka Connect, we will ingest data from a relational database into Kafka topics as the data is being generated. We then will process and enrich the data in real time using Kafka Streams, before writing it out for further analysis.

We’ll see how easy it is to use Connect to ingest and export data (no code is required), and how the Kafka Streams Domain Specific Language (DSL) means that developers can concentrate on business logic without worrying about the low-level plumbing of streaming data processing. Because Streams is a Java library, developers can build real-time applications without needing a separate cluster to run an external stream processing framework.
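The "no code required" ingest step the workshop describes can be sketched as a JDBC source connector configuration. The connection URL, table name, and topic prefix below are hypothetical, and the JDBC connector is part of Confluent Platform rather than Apache Kafka itself:

```properties
# jdbc-source.properties: stream new rows from a relational table into Kafka (hypothetical values)
name=orders-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://localhost:5432/shop
mode=incrementing
incrementing.column.name=id
table.whitelist=orders
topic.prefix=db-
```

With a config like this, each new row in the `orders` table would appear as a message on the `db-orders` topic, ready for a Kafka Streams application to process and enrich.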


Strata + Hadoop World: San Jose

March 14, 2017 to March 16, 2017
San Jose, CA

Details
Speaker: Gwen Shapira, System Architect at Confluent

One cluster does not fit all: Architecture patterns for multicluster Apache Kafka deployments

There are many good reasons to run more than one Kafka cluster. And a few bad reasons too. Great architectures are driven by use cases, and multicluster deployments are no exception. Gwen Shapira offers an overview of several use cases, including real-time analytics and payment processing, that may require multicluster solutions to help you better choose the right architecture for your needs.

Speaker: Ian Wrigley, Director, Education Services at Confluent

Workshop: Building Real-time Data Pipelines with Apache Kafka

Ian Wrigley demonstrates how Kafka Connect and Kafka Streams can be used together to build real-world, real-time streaming data pipelines. Using Kafka Connect, you’ll ingest data from a relational database into Kafka topics as the data is being generated and then process and enrich the data in real time using Kafka Streams before writing it out for further analysis.

You’ll see how easy it is to use Connect to ingest and export data (no code required), and how the Kafka Streams domain-specific language (DSL) means that developers can concentrate on business logic without worrying about the low-level plumbing of streaming data processing. And because Streams is a Java library, developers can build real-time applications without needing a separate cluster to run an external stream processing framework.

O'Reilly Software Architecture Conference

April 03, 2017 to April 05, 2017
New York, NY

Details
Ben Stopford, Engineer at Confluent
The Data Dichotomy: Rethinking Data & Services with Streams

This talk looks at two forces which sit in opposition: data systems (which focus on exposing data), and services (which focus on encapsulating it). How should we balance these two? Streaming has a solution.


dotScale 2017

April 24, 2017
Paris, FR

Details
Neha Narkhede, Co-creator of Apache Kafka, Co-founder and CTO of Confluent

Kafka Summit New York

May 08, 2017
New York City, NY

Details

Sponsorship opportunities available. Request more information


Strata + Hadoop World: London

May 23, 2017 to May 25, 2017
London, UK

Details

Kafka Summit San Francisco

August 28, 2017
San Francisco, CA

Details

Call for Papers is open. Submit proposal

Sponsorship opportunities available. Request more information

Ready to Talk to Us?

Have someone from Confluent contact you.

Contact Us