Conferences

Confluent is proud to participate in the following conferences, trade shows, and meetups.

Kafka Summit San Francisco

Speaker: Neha Narkhede, Co-founder and CTO
Session: Keynote
9:00am - 9:20am
Speaker: Benjamin Stopford, Engineer
Session: Building Event-Driven Services with Stateful Streams
Room: Systems Track
10:30am - 11:10am

Event-driven services come in many shapes and sizes, from tiny event-driven functions that dip into an event stream right through to heavy, stateful services that can facilitate request-response. This practical talk makes the case for building this style of system using stream processing tools. We also walk through a number of patterns for how to actually put these things together.

Speaker: Gwen Shapira, Product Manager
Session: One Data Center is Not Enough: Scaling Apache Kafka Across Multiple Data Centers
Room: Pipelines Track
11:20am - 12:00pm

You have made the transition from single machines and one-off solutions to distributed infrastructure in your data center powered by Apache Kafka. But what if one data center is not enough? In this session, we review resilient data pipelines with Apache Kafka that span multiple data centers. We provide an overview of best practices and common patterns, including key areas such as architecture and data replication, as well as disaster scenarios and failure handling.

Speaker: Nick Dearden, Director of Engineering
Session: Kafka Stream Processing for Everyone
Room: Streams Track
12:10pm - 12:50pm

The rapidly expanding world of stream processing can be confusing and daunting, with new concepts to learn (various types of time semantics, windowed aggregate changelogs, and so on) but also new frameworks and programming models. Multiply this by the operational complexities of multiple distributed systems and the learning curve is steep indeed. Come hear how to simplify your streaming life.
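For a flavor of one of those concepts, here is a minimal sketch of a windowed aggregation in the Kafka Streams DSL. It is illustrative only, not taken from the session; the application id and topic names are invented, and the API shapes follow recent Kafka releases.

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class PageViewCounts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pageview-counts-app"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> views = builder.stream("pageviews");           // hypothetical topic

        // Count views per page over five-minute event-time windows; the result
        // is a continuously updated changelog written back to Kafka.
        views.groupByKey()
             .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
             .count()
             .toStream((windowedKey, count) -> windowedKey.key())              // drop window metadata from the key
             .to("pageview-counts", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```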

Speaker: Sriram Subramanian, Director, Platform & Infra Engineering
Session: Running Kafka as a Service at Scale
Room: Systems Track
1:50pm - 2:30pm

Apache Kafka is recognized as the world’s leading real-time, fault-tolerant, highly scalable streaming platform. It is adopted widely across thousands of companies worldwide, from web giants like LinkedIn, Netflix, and Uber to large enterprises like Apple, Cisco, and Goldman Sachs. In this talk, we will look at what Confluent has done, with help from the community, to enable running Kafka as a fully managed service. The engineers at Confluent have spent multiple years running Kafka as a service and learned valuable lessons in the process, including how different things are when you run Kafka in a controlled environment inside a single company versus running it for thousands of companies. This talk will go over those lessons and what we have built in Kafka as a result, which is available to all Kafka users as part of Confluent Cloud.

Speaker: David Tucker, Director, Partner Engineering
Session: Kafka Connect Best Practices - Advice from the Field
Room: Pipelines Track
2:40pm - 3:20pm

This talk will review the Kafka Connect framework and discuss building data pipelines using the library of available connectors. We’ll deploy several data integration pipelines and demonstrate:

  • Best practices for configuring, managing, and tuning the connectors
  • Tools to monitor data flow through the pipeline
  • Using Kafka Streams applications to transform or enhance the data in flight
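To make the configuration side concrete, the sketch below registers Kafka's bundled FileStreamSource example connector through a Connect worker's REST API. It is illustrative, not from the session; the worker URL, file path, and topic name are assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition for the bundled FileStreamSource example connector;
        // the file and topic names here are invented for illustration.
        String connector = """
            {
              "name": "local-file-source",
              "config": {
                "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                "tasks.max": "1",
                "file": "/tmp/input.txt",
                "topic": "file-lines"
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors")) // default Connect REST port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```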

Speaker: Matthias Sax, Engineer
Session: Query the Application, Not a Database: "Interactive Queries" in Kafka's Streams API
Room: Streams Track
4:30pm - 5:10pm

Kafka Streams lets you build scalable streaming apps without a cluster. This “Cluster-to-go” approach is extended by a “DB-to-go” feature: Interactive Queries lets you directly query an app's internal state, eliminating the need for an external database to access this data. This avoids redundantly stored data and database update latency, and simplifies the overall architecture, e.g., for microservices.
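A minimal sketch of the idea, not from the session itself: materialize a count as a named state store, then query it in-process. The topic, store name, and the crude start-up wait are illustrative.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableCounts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "queryable-counts");  // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");           // hypothetical topic
        // Materialize the count as a named, queryable state store.
        events.groupByKey().count(Materialized.as("counts-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        Thread.sleep(5_000); // crude wait; a real app would use a state listener

        // Query the app's local state directly; no external database involved.
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
            StoreQueryParameters.fromNameAndType("counts-store",
                QueryableStoreTypes.keyValueStore()));
        System.out.println("count for 'some-key': " + store.get("some-key"));
    }
}
```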

Speaker: Guozhang Wang, Engineer
Session: Exactly-once Stream Processing with Kafka Streams
Room: Systems Track
5:20pm - 6:00pm

In this talk, we present the recent additions to Kafka to achieve exactly-once semantics within its Streams API for stream processing use cases. This is achieved by leveraging the underlying idempotent and transactional client features. The main focus will be the specific semantics that Kafka distributed transactions enable in Streams and the underlying mechanics to let Streams scale efficiently.
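For reference, opting a Streams application into exactly-once processing is a one-line configuration change. This is a hedged sketch, not session material; the application id is invented, and the EXACTLY_ONCE_V2 constant is the guarantee name used in recent Kafka releases.

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class EosConfig {
    // Returns Streams settings with exactly-once processing enabled. A single
    // config switch turns on the idempotent producer and transactions under
    // the hood.
    static Properties eosProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app");           // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}
```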

Learn More

Dublin Apache Kafka Meetup

RSVP
Speaker: Elizabeth K. Joseph, Developer Advocate, Mesosphere
Session: The SMACK Stack
7:00pm - 7:50pm

With today's increased focus on data workloads, we're now faced with finding scalable solutions to process that data quickly and accurately. This talk will cover one of the ways to tackle the big and fast data challenges: by using the SMACK Stack. We will explore each of the technologies in this stack (Apache Spark, Apache Mesos, Akka, Apache Cassandra, and of course, Apache Kafka), how they work together, and how you can get going with all of them quickly on open source DC/OS, which uses the "M" for "Mesos" under the hood. The presentation will conclude with a live stream data processing demo that we've open sourced, using data from the Los Angeles Metro system.

RSVP

San Francisco Bay Area Big Data and Scalable Systems

Speaker: Xavier Léauté, Software Engineer, Confluent
Session: Kafka Streams: Are your streams keeping up? Monitoring for a streaming world

Apache Kafka helps you radically simplify your data architectures, and Kafka Streams lets you easily build distributed applications and microservices that directly tap those data streams. As this development model becomes more prevalent, so does the need to monitor all those event streams. We will go over some less talked about aspects of how to make sure your streaming applications are keeping up, and how to ensure that all the data that goes in also makes it out.
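One low-tech way to check whether an app is keeping up is to read its built-in metrics. The sketch below is illustrative, not from the talk; it dumps thread-level metrics from a running KafkaStreams instance, and the "stream-thread-metrics" group name follows recent Kafka releases.

```java
import java.util.Map;

import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

public class StreamsMetricsDump {
    // Print thread-level metrics (e.g. process rate and latency) from a
    // running KafkaStreams instance to verify the topology is keeping up.
    static void dumpThreadMetrics(KafkaStreams streams) {
        for (Map.Entry<MetricName, ? extends Metric> e : streams.metrics().entrySet()) {
            if ("stream-thread-metrics".equals(e.getKey().group())) {
                System.out.println(e.getKey().name() + " = " + e.getValue().metricValue());
            }
        }
    }
}
```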

RSVP

Hadoop User Group Vienna

RSVP
Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka's Streams API

Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part explains how to build analytic models with R, Python, or Scala, leveraging open source machine learning and deep learning frameworks like TensorFlow or H2O. The second part discusses deploying these analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka's Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable, mission-critical, and performant way.
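The deployment pattern the session describes, embedding a trained model inside a Streams topology, can be sketched as follows. Only the Kafka Streams calls are real API; the Model interface and the topic names are hypothetical stand-ins for a TensorFlow or H2O export.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class ModelScoringTopology {
    // Stand-in for a model exported from TensorFlow or H2O; a real project
    // would wrap the framework's own Java API here.
    interface Model {
        String predict(String features);
    }

    static void buildTopology(StreamsBuilder builder, Model model) {
        KStream<String, String> events = builder.stream("sensor-events"); // hypothetical topic
        // The model travels with the app: every instance scores its share of
        // partitions, so scoring scales the same way the app itself does.
        events.mapValues(model::predict)
              .to("scored-events");                                       // hypothetical topic
    }
}
```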

Join us

Munich Apache Kafka Meetup

RSVP
Speaker: Jay Kreps, CEO, Confluent
Session: The Rise of the Streaming Platform
6:05pm - 6:40pm

What happens if you take everything that is happening in your company — every click, every database change, every application log — and make it all available as a real-time stream of well structured data?

Jay will discuss the experience at LinkedIn and elsewhere moving from batch-oriented ETL to real-time streams using Apache Kafka. He’ll talk about how the design and implementation of Kafka was driven by this goal of acting as a real-time platform for event data. Jay will cover some of the challenges of scaling Kafka to hundreds of billions of events per day at LinkedIn, supporting thousands of engineers, applications, and data systems in a self-service fashion.

He’ll describe how real-time streams can become the source of ETL into Hadoop or a relational data warehouse, and how real-time data can supplement the role of batch-oriented analytics in Hadoop or a traditional data warehouse.

Jay will also describe how applications and stream processing systems such as Storm, Spark, or Samza can make use of these feeds for sophisticated real-time data processing as events occur.

Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: Highly Scalable Machine Learning in Real Time with Apache Kafka's Streams API
6:40pm - 7:15pm

Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part explains how to build analytic models with R, Python, or Scala, leveraging open source machine learning and deep learning frameworks like TensorFlow or H2O. The second part discusses deploying these analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka's Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable, mission-critical, and performant way.

RSVP

MesosCon North America

Speaker: Kaufman Ng, Solutions Architect, Confluent
Session: Deploying Kafka on DC/OS
Thursday, September 14, 2017, 2:00pm - 2:40pm

Apache Kafka is increasingly popular as the streaming platform of choice for real-time data pipelines. Kafka and microservices are also frequently deployed together on DC/OS. In this presentation, Kaufman Ng will discuss best practices for deploying Kafka on DC/OS, the challenges involved, and lessons learned from customer deployments.

Event Details

Elastic{ON} Tour: Chicago

This one-day event puts you within high-fiving distance of Elastic experts. Get the best practices and know-how you need to drive success with Elasticsearch and the open source Elastic Stack.

Register

Paris Apache Kafka Meetup

RSVP
Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka's Streams API
7:15pm - 7:45pm

Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part explains how to build analytic models with R, Python, or Scala, leveraging open source machine learning and deep learning frameworks like TensorFlow or H2O. The second part discusses deploying these analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka's Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable, mission-critical, and performant way.

RSVP

PhillyJUG Meetup

RSVP
Speaker: Viktor Gamov
Session: Divide, Distribute and Conquer: Stream v. Batch
6:30pm - 8:30pm

Data is flowing all around us: from phones, credit cards, sensor-equipped buildings, vending machines, thermostats, trains, buses, planes, posts to social media, digital pictures and video, and so on. Simple data collection is not enough anymore. Most current systems do data processing via nightly extract, transform, and load (ETL) operations, an approach that is common in enterprise environments and requires decision makers to wait an entire day (or night) for reports to become available. But businesses don’t want “Big Data” anymore; they want “Fast Data”. What distinguishes a streaming system from a batch system is that the event stream is unbounded, or “infinite”, from the system’s perspective. Decision makers need to analyze these streaming events as a whole to make business decisions as new information arrives. In this talk, after a short introduction to common approaches and architectures (lambda, kappa), Viktor will demonstrate how Hazelcast Jet can be used for in-memory stream processing.

RSVP

Zürich Apache Kafka Meetup

Speaker: Michael Noll, Product Manager, Confluent
Session: Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases
6:30pm - 7:00pm

Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.

Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.

In this session we talk about how Apache Kafka helps you to radically simplify your data architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault tolerance, which are typically associated exclusively with cluster technologies. We discuss common use cases to show that stream processing in practice often requires database-like functionality, and how Kafka allows you to bridge the worlds of streams and databases when implementing your own core business applications (inventory management for large retailers, patient monitoring in healthcare, fleet tracking in logistics, etc.), for example in the form of event-driven, containerized microservices.
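As a small illustration of bridging streams and databases (a sketch, not session material): enrich an order stream with the latest customer record, with a Kafka Streams KTable playing the database role. The topic names are invented.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrderEnrichment {
    static void buildTopology(StreamsBuilder builder) {
        // A stream of orders, keyed by customer id (topic names illustrative).
        KStream<String, String> orders = builder.stream("orders");
        // A table view of the customers topic: Kafka keeps the latest record
        // per key, playing the role a database would otherwise play.
        KTable<String, String> customers = builder.table("customers");

        // Enrich each order with the current customer record as it arrives.
        orders.join(customers, (order, customer) -> order + " for " + customer)
              .to("enriched-orders");
    }
}
```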

Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: How to Apply Machine Learning Models to Real Time Processing with Apache Kafka Streams
7:00pm - 7:30pm

Big Data and Machine Learning are key for innovation in many industries today. The first part of this session explains how to build analytic models with R, Python or Scala leveraging open source machine learning / deep learning frameworks like Apache Spark, TensorFlow or H2O.ai. The second part discusses the deployment of these built analytic models to your own applications or microservices leveraging the Apache Kafka cluster and Kafka Streams. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable and performant way.

RSVP

Strata Data Conference

Speaker: Gwen Shapira, System Architect, Confluent
Session: Architecting a next generation data platform
Room: 1E 12/13
September 26, 2017, 1:30pm - 5:00pm

Rapid advancements are causing a dramatic evolution in both the storage and processing capabilities of the open-source big data software ecosystem. These advancements include projects like:

  • Apache Kudu, a modern columnar data store that complements HDFS and Apache HBase by offering efficient analytical capabilities and fast inserts and updates with Hadoop.
  • Apache Kafka, which provides a high-throughput and highly reliable distributed message transport.
  • Apache Impala (incubating), a highly concurrent, massively parallel processing query engine for Hadoop.
  • Apache Spark, which is rapidly replacing frameworks such as MapReduce for processing data on Hadoop due to its efficient design and optimized use of memory. Spark components such as Spark Streaming and Spark SQL provide powerful near real-time processing.

Along with the Apache Hadoop platform, these storage and processing systems provide a foundation for implementing powerful data processing applications on batch and streaming data. While these advancements are exciting, they also add a new array of tools that architects and developers need to understand when architecting solutions with Hadoop.

Using an example based on Customer 360 and the Internet of Things, we’ll explain how to architect a modern, real-time big data platform leveraging components like Kafka, Impala, Kudu, Spark Streaming, Spark SQL, and Hadoop to enable new forms of data processing and analytics. Along the way, we’ll discuss considerations and best practices for utilizing these components to implement solutions, cover common challenges and how to address them, and provide practical advice for building your own modern, real-time big data architectures.

Topics include:

  • Accelerating data processing tasks such as ETL and data analytics by building near real-time data pipelines using tools like Kafka, Spark Streaming, and Kudu.
  • Building a reliable, efficient data pipeline using Kafka and tools in the Kafka ecosystem such as Kafka Connect and Kafka Streams along with Spark Streaming.
  • Providing users with fast analytics on data with Impala and Kudu.
  • Illustrating how these components complement the batch processing capabilities of Hadoop.
  • Leveraging these capabilities along with other tools such as Spark MLlib and Spark SQL to provide sophisticated machine-learning and analytical capabilities for users.

Session Details
Speaker: Dustin Cote, Customer Operations Engineer, Confluent
Session: Mistakes were made, but not by us: Lessons from a year of supporting Apache Kafka
Room: 1E 07/08
September 27, 2017, 2:05pm - 2:45pm

The number of deployments of Apache Kafka at enterprise scale has greatly increased in the years since Kafka’s original development in 2010. Along with this rapid growth has come a wide variety of use cases and deployment strategies that transcend what Kafka’s creators imagined when they originally developed the technology. As the scope and reach of streaming data platforms based on Apache Kafka has grown, the need to understand monitoring and troubleshooting strategies has as well. Topics include:

  • Effective use of JMX for Kafka
  • Tools for preventing small problems from becoming big ones
  • Efficient architectures proven in the wild
  • Finding and storing the right information when it all goes wrong
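On the JMX point, here is a minimal sketch of reading one commonly watched broker metric over JMX. It is illustrative, not from the talk, and assumes the broker was started with JMX enabled on port 9999 (e.g. via the JMX_PORT environment variable).

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerJmxCheck {
    public static void main(String[] args) throws Exception {
        // "broker1:9999" is a placeholder for a broker with JMX enabled.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // A commonly watched broker metric: incoming message rate.
            ObjectName messagesIn = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            System.out.println("MessagesInPerSec (1m rate): "
                + mbsc.getAttribute(messagesIn, "OneMinuteRate"));
        }
    }
}
```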

Session Detail
Speaker: Jun Rao, Co-Founder, Confluent
Session: Apache Kafka Core Internals: A Deep Dive
Room: 1E 07/08
September 27, 2017, 2:55pm - 3:35pm

In the last few years, Apache Kafka has emerged as a streaming platform and has been used extensively in enterprises for real-time data collection, delivery, and processing. This talk provides a deep dive into some of the key internals that help make Kafka popular and provide strong reliability guarantees. Companies like LinkedIn are now sending more than 1 trillion messages per day to Kafka. Learn about the underlying design in Kafka that leads to such high throughput. Many companies (e.g., financial institutions) are now storing mission-critical data in Kafka. Learn how Kafka supports high reliability through its built-in replication mechanism. One common use case of Kafka is propagating updatable database records. Learn how a unique feature called compaction in Apache Kafka is designed to solve this kind of problem more naturally.
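To illustrate the compaction use case, the sketch below creates a compacted topic with the AdminClient. It is illustrative rather than session material; the topic name, partition count, and replication factor are invented.

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Compaction keeps the latest value per key, which is what makes
            // Kafka a natural fit for propagating updatable database records.
            NewTopic topic = new NewTopic("customer-records", 6, (short) 3)
                .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}
```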

Session Detail
Speaker: Gwen Shapira, Product Manager, Confluent
Session: The Three Realities of Modern Programming: Cloud, Microservices, and the Explosion of Data
Room: 1A 23/24
September 28, 2017, 11:20am - 12:00pm

Learn how the three realities of modern programming – the explosion of data and data systems, building business processes as microservices instead of monolithic applications, and the rise of the public cloud – affect how developers and companies operate today, and why companies across all industries are turning to streaming data and Apache Kafka for mission-critical applications.

Session Detail
Speaker: Gwen Shapira, Product Manager, Confluent
Session: One Cluster Does Not Fit All: Architecture Patterns for Multicluster Apache Kafka Deployments
Room: 1E 07/08
September 28, 2017, 2:05pm - 2:45pm

In the last year, multicluster and cross-data center deployments of Apache Kafka have become the norm rather than an exception. The reasons are many and include:

  • Different groups in the same company using Kafka in different ways
  • Collecting information from many geographical regions and branches to a centralized analytics cluster
  • Planning for cases where an entire cluster or data center is not available
  • Using Kafka to assist in cloud migration

Robin Moffatt offers an overview of several use cases, including real-time analytics and payment processing, that may require multicluster solutions and discusses real-world examples with their specific requirements. Robin outlines the pros and cons of several common architecture patterns, including:

  • Multitenant Kafka clusters
  • Active-active multiclusters
  • Failover clusters
  • Stretching a single cluster between multiple data centers
  • Using Kafka to bridge between clouds or between on-premises and the cloud

Along the way, Robin explores the features of Apache Kafka and demonstrates how to use this understanding of Kafka to choose the right architecture for use cases from the financial, retail, and media industries.

Session Detail
Speaker: Tim Berglund, Sr Director of DevX, Confluent
Session: Heraclitus, Enterprise Architecture, and Streaming Data
Room: 1E 07/08
September 28, 2017, 2:55pm - 3:35pm

Hailing from the Persian city of Ephesus in around 500 BC, the Greek philosopher Heraclitus is famous for his trenchant analysis of big data stream processing systems, saying “You never step into the same river twice.” Central to his philosophy was the idea that all things change constantly. His close readers also know him as the Weeping Philosopher—perhaps because dealing with constantly changing data at low latency is actually pretty hard. It doesn’t need to be that way.

Almost as famous as Heraclitus is Apache Kafka, the de facto standard open-source distributed stream processing system. Many of us know Kafka’s architectural and API particulars as well as we know the philosophy of Heraclitus, but that doesn’t mean we know how the most successful deployments of Kafka work. In this talk, I’ll present several real-world systems built on Kafka, not just as a giant message queue, but as a platform for distributed stream computation.

The talk will include a brief summary of Kafka architecture and (probably Java) APIs, then a detailed description of several architectures drawn from live customer deployments. The role of stream processing will be featured in each, with attention given to what computation gets done in the stream, how Kafka fills the role of persistence rather than merely a message queue, and what other persistence and computational technologies are present in the system.

Session Detail
Event Details

GOV ICT 2.0

Now in its 6th year, GOV ICT 2.0 returns as the UK’s annual must-attend one-day conference for 350+ senior-level executives, tech leaders, and digital experts from across Central and Local Government.

This year the conference will review the technical and infrastructure requirements presented by the 2020 Government Transformation Strategy. Join us to explore the technology, procurement and leadership challenges of leading end-to-end digital transformation projects and enabling the delivery of world-class public services for citizens.

Event Details

Strange Loop 2017

Speaker: Jason Gustafson, Apache Kafka Committer & Confluent Engineer
Session: EOS In Kafka: Listen Up, I Will Only Say This Once!

Apache Kafka's rise in popularity as a streaming platform has demanded a revisit of its traditional at-least-once message delivery semantics. In this talk, we present the recent additions to Kafka to achieve exactly-once semantics (EoS) including support for idempotence and transactions in the Kafka clients. The main focus will be the specific semantics that Kafka distributed transactions enable and the underlying mechanics which allow them to scale efficiently. We will discuss Kafka's spin on standard two-phase commit protocols, how transaction state is maintained and replicated, and how different failure scenarios are handled. We will also share our view of future improvements that will make exactly-once stream processing with Kafka even better!
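The client-side shape of those transactions can be sketched as follows (illustrative, not taken from the talk; the broker address, transactional id, and topic names are invented):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Idempotence plus a transactional.id is what unlocks EoS.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-producer-1"); // hypothetical id

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("payments", "order-42", "debit:10.00"));
                producer.send(new ProducerRecord<>("audit", "order-42", "debit recorded"));
                producer.commitTransaction(); // both records become visible atomically
            } catch (KafkaException e) {
                producer.abortTransaction();  // nothing from this transaction is exposed
                throw e;
            }
        }
    }
}
```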

Session Info
Event Details

JavaOne

Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: Apache Kafka Streams + TensorFlow + H2O.ai = Highly Scalable Deep Learning

Big data and machine learning are key for innovation today. The first part of this session explains how to build analytic models leveraging open source machine learning and deep learning frameworks such as Apache Spark, TensorFlow, or H2O.ai. The second part discusses the deployment of these analytic models to your own applications or microservices, leveraging the Apache Kafka cluster and Kafka Streams instead of building a new stream-processing cluster. The session focuses on live demos and also teaches lessons learned for executing analytic models in a highly scalable and performant way. The last part explains how Apache Kafka can help you move from manual build and deployment of analytic models to continuous online model improvement in real time.

Details

Oracle OpenWorld 2017

Speaker: Robin Moffatt, Partner Technology Evangelist, EMEA, Confluent
Session: Kafka's Role in Implementing Oracle's Big Data Reference Architecture

Big data... big mess? Without a flexible and proven platform design up front, there is the risk of a mess of point-to-point feeds. The solution to this is Apache Kafka, which enables stream or batch consumption of the data by multiple consumers. Implemented as part of Oracle's big data architecture, it acts as a flexible and scalable data bus for the enterprise. This session introduces the concepts of Kafka and a distributed streaming platform, and explains how it fits within the big data architecture. See it used with Oracle GoldenGate to stream data into the data reservoir, as well as ad hoc population of discovery lab environments, microservices, and real-time search.

Speaker: Robin Moffatt, Partner Technology Evangelist, Confluent
Session: An Enterprise Databus: Oracle GoldenGate in the Cloud Working with Kafka and Spark

Enterprises are going through massive changes and must react to events quickly. This is forcing enterprises to rethink their data management and information management architectures. One of the key shifts has happened in the way information is streamed and processed. In this session learn about the Oracle GoldenGate databus capabilities that enable customers to build a scalable stream processing platform on Kafka. Capabilities discussed include how Oracle GoldenGate handles change data and metadata to ensure downstream consumers are robust and resilient. This session also includes a case study using Oracle GoldenGate for Big Data to build an enterprise databus.

Event Details

Elastic{ON} Tour: Seattle

We’re coming to the Emerald City for a one-day event that puts you within high-fiving distance of Elastic experts. Get the best practices and know-how you need to drive success with Elasticsearch and the open source Elastic Stack.

Register

JAX London

Speaker: Tim Berglund, Sr Director, Developer Experience
Session: Apache Kafka, Enterprise Architecture, and Streaming Data: What we can learn from Heraclitus
October 10, 11:30 - 12:20

Apache Kafka is the de facto standard open-source distributed stream processing system. In this talk, I’ll present several real-world systems built on Kafka, not just as a giant message queue, but as a platform for distributed stream computation. Many of us know Kafka’s architectural and API particulars, but that doesn’t mean we know how the most successful deployments of Kafka work.

Hailing from the Persian city of Ephesus in around 500 BC, the Greek philosopher Heraclitus is famous for his trenchant analysis of big data stream processing systems, saying “You never step into the same river twice.” Central to his philosophy was the idea that all things change constantly. His close readers also know him as the Weeping Philosopher—perhaps because dealing with constantly changing data at low latency is actually pretty hard. It doesn’t need to be that way.

Session Details
Event Details

Software Architecture Conference

Speaker: Ben Stopford, Engineer at Confluent
Session: Rethinking Microservices with Stateful Streams
Room: King's Suite - Sandringham
October 16, 15:50 - 16:40

When building service-based systems, we don’t generally think too much about data. If we need data from another service, we ask for it. This pattern works well for whole swathes of use cases, particularly ones where datasets are small and requirements are simple. But real business services have to join and operate on datasets from many different sources. This can be slow and cumbersome in practice.

These problems stem from an underlying dichotomy. Data systems are built to make data as accessible as possible — a mindset that focuses on getting the job done. Services, instead, focus on encapsulation — a mindset that allows independence and autonomy as we evolve and grow. But these two forces inevitably compete in most serious service-based architectures.

Ben Stopford explains why understanding and accepting this dichotomy is an important part of designing service-based systems at any significant scale. Ben looks at how companies use log-backed architectures to build an immutable narrative that balances data that sits inside their services with data that is shared, an approach that allows the likes of Uber, Netflix, and LinkedIn to scale to millions of events per second.

Ben concludes with a set of implementation patterns, starting lightweight and gradually getting more functional, paving the way for an evolutionary approach to building log-backed microservices.

Session Info
Event Details

All Things Open

A conference exploring open source, open tech, and the open web in the enterprise.

Event Details

Big Data in Healthcare Conference

Join us for the Big Data in Healthcare Conference where expert speakers from across the health and technology sectors will explain how data use and analytics will shape the future of the NHS. Study examples of how big data can drive quality, understand the benefits of data sharing between health and social care providers and ensure your data security measures are up to standard.

Event Details

Elastic{ON} Tour: Washington, DC

This no-cost, one-day event delivers the best practices and know-how you need to drive success with Elasticsearch and the open source Elastic Stack in the public sector.

Register

GDG DevFest 2017

RSVP
Speaker: Michael Noll
RSVP

W-JAX

Speaker: Ben Stopford, Engineer at Confluent
Session: Keynote: The Streaming Transformation

The value of an architecture doesn’t lie in a static picture on a whiteboard or even a well-formed POC. It lies in a system’s ability to evolve over time; to grow and expand, not simply in terms of data, throughput, or numbers of users, but as teams and organisations grow.

Streaming Platforms provide a unique basis for such systems. They embrace asynchronicity first and foremost, forming a narrative of events that flows from service to service. But events are more than just a communication protocol. They are the facts of our business: a shared dataset sitting at the very heart of our system.

In this talk, we’ll examine how Streaming Platforms change the way we build business applications. How we can embrace fine grained event driven services, wrap them in efficient transactional guarantees, and evolve our way forwards from legacies of old.

Keynote Details
Details

Big Data LDN

Speaker: Neha Narkhede, CTO and Co-Founder, Confluent
Session: Keynote: The Rise of the Streaming Platform
November 15, 2017
Details

AWS re:Invent

Speaker: Neha Narkhede, Co-Founder and CTO of Confluent
Session: Bridge to the Cloud: Using Apache Kafka to Migrate to AWS

Most organizations don't move to the cloud all at once. You start with a new use case or a new application. Sometimes these applications can run independently in the cloud, but oftentimes they need data from the on-premises datacenter.

After the first migration is successful, more applications will follow: brand-new applications will start in the cloud and will need some data from existing applications that are still running on-prem. Existing applications will slowly migrate but will need a strategy for migrating their data in phases: an initial bulk upload, often followed by incremental updates. In a mature organization, this process can take years.

In this session, Apache Kafka co-creator Neha Narkhede will share how companies around the world are using Kafka to migrate to AWS. By implementing a central-pipeline architecture using Apache Kafka to sync on-prem and cloud deployments, companies can accelerate migration times and reduce costs. The Kafka-centric migration process is ultimately more manageable and therefore safer for the organization.

Details

Ready to Talk to Us?

Have someone from Confluent contact you.

Contact Us