Confluent is proud to participate in the following conferences, trade shows, and meetups.
An ETL use case: we read data from Kafka topics, transform it into the desired formats using KSQL, and finally load it into a store such as Scylla for further analysis.
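The transformation step in a pipeline like this might look as follows in KSQL. The stream and column names here are hypothetical illustrations, not from the session itself; the resulting topic would then be loaded into Scylla by a sink connector:

```sql
-- Hypothetical: derive a cleaned-up stream from a raw Kafka topic.
CREATE STREAM readings_celsius AS
  SELECT sensor_id,
         (temperature_f - 32) * 5 / 9 AS temperature_c
  FROM raw_readings;
```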
Join us for the Big Data in Healthcare Conference where expert speakers from across the health and technology sectors will explain how data use and analytics will shape the future of the NHS. Study examples of how big data can drive quality, understand the benefits of data sharing between health and social care providers and ensure your data security measures are up to standard.
This session discusses how to build a highly scalable, performant, mission-critical microservice infrastructure with Apache Kafka and Apache Mesos. Apache Kafka brokers are used as a powerful, scalable, distributed message backbone. The Kafka Streams API allows you to embed stream processing directly into any microservice or business application, without the need for a dedicated streaming cluster. Apache Mesos is used as scalable infrastructure under the hood of Apache Kafka and Kafka Streams applications to leverage the benefits of a cloud-native platform, such as service discovery, health checks, and fail-over management.
A live demo shows how to develop real-time applications for your core business with Kafka message brokers and the Kafka Streams API, and how to deploy, manage, and scale them on a Mesos cluster using different deployment options such as Marathon, Docker, and Kubernetes.
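The key architectural point above is that stream processing runs inside the application process itself, with local state, rather than on a separate cluster. A minimal pure-Python sketch of that pattern (this only simulates the idea; the actual Kafka Streams API is a Java/Scala library):

```python
# Conceptual sketch: a stream processor embedded in an ordinary
# application process, instead of running on a dedicated cluster.
class EmbeddedProcessor:
    def __init__(self):
        # Local state, analogous to a Kafka Streams state store.
        self.counts = {}

    def process(self, event):
        # Update state for each incoming event.
        key = event["type"]
        self.counts[key] = self.counts.get(key, 0) + 1

# The processor lives inside the application, consuming events as they arrive.
app_processor = EmbeddedProcessor()
for evt in [{"type": "order"}, {"type": "payment"}, {"type": "order"}]:
    app_processor.process(evt)

print(app_processor.counts)  # {'order': 2, 'payment': 1}
```

Scaling such an application then means running more instances of the application itself, which is exactly what a scheduler like Mesos/Marathon manages.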
Through presentations, demonstrations and a hands-on workshop, we’ll provide you with an introduction to Apache Kafka™, its Streams API and KSQL, the new streaming SQL engine for Apache Kafka.
This workshop is perfect for Kafka beginners or those who have yet to explore its stream processing capabilities. You’ll leave the day with a solid foundation in Kafka, hopefully some fun, and a real-time streaming application that is ready to scale.
Stream processing is changing the way companies organize their data systems architecture and respond to events critical to their business.
In this talk, we'll review how software available with Confluent Open Source can help you hit the ground running when integrating your data systems with Apache Kafka. We'll see how the Kafka Connect API can be leveraged to do the heavy lifting at scale, and how new tools in Confluent Open Source help you use, test, and even develop Kafka Connect plugins.
Current ETL pipelines are entirely too slow: many companies are making critical business decisions based on data that is days old. Augmenting existing ETL pipelines can be error-prone and extremely costly. There has to be a better way.
Apache Kafka is a project that has grown out of necessity. Integrating data at scale is a difficult task that requires purpose-built systems. LinkedIn saw this growth early and faced the difficult task of building agile data pipelines to feed its business-critical data. This talk will focus on the evolution of the streaming data space using Apache Kafka.
Ingesting data is one of the most critical tasks for any integration project, and many projects fall flat in the early stages after having difficulty getting the data they need. Kafka Connect is the E and L of ETL: a distributed framework designed to pull data into and out of Apache Kafka. Ready-made connectors eliminate much of the effort required to bring data into Kafka. Coupled with the distributed nature of Kafka Connect, this gives you a rock-solid method for ingesting data.
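As a concrete illustration, a connector is typically defined by a small JSON document submitted to the Kafka Connect REST API (`POST /connectors`). The example below uses the FileStreamSource connector that ships with Kafka; the connector name, file path, and topic are placeholders:

```json
{
  "name": "file-ingest",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/var/log/app/events.log",
    "topic": "ingested-events"
  }
}
```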
Once the data has made it to the cluster, it's time to do something actionable. Do we really want to land data in a staging database, then wait for data from another source, then wait and wait and wait, all while the nightly ETL window gets shorter and shorter? Why not process this in real time? Kafka Streams and Apache Spark can help you build those nightly tables and views in real time: data can be augmented on the fly and sent to downstream systems much faster as a result.
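The shift from nightly batch to continuous processing can be made concrete with a small sketch: instead of recomputing a table from scratch each night, each event updates the result the moment it arrives, and both approaches converge on the same totals. This is a pure-Python illustration of the idea, not the actual Kafka Streams or Spark APIs:

```python
# Events as they would arrive on a Kafka topic during the day.
events = [
    {"customer": "a", "amount": 30},
    {"customer": "b", "amount": 20},
    {"customer": "a", "amount": 50},
]

# Batch style: wait for the nightly window, then aggregate everything at once.
def nightly_batch(all_events):
    totals = {}
    for e in all_events:
        totals[e["customer"]] = totals.get(e["customer"], 0) + e["amount"]
    return totals

# Streaming style: update the running totals as each event arrives,
# so downstream systems see fresh results immediately.
running = {}
for e in events:
    running[e["customer"]] = running.get(e["customer"], 0) + e["amount"]

assert running == nightly_batch(events)  # same answer, no nightly wait
print(running)  # {'a': 80, 'b': 20}
```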
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and Airbnb, but also established players such as Goldman Sachs, Cisco, and Oracle.
Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.
In this session, we talk about how Apache Kafka helps you radically simplify your data architectures. We cover how you can now build normal applications to serve your real-time processing needs, rather than building clusters or similar special-purpose infrastructure, and still benefit from properties such as high scalability, distributed computing, and fault tolerance, which are typically associated exclusively with cluster technologies. We discuss common use cases to show that stream processing in practice often requires database-like functionality, and how Kafka allows you to bridge the worlds of streams and databases when implementing your own core business applications (inventory management for large retailers, patient monitoring in healthcare, fleet tracking in logistics, etc.), for example in the form of event-driven, containerized microservices.
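The "database-like functionality" in question usually means maintaining a table as a continuously updated view of a stream. A pure-Python sketch of that stream-table duality (a conceptual illustration, not the actual Kafka Streams API):

```python
# A changelog stream of (key, value) updates, e.g. inventory levels.
changelog = [
    ("sku-1", 10),
    ("sku-2", 7),
    ("sku-1", 4),   # a later update overwrites the earlier value
]

# A "table" is simply the latest value per key, obtained by replaying
# the stream from the beginning; Kafka Streams' KTable abstraction
# is built on the same idea.
table = {}
for key, value in changelog:
    table[key] = value

print(table)  # {'sku-1': 4, 'sku-2': 7}
```

Because the table is derived from the stream, it can be rebuilt at any time by replaying the changelog, which is what makes event streams usable as the system of record.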
Event Driven Services come in many shapes and sizes from tiny event-driven functions that dip into an event stream, right through to heavy, stateful services which can facilitate request response. This practical talk makes the case for building this style of system using Stream Processing tools, defining a microservices architecture and leveraging Apache Kafka. We also walk through a number of patterns for how we actually put these things together to enable independent teams and autonomous development of microservices.
Schibsted is setting up a new streaming platform based on Kafka and Kafka Streams. With this new platform, we aim to deliver better performance and enable new features, such as self-serve for our data consumers. In this talk, we will present some of the ways this new platform enables collaboration across Schibsted, as well as some of the challenges we are facing.
A number of collaborations with various teams in Schibsted are underway to build projects on top of this new platform. Examples of such collaborations are projects geared towards building tools for visualization of user behaviour and experimentation.
Our long-term goal is to provide a self-serve platform for real-time processing of data, enabling our data users to quickly create new data-driven applications. Data and analytics is a central part of Schibsted’s strategy, and we believe the streaming platform will play a significant role in building a global data-driven organisation.
The value of an architecture doesn’t lie in a static picture on a whiteboard or even a well-formed POC. It lies in a system’s ability to evolve over time; to grow and expand, not simply in terms of data, throughput, or numbers of users, but as teams and organisations grow.
Streaming platforms provide a unique basis for such systems. They embrace asynchronicity first and foremost, forming a narrative of events that flow from service to service. But events are more than just a communication protocol: they are the facts of our business, a shared dataset sitting at the very heart of our system.
In this talk, we’ll examine how streaming platforms change the way we build business applications: how we can embrace fine-grained, event-driven services, wrap them in efficient transactional guarantees, and evolve our way forwards from legacies of old.
Monzo is dedicated to building a core banking system from scratch, using open-source and modern technologies to provide a unique experience to its customers. This talk describes how we use Kafka in our infrastructure and how it has been deployed on Kubernetes.
An overview of Kafka clients (and why not to write your own ...) with a short dive into the Kafka wire protocol.
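To give a flavour of that wire protocol: in the classic (pre-flexible-versions) format, every Kafka request is a big-endian, length-prefixed frame whose header carries an api_key, api_version, correlation_id, and client_id string. A small sketch of that framing (field layout per the Kafka protocol guide; the client id is a placeholder):

```python
import struct

# Frame a Kafka request header: int32 size prefix, then
# int16 api_key, int16 api_version, int32 correlation_id,
# and a length-prefixed client_id string, all big-endian.
def frame_request(api_key, api_version, correlation_id, client_id, body=b""):
    cid = client_id.encode("utf-8")
    header = struct.pack(">hhih", api_key, api_version,
                         correlation_id, len(cid)) + cid
    payload = header + body
    return struct.pack(">i", len(payload)) + payload  # size prefix

# api_key 18 is ApiVersions, the request real clients send first
# to discover what the broker supports.
frame = frame_request(api_key=18, api_version=0,
                      correlation_id=1, client_id="demo")
size, = struct.unpack(">i", frame[:4])
print(size, len(frame) - 4)  # size prefix equals payload length
```

Getting details like this right for every API and version (plus retries, batching, and metadata handling) is exactly why writing your own client is rarely a good idea.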
Most organizations don't move to the cloud all at once. You start with a new use case or a new application. Sometimes these applications can run independently in the cloud, but oftentimes they need data from the on-premises datacenter.
After the first migration is successful, more applications will follow: brand-new applications will start in the cloud and will need data from existing applications still running on-prem. Existing applications will slowly migrate, but will need a strategy for migrating their data in phases: an initial bulk upload, often followed by incremental updates. In a mature organization, this process can take years.
In this session, Apache Kafka co-creator Jay Kreps will share how companies around the world are using Kafka to migrate to AWS. By implementing a central-pipeline architecture using Apache Kafka to sync on-prem and cloud deployments, companies can accelerate migration times and reduce costs. The Kafka-centric migration process is ultimately more manageable and therefore safer for the organization.
This talk introduces Apache Kafka (including the Kafka Connect and Kafka Streams APIs) and Confluent (the company founded by the creators of Kafka), and explains why Kafka is an excellent, simple solution for managing data streams in the context of two of the main driving forces and industry trends: the Internet of Things (IoT) and microservices.
This talk describes how Kafka has improved the quality of the systems designed and built by Bottega52 for its clients, illustrated by a successful industrial case: a "watermark"-based tracking system commissioned by an Italian food-products multinational. In particular, it presents the evolution of the system, which started as a small "monolith" and, thanks to Kafka, evolved into a service-based architecture with greater reliability and efficiency, following the Command Query Responsibility Segregation (CQRS) pattern.
Microservices bring many benefits, such as agile, flexible development and deployment of business logic. However, a microservice architecture also creates many new challenges, such as increased communication between distributed instances, the need for orchestration, new fail-over requirements, and the need for resiliency design patterns.
This session discusses how to build a highly scalable, performant, mission-critical microservice infrastructure with Apache Kafka and Apache Mesos. Apache Kafka brokers are used as a powerful, scalable, distributed message backbone. The Kafka Streams API allows you to embed stream processing directly into any external microservice or business application, without the need for a dedicated streaming cluster. Apache Mesos can be used as scalable infrastructure for both the Apache Kafka brokers and external applications using the Kafka Streams API, to leverage the benefits of a cloud-native platform, such as service discovery, health checks, and fail-over management.
A live demo shows how to develop real-time applications for your core business with Kafka message brokers and the Kafka Streams API, and how to deploy, manage, and scale them on a Mesos cluster using different deployment options.