View sessions and slides from Kafka Summit APAC 2021
Kafka Summit APAC 2021 Keynote: Leveraging Data in Motion - Jun Rao, Co-Founder, Confluent
Kafka Summit APAC 2021 Keynote: Leveraging Cloud-native Managed Services for Speed, Reliability, and Scale - a conversation with Jun Rao, Co-Founder of Confluent and Wui Ngiap Foo, Head of Technology, Grab
Kafka Summit APAC 2021 Keynote: Building an Event-driven CNS for Digital Banking - a conversation with Jun Rao, Co-Founder of Confluent and Kaspar Situmorang, CEO, BRI Agro
Kafka Summit APAC 2021 Keynote: Building a Modern Digital Platform at NAB by Harnessing the Power of an Event-driven Architecture - a conversation with Jun Rao, Co-Founder of Confluent and Leng Be, Head of MEGA, National Australia Bank
Tim Berglund, Developer Relations, Confluent, delivers the closing keynote "The Physics of Streaming" for Kafka Summit APAC 2021
The audience will learn how to use a custom client library to boost adoption, horizontally scale platforms with an appropriate partitioning strategy, design a domain-driven message protocol, and use Kafka to make the system deterministically recoverable after crashes.
This session explores how event streaming with Apache Kafka and API Management complement and compete with each other, depending on the use case and the project team's point of view. The session concludes by exploring a vision of event streaming APIs in place of RPC calls.
A deep dive into achieving clean command-query segregation with ksqlDB, and the lessons learnt!
In this talk, I want to explore the relationship between Blockchain and Kafka and demonstrate how the two technologies can benefit from each other. If you’re interested in the future of blockchain and love Kafka, this is definitely up your alley.
In this session, you're going to learn:
In this session, we will go into details of the challenges we encountered, the lessons we learned, what we improved, and lastly: are we on vacation yet?
In this session we share our experience of building a real-time data pipeline at Tencent PCG - one that handles 20 trillion daily messages across 700 clusters, with 100 Gb/s of bursting traffic from a single app.
Do you want to know what streaming ETL actually looks like in practice? Or what you can REALLY do with Apache Kafka once you get going, using config and SQL alone?
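A minimal ksqlDB sketch of what "config and SQL alone" can mean in practice; the topic, stream, and column names here are illustrative, not taken from the session:

```sql
-- Declare a stream over an existing raw topic (names are hypothetical).
CREATE STREAM orders_raw (order_id VARCHAR, amount DOUBLE, country VARCHAR)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- A continuous transform-and-filter query that writes results to a new
-- topic: streaming ETL expressed entirely in SQL.
CREATE STREAM orders_de AS
  SELECT order_id, amount
  FROM orders_raw
  WHERE country = 'DE';
```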
More and more enterprises are relying on Apache Kafka to run their businesses. Cluster administrators need the ability to mirror data between clusters to provide high availability and disaster recovery.
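As one concrete example of cluster mirroring (not necessarily the approach this session covers), here is a minimal MirrorMaker 2 configuration sketch; the cluster aliases and bootstrap addresses are hypothetical:

```properties
# Mirror all topics one way, from a primary cluster to a backup cluster.
clusters = primary, backup
primary.bootstrap.servers = primary-kafka:9092
backup.bootstrap.servers = backup-kafka:9092

# Enable the primary -> backup flow and mirror every topic.
primary->backup.enabled = true
primary->backup.topics = .*
```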
We will walk through our decisions on using one cluster vs. many, and how improvements in the Connect ecosystem, like incremental rebalancing, have allowed us to scale to thousands of connectors.
GraphQL is a powerful way to bridge the gap between frontend and backend, providing a typed API with introspection that can be used for code generation or code completion.
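For illustration, a minimal sketch of such a typed schema; the types and fields are hypothetical:

```graphql
# Every field is typed, and clients can introspect the schema
# (e.g. via the built-in __schema field) to drive code generation
# and editor completion.
type Order {
  id: ID!
  amount: Float!
}

type Query {
  order(id: ID!): Order
}
```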
Kafka is a vital part of data infrastructure in many organizations. When the Kafka cluster grows and more data is stored in Kafka for a longer duration, several issues related to scalability, efficiency, and operations become important to address.
Building systems around an event-driven architecture is a powerful pattern for creating awesome data-intensive applications. Apache Kafka simplifies scalability and provides an event-driven backbone for service architectures.
Apache Kafka is used as the primary message bus for propagating events and logs across Uber. In particular, it pairs with Apache Pinot, a real-time distributed OLAP datastore, to deliver real-time insights seconds after messages are produced to Kafka.
Legacy applications that were developed in bygone days may appear to be close to unsalvageable. In reality, these applications are still running in production and carrying out the important day-to-day missions for their respective companies.
Core banking is one of the last bastions for the mainframe. As many other industries have moved to the cloud, why are most of the world’s banks yet to follow?
Despite great advances in Kafka's SaaS offerings, it can still be challenging to create a sustainable event-driven ecosystem.
Fully embracing the “one database per microservice” principle can present challenges for the management of data across the whole ecosystem.
At its core, Kafka organizes data as immutable append-only logs, and it has relied on an external consensus service (ZooKeeper) to manage its metadata.
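The append-only log abstraction is easy to see from a plain Java producer; this minimal sketch (the broker address and topic name are hypothetical) prints the monotonically increasing offsets Kafka assigns as it appends each record:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AppendOnlyDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 3; i++) {
                RecordMetadata meta = producer
                        .send(new ProducerRecord<>("events", "key", "value-" + i))
                        .get();
                // Records are only ever appended, so offsets only ever grow.
                System.out.printf("partition=%d offset=%d%n",
                        meta.partition(), meta.offset());
            }
        }
    }
}
```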
Core banking systems are batch-oriented, typically with heavy overnight batch cycles that finish before business opens each morning.
In this session, we explore how the central nervous system can be used to build a mesh topology and a unified catalog of enterprise-wide events, enabling development teams to build event-driven architectures faster and better.
We will explore a highly available and fault-tolerant task scheduling infrastructure using Kafka, Kafka Streams, and state stores.
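A minimal sketch of this pattern using the Kafka Streams Processor API; the topic, store name, task format, and scheduling interval are all hypothetical. Incoming tasks are buffered in a changelog-backed state store, and a wall-clock punctuator periodically executes whatever is pending:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class TaskSchedulerTopology {

    static class SchedulerProcessor implements Processor<String, String, Void, Void> {
        private KeyValueStore<String, String> pending;

        @Override
        public void init(ProcessorContext<Void, Void> context) {
            pending = context.getStateStore("pending-tasks");
            // Check for pending tasks every 10 seconds of wall-clock time.
            context.schedule(Duration.ofSeconds(10),
                    PunctuationType.WALL_CLOCK_TIME, this::runPendingTasks);
        }

        @Override
        public void process(Record<String, String> task) {
            // Buffer the task in the store: it is backed by a changelog topic,
            // so it survives crashes and rebalances (the fault-tolerance part).
            pending.put(task.key(), task.value());
        }

        private void runPendingTasks(long now) {
            try (KeyValueIterator<String, String> it = pending.all()) {
                while (it.hasNext()) {
                    KeyValue<String, String> task = it.next();
                    System.out.println("executing task " + task.key);
                    pending.delete(task.key);
                }
            }
        }
    }

    public static Topology build() {
        Topology topology = new Topology();
        topology.addSource("Tasks", "task-requests");
        ProcessorSupplier<String, String, Void, Void> supplier = SchedulerProcessor::new;
        topology.addProcessor("Scheduler", supplier, "Tasks");
        topology.addStateStore(
                Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore("pending-tasks"),
                        Serdes.String(), Serdes.String()),
                "Scheduler");
        return topology;
    }
}
```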
There are many ways to use Kafka alongside the Elastic Stack, which is what this talk will cover.
We'll also see how to leverage hints to automatically monitor new instances of Kafka, use ingest pipelines to parse data, and visualize it with Kibana in Elastic Observability.
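As a flavor of hints-based autodiscovery, here is a sketch assuming Kafka runs on Kubernetes and Metricbeat is the collector; the annotation values are illustrative:

```yaml
# Metricbeat picks up new pods automatically once hints are enabled.
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true

# The Kafka pods carry annotations that supply the hints, e.g.:
#   co.elastic.metrics/module: kafka
#   co.elastic.metrics/hosts: "${data.host}:9092"
```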
Push Technology, a verified gold technology partner, developed the Diffusion Kafka Adapter. The presentation will provide real-world examples of how the adapter is used to power Kafka in an event-driven world.
What happened when our biggest and most important Kafka cluster went rogue? While trying to recover it, a single, crucial misconfiguration made things even worse. This session is the story of how we learned the hard way about mitigating cluster failures with the proper configurations in place.
We’ll cover:
We have the AsyncAPI specification to document the endpoints, where the schema of the records becomes the main part of the contract payload. Microcks lets us deploy a testing and mocking platform with a unified view of the endpoints, to speed up application delivery.
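For reference, a minimal sketch of an AsyncAPI document for a Kafka topic; the channel, message, and payload fields are hypothetical, and a tool like Microcks can generate mocks and tests from such a document:

```yaml
asyncapi: '2.0.0'
info:
  title: Orders Service
  version: '1.0.0'
channels:
  orders:                 # maps to the Kafka topic
    subscribe:
      message:
        name: OrderPlaced
        payload:          # the record schema is the heart of the contract
          type: object
          properties:
            orderId:
              type: string
            amount:
              type: number
```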
In this session we’ll show you how to get up and running with Neo4j Streams, sinking and sourcing data between graphs and streams.
Confluent has worked with Accenture on the creation of our Digital Decoupling strategy. By leveraging CDC technologies to access data without modifying the core, organizations are now able to easily reach data they previously struggled to marshal.
In this talk we will explore how we realised that vision of production readiness at scale by categorising the open source and internal tools we use into 4 quadrants.
Millions of transactions happen every day, driving billions of daily fraud and safety detections, so storing the data in a database and querying it in real time is not feasible. Come listen to how we use Apache Kafka to detect fraud successfully!
Join us to see how you can discover event streams from your Kafka clusters, import them into a catalog to view alongside other enterprise event streams, and leverage code generation capabilities to ease development.
Secure and Integrated - Using IAM with Amazon MSK
Come hear how these learnings inspired our new product, Cluster Linking, to make geo-replication simple, consistent, and cloud-native.
In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management, thanks to our new secret weapon in the Apache Camel project: Kamelets.
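A minimal sketch of how a Kamelet is wired up via a KameletBinding; the timer-source Kamelet comes from the Camel Kamelet catalog, while the topic name and the Strimzi-managed KafkaTopic sink are assumptions for illustration:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source      # emits a message on a fixed period
    properties:
      message: hello
  sink:
    ref:
      kind: KafkaTopic        # a Strimzi-managed topic (assumed)
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```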
We'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
This talk is based on our real-world experiences building out streaming analytics stacks powering production use cases across many industries.
Join us for a talk with Confluent's Head of Education, Mario Sanchez, as he discusses how we've successfully transformed business through a prescriptive approach to enablement. We invite you to join the live Q&A that follows, to discuss how enablement can benefit your organization.
The data that organizations are required to analyze in order to make informed decisions is growing at an unprecedented rate.
APIs have become ubiquitous as a way of exposing the capabilities of the enterprise both internally and externally. However, are APIs alone enough?
In this session you will learn how to set up and configure Confluent Cloud with MongoDB Atlas.
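As a flavor of what the wiring looks like, a minimal sketch of a MongoDB sink connector config; this uses the self-managed MongoDB Kafka connector (the fully managed Confluent Cloud connector has its own settings), and the topic, database, and credentials are placeholders:

```json
{
  "name": "mongodb-atlas-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "orders",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "database": "sales",
    "collection": "orders"
  }
}
```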
Come learn about the newly available self-managed Azure Cosmos DB connector to safely deliver data and events in real time. We will also demonstrate how to quickly set up your data pipelines with the all-new connector.
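A minimal sketch of a Cosmos DB sink configuration, based on the connector's documented settings; the endpoint, key, database, and topic-to-container mapping are placeholders:

```json
{
  "name": "cosmosdb-sink",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "topics": "orders",
    "connect.cosmos.connection.endpoint": "https://<account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<key>",
    "connect.cosmos.databasename": "sales",
    "connect.cosmos.containers.topicmap": "orders#orders"
  }
}
```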
Join experts from Confluent and AWS to learn how to build Apache Kafka®-based streaming applications backed by machine learning models. Adopting these recommendations will help you establish repeatable patterns for high-performing event-based apps.
Azure Labs: Confluent on Azure Container Services & Real-time Search with Redis