
Presentation

Marching Toward a Trillion Kafka Messages per Day: Running Kafka at scale at PayPal

Kafka Summit 2020

At PayPal, our Kafka journey, which started with a handful of isolated Kafka clusters, now has us marching at a rapid pace toward a trillion messages per day in a true multi-tenant Kafka environment with thousands of brokers. To enable that tremendous growth, which today is reflected in 30% quarter-on-quarter increases in traffic volume, we've had to overcome a wide variety of challenges, including:

- Deploying the tooling required to operate multiple Kafka clusters at scale
- Monitoring and operating a large fleet of brokers across multiple availability zones
- Identifying the right set of metrics to monitor, then finding a way to automate remediation
- Troubleshooting customer issues related to broker connectivity and data loss
- Planning and optimizing capacity within budget constraints
- Managing a polyglot client stack with a diverse set of Kafka client libraries

And of course, doing all of the above while ensuring compliance with the stringent data security requirements mandated for FinTech companies.
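The "identify the right metrics, then automate remediation" challenge above can be sketched as a simple decision function. This is a minimal illustration, not PayPal's actual tooling: the metric names mirror well-known Kafka broker metrics (under-replicated partitions, request-handler idle ratio), but the thresholds and action names are hypothetical assumptions.

```python
# Hypothetical sketch: map a broker's metric snapshot to a remediation action.
# Metric names echo standard Kafka broker metrics; thresholds and actions
# are illustrative assumptions, not production values.
from dataclasses import dataclass


@dataclass
class BrokerMetrics:
    broker_id: int
    under_replicated_partitions: int   # partitions missing in-sync replicas
    request_handler_idle_ratio: float  # 0.0 (saturated) .. 1.0 (fully idle)
    disk_used_pct: float               # percentage of log disk in use


def remediation_action(m: BrokerMetrics) -> str:
    """Decide what automated remediation (if any) a broker needs."""
    if m.under_replicated_partitions > 0:
        return "alert-and-inspect"     # replication lag or a failing replica
    if m.request_handler_idle_ratio < 0.2:
        return "rebalance-partitions"  # broker is request-saturated
    if m.disk_used_pct > 85.0:
        return "expand-or-move-logs"   # approaching disk capacity
    return "healthy"


print(remediation_action(BrokerMetrics(1, 0, 0.9, 40.0)))  # healthy
print(remediation_action(BrokerMetrics(2, 3, 0.9, 40.0)))  # alert-and-inspect
```

In practice, a fleet-wide remediation loop would feed such a function from a metrics pipeline and gate the riskier actions behind operator approval.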

In this talk, we will describe the Kafka deployment model at PayPal and how we addressed these challenges to implement event streaming at enterprise scale. Whatever the scale of your Kafka deployment, be it a large enterprise already using Kafka, a midsize company in the midst of transforming its Big Data platform, or a small but rapidly growing company just getting started with Kafka, this talk will give you a working knowledge of the ins and outs of operating Kafka at scale, the challenges you can expect, and the ways to address them, helping you pave the way for your own Kafka journey.
