Although cloud-based, managed Kafka offerings abstract away most administrative responsibilities, a few admin-related concerns remain, such as cluster scaling. When is it appropriate to scale your cloud-based Kafka cluster? And how should you set it up to auto-scale?
Gone are the days of over-provisioning resources to meet expected demand. Technologies like Kubernetes make it relatively simple to implement strategies for both horizontal and vertical scaling. Cloud providers give users the ability to track their resource utilization and set up auto-scaling groups and policies, and cloud administrators use these tools (and others) to guarantee their applications can handle the demands placed on them. With Kafka a central pillar of our cloud-native data pipelines, administrators must determine if, when, and how to scale Kafka as their workloads ebb and flow.
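To make the idea of an auto-scaling policy concrete, here is a minimal sketch of a threshold-based decision loop. Everything in it is hypothetical: the thresholds, cooldown, and the `get_cluster_utilization()` and `resize_cluster()` helpers stand in for whatever metrics source and management API (for example, a provider's metrics endpoint and cluster-resize call) your environment exposes.

```python
import time

# Hypothetical policy parameters for illustration only.
SCALE_UP_THRESHOLD = 0.80    # scale up above 80% sustained utilization
SCALE_DOWN_THRESHOLD = 0.30  # scale down below 30% sustained utilization
COOLDOWN_SECONDS = 900       # wait 15 minutes between scaling actions


def get_cluster_utilization() -> float:
    """Hypothetical helper: return current cluster load as a 0.0-1.0 ratio."""
    raise NotImplementedError


def resize_cluster(delta: int) -> None:
    """Hypothetical helper: request a capacity change of `delta` units."""
    raise NotImplementedError


def autoscale_loop() -> None:
    """Evaluate the policy once per minute and act outside the cooldown window."""
    last_action = 0.0
    while True:
        utilization = get_cluster_utilization()
        in_cooldown = time.time() - last_action < COOLDOWN_SECONDS
        if not in_cooldown:
            if utilization > SCALE_UP_THRESHOLD:
                resize_cluster(+1)   # add one capacity unit
                last_action = time.time()
            elif utilization < SCALE_DOWN_THRESHOLD:
                resize_cluster(-1)   # remove one capacity unit
                last_action = time.time()
        time.sleep(60)
```

The cooldown window is the important design choice here: without it, short utilization spikes cause the policy to thrash between scaling up and scaling down.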
In this session, we’ll explore the topic of auto-scaling by implementing a strategy for Confluent Cloud resources. We’ll first discuss common use cases that call for a scaling strategy in Confluent Cloud and introduce the approaches best suited to each. With a nod to both where we came from and where we are going, we will discuss the architecture of Confluent Cloud and how it affects the way we scale Kafka. Attendees will learn how to deal with ephemeral workloads, what to monitor when creating an auto-scaling policy, and the “gotchas” of auto-scaling in Confluent Cloud. We will also discuss best practices for scaling Kafka clients, because Kafka is only as scalable as the client applications that connect to it.
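On the client side, a common signal for scaling consumers is consumer lag: the gap between each partition's log-end offset and the group's committed offset. The sketch below uses Confluent's Python client (confluent-kafka) to compute total lag and derive a desired consumer count, capped at the partition count since extra consumers in a group would sit idle. The topic name, partition count, group id, lag-per-consumer target, and credentials are all placeholders you would replace.

```python
from confluent_kafka import Consumer, TopicPartition

# Assumed connection settings; substitute your Confluent Cloud bootstrap
# servers, API key/secret, and an existing consumer group and topic.
conf = {
    "bootstrap.servers": "<BOOTSTRAP_SERVERS>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
    "group.id": "my-consumer-group",   # hypothetical consumer group
    "enable.auto.commit": False,
}

TOPIC = "orders"           # hypothetical topic
NUM_PARTITIONS = 6         # hypothetical partition count
LAG_PER_CONSUMER = 10_000  # hypothetical backlog one consumer can absorb

consumer = Consumer(conf)

# Total lag = sum over partitions of (log-end offset - committed offset).
partitions = [TopicPartition(TOPIC, p) for p in range(NUM_PARTITIONS)]
committed = consumer.committed(partitions, timeout=10)

total_lag = 0
for tp in committed:
    _low, high = consumer.get_watermark_offsets(tp, timeout=10)
    if tp.offset >= 0:  # a committed offset exists for this partition
        total_lag += max(high - tp.offset, 0)

# Simple policy: one consumer instance per LAG_PER_CONSUMER of backlog,
# never exceeding the partition count.
desired = min(NUM_PARTITIONS, max(1, -(-total_lag // LAG_PER_CONSUMER)))
print(f"total lag={total_lag}, desired consumer instances={desired}")

consumer.close()
```

A check like this could feed a container orchestrator's scaling mechanism, but treat it as a starting point: the right lag target and partition layout depend entirely on your workload.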
We will dive into code that examines these approaches, and by the end of the session you’ll have the tools you need to design and implement your own scaling strategy for your Confluent Cloud workloads.