
Monitoring Apache Kafka with Confluent Control Center Video Tutorials

Mission-critical applications built on Kafka need a robust, Kafka-specific monitoring solution that is performant, scalable, durable, highly available, and secure. Confluent Control Center helps you monitor your Kafka deployments and provides assurance that your services are behaving properly and meeting SLAs. Control Center was designed for Kafka, by the creators of Kafka, from Day Zero. It surfaces the most important information to act on, so you can answer key business-level questions:

  • Are my brokers up?
  • Are applications receiving all data?
  • Are they up to date with the latest events?
  • Why are the applications running slowly?
  • Do we need to scale out?
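Take "are they up to date with the latest events?" as an example: the metric underneath is consumer lag, the gap between a partition's log-end offset and the consumer group's committed offset. Control Center computes and charts this for you; the sketch below just illustrates the arithmetic in Python (the function and variable names are illustrative, not Control Center APIs):

```python
# Hypothetical sketch of consumer-lag arithmetic:
# lag = log-end offset - committed offset, per partition.

def consumer_lag(end_offsets, committed_offsets):
    """Return per-partition lag and the total across partitions.

    end_offsets / committed_offsets: dicts mapping partition -> offset.
    A partition with no committed offset is treated as fully lagging.
    """
    lag = {}
    for partition, end in end_offsets.items():
        committed = committed_offsets.get(partition, 0)
        lag[partition] = max(end - committed, 0)
    return lag, sum(lag.values())

# Example: partition 2 is 500 messages behind -- that consumer
# is falling behind and the group may need to scale out.
per_partition, total = consumer_lag(
    {0: 1000, 1: 1000, 2: 1000},   # log-end offsets
    {0: 1000, 1: 990, 2: 500},     # committed offsets
)
```

A steadily growing total is the usual signal that consumers cannot keep up and the group needs more capacity.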

You may think Kafka is running fine, but how do you prove it?

Control Center can monitor your Kafka cluster and applications with important monitoring features: system health, end-to-end stream monitoring, and alerting.
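End-to-end stream monitoring works by counting messages on the produce side and the consume side and comparing the two over a time window. A minimal sketch of that comparison logic (the function name and labels are hypothetical, not Control Center's implementation):

```python
def classify_delivery(produced, consumed):
    """Classify one time window by comparing message counts.

    Mirrors the idea behind end-to-end stream monitoring: count
    what was produced, count what was consumed, and compare.
    """
    if consumed < produced:
        # Possible data loss or a stuck/slow consumer.
        return "under-consumption"
    if consumed > produced:
        # Possible duplicates, e.g. reprocessing after a rebalance.
        return "over-consumption"
    return "ok"
```

The under- and over-consumption scenarios in the demo playbook below are exactly these two branches, observed in a live cluster.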

Watch the introductory Confluent Control Center video Monitoring Kafka like a Pro (3:30).



If you are just starting to explore Confluent Control Center, you can learn more about what it can do by watching the Confluent Control Center Overview video series:

If you are ready to get hands-on, check out our Confluent Platform Demo GitHub repo. The demo takes just a few seconds to spin up; following its realistic scenario, you will use Control Center to monitor a Kafka cluster and then walk through a playbook of operational events. The use case is as follows:

A streaming ETL pipeline built around live edits to real Wikipedia pages. The Wikimedia Foundation runs IRC channels that publish edits to real wiki pages in real time. Using Kafka Connect, a Kafka source connector kafka-connect-irc streams raw messages from these IRC channels, and a custom Kafka Connect transform kafka-connect-transform-wikiedit parses the messages, which are then written to Kafka. The demo uses KSQL for data enrichment, or you can optionally develop and run your own Kafka Streams application. Finally, a Kafka sink connector kafka-connect-elasticsearch streams the data out of Kafka, and it is materialized into Elasticsearch for analysis by Kibana.
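To make the transform step concrete: the real kafka-connect-transform-wikiedit is a Java single-message transform, but the parsing idea can be sketched in a few lines of Python. The message shape below is a simplified illustration, not the exact Wikimedia IRC wire format:

```python
import re

# Assumed, simplified edit-message shape:
#   [[Page title]] <diff url> * <user> * (+/-bytes) <edit summary>
EDIT_PATTERN = re.compile(
    r"\[\[(?P<page>[^\]]+)\]\]\s+"   # page title
    r"(?P<url>\S+)\s+"               # diff URL
    r"\*\s(?P<user>[^*]+?)\s\*\s"    # editing user
    r"\((?P<delta>[+-]\d+)\)\s"      # byte-count change
    r"(?P<comment>.*)"               # edit summary
)

def parse_edit(raw):
    """Turn one raw IRC line into a structured record, or None if unparseable."""
    m = EDIT_PATTERN.match(raw)
    if m is None:
        return None
    record = m.groupdict()
    record["delta"] = int(record["delta"])
    return record

sample = ("[[Apache Kafka]] https://en.wikipedia.org/w/index.php?diff=1"
          " * alice * (+42) fixed typo")
```

In the demo, structured records like this land in a Kafka topic, where KSQL can filter and enrich them before they are sunk into Elasticsearch.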

The cp-demo repo comes with a playbook for operational events and corresponding video tutorials of useful scenarios to run through with Control Center:

| Playbook section | Demo video (in the Monitoring Kafka in Confluent Control Center series) |
| --- | --- |
| Installing and running the demo | Demo 1: Install + Run |
| Tour of Confluent Control Center | Demo 2: Tour |
| KSQL | Demo 3: KSQL |
| Consumer rebalances | Demo 4: Consumer Rebalances |
| Slow consumers | Demo 5: Slow Consumers |
| Over consumption | Demo 6: Over Consumption |
| Under consumption | Demo 7: Under Consumption |
| Failed broker | Demo 8: Failed Broker |
| Alerting | Demo 9: Alerting |
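The failed-broker scenario shows up in Control Center as under-replicated partitions: a partition's in-sync replica set (ISR) shrinks below its assigned replica set. A minimal sketch of that check, assuming you already have replica assignments and ISR lists from cluster metadata (names here are illustrative):

```python
def under_replicated(partitions):
    """Return partitions whose ISR is smaller than the assigned replica set,
    mapped to the broker ids that have fallen out of sync.

    partitions: dict mapping partition name -> (replicas, isr),
    each a list of broker ids.
    """
    return {
        name: sorted(set(replicas) - set(isr))
        for name, (replicas, isr) in partitions.items()
        if len(isr) < len(replicas)
    }

# Broker 3 has failed: it drops out of every ISR it belonged to.
flagged = under_replicated({
    "wikipedia-0": ([1, 2, 3], [1, 2]),
    "wikipedia-1": ([1, 2], [1, 2]),
})
```

Any non-empty result is worth alerting on, which is exactly what the alerting demo wires up in Control Center.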

You can also download the Confluent Platform and bring up your own cluster and Confluent Control Center locally with the quickstart.

  • Yeva is an integration architect at Confluent designing solutions and building demos for developers and operators of Apache Kafka. She has many years of experience validating and optimizing end-to-end solutions for distributed software systems and networks.
