
Streamlining Your Workflow – Confluent Cloud Console Feature Roundup

Written by Nusair Haq

Apache Kafka® has evolved from a modest project into the backbone of data streaming for thousands of organizations worldwide. Whether through open source or cloud-native solutions such as Confluent Cloud, different types of users now interact with Kafka every day:

  • Operators monitor and maintain Confluent Cloud clusters to keep business operations running smoothly and to identify problematic clients.

  • Developers build, test, and monitor streaming applications to make sure they deliver the outcomes the business needs.

  • Business analysts now have direct, real-time access to important business data within topics.

The diversity of these users, each with their own expertise, calls for flexible tools that match their preferred ways of working. While developers often gravitate toward command-line interfaces and code, business users typically prefer visually interactive interfaces for their tasks. That's why Confluent provides multiple options: Alongside our CLI, client libraries, and APIs, we offer the Confluent Cloud Console—a comprehensive web interface that’s free for all Confluent Cloud users. Whether you're an operator, developer, or analyst, the Cloud Console provides an intuitive way to manage your entire data streaming platform, from Kafka clusters and connectors to Apache Flink® applications and schemas.

In this blog, we'll explore recent enhancements to the Cloud Console for functionality related to Kafka, showing how to better view and produce data, ensure client compatibility, monitor performance, and validate functionality.

The New Topic Message Browser

The topic message browser in the Cloud Console offers a visual interface to explore topic messages in Confluent Cloud clusters. Users can view messages in real time, inspect contents and metadata, and produce new messages—all without command-line tools or client code. Along with being a useful tool for testing client applications, the message browser can also be used to inspect incoming data when troubleshooting incidents in production. We’ve made a number of changes to the message browser to make it a more capable and reliable tool for these use cases.

First, we’re introducing a full overhaul of the message browser interface, allowing for more functionality and improved performance. These changes include:

  • Allowing up to 1 million messages in the browser at once

  • Providing the option to display timestamps in a human-readable ISO format instead of Unix time

  • Providing an understanding of incoming message distribution through a new histogram chart

By raising the limit to 1 million messages, users can now search and filter across a much larger set of messages to find the ones they’re interested in, or export them to JSON or CSV files. Being able to see traffic spikes visually and match them to human-readable ISO timestamps also helps identify and correlate potential issues during troubleshooting or testing.

Second, we’ve improved the “produce with schema” functionality launched last year: you can now generate a schema-compliant body for your message directly in the console. Whether you’re testing applications or troubleshooting consumer behavior in production-like environments, you can observe downstream functionality without spinning up a new client or crafting a cumbersome CLI command.
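If you do want the equivalent from a client, a minimal Java sketch of producing a schema-compliant Avro message is shown below. The orders topic, the Order schema, and the placeholder endpoints and credentials are illustrative assumptions, not part of the console feature.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProduceWithSchema {
    // Hypothetical schema; in practice this is whatever is registered for the topic.
    private static final String ORDER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
      + "{\"name\":\"id\",\"type\":\"string\"},"
      + "{\"name\":\"amount\",\"type\":\"double\"}]}";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>");   // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "<SCHEMA_REGISTRY_URL>");                   // placeholder
        // A Confluent Cloud cluster also needs SASL and Schema Registry credentials (omitted here).

        // Build a record that conforms to the registered schema.
        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "test-001");
        order.put("amount", 42.0);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "test-001", order));
            producer.flush();
        }
    }
}
```

The console’s “produce with schema” option removes all of this setup for quick tests, which is exactly the point of the feature.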

Understanding and Auditing Your Clients

Along with the ability to view data, the Cloud Console provides a simple overview of the clients connecting to your Confluent Cloud cluster. The types of clients used on your cluster have a huge impact on its overall performance, stability, and productivity. Over the years, we’ve received a lot of feedback from our customers that led to the following conclusions:

  • Central operators didn’t know who was connecting to their clusters.

  • The type of clients and their versions were not known and could be figured out only with tedious workarounds.

  • It was unclear how many connections and requests were being made at a time.

To address these concerns, we completely remade our clients overview page in the Cloud Console. With the new interface, operators can get much-needed visibility into their clients, including:

  • Which client IDs belong to which principals

  • Which software type and version each client is using and whether it’s supported by Confluent

  • How many connections and requests all the clients for a specific principal have made in the last minute
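One practical note on the first point: the client ID shown in the console is whatever the application sets via the standard client.id configuration, so descriptive IDs make the new page much easier to read. Here’s a minimal sketch with an illustrative naming convention (topic, group, and endpoint names are placeholders):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NamedClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>"); // placeholder
        // A descriptive client.id (hypothetical team-service-instance convention) is the
        // identifier the clients overview shows next to the authenticated principal.
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "payments-order-enricher-1");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-enricher");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payments.orders")); // hypothetical topic
            // ... poll as usual; modern clients report their software name and version
            // automatically, which is what the overview surfaces alongside the client ID.
        }
    }
}
```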

Clients Overview and KIP-896

The new clients overview page is also a useful tool for finding clients that send deprecated requests following the changes introduced in KIP-896. As part of Apache Kafka® 4.0, KIP-896 removed support for certain older client request API versions. Confluent Cloud customers will have extended compatibility for their clusters, free of charge, with full removal of support occurring by February 2026. For more information, review our Confluent Cloud documentation.

On the new page, principals and clients sending deprecated requests are clearly flagged, along with the type of deprecated request being sent. Alternatively, users can query the new deprecated_request_count metric in the Confluent Cloud Metrics API.
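If you prefer to check programmatically, here is a sketch of a Metrics API query in Java. The exact metric path, request fields, and endpoint should be verified against the Metrics API documentation; the cluster ID, time interval, and credentials below are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DeprecatedRequestCheck {
    public static void main(String[] args) throws Exception {
        String apiKey = "<CLOUD_API_KEY>";       // placeholder Cloud API credentials
        String apiSecret = "<CLOUD_API_SECRET>";
        String auth = Base64.getEncoder()
                .encodeToString((apiKey + ":" + apiSecret).getBytes(StandardCharsets.UTF_8));

        // Assumed metric path and payload shape -- confirm against the Metrics API reference.
        String body = """
            {
              "aggregations": [{"metric": "io.confluent.kafka.server/deprecated_request_count"}],
              "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-xxxxx"},
              "granularity": "PT1M",
              "intervals": ["2025-06-01T00:00:00Z/2025-06-01T01:00:00Z"]
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // non-zero datapoints indicate deprecated requests
    }
}
```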

Monitoring Application Health Using Consumer Lag

Keeping with the theme of client management, we’ve made sweeping changes to how we help you understand consumer groups. Previously, users could view only stable consumer groups, and clicking on a group showed individual bars representing the current lag of each topic/partition the group was subscribed to. This took up a lot of space, and the data was of limited use because it showed no historical lag values. Operators using this screen to decide whether to scale groups or investigate their applications further were working with limited visibility.

Our latest changes simplify the interface, presenting all of this information in two connected time charts that reveal trends over time as well as per-minute, point-in-time values for production, consumption, and lag. The new interface also provides the following:

  • Current consumer offset position by topic-partition

  • Client ID and consumer group member IDs

  • Summary metrics for the group and per topic

  • Ability to download charts as CSV

  • An on-hover tooltip for both charts simultaneously

  • Time range options for the last 10 minutes, last hour, and last 6 hours, with 1-minute granularity

  • Ability to filter the graph by specific topics and partitions
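If you need the same lag numbers outside the console, here’s a minimal sketch using Kafka’s Admin Client; the orders-app group ID and bootstrap servers are assumptions for illustration. Lag per partition is simply the latest log-end offset minus the group’s committed offset.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>"); // placeholder
        String groupId = "orders-app"; // hypothetical consumer group

        try (Admin admin = Admin.create(props)) {
            // Committed offsets for every partition the group has consumed.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets(groupId)
                         .partitionsToOffsetAndMetadata().get();

            // Latest log-end offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = new HashMap<>();
            committed.keySet().forEach(tp -> latestSpec.put(tp, OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();

            // Lag = log-end offset minus committed offset, per partition.
            committed.forEach((tp, meta) -> {
                long lag = latest.get(tp).offset() - meta.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```

The consumer lag charts in the console give you this per minute, with history, without having to run anything yourself.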

Resetting Consumer Group Offsets

Consuming messages from a partition is not always a one-way action. There are times when messages need to be read again or skipped entirely, such as when testing or correcting client application behavior, recovering from a disaster, or dodging poison pills. Previously, this was possible only through the Kafka CLI tool or supported Admin Clients. We’re happy to announce that users now have the much simpler option of doing this directly from the Cloud Console. Empty consumer groups now appear in the consumer group list, and users can click on them, inspect their committed offsets for each topic partition, and reset those offsets to the earliest log offset, the latest log offset, or any value in between.
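For reference, the Admin Client route that the console option now simplifies looks roughly like the sketch below, which resets a hypothetical, empty group back to the earliest available offsets; the group ID and endpoints are placeholders.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>"); // placeholder
        String groupId = "orders-app"; // hypothetical group; it must be empty to alter offsets

        try (Admin admin = Admin.create(props)) {
            // Partitions the group has committed offsets for.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets(groupId)
                         .partitionsToOffsetAndMetadata().get();

            // Earliest available offset for each of those partitions.
            Map<TopicPartition, OffsetSpec> earliestSpec = new HashMap<>();
            committed.keySet().forEach(tp -> earliestSpec.put(tp, OffsetSpec.earliest()));
            Map<TopicPartition, OffsetAndMetadata> newOffsets = new HashMap<>();
            admin.listOffsets(earliestSpec).all().get()
                 .forEach((tp, info) -> newOffsets.put(tp, new OffsetAndMetadata(info.offset())));

            // Commit the new offsets on behalf of the (empty) group.
            admin.alterConsumerGroupOffsets(groupId, newOffsets).all().get();
        }
    }
}
```

In the Cloud Console, the same reset is a few clicks on the consumer group’s page.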

Getting Started With Kafka Streams Observability

For customers who are all in on Kafka Streams for processing their Kafka topics, we found that grouping Kafka Streams applications with other generic clients produced a cluttered view and obscured the aspects that make Kafka Streams clients unique. We have now introduced a dedicated Kafka Streams view that consolidates all Kafka Streams applications into a single page, including the following key pieces of information:

  • Application name and client version

  • Number of running threads

  • Production and consumption rate for each application

Stay tuned for more updates to this view in the future, as we look to provide more insight into Kafka Streams applications to help monitor critical performance metrics and diagnose performance issues.
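These fields map to standard client-side settings: the application name presumably corresponds to the application.id and the thread count to num.stream.threads. Here’s a minimal sketch of a Streams application with both set; the topic names and values are illustrative.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsObservabilityExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>"); // placeholder
        // The application ID names the app (and its consumer group) on the cluster.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enrichment");
        // Stream threads per instance, reported as running threads for the application.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("orders").to("orders-enriched"); // trivial pass-through topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```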

Other Updates to Look Out For

Some other minor updates we added to the console allow you to:

  • Download your list of topics into a CSV

  • View which protocol type your consumer groups are using, to help identify which groups you should move to the new consumer group protocol (see the sketch after this list)

  • View the associated schema ID for each message in the message browser
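The “new consumer group protocol” in the second item refers to the next-generation group protocol introduced by KIP-848. On clients that support it, opting in is a single consumer setting; the sketch below assumes a recent Java client, and the group and topic names are placeholders.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NextGenGroupProtocolConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<BOOTSTRAP_SERVERS>");  // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-app");                    // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Opt in to the next-generation consumer group protocol (KIP-848); requires a
        // client version and cluster that support it, so check compatibility first.
        props.put(ConsumerConfig.GROUP_PROTOCOL_CONFIG, "consumer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            // ... poll as usual; the group's protocol type then appears in the console
        }
    }
}
```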

What's Next

The updates we described today mark the beginning of making Cloud Console the best companion for data streaming. We're excited about our upcoming road map, which includes:

  • A more powerful message browser with enhanced search and filtering capabilities

  • Deeper insights into your Confluent Cloud clusters and client behavior

  • Comprehensive monitoring and debugging tools for Kafka Streams applications

Your feedback drives our innovation. We're committed to building the tools you need for your day-to-day operations, and we’d love to hear from you through our feedback tab on this blog post.

To try the features discussed in this blog post, use the code KAFKABLOG25 for $25 in credit for Confluent Cloud.

Stay tuned for more exciting updates across Confluent products in the coming months!


Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are registered trademarks of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of these marks.

  • Nusair Haq is a product manager for Confluent Cloud focusing on Cloud Console and Kora Observability, with the goal of providing operators and developers the best tools to monitor and act on their Confluent Cloud clusters. Outside of work, Nusair loves spending time with his family, exploring the Toronto food scene, and watching sports, particularly Formula 1 and MotoGP.
