Stream Governance provides visibility and control over the structure, quality, and flow of data across your applications, analytics, and AI.
Stream Governance unifies data quality, discovery, and lineage. You can create data products in real time by defining data contracts once and enforcing them as data is created, not after it's batched.
Stream Quality prevents bad data from entering the data stream.
It manages and enforces data contracts (schema, metadata, and quality rules) between producers and consumers within your private network. With Stream Quality, you can:
Define and enforce universal standards for all your data streaming topics in a versioned repository (see the sketch after this list)
Enforce semantic rules and business logic on your data streams
Validate at the broker that messages written to a topic reference a valid, registered schema
Sync schemas across cloud and hybrid environments in real time
Protect your most sensitive data by encrypting specific fields within messages at the client level
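To make produce-time enforcement concrete, here is a minimal sketch using the confluent-kafka Python client. The endpoints, credentials, topic name, and Order schema are hypothetical placeholders, not details from the product description above.

```python
# Minimal sketch: register an Avro data contract and produce against it.
# <schema-registry-endpoint>, <broker-endpoint>, and the Order schema are
# hypothetical placeholders.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

ORDER_SCHEMA = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount",   "type": "double"}
  ]
}
"""

sr_client = SchemaRegistryClient({"url": "https://<schema-registry-endpoint>"})
serializer = AvroSerializer(sr_client, ORDER_SCHEMA)

producer = Producer({"bootstrap.servers": "<broker-endpoint>"})
event = {"order_id": "o-123", "amount": 42.50}

# Serialization fails immediately if the event does not match the contract,
# so bad data is rejected as it is created, not after it lands downstream.
producer.produce(
    "orders",
    value=serializer(event, SerializationContext("orders", MessageField.VALUE)),
)
producer.flush()
```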
Any number of data producers can write events to a shared log, and any number of consumers can read those events independently and in parallel. You can add, evolve, recover, and scale producers or consumers—without dependencies.
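As a minimal sketch of that decoupling (confluent-kafka Python client, hypothetical endpoints and names), a new consumer group below reads the same topic from the beginning, independently of the producers and of any other consumer groups.

```python
# Minimal sketch: an independent consumer group reading a shared topic.
# Each group keeps its own offsets, so adding this consumer requires no
# changes to producers or to other consumers. Endpoints are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker-endpoint>",
    "group.id": "analytics-readers",   # a new, independent reader of the log
    "auto.offset.reset": "earliest",   # start from the oldest retained event
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        print(f"offset={msg.offset()} value={msg.value()}")
finally:
    consumer.close()
```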
Confluent integrates your legacy and modern systems.
Stream Catalog organizes data streaming topics as data products any operational, analytical, or AI system can access.
Enrich topics with business information about teams, services, use cases, systems, and more
Let end users search, query, discover, request access to, and view each data product through a UI
Consume and enrich data streams, run queries, and create streaming data pipelines directly in the UI
Search, create, and tag topics through a REST API based on Apache Atlas and a GraphQL API (see the sketch after this list)
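As a rough sketch of programmatic discovery, the snippet below runs a free-text search over the Atlas-derived catalog REST API using Python's requests library. The endpoint, credentials, and query are hypothetical, and the exact paths and parameter names are assumptions to verify against the Confluent Catalog API reference.

```python
# Rough sketch: free-text search for topic entities via the Atlas-style
# Stream Catalog REST API. The endpoint, credentials, and the exact
# path/parameter names are assumptions to verify against Confluent docs.
import requests

CATALOG = "https://<schema-registry-endpoint>"  # Stream Catalog shares this endpoint
AUTH = ("<api-key>", "<api-secret>")

resp = requests.get(
    f"{CATALOG}/catalog/v1/search/basic",
    params={"type": "kafka_topic", "query": "orders"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity.get("displayText"), entity.get("attributes"))
```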
You can respond to changing business requirements without breaking downstream workloads for applications, analytics, or AI with:
Add new fields, modify data structures, or update formats while maintaining compatibility with existing dashboards, reports, and ML models
Validate schema changes before deployment to prevent breaking downstream analytics applications and AI pipelines
Choose backward, forward, or full compatibility modes based on your specific upgrade requirements and organizational constraints (see the sketch after this list)
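Here is a minimal sketch of a pre-deployment compatibility check with the confluent-kafka Python client; the subject name, endpoint, and the added currency field are hypothetical.

```python
# Minimal sketch: evolve a schema by adding a field with a default, then let
# Schema Registry check the change against the subject's compatibility mode
# (e.g. BACKWARD) before anything is deployed. Names are placeholders.
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

sr = SchemaRegistryClient({"url": "https://<schema-registry-endpoint>"})

evolved = Schema(
    """
    {
      "type": "record",
      "name": "Order",
      "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount",   "type": "double"},
        {"name": "currency", "type": "string", "default": "USD"}
      ]
    }
    """,
    schema_type="AVRO",
)

# test_compatibility validates the candidate against the latest registered
# version; the default value keeps the new field safe for existing consumers.
if sr.test_compatibility("orders-value", evolved):
    sr.register_schema("orders-value", evolved)
else:
    raise SystemExit("Change would break existing consumers; revise the schema.")
```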
ACERTUS uses Schema Registry to edit and extend order contracts without changing the underlying code. Changes to schemas and topic data are captured in real time, so once data users find the data they're looking for, they can trust that it's accurate and reliable.
Confluent ensures data quality and security with Stream Governance — and allows Vimeo to safely scale and share data products across their business.
"It’s amazing how much more we can get done when we don’t have to worry about exactly how to do things. We can trust Confluent to offer a secure and rock-solid Kafka platform with a myriad of value-add capabilities like security, connectors, and stream governance on top."
Get a head start on your Kafka and Flink use cases. Create and activate a Confluent Cloud account today and receive $400 in credits to use within the first 30 days.
Stream Governance is a suite of fully managed tools that help you ensure data quality, discover data, and securely share data streams. It includes components like Schema Registry for data contracts, Stream Catalog for data discovery, and Stream Lineage for visualizing data flows.
As Kafka usage scales, managing thousands of topics and ensuring data quality becomes a major challenge. Governance provides the necessary guardrails to prevent data chaos, ensuring that the data flowing through Kafka is trustworthy, discoverable, and secure. This makes it possible to democratize data access safely.
While open source provides basic components like Schema Registry, Confluent offers a complete, fully managed, and integrated suite. Stream Governance combines data quality, catalog, and lineage into a single solution that is deeply integrated with the Confluent Cloud platform, including advanced features like a Data Portal, graphical lineage, and enterprise-grade SLAs.
Yes. Confluent Schema Registry, integrated into Stream Governance, centrally stores and manages schemas for your Kafka topics and supports Apache Avro, Protobuf, and JSON Schema, giving you the flexibility to use the data formats that best suit your needs.
This ensures that all data produced to a Kafka topic adheres to a predefined structure. When a producer sends a message, it serializes the data according to the registered schema, which typically converts it into a compact binary format and embeds a unique schema ID. When a consumer reads the message, it uses that ID to retrieve the correct schema from the registry and accurately deserialize the data back into a structured format. This process not only enforces data quality and consistency but also enables safe schema evolution over time.
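A minimal sketch of the consumer side of this flow with the confluent-kafka Python client; the endpoint and topic name are placeholders.

```python
# Minimal sketch: the deserializer reads the schema ID embedded in each
# message, fetches that schema from the registry, and rebuilds the record.
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import SerializationContext, MessageField

sr = SchemaRegistryClient({"url": "https://<schema-registry-endpoint>"})
deserializer = AvroDeserializer(sr)  # schema is resolved from the ID in the payload

def decode(raw: bytes) -> dict:
    """Turn wire-format bytes (magic byte + schema ID + Avro body) back into a dict."""
    return deserializer(raw, SerializationContext("orders", MessageField.VALUE))
```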
The complete Stream Governance suite, including the Data Portal and interactive Stream Lineage, is exclusive to Confluent Cloud. However, core components like Schema Registry are available as part of the self-managed Confluent Platform.
Yes. Stream Governance is suitable for regulated industries and provides the tools you need to ensure data security and compliance with industry and regional regulations. Stream Lineage helps with audits, while schema enforcement and data quality rules help ensure data integrity. Confluent Cloud, the fully managed data streaming platform, also holds numerous industry certifications, including PCI, HIPAA, and SOC 2.
You can get started with Stream Governance by signing up for a free trial of Confluent Cloud. New users receive $400 in cloud credit to apply to any of the data streaming, integration, governance, and stream processing capabilities on the data streaming platform, allowing you to try Stream Governance features firsthand.
We also recommend exploring key Stream Governance concepts in the Confluent documentation, as well as following the Confluent Cloud Quick Start, which walks you through deploying your first cluster, producing and consuming messages, and inspecting them with Stream Lineage.