
Cloud API Keys vs Resource-Specific API Keys in Confluent Cloud

Author: Laasya Krupa B

As you build and manage data streams in Confluent Cloud, securing your interactions with its APIs is paramount. Confluent Cloud offers two types of API keys that manage authentication to the different APIs in Confluent Cloud: cloud API keys and resource-specific API keys. Each has its own distinct characteristics and use cases.

What's the Difference?

The core difference lies in the scope of access granted by each key type.

  • Cloud API Keys: These provide authentication and access for managing the Confluent Cloud organization and its resources at the organization level.

    For instance, an administrator with OrganizationAdmin privileges uses a cloud API key tied to their user account to set access control for an account in the organization.

  • Resource-Specific API Keys: These offer granular access limited to specific resources within your Confluent Cloud environment: clusters, regions, or API groups.

    For instance, a service account is configured with an Apache Kafka® cluster API key for production_cluster_1 and has permission to write only to orders_topic on that cluster. If this key were compromised, an attacker could affect only that specific topic on that cluster, not any other topics, clusters, or organizational settings.
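To make this concrete, here is a minimal sketch of how a client application would present such a resource-specific key. The property names follow librdkafka-based clients such as the confluent-kafka Python package, where the API key is the SASL username and the secret is the password; the bootstrap server and credential values below are placeholders.

```python
def kafka_client_config(bootstrap_servers: str, api_key: str, api_secret: str) -> dict:
    """Build the standard SASL/PLAIN configuration for a Confluent Cloud
    Kafka client. The resource-specific API key acts as the SASL username
    and the API secret as the password."""
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": api_key,      # resource-specific Kafka cluster API key
        "sasl.password": api_secret,   # shown only once at key creation
    }

# Placeholder values; a real application would load these from a secret
# store, then pass the dict to a client such as confluent_kafka.Producer.
config = kafka_client_config(
    "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "MY_API_KEY", "MY_API_SECRET")
```

Because the key is scoped to one cluster, the same dictionary cannot authenticate against any other cluster or service, which is exactly the blast-radius limit described above.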

Categories of API Keys

  1. Confluent Cloud Kafka Cluster API Keys

    These are a type of resource-specific API key that grants programmatic access and authenticates interactions directly with your Kafka clusters. These keys are used by client applications (producers and consumers), connectors, and other tools to securely connect to a designated Kafka cluster.

    They allow fine-grained control, enabling administrators to define specific permissions, such as the ability to produce to certain topics, consume from others, create or delete topics, and manage access control lists (ACLs) within that particular Kafka cluster.

    This approach enhances security by ensuring that applications and services have only the minimum privileges necessary to perform their tasks on the specified Kafka cluster, rather than broader access to the entire Confluent Cloud environment.

  2. ksqlDB API Keys

    These are used for authentication when interacting with ksqlDB clusters via the Confluent CLI, REST API, or other tools. Generated specifically for a ksqlDB cluster, they are distinct from keys used for general Confluent Cloud access or for Kafka clusters.

  3. Schema Registry API Keys

    These are specific to Schema Registry and distinct from Kafka cluster keys. They are used to manage and access schemas, ensuring secure and controlled access to the Confluent Cloud Schema Registry service.

  4. Apache Flink® Region API Keys

    These are scoped to a specific environment and region in Confluent Cloud.

  5. Tableflow API Keys

    These allow users to interact with the Tableflow Catalog APIs across topics, clusters, and environments. They help configure storage settings and manage other Tableflow-related resources.

  6. Cloud Resource Management API Keys

    These are used to access the resource management APIs for managing the Confluent Cloud organization.
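Each of these key categories is presented only to its own service. As an illustrative sketch, an application that talks to both a Kafka cluster and Schema Registry carries two separate key pairs, one per endpoint. The Schema Registry pair is passed as HTTP basic auth in "key:secret" form, which is how Confluent's Schema Registry clients expect it; the URL below is a placeholder.

```python
def schema_registry_config(url: str, sr_api_key: str, sr_api_secret: str) -> dict:
    """Build the configuration for a Confluent Schema Registry client.

    The Schema Registry API key/secret pair (distinct from any Kafka
    cluster key) is supplied as basic-auth credentials, 'key:secret'.
    """
    return {
        "url": url,
        "basic.auth.user.info": f"{sr_api_key}:{sr_api_secret}",
    }

# Placeholder endpoint and credentials; a real app would pass this dict
# to a client such as confluent_kafka.schema_registry.SchemaRegistryClient.
sr_conf = schema_registry_config(
    "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
    "SR_API_KEY", "SR_API_SECRET")
```

Keeping the Kafka and Schema Registry key pairs separate means either one can be rotated or revoked without disturbing the other.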

Figure 1: Flowchart explaining the workflow of API keys in Confluent Cloud

Here's a table summarizing the key distinctions:

| Feature | Organization Scope | Environment Scope | Cluster Scope |
| --- | --- | --- | --- |
| Granularity | Applies to large units (organizational level) | Applies to specific environments in an organization | Applies to specific clusters in an environment |
| Use Cases | Centralized admin tools and audit log extraction | Flink workspace scoped to a specific region | Kafka cluster producer/consumer access and connectors |
| Access Control | Admin-level permissions across all environments | Access limited to services within the environment | Access limited to resources within the Kafka cluster |

What's the Difference Between a Service Account and a User Account in Confluent Cloud?

In Confluent Cloud, you interact with resources through either your personal user account (My Account) or a service account.

  • User Account: This represents your individual user identity within the Confluent Cloud organization. Actions performed using your account are attributed to you. It's suitable for personal exploration, administrative tasks you perform directly, and initial setup.

  • Service Account: This is a non-user account designed for applications, services, and automated processes to interact with Confluent Cloud resources. Service accounts are not tied to a specific individual and persist even if a user leaves the organization.

Key Differences and When to Use:

| Feature | User Account | Service Account |
| --- | --- | --- |
| Identity | Represents an individual user | Represents an application or service |
| Life Cycle | Tied to a user's tenure in the org | Tied to a client application or programmatic use case (e.g., Terraform) |
| Best Use | Personal administration, direct interaction | Applications, automated processes, long-running tasks |
| Security | Less ideal for production applications | Best practice for production environments |

For any automated processes or applications interacting with Confluent Cloud, always use service account API keys rather than user account API keys. This ensures that access is not disrupted if a user leaves and provides a clear separation of concerns. 

Here's a typical workflow:

  1. Navigate to service accounts.

    • In the Confluent Cloud Console, find the section for managing accounts and access. This is usually under "Administration," "Accounts and Access," or a similar option.

    • Locate the "Service accounts" area.

  2. Create a new service account.

    • Click on an option such as "Add service account," "Create service account," or a "+" button.

    • You'll likely need to provide the following.

      • Name: Choose a descriptive name for the service account (e.g., my-kafka-producer-app, data-pipeline-connector).

      • Description (optional but recommended): Briefly explain what this service account will be used for.

  3. Generate API key for the service account.

    • Once the service account is created, open the Administration menu (the hamburger icon on the home page) and choose “API Keys.”

    • Look for an option such as "Create API key," "+ Add key," or similar on the service account's details page.

    • When the API key and secret are generated, copy and store them immediately and securely. The secret is typically shown only once. You will embed this key and secret into your application's configuration so it can authenticate with Confluent Cloud.

    • You might be asked to choose the scope of the API key (e.g., if it's for a specific Kafka cluster or if it has broader permissions defined by the service account's grants). For service accounts, the key often inherits the permissions granted to the service account itself.
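The console steps above can also be automated. The sketch below only builds the HTTP request and does not send it; the endpoint path is an assumption based on the Confluent Cloud IAM v2 API. Note that this call itself must be authenticated with a cloud API key, since creating service accounts is an organization-level operation.

```python
import base64
import json
import urllib.request

def build_create_service_account_request(cloud_key: str, cloud_secret: str,
                                         name: str, description: str = ""):
    """Build (but do not send) a POST request that creates a service account.

    The cloud API key/secret pair is sent as HTTP basic auth. The endpoint
    URL is an assumption; check the Confluent Cloud API reference.
    """
    token = base64.b64encode(f"{cloud_key}:{cloud_secret}".encode()).decode()
    body = json.dumps({"display_name": name, "description": description}).encode()
    return urllib.request.Request(
        "https://api.confluent.cloud/iam/v2/service-accounts",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )

req = build_create_service_account_request(
    "CLOUD_API_KEY", "CLOUD_API_SECRET",   # placeholder cloud-scoped credentials
    "my-kafka-producer-app", "Producer for orders_topic")
# urllib.request.urlopen(req) would perform the actual call.
```

Tools like Terraform wrap this same pattern, which is why service accounts (rather than user accounts) are the right identity for such automation.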

What If You Don’t Want to Use API Keys for Authentication?

In Confluent Cloud, External OAuth (often referred to as Partner OAuth in some contexts) allows you to grant very specific, granular access to certain Confluent Cloud resources—much like a resource-scoped API key would.

Security Token Service (STS) OAuth in Confluent Cloud, by contrast, provides access comparable in scope to cloud API keys.

Benefits of External OAuth:

  • Centralized Identity Management: Leverage the existing security infrastructure of your partners or your organization's identity provider (IdP).

  • Enhanced Security: Avoid sharing long-lived API keys directly with external entities. Access is controlled through short-lived tokens.

  • Simplified Onboarding and Offboarding: Managing access is handled through the IdP, making it easier to grant and revoke permissions for external partners.

  • Auditability: Authentication and authorization activities are logged in the external IdP.
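For illustration, the short-lived-token exchange behind these benefits typically follows the standard OAuth 2.0 client-credentials grant against your IdP. Everything below (the token URL and client credentials) is hypothetical; your IdP defines the real values, and the request is only built, not sent.

```python
import urllib.parse
import urllib.request

def build_token_request(token_url: str, client_id: str, client_secret: str,
                        scope: str = ""):
    """Build (but do not send) a standard OAuth 2.0 client-credentials request.

    The IdP returns a short-lived access token, which the client then
    presents to Confluent Cloud instead of a long-lived API key.
    """
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        form["scope"] = scope
    return urllib.request.Request(
        token_url,
        data=urllib.parse.urlencode(form).encode(),
        method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Hypothetical IdP endpoint and client credentials:
req = build_token_request("https://idp.example.com/oauth2/token",
                          "my-client-id", "my-client-secret")
```

Because the returned token expires quickly, revoking the client in the IdP cuts off access without any key rotation on the Confluent Cloud side.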

By understanding the nuances of cloud API keys, resource-specific API keys, External OAuth, and service accounts, you can build a secure and well-managed data streaming platform on Confluent Cloud.


Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are registered trademarks of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of these marks.

  • Laasya Krupa B is a Senior Cloud Enablement Engineer at Confluent with 5 years of experience rooted in DevOps. She applies her deep expertise in architecting and managing production infrastructure on clouds such as AWS, Azure, and GCP to help customers scale their real-time data systems. She specializes in showing Kafka and Confluent Cloud users how to design, build, and operate high-performance applications with data streaming. Her primary areas of expertise are Kafka, Flink, and AI. Laasya is passionate about sharing best practices to help the wider community build efficient, real-time applications and about guiding customers in implementing solutions ranging from event-driven microservices to scalable AI/ML feature pipelines.
