
Building a Hybrid Cloud Data Pipeline is Even Easier with Confluent CLI v2

Written by Brian Strauch

In the latest major version update of the Confluent CLI, we’ve packed all of the functionality from our cloud-based ccloud CLI into the existing confluent CLI client! This is a pivotal step for Confluent as we move towards providing a single platform that serves cloud and on-prem users simultaneously. The newest Confluent Platform release, Confluent Platform 7.1, ships with this major version update, Confluent CLI v2.

Confluent CLI v2

Cloud users who update from ccloud v1.x to confluent v2.x will receive multi-org support and helpful command syntax changes. On-prem users will receive similar usability improvements and unlock all of the functionality of the Cloud CLI, making it easier than ever to migrate infrastructure to the Cloud. Most importantly, hybrid users who need to frequently switch between cloud and on-prem deployments can now use a single CLI for any Confluent task. The complete list of changes can be found in the documentation.

In this blog post, we’ll give a quick breakdown of the key features included in the new CLI and demonstrate an example workflow for a hybrid user who needs to link two clusters across Confluent Platform and Confluent Cloud and mirror a topic between them.

To get your hands on the new CLI, download the unified CLI client as part of Confluent Platform 7.1. Alternatively, if you are already a Confluent Cloud user, simply perform a major version update with ccloud update --major, or follow the installation instructions in the documentation.
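For example, if you already have ccloud v1.x installed, the in-place upgrade is a single command (confirmation prompts omitted here), after which the client is invoked as confluent rather than ccloud:

$ ccloud update --major
$ confluent version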

What’s new?

Users of the new unified CLI will find that the interface is largely unchanged; the most obvious difference for Cloud users is that the command name has changed from ccloud to confluent.


Logging in is exactly the same as before for all users: confluent login authenticates against Confluent Cloud by default, but if a Metadata Service (MDS) URL is provided with the --url flag, the user can log in to Confluent Platform.

Confluent Cloud:     confluent login
Confluent Platform:  confluent login --url https://example.com:8090

After logging in, a user will have access only to the commands that pertain to them. For instance, a Confluent Cloud user will see:

admin           Perform administrative tasks for the current organization.
api-key         Manage the API keys.
audit-log       Manage audit log configuration.
cloud-signup    Sign up for Confluent Cloud.
completion      Print shell completion code.
connect         Manage Kafka Connect.
context         Manage CLI configuration contexts.
environment     Manage and select Confluent Cloud environments.
help            Help about any command
iam             Manage RBAC and IAM permissions.
kafka           Manage Apache Kafka.
ksql            Manage ksqlDB.
local           Manage a local Confluent Platform development environment.
login           Log in to Confluent Cloud or Confluent Platform.
logout          Log out of Confluent Cloud.
price           See Confluent Cloud pricing information.
prompt          Add Confluent CLI context to your terminal prompt.
schema-registry Manage Schema Registry.
shell           Start an interactive shell.
update          Update the Confluent CLI.
version         Show version of the Confluent CLI.

A Confluent Platform user will see a slightly different list of commands:

audit-log       Manage audit log configuration.
cloud-signup    Sign up for Confluent Cloud.
cluster         Retrieve metadata about Confluent Platform clusters.
completion      Print shell completion code.
connect         Manage Kafka Connect.
context         Manage CLI configuration contexts.
help            Help about any command
iam             Manage RBAC, ACL and IAM permissions.
kafka           Manage Apache Kafka.
ksql            Manage ksqlDB.
local           Manage a local Confluent Platform development environment.
login           Log in to Confluent Cloud or Confluent Platform.
logout          Log out of Confluent Platform.
prompt          Add Confluent CLI context to your terminal prompt.
schema-registry Manage Schema Registry.
secret          Manage secrets for Confluent Platform.
shell           Start an interactive shell.
update          Update the Confluent CLI.
version         Show version of the Confluent CLI.

Context switching

A context represents a user’s login state, whether that be with Confluent Cloud or Confluent Platform, and other associated settings that persist between command executions.


Being able to quickly switch contexts is so important to our hybrid users that, in the new update, confluent context has been promoted to a top-level command. Run confluent context list to see the list of available contexts:

$ confluent context list
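The output might look something like this (illustrative only; the exact columns and default context names vary by CLI version, but auto-generated names encode the login email and URL):

  Current |                       Name                        |     Platform     |        Credential
----------+---------------------------------------------------+------------------+----------------------------
  *       | login-name@example.com-https://confluent.cloud    | confluent.cloud  | username-name@example.com
          | login-name@example.com-https://example.com:8090   | example.com:8090 | username-name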

If a context name is too tedious to type, it can be renamed:

$ confluent context update --name cloud

Then, switching to a new context is as easy as:

$ confluent context use cloud

Tip: If you switch between contexts frequently, you can use confluent prompt to show the current context directly in the terminal! You can set this up by following the instructions in confluent prompt -h.
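For example, with a Bash-style shell you can embed the command in your prompt like this (the same setup is used later in the hybrid demo):

$ export PS1='$(confluent prompt) '$PS1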

(confluent|cloud) $ confluent ...

Hybrid demo

Scenario: You are a Confluent Platform user who has an on-prem cluster. You are interested in migrating your data to Confluent Cloud. You will link the on-prem cluster to a cloud cluster, mirror a topic over the cluster link, and test that the setup works by producing and consuming across the cluster link.


Prerequisites:

  • A Confluent Cloud account. Register with confluent cloud-signup.
  • A running Confluent Platform deployment, with MDS and LDAP set up.
  • Confluent CLI, v2.6.1 or later. Verify with confluent version, and if necessary, update with confluent update.
WARNING
This tutorial requires running a Dedicated Cluster in Confluent Cloud, which will incur charges. Use the promo code CL60BLOG for $60 of free usage.
  1. Log in to Confluent Cloud and give the login context an easy-to-remember name. Here, we name the context “cloud”:
    $ confluent login --save
    Enter your Confluent Cloud credentials:
    Username: name@example.com
    Password: ********

$ confluent context update --name cloud
+------------+---------------------------+
| Name       | cloud                     |
| Platform   | confluent.cloud           |
| Credential | username-name@example.com |
+------------+---------------------------+

  • Next, create a dedicated Confluent Cloud cluster with public internet endpoints, and save its details for later:
    $ confluent kafka cluster create my-cluster --type dedicated --cloud aws --region us-west-2 --cku 1
    It may take up to 1 hour for the Kafka cluster to be ready. The organization admin will receive an email once the dedicated cluster is provisioned.
    +--------------+----------------------------------------------------------+
    | Id           | lkc-123456                                               |
    | Name         | my-cluster                                               |
    | Type         | DEDICATED                                                |
    | Ingress      |                                                       50 |
    | Egress       |                                                      150 |
    | Storage      | Infinite                                                 |
    | Provider     | aws                                                      |
    | Availability | single-zone                                              |
    | Region       | us-west-2                                                |
    | Status       | PROVISIONING                                             |
    | Endpoint     | SASL_SSL://pkc-12345.us-west-2.aws.stag.cpdev.cloud:9092 |
    | ApiEndpoint  |                                                          |
    | RestEndpoint | https://pkc-12345.us-west-2.aws.stag.cpdev.cloud:443     |
    | ClusterSize  |                                                        1 |
    +--------------+----------------------------------------------------------+
    

    $ export CC_CLUSTER_ID=lkc-123456
    $ export CC_URL=pkc-12345.us-west-2.aws-confluent.cloud

    $ confluent kafka cluster use $CC_CLUSTER_ID
    Set Kafka cluster "lkc-123456" as the active cluster for environment "env-12345".

  • Then, create an API key and a secret for the Confluent Cloud cluster:
    $ confluent api-key create --resource $CC_CLUSTER_ID
    It may take a couple of minutes for the API key to be ready.
    Save the API key and secret. The secret is not retrievable later.
    +---------+----------------------------------------------------------+
    | API Key | AAAAAAAAAAAAAAAA                                         |
    | Secret  | BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB |
    +---------+----------------------------------------------------------+
    

    $ export CC_API_KEY=AAAAAAAAAAAAAAAA
    $ export CC_API_SECRET=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB

    $ confluent api-key use $CC_API_KEY --resource $CC_CLUSTER_ID
    Set API Key "AAAAAAAAAAAAAAAA" as the active API key for "lkc-123456".

  • To complete this tutorial, the Confluent Platform cluster must have the following properties set in its server.properties file, with the encoder secret being any secure password:
    password.encoder.secret=<encoder-secret>
  • While waiting for the Confluent Cloud cluster to provision, log in to Confluent Platform. Save the Confluent Platform URL in an environment variable:
    $ export CP_URL=https://example.com:8090
    

    $ confluent login --url $CP_URL
    Enter your Confluent credentials:
    Username: name@example.com
    Password: *********

    $ confluent context update --name platform
    +------------+----------------+
    | Name       | platform       |
    | Platform   | localhost:8090 |
    | Credential | username-name  |
    +------------+----------------+

    Tip: Now that we’re dealing with multiple contexts, it may be useful to display the current context directly in the terminal with confluent prompt:

    $ export PS1='$(confluent prompt) '$PS1
    

    (confluent|platform) $ confluent kafka broker list --url $CP_URL/kafka
          Cluster ID        | Broker ID |                 Host                  | Port
    ------------------------+-----------+---------------------------------------+-------
     0000000000000000000000 | 1         | ip-0-0-0-0.us-west-2.compute.internal | 9092

    (confluent|platform) $ export CP_CLUSTER_ID=0000000000000000000000

    (confluent|platform) $ confluent kafka topic create my-topic --replication-factor 1 --url $CP_URL/kafka
    Created topic "my-topic".

  • Once the Confluent Cloud cluster finishes provisioning, the organization admin should receive an email. At this point, we’re ready to create a cluster link! To create a cluster link from Confluent Platform to Confluent Cloud, we must create a “source-initiated” cluster link so that the Confluent Platform cluster (the source) sends an outbound connection to the Confluent Cloud cluster (the destination). A source-initiated cluster link requires us to create two halves of the cluster link: one in Confluent Cloud and one on Confluent Platform. Start by creating a cluster link configuration file in the current directory with the following content, and name it clusterlink-dst.config:
    link.mode=DESTINATION
    connection.mode=INBOUND
    
  • Then, create the cluster link:
    (confluent|platform) $ confluent context use cloud
    

    (confluent|cloud) $ confluent kafka link create my-link --source-bootstrap-server 0.0.0.0 --source-cluster-id $CP_CLUSTER_ID --config-file clusterlink-dst.config

  • To create the other side of the cluster link, add a file with the following content to the current directory, and name it clusterlink-src.config. Properties prefixed with “local” are for authentication to the Confluent Platform cluster. Make the appropriate substitutions for the variables in angle brackets:
    link.mode=SOURCE
    connection.mode=OUTBOUND
    

    bootstrap.servers=<CC_URL>:9092
    ssl.endpoint.identification.algorithm=https
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<CC_API_KEY>' password='<CC_API_SECRET>';

    local.sasl.mechanism=OAUTHBEARER
    local.security.protocol=SASL_SSL
    local.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    local.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required username='<CP_USERNAME>' password='<CP_PASSWORD>' metadataServerUrls='';

    local.ssl.key.password=<KEY_PASSWORD>
    local.ssl.truststore.location=<PATH_TO_TRUSTSTORE>
    local.ssl.truststore.password=<TRUSTSTORE_PASSWORD>
    local.ssl.keystore.location=<PATH_TO_KEYSTORE>
    local.ssl.keystore.password=<KEYSTORE_PASSWORD>

  • Next, switch the CLI context back to Confluent Platform and create the other end of the cluster link:
    (confluent|cloud) $ confluent context use platform
    

    (confluent|platform) $ confluent kafka link create my-link --destination-bootstrap-server $CC_URL:9092 --destination-cluster-id $CC_CLUSTER_ID --config-file clusterlink-src.config --url $CP_URL/kafka
    Created cluster link "my-link".

  • In a separate terminal window, switch contexts to Confluent Cloud, create the mirror topic, and begin consuming:
    (confluent|platform) $ confluent context use cloud
    

    (confluent|cloud) $ confluent kafka mirror create my-topic --link my-link
    Created mirror topic "my-topic".

    (confluent|cloud) $ confluent kafka topic consume my-topic --from-beginning
    Starting Kafka Consumer. Use Ctrl-C to exit.

  • In the original terminal window, switch contexts to Confluent Platform, copy the client certificate and key from the Confluent Platform cluster, and produce:
    (confluent|cloud) $ confluent context use platform
    

    (confluent|platform) $ ssh <username>@example.com "sudo cat /var/ssl/private/kafka_broker.crt" > client.crt
    (confluent|platform) $ ssh <username>@example.com "sudo cat /var/ssl/private/kafka_broker.key" > client.key

    (confluent|platform) $ confluent kafka topic produce my-topic --bootstrap $CP_URL --cert-location client.crt --key-location client.key
    Starting Kafka Producer. Use Ctrl-C or Ctrl-D to exit.
    🎉

  • We can see this data mirrored in Confluent Cloud!

    Starting Kafka Consumer. Use Ctrl-C to exit.
    🎉
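    As an optional final check from the Confluent Cloud context, the link and mirror can also be inspected directly. This is a sketch assuming the kafka link and kafka mirror command groups used above also expose list subcommands (exact flags may vary by CLI version):

    (confluent|cloud) $ confluent kafka link list
    (confluent|cloud) $ confluent kafka mirror list --link my-link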

Saying goodbye to the Confluent Cloud CLI

The move to the unified Confluent CLI means that the Confluent Cloud CLI, ccloud, will be deprecated. On May 9, 2022, ccloud will be sunset. To guide users through the process of updating to v2, we’ve placed helpful messages in the latest minor version of ccloud, and the full list of breaking changes can be found in the documentation.

Conclusion

We introduced Confluent CLI v2, walked through its key new features, and demonstrated how to link a Confluent Platform cluster to a Confluent Cloud cluster and mirror data between them, using only the new unified CLI.

Looking back at the initial release of Confluent CLI v1, we’ve come a long way. This major version update for the CLI represents the first fully unified product at Confluent, capable of handling cloud and on-prem workflows from a single interface. We’re excited to keep developing the CLI to support all of the new features we have planned at Confluent!

Get in touch

The CLI team would like to hear from you! If you have any feedback, feature requests, or bug reports, reach out on the Confluent Forum, send us an email at cli-team@confluent.io, or file a ticket through Confluent Support. We look forward to hearing from you!


Brian Strauch is a software engineer who works on the Confluent CLI. He’s a recent graduate from the University of Illinois and lives in Palo Alto, California.
