This tutorial describes how to set up an Apache Kafka® cluster on Enterprise Pivotal Container Service (Enterprise PKS) using Confluent Operator, which lets you deploy and run Confluent Platform at scale on virtually any Kubernetes platform, including Enterprise PKS. With Enterprise PKS, you can deploy, scale, patch, and upgrade all the Kubernetes clusters in your system without downtime.
You’ll start by creating a Kafka cluster in Enterprise PKS using Confluent Operator. Then, you’ll configure it to expose external endpoints so the cluster is available outside of the Enterprise PKS environment. This is useful when, for example, an application deployed to Pivotal Application Service (PAS) needs to produce to and/or consume from Kafka running in Enterprise PKS.
Let’s begin!
Run all the following command line tasks in a terminal unless explicitly noted otherwise.
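Before running the installs, it helps to confirm that your tooling points at the right cluster. Here is a minimal sanity check, assuming you already have a kubectl context for your Enterprise PKS cluster and are using Helm 2 (implied by the --name flag in the commands below):

# Confirm kubectl is targeting the Enterprise PKS cluster
kubectl config current-context

# Confirm the Helm client can reach its server-side component (Tiller)
helm version

First, install Confluent Operator itself: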
helm install \
  -f ./providers/pks.yaml \
  --name operator \
  --namespace operator \
  --set operator.enabled=true \
  ./confluent-operator
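Next, check on the pods that the chart created:

kubectl get pods -n operator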
NAME                          READY   STATUS              RESTARTS   AGE
cc-manager-7bf99846cc-qx2hb   0/1     ContainerCreating   1          6m40s
cc-operator-798b87b77-lx962   0/1     ContainerCreating   0          6m40s
Wait until the status changes from ContainerCreating to Running.
helm install \
  -f ./providers/pks.yaml \
  --name zookeeper \
  --namespace operator \
  --set zookeeper.enabled=true \
  ./confluent-operator
kubectl get pods -n operator
NAME                          READY   STATUS              RESTARTS   AGE
cc-manager-7bf99846cc-qx2hb   1/1     Running             1          6m40s
cc-operator-798b87b77-lx962   1/1     Running             0          6m40s
zookeeper-0                   0/1     ContainerCreating   0          15s
zookeeper-1                   0/1     ContainerCreating   0          15s
zookeeper-2                   0/1     Pending             0          15s
Wait until the ZooKeeper pods are in Running status before proceeding to the next step. It takes approximately one minute.
helm install \
  -f ./providers/pks.yaml \
  --name kafka \
  --namespace operator \
  --set kafka.enabled=true \
  ./confluent-operator
Issue kubectl get pods -n operator and wait for the status of the kafka-0, kafka-1, and kafka-2 pods to be Running.
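For reference, a healthy broker rollout looks something like this (your AGE values will differ; unrelated pods omitted):

kubectl get pods -n operator

NAME      READY   STATUS    RESTARTS   AGE
...
kafka-0   1/1     Running   0          2m
kafka-1   1/1     Running   0          2m
kafka-2   1/1     Running   0          2m
...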
At this point, the setup is complete and you are ready to verify that the installation is successful.
Before beginning verification, some of you may be wondering about the rest of the Confluent Platform, such as Schema Registry, KSQL, Control Center, etc. Based on the steps you just went through, I suspect you already know the answer. Nevertheless, here’s an example of deploying Schema Registry:
helm install \
  -f ./providers/pks.yaml \
  --name schemaregistry \
  --namespace operator \
  --set schemaregistry.enabled=true \
  ./confluent-operator
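As with the brokers, wait for the new pods to reach Running status; assuming the operator’s default naming, they carry the schemaregistry release name:

kubectl get pods -n operator | grep schemaregistry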
In this tutorial, verification involves both internal and external checks, because you configured your pks.yaml file to expose external endpoints.
Internal verification involves connecting to one of the broker pods and confirming that the nodes can communicate with each other by running Kafka’s command line tools. It breaks down into three steps: open a terminal on one of the broker pods, create a kafka.properties client configuration file like the one below, and run a few Kafka commands against the cluster.
cat << EOF > kafka.properties
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
bootstrap.servers=kafka:9071
security.protocol=SASL_PLAINTEXT
EOF
This is roughly what running through these three steps looks like (a minimal sketch, assuming the Kafka CLI tools are on the pod’s PATH, as they are in the Confluent images):
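# Step 1: open a shell on one of the broker pods
kubectl -n operator exec -it kafka-0 -- bash

# Step 2, inside the pod: create the kafka.properties file shown above

# Step 3, inside the pod: run a Kafka command against the internal
# listener to confirm the brokers can reach each other
kafka-topics --bootstrap-server kafka:9071 --command-config kafka.properties --list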
For external validation, you interact with your Kafka cluster from outside the Kubernetes environment, for example, from your laptop. Make sure you have Confluent Platform downloaded and extracted so the Kafka scripts are available in the bin/ directory.
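Start by listing the services in the operator namespace to find the external IPs that were assigned to the load balancers:

kubectl get services -n operator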
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP
kafka                ClusterIP      None             <none>
kafka-0-internal     ClusterIP      10.100.200.223   <none>
kafka-0-lb           LoadBalancer   10.100.200.231   35.192.236.163
kafka-1-internal     ClusterIP      10.100.200.213   <none>
kafka-1-lb           LoadBalancer   10.100.200.200   35.202.102.184
kafka-2-internal     ClusterIP      10.100.200.130   <none>
kafka-2-lb           LoadBalancer   10.100.200.224   104.154.134.167
kafka-bootstrap-lb   LoadBalancer   10.100.200.6     35.225.209.199
Next, map hostnames to those external IPs on the machine you’ll run the clients from, for example in your /etc/hosts file:

35.192.236.163 b0.supergloo.com b0
35.202.102.184 b1.supergloo.com b1
104.154.134.167 b2.supergloo.com b2
35.225.209.199 kafka.supergloo.com kafka
It is critical to map the b0, b1, and b2 hosts to the external IPs of the kafka-0-lb, kafka-1-lb, and kafka-2-lb services, respectively, and to map kafka to the bootstrap load balancer. (Note: the domain supergloo.com was configured in the pks.yaml file.)
Then, create a kafka.properties file on your laptop that points at the external bootstrap endpoint:

sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
bootstrap.servers=kafka.supergloo.com:9092
security.protocol=SASL_PLAINTEXT
Create a topic over the external endpoint:

bin/kafka-topics --create --bootstrap-server kafka.supergloo.com:9092 --command-config kafka.properties --replication-factor 3 --partitions 1 --topic example

Then list the topics to confirm it worked:
bin/kafka-topics --list --command-config kafka.properties --bootstrap-server kafka.supergloo.com:9092
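To exercise the data path as well, you can produce and consume a few records over the external endpoints. Here is a minimal sketch using the console clients from the same bin/ directory (the topic example comes from the create step above; type a few lines into the producer, then read them back with the consumer):

# Produce a few records over the external listener
bin/kafka-console-producer --broker-list kafka.supergloo.com:9092 --producer.config kafka.properties --topic example

# Consume them back from the beginning of the topic
bin/kafka-console-consumer --bootstrap-server kafka.supergloo.com:9092 --consumer.config kafka.properties --topic example --from-beginning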
You’ve completed the tutorial and deployed a Kafka cluster to Enterprise PKS using Confluent Operator!
Next up in part 2, we’ll walk through how to deploy a sample Spring Boot application to PAS and configure it to produce to and consume from the Kafka cluster created in this tutorial.
For more, check out Kafka Tutorials and find full code examples using Kafka, Kafka Streams, and KSQL.