Are you looking for a way to run AWS services on premises in your own datacenter? I am excited to share today that we have completed validation of support for running Confluent Platform on AWS Outposts. AWS Outposts delivers fully managed, configurable compute and storage racks built with Amazon Web Services (AWS)-designed hardware, allowing you to run EC2 instances locally in your datacenter or co-location facility just as you would in an AWS Region. You get all of these resources locally while seamlessly connecting to a broad array of cloud-based AWS services.
We are also excited to announce that we are AWS Outposts Ready partner certified, one of the highest designations that an AWS Partner Network (APN) Partner can achieve. This means that Confluent Platform has been certified for integration with AWS Outposts, giving you the assurance that you can run your production workloads on Confluent Platform locally and securely.
We completed testing of Confluent Platform on Outposts and can now connect various on-premises event sources, such as legacy databases, proprietary storage systems, and monolithic applications, to AWS, serving as a data hub for hybrid designs. Confluent Platform is an enterprise-ready platform that complements Apache Kafka® with advanced capabilities designed to help accelerate application development and connectivity, enable digital transformations through stream processing, simplify enterprise operations at scale, and meet stringent architectural requirements. Running Confluent Platform on AWS Outposts brings several advantages.
In collaboration with AWS, we are able to provide a true hybrid experience: a unified control plane for AWS infrastructure, as well as a unified event streaming platform on which to build your next-generation event-driven applications. This helps you overcome the challenges of data locality requirements and of running complex hybrid scenarios between on-premises and AWS services.
Next, let’s walk through how we set up our environment.
This section covers how we deployed Confluent Platform to Outposts during our validation process. I will cover the connectivity you will need to have between your Outposts instances, AWS, and the internet. Then, we will delve into building a custom Amazon Machine Image (AMI) on top of Ubuntu Linux that will be used as a basis for all of the instances we deploy. Next, we will run through a few updates that you need to make to the CloudFormation template supplied below. Finally, we will close with how to verify that your deployment was successful.
Let’s first discuss the connectivity that EC2 instances running on your Outposts will need: they must be able to reach both the parent AWS Region and the internet, so that AWS APIs and software packages are available during setup.
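As a quick sanity check once an instance is up, you can verify both paths from the instance itself. This is a minimal sketch; the endpoint and Region below are placeholders you should adjust for your environment:

# Verify internet egress (any public endpoint works; this one is an example)
curl -sI https://www.confluent.io > /dev/null && echo "internet reachable"
# Verify the instance can reach regional AWS APIs (assumes credentials or an instance role)
aws sts get-caller-identity --region us-west-2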
Follow these steps to create the custom Ubuntu 18.04 AMI:

Update the package index:

sudo apt-get update -y

Install pip3:

sudo apt install python3-pip -y

Install Ansible with pip3 for both the ubuntu user and root, adding the user-local bin directory to your PATH:

pip3 install ansible --user
vi .bashrc    # add the following line to .bashrc
export PATH=$PATH:$HOME/.local/bin
source .bashrc
sudo cp .local/bin/ansible* /usr/bin
sudo su       # repeat the install as root
pip3 install ansible --user

Install awscli with pip3:

pip3 install awscli --user
sudo cp .local/bin/aws* /usr/bin
sudo su
pip3 install awscli --user

Install boto3:

pip3 install boto3 --user
sudo su
pip3 install boto3 --user

Install the CloudFormation helper scripts:

sudo apt-get install python-pip -y
sudo pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
sudo cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup
sudo chmod +x /etc/init.d/cfn-hup
sudo update-rc.d cfn-hup defaults
sudo service cfn-hup start

Log out and back in, since .bash_history is not created until the user logs out:

logout

Set ANSIBLE_SSH_USER for both the ubuntu user and root:

vi /home/ubuntu/.bashrc    # add the following line
export ANSIBLE_SSH_USER=ubuntu
source .bashrc
sudo su
vi /root/.bashrc           # add the same line for root
export ANSIBLE_SSH_USER=ubuntu
source /root/.bashrc

Remove the ssh keys from the system:

rm .ssh/authorized_keys
sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub
sudo su
rm .ssh/authorized_keys
shred -u /etc/ssh/*_key /etc/ssh/*_key.pub

Lock the root account's password:

sudo passwd -l root

Because every command you run is stored in the .bash_history file, remove the file and clear the history:

rm .bash_history
sudo su
rm .bash_history
history -c && history -w && exit
history -c && history -w
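With the instance prepared, the last step is to capture it as an AMI. A minimal sketch using the AWS CLI, where the instance ID, name, and description are placeholders:

# Create an AMI from the prepared instance (replace the instance ID with your own)
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "confluent-platform-ubuntu1804-base" \
  --description "Custom Ubuntu 18.04 AMI for Confluent Platform on Outposts"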
Next, make a few updates to the CloudFormation template:

RegionMap: Add your custom AMI to the region of your Outposts. In the example below, I used ami-0081b1729374885e9 in us-west-2:

"Mappings": {
    "RegionMap": {
        "ap-northeast-1" : { "AMIId" : "" },
        "ap-south-1" : { "AMIId" : "" },
        "ap-southeast-1" : { "AMIId" : "" },
        "ap-southeast-2" : { "AMIId" : "" },
        "ca-central-1" : { "AMIId" : "" },
        "eu-central-1" : { "AMIId" : "" },
        "eu-west-1" : { "AMIId" : "" },
        "eu-west-2" : { "AMIId" : "" },
        "sa-east-1" : { "AMIId" : "" },
        "us-east-1" : { "AMIId" : "" },
        "us-east-2" : { "AMIId" : "" },
        "us-west-2" : { "AMIId" : "ami-0081b1729374885e9" }
    }
}
*InstanceType: Your Outposts will have a unique setup in terms of instance families and sizes. Please be sure to customize these parameters accordingly:

"InventoryInstanceType" : {
    "Description" : "Choose EC2 instance type for Ansible Inventory Node",
    "Type" : "String",
    "Default" : "t2.micro",
    "AllowedValues": [ "t2.micro", "t2.medium", "t2.small", "m5.large" ]
},
"BrokerInstanceType" : {
    "Description" : "Choose EC2 instance type for Kafka Broker",
    "Type" : "String",
    "Default" : "m4.large",
    ...
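To see which instance types your Outpost actually offers before editing the AllowedValues lists, you can query the Outposts API; the Outpost ID below is a placeholder:

# List the instance types provisioned on your Outpost
aws outposts get-outpost-instance-types --outpost-id op-0123456789abcdef0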
For sizing, we recommend a small instance for the InventoryNode, larger instances for ksqlDB and Confluent Control Center, and medium-sized instances for everything else. In our testing, we used m5.large for everything except ksqlDB and Control Center, which used m5.4xlarge.

BucketName: This parameter takes the name of the S3 bucket where the keypair is present. Note that the keypair is region specific.

KeyName: Provide the name of your keypair in this parameter.

VPCId: Select the VPC associated with your Outposts.

Once these parameters are set, create the stack, and the InventoryNode should continue to deploy Confluent Platform. Upon creation of the stack, the nodes will be launched, with the Ansible inventory node being the last to be created. You can track the progress of the playbook by logging into the Ansible inventory node:

vi /var/log/cloud-init-output.log
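If you prefer to follow the playbook output live rather than opening the file in an editor, tailing the same log works as well:

# Stream the cloud-init output as the playbook runs
tail -f /var/log/cloud-init-output.log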
If there are any errors in the parameters entered, cloud-init-output.log will contain a message saying that it failed to send a cfn-signal.
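You can also diagnose a failed launch from your workstation by listing the stack's failed events; the stack name below is a placeholder:

# Show which resources failed to create, and why
aws cloudformation describe-stack-events --stack-name confluent-outposts \
  --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
  --output table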
Create a test topic (for <BROKER IP>, use the private IP of one of the brokers from the CloudFormation/Ansible setup):

kafka-topics --create --topic test-dr-topic --bootstrap-server <BROKER IP>:9092 --replication-factor 3 --partitions 1
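Before producing, you can confirm the topic was created with the expected partitions and replication factor:

# Describe the test topic to verify its configuration
kafka-topics --describe --topic test-dr-topic --bootstrap-server <BROKER IP>:9092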
Produce a few Avro messages:

kafka-avro-console-producer --broker-list <BROKER IP>:9092 --property schema.registry.url=http://localhost:8081 --property key.converter=StringConverter --topic test-dr-topic --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'

Enter the following records, one per line:

{"f1": "value1-a"}
{"f1": "value2-a"}
{"f1": "value3-a"}
Press Ctrl-C. Consume messages:
kafka-avro-console-consumer --from-beginning --topic test-dr-topic --bootstrap-server <BROKER IP>:9092 --property print.key=false --property schema.registry.url=http://localhost:8081
You should see three messages consumed:
{"f1":"value1-a"} {"f1":"value2-a"} {"f1":"value3-a"} Processed a total of 3 messages
Press Ctrl-C. Delete your test topic through the Confluent Control Center GUI.
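If you would rather clean up from the command line instead of the GUI, the same tool that created the topic can delete it:

# Remove the test topic when you are done
kafka-topics --delete --topic test-dr-topic --bootstrap-server <BROKER IP>:9092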
And that’s a wrap!
AWS Outposts provides a new paradigm for running on-premises workloads together with Confluent Platform for truly event-driven, hybrid, and mission-critical use cases. We covered many of the advanced features of AWS Outposts and walked through the exact process used to get Confluent Platform up and running for our AWS Outposts Ready partner certification. We hope this helps you on your path to event streaming.
If you’d like more details, I encourage you to learn more about AWS Outposts and get started with Confluent Platform today.
Joseph Morais started early in his career as a network/solution engineer working for FMC Corporation and then Urban Outfitters (UO). At UO, Joseph joined the e-commerce operations team, where he focused on agile methodology, CI/CD, containerization, public cloud architecture, and infrastructure as code. This led to a greenfield AWS opportunity working for a startup, Amino Payments, where he worked heavily with Kafka, Apache Hadoop, NGINX, and automation. Just prior to joining Confluent, Joseph was helping AWS enterprise customers scale through their cloud journey as a sr. technical account manager. At Confluent, Joseph serves as cloud partner solutions architect and AWS evangelist.
Jobin George is a senior partner solutions architect at AWS, with more than a decade of experience designing and implementing large scale big data and analytics solutions. He provides technical guidance, design advice, and thought leadership to key AWS customers and big data partners.