
Announcing the Elasticsearch Service Sink Connector for Apache Kafka in Confluent Cloud

We are excited to announce the preview release of the fully managed Elasticsearch Service Sink Connector in Confluent Cloud, our fully managed event streaming service based on Apache Kafka®. The managed Elasticsearch Service Sink Connector eliminates the need to run your own Kafka Connect cluster, reducing your operational burden when connecting to Kafka across all major cloud providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Before we dive into the Elasticsearch Service sink, let’s recap what Elasticsearch Service is and does.

What is Elasticsearch Service?

Elastic Cloud is a set of SaaS offerings that make it easy to deploy, operate, and scale Elastic products and solutions in the cloud. Elasticsearch Service is the official hosted Elasticsearch offering from the creators of Elasticsearch, and you can spin up a fully loaded deployment on the cloud provider of your choice: AWS, GCP, or Azure.

Getting started with Confluent Cloud and Elasticsearch Service

To get started, you will need access to Confluent Cloud as well as Elasticsearch Service. You can use the promo code CL60BLOG to get an additional $60 of free Confluent Cloud usage.* If you don’t have Elasticsearch Service yet, you can sign up for a 14-day free trial.

When using the Elasticsearch Service Sink Connector, your Elasticsearch Service deployment must run in the same cloud provider and region as your Kafka cluster in Confluent Cloud. This keeps you from incurring data movement charges between cloud regions. In this blog post, the Elasticsearch Service deployment runs on GCP us-central1 and the Kafka cluster runs in the same region.

Once your Elasticsearch Service deployment is running, note its URL, username, and password; you will need them to configure the Elasticsearch Service Sink Connector.

Using the Elasticsearch Service sink

Building on this food delivery scenario: when a new user signs up on the website, multiple business systems need that user's contact information. The contact information is published to the Kafka topic users for shared use, and we then configure Elasticsearch Service as a sink for that topic. This lets a new user's information propagate to a users index in Elasticsearch Service.

To do this, first create the topic users on a Kafka cluster running in Confluent Cloud (GCP us-central1).

[Image: New topic]
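
If you would rather script this step than click through the UI, here is a minimal sketch using the AdminClient from the confluent-kafka Python package. The bootstrap server, API key/secret, and partition/replication settings are placeholders, not values taken from this post; substitute your own cluster's details.

```python
# Minimal sketch: create the `users` topic with the confluent-kafka AdminClient.
# All connection values below are placeholders (assumptions), not real settings.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({
    "bootstrap.servers": "<CLUSTER_BOOTSTRAP_SERVER>",  # e.g. a us-central1 GCP endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<KAFKA_API_KEY>",
    "sasl.password": "<KAFKA_API_SECRET>",
})

# Request topic creation and block until the broker confirms (or fails).
futures = admin.create_topics([NewTopic("users", num_partitions=6, replication_factor=3)])
for topic, future in futures.items():
    future.result()  # raises if creation failed, e.g. the topic already exists
    print(f"Created topic {topic}")
```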

Use this Python script to produce sample records to the users topic, and confirm that the records have arrived.
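
As a rough sketch of what such a script can look like, the producer below writes a few JSON records to the users topic using the confluent-kafka package. The connection values are placeholders and the sample user fields are hypothetical; the actual script's schema may differ.

```python
# Minimal sketch: produce JSON user records to the `users` topic.
# Connection values are placeholders; the record schema is hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<CLUSTER_BOOTSTRAP_SERVER>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<KAFKA_API_KEY>",
    "sasl.password": "<KAFKA_API_SECRET>",
})

users = [
    {"name": "Alice", "email": "alice@example.com", "city": "Chicago"},
    {"name": "Bob", "email": "bob@example.com", "city": "Austin"},
]

for user in users:
    # Serialize each record as JSON, matching the connector's input format.
    producer.produce("users", value=json.dumps(user).encode("utf-8"))

producer.flush()  # block until all records are delivered
print(f"Produced {len(users)} records to the users topic")
```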

Click the Elasticsearch Service Sink Connector icon under the “Connectors” menu, and fill out the configuration properties with your Elasticsearch Service details. Make sure JSON is selected as the input message format. The connector uses the name of the users topic as the index name. You can also configure this connector with a CLI command in Confluent Cloud, as sketched below.

[Image: Elasticsearch Service Sink configuration: Input Messages | How should we connect to your Elasticsearch Service? | Data Conversion]
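
As a sketch of the CLI route: the managed connector takes a JSON configuration. The snippet below assembles one in Python and writes it to a file. The property names follow the managed connector's documented settings at the time, but treat them, the credential placeholders, and the ccloud invocation in the comment as assumptions to verify against your CLI and connector versions.

```python
# Sketch: assemble the sink connector configuration and write it to a JSON
# file for the CLI. Property names and the CLI invocation are assumptions to
# verify against current docs; credential values are placeholders.
import json

connector_config = {
    "name": "elasticsearch-users-sink",
    "connector.class": "ElasticsearchSink",
    "topics": "users",                      # also used as the index name
    "input.data.format": "JSON",            # must match the produced records
    "connection.url": "<ELASTICSEARCH_SERVICE_URL>",
    "connection.username": "<ELASTICSEARCH_USERNAME>",
    "connection.password": "<ELASTICSEARCH_PASSWORD>",
    "kafka.api.key": "<KAFKA_API_KEY>",
    "kafka.api.secret": "<KAFKA_API_SECRET>",
    "tasks.max": "1",
}

# Then create the connector from the file, e.g.:
#   ccloud connector create --config elasticsearch-sink.json
with open("elasticsearch-sink.json", "w") as f:
    json.dump(connector_config, f, indent=2)
```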

Once the connector is up and running, records from the users topic will show up in the users index in Elasticsearch Service.

[Image: Elasticsearch Service records]
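
You can also confirm this outside the UI by querying the index directly. Below is a minimal sketch that hits Elasticsearch's _search API with Python's requests library; the URL and credentials are placeholders for the values you saved earlier.

```python
# Minimal sketch: fetch a few documents from the `users` index via _search.
# URL and credentials are placeholders for your Elasticsearch Service values.
import requests

resp = requests.get(
    "<ELASTICSEARCH_SERVICE_URL>/users/_search",
    auth=("<ELASTICSEARCH_USERNAME>", "<ELASTICSEARCH_PASSWORD>"),
    params={"size": 5},  # return up to five documents
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])  # the original JSON record from the users topic
```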

You can use a Kibana dashboard in Elastic Cloud to visualize the distribution of users’ locations.

[Image: Kibana dashboard in Elastic Cloud]

Learn more

If you haven’t tried it yet, check out Confluent Cloud, a fully managed Apache Kafka and enterprise event streaming platform, available on the Microsoft Azure and GCP Marketplaces with the Elasticsearch Service sink and other fully managed connectors. Enjoy moving data from your sources into Confluent Cloud and sending Kafka records to your destinations without any operational burden.

Nathan Nam is a senior product manager for Kafka Connect, connectors, and Schema Registry at Confluent. Previously, he worked at MuleSoft as a product manager and held various roles at Samsung Electronics. He holds an MBA from Tuck School of Business at Dartmouth and an MIDS from UC Berkeley.
