Celebrating Over 100 Supported Apache Kafka Connectors

We just released Confluent Platform 5.4, which is one of our most important releases to date in terms of the features we’ve delivered to help enterprises take Apache Kafka® and event streaming into production. These include Role-Based Access Control (RBAC), Structured Audit Logs, Multi-Region Clusters, and Schema Validation.

In the spirit of innovation around Kafka, we are very excited to announce that we now have reached over 100 supported connectors for getting data in and out of Kafka.

100+ Prebuilt Connectors

One of our main goals here at Confluent is to enhance productivity for Kafka developers. This means delivering capabilities that help developers spend more time building the event streaming applications that will actually change their business, and less time figuring out the inner workings of Kafka.

If you are a developer working with open source Apache Kafka, you have two options to connect existing data sources and sinks:

  1. Develop your own connectors using the Kafka Connect framework: the challenge here is the time and effort required, which can run to several weeks per connector, not counting the time needed to fix issues that surface once it is running in production.
  2. Leverage existing open source connectors already built by the community: the challenge in this case is the inherent risk of running technology that isn't backed by an expert vendor. If your organization is deploying Kafka and event streaming into production, this is usually a gamble you cannot afford to make.
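To make the contrast concrete, here is a sketch of what a prebuilt connector asks of you instead of weeks of development: a short JSON configuration submitted to the Kafka Connect REST API. The connection details, table, and topic names below are illustrative placeholders, not a configuration from any specific deployment.

```json
{
  "name": "orders-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/shop",
    "connection.user": "connect",
    "connection.password": "********",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "table.whitelist": "orders",
    "topic.prefix": "shop-"
  }
}
```

Posting a document like this to a Connect worker's REST endpoint (by default on port 8083, e.g. `POST http://localhost:8083/connectors`) is typically all it takes to start streaming rows from a database table into a Kafka topic; no connector code needs to be written at all.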

That’s why in 2019, we decided to rocketboost our efforts in this space. We started the year with fewer than 10 supported connectors and ended it with more than 100.

Most of these connectors are developed and supported by Confluent, but we have also worked closely with our technology partners. We have developed a program for independent software vendors (ISVs) and partners to verify connectors, assuring customers and users of connector interoperability and functionality with Kafka and Confluent Platform. About a quarter of the 100+ connectors are graduates of this program.

We also gathered extensive customer feedback to prioritize building the connectors you need, so we are confident that you will find the most popular and valuable connectors in our catalog, including Salesforce, InfluxDB, Google BigQuery, Azure Blob Storage, AWS Lambda, Syslog, and more.

Confluent Hub: Your one-stop shop for Kafka connectors

To further simplify how you leverage our connector portfolio, we offer Confluent Hub, an online marketplace to easily browse, search, and filter for connectors and other plugins that best fit your data movement needs for Kafka.
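Alongside the website, plugins listed on Confluent Hub can be installed from the command line with the `confluent-hub` client that ships with Confluent Platform. A quick sketch (the connector coordinates below are an example; check Confluent Hub for the exact name and latest version of the plugin you need):

```shell
# Install a connector plugin from Confluent Hub into a local
# Confluent Platform installation (coordinates are illustrative)
confluent-hub install confluentinc/kafka-connect-jdbc:latest
```

Once a plugin is installed, restart your Connect workers so they pick up the new connector classes.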

You may already know about Confluent Hub, which we first introduced back in June 2018. What’s newsworthy is that in the coming days, we will give it a complete makeover. The new Confluent Hub will feature updated graphics, cleaner layouts and content, and, most importantly, a dramatically enhanced user experience, with a left-hand filter panel that lets you narrow your search for plugins by critical attributes such as:

  • Type: a sink connector, source connector, converter, or transform
  • Enterprise support: Confluent or partner supported
  • Licensing: commercially licensed or free (open source or community licensed)
  • Confluent Cloud availability: whether it’s available fully managed in our hosted SaaS offering

Ready to get started?

Thanks to this important milestone of 100+ supported connectors and a revamped Confluent Hub, it has never been easier for you and your organization to instantly connect your most popular data sources and sinks to Kafka. We encourage you to explore Confluent Hub to find the Kafka connectors that are right for your use cases.

If you’d like to try the rest of our enterprise features from the 5.4 release, you can download Confluent Platform to take Kafka into production for your mission-critical use cases.

Mauricio Barra is a product marketing manager at Confluent, responsible for the go-to-market strategy of Confluent Platform. His primary goal is to drive clarity and awareness within the Apache Kafka community about the value proposition of Confluent Platform as an enterprise-ready event streaming platform. Mauricio has more than seven years of experience in enterprise technology, having previously worked on storage, availability, and integrated systems products at VMware.
