
How Confluent Is Enhancing and Easing Migration to Fully Managed Connectors

Written by: Yashwanth Dasari

Taking advantage of Confluent’s pre-built connectors means you can build integration pipelines without having to write, test, and maintain integration code. And with our fully managed connectors on Confluent Cloud, you can connect to any source or sink system cost-efficiently, with zero ops or infrastructure management. This year, we've been hard at work making the benefits of fully managed connectors a reality for more of our customers.

We’ve significantly enhanced the Confluent data streaming platform and connector portfolio by: 

  • Exposing fully managed connector logs to non-admin roles

  • Introducing plugin versioning for custom connectors 

  • Expanding single message transform (SMT) support

  • Enabling custom topic naming for dead letter queue (DLQ) and error topics

  • Making key updates across specific connectors for databases, software-as-a-service (SaaS) and applications, messaging systems, analytics and data warehouses, and monitoring and observability systems 

Ultimately, these improvements give you enhanced control, robust data guarantees, an improved user experience, and less operational burden when building integration pipelines on Confluent Cloud. They also ease migrations from self-managed to fully managed connectors, helping you maintain consistent performance and data integrity as you transition pipelines to the fully managed Confluent Connect environment.

This blog post covers both the platform-level improvements that are applicable to fully managed, self-managed, and custom connector suites as well as individual connector enhancements.

How Is Confluent Easing Your Connect Migrations? 

We’ve made platform-level improvements to help Confluent Cloud customers seamlessly transition from using self-managed to fully managed connectors.

These changes apply across all connectors and make it easier for your team to independently manage connectors, upgrade custom connectors, and leverage SMTs as well as DLQ, success, and error topics.

Exposing Fully Managed Connector Logs to Non-Admin Roles

We’re pleased to announce that non-admin roles in Confluent Cloud, specifically ResourceOwner and ConnectManager, now have access to connector logs.

This enhancement addresses previous limitations to role-based access control (RBAC) for connectors, in which these roles could create and manage connectors but lacked the ability to access logs for comprehensive end-to-end management and troubleshooting. This update empowers users in these roles to independently manage and diagnose their connectors.

Support for Plugin Version Selection in Custom Connectors

Previously, upgrading connectors was a disruptive process—requiring users to upload a new plugin and create a new connector—which unfortunately meant losing the last offset of the former connector. This is no longer the case!

We're excited to announce that custom connectors now offer plugin versioning. This powerful new feature, currently accessible via API, CLI, or Terraform (with user interface [UI] support coming soon), empowers users to seamlessly direct their active connectors to an updated plugin version and resume from the very last offset. This ensures a much smoother and more efficient upgrade experience.

This functionality is exclusively available through our new Custom Connect Plugin Management (CCPM) APIs, which establish custom connect plugin resources at the environment level. This aligns with other custom resources such as Apache Flink® user-defined functions and custom SMTs, providing a consistent and robust management experience. Please note that the legacy APIs are slated for deprecation.
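To make the upgrade flow concrete, here is a hypothetical sketch of repointing an existing custom connector at a new plugin version through the Connect API. The endpoint shape, the connector class, and the plugin-related configuration keys (`confluent.custom.plugin.id`, `confluent.custom.plugin.version`) are assumptions for illustration only; consult the CCPM API reference for the exact resource names and properties.

```python
# Hypothetical sketch: repointing a custom connector at a new plugin version.
# Endpoint path and the plugin-related keys are assumptions -- verify them
# against the Custom Connect Plugin Management (CCPM) API docs before use.
import requests

API_KEY = "<cloud-api-key>"
API_SECRET = "<cloud-api-secret>"
ENV_ID = "env-abc123"          # environment that owns the custom plugin
CLUSTER_ID = "lkc-xyz789"      # Kafka cluster running the connector
CONNECTOR = "my-custom-source"

url = (
    f"https://api.confluent.cloud/connect/v1/environments/{ENV_ID}"
    f"/clusters/{CLUSTER_ID}/connectors/{CONNECTOR}/config"
)

# Updated config: everything stays the same except the plugin version,
# so the connector resumes from its last committed offset.
new_config = {
    "connector.class": "com.example.MyCustomSource",   # hypothetical class
    "confluent.custom.plugin.id": "ccp-123456",        # hypothetical plugin ID
    "confluent.custom.plugin.version": "1.2.0",        # assumed version key
    "kafka.topic": "orders",
    "tasks.max": "1",
}

resp = requests.put(url, json=new_config, auth=(API_KEY, API_SECRET), timeout=30)
resp.raise_for_status()
print(resp.json())
```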

Enhancing SMTs for Fully Managed Connectors

Out-of-the-Box SMTs

We've now rolled out several SMT enhancements to our fully managed connectors. This includes expanded support for “DropHeaders,” “InsertHeader,” and “HeaderFrom” as well as extending “ValueToKey” SMT to Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, and SFTP source connectors. 

We've also introduced “HeaderToValue” as a Confluent-supported SMT and added Confluent's “Flatten” SMT, which flattens nested data structures. Additionally, we've added support for the “Filter” SMT to the Amazon S3, Microsoft Azure Blob Storage, Amazon Kinesis, and Google Cloud Storage source connectors.

These SMT enhancements offer users greater flexibility and control over data transformations within fully managed connectors. By supporting a wider range of SMTs—including those for header manipulation, key transformation, and data flattening—users can more precisely tailor their data before it's delivered to its destination, ensuring data quality and compatibility with downstream systems.
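As a rough illustration, here is what chaining a couple of these SMTs might look like in a connector configuration. The transform classes are the standard Apache Kafka SMTs named above; the connector class and the remaining properties are placeholders, not the authoritative config for any specific fully managed connector.

```python
# Illustrative SMT chain for a fully managed source connector config.
# The transform classes are standard Apache Kafka SMTs; other property
# names and values are placeholders -- see the Confluent Cloud docs for
# the full configuration of each connector.
import json

connector_config = {
    "name": "s3-source-with-smts",
    "connector.class": "S3Source",            # placeholder class name
    "kafka.topic": "orders",
    # Chain of SMTs applied in order to every record.
    "transforms": "promoteKey,dropHeaders",
    # Copy the "order_id" field from the record value into the record key.
    "transforms.promoteKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.promoteKey.fields": "order_id",
    # Remove internal headers before records reach downstream consumers.
    "transforms.dropHeaders.type": "org.apache.kafka.connect.transforms.DropHeaders",
    "transforms.dropHeaders.headers": "x-internal-trace-id",
}

print(json.dumps(connector_config, indent=2))
```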

For more details on all available SMTs, refer to the Kafka Connect Single Message Transform for Confluent Cloud.

Custom SMTs

Custom SMTs enable customers to bring their own SMTs to Confluent Cloud for use alongside fully managed connectors on AWS within private networking setups. Whether you're using custom in-house SMTs or third-party SMTs, they can now be used with fully managed connectors simply by uploading the associated ZIP/JAR file containing the SMT(s) to Confluent Cloud.

For more details and to get started with Custom SMTs, refer to Configure Custom SMT for Kafka Connectors in Confluent Cloud.

Custom Naming for DLQ, Success, and Error Topics in Fully Managed Connectors

We've made improvements to how DLQ, success, and error topics are named. Previously, a predefined naming convention made it harder for users to manage access and required a single topic per connector, creating a functional difference between our self-managed and fully managed connectors.

With this update, users can now specify a name (using an expression) for these topics during connector creation via the CLI or API. While these topics still default to the <connector-id> suffix, this new flexibility removes previous restrictions on user access to specific topics and no longer requires a single topic per connector.

This brings the functionality of our fully managed connectors in line with our self-managed offerings, making your experience even more seamless. UI support for this feature will be implemented at a later stage.
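As a hedged sketch of what this looks like, the snippet below overrides the DLQ topic name when creating a sink connector via API or CLI. The property shown is the standard Kafka Connect dead letter queue setting; the connector class is a placeholder, and the exact key and expression syntax for fully managed connectors should be confirmed in the Common Property Configurations documentation.

```python
# Hypothetical sketch: overriding the default DLQ topic name for a fully
# managed sink connector. "errors.deadletterqueue.topic.name" is the standard
# Kafka Connect property; confirm the exact key and expression syntax in
# "Common Property Configurations for Connectors".
connector_config = {
    "name": "orders-snowflake-sink",
    "connector.class": "SnowflakeSink",        # placeholder class name
    "topics": "orders",
    # Custom DLQ topic instead of the default <connector-id>-suffixed name,
    # allowing several connectors to share one pre-created DLQ topic.
    "errors.deadletterqueue.topic.name": "dlq-orders-shared",
    "errors.tolerance": "all",
}
```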

For more details, refer to Common Property Configurations for Connectors.

How Is Confluent Powering Robust and Flexible Integration With Connector Enhancements?

Database Connectors

  • Google Cloud Spanner sink: PostgreSQL dialect

  • MongoDB Atlas source and sink: Field truncation

  • PostgreSQL Change Data Capture (CDC) source V2: Publications at root level

  • Snowflake sink: Proxy-based connectivity

  • Oracle XStream CDC source: Support for large objects, validation script, client-side field level encryption (CSFLE)

  • MySQL CDC source V2 and Amazon DynamoDB sink: AWS Identity and Access Management (IAM) AssumeRole

  • JDBC connectors: Alternative for freeform queries

SaaS and Application Connectors

  • AWS Lambda sink: Support for aliases

  • HTTP source/sink V2: API cursor pagination and API key authentication

  • Salesforce source and sink: OAuth, support for up to five sObjects, and fetching deleted records

Messaging System Connectors

  • IBM MQ source and sink: OAuth2, personalized certificate validation, single sign-on (SSO), and exactly-once semantics

  • Amazon Kinesis and Amazon Simple Queue Service (SQS) source: AWS IAM AssumeRole

File Storage Connectors

  • Amazon S3 source and sink: Expanded capacity for indexed objects

  • Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage sink: Field partitioner support

Monitoring and Observability Connectors

  • Amazon CloudWatch Logs source: AWS IAM AssumeRole

Analytics and Warehouse Connectors

  • Amazon Redshift sink: AWS IAM AssumeRole

Improving Security, Data Compatibility, and Reliability of Database Connectors

PostgreSQL Dialect for Google Cloud Spanner Sink Connector

The Google Cloud Spanner sink connector, which previously supported the default GoogleSQL database dialect, has been enhanced to include support for Google Cloud Spanner's PostgreSQL dialect. This allows customers to use both GoogleSQL and PostgreSQL dialects within their Spanner database configurations when using our connector.

This enhancement provides like-for-like data type support, meaning all data types previously supported for the GoogleSQL dialect are now also supported for the PostgreSQL dialect. This feature is available starting from version 1.1.0.

MongoDB Atlas Source and Sink Connector Enhancements

Fully managed MongoDB Atlas source and sink connectors now allow for better management of field truncation when fields aren't explicitly defined within the schema.

Additionally, the “output.format.key” and “output.format.value” configurations, which control the serialization format of Apache Kafka® message keys and values, are now accessible. The “output.schema.key” configuration is also exposed, allowing clients to explicitly define the schema for Kafka message key and value documents.
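The sketch below shows where these newly exposed properties sit in a source connector configuration. The property names come from this post; the connector class, format values, and schema string are placeholders, so check the MongoDB Atlas connector documentation for the supported values.

```python
# Illustrative MongoDB Atlas source config using the newly exposed output
# format and key schema settings. Values are placeholders -- consult the
# connector docs for the allowed formats and schema syntax.
mongodb_source_config = {
    "name": "mongo-atlas-orders-source",
    "connector.class": "MongoDbAtlasSource",   # placeholder class name
    "kafka.topic.prefix": "mongo",
    # Serialization format of the Kafka record key and value.
    "output.format.key": "JSON",
    "output.format.value": "JSON",
    # Explicit schema for the key document (assumed: a schema string).
    "output.schema.key": '{"type": "record", "name": "Key", '
                         '"fields": [{"name": "_id", "type": "string"}]}',
}
```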

Creating Publications at the Partition Root Level With the PostgreSQL CDC Source V2 Connector

The PostgreSQL CDC source V2 connector now automates the publication of change event data from partitioned PostgreSQL source tables to a single topic.

This enhancement streamlines the process for customers, reducing operational costs and improving data consistency by ensuring that all relevant data is captured in a single stream. This feature simplifies data integration, making it easier to leverage PostgreSQL CDC for various analytical and operational use cases.

Proxy-Based Connectivity for Snowflake Sink Connector

We’ve implemented a new feature that enables the fully managed Snowflake sink connector to establish a connection with the destination Snowflake instance via a proxy configuration, as recommended by Snowflake.

This implementation deviates from the self-managed approach and more closely resembles the HTTP connector proxy implementation, offering enhanced security and flexibility for users.

Oracle XStream CDC Source Connector Enhancements

The new Oracle XStream CDC source connector has now been enhanced to support large objects, including BLOB, CLOB, and NCLOB. A key aspect of this implementation is that these objects will be processed identically to other records and directed to the same table topic, eliminating the need for a separate topic (unlike the previous Oracle LogMiner connector). Users can manage column inclusion in change event messages through the “column.include.list” and “column.exclude.list” configuration properties.
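For illustration, the following sketch shows how the column filtering properties mentioned above might be set. The property names are the ones named in this post; the connector class, table, and column identifiers are placeholders.

```python
# Illustrative Oracle XStream CDC source config with column filtering.
# Identifiers are placeholders; LOB columns that pass the filter flow into
# the same per-table topic as every other column.
oracle_xstream_config = {
    "name": "oracle-xstream-cdc-source",
    "connector.class": "OracleXStreamCdcSource",   # placeholder class name
    "table.include.list": "SALES.ORDERS",
    # Fully qualified column names to capture; CLOB/BLOB/NCLOB columns can be
    # included just like scalar columns.
    "column.include.list": "SALES\\.ORDERS\\.(ORDER_ID|CUSTOMER_ID|ORDER_NOTES_CLOB)",
    # Alternatively, exclude specific columns instead:
    # "column.exclude.list": "SALES\\.ORDERS\\.LEGACY_BLOB",
}
```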

The Oracle XStream connector necessitates several database configurations as prerequisites. To streamline setup and reduce interaction between database administrators and application teams, a validation script is being released to assist customers in verifying the correct configuration of these prerequisites. Furthermore, the new Oracle XStream connector will incorporate CSFLE, ensuring continuous data security.

AWS IAM AssumeRole for MySQL CDC Source V2 and Amazon DynamoDB Sink Connectors

Confluent Cloud's MySQL CDC source V2 and Amazon DynamoDB sink connectors now support AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to MySQL and Amazon DynamoDB.

This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for Amazon Web Services (AWS) users, offering a more secure and efficient way to integrate these systems with Confluent Cloud for real-time data pipelines.

Providing Alternatives for Freeform Queries With JDBC Connectors

The direct use of freeform queries within fully managed connectors introduces a significant security vulnerability, primarily due to the risk of SQL injection attacks. To mitigate this risk and achieve similar functionality, we strongly recommend using views. 

This approach enhances security by abstracting the underlying data and restricting direct manipulation of the database schema. Comprehensive guidance on implementing this recommended approach can be found in this dedicated knowledge base article.
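As a hedged sketch of the pattern, the example below encapsulates the query logic in a database view and points the JDBC source connector at that view like any other table. The property names follow the common JDBC source connector settings and may differ slightly for the fully managed connector; the view definition and identifiers are placeholders.

```python
# Hedged sketch of the recommended view-based alternative to freeform queries.

# 1) In the database (DBA-managed), something like:
#    CREATE VIEW analytics.recent_orders AS
#    SELECT order_id, customer_id, total, updated_at
#    FROM sales.orders WHERE status <> 'archived';

# 2) The connector config then references the view instead of a raw query:
jdbc_source_config = {
    "name": "postgres-recent-orders-source",
    "connector.class": "PostgresSource",            # placeholder class name
    "table.whitelist": "analytics.recent_orders",   # the view, not a freeform query
    "mode": "timestamp",
    "timestamp.column.name": "updated_at",
    "topic.prefix": "pg.",
}
```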

Enabling Robust Control and Secure Connectivity for SaaS and Application Connectors

Support for Aliases in AWS Lambda Sink Connector

The AWS Lambda sink connector now offers support for both function versions and aliases, allowing customers to direct records to designated Lambda versions.

This enhancement improves flexibility and control during deployments, facilitating seamless version rollouts without service interruption. The use of aliases decouples connector configuration from specific function versions, thereby promoting safer, more dynamic, and more manageable updates.

HTTP Source/Sink V2 Enhancements

HTTP source V2 now supports API cursor pagination with timestamp mode, allowing users to configure APIs that support cursor pagination to retrieve data between a start time (an initial timestamp specified by the customer and subsequent timestamps constructed using the response JSON pointer field) and an optional end time (based on the specified interval).

Additionally, HTTP source/sink V2 now supports API key authentication. These features are available only via API, CLI, or Terraform.

Salesforce Source and Sink Connector Enhancements

The Salesforce connector suite now supports the OAuth Client Credentials grant flow, enhancing connection security. 

Specifically, the Salesforce sObject sink connector has been upgraded to support up to five sObjects, leading to an ~80% cost reduction for clients. Its new batching capability improves connector performance and minimizes API calls, thus preventing frequent API limit breaches. Furthermore, it now manages tombstone records, offering options to ignore them, delete the corresponding records from Salesforce, or fail the connector.

Also, the Salesforce Bulk API source connector now includes support for fetching deleted records, which facilitates data integrity and synchronization across systems. The Salesforce Bulk API V2 source connector can also be configured with extended polling intervals, ranging from five minutes to three hours, enabling clients to tailor connector settings to their needs while adhering to API limitations.

Security and Semantics Enhancements for Messaging System Connectors

Self-Managed IBM MQ Source and Sink Connector Enhancements

IBM MQ connectors have been augmented with enhanced security and authentication capabilities. IBM MQ security exits enable custom authentication and authorization logic during connection establishment, accommodating advanced scenarios such as OAuth2 token injection, personalized certificate validation, and enterprise SSO integration. The Custom Credentials Provider Interface offers flexible, dynamic credential management, allowing credentials to be retrieved at runtime from external repositories.

Furthermore, the IBM MQ source connector now supports exactly-once semantics. The connector processes each record precisely once, even in the event of failures or restarts. It uses the state topic to monitor the progress of processed records, enabling it to resume from the last processed record following a failure. It’s important to note that the connector doesn’t support the execution of multiple tasks when the exactly-once settings are activated for the connector. For additional details, refer to this documentation.
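For a self-managed deployment, enabling exactly-once roughly looks like the sketch below. The `exactly.once.support` connector property and the worker-level `exactly.once.source.support` setting come from Kafka Connect's exactly-once source support; the IBM MQ connector class and connection properties are placeholders, so treat the connector documentation as authoritative.

```python
# Hedged sketch: enabling exactly-once for a self-managed IBM MQ source
# connector via the Kafka Connect REST API. The Connect worker must already
# be running with "exactly.once.source.support=enabled" in its worker config.
# IBM MQ-specific properties below are placeholders.
import requests

CONNECT_URL = "http://localhost:8083"

ibmmq_source_config = {
    "connector.class": "io.confluent.connect.ibm.mq.IbmMQSourceConnector",  # assumed class
    "kafka.topic": "mq-orders",
    "mq.hostname": "mq.example.com",   # placeholder connection details
    "mq.queue": "ORDERS.QUEUE",
    # Require exactly-once delivery for this connector.
    "exactly.once.support": "required",
    # Multiple tasks aren't supported when exactly-once is enabled.
    "tasks.max": "1",
}

resp = requests.put(
    f"{CONNECT_URL}/connectors/ibmmq-orders-source/config",
    json=ibmmq_source_config,
    timeout=30,
)
resp.raise_for_status()
```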

AWS IAM AssumeRole for Amazon Kinesis and Amazon SQS Source Connectors

Confluent Cloud's Amazon Kinesis and Amazon SQS source connectors now support AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to Kinesis and SQS data streams. This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate Kinesis and SQS with Confluent Cloud for real-time data pipelines. Refer to this documentation for more details on setup.
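To show the IAM side of AssumeRole, here is an assumption-labeled sketch of a role trust policy that lets Confluent's principal assume a customer-owned role in place of long-lived access keys. The trust policy structure is standard AWS; the principal ARN, external ID, and the connector property that references the role are placeholders to be replaced with the values from the connector's setup documentation.

```python
# Assumption-labeled sketch of the customer-side IAM trust policy used with
# AssumeRole. Replace the principal and external ID with the values provided
# during the Confluent Cloud connector/provider-integration setup.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Hypothetical principal -- use the one given by Confluent Cloud.
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/confluent-connect"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "<external-id-from-confluent>"}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
# The Kinesis/SQS connector config would then reference the customer role ARN
# (exact property name per the connector docs), e.g.:
# "aws.iam.role.arn": "arn:aws:iam::444455556666:role/kinesis-read-for-confluent"
```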

Enhancing File Storage Connector Capacity and Efficiency

Amazon S3 Source Connector Improvements

The Amazon S3 source connector's capacity for indexed objects within a bucket has been significantly expanded from 100,000 to 10,000,000. This enhancement facilitates the efficient management of extensive directories.

Confluent recommends the creation of additional top-level folders to accelerate file processing and bolster long-term scalability. Furthermore, the connector now incorporates support for embedded schemas, thereby enabling seamless conversion from JSON (with embedded schemas) to Apache Parquet™ or Apache Avro™ formats.

Field Partitioner Support for Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage Sink Connectors

With field partitioner support, data is partitioned based on the value of a specified field, resulting in Amazon S3, Azure Blob Storage, or Google Cloud Storage object paths that incorporate the field's name and value. A maximum of five field names can be specified as a comma-separated list.
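The sketch below illustrates the idea for an S3 sink. The partitioner class and property name follow the Confluent storage sink conventions (`FieldPartitioner`, `partition.field.name`); the connector class and field names are placeholders, and the exact keys for the fully managed connectors should be confirmed in their documentation.

```python
# Illustrative S3 sink config using the field partitioner. Property names
# follow the Confluent storage sink conventions; confirm the exact keys for
# the fully managed connector before use.
s3_sink_config = {
    "name": "orders-s3-sink",
    "connector.class": "S3_SINK",   # placeholder class name
    "topics": "orders",
    "partitioner.class": "io.confluent.connect.storage.partitioner.FieldPartitioner",
    # Up to five fields, comma-separated; object paths then look like
    # .../country=US/channel=web/...
    "partition.field.name": "country,channel",
}
```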

Additionally, the task is restricted to a single partition when using the field partitioner. For further details, refer to Amazon S3, Azure Blob Storage, and Google Cloud Storage sink connector documentation.

Monitoring and Observability Connector Enhancements

Security Enhancements for Amazon CloudWatch Connector

Confluent Cloud's Amazon CloudWatch Logs source connector now supports AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to CloudWatch Logs data.

This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate CloudWatch logs with Confluent Cloud for real-time data pipelines.

Analytics and Warehouse Connector Enhancements

Security Enhancements for Amazon Redshift Sink Connector

Confluent Cloud's Amazon Redshift sink connector now supports AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to Redshift data streams.

This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate Redshift with Confluent Cloud for real-time data pipelines.

Additional Updates

End of Support for Connector Versions Unsupported in Confluent Platform 7.8+

The support policy for self-managed connector versions became effective on July 6, 2025. As a result, all connector versions preceding the minimum version specified on the Supported Connector Versions till Confluent Platform 7.8 page no longer receive Confluent support. Furthermore, as of July 21, 2025, customers are no longer able to download unsupported connector versions from Confluent Hub.

Find and Configure Your Next Integration on Confluent Cloud

Are you building an application that needs real-time data? Get started here:

Interested in joining the Connect with Confluent (CwC) program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.


Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

  • Yashwanth Dasari is a Sr. Product Marketing Manager at Confluent responsible for positioning, messaging and GTM strategy of Confluent Cloud Stream & Connect, Tableflow and WarpStream. Prior to joining Confluent, Yashwanth was a management consultant at BCG advising F500 clients in technology, marketing and corporate strategy sectors. He also worked as a software engineer at Optum and SAP Labs.
