Taking advantage of Confluent’s pre-built connectors means you can build integration pipelines without having to write, test, and maintain integration code. And with our fully managed connectors on Confluent Cloud, you can connect to any source or sink system cost-efficiently, with zero ops or infrastructure management. This year, we've been hard at work making the benefits of fully managed connectors a reality for more of our customers.
We’ve significantly enhanced the Confluent data streaming platform and connector portfolio by:
Exposing fully managed connector logs to non-admin roles
Introducing plugin versioning for custom connectors
Expanding single message transform (SMT) support
Enabling custom topic naming for dead letter queue (DLQ) and error topics
Making key updates across specific connectors for databases, software-as-a-service (SaaS) and applications, messaging systems, analytics and data warehouses, and monitoring and observability systems
Ultimately, these improvements give you enhanced control, robust data guarantees, an improved user experience, and less operational burden when building integration pipelines on Confluent Cloud. They directly facilitate migrations from self-managed to fully managed connectors, offering greater control and reliability for consistent performance and data integrity when transitioning pipelines to the fully managed Confluent Connect environment.
This blog post covers both the platform-level improvements that are applicable to fully managed, self-managed, and custom connector suites as well as individual connector enhancements.
We’ve made platform-level improvements to help Confluent Cloud customers seamlessly transition from using self-managed to fully managed connectors.
These changes apply across all connectors and make it easier for your team to independently manage connectors, upgrade custom connectors, and leverage SMTs as well as DLQ, success, and error topics.
We’re pleased to announce that non-admin roles in Confluent Cloud, specifically ResourceOwner and ConnectManager, now have access to connector logs.
This enhancement addresses previous limitations to role-based access control (RBAC) for connectors, in which these roles could create and manage connectors but lacked the ability to access logs for comprehensive end-to-end management and troubleshooting. This update empowers users in these roles to independently manage and diagnose their connectors.
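For example, here's a minimal sketch of granting a user the ResourceOwner role on a single connector so they can view its logs; the IDs and resource names are placeholders, and the exact flags may vary by Confluent CLI version:

```bash
# Grant ResourceOwner on one connector (placeholder IDs); this role now also
# covers access to that connector's logs for troubleshooting.
confluent iam rbac role-binding create \
  --principal User:u-ab1234 \
  --role ResourceOwner \
  --environment env-x1y2z3 \
  --cloud-cluster lkc-abc123 \
  --resource Connector:my-s3-sink
```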
Previously, upgrading connectors was a disruptive process—requiring users to upload a new plugin and create a new connector—which unfortunately meant losing the last offset of the former connector. This is no longer the case!
We're excited to announce that custom connectors now offer plugin versioning. This powerful new feature, currently accessible via API, CLI, or Terraform (with user interface [UI] support coming soon), empowers users to seamlessly direct their active connectors to an updated plugin version and resume from the very last offset. This ensures a much smoother and more efficient upgrade experience.
This functionality is exclusively available through our new Custom Connect Plugin Management (CCPM) APIs, which establish custom connect plugin resources at the environment level. This aligns with other custom resources such as Apache Flink® user-defined functions and custom SMTs, providing a consistent and robust management experience. Please note that the legacy APIs are slated for deprecation.
We've now rolled out several SMT enhancements to our fully managed connectors. This includes expanded support for “DropHeaders,” “InsertHeader,” and “HeaderFrom” as well as extending “ValueToKey” SMT to Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, and SFTP source connectors.
We've also introduced “HeaderToValue” as a Confluent-supported SMT and added Confluent's “Flatten” SMT for flattening nested data structures. Additionally, we've integrated support for the “Filter” SMT into the Amazon S3, Microsoft Azure Blob Storage, Amazon Kinesis, and Google Cloud Storage source connectors.
These SMT enhancements offer users greater flexibility and control over data transformations within fully managed connectors. By supporting a wider range of SMTs—including those for header manipulation, key transformation, and data flattening—users can more precisely tailor their data before it's delivered to its destination, ensuring data quality and compatibility with downstream systems.
For more details on all available SMTs, refer to the Kafka Connect Single Message Transform for Confluent Cloud.
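As an illustration, a connector configuration might chain a couple of these SMTs like the hedged snippet below (the header and field names are placeholders):

```properties
# Drop an internal header and promote a value field into the record key
transforms=dropTraceHeader,promoteKey
transforms.dropTraceHeader.type=org.apache.kafka.connect.transforms.DropHeaders
transforms.dropTraceHeader.headers=x-internal-trace-id
transforms.promoteKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.promoteKey.fields=order_id
```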
Custom SMTs let customers bring their own single message transforms to Confluent Cloud and use them alongside fully managed connectors on AWS within private networking setups. Whether you’re leveraging custom in-house SMTs or third-party SMTs, they can now be used with fully managed connectors by simply uploading the ZIP/JAR file containing the SMT(s) to Confluent Cloud.
For more details and to get started with Custom SMTs, refer to Configure Custom SMT for Kafka Connectors in Confluent Cloud.
We've made improvements to how DLQ, success, and error topics are named. Previously, a predefined naming convention made it difficult for users to manage access and required a single topic per connector, creating a functional difference between our self-managed and fully managed connectors.
With this update, users can now specify a name (using an expression) for these topics during connector creation via the CLI or API. While these topics will still default to the <connector-id> suffixes, this new flexibility eliminates previous restrictions on user access to specific topics and addresses usability challenges by no longer necessitating a single topic per connector.
This brings the functionality of our fully managed connectors in line with our self-managed offerings, making your experience even more seamless. UI support for this feature will be implemented at a later stage.
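As a hedged illustration, a sink connector could point its error handling at a custom-named DLQ topic roughly as follows; the property name shown is the open source Kafka Connect one, so check the linked reference below for the fully managed equivalent and the supported naming expressions:

```properties
# Route failed records to a custom-named DLQ topic instead of the
# <connector-id>-suffixed default (property name as in open source Kafka Connect;
# the fully managed property may differ).
errors.tolerance=all
errors.deadletterqueue.topic.name=payments-pipeline-dlq
```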
For more details, refer to Common Property Configurations for Connectors.
Database Connectors | Google Cloud Spanner sink: PostgreSQL dialect; MongoDB Atlas source and sink: field truncation; PostgreSQL Change Data Capture (CDC) source V2: publications at root level; Snowflake sink: proxy-based connectivity; Oracle XStream CDC source: support for large objects, validation script, client-side field level encryption (CSFLE); MySQL CDC source V2 and Amazon DynamoDB sink: AWS Identity and Access Management (IAM) AssumeRole; JDBC connectors: alternative for freeform queries |
SaaS and Application Connectors | AWS Lambda sink: support for aliases; HTTP source/sink V2: API cursor pagination and key authentication; Salesforce source and sink: OAuth, support for five sObjects, and fetching deleted records |
Messaging System Connectors | IBM MQ source and sink: OAuth2, personalized certificate validation, single sign-on (SSO), and exactly-once semantics; Amazon Kinesis and Amazon Simple Queue Service (SQS) source: AWS IAM AssumeRole |
File Storage Connectors | Amazon S3 source and sink: expanded capacity for indexed objects; Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage sink: field partitioner support |
Monitoring and Observability Connectors | Amazon CloudWatch logs source: AWS IAM AssumeRole |
Analytics and Warehouse Connectors | Amazon Redshift sink: AWS IAM AssumeRole |
The Google Cloud Spanner sink connector, which previously supported the default GoogleSQL database dialect, has been enhanced to include support for Google Cloud Spanner's PostgreSQL dialect. This allows customers to use both GoogleSQL and PostgreSQL dialects within their Spanner database configurations when using our connector.
This enhancement provides like-for-like data type support, meaning all data types previously supported for the GoogleSQL dialect are now also supported for the PostgreSQL dialect. This feature is available starting from version 1.1.0.
The fully managed MongoDB Atlas source and sink connectors now allow for better management of field truncation when fields aren't explicitly defined within the schema.
Additionally, the “output.format.key” and “output.format.value” configurations, which control the serialization format of Apache Kafka® message keys and values, are now accessible. The “output.schema.key” configuration is also exposed, allowing clients to explicitly define the schema for Kafka message key and value documents.
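A hedged example of these newly exposed properties in a MongoDB Atlas connector configuration (the format values and schema shown are placeholders; see the connector docs for the supported options):

```properties
# Serialization format for record keys and values written to Kafka (placeholder values)
output.format.key=JSON
output.format.value=JSON
# Explicit schema for key documents (placeholder schema for illustration)
output.schema.key={"type":"record","name":"Key","fields":[{"name":"_id","type":"string"}]}
```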
The PostgreSQL CDC source V2 connector now automates the publication of change event data from partitioned PostgreSQL source tables to a single topic.
This enhancement streamlines the process for customers, reducing operational costs and improving data consistency by ensuring that all relevant data is captured in a single stream. This feature simplifies data integration, making it easier to leverage PostgreSQL CDC for various analytical and operational use cases.
We’ve implemented a new feature that enables the fully managed Snowflake sink connector to establish a connection with the destination Snowflake instance via a proxy configuration, as recommended by Snowflake.
This implementation deviates from the self-managed approach and more closely resembles the HTTP connector proxy implementation, offering enhanced security and flexibility for users.
The new Oracle XStream CDC source connector has now been enhanced to support large objects, including BLOB, CLOB, and NCLOB. A key aspect of this implementation is that these objects will be processed identically to other records and directed to the same table topic, eliminating the need for a separate topic (unlike the previous Oracle LogMiner connector). Users can manage column inclusion in change event messages through the “column.include.list” and “column.exclude.list” configuration properties.
The Oracle XStream connector necessitates several database configurations as prerequisites. To streamline setup and reduce interaction between database administrators and application teams, a validation script is being released to assist customers in verifying the correct configuration of these prerequisites. Furthermore, the new Oracle XStream connector will incorporate CSFLE, ensuring continuous data security.
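For instance, column filtering on the new connector looks roughly like this (the schema, table, and column names are placeholders):

```properties
# Capture only selected columns, including a CLOB column, from one table;
# entries are regexes on fully qualified <schema>.<table>.<column> names.
column.include.list=INVENTORY\.ORDERS\.(ORDER_ID|STATUS|NOTES_CLOB)
# Alternatively, exclude columns instead (use only one of the two properties):
# column.exclude.list=INVENTORY\.ORDERS\.INTERNAL_BLOB
```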
Confluent Cloud's MySQL CDC source V2 and Amazon DynamoDB sink connectors now support AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to MySQL and DynamoDB.
This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for Amazon Web Services (AWS) users, offering a more secure and efficient way to integrate MySQL and DynamoDB with Confluent Cloud for real-time data pipelines.
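On the AWS side, setting this up typically means creating a role whose trust policy allows Confluent to assume it; the sketch below uses a placeholder account principal and external ID, which in practice come from your Confluent Cloud provider integration setup:

```bash
# Create an assumable role for the connector (placeholder account ID and external ID)
aws iam create-role \
  --role-name confluent-connect-dynamodb \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
      "Action": "sts:AssumeRole",
      "Condition": {"StringEquals": {"sts:ExternalId": "placeholder-external-id"}}
    }]
  }'
```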
The direct use of freeform queries within fully managed connectors introduces a significant security vulnerability, primarily due to the risk of SQL injection attacks. To mitigate this risk and achieve similar functionality, we strongly recommend using views.
This approach enhances security by abstracting the underlying data and restricting direct manipulation of the database schema. Comprehensive guidance on implementing this advised method can be found in this dedicated knowledge base article.
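For example, rather than embedding a filtering query in the connector configuration, you can expose the same result set as a view and point the JDBC connector at it (the table and column names below are illustrative):

```sql
-- Expose only the rows and columns the pipeline needs, without a freeform query
CREATE VIEW recent_orders AS
SELECT order_id, customer_id, total_amount, updated_at
FROM orders
WHERE status <> 'CANCELLED';
```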
The AWS Lambda sink connector now offers support for both function versions and aliases, allowing customers to direct records to designated Lambda versions.
This enhancement improves flexibility and control during deployments, facilitating seamless version rollouts without service interruption. The use of aliases decouples connector configuration from specific function versions, thereby promoting safer, more dynamic, and more manageable updates.
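In practice, you can publish a new function version and repoint an alias while the connector keeps targeting the alias, so no connector reconfiguration is needed (the function name, alias, and version below are placeholders):

```bash
# Publish the current code as a new immutable version, then move the alias to it
aws lambda publish-version --function-name record-processor
aws lambda update-alias --function-name record-processor --name live --function-version 7
```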
HTTP source V2 now supports API cursor pagination with timestamp mode, allowing users to configure APIs that support cursor pagination to retrieve data between a start time (an initial timestamp specified by the customer and subsequent timestamps constructed using the response JSON pointer field) and an optional end time (based on the specified interval).
Additionally, HTTP source/sink V2 now supports API key authentication. These features are available only via API, CLI, or Terraform.
The Salesforce connector suite now supports the OAuth Client Credentials grant flow, enhancing connection security.
Specifically, the Salesforce sObject sink connector has been upgraded to support up to five sObjects, leading to an ~80% cost reduction for clients. Its new batching capability improves connector performance and minimizes API calls, thus preventing frequent API limit breaches. Furthermore, it now manages tombstone records, offering options to ignore, delete from the Salesforce system, or fail the connector.
Also, the Salesforce Bulk API source connector now includes support for fetching deleted records, which facilitates data integrity and synchronization across all systems. It also allows for the configuration of the Salesforce Bulk API V2 source connector with extended polling intervals, ranging from five minutes to three hours, enabling clients to tailor connector settings to their specific needs while adhering to API limitations.
IBM MQ connectors have been augmented with enhanced security and authentication functionalities. IBM MQ security exits facilitate the implementation of bespoke authentication and authorization protocols during connection establishment, thereby accommodating advanced scenarios such as OAuth2 token injection, personalized certificate validation, and enterprise SSO integration. The Custom Credentials Provider Interface offers adaptable dynamic credential management, empowering the retrieval of credentials at runtime from external repositories.
Furthermore, the IBM MQ source connector now supports exactly-once semantics. The connector processes each record precisely once, even in the event of failures or restarts. It uses the state topic to monitor the progress of processed records, enabling it to resume from the last processed record following a failure. It’s important to note that the connector doesn’t support the execution of multiple tasks when the exactly-once settings are activated for the connector. For additional details, refer to this documentation.
Confluent Cloud's Amazon Kinesis and Amazon SQS source connectors now support AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to Kinesis and SQS data streams. This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate Kinesis and SQS with Confluent Cloud for real-time data pipelines. Refer to this documentation for more details on setup.
The Amazon S3 source connector's capacity for indexed objects within a bucket has been significantly expanded from 100,000 to 10,000,000. This enhancement facilitates the efficient management of extensive directories.
Confluent recommends creating additional top-level folders to accelerate file processing and bolster long-term scalability. Furthermore, the connector now supports embedded schemas, enabling seamless conversion from JSON (with embedded schemas) to Apache Parquet™ or Apache Avro™ formats.
Data is partitioned based on the value of a specified field, resulting in Amazon S3, Azure Blob Storage, or Google Cloud Storage object paths that incorporate the field's name and value. A maximum of five field names can be specified as a comma-separated list.
Additionally, the task is restricted to a single partition when using the field partitioner. For further details, refer to Amazon S3, Azure Blob Storage, and Google Cloud Storage sink connector documentation.
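A hedged sketch of a field-partitioned sink configuration follows; the property names below match the self-managed storage sink connectors' field partitioner, and the fully managed connectors may expose them slightly differently, so treat this as illustrative:

```properties
# Partition objects by field values, producing paths like .../country=DE/region=EU/...
# (up to five comma-separated field names are supported)
partitioner.class=io.confluent.connect.storage.partitioner.FieldPartitioner
partition.field.name=country,region
```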
Confluent Cloud's Amazon CloudWatch logs source connector now supports AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to CloudWatch logs data streams.
This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate CloudWatch logs with Confluent Cloud for real-time data pipelines.
Confluent Cloud's Amazon Redshift sink connector now supports AWS IAM AssumeRole, a highly requested feature that significantly enhances security and streamlines connecting to Redshift data streams.
This eliminates the need for long-lived access keys, reducing security risks and simplifying credential management. This is a significant improvement for AWS users, offering a more secure and efficient way to integrate Redshift with Confluent Cloud for real-time data pipelines.
The support policy for self-managed connector versions became effective on July 6, 2025. As a result, all connector versions preceding the minimum version specified on the Supported Connector Versions till Confluent Platform 7.8 page no longer receive Confluent support. Furthermore, as of July 21, 2025, customers can no longer download unsupported connector versions from Confluent Hub.
Are you building an application that needs real-time data? Get started here:
Set up fully managed connectors within private networking environments.
Check out the full library of Connect with Confluent (CwC) partner integrations to easily integrate your existing tech stack with fully managed data streams.
Visit Confluent Hub to explore our repository of 120+ pre-built source and sink connectors, including 80+ that are fully managed.
Interested in joining the CwC program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.