Over the second half of 2025, the Connect team at Confluent has focused on enhancing the flexibility, efficiency, and overall manageability of our portfolio of Kafka connectors. By delivering improvements at every level—from core platform features to specific enhancements for our fully managed and Debezium family of connectors—we’re making it easier for you to build and scale data pipelines with Apache Kafka®.
Key platform-level innovations include AI-assisted troubleshooting to expedite issue resolution and expanded metric visibility for deeper insights. And we've developed specialized migration tooling to simplify transitions and ensure reliability. These collective advancements are designed to empower you to construct more robust, performant, and trustworthy streaming data architectures with Confluent connectors.
TL;DR: Confluent has bolstered its connectors’ security features, data compatibility capabilities, reliability, and efficiency. Learn how expanded platform support and connector-specific improvements will streamline Kafka operations, accelerate connector migrations, and offer more flexible resource management. These include Oracle XStream enhancements and Privacy-Enhanced Mail (PEM)-based authentication for MongoDB; robust proxy, upsert, and delete functionality for HTTP connectors; significant capacity and efficiency upgrades for high-volume connectors such as Salesforce Platform Events, Amazon S3, and Azure Data Lake Storage (ADLS) Gen2; and key updates to the Snowflake, BigQuery, Elasticsearch, and OpenSearch connectors.
These platform enhancements for Confluent Connect focus on improving the flexibility, observability, troubleshooting, and migration of fully managed and Debezium connectors.
We’ve initiated a multi-phase project to achieve configuration parameter parity between our managed and self-managed Debezium connectors. The initial phase has focused on exposing high-demand configuration parameters to address customer requests for a more robust and flexible experience. Notably, this includes adding the highly requested heartbeat.action.query parameter, which will allow for greater control and customization of change data capture behavior.
We’re pleased to announce the introduction of several new configuration parameters for the Debezium family of CDC connectors. This enhancement significantly increases the flexibility available to users, enabling more precise tailoring of connector behavior to meet specific database deployment and performance requirements. The successful rollout of this second phase of configuration properties ensures that users have a comprehensive set of options to optimize the setup of their Debezium connectors for various use cases. A detailed and complete reference of all newly added parameters is available in the documentation.
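To illustrate, here's a minimal sketch of how heartbeat.action.query might be set on a fully managed PostgreSQL CDC Source V2 connector. The heartbeat properties follow standard Debezium semantics; the connector name, connection details, and heartbeat table are placeholders for this example:

```json
{
  "name": "postgres-cdc-with-heartbeat",
  "connector.class": "PostgresCdcSourceV2",
  "database.hostname": "db.example.com",
  "database.dbname": "inventory",
  "heartbeat.interval.ms": "60000",
  "heartbeat.action.query": "INSERT INTO cdc_heartbeat (last_heartbeat) VALUES (now())"
}
```

Pairing heartbeat.action.query with a heartbeat interval forces periodic writes on low-traffic databases, which helps keep the replication slot from growing unbounded between real changes.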
We’ve launched new UI support for custom single message transforms (SMTs) on AWS Private Networking setups. This feature empowers customers to seamlessly integrate their proprietary SMTs with Confluent Cloud's fully managed connectors. Users can now use the Connect UI to upload, manage, and provision connectors with their custom transformations, streamlining the development and deployment of sophisticated data pipelines.
We’ve now added topics.regex support for our fully managed sink connectors. This enhancement provides greater flexibility by allowing connectors to consume data from topics matching a specified regular expression pattern rather than requiring a predetermined, static list of topics.
Consult the individual connector documentation for a comprehensive overview of all supported properties. Note that this functionality is currently accessible exclusively through the API and Confluent CLI.
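As a rough example, assuming a fully managed S3 Sink connector (the connector class, auth mode, and topic prefix are illustrative), a configuration file passed to `confluent connect cluster create --config-file <file>` could select every matching topic with topics.regex instead of a static topics list:

```json
{
  "name": "s3-sink-orders",
  "connector.class": "S3_SINK",
  "topics.regex": "orders\\..*",
  "input.data.format": "JSON",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "tasks.max": "1"
}
```

Topics created later that match the pattern are picked up automatically, so the connector configuration doesn't need updating every time a new matching topic appears.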
We’re continuously expanding the functionality of our fully managed connectors by integrating new SMTs. Recently, we added support for several powerful SMTs sourced from the Kafka Connect Common Transforms repository. These additions include FromXML for IBM MQ Source and HTTP V2 Source connectors, ChangeTopicCase for JDBC Source, CDC Source, and MongoDB Source connectors, and ExtractTimeStamp for HTTP V2 Sink, MongoDB Sink, and JDBC Sink connectors, enabling greater data manipulation flexibility. We plan to release further SMTs by December, such as AdjustPrecisionAndScale, SetMaximumPrecision, BytesToString, ExtractNestedField, TimestampNow, TimestampNowField, and ExtractXPath, which will provide even more granular control over data transformation pipelines.
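For reference, fully managed connectors apply these SMTs with the standard transforms syntax. The fragment below sketches how ChangeTopicCase might be attached to a JDBC Source connector; the fully qualified class name and the case-format values are assumptions based on the community transform library, so check the connector documentation for the exact supported values:

```json
{
  "transforms": "renameTopic",
  "transforms.renameTopic.type": "com.github.jcustenborder.kafka.connect.transform.common.ChangeTopicCase",
  "transforms.renameTopic.from": "UPPER_UNDERSCORE",
  "transforms.renameTopic.to": "LOWER_UNDERSCORE"
}
```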
Custom SMTs now feature enhanced performance and more robust error handling during artifact uploads. Furthermore, we’ve significantly increased the supported message size to 20 MB, up from the previous default of 4 MB, enabling users to process larger messages with greater ease. Availability on Azure is coming soon.
Currently, the consumer_lag_offset metric is exposed, which indicates the discrepancy between the most recent available offset and the last committed offset. Although this metric is helpful, its value is affected by the commit interval and may not accurately represent the connector's true real-time processing status.
By making records_lag_max available, customers will gain immediate visibility into the maximum lag across all partitions, thereby simplifying the identification of real-time delays in record consumption.
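Once records_lag_max is exposed, it should be queryable through the Confluent Cloud Metrics API like other connector metrics. The query body below is a sketch under that assumption; the metric identifier and connector ID are illustrative, not confirmed names:

```json
{
  "aggregations": [
    { "metric": "io.confluent.kafka.connect/records_lag_max" }
  ],
  "filter": { "field": "resource.connector.id", "op": "EQ", "value": "lcc-abc123" },
  "granularity": "PT1M",
  "intervals": ["2025-11-01T00:00:00Z/PT1H"]
}
```

Posting a body like this to the Metrics API query endpoint would return per-minute maxima, making consumption delays visible without waiting for an offset commit.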
The AI-assisted troubleshooting feature for fully managed connectors provides clear, automatically generated summaries of connector issues directly within the Confluent Cloud UI. These summaries are produced through the analysis of connector logs and metadata, allowing users to quickly understand issues without reviewing raw logs.
In future releases, users will be able to generate fix recommendations with a single click in the Confluent Cloud UI. This is designed to enable Connect users to resolve issues autonomously, thereby enhancing efficiency and minimizing the necessity for contacting support.
The Connect Migration Utility aids in discovering, translating, and migrating self-managed connectors to fully managed ones. It also provides self-managed setup details (e.g., worker nodes, tasks, Confluent Platform connector packs) to simplify total cost of ownership (TCO) calculations for sales. A new config/translate API accepts self-managed connector configuration and returns the fully managed configuration, including warnings/errors, which allows users to validate settings before creation.
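The exact request and response schema for the config/translate API is best taken from the utility's documentation; purely as an illustration of the workflow, the exchange amounts to submitting a self-managed configuration and receiving the fully managed equivalent plus warnings:

```json
{
  "request": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "orders",
    "s3.bucket.name": "my-bucket",
    "s3.credentials.provider.class": "com.example.CustomCredentialsProvider"
  },
  "response": {
    "config": {
      "connector.class": "S3_SINK",
      "topics": "orders",
      "s3.bucket.name": "my-bucket"
    },
    "warnings": ["s3.credentials.provider.class has no fully managed equivalent; configure authentication through the connector's credential settings instead"]
  }
}
```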
Custom connectors now support configuration on the Apache Kafka 3.9 Kafka Connect runtime, a necessary migration for users as the older AK 3.6 Kafka Connect runtime is scheduled for EoL on April 1, 2026. Customers are strongly encouraged to manually migrate to the AK 3.9 Kafka Connect runtime after thorough testing to ensure proper connector function before this deadline, as the system will enforce an automatic upgrade afterward, and customers will receive monthly email reminders leading up to the EoL date. Support for the AK 4.1 Kafka Connect runtime is tentatively planned for early 2026.
These individual connector enhancements focus on improving security, flexibility, and operational efficiency across database, software-as-a-service (SaaS) and application, file storage, and analytics and warehouse connectors, as summarized in the table below.
Database Connectors | Oracle XStream CDC Source: support for Oracle Exadata systems and Amazon RDS; MongoDB: PEM-based authentication
SaaS and Application Connectors | HTTP Source/Sink V2: support for proxy, upsert and delete, and chaining offset with timestamp mode; Salesforce Platform Events Source and Sink: support for five platform events and lowercase config in topic names
File Storage Connectors | HDFS 3 Sink: configurable retry mechanism; ADLS Gen2 Sink: WORM storage support; Amazon S3 Sink: more exposed configurations
Analytics and Warehouse Connectors | Snowflake: V3.1 upgrade and additional configuration options; BigQuery V2: new migration script; Elasticsearch Sink: support for index aliases and external resource usage; OpenSearch Sink: upsert support
Confluent is significantly expanding the platform support and security features of the Oracle XStream CDC Source connector, underscoring our commitment to robust and flexible data integration. A key enhancement is the enablement of the connector on Exadata systems, encompassing both Oracle Cloud Infrastructure (OCI) and other deployment types. We’re also introducing essential security qualifications, including support for transparent data encryption (TDE). This ensures that organizations using TDE on their source Oracle databases can seamlessly integrate their data without any required alterations to the connector's existing functionality, maintaining the highest levels of data security from the source. Additionally, support is being extended to Amazon RDS for Oracle, provided that specific configuration criteria—database version 19c, a non-multitenant deployment, and a non-custom Oracle deployment—are met.
To further bolster security and connectivity options, the Oracle XStream CDC connector now supports both one-way and two-way TLS with the option to store necessary certificates in a client wallet. One-way TLS verifies the server's certificate for an encrypted connection, with a wallet required for self-signed certificates. Two-way TLS mandates verification of both client and server certificates, which must both be present in the client wallet, offering a comprehensive level of mutual authentication and security for the data channel.
Finally, to address evolving operational needs, the connector introduces support for ad hoc snapshots. This feature is particularly valuable for scenarios such as adding new tables to an already streaming connector, performing a snapshot of only a subset of data, and backfilling missing data in topics. Users can initiate these ad hoc, blocking snapshots, which pause streaming until the data snapshot is complete, at which point streaming automatically resumes. Ad hoc snapshots require a signaling mechanism using either a signaling table within the source database or a dedicated signaling topic for the connector, which ensures precise control over the data ingestion process. See the documentation for more details.
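Because the connector follows Debezium's signaling conventions, an ad hoc blocking snapshot request sent through the signaling topic would look roughly like the message below (the record key must match the connector's topic prefix, and the table names are placeholders); treat this as a sketch and confirm the exact format in the documentation:

```json
{
  "type": "execute-snapshot",
  "data": {
    "data-collections": ["SALES.ORDERS", "SALES.CUSTOMERS"],
    "type": "blocking"
  }
}
```

When the connector reads this signal, it pauses streaming, snapshots only the listed tables, and then resumes streaming from where it left off.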
We’re extending the configuration capabilities of the fully managed MongoDB Atlas Source and Sink connectors on Confluent Cloud to support PEM-based X.509 client certificate authentication, which requires the client's PEM-encoded X.509 certificate chain and private key and an optional password for the client's encrypted private key. Refer to the connector documentation for more details.
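As a shape-only sketch (the property names below are hypothetical placeholders; the documentation lists the real ones), the new authentication mode amounts to supplying the certificate chain, the private key, and an optional key password alongside the usual connection settings:

```json
{
  "name": "mongodb-atlas-source-x509",
  "connector.class": "MongoDbAtlasSource",
  "authentication.method": "X.509",
  "client.pem.certificate.chain": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
  "client.pem.private.key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
  "client.pem.private.key.password": "****"
}
```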
We’re pleased to announce the addition of Azure Private Networking support for the fully managed ClickHouse Sink connector when connecting to ClickHouse Cloud. This enhancement allows customers to leverage Azure Private Link service for secure, private connectivity. During connector configuration, customers provide the Private Link service alias and, if applicable, the specific DNS name for ClickHouse Cloud. Our system automatically manages the internal routing, ensuring that all connector traffic is directed exclusively through the private interface and thereby bolstering data security and compliance. For more detailed instructions, refer to the documentation.
Addressing the networking complexities inherent in hybrid architectures, we’re excited to announce private networking support for our Couchbase Source and Sink connectors when connecting to self-managed Couchbase clusters. This enhancement significantly reduces networking friction, allowing our connectors to securely reach Couchbase deployments whether they reside in a virtual private cloud (VPC) on Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), or on-premises via established VPN or AWS Direct Connect links. By enabling this secure connectivity, Confluent empowers customers to realize a true hybrid data mesh, offloading the operational overhead of the Kafka Connect layer to Confluent while maintaining complete control over their Couchbase database infrastructures.
Fully managed HTTP Source/Sink V2 connectors now include proxy support, mirroring the functionality previously available in their self-managed counterparts. This enhancement allows users to seamlessly connect to their source or sink systems through a proxy, providing greater flexibility and security. For comprehensive details on configuring this feature, please refer to connector configuration properties.
The HTTP Sink V2 connector now supports upsert and delete modes (via API, CLI, or Terraform). Upsert mode allows configuring a POST fallback for PUT/PATCH requests when a key is missing or a specific key-value pair exists, ensuring correct HTTP verb usage. Delete mode performs a DELETE request even if the message value is null, offering more robust control over data removal.
Finally, the HTTP Source V2 connector now supports chaining offset with timestamp mode. Users can configure APIs to retrieve data between a start time (set by the customer initially and derived from the offset JSON pointer on subsequent polls) and an optional, interval-based end time, provided that the source API guarantees consistent ordering across all queries (global ordering mode). See the documentation for details.
The Salesforce Platform Events connectors have been significantly enhanced to improve efficiency, flexibility, and operational control. Users can now process up to five Platform Events with a single connector, yielding a potential cost reduction of up to 80%.
Furthermore, the source connector now supports topic names with both uppercase and lowercase characters via the kafka.topic.lowercase configuration, addressing compatibility issues with case-sensitive resources like access control lists (ACLs) while preserving backward compatibility.
Finally, the introduction of the configurable invalid.replay.id.behaviour allows teams to precisely define recovery and replay logic, choosing between reading data from the start or resuming from the latest event when an invalid or missing replay ID is encountered. Refer to the source connector and sink connector documentation for more details.
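Putting these together, a source configuration might look like the fragment below; the connector class, the comma-separated event list format, and the behaviour value shown are assumptions for illustration, so verify the exact names in the documentation:

```json
{
  "name": "sfdc-platform-events-source",
  "connector.class": "SalesforcePlatformEventSource",
  "salesforce.platform.event.name": "Order_Event__e,Shipment_Event__e,Invoice_Event__e",
  "kafka.topic.lowercase": "false",
  "invalid.replay.id.behaviour": "LATEST"
}
```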
The Google Cloud Functions (GCF) Gen 2 Sink connector has been significantly enhanced with support for custom URLs and advanced SSL configuration, allowing for greater flexibility and robust security. Users can now use a personalized URL for their GCFs, moving beyond the default format. Furthermore, to meet the highest security standards, the GCF Gen 2 Sink now incorporates mTLS/SSL security, enabling the configuration of essential parameters like keystore and truststore to enforce TLS v1.3, ensuring highly secure and encrypted data transfer. Consult the official documentation for comprehensive configuration details.
A key enhancement to the Salesforce Bulk API V2 Sink connector is the introduction of the skip.relationship.fields configuration. By setting skip.relationship.fields=false, users can now enable the connector to process relationship fields, facilitating the updating or insertion of related data within Salesforce SObjects. This feature provides greater flexibility and power for complex data synchronization scenarios. Comprehensive documentation is available for detailed guidance on leveraging this new capability.
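In practice this is a one-line change to the sink configuration, sketched below with illustrative connector class, object, and topic names:

```json
{
  "connector.class": "SalesforceBulkApiV2Sink",
  "salesforce.object": "Contact",
  "skip.relationship.fields": "false",
  "topics": "contacts"
}
```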
To significantly improve the robustness and reliability of the self-managed HDFS 3 Sink connector, a configurable retry mechanism has been introduced to prevent tasks from indefinitely hanging due to transient Connect exceptions or repeated Kerberos authentication failures. This enhancement ensures greater stability by allowing users to define the number of retries before a task ultimately fails—a configuration that’s managed through the new retry.on.kerberos.cred.errors and max.retries settings.
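A self-managed configuration enabling the new behavior might look like the fragment below; the retry settings are the ones introduced in this release, while the values and connection details are placeholders:

```json
{
  "connector.class": "io.confluent.connect.hdfs3.Hdfs3SinkConnector",
  "hdfs.url": "hdfs://namenode.example.com:8020",
  "topics": "orders",
  "retry.on.kerberos.cred.errors": "true",
  "max.retries": "10"
}
```

With a bounded retry count, a task that repeatedly fails Kerberos authentication now surfaces as a failed task rather than hanging indefinitely.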
The ADLS Gen2 Sink connector now supports write-once, read-many (WORM)-enabled containers. This enhancement addresses a scenario during connector restarts in which the connector might attempt to recreate previously initiated but empty (0-byte) files. When WORM storage is enabled, attempts to overwrite existing files are blocked by design. The connector is now engineered to safely ignore this exception and continue processing when the existing file is 0 bytes, ensuring seamless operation without failure.
Further, the connector has been significantly enhanced with support for the DefaultPartitioner. This improvement enables the connector to automatically structure data within the ADLS Gen2 environment based on the source Kafka topic partition number. Specifically, the resulting directory layout now follows a predictable pattern: <prefix>/<topic>/partition=<kafkaPartition>/<topic>+<kafkaPartition>+<startOffset>.<format>. This standardized organization is crucial for improving data accessibility, streamlining subsequent processing workflows, and ensuring consistency by aligning the ADLS Gen2 Sink connector's behavior with that of other major cloud storage sink connectors.
The fully managed Amazon S3 Sink connector has been significantly enhanced to support several key functionalities and properties previously exclusive to the self-managed S3 Sink connector. These properties include important configuration options for handling retries, such as s3.retry.backoff.ms, retry.backoff.ms, and s3.part.retries.
Additionally, the connector now offers more granular control over file organization and naming with the addition of directory.delim, file.delim, and filename.offset.zero.pad.width. Advanced features such as timestamp.extractor, format.bytearray.extension, format.bytearray.separator, and s3.http.send.expect.continue are now also available, alongside crucial support for the string converter within input.data.format. This ensures greater compatibility and flexibility for users migrating or deploying new S3 sink workloads.
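For migration planning, the newly exposed options slot into the fully managed configuration alongside the existing ones; the fragment below is a sketch, and the values shown are illustrative rather than recommended defaults:

```json
{
  "connector.class": "S3_SINK",
  "topics": "orders",
  "input.data.format": "STRING",
  "s3.part.retries": "5",
  "s3.retry.backoff.ms": "200",
  "retry.backoff.ms": "5000",
  "directory.delim": "/",
  "file.delim": "+",
  "filename.offset.zero.pad.width": "10",
  "timestamp.extractor": "Record",
  "s3.http.send.expect.continue": "true"
}
```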
The self-managed IBM MQ Sink connector has been significantly enhanced to support high availability (HA) and disaster recovery (DR) through the introduction of the mq.connection.list configuration. This new capability allows users to specify multiple IBM MQ broker endpoints in a comma-separated list (e.g., host1:port1,host2:port2,host3:port3). By configuring multiple endpoints, the connector automatically attempts to connect to each queue manager in the list until a successful connection is established. This improvement dramatically enhances resiliency, minimizes potential downtime, and ensures more robust failover handling for critical production environments using IBM MQ.
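A failover-aware sink configuration then lists every endpoint once. In the sketch below, the queue manager, channel, and destination values are placeholders, and the connector class shown is the usual self-managed IBM MQ sink class:

```json
{
  "connector.class": "io.confluent.connect.jms.IbmMqSinkConnector",
  "mq.connection.list": "mq-primary.example.com:1414,mq-dr1.example.com:1414,mq-dr2.example.com:1414",
  "mq.queue.manager": "QM1",
  "mq.channel": "DEV.APP.SVRCONN",
  "jms.destination.name": "ORDERS.QUEUE",
  "jms.destination.type": "queue",
  "topics": "orders"
}
```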
The upcoming Snowflake connector upgrade to V3.1 introduces significant enhancements and fixes, aligning with partner connector advancements. A notable feature is the expanded support for ingestion into Apache Iceberg™️ tables within Snowflake, which is configurable via the snowflake.streaming.iceberg.enabled parameter, though this option will not be enabled by default for existing connectors.
The SnowflakeConnectorPushTime metadata field will also be exposed exclusively for Snowpipe Streaming, providing a timestamp to estimate ingestion latency by indicating when a record was pushed into an ingest software development kit (SDK) buffer.
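Enabling Iceberg ingestion is then an explicit opt-in on the connector configuration. Assuming the fully managed Snowflake Sink with Snowpipe Streaming (the class and format values shown are illustrative), the relevant fragment is:

```json
{
  "connector.class": "SnowflakeSink",
  "topics": "orders",
  "input.data.format": "AVRO",
  "snowflake.ingestion.method": "SNOWPIPE_STREAMING",
  "snowflake.streaming.iceberg.enabled": "true"
}
```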
Further, we’ve enhanced the fully managed Snowflake Sink connector by exposing additional configuration options previously available only in the self-managed version. This enhancement directly supports customers seeking to transition from their self-managed connectors to the fully managed service, ensuring a smoother migration path and feature parity. For a complete list of the newly exposed configurations and more detailed information, refer to the Snowflake Sink connector documentation.
We’re pleased to announce the introduction of a new migration script designed to facilitate a smooth transition for customers upgrading from BigQuery V1 to BigQuery V2 connectors. This script streamlines the migration process by identifying key changes in the new connector, clarifying datatype processing updates within the Google APIs, and enabling the new connector to seamlessly resume operations from the previous offset of its predecessor.
The Elasticsearch Sink connector has received two significant updates to enhance flexibility and resource management. First, the introduction of support for index aliases allows customers to write to logical references instead of fixed index names. An alias can point to specific index versions or distribute writes across multiple versions, thereby decoupling the connector from direct index version references and making index management more seamless, flexible, and maintainable.
Second, the connector now supports the external.resource.usage setting. This provides users with greater control over whether data is written to an index, a data stream, or an alias. This new capability enables the use of pre-created resources, overcoming the previous limitation in which the connector auto-created resources based solely on topic names, which in turn permits more deliberate and consistent naming conventions for external resources.
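As a sketch of this second change, pointing the connector at a pre-created write alias could look like the fragment below; only external.resource.usage itself is the documented setting, while the ALIAS value and the external.resource.name property are hypothetical placeholders:

```json
{
  "connector.class": "ElasticsearchSink",
  "topics": "orders",
  "connection.url": "https://es.example.com:9200",
  "external.resource.usage": "ALIAS",
  "external.resource.name": "orders-write-alias"
}
```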
The OpenSearch Sink connector has been enhanced to support upsert functionality, allowing it to update existing documents based on their IDs or insert them if they don’t yet exist in the index. This behavior is available as an opt-in feature controlled by the index[i].write.method configuration parameter; importantly, the connector maintains its default insert-only behavior for continued backward compatibility.
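Opting in follows the connector's per-index configuration style. In the sketch below, the UPSERT value and the index-name property are assumptions shown only to illustrate the shape:

```json
{
  "connector.class": "OpenSearchSink",
  "topics": "orders",
  "index[0].name": "orders",
  "index[0].write.method": "UPSERT"
}
```

Leaving index[0].write.method unset keeps the existing insert-only behavior, so current deployments are unaffected.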
We've significantly enhanced the reliability of our connectors, particularly during platform rebalance activities. Previously, connectors undergoing long-running operations like snapshotting were susceptible to disruption when worker nodes and tasks were rebalanced due to resource usage, forcing these operations to restart. To address this, we implemented new internal checks and metrics. These ensure that connector tasks running a snapshot or other long-running transaction will now be protected from interruption during any rebalance events, guaranteeing a more seamless and reliable data integration experience. This enhancement will be rolled out across all Debezium-based CDC connectors by the end of Q4 2025, with MongoDB connector support following in Q1 2026.
In response to customer demand, we’re enhancing the Debezium connectors for MySQL, SQL Server, PostgreSQL, and MariaDB by exposing a greater number of essential metrics. These new metrics are designed to help users track snapshot progress, understand operational lag, and gain deeper insight into the streaming process.
The full list comprises more than 10 new metrics, including SnapshotCompleted, SnapshotAborted, MilliSecondsSinceLastEvent, TotalNumberOfEventsSeen, and MilliSecondsBehindSource.
Building on the recent launch of Google IAM (Identity and Access Management) and Microsoft Entra ID support for Confluent Cloud as well as an initial set of fully managed connectors, we’re pleased to announce the extension of this crucial security and management capability to additional connectors. Now available are Google IAM integrations for the Google Pub/Sub Source and Google Cloud Storage Source as well as Microsoft Entra ID support for the Microsoft SQL Server CDC Source V2 and Azure Blob Storage Sink. These additions are designed to significantly enhance security posture, streamline credential management, and improve overall operational efficiency across an even wider array of your critical connector workflows. For comprehensive integration details, consult this documentation: Managing Provider Integration for Fully Managed Connectors in Confluent Cloud.
The Amazon S3 Sink, Azure Blob Storage Sink, and GCS Sink connectors now incorporate support for the DefaultPartitioner, allowing these connectors to structure output data directories based on the source Kafka topic partition number. The resulting file path adheres to the format /<prefix>/<topic>/partition=<kafkaPartition>/<topic>+<kafkaPartition>+<startOffset>.<format>. This feature provides a predictable and organized structure for data stored in cloud object storage, leveraging the partitioning that’s inherent in Kafka topics.
To ensure continued reliability and performance, Confluent has proactively upgraded its AWS connectors to the AWS SDK for Java V2 ahead of the V1.x end of support on December 31, 2025. This comprehensive update, which includes the AWS Lambda Sink, AWS S3 sink/source, AWS CloudWatch Logs source, DynamoDB sink/source, and the upcoming Kinesis source connectors, is entirely backward-compatible and introduces no breaking changes, allowing customers to benefit from the latest AWS features without disruption.
Confluent is adjusting its licensing policy for the Oracle XStream connector by discontinuing support for the developer license type. This change is necessary to comply with the existing core-count reporting agreement with Oracle, as the perpetual nature of developer licenses adds to the overall core count that must be reported. Moving forward, only trial and enterprise licenses will be supported for the Oracle XStream connector within Confluent Platform, distinguishing this from the separate, orderable developer license SKU.
Are you building an application that needs real-time data? Get started here:
Set up fully managed connectors within private networking environments.
Check out the full library of Connect with Confluent (CwC) partner integrations to easily integrate your existing tech stack with fully managed data streams.
Visit Confluent Hub to explore our repository of 120+ pre-built source and sink connectors, including 80+ that are fully managed.
Interested in joining the CwC program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, Kafka®, Apache Iceberg™️, and Iceberg™️ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.