Streaming your data with Apache Kafka®, at its core, involves moving data from one point to another in real time, much like a river flows from its source to its destination. However, beneath this seemingly straightforward goal lies significant complexity and hidden costs. The multitude of available deployment options, hosted and managed Kafka services, and design choices makes it difficult to navigate the data streaming landscape.
Teams have to balance their security needs, budget limitations, and bandwidth—especially when considering the benefits and risks of Kafka migrations. Moreover, the impending end of support for Apache ZooKeeper™️ is making this discussion more relevant than ever in the Kafka community. In this blog post, we’ll dive into the impact of ZooKeeper end of support, new open source migration tooling from Confluent, and why there’s never been a better time to migrate to Confluent Cloud.
Ready to learn how Confluent Cloud can ease your migration from ZooKeeper to KRaft?
ZooKeeper has underpinned Kafka’s control plane for years. As of Apache Kafka 4.0, ZooKeeper is no longer supported, and metadata management duties have shifted to KRaft. Apache Kafka 3.9.0, the last ZooKeeper-compatible version, was released on November 7, 2024, and the community usually provides about a year of support for minor releases.
What does this mean for Kafka users who want to keep their data streaming processes up to date? Time is running out.
End of support doesn’t just mean that the open source community has moved on to new versions; it could also mean that critical security vulnerabilities are left without fixes. While Confluent Cloud users have already been using KRaft for the past couple of quarters with automatic upgrades by Confluent, other hosted Kafka service providers have introduced more difficulty for users by requiring a full migration to a new Kafka cluster just to update. If you’re in this boat, then this deadline is an urgent call to reexamine your long-term data streaming strategy.
Moving from hosted Kafka to Confluent Cloud is a strategic decision that goes beyond a simple migration. It’s a move toward a complete, cloud-native data streaming platform that cuts Kafka costs by 40%–70%, eliminates the operational burden of managing Kafka clusters, and offers an industry-leading 99.99% uptime service level agreement (SLA).
Hosted Kafka services often leave your teams managing the hardest parts of Kafka, resulting in overprovisioned clusters, unpredictable costs, and operational toil. Now, you have an opportunity to make this your last migration ever.
Historically, migrating Kafka workloads required meticulous planning, time-consuming analysis, courage, and many late nights. Mapping dependencies, managing consumer offsets, building the infrastructure, and orchestrating the move could stretch on for weeks. Today, that landscape has changed dramatically. With Kafka Copy Paste (KCP), an open source tool from Confluent, users can now develop migration plans in minutes.
There are other tools out there, including MSK Replicator, that help streamline migrations, but KCP focuses specifically on making it easy to unlock the benefits of Confluent Cloud’s fully managed, complete data streaming platform. With just a few commands, KCP can scan your existing Kafka resources, create reports for migration planning and cost analysis, and generate the entire migration infrastructure for you. This simplifies the migration process into four steps:
Discover and Plan - Use the kcp discover and kcp scan commands, along with the built-in user interface (UI), to gain insights into your existing infrastructure and plan your migration.
Provision - Use the kcp create-asset command to automatically generate Terraform infrastructure-as-code and provision your target Confluent Cloud environment in minutes.
Migrate Data - Use KCP commands to migrate not only your topic data but your access control lists (ACLs), schemas, and connectors too.
Migrate Clients - Perform any necessary configuration changes and then restart your client applications with a connection to your new target cluster (see the sketch below).
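To make that final client-migration step concrete, here’s a minimal cutover sketch using the confluent-kafka Python client. The bootstrap endpoint, API key, and topic name are placeholders, and your exact settings will depend on how your Confluent Cloud cluster is networked and secured:

```python
# Minimal client-cutover sketch using the confluent-kafka Python library.
# The endpoint and credentials below are placeholders; substitute the
# bootstrap server and API key for your own Confluent Cloud cluster.
from confluent_kafka import Producer

config = {
    # Point the client at the new Confluent Cloud cluster instead of
    # the old hosted Kafka brokers.
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    # Confluent Cloud authenticates clients over SASL_SSL with an
    # API key/secret pair.
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
}

producer = Producer(config)
producer.produce("orders", key="order-123", value="shipped")
producer.flush()  # Block until all queued messages are delivered.
```

In most cases, the cutover is just a configuration change: the producing and consuming code stays the same, and only the connection properties point to the new cluster.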
At its core, KCP uses Cluster Linking to mirror topics byte for byte from your source cluster to your target cluster and to preserve consumer offsets, so you don’t have to worry about duplicate messages or data loss during your migration. We’ll now take a look under the hood to see how Cluster Linking modernizes and simplifies migrations, global replication, and hybrid cloud architectures.
Cluster Linking is a Kafka-native solution that changes the game for teams looking to migrate from hosted Kafka environments to Confluent Cloud, and it serves as a critical tool to quickly and safely perform your ZooKeeper to KRaft migration. Rather than relying on brittle connectors, custom scripts, or complex batch jobs, Cluster Linking offers a secure, efficient, and automated method to move data continuously and reliably.
At its core, Cluster Linking establishes a direct and encrypted pathway between source and destination Kafka clusters. For migrations, the data flow is typically one-way from the source cluster to the target cluster. The process for migrating via Cluster Linking is as follows:
Administrators authenticate both clusters using Confluent’s intuitive interfaces (UI, CLI, or API).
Selected Kafka topics from the source environment are mirrored in Confluent Cloud, preserving crucial properties like partitioning, ordering, and offsets (see the consumer cutover sketch after this list).
As new records are produced in the source, Cluster Linking automatically propagates them to Confluent Cloud, with no batching delays or intermediary storage.
Robust error handling ensures that if connectivity is interrupted, replication resumes without data loss or manual intervention.
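Because committed consumer offsets are mirrored along with topic data, a consumer group can be cut over by pointing it at the destination cluster with its existing group ID; no offset-translation logic is needed in the client. Here’s a hedged sketch with the confluent-kafka Python client, where the endpoint, credentials, group ID, and topic name are all illustrative:

```python
# Cutover sketch: after Cluster Linking has mirrored the topic and its
# committed offsets, the same consumer group resumes on the destination
# cluster from where it left off on the source. Endpoint, credentials,
# group ID, and topic name are illustrative placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
    # Reusing the same group ID lets the group pick up from the
    # mirrored committed offsets, with no manual offset translation.
    "group.id": "billing-service",
    "auto.offset.reset": "earliest",
})

consumer.subscribe(["orders"])
while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        raise RuntimeError(msg.error())
    print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()}")
```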
By automating topic discovery, configuration replication, and migration monitoring, Cluster Linking minimizes risks and dramatically shortens migration timelines. The new External Cluster Linking over AWS Private Link capability streamlines the migration infrastructure even further and will soon be included in the KCP tooling.
Cluster Linking is far more than a one-time migration tool. Its architecture is purpose-built for ongoing data unification across global and hybrid environments. Enterprises can synchronize data in real time between on-premises datacenters and multiple cloud regions, supporting high availability, disaster recovery, data sharing, and regulatory compliance. Hybrid architectures become practical and cost-effective, as teams gain a single source of streaming truth regardless of geography. Mission-critical apps can cut over to Confluent Cloud incrementally, using Cluster Linking to maintain data consistency throughout the transition.
Integrated security features, including ACL enforcement, audit logging, and schema registry compatibility, ensure that unifying distributed Kafka environments doesn’t come at the expense of governance or data integrity.
Make your KRaft migration count with a move to Confluent Cloud.
Hosted Kafka services often require overprovisioning to handle peak loads, and networking fees alone can account for 80%–90% of your total infrastructure costs. Confluent Cloud offers a fundamentally better approach. Here’s how:
Powered by Kora, our cloud-native Kafka engine, Confluent Cloud offers serverless, elastic autoscaling clusters that save more than 50% on infrastructure costs by eliminating overprovisioning.
Private Network Interface (PNI) provides secure low-cost private networking that can reduce networking costs by up to 50%.
Freight clusters are serverless clusters designed for high-throughput use cases with relaxed latency requirements, such as logging, telemetry, and feeding batch analytics pipelines, and they can reduce infrastructure costs by up to 90%.
Our customers’ results bear out these savings: Indeed adopted PNI to save more than 50% on networking costs, SecurityScorecard saw more than $1 million in savings after migrating to Confluent Cloud, and Citizens Bank cut its IT costs by $1.2 million upon moving to Confluent Cloud while improving data processing speeds by 50%. Customers have benefited from the Confluent Cloud Price Guarantee and achieved 40%–70% savings on Kafka costs by switching from hosted Kafka services to Confluent Cloud.
While Kafka services hosted by cloud service providers are available only in a single cloud ecosystem, Confluent Cloud provides true deployment flexibility across Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
Features such as Cluster Linking enable a persistent data bridge to sync data between on-premises, hybrid, and multicloud environments in real time. Our financially backed 99.99% uptime SLA covers the entire service, including the underlying Kafka software, minimizing your risk of downtime, data loss, and business impact. This is a critical advantage over other hosted Kafka services that explicitly exclude failures related to Kafka from their SLAs, placing the burden of resolving critical issues on the customer. BigCommerce, for example, relies on Confluent Cloud on Google Cloud to handle traffic spikes, particularly during peak events like Black Friday.
Confluent Cloud is not just Kafka. It’s a complete, enterprise-ready data streaming platform that accelerates development with continuous innovation. We eliminate the need to assemble a complete platform from fragmented hosted Kafka components.
Confluent Cloud provides a rich ecosystem out of the box: more than 120 pre-built connectors, including 80+ fully managed ones, for seamless data integration; integrated governance; serverless Apache Flink®; and Tableflow. Tableflow is a zero-ETL feature that easily converts Kafka topics into Apache Iceberg™️ or Delta Lake tables to feed any data warehouse or analytics engine. And our fully managed governance suite, which includes Stream Catalog, Stream Lineage, and data quality rules, ensures that you can build faster and more securely.
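To illustrate the kind of downstream access Tableflow enables, here’s a hypothetical sketch that queries a topic-backed Iceberg table with the pyiceberg library. The catalog URI, token, and table name are placeholders, and the real connection details would come from your Tableflow configuration:

```python
# Hypothetical sketch: querying a Kafka topic that Tableflow has
# materialized as an Apache Iceberg table, using the pyiceberg library.
# The catalog URI, credentials, and table name are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "tableflow",
    **{
        "uri": "https://<iceberg-rest-endpoint>",  # REST catalog endpoint
        "token": "<BEARER_TOKEN>",                 # auth token placeholder
    },
)

# Load the table that mirrors the "orders" topic and scan it into pandas.
table = catalog.load_table("prod.orders")
df = table.scan(row_filter="status = 'shipped'").to_pandas()
print(df.head())
```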
Now, Streaming Agents combines the power of our real-time data streaming platform with event-driven artificial intelligence (AI) workflows. This framework enables you to build intelligent, automated agents that can act on fresh, contextualized data in real time, enabling use cases like automated competitive price matching or predictive maintenance. These features further differentiate Confluent Cloud from hosted Kafka options, making your migration even more worthwhile.
Thinking about making the move to Confluent Cloud? Visit the KCP CLI page to learn how you can easily plan your migration in minutes.
Learn how to quickly plan and automate your migration using the KCP migration tool and Cluster Linking with this step-by-step workshop on migrating from hosted Kafka services to Confluent Cloud.
Apache®, Apache Kafka®, Kafka®, Apache Flink®, Flink®, Apache ZooKeeper™️, ZooKeeper™️, Apache Iceberg™️, Iceberg™️, and the Kafka logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.