Times are tight across technology teams and organizations everywhere. Companies are looking to optimize cloud and tech spend, and they are being incredibly thoughtful about which priorities get assigned precious engineering and operations resources. “Build vs. Buy” is being taken seriously again. And if we’re honest, this probably makes sense. There is a lot to optimize.
How did we get here? Well, tech companies led the way in the boom over the last decade. Growth was prioritized, and to support that growth, teams would expand each year – sometimes faster than the problems they were tasked to deal with. Salaries and stock compensation for top tech talent in the US grew more than 75 percent. A key aspect of this was standing up highly staffed, well-funded, in-house data and infrastructure efforts to create competitive advantage. Part of this was a necessity of the time, as in many domains, mature cloud-native services were not yet widely available.
In many ways, everyone copied Google circa 2010, a company that built its advantage around having the best data infrastructure and the highest compensation for engineers. It worked to a certain extent – Google was immensely successful, and their success was built on their ability to operate at scale and harness their vast troves of data to build unique products. Most of Silicon Valley copied Google, and much of the rest of the economy, seeing the innovation in software, copied Silicon Valley.
This wasn’t entirely wrong. Software is indeed eating the world. Customers expect world-class digital interactions, and the next wave of disruption is coming from technology. Data is at the heart of this, and the capabilities around data, AI, and the move to the cloud are indeed unleashing a wave of innovative products and capabilities. You should hire great engineers, and pay them well. This is all at the heart of how modern companies are competing.
But like any good boom, this one came with some excesses around the edges. There is a good chance that when you look across your tech operations, you see a lot of servers sitting at 9% CPU utilization, a big, hard-to-understand AWS bill, and some talented, well-compensated software and operations teams working on things that aren’t really core to the competitive advantage of the business you’re in.
Now we’re in a wave of optimization. Companies are figuring out (and optimizing) where all that cloud spend is going. They’re reorganizing and restructuring teams to go do the things the business really needs. This is the current reality. It’s painful, but in many cases necessary. After all, in software development, you want to fold in a few sprints on optimization after a few quarters solely devoted to new functionality.
At Confluent, we recognize this is the reality for many companies and we want to help. The capabilities around data streaming are now a necessity in modern software development, but operating Kafka clusters is not a core competency of your business. It is the core competency of ours, and we think we can do it better and for cheaper. Your best engineers should be working on the new features that help you win in your market, not getting bogged down with managing infrastructure. No one says “let’s run our own S3”—and we think the same should be true for your Kafka infrastructure.
It may seem counterintuitive to claim we can do this for less. After all, free software is, well, free, right? It is, but this is where our cloud service comes in. Your spend is actually not on software—it’s on servers, networking, and people. We’ve built our service to be much more efficient and to get rid of the operational burden for our customers, and this drives significant savings on cloud infrastructure while helping you redeploy your valuable engineers to the many critical projects every business has.
If you want to understand the details of how we do this, we’ve gone into great detail on both the underlying infrastructure and the operational savings of our product. But it is easy to make claims, and harder to deliver on them. Who hasn’t seen a bogus vendor ROI deck justifying spending more money with that vendor? We want to show you this is different. The fact is, these are very clear, easily calculable costs, and the resulting savings are equally crisp.
Rather than just look at our numbers, we invite you to sit with us and plug in your numbers. At the end of this, you’ll have two things: a crisp analysis of what it costs you to run open source Kafka on your own, as well as a proposal from Confluent to do it for less. Many of our customers did this exact analysis with us and realized they could save substantial amounts of money. For large-scale deployments, those savings can run into the millions of dollars.
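To make the shape of that analysis concrete, here is a minimal sketch of the kind of arithmetic involved in pricing out self-managed Kafka. Every number and parameter name below is a hypothetical placeholder for illustration – this is not Confluent’s actual cost estimator, and your real inputs (broker counts, instance types, replication traffic, staffing) will differ.

```python
# Illustrative sketch of a "run-it-yourself" Kafka annual cost model.
# All figures are hypothetical placeholders -- plug in your own numbers.

def self_managed_kafka_annual_cost(
    broker_count: int,
    instance_cost_per_month: float,   # compute cost per broker
    storage_cost_per_month: float,    # attached storage per broker
    network_cost_per_month: float,    # e.g., cross-AZ replication traffic
    operations_ftes: float,           # fraction of engineers on cluster ops
    loaded_engineer_cost: float,      # fully loaded annual cost per engineer
) -> float:
    infra = broker_count * 12 * (instance_cost_per_month + storage_cost_per_month)
    network = 12 * network_cost_per_month
    people = operations_ftes * loaded_engineer_cost
    return infra + network + people

# Hypothetical mid-size deployment: 9 brokers, 1.5 FTEs on operations.
diy = self_managed_kafka_annual_cost(
    broker_count=9,
    instance_cost_per_month=600.0,
    storage_cost_per_month=200.0,
    network_cost_per_month=3000.0,
    operations_ftes=1.5,
    loaded_engineer_cost=250_000.0,
)
print(f"Estimated self-managed annual cost: ${diy:,.0f}")
```

Note that in a model like this, the people line often dominates the infrastructure lines – which is exactly the point of the comparison above: the spend is on servers, networking, and people, not on the free software itself.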
Our challenge: We’re so confident we can do it for less, we will give you $100 if we can’t beat your costs.
It’s worth noting that our product isn’t just about cost savings – it’s also more reliable and more complete, and will help your team move faster. But these days, it isn’t enough to be better – you also have to be a better deal. And we are, so let us prove it to you. Use the following link to our cost estimator, enter a few simple details on your workload(s), and we’ll reach out to show you how we can do it for cheaper (or $100 on us).
Operating Kafka at scale can consume your cloud spend and engineering time. Even everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
Why do our customers choose Confluent as their trusted data streaming platform? In this blog, we will explore our platform’s reliability, durability, scalability, and security by presenting some remarkable statistics and providing insights into our engineering capabilities.