Announcing the latest updates to Confluent’s cloud-native data streaming platform: Kora Engine, Data Quality Rules, Custom Connectors, Stream Sharing, and more.
Today I’d like to give a tour of the internals of Confluent’s Apache Kafka® service. Powering this is a next-generation engine, Kora. Kora is a cloud data service that serves up the Kafka protocol for our thousands of customers and their tens of thousands of clusters.
I'm excited to share our intent to acquire Immerok! Together, we’ll build a cloud-native service for Apache Flink that delivers the same simplicity, security, and scalability that you expect from Confluent for Kafka.
GitOps can work with policy-as-code systems to provide a true self-service model for managing Confluent resources. Policy-as-code is the practice of permitting or preventing actions based on rules and conditions defined in code. In the context of GitOps for Confluent, suitable policies...
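As a toy illustration of the idea (this is not Confluent's policy engine; real GitOps pipelines often use dedicated tools such as Open Policy Agent), a minimal Python sketch of a policy check with entirely made-up rules might look like this:

```python
# Hypothetical policy-as-code check for a proposed Kafka topic change.
# Every rule, field name, and limit below is illustrative, not Confluent's.

ALLOWED_MAX_PARTITIONS = 64
REQUIRED_PREFIX = "team-"

def evaluate_topic_policy(request: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the change is allowed."""
    violations = []
    if not request.get("name", "").startswith(REQUIRED_PREFIX):
        violations.append(f"topic name must start with '{REQUIRED_PREFIX}'")
    if request.get("partitions", 0) > ALLOWED_MAX_PARTITIONS:
        violations.append(f"partitions may not exceed {ALLOWED_MAX_PARTITIONS}")
    if request.get("config", {}).get("cleanup.policy") not in ("delete", "compact"):
        violations.append("cleanup.policy must be 'delete' or 'compact'")
    return violations

# A CI step in the GitOps pipeline could gate the merge on this result:
proposed = {"name": "team-orders", "partitions": 12,
            "config": {"cleanup.policy": "delete"}}
problems = evaluate_topic_policy(proposed)
print("allowed" if not problems else problems)
```

The point of the pattern is that the rules live in version control next to the resource definitions, so a pull request that violates them fails review automatically.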
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database service that is highly available and scalable. It is designed to deliver single-digit millisecond query performance at any scale. It offers a fast and flexible way to store...
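As a minimal sketch of that key-value model using boto3 (the table name, key, and attributes are assumptions for the example; it presumes AWS credentials are configured and the table already exists):

```python
# Minimal DynamoDB key-value access with boto3. Assumes a table named
# "Orders" with partition key "order_id" -- both illustrative choices.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

# Write a single item; DynamoDB stores it as a schemaless key-value record.
table.put_item(Item={"order_id": "o-1001", "status": "shipped", "total": 42})

# Point reads on the primary key are the single-digit-millisecond path.
response = table.get_item(Key={"order_id": "o-1001"})
print(response.get("Item"))
```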
Our new PII Detection solution lets you securely utilize your unstructured text by providing entity-level control. Combined with our suite of data governance tools, you can execute a powerful real-time cyber defense strategy.
Companies are looking to optimize cloud and tech spend, and are being incredibly thoughtful about which priorities get assigned precious engineering and operations resources. “Build vs. Buy” is being taken seriously again. And if we’re honest, this probably makes sense. There is a lot to optimize.
Why do our customers choose Confluent as their trusted data streaming platform? In this blog, we will explore our platform’s reliability, durability, scalability, and security by presenting some remarkable statistics and providing insights into our engineering capabilities.
Operating Kafka at scale can consume your cloud spend and engineering time, and everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
This blog post introduces Confluent Platform 7.4 and its key features, which enhance scalability, increase architectural simplicity, accelerate time to market, reduce ops burden, and ensure high-quality data streams. It also covers what's new in Apache Kafka 3.4.
In part 2 of our blog series on understanding and optimizing your Kafka costs, we dive into how to estimate costs stemming from the development and operations personnel needed to self-manage Kafka.
This blog post discusses the Two Generals Problem, how it impacts message delivery guarantees, and how those guarantees would affect a futuristic technology such as teleportation.
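As a small taste of the problem before clicking through, here is a toy Python sketch (all probabilities and names are invented) of why an unreliable acknowledgement channel pushes you toward at-least-once delivery plus receiver-side deduplication:

```python
# Toy model of the Two Generals Problem: the sender can never distinguish
# a lost message from a lost ACK, so retrying yields at-least-once delivery
# and the receiver must deduplicate. Everything here is illustrative.
import random

seen_ids = set()

def unreliable_send(msg_id: int) -> bool:
    if random.random() < 0.3:
        return False              # message lost in transit, no ACK
    seen_ids.add(msg_id)          # receiver processes; the set deduplicates retries
    return random.random() > 0.3  # the ACK itself may be lost on the way back

def send_at_least_once(msg_id: int, max_retries: int = 10) -> None:
    for _ in range(max_retries):
        if unreliable_send(msg_id):
            return  # ACK received -- but the message may have arrived more than once
    # Without an ACK the sender can never be certain: the Two Generals result.

send_at_least_once(42)
print(42 in seen_ids)  # processed effectively once thanks to deduplication
```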
It's hard to properly calculate the cost of running Kafka. In part 1 of 4, learn to calculate your Kafka costs based on your infrastructure, networking, and cloud usage.
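To preview the flavor of that calculation, here is a back-of-the-envelope sketch; every price and sizing number in it is an illustrative assumption, not a benchmark or quote:

```python
# Back-of-the-envelope Kafka infrastructure cost sketch. All numbers
# (instance price, storage rate, cluster size) are made up for illustration.
BROKER_COUNT = 6
BROKER_HOURLY_USD = 0.77          # assumed on-demand instance price
HOURS_PER_MONTH = 730
STORAGE_TB = 20
STORAGE_USD_PER_TB_MONTH = 80.0   # assumed block-storage rate
REPLICATION_FACTOR = 3            # storage is paid once per replica

compute = BROKER_COUNT * BROKER_HOURLY_USD * HOURS_PER_MONTH
storage = STORAGE_TB * REPLICATION_FACTOR * STORAGE_USD_PER_TB_MONTH
print(f"compute ~ ${compute:,.0f}/mo, storage ~ ${storage:,.0f}/mo, "
      f"total ~ ${compute + storage:,.0f}/mo")
```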
If you’ve been working with Kafka Streams and have seen an “unknown magic byte” error, you might be wondering what a magic byte is in the first place, and also, how to resolve the error. This post explains the answers to both questions.
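For the curious: the magic byte comes from Confluent's Schema Registry wire format, in which byte 0 is the magic byte (currently 0) and bytes 1 through 4 carry the schema ID as a big-endian integer, followed by the serialized record. A minimal Python sketch of a consumer-side check (the helper name is hypothetical):

```python
# Parse Confluent's Schema Registry wire format: magic byte, then a
# 4-byte big-endian schema ID, then the serialized payload.
import struct

def inspect_wire_format(message: bytes) -> tuple[int, int, bytes]:
    if len(message) < 5:
        raise ValueError("too short to carry the wire-format header")
    magic_byte = message[0]
    if magic_byte != 0:
        # This is the condition behind the "unknown magic byte" error:
        # the payload was not produced with a Schema Registry serializer.
        raise ValueError(f"unknown magic byte: {magic_byte}")
    (schema_id,) = struct.unpack(">I", message[1:5])
    return magic_byte, schema_id, message[5:]

# Example: a header claiming schema ID 7, followed by a fake payload.
magic, schema_id, payload = inspect_wire_format(b"\x00\x00\x00\x00\x07payload")
print(magic, schema_id, payload)
```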
The ML and data streaming markets have socio-technical blockers between them, but they are finally coming together. Apache Kafka and stream processing solutions are a perfect match for data-hungry models.