As of today, Confluent Cloud for Apache Flink® is available for preview in select regions on AWS. In this post, learn how we’ve re-architected Flink as a cloud-native service on Confluent Cloud.
Apache Kafka 3.6 is here! This release includes Tiered Storage (Early Access), the ability to migrate clusters from ZooKeeper to KRaft with no downtime, the addition of a grace period to stream-table joins, and more!
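As a minimal sketch of that grace period feature (assuming the KIP-923 `Joined#withGracePeriod` API and hypothetical `orders`/`customers` topics), a Kafka Streams stream-table join with buffering might look like this:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class GracePeriodJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));

        // The grace-period join needs a versioned table, so older table
        // versions are retained long enough to match buffered stream records.
        KTable<String, String> customers = builder.table(
                "customers",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.as(
                        Stores.persistentVersionedKeyValueStore(
                                "customers-store", Duration.ofMinutes(10))));

        // Stream records are buffered for up to 5 minutes, giving late or
        // out-of-order table updates a chance to arrive before the join fires.
        orders.join(
                  customers,
                  (order, customer) -> order + " -> " + customer,
                  Joined.<String, String, String>with(
                          Serdes.String(), Serdes.String(), Serdes.String())
                        .withGracePeriod(Duration.ofMinutes(5)))
              .to("enriched-orders");

        System.out.println(builder.build().describe());
    }
}
```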
Ever dealt with a misbehaving consumer group? Imbalanced broker load? These issues could stem from your consumer group configuration and partitioning strategy!
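For instance, one common remedy is switching the consumer's partition assignment strategy. A minimal sketch (the broker address, group id, and topic name are placeholders):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignmentStrategySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());

        // CooperativeStickyAssignor keeps partition ownership stable across
        // rebalances and avoids stop-the-world revocations, which often cures
        // groups that churn on every membership change.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            // poll loop elided
        }
    }
}
```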
By this point, just about everybody has had a go playing with ChatGPT, making it do all sorts of wonderful and strange things. But how do you go beyond just messing around and use it to build a real-world, production application?
GitOps can work with policy-as-code systems to provide a true self-service model for managing Confluent resources. Policy-as-code is the practice of permitting or preventing actions based on rules and conditions defined in code. In the context of GitOps for Confluent, suitable policies...
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database service that is highly available and scalable. It is designed to deliver single-digit millisecond query performance at any scale. It offers a fast and flexible way to store...
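As a minimal sketch of that key-value model (assuming the AWS SDK for Java v2 and a hypothetical "users" table with partition key "userId"), a put and get look like this:

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class DynamoSketch {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            // Write one item keyed by the partition key "userId".
            ddb.putItem(PutItemRequest.builder()
                    .tableName("users")
                    .item(Map.of(
                            "userId", AttributeValue.builder().s("u-123").build(),
                            "name", AttributeValue.builder().s("Ada").build()))
                    .build());

            // Point lookups by key are what DynamoDB serves at
            // single-digit-millisecond latency.
            Map<String, AttributeValue> item = ddb.getItem(GetItemRequest.builder()
                    .tableName("users")
                    .key(Map.of("userId", AttributeValue.builder().s("u-123").build()))
                    .build()).item();
            System.out.println(item);
        }
    }
}
```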
Our new PII Detection solution lets you securely utilize your unstructured text by providing entity-level control. Combined with our suite of data governance tools, it enables a powerful real-time cyber defense strategy.
Announcing the latest updates to Confluent’s cloud-native data streaming platform: Kora Engine, Data Quality Rules, Custom Connectors, Stream Sharing, and more.
Today I’d like to give a tour of the internals of Confluent’s Apache Kafka® service. Powering this is a next-generation engine, Kora. Kora is a cloud data service that serves up the Kafka protocol for our thousands of customers and their tens of thousands of clusters.
Companies are looking to optimize cloud and tech spend, and are being incredibly thoughtful about which priorities get assigned precious engineering and operations resources. “Build vs. Buy” is being taken seriously again. And if we’re honest, this probably makes sense. There is a lot to optimize.
Why do our customers choose Confluent as their trusted data streaming platform? In this blog, we will explore our platform’s reliability, durability, scalability, and security by presenting some remarkable statistics and providing insights into our engineering capabilities.
Operating Kafka at scale can consume your cloud spend and engineering time. Everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
This blog introduces Confluent Platform 7.4 and its key features: enhanced scalability, increased architectural simplicity, accelerated time to market, reduced ops burden, and high-quality data streams. It also covers what's new in Apache Kafka 3.4.
In part 2 of our blog series on understanding and optimizing your Kafka costs, we dive into how to estimate costs stemming from the development and operations personnel needed to self-manage Kafka.
This blog post discusses the Two Generals problem, how it impacts message delivery guarantees, and how those guarantees would affect a futuristic technology such as teleportation.
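In Kafka's case, the usual answer to this two-generals-style uncertainty is at-least-once delivery plus idempotence, so that retried sends are deduplicated by the broker. A minimal producer sketch (broker address and topic are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DeliveryGuaranteeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());

        // Since an acknowledgment can always be lost in transit, the producer
        // must retry; idempotence lets the broker discard the duplicates
        // those retries create.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}
```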
It's hard to properly calculate the cost of running Kafka. In part 1 of 4, learn to calculate your Kafka costs based on your infrastructure, networking, and cloud usage.
If you’ve been working with Kafka Streams and have seen an “unknown magic byte” error, you might be wondering what a magic byte is in the first place, and also, how to resolve the error. This post explains the answers to both questions.
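For context, Confluent's Schema Registry wire format prefixes each serialized payload with a zero "magic byte" followed by a 4-byte schema ID; the error fires when a deserializer reads bytes that lack that prefix. A minimal sketch of the framing (the inspector class is hypothetical):

```java
import java.nio.ByteBuffer;

public class MagicByteInspector {
    // Confluent wire format: [magic byte 0x0][4-byte schema ID][payload].
    public static void inspect(byte[] recordValue) {
        ByteBuffer buf = ByteBuffer.wrap(recordValue);
        byte magic = buf.get();
        if (magic != 0x0) {
            // This is the condition behind "unknown magic byte": the record
            // was not produced with a Schema Registry-aware serializer.
            throw new IllegalArgumentException("Unknown magic byte: " + magic);
        }
        int schemaId = buf.getInt(); // big-endian schema ID
        System.out.println("Schema ID: " + schemaId
                + ", payload bytes: " + buf.remaining());
    }
}
```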