As of today, Confluent Cloud for Apache Flink® is available for preview in select regions on AWS. In this post, learn how we’ve re-architected Flink as a cloud-native service on Confluent Cloud.
Apache Kafka 3.6 is here! This release includes Tiered Storage (Early Access), the ability to migrate clusters from ZooKeeper to KRaft with no downtime, the addition of a grace period to stream-table joins, & more!
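As a rough illustration of the last of those features, here is a minimal Kafka Streams sketch of a stream-table join with a grace period (new in 3.6 on the Joined config). The topic names, String serdes, and the five-minute grace period are assumptions; the grace period also relies on the table being materialized with a versioned store.

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class StreamTableJoinWithGrace {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical input topics: "orders" keyed by customer ID, "customers" as a changelog.
        KStream<String, String> orders = builder.stream("orders");

        // A grace-period join looks the table up "as of" the stream record's timestamp,
        // so the table is materialized with a versioned store that keeps history.
        KTable<String, String> customers = builder.table(
            "customers",
            Materialized.as(Stores.persistentVersionedKeyValueStore("customers-store", Duration.ofHours(1))));

        // New in Kafka 3.6: buffer stream records for a grace period so slightly
        // out-of-order records can still be joined against the right table version.
        KStream<String, String> enriched = orders.join(
            customers,
            (order, customer) -> order + " | " + customer,
            Joined.<String, String, String>with(Serdes.String(), Serdes.String(), Serdes.String())
                  .withGracePeriod(Duration.ofMinutes(5)));

        enriched.to("enriched-orders");
        // Building the Topology and starting KafkaStreams is omitted for brevity.
    }
}
```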
Ever dealt with a misbehaving consumer group? Imbalanced broker load? This could be due to your consumer group and partitioning strategy!
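For readers who want to see where the relevant knob lives, here is a minimal consumer sketch, assuming a local broker, a hypothetical "orders" topic, and String keys and values; it shows the partition.assignment.strategy setting that controls how partitions are spread across a consumer group.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignmentStrategyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-service");          // hypothetical group ID
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The assignment strategy decides how partitions are distributed across group members.
        // Cooperative sticky assignment avoids stop-the-world rebalances and keeps partitions
        // on the same member where possible.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic
            // poll loop omitted
        }
    }
}
```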
The ML and data streaming markets have socio-technical blockers between them, but they are finally coming together. Apache Kafka and stream processing solutions are a perfect match for data-hungry models.
Breaking encapsulation has led to a decade of problems for data teams. But is the solution just to tell data teams to use APIs instead of extracting data from databases? The answer is no. Breaking encapsulation was never the goal, only a symptom of data and software teams not working together.
Apache Kafka and stream processing are a natural fit for data-hungry models: our community’s solutions can form a critical part of a machine learning platform, enabling machine learning engineers to deliver real-time MLOps strategies.
Stream processing has long forced an uncomfortable trade-off: choose a framework for its power, or choose one in your preferred programming language. GraalVM may offer a way to avoid having to choose.
The big data revolution of the early 2000s saw rapid growth in data creation, storage, and processing. A new set of architectures, tools, and technologies emerged to meet the demand. But what of big data today? You seldom hear of it anymore. Where has it gone?
Use the Confluent CLI and API to create Stream Designer pipelines from SQL source code.
Experienced technology leaders know that adopting a new technology can be risky. Often, we are unable to distinguish between the investments that will be transformational and those that won’t be worthwhile. This post examines how to decide whether event streaming makes sense for your organization.
Learn how modern approaches like data mesh and event-driven architecture (EDA) can be used to manage data platforms, and how to take advantage of them.
Perhaps the largest challenge for modern data teams is gaining and retaining trust. The challenge of Big Data has come and gone; now we face the challenge of Untrustworthy Data, which will be one of the core focal points of the data space in 2023 and beyond.
Get an introduction to why Python is becoming a popular language for developing Apache Kafka client applications, and learn about several benefits that Kafka developers gain by using Python.
Discover tools, practices, and patterns for planning geo-replicated Apache Kafka deployments to build reliable, scalable, secure, and globally distributed data pipelines that meet your business needs.
An approach to combining Change Data Capture (CDC) messages from a relational database into transactional messages using Kafka Streams.
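The post’s exact technique isn’t reproduced here, but as a rough sketch of the idea, one could re-key CDC events by their source transaction ID and aggregate them with Kafka Streams. The topic names, String payloads, and the naive semicolon concatenation below are assumptions for illustration only.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class CdcTransactionAggregation {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical topic of row-level CDC events, already keyed by the source transaction ID.
        KStream<String, String> cdcEvents = builder.stream("cdc-events");

        // Fold all row changes that share a transaction ID into one combined record,
        // so downstream consumers see the transaction as a single message.
        KTable<String, String> transactions = cdcEvents
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .aggregate(
                () -> "",
                (txId, change, agg) -> agg.isEmpty() ? change : agg + ";" + change,
                Materialized.with(Serdes.String(), Serdes.String()));

        transactions.toStream().to("db-transactions");
        // In practice you would also need a completion signal (e.g., a commit/end marker
        // or a window) to know when a transaction is finished before emitting it.
    }
}
```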
This post details how to minimize internal messaging within Confluent Platform clusters. Service meshes and containerized applications have popularized the idea of separate control and data planes; this post applies that idea to Confluent Platform clusters and highlights its use in Confluent Cloud.
Using Apache Kafka to decouple microservices is a successful way to build a more resilient, flexible, and scalable architecture. However, it is very common for such microservices to be paired with a database. This blog provides a real-world use case showing how Kafka, together with ksqlDB, can replace that database.