By this point, just about everybody has had a go playing with ChatGPT, making it do all sorts of wonderful and strange things. But how do you go beyond just messing around and using it to build a real-world, production application?
Learn why stream processing is such a critical component of the data streaming stack, why developers are choosing Apache Flink as their stream processing framework of choice, and how to use Flink with Kafka.
Learn what windowing is in Kafka Streams and get comfortable with the differences between the main types.
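For a quick feel of the difference, here is a minimal Kafka Streams sketch (not from the post itself) contrasting a tumbling window with a hopping window. The "page-views" topic, the window sizes, and the assumption that default serdes are configured elsewhere are all illustrative placeholders:

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowingSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical topic of page-view events keyed by user ID.
        KStream<String, String> views = builder.stream("page-views");

        // Tumbling window: fixed, non-overlapping 5-minute buckets.
        KTable<Windowed<String>, Long> tumblingCounts = views
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
                .count();

        // Hopping window: 5-minute windows that advance every minute, so they overlap
        // and each record contributes to several windows.
        KTable<Windowed<String>, Long> hoppingCounts = views
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5))
                        .advanceBy(Duration.ofMinutes(1)))
                .count();
    }
}
```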
Learn the best practices for integrating Confluent with AWS Lambda to build event-driven architectures.
Learn about the role of batch.size and linger.ms in data compression.
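As a rough illustration, here is a hedged producer configuration sketch showing how batch.size and linger.ms interact with compression.type: compression is applied per batch, so larger, slightly delayed batches give the compressor more data to work with. The broker address and the specific values are placeholders, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressionConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // Compression happens per batch on the producer side.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        // Allow up to 64 KB per batch (the default is 16 KB) ...
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);

        // ... and wait up to 20 ms for a batch to fill before sending (the default is 0).
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send records as usual; batching and compression happen transparently.
        }
    }
}
```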
Learn how to build a Java pipeline that consumes clickstream data from Apache Kafka®. Consuming clickstreams is a common need across many businesses, and the approach generalizes to consuming other types of streaming data.
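A minimal consumer loop along those lines might look like the following sketch; the "clickstream" topic name, group ID, and broker address are hypothetical placeholders rather than the post's actual setup:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ClickstreamConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "clickstream-pipeline");     // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("clickstream")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Replace with real processing, e.g. parsing the click event payload.
                    System.out.printf("user=%s event=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```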
Dive into Flink SQL, a powerful data processing engine that allows you to process and analyze large volumes of data in real time. We’ll cover how Flink SQL relates to the other Flink APIs and showcase some of its built-in functions and operations with syntax examples.
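As a small taste of the syntax, here is a hedged sketch that runs Flink SQL from Java. It uses Flink's bundled datagen connector instead of a real Kafka source purely to keep the example self-contained; the table name, columns, and built-in functions shown (UPPER, ROUND) are illustrative choices, not the post's examples:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkSqlSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical source table fed by the built-in datagen connector.
        tableEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id STRING," +
                "  amount   DOUBLE" +
                ") WITH (" +
                "  'connector' = 'datagen'," +
                "  'rows-per-second' = '5'" +
                ")");

        // A couple of built-in scalar functions applied to the streaming table.
        tableEnv.executeSql(
                "SELECT UPPER(order_id) AS id, ROUND(amount, 2) AS amount FROM orders")
            .print();
    }
}
```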
Is Windows your favorite development environment? Do you want to run Apache Kafka® on Windows? Thanks to the Windows Subsystem for Linux 2 (WSL 2), now you can, and with fewer tears than in the past.
Apache Kafka (the basis for the Confluent Platform) is an advanced stream processing platform used by thousands of companies to stream data at scale across AWS, GCP, and Azure. Amazon...
Get a high-level overview of source connector tuning: what can and cannot be tuned, and a tuning methodology that applies to any and all source connectors.
Learn about Confluent Platform 7.5 and its latest key features: enhancing security with SSO for Control Center, improving developer efficiency with Confluent REST Proxy API v3, and improving disaster recovery capabilities with bidirectional Cluster Linking.
Apache Flink can be used for many stream processing use cases. In this post, we show how developers can use Flink to build real-time applications, run analytical workloads, and build real-time data pipelines.
Versioned key-value state stores, introduced in Kafka Streams 3.5, enhance stateful processing by allowing users to store multiple record versions per key, rather than only the single latest version per key, as is the case for existing key-value stores today...
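A rough sketch of what materializing a table with a versioned store can look like, assuming Kafka Streams 3.5+ and default serdes configured elsewhere; the "prices" topic, the store name, and the one-hour history retention are hypothetical, and the exact Materialized plumbing may differ slightly by version:

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class VersionedStoreSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Materialize the table with a versioned store that keeps one hour of
        // older record versions per key, instead of only the latest value.
        KTable<String, String> prices = builder.table(
                "prices", // hypothetical topic name
                Materialized.<String, String>as(
                        Stores.persistentVersionedKeyValueStore(
                                "versioned-prices", Duration.ofHours(1))));
    }
}
```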
Confluent Cloud has chosen Let’s Encrypt as its Certificate Authority and leverages Let’s Encrypt’s automation features to spend less time managing certificates and more time building private networking features.
Learn the basics of what an Apache Kafka cluster is and how it works: from brokers to partitions, how it balances load, how it handles replication, and how it recovers from leader and replica failures.
When developing streaming applications, one crucial aspect that often goes unnoticed is the difference in default partitioning behavior between Java and non-Java producers. This disparity can result in data mismatches and inconsistencies, posing challenges for developers.
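One way to see the issue is to compute where the Java producer's default, murmur2-based partitioner would place a key and compare that with the partition a non-Java client (which typically defaults to a different hash) actually wrote to. A minimal sketch, where the key and partition count are placeholders:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class PartitionCheckSketch {
    public static void main(String[] args) {
        String key = "user-42";   // hypothetical record key
        int numPartitions = 6;    // hypothetical partition count for the topic

        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);

        // The Java producer hashes keys with murmur2 and takes the result modulo the
        // partition count; clients built on other libraries may default to a different
        // hash, so the same key can land on a different partition unless the
        // partitioners are explicitly aligned.
        int javaPartition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

        System.out.println("Java default partition for key '" + key + "': " + javaPartition);
    }
}
```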