Build your real-time bridge to the cloud with Confluent Platform 7.0 and Cluster Linking | Read the blog

Your Confluent Cloud Newsletter

The Ingress

Your one-stop shop to learn about the most important recent updates within Confluent Cloud™.

Product Updates

Want more? Check out and subscribe to the Confluent Cloud release notes.

Now Available

ksqlDB is now available on Private Link clusters

Stream processing with ksqlDB, which lets you derive instant insights from your Confluent data streams using simple, familiar SQL syntax, is now supported on Private Link networked clusters. Popular for its combination of security and simplicity of setup, Private Link networking allows secure, one-way connections from your VPC/VNet to Confluent Cloud, with added protection against data exfiltration. Start working with ksqlDB on your Private Link cluster today.
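To give a flavor of the SQL syntax, here is a minimal sketch of a ksqlDB persistent query; the stream, topic, and column names are hypothetical:

```sql
-- Register a stream over an existing Kafka topic
-- (topic name 'orders' and schema are illustrative).
CREATE STREAM orders (
  order_id VARCHAR KEY,
  customer VARCHAR,
  amount   DOUBLE
) WITH (KAFKA_TOPIC = 'orders', VALUE_FORMAT = 'JSON');

-- Persistent query: continuously filter large orders
-- into a new stream backed by its own topic.
CREATE STREAM large_orders AS
  SELECT order_id, customer, amount
  FROM orders
  WHERE amount > 1000
  EMIT CHANGES;
```

The second statement runs continuously on the cluster, processing new records as they arrive rather than executing once like a traditional SQL query.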

Preview

Databricks Delta Lake Sink Connector for AWS

Fuel your Delta Lake with real-time event streams using our latest fully managed connector. The Databricks Delta Lake Sink connector, now available in Preview for AWS clusters, polls data from Kafka and copies it to an Amazon S3 staging bucket before committing these records to a Databricks Delta Lake instance. Check out the documentation linked above for more information on setup.

Three new features for Confluent Cloud Connectors

Connect log events

Connect log events are now available to customers on Standard and Dedicated clusters for self-service consumption and analysis. This feature increases the operational transparency of fully managed connectors by providing contextual logging information so that users have more information to self-identify the root cause of connector errors and resolve them quickly.

Single message transforms (SMTs)

Single message transforms (SMTs) are simple, lightweight modifications to message values, keys, and headers as messages flow through connectors. With a list of pre-built options available, implemented and managed through either the Cloud Console or the CLI, SMTs are useful for inserting fields, masking information, routing events, and performing other minor data adjustments within the connector itself.
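As an illustration of how SMTs are expressed in connector configuration, here is a hedged sketch using the standard Kafka Connect `transforms` properties; the connector name, transform alias, and field name are hypothetical, and connector-specific settings are omitted:

```json
{
  "name": "my-sink-connector",
  "config": {
    "transforms": "maskSsn",
    "transforms.maskSsn.type": "org.apache.kafka.connect.transforms.MaskField$Value",
    "transforms.maskSsn.fields": "ssn"
  }
}
```

Each transform gets an alias (here `maskSsn`), a type, and type-specific options; multiple aliases can be chained in the `transforms` list and are applied in order as each message passes through the connector.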


Connect data previews

Connect data previews (available today on select source connectors for Basic and Standard clusters) provide dry-run functionality that previews the output of the connector using your actual connector configuration. This lets you launch your connector with confidence when the preview matches your expected output, or adjust your configuration before launching when it doesn't.

Development Resources

Want more? Check out Confluent Developer

Demo

Accelerate Your Cloud Data Warehouse Migration and Modernization

Join this session to understand how Confluent helps teams connect hybrid and multi-cloud data to their cloud data warehouse of choice in real time. We'll review the benefits of modernization, including no-code data source integration, real-time data processing prior to writing to your data warehouse, and overall strategies for future-proofing your implementation. Modernizing your data warehouse doesn't need to be a multi-year lift-and-shift effort. Join us and learn how.

Workshop series

Your Path to Production with Confluent Cloud

New to Confluent? This 5-part instructional series will guide you through your first use of Confluent Cloud. Each session is a technical demo led by a Solutions Engineer and will walk you through the key milestones for successfully deploying your first use case with Confluent. Alongside a deep dive into the product, you'll learn about producers/consumers, source & sink connectors, stream processing, platform metrics, monitoring, and more.

Webinar

How ACERTUS Migrated from a Monolith to Microservices with ksqlDB

Register for our upcoming online talk with ACERTUS, which shifted its thinking from a synchronous, API-first mindset to an asynchronous approach oriented around event streaming. You'll hear from J3, ACERTUS VP of Data, on how ksqlDB was used to build new stream processing features and functionality using just SQL statements, with no new application code required, and about his team's broader use of ksqlDB for streaming ETL, data warehouse ETL processing, and microservices projects.

More ways to continue learning

Learning Events

Come visit Confluent at AWS re:Invent

We're back and in person at this year's AWS re:Invent, taking place in Las Vegas later this month. We'll have a number of activities going on, including sponsored sessions where you'll hear more about Confluent on the Edge and Data Warehouse Modernization, daily booth happy hours, and brand-new custom product demos. Make sure to attend a session or stop by our booth (#220) to meet with our experts and pick up some swag!

Ready to set your data in motion with Confluent?