The Q2 2024 Confluent Cloud launch introduces a suite of enhancements across the four key pillars of a Data Streaming Platform – Stream, Connect, Process, and Govern – alongside significant work we have been doing with our partner ecosystem to help customers unlock new possibilities. Confluent has helped more than 4,900 global enterprises start their data streaming journey and was recently named a Leader by Forrester Research in The Forrester Wave™: Streaming Data Platforms, Q4 2023.
When developing or debugging a stream processing pipeline with Flink SQL, it’s common to inspect each processing step's output to ensure data is being transformed properly. However, comprehending the resulting data stream's structure, distribution, and characteristics entails executing multiple ad-hoc SQL queries, which can be time-consuming and tedious. Additionally, isolating specific subsets of the stream for analysis or debugging often involves even more queries, adding to the complexity and time required.
We're excited to announce that Flink SQL Workspaces now include interactive tables that let you quickly explore and visualize Flink query results directly within the Confluent Cloud UI. Users can efficiently scan, analyze, and profile the output of each query, streamlining development and troubleshooting when implementing stream processing workloads.
These enhancements offer a deeper and more intuitive exploration of data with the following benefits:
Streamline data exploration and profiling with sortable, infinitely scrolling results that update in real time
Gain immediate insight into data trends and distributions using sparklines and summary statistics, without writing additional queries
Enhance troubleshooting and monitoring with message filtering and search capabilities to isolate specific data subsets for detailed inspection
The interactive frontend provides a comprehensive suite of functionality tailored to the needs of developers, data engineers, and DevOps teams, including:
Everything you expect from a data table, like infinite scroll and row sorting
Histograms and time series summaries of column data showing distributions and change over time
Column statistics like cardinality, median, min, and max
Filtering numerical and dimension data with brushing and selection
Table-level string search to find the exact row you want
Column management to adjust the display and understand column metadata
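As an illustration, a typical profiling step in a Workspace might be a simple aggregation like the sketch below; the interactive results table then lets you sort the output, inspect per-column sparklines and summary statistics, and filter down to specific keys. The orders table and its columns are hypothetical and only serve to illustrate the workflow.

```sql
-- Hypothetical exploration query: profile order volume and value per category.
-- The orders table and its columns are illustrative, not part of any shipped demo.
SELECT
  category,
  COUNT(*)    AS order_count,   -- row counts per category, sortable in the results table
  AVG(amount) AS avg_amount,    -- spot outliers via the column summary statistics
  MAX(amount) AS max_amount
FROM orders
GROUP BY category;
```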
Private networking on Confluent Cloud provides a secure environment for data streaming workloads and ensures compliance with regulations like GDPR and CCPA. Private networking options are offered across multiple features within Confluent Cloud, and we are excited to extend this capability to our cloud-native Apache Flink service on AWS. The new capability is currently available for Flink compute pools connected to Enterprise clusters and will be available for Dedicated clusters shortly.
Private networking support for Flink provides a critical layer of security for businesses that need to process data in real time within strict regulatory environments. Data streams flow between Flink and Kafka without exposing sensitive information to the public internet, enabling secure, flexible stream processing.
With private networking support for Flink, Confluent users can:
Enhance data security and privacy by safeguarding in-transit data between Flink and Kafka within a secure, private network
Simplify secure network configuration, making it easier to set up private connections without requiring extensive networking expertise
Facilitate secure and flexible stream processing across clusters and environments, ensuring data accessibility while adhering to strict security protocols
We plan to extend Flink private networking support to additional cloud platforms and cluster types soon.
The recently introduced serverless Enterprise clusters combine the unique benefits of elastic autoscaling with private networking so you can minimize operational burden while upholding strict security requirements. Powered by Elastic CKUs (eCKUs), Enterprise clusters instantaneously autoscale up to meet spikes in demand and back down without user intervention, so you never need to worry about cluster sizing and (over)provisioning again.
The latest updates to Enterprise clusters enable teams to:
Avoid overpaying for unused resources with scale-to-zero clusters and pay only for what you use, when you actually need it
Double the peak capacity with serverless clusters that autoscale up to 2.4 GBps of throughput
Improve cost-efficiency for high-partition workloads with triple the number of partitions per Elastic CKU
With these enhancements, Enterprise clusters are now even more elastic and cost-efficient than before.
Complementing and expanding upon Confluent’s portfolio of 120+ pre-built connectors, the Connect with Confluent (CwC) technology partner program further extends the global data streaming ecosystem, bringing fully managed Confluent Cloud data streams directly into the tools developers already use.
This quarter, we’re excited to introduce seven new program entrants who have recently launched new Confluent integrations: Amazon EventBridge, Couchbase, Cockroach Labs, DeltaStream, Neo4j, Nexla, and SAP. Additionally, Redis has expanded its portfolio of integrations with Confluent by introducing a new source & sink connector.
Together with Confluent, CwC partners provide:
Native Confluent integrations: Easily integrate fully managed data streams from within the data systems your teams already use, helping you lower costs while building the real-time applications customers want.
New data streaming use cases: Expand beyond Kafka experts and unlock new data streaming initiatives when more teams have easy access to data products built on Confluent.
More real-time data: Readily share your data products across Confluent’s data streaming network to fuel your entire business with more real-time data, and leverage the ever-expanding value of that data as the program continues to grow.
With more than 40 fully managed partner integrations developed since the CwC program launch last July, together with our diverse partners we’re further broadening the range of real-time use cases available to customers on Confluent’s data streaming platform. Check out our CwC Ecosystem Landscape to understand just how far real-time data streams can reach when running through Confluent’s partner ecosystem.
At Kafka Summit London, we introduced platform-wide enhancements to our fully managed connectors, including secure networking to private endpoints and updated Debezium V2 CDC connectors. However, migrating from a self-managed connector to a fully managed one can be tricky: it often requires reprocessing all data within a topic, which is time-consuming and increases the risk of producing duplicate records.
Custom offset management simplifies migrating from self-managed to fully managed connectors and upgrading connector versions by specifying the starting offset. With custom offsets, customers can now:
Seamlessly migrate from self-managed to fully managed connectors without data duplication
Replay messages starting from a specific offset during disaster recovery
Skip bad records that can’t be addressed with existing error-handling features
With support for over 20 source connectors and all sink connectors, offset management offers two migration paths: one that guarantees no data loss or duplication by pausing the running connector and retrieving its offsets, and another that ensures zero downtime by launching the new connector against the same topic and cleaning up any duplicates with Flink (see the sketch below). Teams should evaluate their migration requirements and choose the approach that best suits their needs.
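For the zero-downtime path, the duplicate cleanup can be expressed with Flink SQL’s deduplication pattern. The sketch below keeps only the first record seen per event key; the orders_raw table, its columns, and the choice of event_id as the dedup key are assumptions made for illustration, not part of the feature itself.

```sql
-- Illustrative duplicate cleanup after running the old and new connectors in parallel.
-- Table and column names are hypothetical; adapt the partition key to your data.
SELECT event_id, order_id, amount, event_time
FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY event_id        -- one row kept per unique event
           ORDER BY event_time ASC      -- keep the earliest occurrence
         ) AS row_num
  FROM orders_raw
)
WHERE row_num = 1;
```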
Amazon DynamoDB CDC Source Connector
Organizations utilizing Amazon DynamoDB with traditional batch-based data pipelines face challenges in maintaining synchronization across various data systems. The fully managed Amazon DynamoDB CDC Source Connector solves this by continuously capturing modifications like inserts, updates, and deletes in Amazon DynamoDB tables and publishing them into Kafka topics.
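Once change events land in a Kafka topic, downstream consumers can process them like any other stream. For instance, a Flink SQL filter such as the hypothetical one below could forward only updates and deletes to an audit flow; the table name, columns, and event-type values are assumptions for illustration and do not reflect the connector’s actual output schema.

```sql
-- Hypothetical downstream query over a topic fed by the DynamoDB CDC source connector.
-- Table name, columns, and event_type values are illustrative only.
SELECT order_id, event_type, event_time
FROM dynamodb_orders_cdc
WHERE event_type IN ('MODIFY', 'REMOVE');
```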
Google Cloud Functions (Gen 2) Sink Connector
Currently, Confluent’s fully managed Google Cloud Functions Sink connector integrates Apache Kafka with Google Cloud Functions. With the release of Google Cloud Functions Gen 2, we are launching a fully managed Google Cloud Functions (Gen 2) Sink connector that brings improvements such as instance concurrency and longer request processing to our customers.
We launched Custom Connectors in Q1’23 so customers could upload and run their own Kafka Connect plugin code on Confluent Cloud. The feature is now available in all regions on Azure, in addition to AWS.
Confluent Cloud offers a ‘Bring Your Own Key’ (BYOK) feature as a security enhancement that allows customers to manage their own encryption keys for data protection. This feature enables customers to use keys they generate and control, typically through a cloud provider's key management service (KMS), to encrypt data at rest within Confluent Cloud. Today, we are announcing a few key enhancements to this feature:
Managed HSM support across AWS, Azure, and GCP: We recently added support for keys protected by Hardware Security Modules (HSMs) across all three cloud providers, allowing customers to meet compliance requirements with FIPS (Federal Information Processing Standards) 140-2 validated HSMs.
Simplification of the AWS KMS policy: We have simplified how customers configure the AWS KMS policy that allows Confluent Cloud access to their encryption key. This makes the management of the policy within AWS operationally easier and more secure. There is no action required from existing customers. New clusters will automatically receive the updated policy during key provisioning. Learn more
BYOK API support for Google Cloud Platform (GCP): We have added BYOK API support for Google Cloud Platform (GCP), aligning the BYOK cluster provisioning experience across all three major cloud providers. This improvement was also rolled out in the Terraform provider release v1.56.0. Learn more
Confluent is committed to being a strategic ally in overcoming customers’ business challenges. As part of the Accelerate with Confluent program, we’ve strengthened our partner collaborations to better serve you. This program is designed to help you achieve your data streaming goals through:
Native Integrations through Connect with Confluent: Our partners can offer seamless integration solutions, making it easier for you to connect various systems and streamline your operations.
Build with Confluent: Partners can quickly develop streaming use case offerings, allowing you to jumpstart new projects and innovate faster.
Confluent’s Migration Accelerator: Transition smoothly from traditional messaging systems or Apache Kafka® to Confluent, ensuring minimal disruption and maximum efficiency.
By working hand-in-hand with Confluent and our partners, you can meet the growing demand for real-time customer experiences and applications, unlocking new opportunities for growth and success.
Build with Confluent
Build with Confluent helps system integrators (SIs) speed up the development of data streaming use case offerings. Customers can be confident that our SIs’ Confluent-based service offerings are not only built on the leading data streaming platform but also verified by Confluent experts. Our library of partner solutions helps customers easily deploy new use cases, with access to GitHub repositories, demo videos, and reference architectures.
Build with Confluent partners include Data Reply GmbH, Data Reply S.R.L, GFT Technologies, GoodLabs Studio, is-land Systems Inc., iLink Digital, KPMG, MFEC Public Company Limited, Ness Digital Engineering, Onibex, Persistent, Platformatory, Psyncopate, Quantyca S.p.A., Synthesis Software Technologies (Pty) Ltd.
Learn more about each partner’s solutions in this blog.
Confluent Migration Accelerator
The Confluent Migration Accelerator enables a smooth transition from any version of Apache Kafka® or traditional messaging systems to Confluent. We give customers exclusive access to resources from our partner ecosystem, along with partners' tailored migration offerings, to make migrations a breeze. Not only can customers upgrade their data streaming platform with confidence while saving time and money, but qualified migrations may also be eligible for funding support.
Migration Accelerator partners include Amazon Web Services (AWS), CloudMile, EPAM, Google Cloud, iLink Digital, Improving, Microsoft Azure, Ness, Platformatory, Psyncopate, SVA System Vertrieb Alexander GmbH, Somerford Associates, and MFEC Public Company Limited.
Ready to get started? If you haven’t done so already, sign up for a free trial of Confluent Cloud to explore new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage.*
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.