
Expanding the AI Data Landscape: Confluent’s Q3 Integrations Summary

Author: Greg Murphy

In an era when every second counts, enterprises that can act on information the moment it arrives are positioned to win—and real-time streaming data is the fuel that brings artificial intelligence (AI) to life. Powering agentic AI and advanced analytics can’t be done with static or delayed data; organizations need a comprehensive, reliable supply of streaming data representing their entire businesses. The Confluent data streaming platform makes this possible, enabling organizations to seamlessly connect, process, and govern data in motion to drive smarter, real-time business outcomes.

In Q3, we welcomed a diverse range of new partner system integrations into the Connect with Confluent technology partner program—spanning machine learning, real-time analytics, operational databases, workflow orchestration, and much more. Each new integration expands Confluent’s data streaming ecosystem and the overall AI data landscape, making it easier than ever for enterprises to harness real-time data across the tools and platforms that run their businesses.

Explore New Integrations Within Confluent’s Data Streaming Ecosystem

  • Amazon SageMaker: Feed live streaming data from Confluent Cloud into Amazon SageMaker for real-time model training, predictions, and AI-driven decision-making.

  • AWS Glue: Catalog and unify Confluent Cloud data streams with AWS Glue and Tableflow, enabling real-time analytics across lakehouses with seamless schema integration.

  • ClickHouse: Stream data from Confluent Cloud into ClickHouse with the new fully managed sink connector for real-time analytics and insights.

  • Couchbase: Stream data to and from Confluent Cloud with new fully managed source and sink connectors, powering real-time applications with bidirectional data flow.

  • Datadog: Monitor WarpStream in real time with the new Datadog integration, tracking connection status, latency, throughput, and error rates.

  • DiffusionData: Deliver data streams in real time to mobile apps, web clients, Internet of Things (IoT) devices, and edge services with Diffusion’s powerful edge server.

  • Firebolt: Stream event data from Confluent Cloud into Firebolt for sub-second dashboards, AI/machine learning (ML) feature delivery, and real-time operational insights.

  • Infoview: Modernize IBM i with real-time data replication pipelines from Confluent Cloud, simplifying integration with modern apps and analytics platforms.

  • LittleHorse: Orchestrate event-driven workflows with LittleHorse and Confluent Cloud, enabling resilient, real-time business processes.

  • MariaDB: Capture change events with the Debezium MariaDB change data capture (CDC) fully managed source connector for Confluent Cloud, enabling real-time replication and streaming analytics (a minimal consumer sketch follows this list).

  • Onibex for ClickHouse: Stream data into ClickHouse using the Onibex ClickHouse JDBC sink connector—supporting idempotent writes, auto table creation, and schema evolution.

  • SingleStore: Power real-time analytics, AI/ML feature delivery, and operational insights by ingesting data from Confluent Cloud into SingleStore.

  • Weaviate: Ingest Confluent data into the Weaviate vector database using the new sink connector, enabling semantic search and AI-powered applications.

  • YugabyteDB: Stream CDC events directly from YugabyteDB into Confluent Cloud to power real-time analytics, modern data pipelines, and event-driven applications.
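
To make the CDC pattern concrete, here is a minimal sketch of consuming Debezium change events from the MariaDB CDC connector with the confluent-kafka Python client. The topic name, credentials, and plain-JSON value format are illustrative assumptions; Debezium's standard envelope carries `before`, `after`, and `op` fields.

```python
# Minimal sketch: consuming Debezium MariaDB CDC events from Confluent Cloud.
# Topic name, credentials, and JSON value format are illustrative assumptions.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<BOOTSTRAP_SERVER>",   # from your Confluent Cloud cluster
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
    "group.id": "cdc-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["mariadb.inventory.orders"])  # hypothetical CDC topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # Debezium's standard envelope: "before"/"after" row images plus an
        # "op" code (c=create, u=update, d=delete, r=snapshot read).
        envelope = json.loads(msg.value())
        print(envelope.get("op"), envelope.get("after"))
finally:
    consumer.close()
```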

Beyond net-new integrations, our existing Connect with Confluent partners continue to enhance their integrations:

With the Amazon EventBridge sink connector, developers can stream data from Confluent Cloud directly into EventBridge, enabling real-time event routing to more than 140 Amazon Web Services (AWS) services and software-as-a-service (SaaS) applications. This reduces custom integration work, supports orchestration at scale, and lets teams focus on business logic instead of infrastructure.
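
Once the sink connector delivers Kafka records to an event bus, routing is ordinary EventBridge configuration. Below is a hedged boto3 sketch; the bus name, event pattern, and target ARN are placeholders, since the exact event shape depends on how the connector is configured.

```python
# Sketch: route events a Kafka sink connector delivers into an EventBridge bus.
# Bus name, event pattern, and target ARN are illustrative placeholders.
import json
import boto3

events = boto3.client("events")

# Match events arriving on the (hypothetical) bus fed by the sink connector.
events.put_rule(
    Name="orders-to-processing",
    EventBusName="confluent-orders-bus",
    EventPattern=json.dumps({"detail-type": ["order-events"]}),  # assumed detail-type
    State="ENABLED",
)

# Fan matched events out to a downstream target, e.g., an SQS queue.
events.put_targets(
    Rule="orders-to-processing",
    EventBusName="confluent-orders-bus",
    Targets=[{"Id": "processing-queue",
              "Arn": "arn:aws:sqs:us-east-1:123456789012:order-processing"}],
)
```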

AWS Lambda now natively supports Confluent Schema Registry for Avro and Protobuf formats. This means developers can ensure data integrity, cut down custom deserialization code, and even apply filtering rules to reduce unnecessary function calls and costs—all while focusing more on business logic than infrastructure.
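
In practice, a handler behind a Kafka event source mapping receives batched records keyed by topic-partition; with Schema Registry deserialization enabled, the value arrives ready to decode rather than as raw Avro or Protobuf bytes. The sketch below assumes each deserialized value is delivered as base64-encoded JSON; verify the payload shape in your own environment.

```python
# Sketch of a Lambda handler behind a Kafka event source mapping.
# Assumes Schema Registry deserialization is enabled and that each record's
# value arrives as base64-encoded JSON; verify the shape in your environment.
import base64
import json

def handler(event, context):
    # Records are batched and keyed by "<topic>-<partition>".
    for topic_partition, records in event["records"].items():
        for record in records:
            value = json.loads(base64.b64decode(record["value"]))
            print(topic_partition, record["offset"], value)
    return {"batchItemFailures": []}  # report no partial-batch failures
```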

Govern AI-Ready Data Products Across the Operational-Analytical Divide With Tableflow

Deploying AI across an enterprise requires real-time, trusted data products, not just raw data. These products must be governed, reusable, and designed to power AI and analytics, no matter the source. Many organizations struggle because their operational and analytical systems remain siloed, making it hard to provide AI applications with the data needed for accurate predictions and real-time decision-making.

Tableflow represents Apache Kafka® topics and associated schemas as open table formats, such as Apache Iceberg™ or Delta Lake, in a few clicks to feed any data warehouse, data lake, or analytics engine.

Tableflow solves this problem by letting you take any Kafka topic with a schema and expose it as an Iceberg or Delta Lake table: no pipeline stitching, less data duplication, and no schema headaches. Just enable Tableflow, and your Kafka data becomes instantly accessible to your analytics and AI use cases.

Tableflow also uses a new metadata publishing service behind the scenes that taps into Confluent’s Schema Registry to generate Iceberg metadata and Delta transaction logs while handling schema mapping, schema evolution, and type conversions. Catalog syncing allows you to sync Tableflow-created tables as external tables in Snowflake Open Catalog, Apache Polaris™, AWS Glue, and Unity Catalog (open preview).
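
For example, any engine that speaks the Iceberg REST catalog protocol can read a Tableflow-exposed topic like an ordinary table. Here is a minimal PyIceberg sketch; the catalog URI, credential format, and table naming are illustrative assumptions, so consult your Tableflow catalog settings.

```python
# Sketch: reading a Tableflow-exposed Kafka topic as an Iceberg table via an
# Iceberg REST catalog. URI, credential, and table name are illustrative
# assumptions; consult your Tableflow catalog settings for actual values.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "tableflow",
    **{
        "type": "rest",
        "uri": "https://<tableflow-rest-catalog-endpoint>",
        "credential": "<API_KEY>:<API_SECRET>",
    },
)

table = catalog.load_table("<cluster_id>.<topic_name>")  # assumed namespace.table
df = table.scan().to_pandas()  # snapshot of the topic's rows as a DataFrame
print(df.head())
```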

Additionally, Tableflow is now available for Oracle Autonomous Database (ADB). With Kafka topics written to Iceberg tables in Amazon S3, Oracle ADB can then treat those tables as external tables, so you can run rich Oracle SQL (or even Oracle Select AI) across real-time events without copying any data.
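
Once those Iceberg tables are mapped as external tables in Oracle ADB, querying them is plain SQL. A minimal python-oracledb sketch follows; the connection details and the external table name are hypothetical.

```python
# Sketch: querying a Tableflow-fed Iceberg external table in Oracle ADB
# with python-oracledb. Connection details and table name are hypothetical.
import oracledb

conn = oracledb.connect(user="<USER>", password="<PASSWORD>", dsn="<ADB_DSN>")
with conn.cursor() as cur:
    # ORDERS_EXT is an assumed external table mapped over the Iceberg data in S3.
    cur.execute("SELECT status, COUNT(*) FROM orders_ext GROUP BY status")
    for status, n in cur:
        print(status, n)
conn.close()
```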

Monitor WarpStream Deployments With Datadog

WarpStream by Confluent is a diskless, Kafka-compatible data streaming platform built directly on top of object storage—meaning zero disks, zero inter-AZ costs, and zero cross-account identity and access management (IAM) access required. It scales infinitely and runs in your virtual private cloud (VPC).

The new Datadog integration for WarpStream offers comprehensive observability into the performance and health of WarpStream agents. By leveraging both the Datadog agent and WarpStream's StatsD integration, this connector enables real-time monitoring of key metrics such as connection status, latency, throughput, and error rates. This dual-layer approach ensures that users can proactively identify and address potential issues, optimizing the efficiency and reliability of their data streaming operations.
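
For context, StatsD is a lightweight UDP metrics protocol collected by a local Datadog agent. The sketch below uses the datadog Python package's DogStatsD client to show the mechanism; the metric names are hypothetical illustrations, not WarpStream's actual metrics, which the integration collects automatically.

```python
# Sketch: emitting StatsD-style metrics that a local Datadog agent collects.
# Metric names here are hypothetical illustrations of the mechanism; the
# WarpStream integration ships its own metrics through this same path.
from datadog import initialize, statsd

# Point the DogStatsD client at the local Datadog agent.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

statsd.gauge("demo.connection.status", 1, tags=["env:dev"])    # e.g., connected
statsd.increment("demo.messages.processed", tags=["env:dev"])  # throughput tick
statsd.histogram("demo.produce.latency_ms", 12.5, tags=["env:dev"])
```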

Monitor WarpStream agent health and performance in real time with Datadog.

Easily Unlock Real-Time Value From Existing Systems and Applications

With a complete connector strategy, Confluent allows you to easily get data into and out of the platform without being a Kafka expert. You can accelerate development time and speed to launch with 120+ pre-built connectors, an ever-growing suite of custom connectors, and native integrations built directly into the tools your teams are already using.

No matter where you deploy—in the cloud, on-premises, at the edge, or in a hybrid environment—Confluent connects your entire business in real time.

Find and Configure Your Next Integration

Ready to get started? Check out the full library of Connect with Confluent partner integrations to easily integrate your application with fully managed data streams.

Not seeing what you need? Not to worry. Check out our repository of 120+ pre-built source and sink connectors, including 80+ that are fully managed.

Take your resume to the next level and enhance your career prospects. Get certified as a Confluent Data Streaming Engineer. It’s free!

Are you building an application that needs real-time data? Interested in joining the Connect with Confluent program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.


Apache®, Apache Kafka®, Kafka®, the Kafka logo, Apache Iceberg™, the Iceberg logo, Apache Polaris™, and associated open source project names are either registered trademarks or trademarks of the Apache Software Foundation.

  • Greg Murphy is the Staff Product Marketing Manager focused on developing and evangelizing Confluent’s technology partner program. He helps customers better understand how Confluent’s data streaming platform fits within the larger partner ecosystem. Prior to Confluent, Greg held product marketing and product management roles at Salesforce and Google Cloud.
