Welcome to the Connect with Confluent (CwC) partner program Q4 update, where you’ll learn about everything new within Confluent’s expanding network of data streaming technology partners. In this blog, we’ll introduce you to the latest cohort of partners who have built Confluent integrations and share details on our recently announced Data Streaming for AI initiative, built together with CwC members.
In this quarter’s update, we’re especially pleased to welcome three of the leading vector database vendors to the CwC program: Pinecone, Weaviate, and Zilliz. With these new partners joining existing CwC members Elastic, MongoDB, and Rockset, Confluent customers now have diverse, best-of-breed options for vector search to power real-time AI use cases.
AI saw a massive resurgence in 2023, due in large part to breakthroughs in reusable large language models (LLMs), more accessible machine learning models, more data than ever, and more powerful GPUs. However, these advances alone will not drive results for companies seeking to successfully deploy highly sophisticated use cases such as GenAI chatbots. While LLMs may be able to hold intelligent conversations with customers, they’re limited to subjects covered by the public, historical datasets used to train the model. Delivering the highest-value experience to customers—for example, a retail chatbot aware of up-to-date inventory levels or fluctuating price points—requires direct, real-time access to the data systems that uniquely describe the business in its current state.
To solve this challenge, data streaming is becoming the data backbone of the modern AI stack. Customers have deployed Confluent for real-time AI and machine learning across diverse use cases, including predictive fraud detection, generative AI travel assistants, and personalized recommendations. Data streaming with Confluent Cloud allows businesses to:
Establish a dynamic, real-time knowledge repository to construct a unified source of real-time truth of all their data
Integrate real-time context at query execution by transforming and enriching raw data the moment it is generated
Experiment, scale, and innovate with greater agility when working with a modern, decoupled architecture that eliminates point-to-point connections and communication bottlenecks
Craft governed, secure, and trustworthy AI data with robust data lineage, quality, and traceability measures
With Confluent, data can be sourced from anywhere with 120+ pre-built connectors supported across multicloud and hybrid deployments. Data is then made AI-ready for downstream tools like vector databases with our recently announced serverless offering for Apache Flink (preview), which will allow calls to the OpenAI API directly from within Flink SQL.
Integral to the AI stack, vector databases can store, index, and augment large datasets in the formats that AI technologies like LLMs require. We’ve worked with CwC members Elastic, MongoDB, Pinecone, Rockset, Weaviate, and Zilliz to develop Confluent integrations that fuel their vector search capabilities with highly contextualized, AI-ready data streams sourced from anywhere throughout a customer’s business. Through these integrations, users can access governed, fully managed data streams directly within their vector database of choice, making it even easier to fuel highly sophisticated AI applications with trusted, real-time data.
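To make the data flow concrete, here is a miniature sketch of the pattern described above: events arriving on a stream are embedded and reshaped into the (id, vector, metadata) form that vector databases typically accept for upserts. Everything here is an illustrative assumption—the event schema, the `embed` stand-in, and the upsert shape are hypothetical, not Confluent's or any partner's actual API; a real pipeline would consume from Confluent Cloud with a Kafka client, call a real embedding model, and write through the vector database's own SDK.

```python
from dataclasses import dataclass

@dataclass
class ProductEvent:
    """A record as it might arrive on a Kafka topic (assumed schema)."""
    sku: str
    description: str
    price: float
    in_stock: bool

def embed(text: str, dim: int = 8) -> list[float]:
    # Stand-in for a real embedding model (e.g., an embeddings API call).
    # Hashes tokens into a fixed-size vector so the sketch stays self-contained.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def to_vector_upsert(event: ProductEvent) -> dict:
    # Reshape a stream event into the (id, vector, metadata) structure
    # most vector databases accept for upserts.
    return {
        "id": event.sku,
        "vector": embed(event.description),
        "metadata": {"price": event.price, "in_stock": event.in_stock},
    }

# Simulated events standing in for messages consumed from a Kafka topic.
events = [
    ProductEvent("sku-1", "wireless noise cancelling headphones", 199.0, True),
    ProductEvent("sku-2", "mechanical keyboard with brown switches", 89.0, False),
]
upserts = [to_vector_upsert(e) for e in events]
```

The value of doing this transformation in the streaming layer (e.g., with Flink) rather than in batch is that the vector index reflects the business's current state—inventory and price changes land in the index moments after they happen.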
Launched in July 2023, our Connect with Confluent (CwC) partner program further extends the global data streaming ecosystem and brings Confluent Cloud data streams directly to developers’ doorsteps, within the tools where they are already working. The program delivers:
Native Confluent integrations: CwC partners provide their users with the best data streaming experience—making Confluent data streams available from directly within the world’s most popular and disruptive data systems. Delivered fully managed, these integrations allow customers to avoid the costs, complexities, and risks of self-managing open source Apache Kafka® and building bespoke vendor integrations with the Kafka Connect framework. This allows for easier, more cost-effective development of truly real-time, low-latency experiences for end consumers. Use cases are easier to deploy, easier to evolve over time, and more feasible to reproduce throughout every corner of the business.
Removing the management burden of open source Apache Kafka, CwC partners give their customers fast access to data streams flowing through Confluent Cloud, a cloud-native and complete data streaming platform available everywhere their business needs it:
Cloud native: Spend more time building value when working with a 10x Apache Kafka service powered by the Kora Engine including GBps+ elastic scaling, infinite storage, a 99.99% uptime SLA, highly predictable/low latency performance, and more—all while lowering the total cost of ownership (TCO) for Kafka by up to 60%.
Complete: Deploy new use cases quickly, securely, and reliably when working with a complete data streaming platform with 120+ connectors, built-in stream processing with serverless Apache Flink (preview), enterprise-grade security controls, the industry’s only fully managed governance suite for Kafka, pre-built monitoring options, and more.
Everywhere: Maintain deployment flexibility whether running in the cloud, across clouds, on-premises, or in a hybrid environment. Confluent is available wherever your applications reside with clusters that sync in real time across environments to create a globally consistent central nervous system of real-time data to fuel the business.
New data streaming users: Through Connect with Confluent partners, businesses can expand beyond Kafka experts and accelerate their data streaming initiatives. Native integrations drive organic expansion of real-time use cases throughout the business—exposing more users to a new, high-value source of data they might previously never have accessed. Gone are the days of major, cross-functional internal efforts to figure out which team manages Kafka, what it will take to prioritize development of a new integration, and how long it will take for the integration to be ready for production use—typically a minimum of 3-6 engineering months plus a lifetime of maintenance.
Connect with Confluent partners lower the bar of complexity and technical expertise required to work with Kafka. As seen within the Upsolver demo above, these new integrations are fully managed and ready out of the box from right within our partners’ product user interfaces. They provide users with step-by-step guidance for configuring Confluent as a source or destination, coupled with tips and tricks for users who may not have deep experience working with Kafka or data streams.
More real-time data: Altogether, every new integration from Connect with Confluent partners gives customers an easier means of sharing their data products instantaneously across Confluent’s data streaming network and fueling their entire business with more real-time data. As the program continues to expand, the value of data residing within any given system increases exponentially with the ability to be shared with every application throughout the network. Here’s a look at every technology involved in the program today:
Ready to get started? Check out the full library of Connect with Confluent partner integrations to easily integrate your application with fully managed data streams.
Not seeing what you need? Not to worry. Check out our repository of 120+ pre-built source and sink connectors including 70+ provided fully managed.
Are you building an application that needs real-time data? Interested in joining the CwC program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.
Confluent’s technology partners provide their customers with the best experience for connecting with our data streaming platform and working with real-time data.
We empower partners with the technology and expertise to help innovate and solve their most critical data streaming challenges.