Online Talk

How to Build RAG Using Confluent with Flink AI Model Inference and MongoDB

Watch Now

Available On-demand

Retrieval-augmented generation (RAG) is a pattern in GenAI designed to enhance the accuracy and relevance of responses generated by Large Language Models (LLMs), helping reduce hallucinations. RAG retrieves external data from a vector database at prompt time. To ensure that the data retrieved is always current, the vector database needs to be continuously updated with real-time information.
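To make the prompt-time retrieval step concrete, here is a minimal sketch in Python. The in-memory store, the 3-dimensional embeddings, and the function names are all illustrative assumptions; a real deployment would use learned embeddings and a vector database such as MongoDB Atlas, kept fresh by a stream of updates.

```python
import math

# Toy in-memory "vector store": (document text, embedding) pairs.
# The 3-d vectors below are made up for illustration; real embeddings
# come from an embedding model and live in a vector database.
STORE = [
    ("Kafka topics hold ordered event streams.", [0.9, 0.1, 0.0]),
    ("Flink jobs process streams in real time.", [0.1, 0.9, 0.2]),
    ("Vector search finds semantically similar text.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, k=1):
    """Return the k documents closest to the query embedding."""
    ranked = sorted(STORE, key=lambda d: cosine(query_embedding, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_embedding):
    """Augment the user question with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Because the retrieval happens at prompt time, whatever is currently in the store is what grounds the answer, which is why keeping the vector database continuously updated matters.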

How do you build RAG with real-time data?

Join experts Britton LaRoche, Staff Solutions Engineer at Confluent, and Vasanth Kumar, Principal Architect at MongoDB, as they walk through a RAG tutorial using the Confluent data streaming platform and MongoDB Atlas. Register now to learn:

  • How to implement RAG in 4 key steps: data augmentation, inference, workflows, and post-processing
  • How to use data streaming, Flink stream processing and AI Model Inference, and semantic vector search with a vector database like MongoDB Atlas
  • Step-by-step walkthrough of vector embedding for RAG
  • And get all your questions answered during live Q&A
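The four steps in the first bullet can be sketched as a minimal pipeline. Everything here is a hedged illustration: the function names are invented for this sketch, and the inference step is a stub standing in for a real model call (e.g., via Flink AI Model Inference), not the webinar's actual code.

```python
def augment(question: str, retrieved_context: str) -> str:
    # Step 1: data augmentation - combine the question with retrieved context.
    return f"Context: {retrieved_context}\nQuestion: {question}"

def infer(prompt: str) -> str:
    # Step 2: inference - stub for an LLM call; a real system would invoke
    # a model here instead of echoing the prompt.
    return f"ANSWER based on -> {prompt}"

def run_workflow(question: str, retrieved_context: str) -> str:
    # Step 3: workflow - chain augmentation and inference into one pipeline.
    return infer(augment(question, retrieved_context))

def post_process(answer: str) -> str:
    # Step 4: post-processing - tidy the raw model output before returning it.
    return answer.strip()
```

In a streaming architecture, each step could be a stage in a Flink job so that retrieval, inference, and post-processing all operate on continuously updated data.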

Britton is a seasoned tech professional with 20 years of experience, including roles at Oracle, MongoDB, and Confluent. He’s a certified MongoDB Developer and DBA, known for developing predictive AI/ML models. At Confluent, he has developed cutting-edge RAG Generative AI demos integrating Confluent Cloud with MongoDB Atlas Vector Search.

Vasanth is a principal architect at MongoDB with two decades of experience building enterprise-grade products across different verticals. At MongoDB, Vasanth helps software companies design and implement data infrastructure, ensuring seamless data integration and accessibility to realize a variety of use cases.