Semantic chunking lets us build more semantically coherent chunks for our RAG pipelines, chatbots, and AI agents. We can pair it with LLMs and embedding models from providers such as OpenAI, Cohere, and Anthropic, and with libraries like LangChain or CrewAI, to build potentially improved Retrieval Augmented Generation (RAG) pipelines.
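A minimal sketch of the idea: embed consecutive sentences and start a new chunk wherever similarity drops. The bag-of-words `embed`, the `semantic_chunk` helper, and the `threshold=0.3` value below are illustrative assumptions for this sketch; a real pipeline would use a learned embedding model (e.g. from OpenAI or Cohere) instead.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (assumption for this sketch);
    # real pipelines call an embedding model API instead.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunk(sentences, threshold=0.3):
    # Group consecutive sentences; start a new chunk when similarity
    # to the previous sentence drops below `threshold`.
    chunks, current = [], [sentences[0]]
    for prev, sent in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(sent)) < threshold:
            chunks.append(" ".join(current))
            current = [sent]
        else:
            current.append(sent)
    chunks.append(" ".join(current))
    return chunks

sentences = [
    "Pinecone is a vector database.",
    "A vector database stores embeddings.",
    "Bananas are a popular fruit.",
]
print(semantic_chunk(sentences))
# The topic shift at the third sentence produces a chunk boundary,
# yielding two chunks instead of fixed-size splits.
```

The point of the split rule is that chunk boundaries track topic shifts rather than arbitrary token counts.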
📌 Code:
https://github.com/pinecone-io/exampl...
🚩 Intro to Semantic Chunking:
https://www.aurelio.ai/learn/semantic...
🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-si...
👋🏼 AI Consulting:
https://aurelio.ai
👾 Discord:
/ discord
Twitter: / jamescalam
LinkedIn: / jamescalam
00:00 Semantic Chunking for RAG
00:45 What is Semantic Chunking
03:31 Semantic Chunking in Python
12:17 Adding Context to Chunks
13:41 Providing LLMs with More Context
18:11 Indexing our Chunks
20:27 Creating Chunks for the LLM
27:18 Querying for Chunks
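The indexing and querying steps in the chapters above can be sketched with a toy in-memory index. The `upsert`/`query` helpers and the bag-of-words `embed` are assumptions for illustration only; a real pipeline would upsert model embeddings into a vector database such as Pinecone and query it with the same embedding model.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (assumption for this sketch).
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = []  # list of (chunk_text, embedding) pairs

def upsert(chunk):
    # Store a chunk alongside its embedding.
    index.append((chunk, embed(chunk)))

def query(text, top_k=1):
    # Rank stored chunks by similarity to the query embedding.
    q = embed(text)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

upsert("Semantic chunking splits text where topic shifts occur.")
upsert("Pinecone stores and searches vector embeddings.")
print(query("how do I search embeddings?"))
```

The retrieved chunks would then be passed to the LLM as context, which is the "Creating Chunks for the LLM" step in the video.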
#artificialintelligence #ai #nlp #chatbot #openai
Video by James Briggs, published 4 May 2024.