In this video, we'll build a RAG app locally and for free using Ollama and a local embedding model, and trace the app in LangSmith.
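LangSmith tracing is configured through environment variables before any chain runs. A minimal sketch of that setup (the project name and the placeholder key are illustrative; paste your real key from smith.langchain.com):

```python
import os

# Turn on LangSmith tracing; LangChain reads these variables automatically.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"  # placeholder, not a real key
os.environ["LANGCHAIN_PROJECT"] = "rag-ollama-demo"         # any project name you like

print(os.environ["LANGCHAIN_PROJECT"])
```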
00:01 Introduction
01:08 Create a virtual environment
01:38 Installation
03:17 Initialize the local model
04:23 Enter LangSmith
07:28 Load data
09:14 Split data
11:50 Create a database
13:56 Retrieve data
15:35 Generate the output
20:23 Summary
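The chapters above follow the standard RAG pipeline: load → split → embed/store → retrieve → generate. The video builds this with LangChain, Ollama, and a vector store; as a library-free sketch of the split-and-retrieve steps, here is a toy in-memory retriever using bag-of-words cosine similarity (all names are illustrative, and the real app would use an embedding model and a vector database instead):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def split(doc: str, chunk_size: int = 5) -> list[str]:
    # Toy splitter: fixed-size word windows (the video uses a
    # LangChain text splitter for this step).
    words = doc.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank chunks by similarity to the query and return the top k,
    # which would then be passed to the LLM as context for generation.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("Ollama runs large language models locally. "
       "LangSmith traces LLM apps for debugging.")
chunks = split(doc)
print(retrieve("run models locally with Ollama", chunks))
```

The generate step is not shown here: in the video, the retrieved chunks are stuffed into a prompt and answered by a local Ollama model.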
🚀 Medium: / tirendazacademy
🚀 X: https://x.com/tirendazacademy
🚀 LinkedIn: / tirendaz-academy
▶️ LangChain Tutorials:
• LangChain Tutorials
▶️ Generative AI Tutorials:
• Generative AI Tutorials
▶️ LLMs Tutorials:
• LLMs Tutorials
▶️ HuggingFace Tutorials:
• HuggingFace Tutorials
• NLP with Tran...
🔥 Thanks for watching. Don't forget to subscribe, like the video, and leave a comment.
🔗 Notebook: https://github.com/TirendazAcademy/La...
🔗 LangGraph Blog: https://blog.langchain.dev/langgraph/
🔗 LangSmith: https://smith.langchain.com/
🔗 RAG Prompt: https://smith.langchain.com/hub/rlm/r...
#ai #langgraph #generativeai
Video: RAG with LangChain & Ollama Locally and For Free, by Evren Ozkip, 20 March 2024.