#llm #rag #langchain #llama #ollama
The Python code, instruction manual, and pdf files are given here: https://ko-fi.com/s/05a82fdd6f
In this tutorial, we explain how to build a prototype Retrieval-Augmented Generation (RAG) application in Python from scratch. The application is based on the Ollama framework, the Llama 3.1 Large Language Model (LLM), and the LangChain Python framework.
The application builds an embedding database from the provided PDF documents containing text, numbers, data, and tables (in future tutorials, we will also explain how to embed images). This database is used to augment the knowledge of the LLM. For example, the RAG application can perform calculations based on the provided table data. It can also understand custom text documents and draw intelligent conclusions from personal data. The techniques you will learn in this tutorial are important for developing personal assistants and automating daily tasks. Generalized to include images or even videos, such a RAG application has a number of engineering and robotics applications. In the video, we run a demonstration of our application.
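To make the retrieve-then-augment idea concrete, here is a minimal, dependency-free sketch of the core RAG loop: split documents into chunks, "embed" each chunk, retrieve the chunk most similar to the question, and prepend it to the prompt. The bag-of-words embedding and the sample document text are illustrative stand-ins; the actual application uses LangChain text splitters, an embedding model served through Ollama, and Llama 3.1 for generation.

```python
import math
import re
from collections import Counter

def split_into_chunks(text, chunk_size=8):
    """Split a document into fixed-size word chunks
    (LangChain's text splitters are more sophisticated)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def embed(text):
    """Toy bag-of-words 'embedding'; the real app would call an
    embedding model served locally by Ollama instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Illustrative document text (a stand-in for text extracted from the PDFs).
docs = "Revenue in 2023 was 4.2 million dollars. The office is located in Berlin."
chunks = split_into_chunks(docs, chunk_size=8)

# The retrieved chunk is prepended to the prompt before it is sent to the LLM.
question = "What was the revenue in 2023?"
context = retrieve(question, chunks, k=1)
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: {question}"
```

The same three-step structure (chunk, embed-and-retrieve, augment the prompt) is what the LangChain pipeline automates at scale over the PDF database.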
The video, "Create Retrieval-Augmented Generation RAG Application in Python From Scratch: Ollama, Llama, LangChain," was uploaded by Aleksandar Haber PhD on 23 September 2024.