We explore how you can train your own AI on consumer-grade GPUs using QLoRA. We will use the same techniques used to create the new Guanaco model (as well as exploring the model itself). We will look at how you can efficiently load and fine-tune large models such as gpt-neox-20b for free in Google Colab. This is all possible through 4-bit quantized models, which deliver performance comparable to 16-bit models with a much smaller memory footprint. Using this technique, we can create our own powerful models like Alpaca, Vicuna, or Guanaco in the future.
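To give a feel for the 4-bit loading step, here is a minimal sketch (not the exact notebook code) of loading gpt-neox-20b with bitsandbytes 4-bit quantization through the Hugging Face transformers API; the NF4/double-quantization settings are assumptions based on the QLoRA paper defaults.

```python
# Sketch: load a 20B-parameter model in 4-bit so it fits in Colab-class GPU memory
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EleutherAI/gpt-neox-20b"

# NF4 quantization with double quantization and bfloat16 compute (QLoRA-style defaults)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across whatever GPU memory is available
)
```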
In this video, we will create our own LLM using the gpt-neox-20b model and the English quotes dataset.
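Continuing from the loading sketch above, here is a hedged outline of the fine-tuning step: attaching LoRA adapters to the quantized model with peft and training on an English quotes dataset. The dataset name "Abirate/english_quotes" and all hyperparameters here are illustrative assumptions, not necessarily the values used in the video or notebook.

```python
# Sketch: QLoRA-style fine-tuning - only the small LoRA adapter weights are trained,
# while the 4-bit base model stays frozen. Assumes `model` and `tokenizer` from above.
import transformers
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the 4-bit model for training (casts layer norms, enables input gradients)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["query_key_value"],  # attention projection module in gpt-neox
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tokenize the quotes dataset (assumed dataset id, for illustration only)
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda sample: tokenizer(sample["quote"]), batched=True)

trainer = transformers.Trainer(
    model=model,
    train_dataset=data["train"],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```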
Fine-Tuning Notebook used in this video
https://github.com/chrishayuk/llm-col...
QLoRA Git Repository
https://github.com/artidoro/qlora