We explore how you can fine-tune your own AI on consumer-grade GPUs using QLoRA. We will use the same techniques used to create the new Guanaco model (as well as exploring the model itself). We will look at how you can efficiently load and fine-tune large models such as gpt-neox-20b for free in Google Colab. This is all possible through 4-bit quantized models, which match the performance of 16-bit models with a much smaller memory footprint. Using this technique, in the future we can create our own powerful models like Alpaca, Vicuna, or Guanaco.
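As a rough sketch of what 4-bit loading plus QLoRA looks like in code (using the Hugging Face transformers, bitsandbytes, and peft libraries; the hyperparameters here are illustrative, not necessarily those used in the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "EleutherAI/gpt-neox-20b"

# 4-bit NF4 quantization: weights are stored in 4-bit while compute runs in
# bfloat16, which is what shrinks the memory footprint enough for Colab.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across the available GPU(s)
)

# Attach small LoRA adapters; only the adapters are trained,
# the 4-bit base model stays frozen.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                                 # illustrative rank
    lora_alpha=32,
    target_modules=["query_key_value"],  # attention projection in gpt-neox
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```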
In this video, we will create our own LLM by fine-tuning the gpt-neox-20b model on the English quotes dataset.
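A minimal training sketch for that dataset might look like the following, assuming the quotes come from the Abirate/english_quotes dataset on the Hugging Face Hub and that `model` and `tokenizer` are set up as in the snippet above (the training arguments are illustrative):

```python
import transformers
from datasets import load_dataset

# English quotes dataset (assumed to be Abirate/english_quotes on the Hub).
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda sample: tokenizer(sample["quote"]), batched=True)

# gpt-neox has no pad token by default, so reuse the end-of-sequence token.
tokenizer.pad_token = tokenizer.eos_token

trainer = transformers.Trainer(
    model=model,
    train_dataset=data["train"],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        warmup_steps=10,
        max_steps=50,              # keep it short for a Colab demo
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
        output_dir="outputs",
        optim="paged_adamw_8bit",  # paged optimizer from bitsandbytes, as used by QLoRA
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # avoid cache warnings during training; re-enable for inference
trainer.train()
```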
Fine Tuning Notebook used in this video
https://github.com/chrishayuk/llm-col...
QLora Git Repository
https://github.com/artidoro/qlora