Insights from Finetuning LLMs with Low-Rank Adaptation

Published: December 17, 2023
on the channel: Sebastian Raschka
5,912 views
249 likes

Sebastian's books: https://sebastianraschka.com/books/

Links:
LoRA: Low-Rank Adaptation of Large Language Models, https://arxiv.org/abs/2106.09685
LitGPT: https://github.com/Lightning-AI/lit-gpt
LitGPT LoRA Tutorial: https://github.com/Lightning-AI/lit-g...

Low-rank adaptation (LoRA) stands as one of the most popular and effective methods for efficiently training custom Large Language Models (LLMs). As practitioners of open-source LLMs, we regard LoRA as a crucial technique in our toolkit.

In this talk, I will delve into some practical insights gained from running hundreds of experiments with LoRA, addressing questions such as: How much can I save with quantized LoRA? Are Adam optimizers memory-intensive? Should we train for multiple epochs? How do we choose the LoRA rank?
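For context on what the LoRA rank controls, here is a minimal sketch of a LoRA-style linear layer in PyTorch. The class and argument names (LoRALinear, r, alpha) are illustrative choices for this sketch, not LitGPT's API; it only shows the low-rank update W + (alpha/r)·BA described in the LoRA paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA layer: a frozen base weight plus a trainable low-rank update.

    Names and defaults here are illustrative, not taken from LitGPT.
    """

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # pretrained weight stays frozen

        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: training starts from W
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output of the frozen layer plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

With this decomposition, the trainable parameters per layer number r·(in_features + out_features): for r = 8 on a 4096×4096 weight, that is roughly 65K trainable parameters on top of ~16.8M frozen ones, which is where the rank-versus-capacity trade-off discussed in the talk comes from.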

---

To support this channel, please consider purchasing a copy of my books: https://sebastianraschka.com/books/

---

https://magazine.sebastianraschka.com
