Sebastian's books: https://sebastianraschka.com/books/
Links:
LoRA: Low-Rank Adaptation of Large Language Models, https://arxiv.org/abs/2106.09685
LitGPT: https://github.com/Lightning-AI/lit-gpt
LitGPT LoRA Tutorial: https://github.com/Lightning-AI/lit-g...
Low-rank adaptation (LoRA) stands as one of the most popular and effective methods for efficiently training custom Large Language Models (LLMs). For practitioners working with open-source LLMs, LoRA is a crucial technique in the toolkit.
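As a quick refresher (not part of the talk itself): LoRA freezes the pretrained weight matrix W and learns a low-rank update ΔW = BA, scaled by alpha/r. A minimal PyTorch sketch of this idea; the class name LoRALinear and the default rank/alpha values below are illustrative choices of mine, not LitGPT's implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: y = x W^T + (alpha / rank) * x A^T B^T."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight
        self.linear = nn.Linear(in_features, out_features, bias=False)
        self.linear.weight.requires_grad = False
        # Trainable low-rank factors: A (rank x in), B (out x rank)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: ΔW starts at 0
        self.scaling = alpha / rank

    def forward(self, x):
        return self.linear(x) + self.scaling * (x @ self.A.T @ self.B.T)
```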
In this talk, I will delve into practical insights gained from running hundreds of experiments with LoRA, addressing questions such as: How much memory can you save with quantized LoRA? Is the Adam optimizer memory-intensive? Should you train for multiple epochs? How do you choose the LoRA rank?
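To make the rank question concrete: for a single d_out x d_in weight matrix, LoRA adds only r * (d_in + d_out) trainable parameters. A back-of-the-envelope calculation (the 4096 dimension and the ranks below are example values of mine, not numbers from the talk):

```python
d_in = d_out = 4096            # e.g. one attention projection in a 7B-class model
full = d_in * d_out            # ~16.8M parameters if this matrix were fully finetuned

for r in (4, 8, 16, 64):
    lora = r * (d_in + d_out)  # trainable parameters LoRA adds at rank r
    print(f"rank {r:>3}: {lora:>9,} params ({lora / full:.3%} of full finetuning)")
```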
---
To support this channel, please consider purchasing a copy of my books: https://sebastianraschka.com/books/
---
https://magazine.sebastianraschka.com