[QA] LoRA Learns Less and Forgets Less

Published: 16 May 2024
on channel: Arxiv Papers

LoRA is a parameter-efficient finetuning method for large language models. It typically underperforms full finetuning on the target domain ("learns less"), but better preserves the base model's capabilities outside that domain ("forgets less"), acting as a stronger regularizer and producing more diverse generations.
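To make the idea concrete, here is a minimal sketch of LoRA's low-rank update, assuming PyTorch; the class and parameter names are illustrative, not the paper's or any library's code. The pretrained weight W stays frozen, and only a low-rank product B·A is trained on top of it.

```python
# Minimal LoRA sketch (assumes PyTorch; names are illustrative):
# a frozen weight W plus a trainable low-rank update B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight (not updated during finetuning)
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Trainable low-rank factors: only rank * (in + out) extra parameters
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # starts at zero, so initial update is zero
        self.scaling = alpha / rank

    def forward(self, x):
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T  # low-rank adaptation path
        return base + self.scaling * update

# Usage: swap projection layers for LoRALinear and train only lora_A and lora_B.
layer = LoRALinear(4096, 4096, rank=16)
y = layer(torch.randn(2, 4096))
```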

https://arxiv.org/abs/2405.09673

YouTube: @arxivpapers

TikTok: @arxiv_papers

Apple Podcasts: https://podcasts.apple.com/us/podcast...

Spotify: https://podcasters.spotify.com/pod/sh...

