In this video, we explore how the low-rank adaptation (LoRA) algorithm is used to fine-tune large language models (LLMs) such as ChatGPT, Llama, and Bard.
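As a quick companion to the video, here is a minimal PyTorch sketch of the core idea (not the paper's reference implementation; the LoRALinear class name and the r/alpha defaults are illustrative): the pretrained weight W is frozen, and only a low-rank update ΔW = BA is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    Instead of updating the full weight W, LoRA learns delta_W = B @ A,
    where A is (r x in_features) and B is (out_features x r) with r small,
    so the layer effectively computes x @ (W + B @ A)^T.
    (Illustrative sketch; names and defaults are not from the paper's code.)
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A starts with small noise, B with zeros,
        # so the adapter is initially a no-op (delta_W = 0).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r  # scaling factor as described in the paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Usage: wrap an existing projection, e.g. a 768-dim attention matrix
layer = LoRALinear(nn.Linear(768, 768), r=4)
```

With r much smaller than the layer's dimensions, the trainable parameter count drops from out_features x in_features to r x (in_features + out_features), which is what makes LoRA fine-tuning so cheap.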
References
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LoRA: Low-Rank Adaptation of Large Language Models paper: https://arxiv.org/abs/2106.09685
Related Videos
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Why Language Models Hallucinate
Grounding DINO, Open-Set Object Detection (Object Detection Part 8)
Detection Transformers (DETR), Object Queries (Object Detection Part 7)
Wav2vec2: A Framework for Self-Supervised Learning of Speech Representations - Paper Explained
Transformer Self-Attention Mechanism Explained
Contents
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
00:00 - Intro
00:43 - Fine-tuning LLMs
01:32 - LoRA intro
02:30 - Low-rank matrix
03:12 - LoRA low rank decomposition
03:43 - Outro
Follow Me
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🐦 Twitter: @datamlistic
📸 Instagram: @datamlistic
📱 TikTok: @datamlistic
Channel Support
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The best way to support the channel is to share the content. ;)
If you'd also like to support the channel financially, donating the price of a coffee is always warmly welcomed! (completely optional and voluntary)
► Patreon: patreon.com/datamlistic
► Bitcoin (BTC): 3C6Pkzyb5CjAUYrJxmpCaaNPVRgRVxxyTq
► Ethereum (ETH): 0x9Ac4eB94386C3e02b96599C05B7a8C71773c9281
► Cardano (ADA): addr1v95rfxlslfzkvd8sr3exkh7st4qmgj4ywf5zcaxgqgdyunsj5juw5
► Tether (USDT): 0xeC261d9b2EE4B6997a6a424067af165BAA4afE1a
#llm #largelanguagemodels #chatgpt #llama2 #lora