RLHF: Training Language Models to Follow Instructions with Human Feedback - Paper Explained

Published: March 22, 2024
Channel: DataMListic
Views: 780
Likes: 19

In this video we talk about how large language models (LLMs) can be trained to follow instructions with human feedback. The paper introduces InstructGPT, which fine-tunes GPT-3 with human feedback to align the model with user intent across a wide range of tasks. By collecting datasets of labeler demonstrations and rankings of model outputs, InstructGPT, despite having far fewer parameters than GPT-3, is preferred by human evaluators and shows improvements in truthfulness and reductions in toxic output generation.
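
To make the ranking step more concrete, here is a minimal sketch in PyTorch of the pairwise ranking loss the paper uses to train the reward model on labeler comparisons (the function and variable names here are illustrative, not taken from the paper's code):

import torch
import torch.nn.functional as F

def ranking_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected),
    # which pushes the reward of the preferred response above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage: scalar rewards for four (chosen, rejected) comparison pairs.
chosen = torch.tensor([1.2, 0.7, 2.1, 0.3])
rejected = torch.tensor([0.4, 0.9, 1.5, -0.2])
print(ranking_loss(chosen, rejected))  # small when chosen consistently outscores rejected

The trained reward model then provides the scalar reward signal for the final step, where the policy is fine-tuned with PPO.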

References
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
“Training language models to follow instructions with human feedback” paper: https://arxiv.org/pdf/2203.02155.pdf

Related Videos
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Chain-of-Verification (COVE) Reduces Hallucination in Large Language Models
Why Language Models Hallucinate
Transformer Self-Attention Mechanism Explained
Jailbroken: How Does LLM Safety Training Fail? - Paper Explained
How to Fine-tune Large Language Models Like ChatGPT with Low-Rank Adaptation (LoRA)
Multi-Head Attention (MHA), Multi-Query Attention (MQA), Grouped Query Attention (GQA) Explained
LLM Prompt Engineering with Random Sampling: Temperature, Top-k, Top-p

Contents
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
00:00 - Abstract & Intro
03:01 - Main Results - Human Preferences
04:45 - RLHF Overview
07:13 - Methods and Experiments
14:32 - Results
18:45 - Discussion & Conclusions

Follow Me
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🐦 Twitter: @datamlistic
📸 Instagram: @datamlistic
📱 TikTok: @datamlistic

Channel Support
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The best way to support the channel is to share the content. ;)

If you'd also like to support the channel financially, donating the price of a coffee is always warmly welcomed! (completely optional and voluntary)
► Patreon: /datamlistic
► Bitcoin (BTC): 3C6Pkzyb5CjAUYrJxmpCaaNPVRgRVxxyTq
► Ethereum (ETH): 0x9Ac4eB94386C3e02b96599C05B7a8C71773c9281
► Cardano (ADA): addr1v95rfxlslfzkvd8sr3exkh7st4qmgj4ywf5zcaxgqgdyunsj5juw5
► Tether (USDT): 0xeC261d9b2EE4B6997a6a424067af165BAA4afE1a

#llm #rlhf

