In this video, we start a new series in which we explore the first 5 items on the reading list that Ilya Sutskever, former OpenAI chief scientist, gave to John Carmack. Ilya reportedly added: "If you really learn all of these, you’ll know 90% of what matters today."
References
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Transformer Self-Attention Mechanism Explained: • Transformer Self-Attention Mechanism ...
Long Short-Term Memory (LSTM) Equations Explained: • Long Short-Term Memory (LSTM) Equatio...
Reading List
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The Annotated Transformer: https://nlp.seas.harvard.edu/annotate...
The First Law of Complexodynamics: https://scottaaronson.blog/?p=762
The Unreasonable Effectiveness of Recurrent Neural Networks: https://karpathy.github.io/2015/05/21...
Understanding LSTM Networks: https://colah.github.io/posts/2015-08...
Recurrent Neural Network Regularization: https://arxiv.org/pdf/1409.2329
Related Videos
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Why Language Models Hallucinate: • Why Language Models Hallucinate
Grounding DINO, Open-Set Object Detection: • Object Detection Part 8: Grounding DI...
Transformer Self-Attention Mechanism Explained: • Transformer Self-Attention Mechanism ...
How to Fine-tune Large Language Models Like ChatGPT with Low-Rank Adaptation (LoRA): • How to Fine-tune Large Language Model...
Multi-Head Attention (MHA), Multi-Query Attention (MQA), Grouped Query Attention (GQA) Explained: • Multi-Head Attention (MHA), Multi-Que...
LLM Prompt Engineering with Random Sampling: Temperature, Top-k, Top-p: • LLM Prompt Engineering with Random Sa...
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits: • The Era of 1-bit LLMs: All Large Lang...
Contents
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
00:00 - Intro
00:59 - Item 1: The Annotated Transformer
01:29 - Item 2: The First Law of Complexodynamics
02:21 - Item 3: The Unreasonable Effectiveness of Recurrent Neural Networks
03:05 - Item 4: Understanding LSTM Networks
03:36 - Item 5: Recurrent Neural Network Regularization
04:11 - Outro
Follow Me
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🐦 Twitter: @datamlistic
📸 Instagram: @datamlistic
📱 TikTok: @datamlistic
Channel Support
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The best way to support the channel is to share the content. ;)
If you'd like to also support the channel financially, donating the price of a coffee is always warmly welcomed! (completely optional and voluntary)
► Patreon: / datamlistic
► Bitcoin (BTC): 3C6Pkzyb5CjAUYrJxmpCaaNPVRgRVxxyTq
► Ethereum (ETH): 0x9Ac4eB94386C3e02b96599C05B7a8C71773c9281
► Cardano (ADA): addr1v95rfxlslfzkvd8sr3exkh7st4qmgj4ywf5zcaxgqgdyunsj5juw5
► Tether (USDT): 0xeC261d9b2EE4B6997a6a424067af165BAA4afE1a
#transformer #lstm #rnn #complexodynamic #aireadinglist