Jay comments on one of his favorite machine learning articles, which helped him break into NLP. This popular article showed the tech community how machine learning was starting to handle text in intriguing new ways. These developments in sequence-to-sequence models paved the way for recent models like GPT-3.
Contents:
Introduction (0:00)
Character-level language models (2:00)
RNN types figure (2:57)
Fun with RNNs (7:18)
Prediction and activation visualization (13:48)
Neuron visualization (19:08)
Subsequent related work (21:58)
The Unreasonable Effectiveness of Recurrent Neural Networks
Author: Andrej Karpathy (@karpathy)
https://karpathy.github.io/2015/05/21...
------------------------------------
Twitter: @jayalammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/
More videos by Jay:
Explainable AI Cheat Sheet - Five Key Categories
How GPT-3 Works - Easily Explained with Animations
Jay's Visual Intro to AI