Jay comments on one of his favorite machine learning articles, which helped him break into NLP. This popular article demonstrated to the tech community how machine learning was starting to handle text in intriguing new ways. These developments in sequence-to-sequence models paved the way for the rapid acceleration of recent models like GPT-3.
Contents:
Introduction (0:00)
Character-level language models (2:00)
RNN types figure (2:57)
Fun with RNNs (7:18)
Prediction and activation visualization (13:48)
Neuron visualization (19:08)
Subsequent related work (21:58)
The Unreasonable Effectiveness of Recurrent Neural Networks
Author: Andrej Karpathy (@karpathy)
https://karpathy.github.io/2015/05/21...
------------------------------------
Twitter: @jayalammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/
More videos by Jay:
Explainable AI Cheat Sheet - Five Key Categories
How GPT-3 Works - Easily Explained with Animations
Jay's Visual Intro to AI