"Neural Representations" is a category of Explainable AI methods that explore how models represent their inputs to make their predictions. A great first step to several methods in this category is to understand the neural activations table/matrix. This video is a gentle introduction to collecting activations, and then how to use them for an explainability method called "Dataset Examples" which can be seen in action in the OpenAI Microscope (https://microscope.openai.com/models).
Introduction (0:00)
Collecting activations of output neurons (1:00)
Getting activations of multiple input examples (3:24)
Sorting the activation matrix (4:25)
Collecting all activations (5:28)
OpenAI Microscope and Dataset Examples (7:33)
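The workflow in the chapters above can be sketched in a few lines of NumPy. This is a toy illustration, not code from the video: the "model" here is just a random weight matrix standing in for a real trained layer, and the shapes are made up. The key ideas are the same: build an activation matrix with one row per input example and one column per neuron, then sort each neuron's column to find its top-activating dataset examples.

```python
import numpy as np

# Toy setup: 8 input examples, a layer with 4 neurons.
# In practice the activations come from a forward pass through a trained
# model; here a random weight matrix stands in for that layer.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(8, 5))      # 8 examples, 5 features each
weights = rng.normal(size=(5, 4))     # hypothetical layer with 4 neurons

# Activation matrix: one row per input example, one column per neuron.
activations = np.maximum(inputs @ weights, 0)   # ReLU activations, shape (8, 4)

# "Dataset Examples": for each neuron, sort the examples by how strongly
# they activate it and keep the indices of the top-k.
k = 3
top_examples = np.argsort(-activations, axis=0)[:k]   # shape (k, 4)

for neuron in range(activations.shape[1]):
    print(f"neuron {neuron}: top examples {top_examples[:, neuron].tolist()}")
```

For a real network you would collect `activations` by running a dataset through the model and recording a chosen layer's outputs (e.g. with a forward hook), then apply the same sort per neuron.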
------
Feature Visualization: https://distill.pub/2017/feature-visu...
@YannicKilcher on Feature Visualization and the OpenAI Microscope: • Feature Visualization & The OpenAI mi...
Explainable AI Cheat Sheet: https://ex.pegg.io/
Explainable AI Video: • Explainable AI Cheat Sheet - Five Key...
-----
Twitter: @jayalammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/
More videos by Jay:
The Narrated Transformer Language Model
• The Narrated Transformer Language Model
Jay's Visual Intro to AI
• Jay's Visual Intro to AI
How GPT-3 Works - Easily Explained with Animations
• How GPT3 Works - Easily Explained wit...
Uploaded by Jay Alammar, 18 May 2021.