"Neural Representations" is a category of Explainable AI methods that explore how models represent their inputs to make their predictions. A great first step to several methods in this category is to understand the neural activations table/matrix. This video is a gentle introduction to collecting activations, and then how to use them for an explainability method called "Dataset Examples" which can be seen in action in the OpenAI Microscope (https://microscope.openai.com/models).
Introduction (0:00)
Collecting activations of output neurons (1:00)
Getting activations of multiple input examples (3:24)
Sorting the activation matrix (4:25)
Collecting all activations (5:28)
OpenAI Microscope and Dataset Examples (7:33)
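The steps in the chapters above can be sketched in a few lines of code. This is a minimal illustration, not the video's actual code: it uses a hypothetical one-layer "model" (a random weight matrix with a ReLU) standing in for a real network layer, builds the activation matrix (one row per input example, one column per neuron), and then sorts one neuron's column to find its top-activating dataset examples.

```python
import numpy as np

# Hypothetical tiny "model": a single linear layer whose outputs we treat
# as the neurons of interest. In practice you would capture a real layer's
# activations (e.g. with a forward hook) instead.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))  # 4 input features -> 3 neurons

def layer_activations(x):
    """ReLU activations of the layer for one input vector."""
    return np.maximum(x @ weights, 0.0)

# Build the activation matrix: one row per dataset example,
# one column per neuron.
dataset = rng.normal(size=(10, 4))  # 10 toy input examples
activations = np.stack([layer_activations(x) for x in dataset])  # (10, 3)

# "Dataset Examples": for a chosen neuron, sort the examples by how
# strongly they activate it, and keep the indices of the top few.
neuron = 2
top_examples = np.argsort(activations[:, neuron])[::-1][:3]
print(top_examples)  # indices of the 3 inputs that most excite this neuron
```

With real image models, the rows of this matrix index images rather than toy vectors, and the top-activating images per neuron are exactly what OpenAI Microscope displays as dataset examples.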
------
Feature Visualization: https://distill.pub/2017/feature-visu...
@YannicKilcher on Feature Visualization and the OpenAI Microscope: • Feature Visualization & The OpenAI mi...
Explainable AI Cheat Sheet: https://ex.pegg.io/
Explainable AI Video: • Explainable AI Cheat Sheet - Five Key...
-----
Twitter: / jayalammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/
More videos by Jay:
The Narrated Transformer Language Model
• The Narrated Transformer Language Model
Jay's Visual Intro to AI
• Jay's Visual Intro to AI
How GPT-3 Works - Easily Explained with Animations
• How GPT3 Works - Easily Explained wit...
Video published by Jay Alammar on 18 May 2021.