Managing Sources of Randomness When Training Deep Neural Networks

Published: April 16, 2024
on the channel: Sebastian Raschka
2,499 views
96 likes

Sebastian's books: https://sebastianraschka.com/books/

REFERENCES:
1. Link to the code on GitHub: https://github.com/rasbt/MachineLearn...
2. Link to the book mentioned at the end of the video: https://nostarch.com/machine-learning...

DESCRIPTION:
In this video, we discuss how to manage common sources of randomness when training deep neural networks, including model weight initialization, dataset sampling and shuffling, nondeterministic algorithms, runtime algorithm differences, hardware and driver variations, and generative AI sampling.
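As a rough illustration of the first few points, the sketch below seeds weight initialization and data shuffling and opts into deterministic algorithms in PyTorch. This is a minimal example assembled for this description, not the exact code from the video or the linked GitHub repository; the model and data are made up, while `torch.manual_seed`, `torch.use_deterministic_algorithms`, and the seeded `DataLoader` generator are standard PyTorch APIs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_everything(seed: int = 123) -> None:
    # Seed PyTorch's RNGs on CPU and (if present) all CUDA devices,
    # which controls random weight initialization, dropout, etc.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


seed_everything(123)

# 1. Model weight initialization: with the seed set above, the randomly
#    initialized weights are reproducible across runs.
model = torch.nn.Linear(10, 2)

# 2. Dataset sampling and shuffling: pass an explicitly seeded generator
#    to the DataLoader so the shuffle order is reproducible.
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
loader = DataLoader(
    dataset,
    batch_size=16,
    shuffle=True,
    generator=torch.Generator().manual_seed(123),
)

# 3. Nondeterministic algorithms: ask PyTorch to prefer deterministic
#    kernels where available (and raise an error where they are not).
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False  # disable nondeterministic autotuning
```

Note that hardware- and driver-level differences (point 5 in the outline) can still produce slightly different results across machines even when all of the above is fixed.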

---

To support this channel, please consider purchasing a copy of my books: https://sebastianraschka.com/books/

---

https://magazine.sebastianraschka.com

---

OUTLINE:
00:00 – Introduction
01:14 – 1. Model Weight Initialization
04:28 – 2. Dataset Sampling and Shuffling
07:45 – 3. Nondeterministic Algorithms
11:13 – 4. Different Runtime Algorithms
14:30 – 5. Hardware and Drivers
15:39 – 6. Randomness and Generative AI
20:56 – Recap
22:34 – Surprise
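Item 6 in the outline concerns randomness in generative AI, which typically enters through sampling from the model's output distribution. The snippet below is a hedged, self-contained sketch (not code from the video): it samples from a toy logits vector with temperature scaling, where a seeded generator makes the sampled tokens reproducible and greedy argmax decoding removes sampling randomness entirely.

```python
import torch


def sample_next_token(logits: torch.Tensor, temperature: float = 1.0,
                      generator: torch.Generator | None = None) -> int:
    # Greedy decoding: no sampling randomness at all.
    if temperature == 0.0:
        return int(torch.argmax(logits))
    # Temperature scaling: higher values flatten the distribution (more random),
    # lower values sharpen it, but sampling remains stochastic.
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1, generator=generator))


# Toy logits standing in for a language model's output over a 5-token vocabulary.
logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])

# Seeding the generator makes the sampled sequence of tokens reproducible.
g = torch.Generator().manual_seed(123)
print([sample_next_token(logits, temperature=0.8, generator=g) for _ in range(5)])
```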

