New course with Hugging Face: Quantization Fundamentals

Published: April 15, 2024
on the DeepLearningAI channel

Enroll now: https://bit.ly/3VUbDMo

Introducing a new short course: Quantization Fundamentals with Hugging Face.

Generative AI models often exceed the capabilities of consumer-grade hardware and are expensive to run. Compressing models through methods such as quantization makes them more efficient, faster, and accessible, while minimizing performance degradation.

Join this course and:

Learn to quantize any open-source model with linear quantization using the Quanto library (see the first sketch after this list).
Get an overview of how linear quantization is implemented. This form of quantization can be applied to compress any model, including LLMs, vision models, and more.
Apply “downcasting,” another form of quantization, with the Transformers library, which lets you load models at roughly half their normal size by using the BFloat16 data type (see the second sketch after this list).
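
As a rough illustration of the first two points, here is a minimal sketch of 8-bit linear quantization with Quanto. The model name is only an example, and the import path is an assumption: newer releases ship the library as `optimum-quanto`, while the course uses the standalone `quanto` package, so adjust the import to match your installed version.

```python
# Minimal sketch: 8-bit linear quantization of an open-source model with Quanto.
# Assumes the `optimum-quanto` package is installed; the older standalone
# package was simply named `quanto`. The model name below is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.quanto import quantize, freeze, qint8

model_name = "EleutherAI/pythia-410m"  # example checkpoint; any causal LM works
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Linear (affine) quantization maps a float tensor x to int8 values q via
#   q = round(x / scale) + zero_point,   x_hat = scale * (q - zero_point)
# where scale and zero_point are chosen from the tensor's value range.
quantize(model, weights=qint8)  # swap Linear layers for quantized versions
freeze(model)                   # convert the float weights to int8 in place

inputs = tokenizer("Quantization makes models", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```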
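
And a sketch of the downcasting approach from the last point: Transformers can load weights directly in BFloat16 through the standard `torch_dtype` argument, roughly halving the memory footprint relative to float32. The model name is again just an example.

```python
# Minimal sketch: "downcasting" a model to BFloat16 while loading it.
import torch
from transformers import AutoModelForCausalLM

model_name = "EleutherAI/pythia-410m"  # example checkpoint

# Default load: weights are kept in float32 (4 bytes per parameter).
model_fp32 = AutoModelForCausalLM.from_pretrained(model_name)

# Downcast load: weights are stored as bfloat16 (2 bytes per parameter),
# roughly halving memory use at a small cost in precision.
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
)

print(model_fp32.get_memory_footprint() / 1e6, "MB in float32")
print(model_bf16.get_memory_footprint() / 1e6, "MB in bfloat16")
```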

By the end of this course, you’ll have a foundation in quantization techniques and be able to apply them to compress and optimize your own open source models, allowing them to run on a wide variety of devices, including smartphones, personal computers, and edge devices.

Learn more: https://bit.ly/3VUbDMo
