In this video, we'll explore how to convert and run Google's Gemma 2 language model locally on your Mac using the MLX framework. You'll learn:
What Google Gemma 2 is and its variants
How to convert a Hugging Face/PyTorch model to MLX
Steps to run Gemma 2 on your local machine
What is Google Gemma 2?
Gemma 2 is a family of lightweight, state-of-the-art open language models built with the same research and technology behind Google's Gemini models. It comes in three sizes:
1. Gemma 2 2B (about 2.6B parameters)
2. Gemma 2 9B
3. Gemma 2 27B
Each size is available in pre-trained and instruction-tuned variants.
Is Google Gemma 2 free?
Yes. The Gemma 2 weights are free to download and use, and are available on the Hugging Face Hub (you just need to accept Google's Gemma terms of use on the model page).
Model Weights
Quantized for MLX: https://huggingface.co/collections/ml...
Full Precision: https://huggingface.co/collections/go...
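To run one of these quantized MLX checkpoints locally, a minimal sketch with the mlx-lm package looks like the following. The repo id mlx-community/gemma-2-9b-it-4bit is an example; substitute the exact one from the collection linked above.

# Sketch: load a quantized Gemma 2 checkpoint and generate text locally with mlx-lm.
# Install the package first: pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-2-9b-it-4bit")  # example repo id
response = generate(
    model,
    tokenizer,
    prompt="Explain what MLX is in one sentence.",
    max_tokens=128,
    verbose=True,  # print tokens and generation speed as they stream
)
print(response)

The same generation can also be run from the terminal with python -m mlx_lm.generate --model <repo-or-path> --prompt "...".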
Additional Resources
Gemma 2 MLX Conversion Script: https://github.com/Blaizzy/LLMOps/blo... (a minimal conversion sketch follows at the end of this list)
Gemma 2 Transformers Implementation: https://github.com/huggingface/transf...
Gemma 2 PyTorch Implementation: https://github.com/google/gemma_pytor...
Fine-tuning Gemma Guide: https://unsloth.ai/blog/gemma2
Gradio App for MLX: https://github.com/SOSONAGI/mlx-simpl...
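If you'd rather convert the model yourself instead of using the ready-made quantized weights, here is a minimal sketch of the conversion step using the mlx-lm Python package. It is a simplified stand-in for the conversion script linked above; the source repo and output directory names are examples, not necessarily the ones used in the video.

# Sketch: convert a Hugging Face / PyTorch Gemma 2 checkpoint to MLX and quantize it.
# Requires accepting the Gemma license on Hugging Face and being logged in
# (huggingface-cli login), plus: pip install mlx-lm
from mlx_lm import convert

convert(
    hf_path="google/gemma-2-9b-it",   # source Hugging Face repo (example)
    mlx_path="gemma-2-9b-it-4bit",    # local output directory (example)
    quantize=True,                    # apply 4-bit quantization
)

The resulting directory can then be passed straight to load() in the generation sketch shown earlier.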
Connect with Me
LinkedIn: prince-canuma
Twitter: prince_canuma
Medium: prince-canuma