Local LLM Fine-tuning on Mac (M1 16GB)

Published: 29 July 2024
on channel: Shaw Talebi

Get exclusive access to AI resources and project ideas: https://the-data-entrepreneurs.kit.co...

Here, I show how to fine-tune an LLM locally on an M-series Mac. The example adapts Mistral-7B to respond to YouTube comments in my likeness.
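For readers following along outside the video, below is a minimal, hedged sketch of the two pieces the example relies on: formatting comment-response pairs into the JSONL layout the mlx-examples LoRA script expects (one JSON object with a "text" field per line), and running inference on a quantized model with the mlx-lm Python API. The file names, Hub repo id, and prompt template here are illustrative assumptions, not the exact ones used in the video; see the GitHub repo linked below for the actual code.

```python
# Sketch only -- file names, repo id, and prompt template are assumptions,
# not taken verbatim from the video. See the linked GitHub repo for the real code.
import json
from mlx_lm import load, generate

# 1) Format comment/response pairs into the JSONL layout used by the
#    mlx-examples LoRA script (each line is {"text": "..."}).
pairs = [
    {"comment": "Great breakdown, thanks!", "response": "Glad it helped! -Shaw"},
]
with open("train.jsonl", "w") as f:
    for p in pairs:
        text = f"[INST] {p['comment']} [/INST] {p['response']}"
        f.write(json.dumps({"text": text}) + "\n")

# 2) Load a 4-bit Mistral-7B model from the mlx-community Hub org and
#    generate a reply (before fine-tuning, the style won't match yet).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
reply = generate(
    model,
    tokenizer,
    prompt="[INST] Great breakdown, thanks! [/INST]",
    max_tokens=140,
)
print(reply)
```

The same load/generate calls are reused after training by pointing mlx-lm at the saved LoRA adapter weights; the fine-tuning run itself is driven by the LoRA script referenced in [2] under More Resources.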

📰 Blog: https://towardsdatascience.com/local-...
💻 GitHub Repo: https://github.com/ShawhinT/YouTube-B...

🎥 QLoRA:    • 3 Ways to Make a Custom AI Assistant ...  
🎥 Fine-tuning with OpenAI:    • 3 Ways to Make a Custom AI Assistant ...  
▶️ Series Playlist:    • Large Language Models (LLMs)  

More Resources:
[1] MLX: https://ml-explore.github.io/mlx/buil...
[2] Original code: https://github.com/ml-explore/mlx-exa...
[3] MLX community: https://huggingface.co/mlx-community
[4] Model: https://huggingface.co/mlx-community/...
[5] LoRA paper: https://arxiv.org/abs/2106.09685

--
Homepage: https://www.shawhintalebi.com/

Intro - 0:00
Motivation - 0:56
MLX - 1:57
GitHub Repo - 3:30
Setting up environment - 4:09
Example Code - 6:23
Inference with un-finetuned model - 8:57
Fine-tuning with QLoRA - 11:22
Aside: dataset formatting - 13:54
Running local training - 16:07
Inference with finetuned model - 18:20
Note on LoRA rank - 22:03

