LM Studio 0.3.4 ships with an MLX engine for running on-device LLMs super efficiently on Apple Silicon Macs.
MLX support in LM Studio 0.3.4 includes:
Search & download any supported MLX LLM from Hugging Face (just like you've been doing with GGUF models)
Use MLX models via the Chat UI, or from your code using an OpenAI-like local server running on localhost (see the example after this list)
Enforce LLM responses in specific JSON formats, thanks to Outlines (see the structured-output sketch further below)
Use vision models like LLaVA via the Chat UI or the API (thanks to mlx-vlm)
Load and run multiple simultaneous LLMs. You can even mix and match llama.cpp and MLX models!
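
Here is a minimal sketch of talking to the local server from Python with the OpenAI client. It assumes LM Studio's default localhost port 1234; the model identifier below is just an example placeholder for whichever MLX model you downloaded in the app.

```python
from openai import OpenAI

# LM Studio's local server listens on localhost:1234 by default (assumption:
# you haven't changed the port). The api_key value is ignored by the server.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Example model name only -- substitute the MLX model you actually loaded.
response = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain MLX in one sentence."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```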
https://lmstudio.ai/blog/lmstudio-v0.3.4
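
And a sketch of the JSON-enforced output mentioned above, assuming the server accepts an OpenAI-style `response_format` with a JSON schema; the schema and model name here are made-up examples, not part of the release notes.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Hypothetical schema: constrain the reply to a small JSON object.
schema = {
    "name": "book_info",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
        },
        "required": ["title", "year"],
    },
}

response = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",  # example model name
    messages=[{"role": "user", "content": "Name a classic sci-fi novel and its year."}],
    response_format={"type": "json_schema", "json_schema": schema},
)

# The content should be a JSON string matching the schema above.
print(response.choices[0].message.content)
```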
🔗 Links 🔗
❤️ If you want to support the channel ❤️
Support here:
Patreon - / 1littlecoder
Ko-Fi - https://ko-fi.com/1littlecoder
🧭 Follow me on 🧭
Twitter - / 1littlecoder
Linkedin - / amrrs