Llama.cpp for FULL LOCAL Semantic Router

Published: 19 January 2024
on channel: James Briggs
12,384 views · 233 likes

Using a fully local semantic router for agentic AI, with a llama.cpp LLM and Hugging Face embedding models.

There are many reasons we might decide to use local LLMs rather than a third-party service like OpenAI. It could be cost, privacy, compliance, or fear of the OpenAI apocalypse. To help you out, we made Semantic Router fully local, with local LLMs available via llama.cpp, such as Mistral 7B.

Using llama.cpp also enables the use of quantized GGUF models, reducing the memory footprint of deployed models and allowing even 13-billion-parameter models to run with hardware acceleration on an Apple M1 Pro chip. We also use LLM grammars to get highly reliable output even from the smallest of models.
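
To give a sense of how grammars work, here is a minimal sketch using llama-cpp-python's LlamaGrammar. The GGUF path is a placeholder for whichever quantized model you download, and the grammar is a toy example, not the one used in the video:

```python
from llama_cpp import Llama, LlamaGrammar

# Placeholder path: any quantized GGUF chat model works the same way.
llm = Llama(
    model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA when available
    n_ctx=2048,
)

# A GBNF grammar that constrains decoding to exactly "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

out = llm(
    "Is Paris the capital of France? Answer yes or no.",
    grammar=grammar,
    max_tokens=4,
)
print(out["choices"][0]["text"])  # always "yes" or "no", never free-form text
```

Because every sampled token must keep the output inside the grammar, even a small model cannot drift into malformed text, which is what makes grammar-constrained decoding useful for structured, function-calling-style responses.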

In this video, we'll use Hugging Face's MiniLM encoder for embeddings and a GGUF-quantized Mistral-7B-Instruct model running on llama.cpp.
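
Putting the pieces together, the fully local setup looks roughly like the sketch below. Class and module names follow semantic-router as of early 2024 (Route, RouteLayer, HuggingFaceEncoder, LlamaCppLLM); the model path and the example route are placeholders:

```python
from llama_cpp import Llama
from semantic_router import Route, RouteLayer
from semantic_router.encoders import HuggingFaceEncoder
from semantic_router.llms.llamacpp import LlamaCppLLM

# Local embedding model (defaults to a MiniLM sentence-transformer).
encoder = HuggingFaceEncoder()

# Local LLM loaded from a GGUF file (placeholder path).
_llm = Llama(
    model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_gpu_layers=-1,
    n_ctx=2048,
)
llm = LlamaCppLLM(name="Mistral-7B-Instruct", llm=_llm, max_tokens=None)

# Routes are defined by example utterances; the encoder
# matches incoming queries against them semantically.
chitchat = Route(
    name="chitchat",
    utterances=[
        "how's the weather today?",
        "lovely day isn't it?",
    ],
)

rl = RouteLayer(encoder=encoder, routes=[chitchat], llm=llm)
print(rl("it's so hot outside today").name)  # -> "chitchat"
```

Nothing here calls out to a third-party API: the encoder and the LLM both run on your own hardware, which is the whole point of the fully local router.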

⭐ GitHub Repo:
https://github.com/aurelio-labs/seman...

📌 Code:
https://github.com/aurelio-labs/seman...

🔥 Semantic Router Course:
https://www.aurelio.ai/course/semanti...

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord: / discord

Twitter: / jamescalam
LinkedIn: / jamescalam

