Faster LLM Function Calling — Dynamic Routes

Published: 15 January 2024
on channel: James Briggs
10,552 views · 261 likes

LLM function calling can be slow, particularly for AI agents. Using Semantic Router's dynamic routes, we can make this much faster and scale to thousands of tools and functions. Here we see how to use it with OpenAI's GPT-3.5 Turbo, but the library also supports Cohere and Llama.cpp for local deployments.

In Semantic Router there are two types of routes to choose from. Both are built with the Route object; the only difference between them is that a static route simply returns its Route.name when chosen, whereas a dynamic route additionally uses an LLM call to produce parameter input values.
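To make the distinction concrete, here is a minimal conceptual sketch, not the actual semantic-router API: a `Route` pairs a name with example utterances, and a toy matcher picks the route whose utterances best match the query (the real library compares embedding similarity, not word overlap).

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Conceptual sketch only -- names and matching logic are illustrative,
# not the semantic-router library's real API.

@dataclass
class Route:
    name: str
    utterances: list
    # For a dynamic route: a callable whose arguments an LLM would fill in.
    function: Optional[Callable] = None

def choose_route(query: str, routes: list) -> Optional[Route]:
    """Naive matcher: pick the route sharing the most words with the
    query. A real semantic router compares embedding similarity."""
    def overlap(route: Route) -> int:
        q = set(query.lower().split())
        return max(len(q & set(u.lower().split())) for u in route.utterances)
    best = max(routes, key=overlap)
    return best if overlap(best) > 0 else None

math_route = Route(name="math", utterances=["what is the square root of 64"])
chat_route = Route(name="chitchat", utterances=["how is the weather today"])

route = choose_route("square root of 81?", [math_route, chat_route])
print(route.name)  # math
```

A static route stops here and just returns the name; a dynamic route would go on to call an LLM to generate inputs for `route.function`.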

For example, a static route can tell us that a query is about mathematics by returning the route name (which could be "math", for example). A dynamic route can generate additional values: it may decide a query is about maths, but it can also generate Python code that we can later execute to answer the user's query. That output might look like "math", "import math; output = math.sqrt(64)".
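The generated code string from that example could then be executed to produce the answer. A rough sketch of that last step (in practice you would never `exec` LLM output without sandboxing and validation):

```python
# The string a dynamic "math" route might have generated (from the
# example above); the snippet assigns its result to `output`.
generated = "import math; output = math.sqrt(64)"

# Caution: exec-ing LLM-generated code is unsafe without a sandbox;
# this is only to show how the route's output can be consumed.
namespace = {}
exec(generated, namespace)
answer = namespace["output"]
print(answer)  # 8.0
```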

⭐ GitHub Repo:
https://github.com/aurelio-labs/seman...

📌 Code:
https://github.com/aurelio-labs/seman...

🔥 Semantic Router Course:
https://www.aurelio.ai/course/semanti...

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
  / discord  

Twitter: @jamescalam
LinkedIn: jamescalam

00:00 Fast LLM Function Calling
00:56 Semantic Router Setup for LLMs
02:20 Function Calling Schema
04:04 Dynamic Routes for Function Calling
05:51 How we can use Faster Agents
