Are Large Language Models (LLMs) just advanced versions of autocomplete? While some AI experts describe them as “next-word predictors,” this is an oversimplification. In this video, we dive deep into how LLMs (like ChatGPT, Claude, and Gemini) actually choose the next word when generating text.
We’ll explore the difference between the modeling and decoding phases, and how decoding strategies—such as greedy search and beam search—impact the quality and creativity of a model’s output.
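For intuition, here is a minimal sketch of those two decoding strategies over a hand-made toy probability table. The table, token names, and probabilities are invented for illustration; a real LLM would produce next-token probabilities from a neural network, not a dictionary.

```python
import math

# Toy next-token model: maps a prefix (tuple of tokens) to {token: probability}.
# Purely illustrative stand-in for a real LLM's softmax output.
TOY_MODEL = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"dog": 0.4, "nice": 0.5, "<eos>": 0.1},
    ("a",): {"dog": 0.9, "<eos>": 0.1},
    ("the", "dog"): {"<eos>": 1.0},
    ("the", "nice"): {"dog": 1.0},
    ("a", "dog"): {"<eos>": 1.0},
    ("the", "nice", "dog"): {"<eos>": 1.0},
}

def greedy_search(model, max_len=5):
    """Pick the single most probable token at every step."""
    seq = ()
    for _ in range(max_len):
        probs = model[seq]
        token = max(probs, key=probs.get)
        if token == "<eos>":
            break
        seq += (token,)
    return seq

def beam_search(model, beam_width=2, max_len=5):
    """Keep the beam_width best partial sequences, scored by
    total log-probability, and return the best finished one."""
    beams = [((), 0.0)]          # (sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, p in model[seq].items():
                new_score = score + math.log(p)
                if token == "<eos>":
                    finished.append((seq, new_score))
                else:
                    candidates.append((seq + (token,), new_score))
        if not candidates:
            break
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = candidates[:beam_width]
    finished.extend(beams)
    return max(finished, key=lambda x: x[1])[0]

print(greedy_search(TOY_MODEL))   # locally best first token, lower overall probability
print(beam_search(TOY_MODEL))     # explores alternatives, finds a more probable sequence
```

In this toy table, greedy search commits to "the" (probability 0.6) and ends up with a sequence of total probability 0.3, while beam search keeps "a" alive and finds "a dog" with total probability 0.36 — exactly the kind of case the video's greedy vs. beam comparison is about.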
Video sections:
00:00 Are LLMs just autocomplete?
00:20 Token selection algorithms
01:32 Modeling vs. Decoding
03:13 What is language model decoding?
04:12 The probability of text
05:41 Greedy Search
06:49 Beam Search
07:54 What is neural text degeneration?
▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬▬
🖥️ Website: https://www.assemblyai.com
🐦 Twitter: / assemblyai
🦾 Discord: / discord
▶️ Subscribe: https://www.youtube.com/c/AssemblyAI?...
🔥 We're hiring! Check our open roles: https://www.assemblyai.com/careers
🔑 Get your AssemblyAI API key here: https://www.assemblyai.com/?utm_sourc...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#MachineLearning #DeepLearning #llms #algorithm
Video by AssemblyAI, published 28 October 2024.