What makes LLM tokenizers different from each other? GPT-4 vs. FLAN-T5 vs. StarCoder vs. BERT and more

Published: 21 September 2023
Channel: Jay Alammar
Views: 17,186
Likes: 555

Tokenizers are one of the key components of Large Language Models (LLMs). One of the best ways to understand what they do is to compare the behavior of different tokenizers. In this video, Jay takes a carefully crafted piece of text (containing English, code, indentation, numbers, emoji, and other languages) and passes it through several trained tokenizers to reveal what each succeeds and fails at encoding, the design choices behind each tokenizer, and what those choices say about their respective models.
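
If you want to try this yourself, here is a minimal sketch using the Hugging Face transformers library. The test string below is an illustrative stand-in for the one in the video, and the checkpoints listed are common public Hub IDs (the video also covers GPT-4, whose tokenizer ships in the tiktoken package as "cl100k_base" rather than on the Hugging Face Hub):

    from transformers import AutoTokenizer

    # An illustrative test string mixing English, capitalization, code,
    # a tab, numbers, an emoji, and another language (not the exact
    # text used in the video).
    text = (
        "English and CAPITALIZED words\n"
        "def add(a, b):\n\treturn a + b  # code with a tab\n"
        "12.0 * 50 = 600, an emoji 🤗, and some Chinese: 你好\n"
    )

    checkpoints = [
        "bert-base-uncased",    # BERT Uncased
        "bert-base-cased",      # BERT Cased
        "gpt2",                 # GPT-2
        "google/flan-t5-base",  # FLAN-T5
    ]

    for name in checkpoints:
        tokenizer = AutoTokenizer.from_pretrained(name)
        tokens = tokenizer.tokenize(text)
        print(f"{name}: {len(tokens)} tokens")
        print(tokens)
        print()

Printing the token lists side by side makes the design choices visible: the BERT tokenizers typically map the emoji to [UNK] (losing information), while GPT-2's byte-level BPE can represent any input at the cost of splitting it into more tokens.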

---

Contents:
0:00 Introduction
1:25 The carefully polished text to test tokenizers
2:19 BERT Uncased
3:59 BERT Cased
4:29 GPT-2
6:00 FLAN-T5
7:00 GPT-4
9:24 StarCoder
21:31 Galactica

---

Twitter: https://twitter.com/JayAlammar
Blog: https://jalammar.github.io/
Mailing List: https://jayalammar.substack.com/
Access the Early Release version of the book with a 30-day free trial of the O'Reilly learning platform: https://learning.oreilly.com/get-lear... [The formatting for the tokenization chapter is still a work-in-progress, but the video gives you a better look at the approach]
