The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs): it translates between strings and tokens (chunks of text). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets and training algorithms (e.g. Byte Pair Encoding), and after training they implement two fundamental functions: encode(), from strings to tokens, and decode(), back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that many weird behaviors and problems of LLMs actually trace back to tokenization. We go through a number of these issues, discuss why tokenization is at fault, and why ideally someone finds a way to delete this stage entirely.
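To make the description above concrete, here is a minimal sketch of the Byte Pair Encoding idea covered in the lecture: repeatedly count consecutive token pairs, merge the most common pair into a new token, and keep the merge table so you can encode() and decode() later. This is an illustrative toy, not the full GPT tokenizer (no regex pre-splitting, no special tokens); the function names are my own, though they mirror the steps in the video.

```python
# Toy BPE: train on raw UTF-8 bytes, then round-trip encode/decode.

def get_pair_counts(ids):
    """Count occurrences of each consecutive pair of token ids."""
    counts = {}
    for pair in zip(ids, ids[1:]):
        counts[pair] = counts.get(pair, 0) + 1
    return counts

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """Start from raw UTF-8 bytes and perform `num_merges` greedy merges."""
    ids = list(text.encode("utf-8"))
    merges = {}  # (int, int) -> new token id
    for i in range(num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = max(counts, key=counts.get)  # most frequent pair
        new_id = 256 + i                    # new ids live beyond the byte range
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

def encode(text, merges):
    """Apply the learned merges, in training order, to new text."""
    ids = list(text.encode("utf-8"))
    for pair, new_id in merges.items():  # dicts preserve insertion order
        ids = merge(ids, pair, new_id)
    return ids

def decode(ids, merges):
    """Expand each token back to its bytes, then decode as UTF-8."""
    vocab = {i: bytes([i]) for i in range(256)}
    for (a, b), new_id in merges.items():
        vocab[new_id] = vocab[a] + vocab[b]
    return b"".join(vocab[i] for i in ids).decode("utf-8", errors="replace")

if __name__ == "__main__":
    text = "aaabdaaabac"
    merges = train_bpe(text, 3)
    tokens = encode(text, merges)
    print(tokens, "->", decode(tokens, merges))
```

Note one simplification: this encode() applies merges in training order in a single sweep, whereas the real implementation repeatedly picks the lowest-rank eligible pair; the lecture covers that refinement when building the actual encoder.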
Chapters:
00:00:00 intro: Tokenization, GPT-2 paper, tokenization-related issues
00:05:50 tokenization by example in a Web UI (tiktokenizer)
00:14:56 strings in Python, Unicode code points
00:18:15 Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32
00:22:47 daydreaming: deleting tokenization
00:23:50 Byte Pair Encoding (BPE) algorithm walkthrough
00:27:02 starting the implementation
00:28:35 counting consecutive pairs, finding most common pair
00:30:36 merging the most common pair
00:34:58 training the tokenizer: adding the while loop, compression ratio
00:39:20 tokenizer/LLM diagram: it is a completely separate stage
00:42:47 decoding tokens to strings
00:48:21 encoding strings to tokens
00:57:36 regex patterns to force splits across categories
01:11:38 tiktoken library intro, differences between GPT-2/GPT-4 regex
01:14:59 GPT-2 released by OpenAI walkthrough
01:18:26 special tokens, tiktoken handling of, GPT-2/GPT-4 differences
01:25:28 minbpe exercise time! write your own GPT-4 tokenizer
01:28:42 sentencepiece library intro, used to train Llama 2 vocabulary
01:43:27 how to set vocabulary set? revisiting transformer
01:48:11 training new tokens, example of prompt compression
01:49:58 multimodal [image, video, audio] tokenization with vector quantization
01:51:41 revisiting and explaining the quirks of LLM tokenization
02:10:20 final recommendations
02:12:50 ??? :)
Exercises:
- Advised flow: reference this document and try to implement the steps yourself before I give away the partial solutions in the video. If you get stuck, the full solutions are in the minbpe code.
Links:
- Google colab for the video:
- GitHub repo for the video: minBPE
Supplementary links:
- tiktokenizer
- tiktoken from OpenAI
- sentencepiece from Google