Hi Jay, I love the work you have done! Ever since I read The Illustrated Transformer, I have been blown away by your explanations and illustrations. You really explain advanced concepts with such clarity and simplicity. I am very grateful to you for that! I really look forward to reading and learning from your book! Thank you so much!!
@devashishsoni9371
11 months ago
Can you please share the link to that?
@lazycomedy9358
10 months ago
Yeah, same here. Shout out for that.
@WhatsAI
A year ago
Great video as always Jay! :)
@arp_ai
A year ago
Thank you Louis!
@jwilber92
A year ago
Great content as always, Jay!
@kidsfungaming6756
A year ago
Hi Jay, I love your presentation; it is so inspiring, and you make hard concepts simple and clear. Regarding the tokenizer: if every word is one token, and that token is mapped to a single vector (embedding), then how do LLMs understand the meaning of the same word in different contexts? I would appreciate your answer, and I am sorry if my question is too naive. Thank you
@ranjancse26
A year ago
This is incredible. Great work! Keep it up :)
@KumR
7 months ago
Hi Jay, great video. Wondering if this is similar to computers doing everything in 0s and 1s, although at the OS level the abstraction is different, at least conceptually. Coming to the book, I am not able to find it anywhere... Is there a link?
@boonkiathan
A year ago
Neither have our neurons
@arp_ai
A year ago
Aha! But which neurons though!
@prabaj84
A year ago
Hi Jay, thanks again for explaining a complex topic in a simple way. If I may ask, what tool do you use to create the graphics for your blog? Thanks in advance
@TheAero
A year ago
I would love it if you could go into the following: RLHF, PPO, PEFT, LoRA and other adapters, soft prompting, scaling transformers.
@arp_ai
A year ago
Delicious topics indeed
@goelnikhils
A year ago
Great Work
@gama3181
A year ago
And how do people know which tokenizer is the best way to split the vocab? Does this follow a mathematical rule or a statistical pattern, or does it depend on the computing budget?
@tanmoy.mazumder
A year ago
Could you perhaps do an even deeper dive into how exactly these models produce the output vectors, and then how those get turned into tokens?
@arp_ai
A year ago
Not much has changed since my videos on GPT-3, honestly. Check those out.
@ashisranjanlahiri
A year ago
Hi Sir, your videos always amaze me. Need more videos for sure. Can you please share the notebook link?
@arp_ai
A year ago
Thank you! I haven't published the notebook yet, but that's a good idea.
@123arskas
A year ago
Good one
@mohamadbebah8416
A year ago
Great!! Thank you very much
@khaledsrrr
A year ago
❤ very nice
@amittripathi6664
A year ago
Hi Jay, thanks for the video. Could you also please share the code?
@Patapom3
A year ago
Great! But how does the tokenizer work now? 😅
@arp_ai
A year ago
Wonderful! If you feel comfortable tackling this now, then this video has done its job. We'll address it more in the book (and possibly a subsequent video). But if you want to get into training tokenizers now, this is a great guide: huggingface.co/learn/nlp-course/chapter6/5?fw=pt
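For readers who want a feel for what that guide covers before diving in: the core of BPE tokenizer training is a simple loop that repeatedly merges the most frequent adjacent symbol pair in the corpus. Here is a minimal, illustrative pure-Python sketch (not the Hugging Face implementation, and the tiny toy corpus is made up for demonstration):

```python
from collections import Counter

def train_bpe(words, num_merges):
    """Learn BPE merge rules from a list of words (toy version)."""
    # Represent each word as a tuple of single-character symbols.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent symbol pair occurs across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        # Pick the most frequent pair and record it as a merge rule.
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere it occurs in the vocabulary.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

# Toy example: "lo" is the most frequent pair, so it is merged first,
# then the new symbol "lo" merges with "w" to form "low".
print(train_bpe(["low", "low", "lower", "newest", "newest"], 3))
```

Real tokenizer libraries add pre-tokenization, byte-level fallbacks, and special tokens on top of this loop, which is exactly what the linked chapter walks through.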
Comments: 26