What do you get when you combine DeepMind's VQ-VAE, GANs, perceptual loss, and OpenAI's GPT-2 and CLIP? Well, I dunno, but the results are awesome haha!
@moaidali874
3 years ago
The in-depth explanation is pretty useful. Thank you so much.
@hoomansedghamiz2288
3 years ago
Great work and explanation. You've probably noticed, but VQ-VAE is a bit rough to train since its quantization step is not differentiable. In parallel there is Gumbel-Softmax, which is differentiable and therefore easier to train; wav2vec 2.0 uses it. It might be interesting to cover that next :) cheers
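For context, a minimal PyTorch sketch contrasting the non-differentiable VQ-VAE code lookup with the Gumbel-Softmax relaxation the comment refers to (codebook size, dimensions, and tensors are invented for illustration):

```python
import torch
import torch.nn.functional as F

# Invented toy sizes, for illustration only.
num_codes, code_dim, batch = 512, 64, 8
codebook = torch.nn.Embedding(num_codes, code_dim)
logits = torch.randn(batch, num_codes, requires_grad=True)  # encoder scores per code

# Hard vector quantization: argmax is not differentiable, so VQ-VAE needs
# a straight-through estimator to get gradients back to the encoder.
hard_codes = codebook(logits.argmax(dim=-1))  # no gradient reaches `logits`

# Gumbel-Softmax: a stochastic, differentiable relaxation of the same choice.
# hard=True returns one-hot samples but routes gradients through the soft probs.
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
soft_codes = one_hot @ codebook.weight  # gradients flow back to `logits`
```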
@akashsuryawanshi6267
A year ago
Keep it up with the detailed explanations. Those who aren't interested in the low-level stuff can just skip the detailed parts; a win for both. Thank you.
@MuhammadAli-mi5gg
2 years ago
Thanks again, a masterpiece like the VQ-VAE one. But it would be great if you also added a code walkthrough like in the VQ-VAE video, perhaps an even more detailed one. Thanks aloooot again!
@kirtipandya4618
2 years ago
Answer: I find the in-depth explanations very, very useful. 🙂 You could also explain the code here. But great work, thanks. 👍🏻🙂 Could you please also review the paper "A Disentangling Invertible Interpretation Network for Explaining Latent Representations" from the same authors? It would be great. Thank you. 🙂
@rikki146
A year ago
15:56 I thought it was arbitrary at first, but later realized it is just balancing the loss terms, namely L_{rec} and L_{GAN}: if the gradients of L_{GAN} are large, less weight is put on L_{GAN}, and vice versa.
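That adaptive weight is computed in the paper as λ = ∇[L_rec] / (∇[L_GAN] + δ), with gradients taken w.r.t. the decoder's last layer. A minimal PyTorch sketch of the idea (argument names are placeholders, not the authors' code):

```python
import torch

def adaptive_gan_weight(rec_loss, gan_loss, last_layer_weight, delta=1e-6):
    """Weight L_GAN by the ratio of gradient norms so neither term dominates."""
    rec_grad = torch.autograd.grad(rec_loss, last_layer_weight, retain_graph=True)[0]
    gan_grad = torch.autograd.grad(gan_loss, last_layer_weight, retain_graph=True)[0]
    lam = rec_grad.norm() / (gan_grad.norm() + delta)
    return lam.detach()  # use as a constant; don't backprop through the weight
```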
@MostafaTIFAhaggag
A year ago
This is a masterpieceee.
@jonathanballoch
2 years ago
I feel like you lost me on the semantic segmentation → image generation step: you say that the semantic token sequence from the semantic VQGAN is prepended, along with a CLS token, to the token sequence of... the output VQGAN? Then this 2N+1-length sequence is the input, and the output is a length-N sequence? How is this possible? Don't transformers necessarily have the same input and output dimensions?
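For anyone stuck on the same shape question, a minimal sketch (toy sizes, causal masking omitted for brevity): a transformer produces one output vector per input position, so a 2N+1-token input does yield 2N+1 outputs, and only the logits at the last N positions are read off to predict the image tokens:

```python
import torch
import torch.nn as nn

# Invented toy sizes: N conditioning tokens + 1 start token + N image tokens.
N, vocab, d_model = 16, 1024, 128
seq = torch.randint(vocab, (1, 2 * N + 1))  # length 2N+1

embed = nn.Embedding(vocab, d_model)
block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
head = nn.Linear(d_model, vocab)

h = block(embed(seq))        # (1, 2N+1, d_model): one output per input position
logits = head(h)[:, -N:, :]  # keep only the N positions that predict image tokens
print(logits.shape)          # torch.Size([1, 16, 1024])
```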
@yasmimrodrigues5437
2 years ago
Some of the timestamped segments in the video are not adjacent to each other.
@TheAIEpiphany
2 years ago
What exactly do you mean by that?
@johnpope1473
3 years ago
I like the low-level stuff. I attempt to read these papers, and your grasp and explanations give me confidence that I can decode them too. They are almost always built on top of other work. I liked how you distilled that history in the StyleGAN session.
@TheAIEpiphany
3 years ago
Thanks! It's a fairly complex tradeoff to decide when to stop digging into the nitty-gritty details. 😅 I am still figuring it out.
@johnpope1473
3 years ago
@@TheAIEpiphany I once came across some Python code on GitHub that could take a PDF and create multiple-choice quiz questions from any content. Maybe I could help you one day and have you nut out the answers. You remember that sort of thing from physics class, where the teacher makes things clear, eliminating nonsense and elucidating the correct answer.
@TF2Shows
3 months ago
The adversarial loss: I think the explanation is wrong. You said the discriminator tries to maximize it; however, you have just shown that it tries to minimize it (the term becomes 0 if D(x) is 1 and D(\hat{x}) is 0). So the discriminator tries to minimize it (and since it's a loss function, that makes sense), and the generator tries to do the opposite, maximize it, to fool the discriminator. So I think you mislabeled the objective: we minimize L_GAN (minimize the loss) in order to train the discriminator.
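For reference, the vanilla GAN form of the term, as in Goodfellow et al. and as the VQGAN paper writes it: both log terms are non-positive, so the sum is at most 0, and D(x)=1, D(x̂)=0 is where it attains its maximum rather than its minimum:

```latex
\mathcal{L}_{\mathrm{GAN}} = \log D(x) + \log\bigl(1 - D(\hat{x})\bigr) \le 0,
\qquad
\min_{G}\,\max_{D}\;\mathcal{L}_{\mathrm{GAN}}.
```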
@daesoolee1083
2 years ago
I think you cover both the high-level explanation and details fairly well :) Keep it up, please.
@ronitrastogi9016
A year ago
In-depth explanations are a game changer. Keep doing the same. Great work!!
@jisujeon5799
2 years ago
YouTube should have recommended this channel to me a year ago. What quality content! Keep it up :D
@TheAIEpiphany
2 years ago
Hahah, mysterious are the ways of the YT algorithm. 😅
@vinciardovangoughci7775
3 years ago
Great job! The conditioning part is super useful; the paper is confusing there.
@marcotroster8247
A year ago
It's always interesting to me how a bit of resource constraint can produce very intelligent, next-gen results, instead of just pumping the model full of weights and using crazy amounts of compute 😂
@akashraut3581
3 years ago
You are on fire 🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥. This video was much needed for me; thank you so much.
@TheAIEpiphany
3 years ago
I am just getting started 😂 awesome!
@xxxx4570
3 years ago
Thanks for your awesome explanation of this paper. I want to ask a question: how does the transformer achieve autoregressive prediction?
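In brief, a generic sketch of the standard mechanism (not code from the video): a causal attention mask lets each position attend only to earlier positions, so during training the logits at position i are trained to predict token i+1 in parallel, and at inference time tokens are sampled one at a time:

```python
import torch

T = 5  # toy sequence length
scores = torch.randn(T, T)  # random attention scores, for illustration

# Causal mask: position i may attend only to positions j <= i.
mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
attn = scores.masked_fill(mask, float("-inf")).softmax(dim=-1)

# Rows are valid distributions with zeros above the diagonal, so the
# representation at position i depends only on tokens 0..i and can be
# trained to predict token i+1.
print(attn)
```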
@alexijohansen
2 years ago
So great! Love the explanation of the loss functions.
@dfergrg4053
2 years ago
How did you do it? Can you share it with me? Thank you.
Comments: 46