"Training data-efficient image transformers & distillation through attention" paper explained!
How does the DeiT transformer for image recognition by @facebookai train with around 100x less training data than ViT?
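The key trick discussed in the video is distillation through attention: DeiT appends a learnable "distillation token" next to the class token, and its output is trained to match a CNN teacher's hard predictions. A minimal sketch of the hard-distillation loss from the paper (function name and toy tensors are illustrative, not from the official code):

```python
import torch
import torch.nn.functional as F

def deit_hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    """Average of two cross-entropies: the class token against the true
    labels, and the distillation token against the teacher's hard
    (argmax) predictions."""
    teacher_labels = teacher_logits.argmax(dim=-1)  # teacher's hard decision
    loss_cls = F.cross_entropy(cls_logits, labels)
    loss_dist = F.cross_entropy(dist_logits, teacher_labels)
    return 0.5 * (loss_cls + loss_dist)

# Toy example with random logits for a batch of 4 images, 10 classes
torch.manual_seed(0)
cls_logits = torch.randn(4, 10)      # output head on the class token
dist_logits = torch.randn(4, 10)     # output head on the distillation token
teacher_logits = torch.randn(4, 10)  # frozen CNN teacher's logits
labels = torch.randint(0, 10, (4,))
loss = deit_hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels)
```

At test time, the paper fuses the two heads by averaging their softmax outputs.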
➡️ AI Coffee Break Merch! 🛍️ aicoffeebreak.creator-spring....
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, buy us a coffee to boost our Coffee Bean production! ☕
Patreon: / aicoffeebreak
Ko-fi: ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
📺 ViT Transformer: • An image is worth 16x1...
📺 Transformer architecture explained: • The Transformer neural...
📺 Visual Chirality: • Can a neural network t...
Outline:
* 00:00 Facebook’s DeiT
* 01:34 Why is DeiT cool?
* 03:03 How does it work?
* 07:10 What does this mean?
📄 DeiT paper: arxiv.org/pdf/2012.12877.pdf
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou (2020) “Training data-efficient image transformers & distillation through attention”
💻 DeiT code: github.com/facebookresearch/deit
📚 For an in-depth understanding of how it works, check out this wonderful post by @JacobGildenblat jacobgil.github.io/deeplearni...
📚 On-point blog post by Andrei-Cristian Rad: / what-to-do-if-training...
News music 🎵 source:
News Theme 2 by Audionautix is licensed under a Creative Commons Attribution 4.0 licence. creativecommons.org/licenses/...
Artist: audionautix.com/
🔗 Links:
KZitem: / aicoffeebreak
Twitter: / aicoffeebreak
Reddit: / aicoffeebreak
#AICoffeeBreak #MsCoffeeBean #DeiT #MachineLearning #AI #research
Data-efficient Image Transformers EXPLAINED! Facebook AI's DeiT paper