Low-rank adaptation, or LoRA, is one of the most popular methods for customizing large AI models. Why was it invented? What is it? Why should you consider using it? Find out the answers in this video from the inventor of LoRA.
*Like, subscribe, and share if you find this video valuable!*
Paper: arxiv.org/abs/2106.09685
Repo: github.com/microsoft/lora
0:00 - Intro
0:34 - How we came up with LoRA
1:33 - What is LoRA?
3:14 - How to choose the rank r?
3:57 - Does LoRA work for my model architecture?
4:48 - Benefits of using LoRA
6:03 - Engineering ideas enabled by LoRA
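To make the "What is LoRA?" chapter (1:33) concrete, here is a minimal PyTorch sketch of the idea; this is illustrative only, not the official microsoft/lora code, and the class and attribute names are mine. The frozen pretrained weight W is augmented with a trainable low-rank product BA scaled by alpha/r, so only A and B are updated during fine-tuning:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: h = x W^T + (alpha/r) * x A^T B^T."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Pretrained weight: frozen, never updated during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Low-rank factors: A starts small-random, B starts at zero, so
        # B @ A = 0 at init and training begins from the pretrained behavior.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Shared frozen path plus the trainable low-rank correction.
        return x @ self.weight.T + self.scaling * (x @ self.A.T @ self.B.T)
```

Because only A and B are trained, a checkpoint stores roughly r * (in_features + out_features) parameters per adapted matrix instead of in_features * out_features, which is why LoRA modules are small and cheap to swap.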
Training or serving multiple LoRA modules in a single batch is possible with the following community implementations (a sketch of the core idea follows the list):
github.com/S-LoRA/S-LoRA
github.com/sabetAI/BLoRA
github.com/TUDB-Labs/multi-lo...
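The repos above differ in the details, but they share one core trick: run the expensive base-model matmul once for the whole batch, then add a per-example low-rank delta selected by an adapter index. A hedged PyTorch sketch of that idea, where the function and argument names are mine and not any repo's actual API:

```python
import torch

def multi_lora_forward(x, W, adapter_A, adapter_B, adapter_ids, scaling=1.0):
    """
    x:           (batch, in_features)          input activations
    W:           (out_features, in_features)   shared frozen base weight
    adapter_A:   (n_adapters, r, in_features)  stacked LoRA A matrices
    adapter_B:   (n_adapters, out_features, r) stacked LoRA B matrices
    adapter_ids: (batch,) long tensor: which adapter each example uses
    """
    base = x @ W.T                     # one shared matmul for the whole batch
    A = adapter_A[adapter_ids]         # (batch, r, in_features), gathered per example
    B = adapter_B[adapter_ids]         # (batch, out_features, r)
    # Per-example low-rank path B(Ax), computed with batched matmuls.
    delta = torch.bmm(B, torch.bmm(A, x.unsqueeze(-1))).squeeze(-1)
    return base + scaling * delta

# Example: a batch of 4 requests routed to 3 different adapters.
x = torch.randn(4, 64)
W = torch.randn(32, 64)
A = torch.randn(3, 8, 64) * 0.01
B = torch.zeros(3, 32, 8)
ids = torch.tensor([0, 2, 1, 0])
out = multi_lora_forward(x, W, A, B, ids)  # shape (4, 32)
```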
Follow me on Twitter: twitter.com/edwardjhu
🙏Gratitude:
Super grateful to my coauthors on the LoRA paper: Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.