Links:
- LoRA: Low-Rank Adaptation of Large Language Models, arxiv.org/abs/2106.09685
- LitGPT: github.com/Lightning-AI/lit-gpt
- LitGPT LoRA Tutorial: github.com/Lightning-AI/lit-g...
Low-rank adaptation (LoRA) is one of the most popular and effective methods for efficiently finetuning custom large language models (LLMs). For those of us working with open-source LLMs, it is an essential technique in our toolkit.
In this talk, I will share practical insights gained from running hundreds of experiments with LoRA, addressing questions such as: How much memory can I save with quantized LoRA? Do Adam optimizers require a lot of memory? Should we train for multiple epochs? And how do we choose the LoRA rank?
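For readers unfamiliar with the mechanics, here is a minimal PyTorch sketch of the core LoRA idea: freeze a pretrained linear layer and learn a low-rank update on top of it. The class name, initialization, and the rank/alpha defaults below are illustrative assumptions, not the LitGPT implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical minimal LoRA wrapper: h = W x + (alpha / r) * B A x,
    where W is frozen and only A (r x in) and B (out x r) are trained."""

    def __init__(self, linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)  # freeze the pretrained weight
        # A: small random init; B: zeros, so the update starts as a no-op
        self.lora_a = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scaling = alpha / r  # scaling factor from the LoRA paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update
        return self.linear(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```

Only `lora_a` and `lora_b` receive gradients, which is why LoRA trains a tiny fraction of the full parameter count; how to choose `r` and whether to quantize the frozen weights are exactly the questions the experiments in this talk address.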