Full text tutorial (requires MLExpert Pro): www.mlexpert.io/prompt-engine...
Learn how to fine-tune the Llama 2 7B base model on a custom dataset using a single T4 GPU. We'll use the QLoRA technique to train an LLM for text summarization of conversations between support agents and customers on Twitter.
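The core of the QLoRA setup is two configs: a 4-bit quantization config for the frozen base weights, and a LoRA config for the small trainable adapters. A minimal sketch with the `transformers` and `peft` libraries (the checkpoint name and hyperparameter values are assumptions, not necessarily the ones used in the video):

```python
# Hedged sketch of a QLoRA configuration; assumes the transformers,
# peft, and bitsandbytes packages are installed.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen Llama 2 base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 quantization from the QLoRA paper
    bnb_4bit_compute_dtype=torch.float16,  # T4 GPUs lack bfloat16 support
)

# Low-rank adapters: only these small matrices are trained.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)
```

These configs would then be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `get_peft_model(model, lora_config)` respectively; this is what lets a 7B model fit into a single T4's 16 GB of memory.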
Discord: / discord
Prepare for the Machine Learning interview: mlexpert.io
Subscribe: bit.ly/venelin-subscribe
GitHub repository: github.com/curiousily/Get-Thi...
Join this channel to get access to the perks and support my work:
/ @venelin_valkov
00:00 - When to Fine-tune an LLM?
00:30 - Fine-tune vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 - Text Summarization (our example)
04:14 - Text Tutorial on MLExpert.io
04:47 - Dataset Selection
05:36 - Choose a Model (Llama 2)
06:22 - Google Colab Setup
07:26 - Process data
10:08 - Load Llama 2 Model & Tokenizer
11:18 - Training
14:49 - Compare Base Model with Fine-tuned Model
18:08 - Conclusion
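The "Process data" step (07:26) boils down to formatting each conversation/summary pair into a single training prompt. A minimal sketch, with an assumed instruction template rather than the exact one from the video:

```python
def format_example(conversation: str, summary: str) -> str:
    """Format one support-conversation/summary pair as a training prompt.

    The instruction template below is an illustrative assumption; the
    video's notebook may use a different prompt layout.
    """
    return (
        "### Instruction: Summarize the following conversation.\n"
        f"### Conversation:\n{conversation}\n"
        f"### Summary:\n{summary}"
    )
```

At inference time the same template is used with the summary left empty, so the fine-tuned model learns to generate everything after `### Summary:`.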
#llama2 #llm #promptengineering #chatgpt #chatbot #langchain #gpt4 #summarization
Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU