In this video, I'll show you how to deploy and run large language model (LLM) chatbots locally. The steps also apply to production environments, so the tutorial is production-ready! By the end, you will be running an LLM like Falcon-7B (or 40B, or any other LLM) locally, and you will have deployed a chat interface so you can chat with your local LLM!
For this video, we will be using text-generation-inference: github.com/huggingface/text-g...
And chat-ui: github.com/huggingface/chat-ui
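As a rough sketch of the setup covered in the video: text-generation-inference can be launched from its official Docker image, and chat-ui is then pointed at the resulting local endpoint via its `.env.local` file. The model ID, ports, and endpoint path below are illustrative assumptions — adjust them to your own setup.

```shell
# Serve Falcon-7B-Instruct with text-generation-inference (model ID assumed;
# any Hugging Face Hub model ID works). Requires Docker and an NVIDIA GPU.
docker run --gpus all -p 8080:80 \
  -v "$PWD/data:/data" \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id tiiuae/falcon-7b-instruct

# In a clone of the chat-ui repo, point chat-ui at the local endpoint.
# chat-ui reads its configuration from .env.local; it also needs a MongoDB
# instance for conversation storage (URL below is an assumption).
cat > .env.local <<'EOF'
MONGODB_URL=mongodb://localhost:27017
MODELS=`[
  {
    "name": "tiiuae/falcon-7b-instruct",
    "endpoints": [{ "url": "http://127.0.0.1:8080/generate_stream" }]
  }
]`
EOF
npm install
npm run dev
```

With both running, the chat interface is served locally (by default on port 5173 for the dev server) and every request stays on your machine — fully private.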
Please subscribe and like the video to help me keep motivated to make awesome videos like this one. :)
My book, Approaching (Almost) Any Machine Learning problem, is available for free here: bit.ly/approachingml
Follow me on:
Twitter: / abhi1thakur
LinkedIn: / abhi1thakur
Kaggle: kaggle.com/abhishek
Deploy FULLY PRIVATE & FAST LLM Chatbots! (Local + Production)