Your videos are absolute gems in these times and perfect for people who want to get their hands dirty with LLM-based use cases. Thank you for making such quality content!
@ai-yp
11 months ago
Please make a follow-up video with all sorts of applications to spark other usage ideas 😊 Great video content as always, thanks for your always on-point work!
@engineerprompt
11 months ago
Will do! thank you
@dexterpratt2045
11 months ago
Thanks! I would be very interested in seeing how well CodeLlama + Open Interpreter can do on basic tasks like cleaning data, reorganizing tables so they can be merged, etc. If the open-source models can accomplish that annoying and time-consuming task, it would be a big deal.
@oscarbertel1449
11 months ago
The right command for Windows with Nvidia GPUs is: set CMAKE_ARGS=-DLLAMA_CUBLAS=on && set FORCE_CMAKE=1 && pip install llama-cpp-python
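For comparison, the equivalent rebuild on Linux/macOS (or inside WSL) passes the variables inline instead of using cmd's "set"; adding --force-reinstall --no-cache-dir makes pip recompile the package instead of reusing a cached CPU-only wheel. A minimal sketch, assuming the CUDA toolkit is already installed:

```shell
# Rebuild llama-cpp-python from source with cuBLAS (CUDA) support;
# CMAKE_ARGS is forwarded to CMake by the package's build backend.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install --force-reinstall --no-cache-dir llama-cpp-python
```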
@jaymatos100
11 months ago
Hello, I did this and I'm still having the same issue. I've spent all morning trying to find a solution. The errors:
Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized
Error during installation with cuBLAS: Command '['C:\\Users\\Jesus\\.conda\\envs\\openinter\\python.exe', '-m', 'pip', 'install', 'llama-cpp-python']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "C:\Users\Jesus\.conda\envs\openinter\lib\site-packages\interpreter\get_hf_llm.py", line 141, in get_hf_llm
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
@jaymatos100
11 months ago
Note: I have text-generation-webui on Pinokio using the TheBloke_WizardCoder-15B-1.0-GPTQ with GPU and it works fine.
@murraymacdonald4959
11 months ago
Thank you for this wonderful content. Your explanations are clear, your voice is a pleasure to listen to and the subject matter is perfect. Subscribed and liked!
@engineerprompt
11 months ago
🙏
@xlerb_again_to_music7908
11 months ago
What are the hardware requirements? And how big is the speed difference when not using a GPU?
@shuaichengwang9127
11 months ago
Great video. What GPU memory size do you recommend for the large (34B) CodeLlama model?
@dom12splayground
11 months ago
8 days later!!!! No response 😢😢
@yuxrazafar8532
9 months ago
I am loving your content. Thank you!
@Ethitub
11 months ago
As always, awesome content. Thank you!
@brybryBillions
10 months ago
Thanks! If you're running this on a MacBook Air, is it better to have a lower parameter count or lower quality? E.g., pick a lower parameter count but a higher quality, or a higher parameter count but a lower quality?
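A rough way to reason about that trade-off: a quantized model needs about parameter count × bits-per-weight / 8 bytes for its weights, plus runtime overhead. A back-of-the-envelope sketch (the 20% overhead factor and the ~4.5 bits for a Q4_K_M-style quantization are assumptions, not exact figures):

```shell
# approx_ram_gb PARAMS_BILLIONS BITS_PER_WEIGHT
# -> rough RAM need in GB, including ~20% for KV cache and buffers
approx_ram_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 * 1.2 }'
}

approx_ram_gb 7 4.5    # ~4.7 GB: comfortable on a 16 GB machine
approx_ram_gb 13 4.5   # ~8.8 GB: fits, but leaves less headroom
approx_ram_gb 34 4.5   # ~23 GB: too big for a 16 GB MacBook Air
```

On a 16 GB MacBook Air this suggests preferring a 13B model at ~4-bit quantization over any 34B variant; whether a lower-bit 13B beats a higher-bit 7B in output quality is model-dependent and worth testing both ways.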
@Lorenzo_T
11 months ago
Nice work! Please, I'm still waiting for the video on the localgpt API on Google Colab. I understand you have a lot of ideas and work to do, but please, it would be very, very useful for me🙏🏻
@engineerprompt
11 months ago
Thank you, will work on it soon!
@Lorenzo_T
11 months ago
@@engineerprompt Thanks a lot, you can't imagine how important it will be for me🙏🏻🙏🏻🙏🏻
@MichealAngeloArts
11 months ago
Thanks for sharing, that is really useful. Are you still planning to publish a tutorial/walkthrough soon on how to set up Llama.cpp with GPU support on Windows/Ubuntu?
@chibapu
11 months ago
Interesting. Is it possible to read multiple files (with this tool or another), like a full API made in Node, for example?
@harveybastidas
11 months ago
Just what I was looking for! Thanks dude.
@AlexanderWeixelbaumer
11 months ago
I always get this error:
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
@johannesdeboeck
11 months ago
I solved this by updating the following:
1. Verify your CMake version: make sure you have a recent version of CMake installed. You can check it by running 'cmake --version' in your terminal. If it's an older version, update to the latest with 'brew upgrade cmake'.
2. Update the Command Line Tools (Mac): if you are on macOS, update the Command Line Tools by running 'xcode-select --install' in your terminal. This will install or update the necessary build tools. Otherwise, update via Software Update in System Settings.
@brybryBillions
10 months ago
@@johannesdeboeck Thank you! The Update Command Line Tools (Mac) step worked for me! I'm on a MacBook Air M1 with 16 GB RAM. To run CodeLlama, what do you think I should choose for my parameter count and quality settings?
@jesusleguizamon6566
11 months ago
Can it run in notebooks? Regards
@oliverli9630
11 months ago
great one! open source is better for individuals
@engineerprompt
11 months ago
Absolutely!
@PraveenKumarYadav-sp7wl
10 months ago
Windows users need to install the Visual Studio Build Tools (CMake) and the Windows SDK, then use the following command: set CMAKE_ARGS=-DLLAMA_CUBLAS=on && set FORCE_CMAKE=1 && pip install llama-cpp-python
@ct8060
11 months ago
Nice video, thanks! However, since there are other, maybe more capable, models such as Phi or WizardCoder, how can I make it run them as well?
@valm7397
11 months ago
Thank you for your excellent video. Is it possible to make a video on Llama 2 + Metaphor to connect our LLM to the internet, plus LlamaIndex? Thank you very much.
@officialseethesky
11 months ago
Sir, can you do a tutorial on ChatOpenAI? I am new to LangChain and LLM models.
@PraveenKumarYadav-sp7wl
10 months ago
I have 16 GB RAM and a 4 GB Nvidia card; can a 13B model run on my PC?
@nedal1alex123
10 months ago
Hey. Can I use this with another API key and a custom model? Like, host my model on Replicate and use OI with that.
@sunilanthony17
11 months ago
Can you set a lower version of Python if you have a higher version installed?
@engineerprompt
11 months ago
If you are using conda, that shouldn’t be a problem
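Concretely, the usual conda pattern is to pin the interpreter when creating the environment, so the system Python version doesn't matter (the env name and the 3.10 version below are just examples):

```shell
# Create an isolated environment pinned to a specific Python version
conda create -n openinter python=3.10 -y
conda activate openinter
python --version   # should now report a 3.10.x interpreter
```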
@sunilanthony17
11 months ago
@@engineerprompt I'm using miniconda
@ssah1L
9 months ago
I am getting an error: "subprocess exited with errors". Please help. Also one more error: "metadata generation failed".
@swanknightscapt113
11 months ago
Can you make a version for Google Colab with GPU support?
@engineerprompt
11 months ago
I think there is a Colab provided by the author: (colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing) but it seems like it uses GPT. Will look into it.
@kronikpillow
10 months ago
What website do I go to to install code-llama-cpp?
@rimvydasb3531
11 months ago
What is recommended GPU and VRAM size?
@MrMoonsilver
11 months ago
Is there a possibility to use local models with a multi gpu setup?
@yagi2.092
11 months ago
Great video. I'm installing it locally but got an error:
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 350, in __init__
    assert self.ctx is not None
    ^^^^^^^^^^^^^^^^^^^^
AssertionError
▌ Failed to install TheBloke/CodeLlama-7B-Instruct-GGUF.
How can I fix this AssertionError? Thanks in advance.
@akkitty22
10 months ago
Same error here. No solution available by the looks of it.
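For anyone hitting this: the "assert self.ctx is not None" failure means llama.cpp could not load the model file, and a truncated or corrupt download is a common cause. A quick pre-flight check on the downloaded file (a sketch; GGUF files start with the 4-byte ASCII magic "GGUF", and the model path below is hypothetical):

```shell
# Succeed only if the file exists, has a non-trivial size (truncated
# downloads are a frequent cause of this assert), and starts with the
# GGUF magic bytes that llama.cpp checks for.
looks_like_gguf() {
  f="$1"
  [ -f "$f" ] || return 1
  [ "$(wc -c < "$f")" -ge 1024 ] || return 1
  [ "$(head -c 4 "$f")" = "GGUF" ]
}

if looks_like_gguf "$HOME/models/codellama-7b-instruct.Q4_K_M.gguf"; then
  echo "file looks like a valid GGUF"
else
  echo "re-download the model file"
fi
```

If the check fails, delete and re-download the .gguf file; if it passes, the crash is more likely a broken llama-cpp-python build.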
@jdsguam
10 months ago
Llama still does not work on a Windows PC?
@DjDj-zr2wy
11 months ago
Amazing. Game is on
@harvardnomadowl
11 months ago
The Llama installation does not work properly on Windows.
@PraveenKumarYadav-sp7wl
10 months ago
Worked for me after installing the Visual Studio 2019 Build Tools, i.e. CMake and the SDK for Windows 10 and 11 (just search the error on GPT), then executing the command: set CMAKE_ARGS=-DLLAMA_CUBLAS=on && set FORCE_CMAKE=1 && pip install llama-cpp-python
C:\Users\khana\AppData\Local\Open Interpreter\Open Interpreter\models
@daryladhityahenry
11 months ago
About CodeLlama: I think it's not actually running; it only got the context right by accident, because of the file name. It doesn't ask you to run the command at all, right? Not like with GPT-4.
@razdingz
11 months ago
Poetry instead of conda ?
@BrandosLounge
11 months ago
Great video! When using Code Llama, I get a "Reached token limit" error on like every 3rd or 4th operation. Any way to raise the limit on Linux?
@engineerprompt
11 months ago
Interesting, let me check. In theory, code llama can be extended to 100k generation tokens.
@eyescreamcake
11 months ago
Incognito Pilot has a better interface
@latlov
11 months ago
You don't need ChatGPT Plus for this open code interpreter... but you do need a paid OpenAI API key 🙄
@borjonx
10 months ago
Not if you're using Code-Llama instead
@fontenbleau
11 months ago
Linux is much better, no errors. I recommend Pika OS, an Ubuntu clone where pip is not restricted.
Comments: 57