Thank you so much. This was easy to follow and worked perfectly for me. A very cool addition to my workflow!
@the-ai-art
A month ago
Glad it helped!
@Schnoidz
A month ago
@@the-ai-art A question. Is there a way to save all the different prompts that the LLM generates, to a text file? Like right now I'm running a batch size of 4 and a batch count of 10. It would be nice to automatically save all 10 of the prompts it creates. Oh and possibly also save the descriptions from imported images too. Thanks! Subbed and watching more now.
@the-ai-art
A month ago
@@Schnoidz Sure. The same package that has the "Show Text" node (github.com/pythongosssss/ComfyUI-Custom-Scripts) also has a "Save Text" node that does just that. It supports appending to an existing file, so it gives you exactly what you need :)
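Outside of ComfyUI, the append behavior the node provides is easy to picture; here's a minimal Python sketch of the same idea (the function name and file path are hypothetical, not part of the node pack):

```python
# Append each generated prompt to a text file, one per line,
# mimicking the "Save Text" node's append mode so that repeated
# batch runs accumulate rather than overwrite.
from pathlib import Path

def append_prompts(prompts, path="prompts.txt"):
    # "a" mode appends instead of overwriting
    with open(path, "a", encoding="utf-8") as f:
        for p in prompts:
            f.write(p.strip() + "\n")

append_prompts(["a misty forest at dawn", "a neon-lit alley at night"])
print(Path("prompts.txt").read_text(encoding="utf-8"))
```

With a batch count of 10, each run would simply add its prompts to the bottom of the file, which is the behavior asked about above.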
@Schnoidz
A month ago
@@the-ai-art Thanks!
@aitor451
15 days ago
Excellent and clear explanation. Thank you.
@SebAnt
A month ago
WOW!! So fascinating, I’ve liked and subscribed immediately.
@the-ai-art
18 days ago
Thank you very much :)
@atahanacik365
A month ago
Hey, since the very first days of LLMs and Midjourney, I have been using a very long, specific set of instructions to create my prompts. With the latest developments in models like Flux, and a workflow like the one you demonstrated, I will be able to create my design bundles automatically as open-source art generators come just a bit closer to Midjourney =) Thank you for the video mate, keep it up. Best
@the-ai-art
A month ago
Thank you :) Glad you found it helpful.
@evolv_85
18 days ago
Liked, subscribed. Awesome stuff. Thanks. 😄
@HanjLaoye
A month ago
Thank you, great tutorial!
@the-ai-art
A month ago
Glad it was helpful!
@markphillips1509
A month ago
Excellent video, thank you
@the-ai-art
A month ago
Glad you liked it!
@bwheldale
A month ago
A really nice tutorial, not too much info and not lacking either. As far as I can tell my instructions were similar to yours, but my generated prompts frequently include instructions to the prompter instead of a nice clean prompt. E.g., "Title:" followed by the title, "Prompt:" followed by the prompt, "Recommendations:" followed by recommendations, etc. Maybe I've been running Ollama too long; I was running it on other projects prior to ComfyUI. I'll see how it goes tomorrow with a fresh start.
@the-ai-art
A month ago
Try changing the instruction, or even use a different model. I had great results with Llama 3.1 8b.
@germanchoGPT
A month ago
Hi, I had the same issue, and what I did was add the following at the end of the instruction text: "only prompt and don't use quotes"
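If tweaking the instruction isn't enough, the stray labels can also be stripped after the fact. A rough post-processing sketch (the label names come from the symptoms described above; the function is hypothetical, not part of any node pack):

```python
import re

def clean_prompt(text):
    # If the model emitted a "Prompt:" label, keep only what follows it,
    # up to the next "Label:" line (e.g. "Recommendations:") or end of text.
    m = re.search(r"Prompt:\s*(.+?)(?:\n[A-Z][A-Za-z]+:|$)", text, re.S)
    prompt = m.group(1) if m else text
    # Drop wrapping quotes and stray whitespace.
    return prompt.strip().strip('"').strip()

raw = 'Title: Forest\nPrompt: "a misty forest at dawn"\nRecommendations: use a high CFG'
print(clean_prompt(raw))  # a misty forest at dawn
```

In ComfyUI this kind of cleanup would have to live in a small custom node or script between the LLM node and the sampler, so fixing the instruction itself is the simpler route when it works.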
@yiluwididreaming6732
A month ago
Thank you! Gonna play with this one and see what works for me... the render process is really fast!!!
@the-ai-art
18 days ago
LMK how it went
@CharlesLijt
29 days ago
Try Joy Caption, it's far superior to anything you were demonstrating for the image captioning part.
@the-ai-art
18 days ago
Thanks. I'll check it out.
@eltalismandelafe7531
22 days ago
Thanks for the video, it's great! I see at minute 8:16 that you use the llama3-70b model. All models work for me except the 70b; do you know why that could be? Do you have to do something special, any extra configuration, to make it work in ComfyUI?
@the-ai-art
18 days ago
No need for anything special, but the model is quite demanding. I have a 16GB GPU and it can hardly run it.
@pawelthe1606
A month ago
Hi. I've been using ComfyUI for a while now; maybe you could record the process of repairing old photos? Settings etc.? If you already have such a video (I haven't checked), maybe a link to it? I know there are such videos, but I would honestly rather rely on your way of explaining the processes. Thanks
@the-ai-art
18 days ago
Thanks for the idea. I haven't done a video on it yet, but I use JPEG denoisers, upscalers, and restoration and recoloring models for such a process. I will probably do a video on it in the future, but it's a bit complex and I want to gradually work up to more complex workflows.
@fairyroot1653
A month ago
I made a similar node and integrated LLaVA as well; you can check the Ollama one by the author Fairy-Root.
@the-ai-art
A month ago
Will do. Thanks :)
@INVICTUSSOLIS
A month ago
Wondering if this will work with the GGUF nodes for Flux.
@the-ai-art
18 days ago
It should. Ultimately, all it does is generate text for the prompts.
@DodiInkoTariah
3 days ago
@@the-ai-art Thank you, it worked.
@maindokontorora2575
A month ago
I have the pythongosssss pack installed according to the Manager, but I'm unable to find the Show Text node. Any suggestions?
@the-ai-art
A month ago
Make sure this is the node pack installed: github.com/pythongosssss/ComfyUI-Custom-Scripts. If it is, try clicking the Update All button in the Manager to make sure all the required packages are installed correctly. If that doesn't help, try uninstalling and reinstalling the node.
@vasilybodnar168
A month ago
6+ minutes with Flux to generate one image... Without the LLM it takes about 40s (RTX 4080).
@the-ai-art
18 days ago
I didn't have any issues or delays with it.
@BabylonBaller
27 days ago
You don't need Ollama anymore; there's already a node that calls out to an LLM. Exciting stuff.
@SebAnt
26 days ago
What is the name of that node?
@BabylonBaller
26 days ago
@@SebAnt Sebastian Kemp did a video two days ago called "LLM in ComfyUI". Check it out.
@the-ai-art
18 days ago
Thanks for the info :) I do like the simplicity of using different models with Ollama, not having to manually install them.
@abhinavbisht9851
A month ago
What are your PC specs?
@the-ai-art
A month ago
i7 9700, 32GB RAM, 4060 Ti 16GB. I usually build my PCs from ruined PCs; the exception is the GPU, which I recently bought :)
@abhinavbisht9851
A month ago
@@the-ai-art Using an LLM for prompting with the Flux model takes a lot of time. Is this the same for you? For the first generation it took me 20+ minutes; after that it was fast.
@the-ai-art
A month ago
It shouldn't cause any delay. It might be, though, that you are using the bigger version of Flux, and that causes your computer to lag. For me it didn't change anything; using the LLM in the Flux workflow didn't extend the time the sampler worked (it takes around 2.5 s/it). How fast does it run without the LLM in the workflow? If it only happens on the first run, it just means the model takes long to load. Try putting the model on an SSD; it will greatly improve loading speed (if it is on an HDD now). Also, try the smaller Flux model (the 11GB fp8 one).
@shirleywang9584
A month ago
Hello, I'm Tess from Digiarty Software. Interested in a collab?
@the-ai-art
18 days ago
Depends on what kind :) PM me on Discord.
@sven1858
A month ago
Thanks for the video, interested in more LLM content within ComfyUI.
@the-ai-art
A month ago
I will probably make more videos about it in the future :) Thanks for the feedback.
Comments: 49