Really like your video format. Starting out by building a bare bones demonstration followed by a more complicated version really helps me connect the dots. Thanks for the video.
@stephantual
6 months ago
Thank you. Means the world to me 👽
@MattMosquito
6 months ago
Stephan, incredible cutting edge workflow, and I found your delivery to be super engaging personally. Keep up the great work!
@stephantual
6 months ago
Hey thank you so much! If I can improve anything let me know! 👍👽
@WhySoBroke
6 months ago
Today is a wonderful day... new videos from my fav YT channels!! Amazing tutorial!!! ❤️🇲🇽❤️
@stephantual
6 months ago
👽👽👽👽
@fulldivemedia
2 months ago
You have a natural talent for explaining and caring about your work, and you have a cute accent.
@svenvarg6913
6 months ago
Oof!!! This is moving so fast. My head is swivelling.
@flisbonwlove
6 months ago
Great work Stephan! Great explanations with a great sense of humor! You rock dude! 👽🖖
@stevietee3878
6 months ago
Absolutely amazing work! I thought I had learned quite a lot over the past year until I watched your video; I have so much more to catch up on.
@stephantual
6 months ago
Glad to help! 👽👽👽
@electronicmusicartcollective
6 months ago
YES, thanks for this very powerful workflow
@stephantual
6 months ago
You are welcome! 👍👽
@johnlenoob6951
6 months ago
Great as always ;) Keep up your extraterrestrial rigor!!!
@stephantual
6 months ago
For sure! ET for life! 👽
@aidigitalmediaagency
6 months ago
You are the fkn ComfyUI God. Wow, speechless. 🥂
@AndyHTu
6 months ago
Wow, this is incredible!
@ArianaBermudez
3 months ago
ahahahah the owl tutorial joke killed me
@neokortexproductions3311
6 months ago
Thanks Stephan! How do you know all of this information? Is this your line of work, or are you just someone interested in AI?
@stephantual
6 months ago
Good question - basically I was semi-retired for 5 years taking care of my mother, who suffered from fronto-temporal dementia. Spending many years near or in a care home, I noticed how many things could have been improved for patients using AI - including, for example, generating new forms of cognitive tests (like the MMSE). This explains why there's a channel on YT with my face on it trying to raise awareness around Alzheimer's etc. I have a coding background, so I used Python+GFPGAN, and when Comfy came out 'properly' around Jan last year I started toying around with it - it gives so much flexibility. From there, really, it's been a loooong trial-and-error type thing :) But I find that to be a good way to learn! Cheers! 👽👍
@neokortexproductions3311
6 months ago
@@stephantual Very impressive! And it's very commendable of you to take care of your mother in need. You're right about how AI can change the world and help out those who will eventually need some support, or improve the current modalities of the health industry. We appreciate all your help in the community!
@motgarbob7551
5 months ago
This is amazing, thank you!
@benjaminaustnesnarum3900
5 months ago
ComfyUI-0246 breaks my Comfy for some reason. With it installed, no nodes will load at all - it's just a blank canvas.
@697_
5 months ago
Amazing video, I learned a lot. I have a problem though: I followed everything but I am missing some files, and I get this error: "When loading the graph, the following node types were not found: IPAdapterApply, IPAdapterApplyEncoded, ComfyPets. Nodes that have failed to load will show as red on the graph." I don't know which nodes to replace them with?
@diego13ev
1 month ago
Hey mate, thanks for the tutorial. I have an issue: I installed ModelScope T2V as you indicated, via the command line, into the models/clip directory. I reopened ComfyUI and tried to add it, but the ModelScope T2V loader doesn't appear - it's like it isn't being detected. Do you know what I might be doing wrong? Thanks in advance for your help :D
@Chad-xd3vr
6 months ago
Very impressive intro, well done
@stephantual
6 months ago
Thank you! Already working on the next one - it's pretty intense GPU-wise, so I'll have a few episodes on more traditional server-side stuff with clusters and all :) 👽
@Chad-xd3vr
6 months ago
@@stephantual It's still a numbers game; is there any way to direct it more, like AnimateDiff?
@Martin-bx1et
6 months ago
I am not able to find three of the nodes: GetNode, SetNode, and SUPIR_Upscale. They show up as missing when loading the workflow, but aren't listed as missing in the Manager. Any thoughts?
@stephantual
6 months ago
Get/Set are (AFAIK) standard Comfy issue, but SUPIR is installed from GitHub (I don't use the Manager, because it makes you lose control over individual node branches). I have a video on how to install it at kzitem.info/news/bejne/sm-vk2uEsJxjnJg - also don't forget to install requirements.txt via pip. The good news is that once you've done one, they all get installed the same way. 👽👽
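[Editor's note] The GitHub install described above usually follows the same pattern for any ComfyUI custom node pack. A minimal sketch - the repo URL and paths are illustrative assumptions, not taken from the video:

```shell
# Hypothetical example: install a custom node pack from GitHub
# into ComfyUI's custom_nodes folder, then its Python dependencies.
cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-SUPIR.git
cd ComfyUI-SUPIR
pip install -r requirements.txt   # install the pack's dependencies
# restart ComfyUI so the new nodes are picked up
```

Once you have done this for one node pack, the same clone + `pip install -r requirements.txt` routine works for the others.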
@Martin-bx1et
6 months ago
@@stephantual Thanks Stephan, that video helped. I found SetNode and GetNode in 'KJNodes for ComfyUI' (also from kijai), so maybe they would have been installed if I had followed the steps in that video in the first place. Love your videos - they stretch me, but in a good way!
@netstereo
6 months ago
Hi @Stephan. My PC runs on 8 GB VRAM and 16 GB RAM. Can I run through this? Especially with the upscale. If not, can you give some tips on how to make text2vid in ComfyUI with my limited specs? I have never run a workflow with AnimateDiff.
@stephantual
6 months ago
T2V will run with very little VRAM - I think 6GB would be just fine on either these nodes or the OG ones. The V2V in Modelscope should also be fine. What takes the most VRAM: a) SUPIR - so use a model upscale instead; b) UltimateSD Upscale if you pass it 2k frames (so limit it and tile as much as possible); c) surprisingly, FILM VFI (replace it with Rife49). It's like everything else with Comfy: the more pixels or the larger the latents, the more VRAM it needs :) 👽
@netstereo
6 months ago
@@stephantual Thank you, sir.
@aivrar
5 months ago
@@stephantual Hey man, great tut, thank you! Can we run it with 12 GB VRAM and the low-VRAM command-line arg? Thank you again, I enjoyed this.
@697_
5 months ago
18:13 Where do I get this other workflow?
@见高-y4q
6 months ago
Your workflow is really hard to figure out how to use, even after watching your videos. Can you explain what each part of the workflow is responsible for, and how the various switches in the switcher should be combined to prevent errors?
@Dabble-m4q
6 months ago
Stephan...Are you some type of Immortal from the 7th heaven?
@stephantual
6 months ago
Well, I did get genetically mutated on the 👽mothership, so there's that. On the negative side, I'm not a huge fan of the triple tentacle they replaced my left arm with. I feel pretty self-conscious about it 🐙🐙🐙
@bigmichiel
6 months ago
Interesting video. I'm following along, but haven't got the same results. With all settings and models equal, it should be exactly the same, right? I've double-checked all settings, including the prompt. After the first 12 rendered frames, it switches scene/camera. It seems to happen with all seeds, so I'm guessing I'm overlooking a setting or something. I'm using a batch size of 24 in my Empty Latent Image and a frame rate of 24 in Video Combine.
Edit: Did some more testing. When rendering 48 frames at once, it switches after 24 frames. When rendering 16, it doesn't switch at all. Narrowing it down some more, it seems that if I try a batch_size of 18 or above, it will split the clip halfway through the video.
@stephantual
6 months ago
With absolutely everything identical, it would still be *slightly* different - as per the comment in the video, ComfyUI is non-deterministic even with --deterministic. There's a LOT of heated debate about this; see my video about it at kzitem.info/news/bejne/lZirq5OEhmN-dKg ... I'm staying neutral in that debate 😅
@bigmichiel
6 months ago
@@stephantual I've recently watched a video about the samplers and (non)determinism, and as I understood it, Euler should be a deterministic one. I've (partially) watched the video you linked, and I can see your setup method and results. That's a good experiment to add to my backlog to test out for myself. Thanks for the tip.
@rluzentales
6 months ago
@bigmichiel I get the same issue as you, where any batch_size over 16 frames will switch scenes. Have you had any luck with a solution?
@juliandekeijzer
6 months ago
Was hoping to get this to work, but my Video Combine does not load the video formats that are actually in the video_formats folder. Instead it gives me image/gif and image/webp. I foresee more trouble ahead since I am on a Mac M1. Any ideas what I could do to get the right video formats loaded?
@stephantual
6 months ago
That's weird - VHS Video Combine should list all the formats it has available in the dropdown, regardless of your platform. That said, I don't have a Mac to test it on. Maybe, if it's reproducible, post it on github.com/Kosinkadink/ComfyUI-VideoHelperSuite/issues ? Cheers!
@elislifestyle4605
6 months ago
How do you feel about LTX Studio? I liked your thoughts on Sora.
@stephantual
6 months ago
Never used it! I imagine we'll see a lot of competition in the space as more and more workflow-to-SaaS services pop up - very exciting! 👽
@Fomincev
6 months ago
Not enough VRAM on my 4080 12GB. Any advice, please?
@stephantual
5 months ago
It's likely SUPIR. Set the UNet to 8-bit precision, use a tiled sampler (they now have 4), or just Lanczos upscale. AD-LCM is doing all the work re: temporal consistency anyway.
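[Editor's note] For the low-VRAM questions in this thread: ComfyUI ships with memory-management launch flags you can try before swapping nodes. A sketch, assuming a standard ComfyUI install (check `python main.py --help` on your version for the exact flag names):

```shell
# From the ComfyUI folder: launch with aggressive memory management.
python main.py --lowvram   # offloads parts of the model to system RAM
# or, as a last resort (very slow, keeps almost nothing in VRAM):
python main.py --novram
```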
@kleber1983
6 months ago
Your workflow gives me this error: "Error occurred when executing KSampler (Efficient): mat1 and mat2 shapes cannot be multiplied (2464x1024 and 768x320)". Any idea how to fix this? Thx.
@stephantual
6 months ago
I'm guessing you've got an SD1.5 or SDXL checkpoint loaded trying to leverage an incompatible set of CNs. The way I set up the flow in the download works fine, but if you switch model versions, make sure to adapt your CNs accordingly. The 3 I listed for SDXL Lightning work fine; the download links are on the comfyworkflow pages. Cheers! 👽
@kleber1983
6 months ago
@@stephantual Yes, I'm aware of this issue, and I went over the whole workflow trying to find whether I'm using an XL model by mistake, but to no avail. I figured out, though, that if I disconnect the model loader from the ModelScope T2V loader, everything works fine (with crappy quality, but it works). The problem is that I'm pretty sure I'm using SD1.5 models - I even tried one that I created myself way before the XL models even existed! I can't find out what the problem could be; any idea would be much appreciated. Thx. P.S. Not using any ControlNet.
@stephantual
6 months ago
@@kleber1983 OK - fair enough - join the Discord and post your edited copy on the megathread for support on this, and I'll have a look for you :) tinyurl.com/URSIUM. Cheers!👽
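[Editor's note] The "mat1 and mat2 shapes cannot be multiplied" error in this thread is a plain matrix-shape mismatch: an SD1.5 cross-attention projection expects 768-dim text embeddings, so conditioning of a different width cannot be multiplied through it. A minimal sketch using the shapes from the error message (the model internals are heavily simplified and the variable names are illustrative):

```python
import numpy as np

cond = np.zeros((2464, 1024))  # conditioning with 1024-dim embeddings (wrong model family)
proj = np.zeros((768, 320))    # an SD1.5-style projection expecting 768-dim input

try:
    cond @ proj                # inner dimensions (1024 vs 768) don't line up
except ValueError as err:
    print("shape mismatch:", err)

# With the width the projection expects, the multiplication succeeds:
out = np.zeros((2464, 768)) @ proj
print(out.shape)               # (2464, 320)
```

This is why mixing an SDXL checkpoint with SD1.5 ControlNets (or vice versa) fails at the KSampler rather than at load time.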
@a.akacic
6 months ago
bbl.. _boots up ponyxl_
@stephantual
6 months ago
Oh! 😂😂 Yeah, I had to put an NSFW tag in the neg prompt because it will inherit the properties of whatever model you push in. 👽
Comments: 57