Superb and clean tutorial. Also the attention to sharing all the links and files is THE BEST.
@jerrydavos
A month ago
Glad you liked it
@saymew1878
A month ago
Man, the quality is incredible!
@BuckwheatV
21 days ago
Omg, it works! Such a complex process, but very well organized, and it actually works! Thank you!
@jerrydavos
20 days ago
Great to hear!
@张辰-r5o
A month ago
This is awesome and very detailed, it saved me a lot of trouble, thumbs up and thanks for your hard work.
@INTELIGENCIAARTIFICIAL-eb7zq
A month ago
AMAZING!!!! CONGRATULATIONS BRO
@jerrydavos
A month ago
Thank you so much 😀
@jittthooce
A month ago
keep 'em coming
@matsnilsson7922
A month ago
Brilliant ! Thank you!
@jerrydavos
A month ago
You're welcome!
@johnriperti3127
A month ago
This is insane!
@ParvathyKapoor
A month ago
Thanks a lot
@MajomHus
A month ago
Great tutorial!
@jerrydavos
A month ago
Thank you!
@leolis78
A month ago
Great video!
@jerrydavos
A month ago
Thank you
@ZainSarwar5
A month ago
Error: Motion module 'motionModel_v01.ckpt' is intended for SD 1.5 models, but the provided model is type SDXL.
@jerrydavos
A month ago
Use only SD 1.5-compatible models in this workflow; SDXL models won't work.
@t8levin
A month ago
Is it possible to change the lighting without changing the main subject in the video? It creates too many deformities, and it's not really usable for professional work.
@jerrydavos
A month ago
Yes. Unfortunately, it re-renders the video from scratch with AnimateDiff, which introduces AI artifacts like morphing, bugged faces, deformities, etc. This workflow might not be a good fit for professional projects yet.
@t8levin
A month ago
@@jerrydavos Bummer... it would be an absolute game changer for movies and music videos.
@rosederrick9863
A month ago
"Rebatch" doesn't work when loading long videos. "Load Video (VHS)" still loads all frames into RAM and then runs out of memory. I have tried "Meta Batch Manager" with "Load Video (VHS)" and "Video Combine (VHS)", but that only generated discontinuous scenes. By the way, I have 32 GB of RAM, which can only load 20 to 24 frames to process. I'm still figuring out how to generate long videos.
@jerrydavos
A month ago
Hey, you have to follow the video from 7:43 to extract frames. If you are still facing RAM issues while extracting the passes, you can use the passes exporter workflow from here: drive.google.com/drive/folders/1hLU5MhikUe6SnEnEPQc3tKTaNGmFT6p2 (how it works is explained here: www.patreon.com/posts/v4-0-controlnet-98846295). Extract the passes you need for the IC-Light batch workflow (depth, mask, and frames), then follow the video as normal from 11:00.
@Fucatstory
A month ago
Hello bro, I have an error in the KSampler: T2IAdapterAdvanced.control_merge_inject() missing 1 required positional argument: 'output_dtype'
@Fucatstory
A month ago
Do you know why? Please help me 🐱
@jerrydavos
A month ago
In the Manager, press: 1) Update ComfyUI, 2) Update All. After updating, it should be fixed.
@Fucatstory
A month ago
@@jerrydavos I have updated everything and installed the missing nodes, but it seems the workflows in my friend's videos all have this problem. 😢
@jerrydavos
A month ago
@@Fucatstory Hey, please check whether all the linked models are there and that only SD 1.5 models are used. If the problem isn't solved, contact me on Discord (ID: jerrydavos) and I'll help you from there.
@Fucatstory
A month ago
@@jerrydavos Yes, so many thanks ❤️
@sam-ss9rn
A month ago
Thank you, I am trying this one. I have a question: whenever I run each batch (50), a small difference occurs. Is there any way to avoid this difference?
@JosefK2275
A month ago
The background changes too much even when it's off. I am not using a girl but a tennis shoe (I bypassed the face-fix nodes); could that be the reason?
@jerrydavos
A month ago
FaceFix doesn't change the scene much. You can try swapping the Depth ControlNet model and its processing node for the LineArt ControlNet model and LineArt preprocessor, and play with the strength and end percent; maybe that can help your situation.
@holly1997-AI
A month ago
So cool!! Thanks
@davimak4671
A month ago
Bro, can you make a LivePortrait + vid2vid workflow? It would be an awesome tutorial.
@jerrydavos
A month ago
Yes, I'm testing it; I'll post when I get some good results.
@byeongmokjang4826
22 days ago
It's so cool. However, the IC Raw KSampler is throwing an error: "KSamplerAdvanced: The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". How can I solve it?
@jerrydavos
22 days ago
The light map should have at least as many frames as the source video. Example 1: source video = 5 seconds, light map video = 1 second. Result: the error "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". Example 2: source video = 5 seconds, light map video = 5 seconds. Result: successful render. Hope this makes it clear.
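The frame-count rule above can be checked before queuing the render. A minimal pure-Python sketch (the helper name is hypothetical, not a ComfyUI node; the counts correspond to Example 1 and 2 at some assumed frame rate):

```python
def check_frame_counts(source_frames: int, light_map_frames: int) -> None:
    """Fail early, before the KSampler does, if the light map is too short."""
    if light_map_frames < source_frames:
        raise ValueError(
            f"The size of tensor a ({source_frames}) must match the size "
            f"of tensor b ({light_map_frames}) at non-singleton dimension 0"
        )

# Example 1: 5 s source vs. 1 s light map -> raises the mismatch error
try:
    check_frame_counts(source_frames=20, light_map_frames=10)
except ValueError as err:
    print("pre-flight failed:", err)

# Example 2: equal (or longer) light map passes silently
check_frame_counts(source_frames=20, light_map_frames=20)
```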
@byeongmokjang4826
22 days ago
@@jerrydavos I am using the source files you provided, helenpeng.mp4 and LightMap.mp4. The two are equal at 20 seconds. Do I need to set frame_load_cap to zero?
@Bemyself1705
A month ago
Hi, when I started rendering, ComfyUI showed me an error saying "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". I used ChatGPT to fix it, and it kept trying to fix the execution.py code, which didn't work at all. Have you had this kind of issue before? If you know how to fix it, I would really appreciate it. Thanks for your sharing.
@jerrydavos
A month ago
The number of light-map frames should also be equal to the source video's...
@HaoYang-if9tf
A month ago
so coool!!
@JosefK2275
A month ago
I don't get why the file output node has a # symbol. Can I change it to a normal save path?
@jerrydavos
A month ago
Yes, you can. Just copy and paste the folder path where you want to save the video or the images.
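If the `#` in the output path acts as an incrementing counter placeholder (as it does in many batch savers; this is an assumption, not confirmed above), the behavior amounts to picking the next unused index in the target folder. A rough sketch with a made-up helper name:

```python
import os
import tempfile

def next_output_path(folder: str, prefix: str = "render", ext: str = ".png") -> str:
    """Pick the next free index, the way a '#'-style counter placeholder
    avoids overwriting earlier renders in the same folder."""
    existing = [f for f in os.listdir(folder)
                if f.startswith(prefix) and f.endswith(ext)]
    return os.path.join(folder, f"{prefix}_{len(existing):05d}{ext}")

with tempfile.TemporaryDirectory() as out_dir:
    first = next_output_path(out_dir)   # render_00000.png
    open(first, "w").close()            # pretend we saved a frame
    second = next_output_path(out_dir)  # render_00001.png
    print(os.path.basename(first), os.path.basename(second))
```

Replacing the placeholder with a fixed path simply means every run writes to the same file names, so earlier outputs get overwritten.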
@SiMBa27392
16 days ago
By the way, I noticed that the placement of prompts strongly affects the result, whether a word is written at the beginning or at the end....
@jerrydavos
15 days ago
Yes, you are correct; the words written at the start are prioritized more.
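As a toy illustration of that front-loading effect (this is not how CLIP actually weighs tokens; the linear decay here is purely illustrative), one can picture earlier prompt words receiving larger weights:

```python
def position_weights(tokens: list[str]) -> dict[str, float]:
    """Illustrative only: assign a linearly decaying weight so the
    first token weighs the most and the last the least."""
    n = len(tokens)
    return {tok: round((n - i) / n, 2) for i, tok in enumerate(tokens)}

w = position_weights(["masterpiece", "portrait", "blue", "background"])
print(w)  # {'masterpiece': 1.0, 'portrait': 0.75, 'blue': 0.5, 'background': 0.25}
```

The practical takeaway matches the comment above: put the concepts you care about most at the front of the prompt.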
@user-eq9ge3vm5y
A month ago
ty😇
@jonrich9675
2 days ago
Make the same video without the lighting; it's nothing but massive problems. Also, why is ComfyUI Impact Pack so buggy? The newest version just refuses to work.
@sdanimationart
A month ago
Error occurred when executing PreviewImage: index 0 is out of bounds for dimension 0 with size 0
@jerrydavos
A month ago
Some images could not be generated. Please test on a different video with a human character to see if that video is the problem.
@sdanimationart
A month ago
@@jerrydavos It worked, thanks
@hammad__official8756
A month ago
The Manager button is not shown for me; how do I install missing nodes?
@jerrydavos
A month ago
Hey, sorry if I missed out the Manager. Download it from here: github.com/ltdrdata/ComfyUI-Manager and put it in ComfyUI > custom_nodes
@hammad__official8756
A month ago
@@jerrydavos Worked, thanks
@user-pp9xw9mc5k
28 days ago
I can't render 5 or 10 second videos; it only allows under 1 second. Why?
@jerrydavos
28 days ago
Set the frame_load_cap from 10 to 0 in the Load Source Video node to render all frames.
@user-nn3kf9wr5m
A month ago
Please help, how do I fix this error: TypeError: T2IAdapterAdvanced.control_merge_inject() missing 1 required positional argument: 'output_dtype'
@jerrydavos
A month ago
Hey, please update ComfyUI and all the other nodes, especially the controlnet_aux node.
@user-nn3kf9wr5m
A month ago
@@jerrydavos Hello, I updated everything, but it gives the same warning again.
@user-nn3kf9wr5m
A month ago
@@jerrydavos I updated, but it gives the same warning again. Please help.
@user-eq9ge3vm5y
A month ago
I'm not very familiar with IC-Light; can it be used with LCM?
@jerrydavos
A month ago
You will need to change the scheduler and sampler steps; it's a bit experimental. I have not tried it yet, but others in the community have successfully used LCM in this workflow.
@ademayaashari1393
A month ago
Why does my 13-second test video produce only 1 second of output?
@jerrydavos
A month ago
In the Source Video input node, change the frame_load_cap from 10 frames to 0 to render all frames.
@ademayaashari1393
A month ago
@@jerrydavos Okay. Because I have low VRAM, I decided to render in batches of 10 frames, but the background is not the same from batch to batch. How do I get a consistent background?
@user-kw3fz4uw9z
17 days ago
Why is the video I generate very short, and what parameters do I need to modify?
@jerrydavos
17 days ago
The load cap is set to 10 frames in the Load Video node. Increase it to how much you need, or set it to 0 to render all frames. The light map video should also be the same length or longer, otherwise it will give an error.
@user-kw3fz4uw9z
17 days ago
@@jerrydavos I see, thanks!
@BuckwheatV
13 days ago
By the way, I was trying to figure out how to decrease the level of stylization so my character would look closer to the original, but I really couldn't; forgive my newbieness 😅 Could you please share a hint?
@jerrydavos
10 days ago
Hey, using the LineArt or Tile ControlNet would get you closer to the original, but it's a complicated edit, and it would also ruin the light map.
@BuckwheatV
9 days ago
@@jerrydavos Thank you, will try it!
@user-rc9tl6lc3l
A month ago
Is there a way to adjust the strength of checkpoints on this node? I can't find denoising strength in the KSampler 😭😭
@jerrydavos
A month ago
The Start Step and End Step work as the denoising; it's an advanced KSampler setting. You have to experiment to find which values work for you.
@Ella-book-714
27 days ago
I want to ask: where are the light-source materials from?
@jerrydavos
24 days ago
They can be made using simple shapes animated in After Effects. Otherwise, you can search for contrasting geometric-pattern animation videos on stock websites like Shutterstock, Getty Images, Pexels, Pixabay, etc. I've also included some sample light maps in the workflow link folder here: drive.google.com/drive/folders/1bFfBs8mkN1HLtT1Xy6wsuOV4jl2WqiO4
@Lucas-uk6fj
A month ago
Where can I find the original video of her? Thank you! It can help everyone; I succeeded. Issue News: [SAMLoader#2] The issue where the SAMLoader of the ComfyUI-YOLO node conflicted with ComfyUI-Impact-Pack has been patched. Please update ComfyUI-YOLO to the latest version.
@jerrydavos
A month ago
Hey, I've also mentioned the sources in the description, thanks. Here are the links: 1) www.tiktok.com/@monominjii 2) instagram.com/reel/C3FyWgYIc_x/ 3) www.youtube.com/@HelenPeng 4) instagram.com/p/C4Lih8DIhBq/ 5) instagram.com/reel/C19CswgrLD3/ Some are unknown...
@calvinherbst304
A month ago
Help! First off, thank you so much for the tutorial. I can tell you put a lot of effort into not only the project itself but also the resources for sharing it with us. I got everything set up and working correctly and ran a few quick generations to make sure all the models were installed. I then updated my ControlNet custom nodes and now, even when I revert to your original workflow, I get the error: Error occurred when executing ACN_AdvancedControlNetApply: ControlBase.set_cond_hint() takes from 2 to 4 positional arguments but 5 were given. Any ideas? Thanks!
@jerrydavos
A month ago
Hey, I updated all my nodes to check if any errors come up, but it's working fine on my end. Check: 1) Only SD 1.5 models are used in the ControlNet model loaders; SDXL ControlNets can cause this. 2) The CLIP Text Encode nodes and ControlNet nodes are linked properly, with no floating nodes; due to some bug they can get corrupted, so download the original workflow again and test. 3) Disconnect the optional mask input from BOTH ControlNets and test; if this fixes it, the masks are not being created properly. 4) Replace the smZ CLIP Text Encode++ nodes with the normal default CLIP Text Encode, then check. Hopefully the above helps!
@salomahal7287
A month ago
Hi, I would love to make this workflow work for me, but I have a couple of problems. The output is heavily altered and looks really trippy when I simply input a video with your settings, disable all LoRAs at the start, and press Queue. There are no errors, but the output is nothing at all like the source footage. Also, with the load cap set to 10, it outputs only 5 frames?
@jerrydavos
A month ago
1) Make sure the light map also has the same number of frames as the source video, or more. 2) Check that skip frames is 0 if you want the render to start from the beginning. As for the trippy part: this workflow re-renders the frames from scratch using the AI models, so the usual AI artifacts like bugged hands and faces will surely appear in the output.
@salomahal7287
A month ago
@@jerrydavos Hey, thanks for the reply. It seems the weirdness came from the Upscale Image node of the stationary light map not being set to crop. One thing to add for the future: I'd recommend implementing keyboard shortcuts for the groups so you don't have to scroll through them every time. But a big thank you, man!
A month ago
I can't see the "Manager" and "Share" buttons.
@jerrydavos
A month ago
Install the Manager from here: github.com/ltdrdata/ComfyUI-Manager. It's a great way to install nodes.
A month ago
@@jerrydavos Thank you 😍
@MsParkjinwan
A month ago
Can you use this workflow to create a video featuring a specific anime character?
@jerrydavos
A month ago
Maybe possible with LoRAs...
@DragonEspral
A month ago
show
@ESGamingCentral
A month ago
If you don't mind my asking, how much VRAM does this use?
@jerrydavos
A month ago
1) Minimum 8 GB for the img2img workflow. 2) Vid2vid may require more. I render with the img2img workflow in small batches; I have 8 GB of VRAM.
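Rendering in small batches, as mentioned above, just means walking the frame sequence in fixed-size chunks so each pass fits in limited VRAM/RAM. A minimal sketch (the batch size and file names are illustrative, not values from the workflow):

```python
def batches(frame_paths: list[str], batch_size: int = 50):
    """Yield fixed-size chunks of the frame list; each chunk is one
    render pass, keeping peak memory bounded."""
    for start in range(0, len(frame_paths), batch_size):
        yield frame_paths[start:start + batch_size]

frames = [f"frame_{i:05d}.png" for i in range(120)]
print([len(b) for b in batches(frames)])  # [50, 50, 20]
```

Note that AnimateDiff only sees one chunk at a time, which is why backgrounds can drift between batches, as another commenter observed above.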
@asheronscall1234
A month ago
What's the source for the clip at 0:25 ?
@jerrydavos
A month ago
Here: instagram.com/reel/C3FyWgYIc_x/
@asheronscall1234
A month ago
@@jerrydavos Thanks!
@Spindonesia
A month ago
Bruh, what RTX do you use? Can you make a tutorial for WebUI Auto1111?
@jerrydavos
A month ago
I have an RTX 3070 Ti laptop GPU (8 GB). It's a complicated workflow and can only be built with nodes; it can't be made in A1111 yet.
@calvinherbst304
A month ago
@@jerrydavos You are a hero for doing this with 8 GB of VRAM. I'm on a similar setup and appreciate that this workflow can run on a low-VRAM GPU, and that you also point out which settings help on low VRAM. Keep it up!
Comments: 97