🎯 Key Takeaways for quick navigation:

00:00 🎬 *The video discusses using LCM in Automatic1111 to generate videos 3 to 5 times faster, focusing on image-to-image video generation, which is simple and doesn't require extra extensions.*
01:12 🎨 *The video demonstrates using DaVinci Resolve and Photoshop to extract video frames and prepare them for image-to-image generation.*
02:48 🖼️ *It shows how to use the LCM LoRA for image-to-image generation and adjust parameters like sampling steps, CFG scale, and ControlNets.*
05:14 🧩 *Setting up TemporalNet and ControlNet for enhanced image control is explained.*
06:52 ⚙️ *The video covers generating frames, checking their quality, and using Topaz Photo Studio for image adjustments.*
09:39 🔄 *Adjusting video speed, retime, and scaling settings in DaVinci Resolve to enhance the final video quality is discussed.*
10:49 🔮 *The video mentions using the IP-Adapter for more style transfer and control in image-to-image generation.*
15:14 🤖 *The LCM LoRA is recommended for faster video and image generation, but it's noted that it may not work well with AnimateDiff and requires experimentation.*
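The img2img settings summarized above (few sampling steps, very low CFG, an LCM LoRA tag in the prompt) can be sketched as a request payload for Automatic1111's built-in web API. This is a minimal illustration, assuming a local A1111 instance launched with the `--api` flag; the LoRA filename and the prompt are placeholders, not values from the video.

```python
import base64

A1111_URL = "http://127.0.0.1:7860"  # assumes A1111 was launched with --api

def build_lcm_img2img_payload(frame_bytes: bytes, prompt: str) -> dict:
    """Build an img2img request for one extracted frame, using
    LCM-style settings: very few steps and a very low CFG scale."""
    return {
        # The LoRA tag is a placeholder -- match the filename you downloaded.
        "prompt": prompt + " <lora:lcm-lora-sdv15:1>",
        "init_images": [base64.b64encode(frame_bytes).decode()],
        "sampler_name": "Euler a",   # or the LCM sampler if your A1111 has it
        "steps": 6,                  # LCM works in roughly the 4-8 step range
        "cfg_scale": 1.5,            # LCM needs CFG around 1-2, not the usual 7
        "denoising_strength": 0.4,   # keep the structure of the source frame
        "width": 512,
        "height": 768,
    }

# To submit one frame (requires the `requests` package and a running A1111):
# payload = build_lcm_img2img_payload(Path("frames/frame00001.png").read_bytes(), "...")
# requests.post(f"{A1111_URL}/sdapi/v1/img2img", json=payload)
```

Batch processing a whole frame folder is then just a loop over the extracted frames, posting one payload per frame.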
@AI-HowTo
10 months ago
Thanks, will try to include these in future videos; will do a reverse operation as well if I get the time later. Thank you!
@ohheyvoid
10 months ago
This is such an awesome tutorial. Just found your channel. Excited to binge-watch all of your videos. Thank you for sharing!
@AI-HowTo
10 months ago
Thank you, hopefully you will find something useful here and some cool learning tips.
@59Marcel
10 months ago
This is so good. AI imaging is so fascinating. Thanks for showing us how it works.
@AI-HowTo
10 months ago
You are welcome. Yes, it's fun and interesting, and it will get better and faster over time.
@razvanmatt
9 months ago
Another great video from you! Thanks a lot for sharing this, great in-depth info!
@AI-HowTo
9 months ago
Thanks for your kind remarks, hopefully it is useful for some.
@aidgmt
10 months ago
I was wondering if I could make my videos smoother.. and here is the way to do it.. You are the best.
@FifthSparkGaming
10 months ago
Wow! Incredible tutorial! So much care and precision. I’m sure this video took a while to make + running your experiments. Thank you!! (Btw, how much VRAM do you have?)
@AI-HowTo
10 months ago
Thanks, true; 8GB VRAM RTX 3070 Laptop GPU.
@michail_777
9 months ago
Hi. If you don't want to wait for the ControlNet models to be loaded and unloaded in Automatic1111, you can go to the settings and set the slider for the CN cache (I don't remember the exact name); then you will have the CN models in memory all the time. It takes more memory, but generation is faster. Also, Optical Flow is available in Deforum, and you will need to insert the input video into CN and into the "init" tab. TemporalNet 2 has also appeared, but in order to use it you need to configure something in Automatic1111. Have a nice day.
@AI-HowTo
9 months ago
Thanks for the info. I don't think it works with 8GB VRAM, unfortunately; indeed, loading and unloading make things take a long time. The TemporalNet 2 file is also very large (5.7GB), which could be an issue on my laptop as well... hopefully soon we get more optimized networks; otherwise, I should start using RunPod more often :)
@RaysAiPixelClips
10 months ago
The latest AnimateDiff update added the LCM sampler.
@AI-HowTo
10 months ago
Thanks for the info, will recheck that on A1111; my recent tests on A1111 were not great, will try again with a fresh install.
@APOLOVAILS
10 months ago
Super cool bro! Thanks a lot! Please do one for ComfyUI 🙏
@Chronos-Aeon
6 months ago
Tried it with SD Forge, it works perfectly... thanks man. Since you all have Python installed, you can use the "moviepy" module to extract the frames of your videos and also to rebuild the video from the generated images afterwards. Edit: I wonder if there is a way to use this in txt2img so we can use OpenPose rather than SoftEdge, to have more freedom over what we want (like the environment).
@AI-HowTo
6 months ago
You are welcome. AnimateDiff works better in txt2img with OpenPose, for example; it gives more freedom over the environment, but it requires more computing power and a better GPU.
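The moviepy approach mentioned in the comment above can be sketched as two small helpers. This assumes moviepy 1.x is installed (`pip install moviepy`); the natural-sort helper matters because a plain lexicographic sort would put `frame10.png` before `frame2.png` when frame names are not zero-padded.

```python
import re
from pathlib import Path

def natural_key(path):
    """Sort key so 'frame2.png' comes before 'frame10.png'."""
    return [int(t) if t.isdigit() else t
            for t in re.split(r"(\d+)", Path(path).name)]

def extract_frames(video_path: str, out_dir: str) -> float:
    """Dump every frame of the clip as zero-padded PNGs; returns the fps."""
    from moviepy.editor import VideoFileClip  # lazy import: optional dependency
    clip = VideoFileClip(video_path)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    clip.write_images_sequence(f"{out_dir}/frame%05d.png", fps=clip.fps)
    return clip.fps

def frames_to_video(frame_dir: str, out_path: str, fps: float) -> None:
    """Reassemble the (edited) frames into a video at the given fps."""
    from moviepy.editor import ImageSequenceClip
    frames = sorted((str(p) for p in Path(frame_dir).glob("*.png")),
                    key=natural_key)
    ImageSequenceClip(frames, fps=fps).write_videofile(out_path)
```

Usage would be `fps = extract_frames("input.mp4", "frames")`, then run the frames through img2img, then `frames_to_video("output_frames", "result.mp4", fps)` so the rebuilt clip keeps the original frame rate.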
@julss6635
10 months ago
Nice tutorial bro!
@joelandresnavarro9841
10 months ago
Good video. I was just wondering what it would be like to make animations with the LCM LoRA. Do you know how an animation could be made with a specific face while preserving its hair, beard, eyebrows, lips, nose... Would I have to make a LoRA (like you did in another video with Elon), or could I do it with an image?
@AI-HowTo
10 months ago
Yes, it's possible. Currently the IP-Adapter ControlNet allows you to morph a face; check kzitem.info/news/bejne/zGqQvX56b4lpl2U where I explain an example with IP-Adapter: you just choose the model to be IP-Adapter Face and put the face instead of the full body in the first ControlNet... or use face-swap technology such as ReActor, as in kzitem.info/news/bejne/yK1_ymmEjoB8d6A ... Making a LoRA for a person really takes time and lots of experimentation; still, the best results are achieved using a LoRA with After Detailer (but it can take days and lots of trials to achieve a perfect LoRA for a person).
@aivideos322
10 months ago
Use ReActor face swap, formerly Roop.
@breathandrelax4367
8 months ago
It's possible to have LCM in A1111 by adding a few lines of code in two of A1111's files.
@AI-HowTo
8 months ago
Yes, I did that. I saw a post somewhere and followed it a while back; I didn't find the results to be an improvement over Euler a.
@fortniteitemshop4k
10 months ago
Sir, please tell me how to create videos like bryguy.
@krupesh2
10 months ago
I am trying to create LoRAs for characters and clothes separately. I have seen both of your LoRA videos on clothes and characters. Are there any sure-shot settings for creating a character LoRA that give the best accuracy in the result image? I need to automate the character LoRA process so that I just select 5-6 images of the person and the rest can be automated; the same goes for the clothes LoRA. Can you suggest something? Is it possible? I am training a LoRA to get the most realistic and accurate face, but some face-swap results are better than the generated images. Any suggestions?
@AI-HowTo
10 months ago
The IP-Adapter ControlNet allows face swapping and style application; you might want to Google that. Unfortunately, based on what I have seen, LoRA training doesn't always produce great results and requires testing different settings on some occasions, but when done right it yields better results than face swapping... I don't know of any tool for automating the process either. LoRA training in general may take time, because the same settings may not work for different datasets; even results produced by one checkpoint might be better than another, so lots of testing is required to produce something really good with a LoRA.
@krupesh2
10 months ago
Using IP-Adapter ControlNet inpainting, right? But that is a manual process of masking out the face and clothes. I think I will need to find a face edge-detection model and pass the image through it, and then the masked image can go through img2img. That's how I can automate the process. Let me know if you have another approach.
@musigx
10 months ago
@AI-HowTo Hey, any chance people can contact you for a proper business discussion? :)
@AI-HowTo
10 months ago
Sorry, I cannot at this time.
@musigx
10 months ago
@AI-HowTo Thx for your answer!
@CGFUN829
2 months ago
Wow, looks like just what I need, thank you.
@dragongaiden1992
4 months ago
Friend, can you do it with XL? It is very difficult to follow along if you use SD 1.5; basically everything works differently from your video, and I get many errors and deformed images.
@AI-HowTo
4 months ago
True, XL is certainly better, but unfortunately I still don't use it on my 8GB video card.
@dreamzdziner8484
4 months ago
How could I have missed this gem of a video for so long? Thank you so much for this, mate 💛🤝😍
@AI-HowTo
4 months ago
Glad you found it useful, you are welcome.
@souravmandal9264
6 months ago
You haven't mentioned the model. Also, what should be put in the VAE folder?
@AI-HowTo
6 months ago
The video just focuses on how to do things; the model doesn't matter, any model can be used. Some models don't require a VAE, so we usually keep the VAE set to Automatic, or select a specific VAE depending on the model's specs, which tell us whether we should use a VAE or whether the VAE is already baked into the model... In this video I used a normal model, aniverse v1.5... Currently the LCM sampler is also officially supported in A1111, and there are also LCM models that don't need a LoRA to be used.
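For scripted workflows, the "keep the VAE on Automatic or pick a specific one" choice described above can also be set through A1111's options endpoint. A minimal sketch, assuming a local instance started with `--api`; the endpoint and the `sd_vae` option name come from A1111's built-in API, and `"Automatic"` / `"None"` are the same choices shown in the UI dropdown.

```python
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # assumes A1111 was launched with --api

def set_vae_request(name: str = "Automatic") -> urllib.request.Request:
    """Build the POST request that switches the active VAE.
    `name` is 'Automatic', 'None', or a filename from models/VAE."""
    body = json.dumps({"sd_vae": name}).encode()
    return urllib.request.Request(
        f"{A1111_URL}/sdapi/v1/options",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To apply it against a running instance:
# urllib.request.urlopen(set_vae_request("Automatic"))
```

With a model that has the VAE baked in, `"Automatic"` (or `"None"`) is the safe choice; otherwise drop the recommended VAE file into models/VAE and pass its filename.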
@gu9838
9 months ago
You can still tell it's AI. If they could get rid of the flicker and the changes, that would go so well, but it's progress for sure! In a year or two, yeah lol.
@AI-HowTo
9 months ago
True, it will take a few years, even at the current pace of progress, before flickering disappears. But I think future videos will be 3D-generated and animated for perfect consistency and zero flickering, because Stable Diffusion will always produce some flickering, even with more complicated animation methods using AnimateDiff and other tools in ComfyUI.
@breathandrelax4367
8 months ago
By the way, on my end it kept iterating on the same picture for the whole set of frames that were in the Resolve output... any idea where that comes from?
@AI-HowTo
8 months ago
Not sure; double-check that you are using the batch folders properly.
@breathandrelax4367
8 months ago
@AI-HowTo Thanks for your answer. Well, I did check, as I separated the input folder and the output folder. I'll give it a new shot with fewer frames, because it took a while to process. Compared to your workflow, I added ADetailer; do you think it could come from there?
@tyalcin
10 months ago
Hi there & thanks for the tut. Quick question: why does the output image look better in ComfyUI?
@AI-HowTo
10 months ago
There you can use the LCM sampler, which gives a slightly better image than Euler a. In A1111 there are some LCM sampler implementations, but they are still not part of the official release of A1111.
@dlfang
10 months ago
What would happen if you trained a LoRA using LCM? 😏
@AI-HowTo
10 months ago
Not sure; I tested with other LoRA models and it works well... The LCM LoRA is trained using their own training script, so I guess if we train using their script we just get a LoRA that can help generate images faster and generate a subject at the same time, I think; I have not tried it.
@sigitpermana8644
10 months ago
I'm not good with logic and prompts, but can you explain this exact A1111 method in ComfyUI? Thank you.
Comments: 52