Great insight! I think Stable Diffusion and ControlNet have great potential to replace certain rendering engines. I use them almost every day, but the biggest challenge is fine-tuning and finding the perfect parameters for a really crisp and accurate generation with SD and ControlNet: which model, sampling steps, CFG scale, LoRAs, etc.
@designinput
A year ago
Hey, thanks a lot for your comment! Absolutely agree, it is amazing for initial idea visualization, but as you said, I can't say the same for final views with all the details and different materials. I am working toward that, and hopefully I will share a video about that case too :) I have used the Realistic Vision V2.0, V3.0, and EpicRealism models, between 24-40 sampling steps and a CFG scale of 5-9, and I didn't use any additional LoRAs. Which models or LoRAs do you use?
@abdoulayeidrissa9965
A year ago
@@designinput Are you getting consistent results with those? I used Realistic Vision 2.0 and 3.0, Xarchitectural, Deliberate, and Reliberate, between 25-50 steps. Out of all of these I get good results from Realistic Vision 3.0 with 40 steps and a CFG scale of 15. Lower CFG scales are good, but sometimes the image comes out inconsistent with my prompt; for example, I want the walls to be white but they sometimes come out with a different material, so upping the CFG fixed that for me. As for ControlNet, what settings are you using? For LoRAs I am using XSArchi_129 for glass windows and the Add-Detail LoRA, which can add some crispness to the image. Out of all the samplers I found DPM++ SDE Karras the best.
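The two parameter sets traded in this exchange can be summarized side by side. Below is a hypothetical sketch that captures them as plain Python dicts (the checkpoint, sampler, and LoRA names come from the comments above; the preset structure and the `validate_preset` helper are illustrative, not part of any real tool's API):

```python
# Settings discussed in the thread, written out as plain data for comparison.
# Structure and helper are illustrative only, not a real SD/ControlNet API.

RENDER_PRESETS = {
    # @designinput: Realistic Vision / EpicRealism, 24-40 steps, CFG 5-9, no LoRAs
    "designinput": {
        "checkpoint": "Realistic Vision V3.0",
        "sampler": "DPM++ SDE Karras",
        "steps": 32,          # midpoint of the stated 24-40 range
        "cfg_scale": 7,       # midpoint of the stated 5-9 range
        "loras": [],
    },
    # @abdoulayeidrissa9965: higher CFG to keep materials (e.g. white walls) on-prompt
    "abdoulayeidrissa": {
        "checkpoint": "Realistic Vision V3.0",
        "sampler": "DPM++ SDE Karras",
        "steps": 40,
        "cfg_scale": 15,
        "loras": ["XSArchi_129", "Add-Detail"],
    },
}

def validate_preset(p: dict) -> bool:
    """Sanity-check a preset against typical Stable Diffusion UI ranges."""
    return 1 <= p["steps"] <= 150 and 1 <= p["cfg_scale"] <= 30
```

The practical trade-off the thread describes: a lower CFG scale gives the sampler more freedom (often nicer textures), while a higher one forces closer prompt adherence, at the cost of occasional over-saturation.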
@keiralx
A year ago
Outstanding! Love it, thank you
@designinput
A year ago
Hello, thanks a lot :) So happy to hear that, you are very welcome!
@yasminehab7513
A year ago
Thank you for sharing this 😍
@designinput
A year ago
My pleasure 😊 Happy to hear that you liked it!
@vontainer5109
A year ago
That's amazing, I'll use it on my next university project!!
@designinput
A year ago
Thank you! Happy to hear you liked it
@pablessLoz
A year ago
Great video, thanks
@designinput
A year ago
Hi Pablo, thank you, glad to hear that! You are very welcome!
@ilaydakaratas1957
A year ago
Great video!
@designinput
A year ago
Thank you :)
@MathieuDeVinois
7 months ago
Looks great. Do you have a way to use a photo as a background but have a CAD drawing instead of a sketch? It's maybe more complicated, as camera views of models are difficult to match onto a photo. Will AI understand the different viewpoints and still merge them?
@musatekdal7135
A year ago
I appreciate how you describe the process in detail. I wonder if you are able to describe certain materials/furniture in Stable Diffusion to be more accurate, or is there any script that can help with this? Could you also describe your computer setup a bit, please? I am interested in implementing it in my work.
@designinput
A year ago
Hey, you are very welcome, thanks for the feedback! You can try using multi-ControlNet with segmentation for more control, but you still may not be able to describe all the elements in your design. I am planning to share a workflow about that soon, hope that helps!
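For readers unfamiliar with the multi-ControlNet idea mentioned above: it means running two control units together, one preserving the sketch's line work and one using a segmentation map to pin down per-region materials. A hypothetical sketch of such a configuration as plain Python follows (the `control_v11p_sd15_*` model names and the `seg_ofade20k` preprocessor exist in the ControlNet v1.1 ecosystem; the unit structure and weights here are illustrative, not a real API):

```python
# Hypothetical two-unit multi-ControlNet configuration: the first unit keeps
# the sketch's lines, the second conditions on an ADE20K segmentation map so
# walls/floors/furniture regions can be assigned materials via the prompt.
# In AUTOMATIC1111 these would be two ControlNet units; in diffusers, a list
# of ControlNetModel instances. The dict layout itself is illustrative.

CONTROL_UNITS = [
    {
        "preprocessor": "lineart",               # extract line work from the sketch
        "model": "control_v11p_sd15_lineart",
        "weight": 1.0,                           # lines constrain the output strongly
    },
    {
        "preprocessor": "seg_ofade20k",          # ADE20K semantic segmentation
        "model": "control_v11p_sd15_seg",
        "weight": 0.8,                           # assumption: softer than the line unit
    },
]

def combined_weight(units: list[dict]) -> float:
    """Total conditioning strength across units (quick sanity check only)."""
    return sum(u["weight"] for u in units)
```

Lowering the segmentation unit's weight relative to the line unit is a common compromise: the lines fix the geometry, while the segmentation only nudges region identities.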
@marcinooooo
11 months ago
Hey, once again amazing video! So for the interior AI rendering, which packages do I need to download? (I want to put them on my runpod server and see if it works.)
@designinput
11 months ago
What do you mean by packages, checkpoints? If yes, you can try the Realistic Vision 5.1 or epicRealism ones for the SD 1.5 version.
@eng.mo3tasem127
11 months ago
I've been looking for an explanation like this! Thank you 🙏🏼❤️🩹
@designinput
11 months ago
You’re welcome 😊
@brianz7877
A year ago
Awesome
@designinput
A year ago
Thank you
@firatgunesbalci2743
A year ago
Great video, Ömer. Is the interface Stable Diffusion 2.0?
@designinput
A year ago
Thank you very much, Fırat :) I use the Automatic1111 interface, but if you are asking about the model, all the models I use are generally based on the 1.5 base model; I am waiting for SDXL :)
@cekuhnen
11 months ago
How can I do this with Fabrie? They don't seem to have direct ControlNet access?
@designinput
11 months ago
Hi, it wasn't possible to do that when I shared the Fabrie video, but they just added an inpainting feature where you can edit parts of the images. However, I don't think you can insert your own sketches.
@cekuhnen
11 months ago
@@designinput Yeah, I saw the change; I have to test it. I work mainly with Vizcom.
@acerol
A year ago
Great video! Is Stable Diffusion free?
@designinput
A year ago
Hey, thank you :) Yes, it is absolutely free to use locally.
@MrBoardcube
A year ago
What do you think of Realistic Vision 4? Do you like the features of Vision 3 better?
@designinput
A year ago
Hey, apparently V3 got lots of negative comments, which is why the developer uploaded a new V4. I haven't tested it a lot yet, so I can't say much about it, but I will share my results as a comparison soon :) Thanks for your comment! What do you think about it? Which one did you like more?
@mekkoid
A year ago
Is it possible in MJ?
@designinput
11 months ago
Hey, you can do it for Midjourney-generated images but not for your own images :/
Comments: 33