I think it's really good, and I can see its evolution in my head as a visualiser. It feels like one day you will really be able to load in custom pre-written scripts that perform very specific functions, to bring it even closer to the experience of working for a client. Basically, visualisation will become a bit like computer programming, not necessarily quicker or easier.
@designinput
1 year ago
Hi, thanks for your comment. Well said, totally agree!
@mukondeleliratshilavhi5634
1 year ago
GPT Auto
@ilaydakaratas1957
1 year ago
Such useful tools!! I will definitely try it out! Thank you for the video!! Also, that was an interesting pavilion model.
@designinput
1 year ago
Hey there, thanks for your support and lovely comment ❤❤ I hope you liked the pavilion :)
@peterpanic7019
1 year ago
Thanks for your great quality videos. I just watched the latest ones about AI and image generation, can't wait to try them out. Hope your channel grows :)
@designinput
1 year ago
Hey, thank you so much for your support ❤ Please let us know what you think after you try it out :)
I have become a fan of your work. I will spend the next holidays of this month on this AI series. Thanks!
@designinput
1 year ago
Hi, thanks a lot for your lovely comment and feedback!
@mkemaladro5942
5 months ago
Very nice work. I'm a student trying to learn this and just stumbled on your video. It's constructive and informative, keep up the good work, sir!!!
@B-water
1 year ago
A gift from heaven... a million thanks 😃😃😃
@designinput
1 year ago
Thanks for your great comments!
@sherifamr4160
1 year ago
Love the way you explained it: to the point and easy to follow. I do have a question, and hopefully you will read my comment. If you already have materials on your pavilion, would that somehow steer the rendering process toward what we want, acting as extra parameters? I hope I am making sense. Again, thank you so much; I love that you are sharing your knowledge with us, it shows how amazing you are as a person.
@designinput
1 year ago
Hey, thanks a lot for your lovely comment! Unfortunately, it is not possible to use materials as a parameter at the moment, but I am sure we will soon have more control over this workflow. Thanks a lot for your kind words!
@NicoChin
1 year ago
If you tell the client that the last picture is man-made, then tell them the same picture was created by AI, and the client's attitude does not change, then AI will really change the world.
@firatgunesbalci2743
1 year ago
Great videos 👍🏻👍🏻👍🏻 Can you explain a SketchUp workflow as well?
@designinput
1 year ago
Thanks a lot!
@reflections191
1 year ago
Very well explained. Thanks for the great video!
@designinput
1 year ago
Hey, thanks for your lovely comment!
@amazingsound63
1 year ago
Scary for future job opportunities.
@JJSnel-uh3by
1 year ago
I love the setup, but the voice is just too funny xD
@designinput
1 year ago
:)
@emekachime1089
1 year ago
Looking forward to your next video on CLASSICAL RENDER VS AI RENDER. 👍
@designinput
1 year ago
Hey, thanks a lot for your lovely comment! It will be out soon :)
@MDLEUA
1 year ago
Great tutorial. I followed Ambrosini's videos, but I like this format more!
@designinput
1 year ago
Hey, thank you! Glad to hear that you liked it! Did you have a chance to try it?
@HannesGrebin
1 year ago
Wizard! Thank you so much for your concise introduction and other videos. Just came along from the Parametric Architecture course of Arturo Tedeschi, who you might know (the Grasshopper guy).
@designinput
1 year ago
Hi, thanks a lot for your lovely comment and feedback!
@pranayyalamuri3127
1 year ago
Thanks for the content ❤
@designinput
1 year ago
Hey, thanks a lot for your great comment and support!
@ilhan1936
1 year ago
That's really great, thanks for the video! Eline sağlık arkadaşım (well done, my friend) :)
@designinput
1 year ago
Hi Ilhan, thanks a lot for your lovely comment :)) ❤❤
@niirceollae2
1 year ago
Wow... that is insane. I have to try it now.
@designinput
1 year ago
Hey, thanks for your lovely comment! Please share your experience after you try it out, and feel free to ask if you have any problems.
@dkn822
1 year ago
Thank you for all this amazing information and resources, I will definitely use this for my projects. Subscribed and eager to watch your upcoming videos! Keep it up!
@designinput
1 year ago
Hey, thanks a lot for your lovely comment and support! I am happy to hear that you liked it! Please share your experience with me once you try it out!
@william0916
1 year ago
Thank you for sharing this fabulous workflow!! I am about to try it out, and I'm wondering if there are any newer extensions or developments you would suggest we use (since this video is from April, I'm not sure if there's anything new in these three months!). Thank you in advance and have a nice day :)
@designinput
1 year ago
Hey, thanks a lot for the feedback! Of course, there are lots of new developments happening every day; I am trying to stay as updated as I can and share what I learn. In terms of this specific workflow, there have been major new updates for both Stable Diffusion and the Grasshopper extension, but both should still work fine!
@borchzhang2211
1 year ago
How do you handle the parameter settings for indoor views so the result better aligns with the model?
@designinput
1 year ago
Hey, thanks for your comment! For indoor views, you can try the Depth model too. Is there any specific parameter you want to ask about? Maybe I can help better with that one :)
@tatianagavrilova2252
1 year ago
It looks like magic! Thanks a lot!
@designinput
1 year ago
Hi, thanks a lot, glad you liked it! You are very welcome!
@Masoud.Ansari
1 year ago
Thank you for sharing, this is awesome 👌
@designinput
1 year ago
Hey, thanks a lot! Glad to hear that you liked it :)
@Masoud.Ansari
1 year ago
@Design Input you're welcome, bro
@DannoHung
1 year ago
Backing the rendered image out to a textured and lit scene is probably the next step, hah!
@designinput
1 year ago
👍
@Albert_Riseal
1 year ago
Awesome! I like it, thanks. Please make a tutorial using Blender, if possible.
@designinput
1 year ago
Hey, thanks a lot!
@motivizer5395
1 year ago
Amazing video. Can you make a video about this process for SketchUp as well?
@designinput
1 year ago
Hi, thanks for your comment and suggestion! I will definitely try it out and share the results!
@moaazaldahan1175
1 year ago
Thank you very much!
@designinput
1 year ago
Hey, you are very welcome!
@mukondeleliratshilavhi5634
1 year ago
I think it's a great tool for rapid prototyping with fewer images. It unlocks more possibilities and gives us and the client more variety with less time and energy. The biggest hope is that we arrive at a final image we might not even have thought possible before. But for a final image, I think the old method is still king. Who knows, this time next year it might be a different story. Will I use it for my next project? Oh yes, but the Blender version; it's always best to get in early with new technology.
@designinput
1 year ago
Hey, thanks for your comment; I totally agree! Hmm, that's interesting; why do you prefer Blender specifically?
@mukondeleliratshilavhi5634
1 year ago
@@designinput There are a few reasons. 1) Being open source, it was easy to access without restrictions and to invest time and resources in it. I'm a freelancer/business owner, so it is important that I run as lean as possible. 2) Rapid development: it can do a lot of things and is ever expanding its reach. I'm able to complete a project in one piece of software without having to hop to another. Yes, it's not as strong as Rhino or Max, but it gives great quality. 3) The community: they drive the development and education of the software, so it's sort of owned by us. Think of the amount of tutorials, add-ons, and stores available. There is more, but let me park it here.
@СтепанКаштанов-в2с
1 year ago
I need a plugin that can give a million likes to this video 👍👍👍
@designinput
1 year ago
Hey, thanks a lot for your lovely comment!
@firatgunesbalci2743
1 year ago
When I first saw the teaser, I thought that you had used ArkoAi.
@designinput
1 year ago
Hey, haha, yes, that's the most "popular" one nowadays, but I feel like you don't have much control over it. I will share a video soon comparing different AI render alternatives. Thanks for your comments!
@METTI1986LA
1 year ago
It's actually good, but I'd rather have control over the textures and put them where I want them; it's really not that hard... Of course it takes a bit more time, but why would you need 1000 renders just to get overwhelmed by the choices you have?
@MertMert-g7c
7 months ago
I get a "no data" problem when I connect the 2.24 LaunchSD component to the panel; how can I solve it?
@dianaallaham2801
9 months ago
Since your video there has been an update to the Ambrosinus toolkit, and for some reason I cannot get the port to be available. Do you happen to know what inputs should go into LaunchSD, as it has many more inputs now?
@azimbekibraev1249
5 months ago
Selam aleykum, Omer! Ambrosinus has updated and your sample GH file no longer works. Could you please share the updated version, if this workflow is still relevant? Thank you in advance.
@韩鹏坤
1 year ago
2023-07-01 22:55:51,129 - ControlNet - WARNING - Guess Mode is removed since 1.1.136. Please use Control Mode instead. What should I do?
@designinput
1 year ago
Hello, I think it should still work, but if it doesn't, update your ControlNet extension and that should solve the issue. Thank you!
@adel.419
1 year ago
I have followed everything in the video, but when I tried my own model and hit the generate button, the AleNG-Ioc component turned red and doesn't generate anything, and the panel connected to the info output says "No data was collected", even though the viewport appears in the LB image viewer.
@user-ae5pa
1 year ago
Soooooo good!
@designinput
1 year ago
Hi, thanks a lot for your great comment! ❤
@arv3ryn
1 year ago
Great video! Also, what are your computer specs? I have a basic laptop and I'm wondering whether I can run this.
@designinput
1 year ago
Hey, thanks a lot for your lovely feedback! I am using a laptop with an RTX 3060 (6GB VRAM) and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important part is the GPU. In a couple of days I will share another workflow showing how you can use Stable Diffusion without your own computer.
@zafiriszafiropoulos5346
1 year ago
Hi there. I only have Rhino 6, and the Ambrosinus tool is only available for Rhino 7. Is there another way?
@mrezaforoozandeh520
9 months ago
Thanks, but when I click the Start button, webui-user.bat won't run with --api. I edited the .bat file, but after clicking Start it still isn't run that way, and the .bat file is changed back to the original.
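For readers hitting the same warning: since sd-webui-controlnet 1.1.136, the old Guess Mode maps to a `control_mode` field when you call the web UI API directly. Below is a minimal sketch of one ControlNet unit in an API payload; the endpoint structure and field names come from the sd-webui-controlnet API, while the model name, prompt, and values are illustrative assumptions to check against your installed version.

```python
# Sketch of one ControlNet "unit" for the AUTOMATIC1111 web UI API
# (requires the sd-webui-controlnet extension and the --api flag).
# control_mode replaces the removed Guess Mode:
#   0 = balanced, 1 = prompt is more important, 2 = ControlNet is more important
# The model name below is an assumption; list yours via /controlnet/model_list.
def controlnet_unit(image_b64: str, model: str, control_mode: int = 0) -> dict:
    return {
        "input_image": image_b64,   # base64-encoded viewport capture
        "module": "depth",          # preprocessor
        "model": model,
        "weight": 1.0,
        "control_mode": control_mode,
    }

unit = controlnet_unit("<base64 png>", "control_v11f1p_sd15_depth", control_mode=2)
payload = {
    "prompt": "timber pavilion, overcast light",
    "alwayson_scripts": {"controlnet": {"args": [unit]}},
}
```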
@soitalwaysgoes
1 year ago
Hello! I checked out your Instagram, and I would die for a tutorial on how to do those veil textures you made!
@designinput
1 year ago
Hi, oh, thank you for your lovely feedback. Happy that you liked them ❤ I created them with Midjourney v5. Sure, I will do a video about it soon!
@shinndin
1 year ago
Amazing!
@designinput
1 year ago
Hi Dina, thanks a lot for your excellent feedback ❤❤
@ezzathakimi2201
1 year ago
Please make a video on how to use it with 3ds Max + Corona.
@hopperblue934
1 year ago
Great, bro 💖💖💖
@designinput
1 year ago
Hi, thanks a lot for the lovely feedback!
@infographie
1 year ago
Excellent!
@designinput
1 year ago
Hi, thank you!
@darkrider897
1 year ago
Hi sir, I was stuck at 2:28 when you clicked on the administrator window. I tried to do it by right-clicking webui-user.bat and then clicking "Run as administrator". However, it just flashes and nothing happens. How do I solve this problem?
@designinput
1 year ago
Hey, you don't need to run the webui-user.bat file as administrator; you need to run Rhino as administrator. And make sure to add the --api parameter to the .bat file. If you can't start Stable Diffusion inside Grasshopper, you can just run it manually, and if you have the --api command, it should automatically connect to the Grasshopper plugin.
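For anyone unsure what the edited file should look like: a typical webui-user.bat with the --api flag is sketched below. The empty variables are the stock defaults from the AUTOMATIC1111 repository; treat this as a reference sketch, not your exact file.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --api exposes the HTTP endpoints the Grasshopper plugin connects to
set COMMANDLINE_ARGS=--api

call webui.bat
```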
@alexanderaggersbjerg5187
1 year ago
Thanks for the great explanation! Got everything up and running :) One quick question: I am having issues working with the depth ControlNet. I have downloaded the previous ControlNet versions (aside from the new ControlNet v1.1 versions), but the depth and canny masks are very bad quality. This is only an issue for me when I use ControlNets in Grasshopper. Any ideas what the problem may be?
@simongobel2709
1 year ago
I have the same problem, unfortunately... Any answer yet?
@lawrencenathan351
1 year ago
Quick question: do I just add this on top of SketchUp, or is there a simple tutorial I can follow on combining AI with SketchUp? Thanks.
@designinput
1 year ago
Hi, this workflow doesn't work with SketchUp at the moment, but you can try platforms like VerasAI. Thanks for your comment!
@wido.daniel
1 year ago
Thank you man, this is SO good! To your knowledge, would it be possible to use this in Revit through Dynamo?
@designinput
1 year ago
Hey, thanks a lot for the feedback ❤ Hmm, I am not super sure, but I believe there is no extension for that yet. But I am experimenting with connecting Revit to this same workflow with Rhino.Inside.Revit. I will share it as soon as it's ready :)
@wido.daniel
1 year ago
@@designinput That would be awesome!
@riccia888
1 year ago
This is the most confusing software ever.
@oof1498
1 year ago
Great! What about if I want to use the same material in the same place but from a different perspective?
@designinput
1 year ago
Hey, thanks for your feedback ❤ You can keep the same seed number for the different views to get similar results. But still, it is not so easy to generate precisely the same materials and textures every time. If I figure out something for more consistent results, I will share it :)
@Peter-hn9yv
1 year ago
I got an error in Grasshopper saying the index was out of range. Have you encountered this issue before?
1 year ago
Hi, thanks for the video. I checked other videos and got somewhere, until I got stuck at the webui part. My webui-user file looks different from yours: there are "--xformers" and "git pull" lines in yours, but I don't have them. Unfortunately, just copying yours doesn't work :) I don't know what is missing, but I can say that it is a pretty overwhelming setup for sure.
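As a sketch of the seed trick described above: when calling the web UI's /sdapi/v1/txt2img endpoint (available once --api is enabled), pinning the seed field makes different views start from the same initial noise, which helps with stylistic consistency. The endpoint and field names are part of the AUTOMATIC1111 API; the prompts and parameter values here are illustrative assumptions.

```python
# Build txt2img requests that share one seed across several camera views.
# POSTing these to http://127.0.0.1:7860/sdapi/v1/txt2img (server not shown)
# would yield stylistically related images; identical materials are still
# not guaranteed, as the reply above notes.
FIXED_SEED = 1234  # any non-negative integer; -1 means "random" in the API

def view_payload(prompt: str, seed: int = FIXED_SEED) -> dict:
    return {
        "prompt": prompt,
        "seed": seed,
        "steps": 25,
        "cfg_scale": 7,
        "width": 1024,
        "height": 1024,
    }

front = view_payload("timber pavilion, front view, overcast light")
side = view_payload("timber pavilion, side view, overcast light")
assert front["seed"] == side["seed"] == FIXED_SEED
```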
@designinput
1 year ago
Hey Cankat, thanks for your comment. "--xformers" is an additional flag that you can use if you have an RTX 30- or 40-series GPU; it will speed up the generation process. And the "git pull" command automatically checks for new updates when you run SD. So you don't have to have them; the only must-have is "--api", which gives access directly inside the Grasshopper file. Since it is an early, experimental workflow, you are right that it is not so user-friendly. But it will surely develop, and I will share the newer versions very soon. Thank you!
@youssefdaadoush8755
1 year ago
Thanks a lot for the video, it's really incredible. I just have a question: I did everything exactly the same, but the generated results come out regardless of my base image. What could be the problem? Otherwise, it works directly in the Stable Diffusion web window.
@designinput
1 year ago
Hey Youssef, thanks for your great comment! It sounds like there is a problem with the ControlNet. Did you enable it?
@sirousghaffari9556
1 year ago
Hello, thank you very much for your good lessons. In the 3rd minute of the tutorial, you say that you put the Grasshopper code in the description section, but unfortunately I can't find it. Could you point me to it?
@designinput
1 year ago
Hi, thanks for the feedback! You can find all the resources mentioned in the video here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/ And you can download the file here: www.notion.so/designinputs/AI-Render-Engine-Template-File-02d34b595f824ca6a9f1339470fb1387?pvs=4
@kedarundale972
1 year ago
Thank you for the wonderful video. I had one question: everything in the script works perfectly on my computer, but when I connect a value list to Mode, I get an error. Do you know why this could be? Basically, Mode doesn't take any input apart from 0, which is T2I Basic. In my Stable Diffusion I do see the other models, but I am not sure what the error is. The same thing is happening with the sampler model; it does not take any input apart from Euler A. Any suggestions would be helpful. Thank you.
@designinput
1 year ago
Hey, thanks for your comment. I am not sure why you can't see the other modes. There has been a new update to the Ambrosinus toolkit plugin since I published the video; maybe you should update it for it to work. I will check the file and upload an updated version soon. Let me know if you are still having problems with it. Thank you!
@Peter-hn9yv
1 year ago
Does this workflow save the viewport and the dimensions of the image?
@designinput
1 year ago
Hey, yes, it saves the image at exactly the viewport size and uses the same aspect ratio for the new image. Thanks for your comment!
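A small helper illustrating the size handling described here: Stable Diffusion expects dimensions in multiples of 8, so a viewport capture is typically scaled to fit a maximum edge while keeping its aspect ratio. This is a hypothetical utility for illustration, not part of the Ambrosinus toolkit.

```python
def sd_size(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    """Scale (width, height) so the longer edge fits max_side, then snap
    both dimensions down to multiples of 8, preserving the aspect ratio
    as closely as possible."""
    scale = min(1.0, max_side / max(width, height))
    w = max(8, int(round(width * scale)) // 8 * 8)
    h = max(8, int(round(height * scale)) // 8 * 8)
    return w, h

print(sd_size(1920, 1080))  # a 16:9 viewport becomes (1024, 576)
print(sd_size(800, 600))    # already small enough: stays (800, 600)
```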
@jelisperez7968
1 year ago
Thank you for sharing this amazing tutorial. Is it still working? I am having an issue with the ControlNet updates: "ControlNet WARNING: Guess Mode is removed since 1.1.136. Please use Control Mode instead." If I choose CN v1.1.X in the Ambrosinus tool, the resulting image differs completely from the original image. I also changed the directory to point directly to the ControlNet path. Any hint? Is there a way to choose the SD model? Best.
@jelisperez7968
1 year ago
I figured out that with the update, the CN Depth modes work as expected, but not Canny mode. I've posted the bug on Food4Rhino. Many thanks again.
@designinput
1 year ago
Hey, good to hear that it's working :) For me, it was working without any issues. Thanks for your comment!
@전형욱-e1w
1 year ago
Hi. What are your Rhino version and Ladybug version? Ladybug is not working in my Rhino.
@designinput
1 year ago
Hey, I was using version 1.6; you can download it here: www.food4rhino.com/en/app/ladybug-tools But even if Ladybug doesn't work, you can still use this workflow; you just won't be able to see the images directly inside Grasshopper.
@diegovazquezdesantos4667
1 year ago
Thank you so much for the clear explanation. I tried to follow this video with the new update of Ambrosinus, but I was not able to. When I installed v1.1.9, I was able to use your code, although the SeeOut output (LA_SeeOut) gives an error: "index was out of range". Any ideas on how to fix this error?
@designinput
1 year ago
Hey, thanks a lot! I think you just need to generate an image first; after that you will be able to see it, and the error will disappear.
@remyleblanc8778
1 year ago
Nice! Wish it was 1000 times simpler.
@designinput
1 year ago
Hey, thanks! Haha, I feel you!
@danr9277
1 year ago
This is great! How is the speed of the rendering? It seems very fast.
@designinput
1 year ago
Hey, thanks for your comment! It mostly depends on your GPU. I am using an RTX 3060 with 6GB VRAM, and I can generate a 1024x1024 image in 1-2 minutes.
@NMPrecedent
1 year ago
Can Stable Diffusion further elaborate the model so that in different views you can maintain the same materials and facades?
@designinput
1 year ago
Hey, thanks for your feedback ❤ You can keep the same seed number for the different views to get similar results. But still, it is not so easy to generate precisely the same materials and textures every time. But I am sure we will see some developments on this very soon!
@firatgunesbalci2743
1 year ago
Hi, what is your computer hardware configuration?
@designinput
1 year ago
Hey Fırat, I am using a laptop with an RTX 3060 (6GB VRAM) and a 12th Gen Intel(R) Core(TM) i7-12700H CPU.
@Macora3251
1 year ago
Can you get the same result twice if the client wants the exact same render with just, for example, the column material changed?
@designinput
1 year ago
Hey, thanks for your comment! Generating exactly the same image twice can be challenging. But if you want to change just a part of it, you can use inpainting to edit it.
@sirousghaffari9556
1 year ago
In the 4th minute, when you press the Start button, it renders without any problem, but for me the SeeOut component is red and gives this error: "Solution exception: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index". Can you help?
@11Bashar
1 year ago
Have you found a solution yet?
@sirousghaffari9556
1 year ago
@@11Bashar Unfortunately, I gave up on connecting it to Grasshopper, because I can't make sense of its errors and there is no explanation about them anywhere.
@lorenzoguadagnucci-e1q
1 year ago
Thank you so much!! I'm just having issues with the resolution of the depth image that it creates: it's really low, and because of that I can't use my models. Can I increase it? Thanks anyway, this tool is amazing 👍
@lorenzoguadagnucci-e1q
1 year ago
To be more precise, I probably have a problem with the preprocessor: I can't change it, so it doesn't generate the correct depth image.
@designinput
1 year ago
Hey, thanks for the comment! If the image resolution from the viewport is low, you can try printing a view from Rhino at a custom resolution and using it in Stable Diffusion directly. It may help, but don't go larger than 1024x1024, as that will slow down the process dramatically; once you like one of the views, you can upscale the image later. I hope I understood your question correctly. Let me know if you have any other issues.
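A sketch of the inpainting route mentioned here, via the web UI's /sdapi/v1/img2img endpoint: you paint a mask over the region to change (e.g. the columns), keep the seed of the approved render, and only the masked region is regenerated. The endpoint and field names are part of the AUTOMATIC1111 API; the values are illustrative assumptions.

```python
# Build an img2img inpainting request that swaps one material while
# leaving the rest of the render untouched. The placeholder strings stand
# in for real base64-encoded images; POSTing to /sdapi/v1/img2img is not shown.
def inpaint_payload(image_b64: str, mask_b64: str, prompt: str, seed: int) -> dict:
    return {
        "init_images": [image_b64],
        "mask": mask_b64,               # white = regenerate, black = keep
        "prompt": prompt,               # e.g. "marble columns"
        "seed": seed,                   # reuse the seed of the approved render
        "denoising_strength": 0.75,
        "inpainting_fill": 1,           # 1 = start from the original pixels
        "inpaint_full_res": True,       # work at full resolution inside the mask
    }

payload = inpaint_payload("<base64 render>", "<base64 mask>",
                          "pavilion with marble columns", seed=1234)
```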
@韩鹏坤
1 year ago
My Rhino 7 can't install the Ambrosinus toolkit. Which version should I download?
@designinput
1 year ago
Hey, I am also using Rhino 7 and was able to use it without any issues with the latest version of the Ambrosinus toolkit. If you are still having issues, you may contact the developer.
@cgimadesimple
1 year ago
Cool :)
@designinput
1 year ago
@borchzhang2211
1 year ago
Success! 成功了 ("it worked")
@designinput
1 year ago
@user-ee7ko1yb9s
1 year ago
Hi, it looks amazing, thank you for that. But I tried it and used the same parameters, and unfortunately it generates a different image, not the image of the pavilion; it changes it completely. I don't know what I did wrong. If you could help, thank you again.
@designinput
1 year ago
Hey, thanks for your comment! There was probably a problem with the ControlNet. Do you have the ControlNet models installed locally?
@user-ee7ko1yb9s
1 year ago
@@designinput Hi, thank you for replying. Yes, I already downloaded them, but ControlNet doesn't work in Rhino; it only works in the browser, no idea why.
@ABCDEFGH-bi5tk
1 year ago
Does this work with 3ds Max as well?
@designinput
1 year ago
Hey, not with this exact workflow, but it may be possible with an extension. I am not using 3ds Max myself, which is why I haven't experimented with it. Let me know if you try it :)
@pedorthicart1201
1 year ago
I feel it is great and will help me with the visualization of orthopedic footwear designed through #Pedorthic Information Modeling! Waiting to have time to explore it! Thank you!
@designinput
1 year ago
Hey, thanks for your comment! I will share a video specifically about product photography and how to use AI. Thank you!
@pedorthicart1201
1 year ago
@@designinput Waiting for it! Thanks!
@sossiopalmiero3582
1 year ago
Where can I find the Grasshopper file?
@designinput
1 year ago
Hey, you can find all the resources here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/
@mockingbird1128
1 year ago
Would this work with Revit too?
@designinput
1 year ago
Hey, maybe it could work with Rhino.Inside.Revit, but I haven't tested it. But you can always take a screenshot and use SD + ControlNet on top.
@abdulmelikyetkin9721
1 year ago
#DesignInput can you do this with SketchUp?
@designinput
1 year ago
Hey, thanks for your comment! Technically yes, but I had some issues creating this custom workflow in SketchUp; when I figure it out, I will share it :) Meanwhile, you can try extensions like VerasAI and ArkoAI.
@bixp2k3
1 year ago
How much does it cost?
@designinput
1 year ago
Hey, it doesn't cost anything if you already have Rhino, because Stable Diffusion runs locally on your computer.
@iaspace6737
1 year ago
I NEED SD + SU
@sabaahmed1261
1 year ago
Does it work with Revit?
@GRUMPNUGS
1 year ago
I know Revit currently has one called Veras.
@designinput
1 year ago
Hi, I am currently experimenting with implementing this workflow in Revit. I will share a video about it soon :) Thanks for the comment!
@motassem85
1 year ago
Looks too complicated for me; I still prefer 3ds Max + V-Ray or Lumion 😂
@designinput
1 year ago
Haha, I totally understand that :) But we will surely see much easier user interfaces soon!
@shiryu7101
1 year ago
Hi! Could you tell me why it says "Input image doesn't exist or is not a supported format" even though I put in a PNG file? Thank you!
@oof1498
1 year ago
@@designinput Thanks bro, appreciate your effort :)
Comments: 162