In this video, you will learn how to accelerate Stable Diffusion image generation on an Intel Sapphire Rapids server. Using the Hugging Face Optimum Intel library and Intel OpenVINO, we cut inference latency from over 36 seconds to 4.5 seconds!
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️
⭐️⭐️⭐️ Want to buy me a coffee? I can always use more :) www.buymeacoffee.com/julsimon ⭐️⭐️⭐️
- Blog post: huggingface.co/blog/stable-di...
- Code: gitlab.com/juliensimon/huggin...
- Optimum Intel: github.com/huggingface/optimu...
- Intel Sapphire Rapids: en.wikipedia.org/wiki/Sapphir...
- Intel Advanced Matrix Extensions: en.wikipedia.org/wiki/Advance...
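As a quick taste of the workflow covered in the video, here is a minimal sketch of running Stable Diffusion through Optimum Intel's OpenVINO pipeline. The model id, image size, and step count below are illustrative assumptions, not the exact settings used in the video; see the linked blog post and code for the real benchmark setup.

```python
# Minimal sketch (assumption: optimum-intel with its OpenVINO extras is
# installed, e.g. `pip install optimum[openvino]`).
MODEL_ID = "stabilityai/stable-diffusion-2-1"  # illustrative checkpoint choice

def generate(prompt: str, steps: int = 25):
    # Imported lazily so this module still loads where optimum-intel is absent.
    from optimum.intel import OVStableDiffusionPipeline

    # export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
    pipe = OVStableDiffusionPipeline.from_pretrained(MODEL_ID, export=True)
    # Fixing static input shapes lets OpenVINO apply further optimizations.
    pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
    pipe.compile()
    return pipe(prompt, num_inference_steps=steps).images[0]

if __name__ == "__main__":
    image = generate("sailing ship in a storm, in the style of Rembrandt")
    image.save("ship.png")
```

Reshaping to a static batch size and resolution before `compile()` is what lets OpenVINO specialize the graph for the CPU, which is a large part of the speedup discussed in the video.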
Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 1) 🚀 🚀 🚀