In this tutorial, I dive into NanoSAM, a lightweight segmentation model that you can run in real time on NVIDIA Jetson Orin platforms using TensorRT. Unlike the original SAM, which uses the hefty ViT-H model with 608M parameters, NanoSAM leverages a much lighter ResNet18-based encoder with only 15.1M parameters, making it a great fit for edge devices. I'll show you how to use NanoSAM for segmentation, and even how to run inference with ONNX Runtime on either CPU or GPU, which makes experimentation a breeze. The code is available on GitHub, so you can follow along easily. Don't forget to like, comment, and subscribe for more content like this!
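To give a feel for the CPU/GPU ONNX Runtime workflow mentioned above, here is a minimal sketch. The model path, the 1024×1024 input size, and the helper names are assumptions for illustration, not the exact NanoSAM API; check the repo and notebook for the real preprocessing and model files.

```python
# Hedged sketch: running a NanoSAM-style ONNX encoder with ONNX Runtime.
# Model path, input size, and normalization here are assumptions; see the
# NanoSAM repo for the actual preprocessing pipeline.
import numpy as np


def pick_providers(use_gpu: bool):
    """Choose execution providers: CUDA with CPU fallback, or CPU only."""
    if use_gpu:
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]


def preprocess(image: np.ndarray, size: int = 1024) -> np.ndarray:
    """Nearest-neighbor resize to (size, size), scale to [0, 1], NCHW batch."""
    h, w, _ = image.shape
    ys = np.arange(size) * h // size          # row indices into the source
    xs = np.arange(size) * w // size          # column indices into the source
    resized = image[ys][:, xs]                # (size, size, 3)
    chw = resized.astype(np.float32).transpose(2, 0, 1) / 255.0
    return chw[None]                          # (1, 3, size, size)


def run_encoder(model_path: str, image: np.ndarray, use_gpu: bool = False):
    """Create a session and run the encoder; needs onnxruntime and a model file."""
    import onnxruntime as ort                 # lazy import: preprocessing works without it
    sess = ort.InferenceSession(model_path, providers=pick_providers(use_gpu))
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: preprocess(image)})
```

Switching between CPU and GPU is just a matter of the provider list, which is what makes onnxruntime convenient for quick experiments before moving to TensorRT on the Jetson.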
NanoSAM GitHub: github.com/NVI...
Notebook: github.com/AIA...
JOIN OUR DISCORD: / discord
Join this channel to get access to perks:
/ @aianytime
To further support the channel, you can contribute via the following methods:
Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW
UPI: sonu1000raw@ybl
#meta #nvidia #ai
NanoSAM: Lightweight AI Segmentation Model (Runs on CPU)