Alright check it out! Stumbling across wire-pod has allowed me to start working on something I've wanted to work on for a little while.
When Vector first dropped on the scene I imagined being able to animate whole scenes and have them manifest in real life. Imagine being able to create a 3D animation and not only have that scene play out before your eyes, but also have it enhanced with cutting-edge machine learning!
I'm working on that very thing - it's still in its early stages, but here's some of the functionality you can expect:
Plug-and-play keypoint animation converted to a Python SDK script.
Real-time animation debug mode.
An enhanced head with animated screens - with the ability to choose a predefined or custom animation via a dropdown.
All Blender models, including the enhanced version with bones etc., will be available via the repo for collaboration.
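To give a flavour of the keypoint-to-SDK idea, here's a minimal sketch of such a converter. The keyframe format (time in seconds, head angle in degrees, lift height in mm) is a hypothetical export layout I've made up for illustration; the generated lines target the real `anki_vector` SDK (`set_head_angle`, `set_lift_height`), but the converter itself is plain Python:

```python
# Sketch: convert exported Blender keyframes into an anki_vector SDK script.
# Keyframe layout (hypothetical): (time_s, head_angle_deg, lift_height_mm).

KEYFRAMES = [
    (0.0, 0.0, 0.0),
    (1.0, 30.0, 40.0),
    (2.5, -10.0, 10.0),
]

def to_sdk_script(keyframes):
    """Emit a Python SDK script that replays the keyframes in order."""
    lines = [
        "import time",
        "import anki_vector",
        "from anki_vector.util import degrees",
        "",
        "with anki_vector.Robot() as robot:",
    ]
    prev_t = 0.0
    for t, head_deg, lift_mm in keyframes:
        wait = t - prev_t
        if wait > 0:
            lines.append(f"    time.sleep({wait:.2f})")
        lines.append(f"    robot.behavior.set_head_angle(degrees({head_deg}))")
        # set_lift_height takes a 0.0-1.0 ratio; 92 mm is roughly full travel.
        lines.append(f"    robot.behavior.set_lift_height({lift_mm / 92.0:.3f})")
        prev_t = t
    return "\n".join(lines)

script = to_sdk_script(KEYFRAMES)
print(script)
```

The real version will read keyframes straight from the Blender action rather than a hard-coded list, but the emit-a-script shape stays the same.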
As I mentioned, I'm also going to start bringing in some highly optimised ML via ONNX, TensorRT etc. in order to facilitate faster scene rationalisation for those of you running on single-board computers:
onnxruntime-based YOLOv10.
onnxruntime-based multi-modal foundation models like Grounding DINO - allowing a user's speech prompt to have Vector analyse a scene without any prior training.
X-CLIP-based video classification for custom multi-frame action and emotion recognition - it'd also be super cool if we could start synthesising new animations this way!
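As a taste of the detection side, here's a dependency-free sketch of the post-processing such an onnxruntime YOLOv10 pipeline would feed into. The row layout (x1, y1, x2, y2, score, class id) matches YOLOv10's NMS-free end-to-end export, but the threshold and label map below are illustrative assumptions, not the final config:

```python
# Sketch: filter raw YOLOv10-style detections by confidence and map class ids
# to labels. YOLOv10's end-to-end ONNX export emits rows of
# [x1, y1, x2, y2, score, class_id], so no separate NMS pass is needed.

LABELS = {0: "person", 15: "cat", 16: "dog"}  # illustrative COCO subset

def filter_detections(raw, score_threshold=0.5):
    """Keep detections above the threshold, as (label, score, box) tuples."""
    kept = []
    for x1, y1, x2, y2, score, cls in raw:
        if score < score_threshold:
            continue
        label = LABELS.get(int(cls), f"class_{int(cls)}")
        kept.append((label, score, (x1, y1, x2, y2)))
    # Highest-confidence detections first - handy for a spoken response.
    return sorted(kept, key=lambda d: d[1], reverse=True)

raw_output = [
    (10, 20, 110, 220, 0.91, 0),    # confident person
    (30, 40, 90, 140, 0.35, 16),    # low-confidence dog, dropped
    (200, 50, 320, 180, 0.78, 15),  # confident cat
]
print(filter_detections(raw_output))
```

On a single-board computer this kind of cheap Python post-processing keeps the heavy lifting inside the quantised ONNX graph.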
Watch this space for a repo as it emerges but in the meantime subscribe and check out my github. Catch you all soon!
Accelerated Vector: Reigniting a Childhood Dream