Regarding gathering kinematic tricking data: besides the expensive and cumbersome mocap route, you could also leverage recent monocular pose-estimation models, which have become quite good over the last few years. Last year I tested a few of the then-SOTA models on a couple of my trick recordings, and they found and tracked my body segments' positions quite accurately from a simple cellphone video.

From this data you can recreate each segment's movement in 3D space (some models output 3D directly; even for those that only output 2D points in the image plane, you can recover the 3D coordinates with a few assumptions). Attach a more or less realistic mass model to the resulting skeletal keypoints and you should be able to compute many important and relevant quantities: momentum, angular momentum, center of mass, energy, and so on.

I don't know whether the precision you'd get this way will suffice, but if it works it would be a very easy and cheap way to gather a ton of useful data. And it's not too hard to throw together a quick proof of concept to check whether this avenue is worth exploring. 🥰
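To make the "mass model on keypoints" step concrete, here's a minimal sketch of what the proof of concept could look like, assuming you already have per-frame 3D joint positions of shape (frames, joints, 3) from some pose model. The segment list, joint indices, and mass fractions below are placeholders (a real version would use proper anthropometric tables and a skeleton matching your pose model), and segment centers of mass are crudely taken as joint midpoints:

```python
import numpy as np

# Hypothetical segment model: (proximal_joint, distal_joint, mass_fraction).
# Indices and fractions are illustrative placeholders, not a real skeleton;
# swap in your pose model's joint layout and anthropometric mass data.
SEGMENTS = [
    (0, 1, 0.55),  # trunk + head (lumped)
    (2, 3, 0.15),  # left leg (lumped)
    (4, 5, 0.15),  # right leg (lumped)
    (6, 7, 0.075), # left arm (lumped)
    (8, 9, 0.075), # right arm (lumped)
]

def segment_coms(keypoints, segments):
    """keypoints: (T, J, 3) joint positions. Returns (T, S, 3) segment
    CoMs (crude midpoint approximation) and per-segment mass fractions."""
    coms = np.stack([(keypoints[:, p] + keypoints[:, d]) / 2.0
                     for p, d, _ in segments], axis=1)
    masses = np.array([m for _, _, m in segments])
    return coms, masses

def body_com(coms, masses):
    """Mass-weighted whole-body center of mass, shape (T, 3)."""
    return (coms * masses[:, None]).sum(axis=1) / masses.sum()

def angular_momentum(coms, masses, fps, total_mass=70.0):
    """Point-mass approximation of angular momentum about the body CoM:
    L = sum_i m_i * (r_i - r_com) x (v_i - v_com), shape (T, 3).
    Ignores each segment's spin about its own CoM."""
    m = masses / masses.sum() * total_mass          # kg per segment
    com = body_com(coms, masses)
    v = np.gradient(coms, 1.0 / fps, axis=0)        # finite-difference velocities
    v_com = np.gradient(com, 1.0 / fps, axis=0)
    r_rel = coms - com[:, None]
    v_rel = v - v_com[:, None]
    return (m[None, :, None] * np.cross(r_rel, v_rel)).sum(axis=1)
```

From the same pieces you get linear momentum (`total_mass * v_com`) and kinetic energy almost for free, and noisy finite-difference velocities would likely need a low-pass filter on real pose data.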