In this video, Professor Robert Katzschmann introduces teleoperation by dividing it into three main components: sensing, mapping, and control. In particular, he explains a stereo-camera setup for detecting the hand pose in the sensing part of the task, a mapping from our five-fingered human hand to an arbitrary robotic hand, and finally discusses how to use previously seen algorithms to move the robot hand into the desired pose.
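The sensing → mapping → control pipeline above can be sketched in a few lines. This is a hedged, illustrative stand-in, not the method from the video: the uniform fingertip scaling and the proportional joint-space step (and all names and gains) are assumptions chosen for simplicity.

```python
import numpy as np

# Illustrative sketch of the teleoperation pipeline (sensing -> mapping -> control).
# The scaling-based retargeting and proportional controller are simplifying
# assumptions, not the approach presented in the tutorial.

def map_hand_pose(human_tips, scale=0.7):
    """Map detected human fingertip positions (wrist frame, metres) onto
    robot fingertip targets by uniform scaling -- a minimal retargeting."""
    return scale * np.asarray(human_tips, dtype=float)

def control_step(q, q_target, gain=0.5):
    """One proportional step moving the joint angles q toward q_target."""
    return q + gain * (q_target - q)

# Five fingertip positions as they might come from the sensing stage
human_tips = np.array([[0.08, 0.02, 0.10]] * 5)
targets = map_hand_pose(human_tips)          # mapping stage
q = control_step(np.zeros(3), np.array([0.4, 0.1, -0.2]))  # control stage
print(targets[0])  # -> [0.056 0.014 0.07 ]
print(q)           # -> [ 0.2   0.05 -0.1 ]
```

In practice the mapping is far richer (joint retargeting, workspace limits) and the controller runs in closed loop, but the three-stage structure is the same.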
Here are some useful links for further reading:
A nice tool for activation functions: playground.tensorflow.org/
Example of mnist classifier: www.kaggle.com/code/heeralded...
Stereo vision: www.andreasjakl.com/understan...
Point triangulation: www.andreasjakl.com/understan...
Point matching: rpg.ifi.uzh.ch/docs/teaching/...
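The triangulation linked above can be sketched concretely: given two camera projection matrices and a matched pixel pair, the standard linear (DLT) method recovers the 3D point via SVD. The toy cameras and the test point below are made up for illustration.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : matched pixel coordinates (u, v) in each image
    """
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy stereo rig: identity camera and one translated along x (baseline = 1 m)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])          # ground-truth 3D point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]            # its projections
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))  # -> [0.5 0.2 4. ]
```

With noisy, matched keypoints the same routine is applied per correspondence; the linked tutorials cover how matching and rectification produce those pixel pairs.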
Teleoperation using Machine Learning and Computer Vision | ETH Zürich Real World Robotics Tutorial 5