Deep Reinforcement Learning (DRL), which combines neural networks with reinforcement learning algorithms, has become increasingly important for collision avoidance and navigation in mobile robotics, enabling agents to avoid obstacles and reach destinations in complex, dynamic environments. However, existing research has notable limitations: reliance on assistance from traditional algorithms, the need for multi-sensor data fusion, and evaluation only in overly simplified or non-random scenarios. To address these challenges, this letter presents a novel end-to-end DRL framework. Leveraging the limited observations of the robot's onboard depth camera and incorporating differential-drive motion constraints, the framework navigates and avoids obstacles in dense, random, and mapless environments. Network parameters are optimized with the Proximal Policy Optimization (PPO) algorithm. Evaluations against existing algorithms on the BARN_Dataset map set in simulation demonstrate a high success rate and strong generalization in complex static environments. Furthermore, the model was deployed on a TurtleBot2 robot equipped with a depth camera and tested in various real-world scenarios, including static, complex, and random environments as well as moving pedestrians, demonstrating its feasibility and robustness.
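The abstract states that the policy is trained with PPO. As a rough illustration of the core of that algorithm (not the authors' code), the following is a minimal sketch of PPO's clipped surrogate loss; the function name, the plain-list inputs, and the clip coefficient of 0.2 are illustrative assumptions.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Minimal sketch of PPO's clipped surrogate objective.

    logp_new / logp_old: per-action log-probabilities under the current
    and behavior policies; advantages: estimated advantages.
    Returns the loss to *minimize* (negative clipped objective), averaged
    over the batch. Hypothetical helper, not the paper's implementation.
    """
    total = 0.0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)          # pi_new(a|s) / pi_old(a|s)
        clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
        total += min(ratio * adv, clipped * adv)   # pessimistic (lower) bound
    return -total / len(advantages)
```

The clipping keeps the probability ratio near 1, so a single update cannot move the policy too far from the one that collected the data; for example, with a ratio of e ≈ 2.72 and a positive advantage, the objective is capped at 1.2 times the advantage.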
Mobile Robot Collision Avoidance Based on Deep Reinforcement Learning With Motion Constraints