Reinforcement learning is widely used for robotic-arm control. However, noise and uncertainty introduced by complex environments often destabilize performance. Inspired by the tactile exploration infants use while learning, we propose a deep reinforcement learning (DRL) method that combines vision and touch, built on the deep deterministic policy gradient (DDPG) algorithm, to achieve high-performance robotic-arm control. The approach fully exploits feedback from the arm's force sensor and formulates an intrinsic reward mechanism that encourages interaction between the arm and the objects it manipulates, compensating for sparse reward settings, which rarely provide positive feedback in the early stages of training. In addition, a contact-prioritized experience replay strategy improves sample utilization, and an asymmetric actor-critic network structure sidesteps the difficulty of learning directly from high-dimensional visual observations, thereby improving learning efficiency. Experiments show that the proposed algorithm converges in less time and achieves superior performance.
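The intrinsic reward mechanism described above can be sketched as a shaping term added to the sparse task reward. The abstract does not give the paper's exact formulation, so the `force_threshold` and `contact_bonus` values below are illustrative assumptions, not the authors' parameters:

```python
def shaped_reward(task_success: bool, contact_force: float,
                  force_threshold: float = 0.1, contact_bonus: float = 0.1) -> float:
    """Combine a sparse task reward with an intrinsic contact bonus.

    task_success:  True when the manipulation goal is reached (sparse signal).
    contact_force: force magnitude read from the arm's force sensor.
    """
    extrinsic = 1.0 if task_success else 0.0                           # sparse task reward
    intrinsic = contact_bonus if contact_force > force_threshold else 0.0  # touch bonus
    return extrinsic + intrinsic
```

Early in training, when the sparse goal is almost never reached, merely touching the object still yields a positive signal, which is the role the abstract assigns to the intrinsic reward.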
A Deep Reinforcement Learning Approach to Integrating Vision and Touch in Robotic Arm Control
Published 2025 in Cybersecurity and Cyberforensics Conference
PUBLICATION RECORD
- Publication year: 2025
- Publication date: 2025-07-28
- Venue: Cybersecurity and Cyberforensics Conference
- Source metadata: Semantic Scholar
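The contact-prioritized experience replay mentioned in the abstract can be sketched as a buffer that oversamples transitions in which the force sensor registered contact. The paper's actual sampling rule is not given in the abstract; the two-pool design and the `p_contact` mixing probability below are assumptions for illustration:

```python
import random
from collections import deque

class ContactPrioritizedReplay:
    """Replay buffer that draws contact transitions with elevated probability."""

    def __init__(self, capacity: int = 100_000, p_contact: float = 0.5):
        self.contact = deque(maxlen=capacity)     # transitions where contact occurred
        self.no_contact = deque(maxlen=capacity)  # all other transitions
        self.p_contact = p_contact                # chance of sampling from the contact pool

    def add(self, transition: tuple, contact: bool) -> None:
        (self.contact if contact else self.no_contact).append(transition)

    def sample(self, batch_size: int) -> list:
        batch = []
        for _ in range(batch_size):
            # Prefer the contact pool with probability p_contact; fall back to
            # whichever pool is non-empty.
            use_contact = self.contact and (not self.no_contact
                                            or random.random() < self.p_contact)
            pool = self.contact if use_contact else self.no_contact
            batch.append(random.choice(pool))
        return batch
```

With `p_contact` above 0.5, minibatches are biased toward informative contact-rich experience, which is the sample-utilization effect the abstract attributes to this strategy.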