Modeling Grasp Motor Imagery Through Deep Conditional Generative Models
M. Veres, M. Moussa, Graham W. Taylor
Published 2017 in IEEE Robotics and Automation Letters
ABSTRACT
Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-fingered grasp configurations on a simulated grasping dataset.
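The core idea named in the abstract is to condition a deep generative model on object information so that sampling from the model yields plausible grasp configurations. The following is a minimal, hypothetical sketch of a conditional variational autoencoder (CVAE) in PyTorch for that setting; the vector encodings of objects and grasps, all dimensions, and the layer sizes are illustrative assumptions, not the architecture described in the paper.

# Hypothetical sketch of a CVAE for grasp synthesis: given object
# features x, generate a grasp configuration vector y. Sizes and the
# simple MLP encoders/decoders are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspCVAE(nn.Module):
    def __init__(self, obj_dim=64, grasp_dim=18, latent_dim=8, hidden=128):
        super().__init__()
        # Recognition network q(z | x, y): encodes object + grasp into latent z.
        self.enc = nn.Sequential(nn.Linear(obj_dim + grasp_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Generation network p(y | x, z): decodes latent + object into a grasp.
        self.dec = nn.Sequential(
            nn.Linear(obj_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, grasp_dim))

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: differentiable sampling of z.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        y_hat = self.dec(torch.cat([x, z], dim=-1))
        return y_hat, mu, logvar

    @torch.no_grad()
    def sample(self, x, n=5):
        # Draw n grasp candidates per object from the prior p(z) = N(0, I).
        x = x.repeat_interleave(n, dim=0)
        z = torch.randn(x.size(0), self.mu.out_features)
        return self.dec(torch.cat([x, z], dim=-1))

def cvae_loss(y_hat, y, mu, logvar):
    # Reconstruction term plus KL divergence between q(z | x, y) and the prior.
    rec = F.mse_loss(y_hat, y, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

Sampling several latent vectors z for the same object yields a set of distinct grasp candidates, which is how a conditional generative model can capture the multimodality of grasping that the abstract refers to.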
PUBLICATION RECORD
- Publication year
2017
- Venue
IEEE Robotics and Automation Letters
- Publication date
2017-01-11
- Fields of study
Mathematics, Computer Science, Engineering
- Source metadata
Semantic Scholar
REFERENCES
31 references
CITED BY
38 citing papers