Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion
Hongyi Liu, Tongtong Fang, Tianyu Zhou, Lihui Wang
Published 2018 in IEEE Access
ABSTRACT
Intuitive and robust multimodal robot control is key to human-robot collaboration (HRC) in manufacturing systems. Multimodal robot control methods introduced in previous studies allow human operators to control robots intuitively without programming brand-specific code. However, most of these methods are unreliable because feature representations are not shared across modalities. To address this problem, this paper proposes a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems. The proposed architecture covers three modalities: speech command, hand motion, and body motion. Three unimodal models are first trained to extract features, which are then fused for representation sharing. Experiments show that the fused multimodal model outperforms each of the three unimodal models, indicating strong potential for applying the proposed architecture to robust HRC manufacturing systems.
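The abstract describes a late-fusion pattern: one encoder per modality, with the extracted features combined in a shared representation before classification. Below is a minimal PyTorch sketch of that pattern. The paper's exact layer sizes, input dimensions, and fusion mechanism are not given in this record, so every dimension, module name, and the concatenation-based fusion here is an illustrative assumption, not the authors' implementation.

# Minimal late-fusion sketch (assumptions: input sizes, layer widths,
# and concatenation fusion are all illustrative, not from the paper).
import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    """Maps one modality (speech, hand motion, or body motion) to a fixed-size feature vector."""
    def __init__(self, in_dim: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MultimodalFusion(nn.Module):
    """Concatenates the three unimodal features and predicts a robot command class."""
    def __init__(self, dims: dict, n_commands: int, feat_dim: int = 128):
        super().__init__()
        self.speech = UnimodalEncoder(dims["speech"], feat_dim)
        self.hand = UnimodalEncoder(dims["hand"], feat_dim)
        self.body = UnimodalEncoder(dims["body"], feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_commands),
        )

    def forward(self, speech, hand, body):
        # Fuse by concatenating per-modality features into one shared representation.
        fused = torch.cat([self.speech(speech), self.hand(hand), self.body(body)], dim=-1)
        return self.classifier(fused)

# Usage with random stand-in inputs (batch of 4, hypothetical input sizes).
model = MultimodalFusion({"speech": 40, "hand": 63, "body": 75}, n_commands=10)
logits = model(torch.randn(4, 40), torch.randn(4, 63), torch.randn(4, 75))
print(logits.shape)  # torch.Size([4, 10])

In this sketch the unimodal encoders could first be trained separately (matching the abstract's two-stage procedure) and then frozen or fine-tuned while the fusion classifier is trained on the concatenated features.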
PUBLICATION RECORD
- Publication year: 2018
- Venue: IEEE Access
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar
REFERENCES
64 references
CITED BY
78 citing papers