Indoors Traversability Estimation with RGB-Laser Fusion
Christos Sevastopoulos, Michail Theofanidis, Aref Hebri, S. Konstantopoulos, V. Karkaletsis, F. Makedon
Published 2023 in 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)
ABSTRACT
We propose a dual-stream, semi-supervised, attention-based approach that fuses features from RGB and Laser Range Finder (LRF) modalities. Our method leverages the strengths of two powerful transformer-based networks, the Vision Transformer (ViT) and SegFormer, together with LRF information, to predict whether the scene encountered in an image is safe for a robot to traverse. Towards this effort, we introduce an automated labelling system that combines raw velocity readings with laser scanning information. Moreover, we show that overall GO/NO-GO detection is enhanced by fusing the RGB and laser modalities. Feature fusion is achieved through a Multi-Head Self-Attention (MHSA) module. Through cross-domain validation, we show that the proposed traversability estimation method achieves a reasonable degree of transferability even with a limited amount of training data.
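The abstract describes fusing RGB and LRF feature streams through a Multi-Head Self-Attention module. As an illustration only (not the authors' implementation), the sketch below runs single-head self-attention over a hypothetical concatenated token sequence from the two modalities, using identity query/key/value projections; the token values and dimensions are made up for demonstration.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    # Single-head self-attention with identity Q/K/V projections.
    # Each output token is a convex combination of all input tokens,
    # weighted by scaled dot-product similarity — so fused RGB tokens
    # can attend to LRF tokens and vice versa.
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        out.append([sum(wj * tokens[j][i] for j, wj in enumerate(w))
                    for i in range(d)])
    return out

# Hypothetical features: two RGB-stream tokens and two LRF-stream tokens (d = 3).
rgb_tokens = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
lrf_tokens = [[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
fused = self_attention(rgb_tokens + lrf_tokens)
```

In the actual method a learned MHSA layer (multiple heads, learned projections) would replace the identity projections here; the point of the sketch is only how attention lets tokens from one modality weight information from the other.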
PUBLICATION RECORD
- Publication year
2023
- Venue
2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)
- Publication date
2023-08-26
- Fields of study
Computer Science, Engineering
- Source metadata
Semantic Scholar