Exosense: A Vision-Based Scene Understanding System for Exoskeletons
Jianeng Wang, Matías Mattamala, Christina Kassab, Lintong Zhang, Maurice F. Fallon
Published 2024 in IEEE Robotics and Automation Letters
ABSTRACT
Self-balancing exoskeletons are a key enabling technology for individuals with mobility impairments. While current research focuses on human-compliant hardware and control, unlocking their use for daily activities requires a scene perception system. In this work, we present Exosense, a vision-centric scene understanding system for self-balancing exoskeletons. We introduce a multi-sensor visual-inertial mapping device as well as a navigation stack for state estimation, terrain mapping, and long-term operation. We tested Exosense attached to both a human leg and Wandercraft's Personal Exoskeleton in real-world indoor scenarios. This enabled us to test the system during typical periodic walking gaits, as well as for future use in multi-story environments. We demonstrate that Exosense achieves an odometry drift of about 4 cm per meter traveled and constructs terrain maps with under 1 cm average reconstruction error. It can also operate in a visual localization mode in a previously mapped environment, providing a step towards long-term operation of exoskeletons.
PUBLICATION RECORD
- Publication date: 2024-03-21
- Venue: IEEE Robotics and Automation Letters
- Fields of study: Medicine, Computer Science, Engineering
- Source metadata: Semantic Scholar
REFERENCES
30 references
CITED BY
3 citing papers