Sensor Fusion of Camera and Cloud Digital Twin Information for Intelligent Vehicles
Ziran Wang, Kyungtae Han, Zhenyu Shou, Prashant Tiwari, J. Hansen
Published 2020 in 2020 IEEE Intelligent Vehicles Symposium (IV)
ABSTRACT
With the rapid development of intelligent vehicles and Advanced Driving Assistance Systems (ADAS), the transportation system involves a mixed level of human driver engagement. Under these circumstances, visual guidance for drivers is essential to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel sensor fusion methodology that integrates camera images with Digital Twin knowledge from the cloud. The target vehicle bounding box is drawn and matched by combining the results of an object detector running on the ego vehicle with position information from the cloud. The best matching result, with 79.2% accuracy under a 0.7 Intersection over Union (IoU) threshold, is obtained when a depth image serves as an additional feature source. Game engine-based simulation results also reveal that the visual guidance system can significantly improve driving safety when cooperating with the cloud Digital Twin system.
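The matching metric named in the abstract, Intersection over Union, compares a detector's bounding box against a box projected from cloud position information. The following is a minimal sketch of that metric; the box coordinates and the matching helper shown here are illustrative assumptions, not the paper's implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical boxes: one from the on-vehicle detector, one projected
# from cloud Digital Twin position data. A pair counts as a match when
# IoU meets the 0.7 threshold used in the paper's evaluation.
detected = (100, 100, 200, 200)
projected = (110, 105, 210, 205)
print(iou(detected, projected) >= 0.7)  # → True
```

A stricter threshold (such as 0.7 rather than the common 0.5) demands tighter agreement between the detector and the cloud-projected position before a match is accepted.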
PUBLICATION RECORD
- Publication year: 2020
- Venue: 2020 IEEE Intelligent Vehicles Symposium (IV)
- Publication date: 2020-07-08
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar