Proper Reuse of Features Extractor for Real-time Continual Learning
Yu Li, Gan Sun, Yuyang Liu, Wei Cong, Pengchao Cheng, C. Zang
Published 2022 in ACM Cloud and Autonomic Computing Conference

ABSTRACT
Real-time continual learning, which continually learns a series of computer vision tasks, has received growing attention on robotic platforms and embedded systems that train on-device. However, most existing real-time continual learning systems require substantial additional training time when learning new visual perception tasks; moreover, class imbalance is neglected by most of these systems, which can degrade generalization performance on both previously learned tasks and new tasks. To address these challenges, we propose to preserve and reuse learned knowledge to achieve real-time continual learning. Specifically, when encountering a new visual perception task, we freeze the learned backbone weights of all past tasks, which speeds up training on the new task. Moreover, we store several activation volumes from an intermediate layer, which further reduces the computational cost. To improve perception performance, a focal loss is employed to direct attention toward poorly identified sample categories and mitigate the class imbalance issue. Experimental results on popular continual learning benchmarks demonstrate the efficiency and effectiveness of our proposed method.
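The abstract's class-imbalance remedy is a focal loss, which down-weights well-classified samples so training concentrates on poorly identified categories. Below is a minimal pure-Python sketch of the standard focal-loss formulation; the `gamma` and `alpha` defaults are common choices in the literature, not values reported by this paper.

```python
import math

def focal_loss(p_true, gamma=2.0, alpha=0.25):
    # Focal loss for a single sample, given the model's predicted
    # probability p_true of the ground-truth class. The factor
    # (1 - p_true)**gamma shrinks the loss of already well-classified
    # samples, refocusing gradients on hard categories; alpha rebalances
    # class frequencies. Defaults are assumptions, not the paper's settings.
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)

def cross_entropy(p_true):
    # Plain cross-entropy on the same sample, for comparison.
    return -math.log(p_true)

# A confidently correct sample (p = 0.9) contributes far less loss than a
# poorly identified one (p = 0.3), which is the imbalance-mitigation effect.
easy, hard = focal_loss(0.9), focal_loss(0.3)
```

Note the design intuition: as `p_true` approaches 1 the modulating factor vanishes, so abundant, easy classes no longer dominate the gradient signal.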
PUBLICATION RECORD
- Publication year: 2022
- Venue: ACM Cloud and Autonomic Computing Conference
- Publication date: 2022-11-25
- Source metadata: Semantic Scholar