Selective Replay Enhances Learning in Online Continual Analogical Reasoning
Tyler L. Hayes, Christopher Kanan
Published 2021 in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
ABSTRACT
In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been explored. Here, we study continual learning of analogical reasoning. Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and offline neural networks for the RPM problem have recently been proposed. In this paper, we establish experimental baselines, protocols, and forward and backward transfer metrics to evaluate continual learners on RPMs. We employ experience replay to mitigate catastrophic forgetting. Prior work using replay for image classification tasks has found that selectively choosing the samples to replay offers little, if any, benefit over random selection. In contrast, we find that selective replay can significantly outperform random selection for the RPM task.
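The abstract contrasts random replay with selective replay from a memory buffer. As a minimal sketch of that distinction (not the authors' implementation; the buffer class, the stored per-sample loss, and the max-loss selection rule here are illustrative assumptions, with max-loss being just one plausible selection criterion), the two strategies can be expressed as:

```python
import random


class ReplayBuffer:
    """Fixed-capacity buffer holding (sample, last_known_loss) pairs.

    Hypothetical sketch: in experience replay, past samples are stored
    and mixed into training on new data to mitigate forgetting.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (sample, loss) tuples

    def add(self, sample, loss):
        # Evict a random stored item once the buffer is full,
        # keeping memory usage bounded.
        if len(self.items) >= self.capacity:
            self.items.pop(random.randrange(len(self.items)))
        self.items.append((sample, loss))

    def sample_random(self, k):
        """Uniform random selection: the common replay baseline."""
        chosen = random.sample(self.items, min(k, len(self.items)))
        return [s for s, _ in chosen]

    def sample_selective(self, k):
        """Selective replay: prefer the highest-loss stored samples,
        i.e. those the model currently finds hardest (one possible
        selection rule among several)."""
        ranked = sorted(self.items, key=lambda item: item[1], reverse=True)
        return [s for s, _ in ranked[:k]]
```

Under this sketch, the only difference between the two regimes is the selection rule applied to the same buffer, which is what makes the comparison in the paper well controlled.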
PUBLICATION RECORD
- Publication year
2021
- Venue
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- Publication date
2021-03-06
- Fields of study
Computer Science
- Source metadata
Semantic Scholar