Fully Convolutional Attention Localization Networks: Efficient Attention Localization for Fine-Grained Recognition
Xiao Liu, Tian Xia, Jiang Wang, Yuanqing Lin
Published 2016 in arXiv.org
ABSTRACT
Fine-grained recognition is challenging mainly because the differences between fine-grained classes are usually local and subtle, while intra-class differences can be large due to pose variations. To distinguish subtle inter-class differences from intra-class variations, it is essential to zoom in on highly discriminative local regions. In this work, we introduce a reinforcement learning-based fully convolutional attention localization network that adaptively selects multiple task-driven visual attention regions. We show that zooming in on the selected attention regions significantly improves the performance of fine-grained recognition. Compared to previous reinforcement learning-based models, the proposed approach is noticeably more computationally efficient during both training and testing because of its fully convolutional architecture, and it can focus on multiple visual attention regions simultaneously. Experiments demonstrate that the proposed method achieves notably higher classification accuracy on three benchmark fine-grained recognition datasets: Stanford Dogs, Stanford Cars, and CUB-200-2011.
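The core mechanism the abstract describes is selecting a discriminative location from a convolutional score map and zooming in on the corresponding image patch. The following is a minimal NumPy sketch of that idea, not the paper's implementation: all names are hypothetical, and the learned attention scores (trained with reinforcement learning in the paper) are replaced here by a simple channel-mean score for illustration.

```python
import numpy as np

def attention_crop_and_zoom(feature_map, image, crop_size=64, zoom=2):
    """Select the highest-scoring location on an attention score map,
    crop the matching image patch, and zoom in on it.

    feature_map: (h, w, c) convolutional features
    image:       (H, W, 3) input image, H a multiple of h
    """
    # Toy attention score map (channel mean); the paper instead learns
    # these scores with a fully convolutional network trained by RL.
    scores = feature_map.mean(axis=-1)                      # (h, w)
    y, x = np.unravel_index(np.argmax(scores), scores.shape)

    # Map feature-map coordinates back to image coordinates
    # (assumes a uniform stride between image and feature map).
    stride = image.shape[0] // feature_map.shape[0]
    cy, cx = y * stride, x * stride

    # Crop a window centered on the selected location, clipped to bounds.
    half = crop_size // 2
    y0 = int(np.clip(cy - half, 0, image.shape[0] - crop_size))
    x0 = int(np.clip(cx - half, 0, image.shape[1] - crop_size))
    patch = image[y0:y0 + crop_size, x0:x0 + crop_size]

    # "Zoom in": nearest-neighbor upsampling of the selected patch.
    zoomed = patch.repeat(zoom, axis=0).repeat(zoom, axis=1)
    return (y0, x0), zoomed

# Usage on random data
rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))
feature_map = rng.random((16, 16, 8))
(top, left), zoomed = attention_crop_and_zoom(feature_map, image)
print(zoomed.shape)  # (128, 128, 3)
```

The paper's multi-region variant would repeat this selection for several attention regions in parallel and feed each zoomed patch to a classification network; the fully convolutional scoring is what lets all candidate locations be evaluated in a single forward pass.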
PUBLICATION RECORD
- Publication year: 2016
- Publication date: 2016-03-22
- Venue: arXiv.org
- Fields of study: Computer Science
- Source metadata: Semantic Scholar