Abstract
In this paper, we explore the idea of weight sharing over multiple scales in convolutional networks. Inspired by traditional computer vision approaches, we share the weights of convolution kernels over different scales in the same layers of the network. Although multi-scale feature aggregation and sharing inside convolutional networks are common in practice, no previous work addresses the issue of convolutional weight sharing. We evaluate our weight sharing scheme on two heterogeneous image recognition datasets – ImageNet (object recognition) and Places365-Standard (scene classification). With approximately 25% fewer parameters, our shared-weight ResNet model provides performance similar to baseline ResNets. Shared-weight models are further validated via transfer learning experiments on four additional image recognition datasets – Caltech256 and Stanford 40 Actions (object-centric) and SUN397 and MIT Indoor67 (scene-centric). Experimental results demonstrate significant redundancy in the vanilla implementations of the deeper networks, and also indicate that a shift towards increasing the receptive field per parameter may improve future convolutional network architectures.
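The core idea in the abstract – one set of convolution weights reused at several scales within a layer – can be sketched as applying the same kernel at different dilation rates, so the parameter count stays fixed while the receptive field grows. This is a minimal illustrative sketch, not the authors' implementation; the helper function, sizes, and the choice of dilation as the scaling mechanism are assumptions for demonstration.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Valid 2D cross-correlation of a single-channel map with a dilated kernel."""
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective (dilated) kernel height
    ew = (kw - 1) * dilation + 1  # effective (dilated) kernel width
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input with stride `dilation` inside the window,
            # so the SAME 3x3 weights cover a larger spatial extent.
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))  # one shared set of 9 weights

# The same kernel applied at two scales (dilation 1 and 2):
y1 = dilated_conv2d(x, kernel, dilation=1)  # fine scale, output 6x6
y2 = dilated_conv2d(x, kernel, dilation=2)  # coarse scale, output 4x4
```

Two independent branches at these scales would need 18 parameters; sharing the kernel keeps it at 9, which is the source of the parameter savings the abstract reports.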
Multi-Scale Weight Sharing Network for Image Recognition
Shubhra Aich, I. Stavness, Y. Taniguchi, Masaki Yamazaki
Published 2020 in Pattern Recognition Letters
PUBLICATION RECORD
- Publication year: 2020
- Venue: Pattern Recognition Letters
- Publication date: 2020-01-09
- Fields of study: Computer Science
- Source metadata: Semantic Scholar