Incorporating Network Built-in Priors in Weakly-Supervised Semantic Segmentation
F. Saleh, M. S. Aliakbarian, M. Salzmann, L. Petersson, J. Álvarez, Stephen Gould
Published 2017 in IEEE Transactions on Pattern Analysis and Machine Intelligence

ABSTRACT
Pixel-level annotations are expensive and time-consuming to obtain. Weak supervision using only image tags could therefore have a significant impact on semantic segmentation. Recently, CNN-based methods have been proposed that fine-tune pre-trained networks using image tags alone. Without additional information, this leads to poor localization accuracy. This problem has been alleviated by exploiting objectness priors to generate foreground/background masks. Unfortunately, these priors either require pixel-level annotations or bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract accurate masks from networks pre-trained for the task of object recognition, thus forgoing external objectness modules. We first show how foreground/background masks can be obtained from the activations of higher-level convolutional layers of such a network. We then show how to obtain multi-class masks by fusing the foreground/background ones with information extracted from a weakly-supervised localization network. Our experiments show that exploiting these masks in conjunction with a weakly-supervised training loss yields state-of-the-art tag-based weakly-supervised semantic segmentation results.
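The abstract's core idea of deriving a foreground/background mask from the activations of a higher-level convolutional layer can be illustrated with a minimal sketch. This is not the paper's actual procedure; the function name, the channel-averaging step, and the mean-relative threshold are illustrative assumptions.

```python
import numpy as np

def foreground_mask(activations, threshold_ratio=1.0):
    """Hypothetical sketch: fuse higher-level conv activations into a
    coarse foreground/background mask.

    activations: (C, H, W) array of feature maps from a late convolutional
    layer of a recognition network. Channels are averaged into a single
    map, normalized to [0, 1], and thresholded relative to its mean.
    """
    fused = activations.mean(axis=0)                       # (H, W) map
    span = fused.max() - fused.min()
    fused = (fused - fused.min()) / (span + 1e-8)          # normalize
    return (fused > threshold_ratio * fused.mean()).astype(np.uint8)

# Toy example: a bright central blob stands in for an object response.
acts = np.zeros((4, 8, 8))
acts[:, 2:6, 2:6] = 1.0
mask = foreground_mask(acts)  # 1s over the 4x4 "object" region
```

The toy input makes the behavior easy to check: the above-mean region survives the threshold as foreground, while the flat background is suppressed.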
PUBLICATION RECORD
- Publication date: 2017-06-06
- Fields of study: Medicine, Computer Science
- Source metadata: Semantic Scholar, PubMed