Semi-Supervised Semantic Segmentation With High- and Low-Level Consistency
Sudhanshu Mittal, Maxim Tatarchenko, Thomas Brox
Published 2019 in IEEE Transactions on Pattern Analysis and Machine Intelligence
ABSTRACT
The ability to understand visual information from limited labeled data is an important aspect of machine learning. While image-level classification has been extensively studied in the semi-supervised setting, dense pixel-level classification with limited data has only drawn attention recently. In this work, we propose an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images. The proposed approach relies on adversarial training with a feature matching loss to learn from unlabeled images. It uses two network branches that link semi-supervised classification with semi-supervised segmentation, including self-training. The dual-branch approach reduces both the low-level and the high-level artifacts that are typical when training with few labels. The approach attains significant improvement over existing methods, especially when trained with very few labeled samples. On several standard benchmarks—PASCAL VOC 2012, PASCAL-Context, and Cityscapes—the approach achieves a new state of the art in semi-supervised learning.
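The abstract mentions adversarial training with a feature matching loss on unlabeled images. As a rough illustration of the general idea (not the authors' implementation — function names, shapes, and the use of NumPy are assumptions for this sketch), a feature matching loss compares batch-mean discriminator features of labeled and unlabeled inputs rather than the discriminator's final output:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Generic feature matching loss: L1 distance between the
    batch-mean intermediate features produced by a discriminator
    for two sets of inputs (e.g. labeled vs. unlabeled images).

    real_feats, fake_feats: arrays of shape (batch, feature_dim).
    """
    mu_real = real_feats.mean(axis=0)   # mean feature over the batch
    mu_fake = fake_feats.mean(axis=0)
    return np.abs(mu_real - mu_fake).mean()
```

Matching mean feature statistics gives a smoother training signal than a binary real/fake decision, which is why feature matching is a common stabilizer in GAN-based semi-supervised methods.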
PUBLICATION RECORD
- Publication year: 2019
- Venue: IEEE Transactions on Pattern Analysis and Machine Intelligence
- Publication date: 2019-08-15
- Fields of study: Medicine, Computer Science
- Source metadata: Semantic Scholar, PubMed