Illumination-Aware Multi-Task GANs for Foreground Segmentation
Dimitrios Sakkos, Edmond S. L. Ho, Hubert P. H. Shum
Published 2019 in IEEE Access

ABSTRACT
Foreground-background segmentation has been an active research area for many years. However, conventional models fail to produce accurate results when challenged with videos captured under difficult illumination conditions. In this paper, we present a robust model that accurately extracts the foreground even in exceptionally dark or bright scenes, as well as under continuously varying illumination within a video sequence. This is accomplished by a triple multi-task generative adversarial network (TMT-GAN) that effectively models the semantic relationship between dark and bright images and performs binary segmentation end-to-end. Our contribution is twofold. First, we show that by jointly optimizing the GAN loss and the segmentation loss, our network learns both tasks simultaneously, and the tasks mutually benefit each other. Second, fusing features of images with varying illumination into the segmentation branch vastly improves the performance of the network. Comparative evaluations on highly challenging real and synthetic benchmark datasets (ESI and SABS) demonstrate the robustness of TMT-GAN and its superiority over state-of-the-art approaches.
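The abstract's first contribution is the joint optimization of an adversarial loss and a binary segmentation loss. The sketch below illustrates one plausible form of such a multi-task objective in plain NumPy: a non-saturating generator GAN loss combined with a binary cross-entropy segmentation loss. The weighting factor `lam` and the specific loss forms are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy for the segmentation branch.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def joint_loss(d_fake, seg_pred, seg_mask, lam=1.0):
    # Non-saturating generator GAN loss plus the segmentation loss.
    # `lam` balances the two tasks; this weighting scheme is an
    # illustrative assumption, not taken from the paper.
    gan_loss = -np.mean(np.log(np.clip(d_fake, 1e-7, 1.0)))
    seg_loss = bce(seg_pred, seg_mask)
    return gan_loss + lam * seg_loss

# Illustrative usage with toy discriminator scores and a 4-pixel mask.
d_fake = np.array([0.9, 0.8])                  # discriminator output on generated images
seg_mask = np.array([0.0, 1.0, 1.0, 0.0])      # ground-truth binary mask
seg_pred = np.array([0.1, 0.9, 0.8, 0.2])      # predicted foreground probabilities
total = joint_loss(d_fake, seg_pred, seg_mask)
```

Because both terms are minimized by the same network, gradients from the segmentation task can regularize the generator and vice versa, which is the mutual-benefit effect the abstract claims.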
PUBLICATION RECORD
- Publication year: 2019
- Venue: IEEE Access
- Publication date: 2019-02-04
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar