Low-Light Salient Object Detection by Learning to Highlight the Foreground Objects
Xiao Lu, Yulin Yuan, Xing Liu, Lucai Wang, Xuanyu Zhou, Yimin Yang
Published 2024 in IEEE Transactions on Circuits and Systems for Video Technology (Print)
ABSTRACT
Previous methods in salient object detection (SOD) have mainly focused on favorable illumination conditions while neglecting performance in low-light conditions, which significantly impedes the development of related downstream tasks. In this work, considering that it is impractical to annotate large-scale labels for this task, we present a framework (HDNet) that detects salient objects in low-light images using synthetic images. HDNet consists of a foreground highlight sub-network (HNet) and an appearance-aware detection sub-network (DNet), which are learned jointly in an end-to-end manner. Specifically, to highlight the foreground objects, we design HNet to estimate parameters that adaptively adjust the dynamic range of each pixel; it can be trained via the weak supervision signals of the salient object labels. In addition, we design a simple detection network (DNet) with a contextual feature fusion module and a multi-scale feature refinement module for detailed feature fusion and refinement. Furthermore, we contribute the first annotated dataset for salient object detection in low-light images (SOD-LL), including 6,000 labeled synthetic images (SOD-LLS) and 2,000 labeled real images (SOD-LLR). Experimental results on SOD-LL and other low-light videos in the wild demonstrate the effectiveness and generalization ability of our method. Our dataset and code are available at https://github.com/Ylinyuan/HDNet.
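The abstract does not specify how HNet parameterizes the per-pixel dynamic-range adjustment. As a minimal illustration only, the sketch below applies an iterative quadratic enhancement curve (in the style of curve-estimation enhancers such as Zero-DCE) driven by a per-pixel parameter map `alpha_map` — in HDNet, such parameters would be predicted by HNet rather than supplied by hand. All function names here are hypothetical, not the authors' API.

```python
def highlight_pixel(x, alpha, iterations=4):
    """Brighten one normalized pixel value x in [0, 1] by repeatedly
    applying the curve x <- x + alpha * x * (1 - x).
    alpha in [0, 1] controls the adjustment strength; alpha = 0 is identity.
    (Assumed curve form for illustration; not HDNet's actual formulation.)"""
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    # The curve keeps values in [0, 1] for alpha <= 1; clamp for safety.
    return min(max(x, 0.0), 1.0)


def highlight_image(img, alpha_map, iterations=4):
    """Apply the per-pixel curve to a 2-D image (nested lists of floats),
    where alpha_map has the same shape and holds one alpha per pixel."""
    return [
        [highlight_pixel(x, a, iterations) for x, a in zip(row, alpha_row)]
        for row, alpha_row in zip(img, alpha_map)
    ]
```

A saliency-shaped `alpha_map` (large alpha on foreground pixels, near zero on background) would brighten the object while leaving the background untouched, which matches the weakly supervised foreground-highlighting role the abstract assigns to HNet.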
PUBLICATION RECORD
- Publication year
2024
- Venue
IEEE Transactions on Circuits and Systems for Video Technology (Print)
- Publication date
2024-08-01
- Fields of study
Computer Science
- Source metadata
Semantic Scholar
REFERENCES
62 references
CITED BY
21 citing papers