SAC-Net: Spatial Attenuation Context for Salient Object Detection
Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Tianyu Wang, P. Heng
Published 2019 in IEEE Transactions on Circuits and Systems for Video Technology (Print)

ABSTRACT
This paper presents a new deep neural network design for salient object detection that maximizes the integration of local and global image context within, around, and beyond the salient objects. Our key idea is to adaptively propagate and aggregate the image context features with variable attenuation over the entire feature maps. To achieve this, we design the spatial attenuation context (SAC) module, which recurrently translates and aggregates the context features independently with different attenuation factors, and then attentively learns the weights to adaptively integrate the aggregated context features. By further embedding the module to process individual layers in a deep network, namely SAC-Net, we can train the network end-to-end and optimize the context features for detecting salient objects. Experimental results show that our method performs favorably against 29 state-of-the-art methods on six common benchmark datasets, both quantitatively and visually.
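The abstract describes the SAC module as recurrently translating context features in the spatial domain under several attenuation factors, then fusing the resulting context maps with attention weights. A minimal NumPy sketch of that idea is below; it is an illustrative reading of the abstract, not the paper's implementation. The directional-propagation scheme, the set of attenuation factors `alphas`, and the scalar `attn_logits` (the paper learns per-position attention weights end-to-end) are all assumptions for illustration.

```python
import numpy as np

def propagate(feat, alpha):
    """Recurrently translate a 2-D feature map along four directions,
    attenuating the carried context by `alpha` at each step
    (a simplified reading of the paper's attenuation scheme)."""
    H, W = feat.shape
    out = np.zeros((4, H, W), dtype=feat.dtype)
    h = np.zeros(H, dtype=feat.dtype)          # left-to-right
    for j in range(W):
        h = feat[:, j] + alpha * h
        out[0][:, j] = h
    h = np.zeros(H, dtype=feat.dtype)          # right-to-left
    for j in reversed(range(W)):
        h = feat[:, j] + alpha * h
        out[1][:, j] = h
    h = np.zeros(W, dtype=feat.dtype)          # top-to-bottom
    for i in range(H):
        h = feat[i, :] + alpha * h
        out[2][i, :] = h
    h = np.zeros(W, dtype=feat.dtype)          # bottom-to-top
    for i in reversed(range(H)):
        h = feat[i, :] + alpha * h
        out[3][i, :] = h
    return out.sum(axis=0)                     # aggregate the four directions

def sac_module(feat, alphas, attn_logits):
    """Compute context maps under different attenuation factors and fuse
    them with softmax attention weights (fixed scalars here; the paper
    learns these weights adaptively within the network)."""
    maps = np.stack([propagate(feat, a) for a in alphas])   # (K, H, W)
    w = np.exp(attn_logits) / np.exp(attn_logits).sum()     # softmax over K
    return (w[:, None, None] * maps).sum(axis=0)

feat = np.random.rand(8, 8).astype(np.float32)
out = sac_module(feat, alphas=[0.0, 0.5, 1.0], attn_logits=np.zeros(3))
print(out.shape)  # (8, 8)
```

With `alpha = 0` no context is carried, so each direction returns the input and the aggregate is simply `4 * feat`; with `alpha = 1` each position accumulates the full directional context, mirroring the "variable attenuation" range the abstract describes.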
PUBLICATION RECORD
- Publication year
2019
- Venue
IEEE Transactions on Circuits and Systems for Video Technology (Print)
- Publication date
2019-03-25
- Fields of study
Computer Science, Engineering
- Source metadata
Semantic Scholar