Spatial–Spectral Fusion by Combining Deep Learning and Variational Model
Huanfeng Shen, Menghui Jiang, Jie Li, Qiangqiang Yuan, Yanchong Wei, Liangpei Zhang
Published 2018 in IEEE Transactions on Geoscience and Remote Sensing
ABSTRACT
In the field of spatial–spectral fusion, variational model-based methods and deep learning (DL)-based methods are the state-of-the-art approaches. This paper presents a fusion method that combines a deep neural network with a variational model for the most common case of spatial–spectral fusion: panchromatic (PAN)/multispectral (MS) fusion. Specifically, a deep residual convolutional neural network (CNN) is first trained to learn the gradient features of the high spatial resolution multispectral (HR-MS) image. An image observation variational model is then formulated to describe the relationships among the ideal fused image, the observed low spatial resolution multispectral (LR-MS) image, and the learned gradient priors. The fusion result is then obtained by solving this variational model. Both quantitative and visual assessments on high-quality images from various sources demonstrate that the proposed method outperforms all the mainstream algorithms included in the comparison in terms of overall fusion accuracy.
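The abstract describes a two-stage pipeline: a CNN predicts the gradients of the HR-MS image, and a variational model then fuses the LR-MS observation with that gradient prior. As a rough illustration only (not the authors' implementation), the sketch below replaces the CNN's output with given gradient targets `gx, gy`, models the observation operator as simple average-pooling, and minimizes a generic objective 0.5·||S(X) − Y||² + 0.5·λ·||∇X − G||² by gradient descent. All names and parameters (`fuse`, `lam`, `step`) are illustrative assumptions; the paper's actual observation model, prior, and solver differ.

```python
import numpy as np

def down(x, r):
    """Average-pool by factor r: a stand-in for the blur + decimation observation model S."""
    h, w = x.shape
    return x.reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def down_adj(y, r):
    """Adjoint of average-pooling: spread each coarse value back over its r*r block."""
    return np.kron(y, np.ones((r, r))) / (r * r)

# Periodic forward differences and their adjoints (circular boundary keeps the adjoints simple).
def dx(x):     return np.roll(x, -1, axis=1) - x
def dy(x):     return np.roll(x, -1, axis=0) - x
def dx_adj(v): return np.roll(v, 1, axis=1) - v
def dy_adj(v): return np.roll(v, 1, axis=0) - v

def fuse(y_lr, gx, gy, r, lam=0.1, step=0.5, iters=400):
    """Minimize 0.5*||S(X)-Y||^2 + 0.5*lam*||grad(X)-G||^2 by plain gradient descent.

    y_lr   : observed LR-MS band
    gx, gy : gradient targets (in the paper, predicted by the residual CNN)
    """
    x = np.kron(y_lr, np.ones((r, r)))  # start from naive nearest-neighbor upsampling
    for _ in range(iters):
        g = down_adj(down(x, r) - y_lr, r)                       # data-fidelity gradient
        g += lam * (dx_adj(dx(x) - gx) + dy_adj(dy(x) - gy))     # gradient-prior gradient
        x -= step * g
    return x
```

Supplying accurate gradient targets pulls the solution toward the sharp HR-MS image while the data term keeps it consistent with the LR-MS observation, which is the essential mechanism the abstract describes.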
PUBLICATION RECORD
- Publication year
2018
- Venue
IEEE Transactions on Geoscience and Remote Sensing
- Publication date
2018-09-04
- Fields of study
Computer Science, Engineering, Environmental Science
- Source metadata
Semantic Scholar
REFERENCES
- 52 references
CITED BY
- 69 citing papers