Rotation-Invariant Attention Network for Hyperspectral Image Classification

Xiangtao Zheng, Hao Sun, Xiaoqiang Lu, Wei Xie

Published 2022 in IEEE Transactions on Image Processing

ABSTRACT

Hyperspectral image (HSI) classification refers to identifying the land-cover category of each pixel based on the spectral signatures and spatial information of HSIs. To exploit spatial information, recent deep learning-based methods usually crop an HSI patch from the original HSI as input and rely on $3 \times 3$ convolution as a key component to capture spatial features. However, $3 \times 3$ convolution is sensitive to the spatial rotation of its input, which causes these methods to perform worse on rotated HSIs. To alleviate this problem, a rotation-invariant attention network (RIAN) is proposed for HSI classification. First, a center spectral attention (CSpeA) module is designed to suppress redundant spectral bands while avoiding the influence of pixels from other categories. Then, a rectified spatial attention (RSpaA) module is proposed to replace $3 \times 3$ convolution for extracting rotation-invariant spectral-spatial features from HSI patches. The CSpeA module, $1 \times 1$ convolution, and the RSpaA module are combined to build the proposed RIAN for HSI classification. Experimental results demonstrate that RIAN is invariant to the spatial rotation of HSIs and achieves superior performance, e.g., an overall accuracy of 86.53% (a 1.04% improvement) on the Houston database. The code for this work is available at https://github.com/spectralpublic/RIAN.
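As a note on the motivation, a minimal NumPy sketch (not the paper's code; the toy patch, kernels, and helper names here are invented for illustration) shows why a $1 \times 1$ convolution, being a per-pixel map over spectral bands, commutes with a 90-degree spatial rotation of the input patch, while a generic $3 \times 3$ convolution does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # Per-pixel linear map over channels: (H, W, Cin) @ (Cin, Cout).
    # Touches no spatial neighborhood, so it commutes with rotation.
    return x @ w

def conv3x3(x, k):
    # Single-channel 'valid' 3x3 convolution (cross-correlation form).
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

patch = rng.standard_normal((7, 7, 4))   # toy HSI patch: 7x7 pixels, 4 bands
w = rng.standard_normal((4, 2))          # 1x1 conv weights: 4 bands -> 2 features
k = rng.standard_normal((3, 3))          # generic (non-symmetric) 3x3 kernel

# Rotating the patch then applying a 1x1 conv equals applying it then rotating.
a = np.rot90(conv1x1(patch, w), axes=(0, 1))
b = conv1x1(np.rot90(patch, axes=(0, 1)), w)
print(np.allclose(a, b))   # True

# For a generic 3x3 kernel the two orders disagree: the conv is rotation-sensitive.
c = np.rot90(conv3x3(patch[:, :, 0], k))
d = conv3x3(np.rot90(patch[:, :, 0]), k)
print(np.allclose(c, d))   # False
```

This mirrors the abstract's design choice: spectral processing ($1 \times 1$ convolution, CSpeA) is inherently rotation-invariant, so only the spatial aggregation step needs a rotation-invariant replacement such as RSpaA.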
