Kavli Affiliate: Jing Wang
| First 5 Authors: Judy X Yang, Jun Zhou, Jing Wang, Hui Tian, Alan Wee Chung Liew
| Summary:
Classifying hyperspectral images is a difficult task in remote sensing due to
the complexity of their high-dimensional data. To address this challenge, we propose
HSIMamba, a novel framework that uses bidirectional reversed convolutional
neural network pathways to extract spectral features more efficiently.
Additionally, it incorporates a specialized block for spatial analysis. Our
approach combines the operational efficiency of CNNs with the dynamic feature
extraction capability of the attention mechanisms found in Transformers, while
avoiding their high computational demands. HSIMamba is designed to
process data bidirectionally, significantly enhancing the extraction of
spectral features and integrating them with spatial information for
comprehensive analysis. This approach improves classification accuracy beyond
current benchmarks and addresses computational inefficiencies encountered with
advanced models like Transformers. HSIMamba was tested on three widely
recognized datasets, Houston 2013, Indian Pines, and Pavia University, and
demonstrated exceptional performance, surpassing existing state-of-the-art
models in HSI classification. These results highlight the methodological
innovation of HSIMamba and its practical implications, which are particularly
valuable in contexts where computational resources are limited. HSIMamba
redefines the standards of efficiency and accuracy in HSI classification,
thereby enhancing the capabilities of remote sensing applications.
Hyperspectral imaging has become a crucial tool for environmental surveillance,
agriculture, and other critical areas that require detailed analysis of the
Earth's surface. Please see our code in HSIMamba for more details.
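
The summary describes the architecture only at a high level: a bidirectional (forward and reversed) pass over the spectral dimension, plus a dedicated block for spatial analysis, fused for per-pixel classification. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; all names (BidirectionalSpectralEncoder, SpatialBlock, HSIClassifierSketch, hidden_dim) are hypothetical, and plain convolutions stand in for whatever layers HSIMamba actually uses.

# Illustrative sketch only, assuming PyTorch; names and layer choices are
# hypothetical and not taken from the HSIMamba code base.
import torch
import torch.nn as nn

class BidirectionalSpectralEncoder(nn.Module):
    """Processes the spectral axis in both directions and fuses the results."""
    def __init__(self, num_bands: int, hidden_dim: int = 64):
        super().__init__()
        # One 1-D convolutional pathway per direction over the spectral axis.
        self.forward_path = nn.Conv1d(1, hidden_dim, kernel_size=3, padding=1)
        self.backward_path = nn.Conv1d(1, hidden_dim, kernel_size=3, padding=1)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, x):                        # x: (batch, num_bands)
        x = x.unsqueeze(1)                       # (batch, 1, num_bands)
        fwd = self.forward_path(x)               # forward spectral pass
        bwd = self.backward_path(x.flip(-1))     # reversed spectral pass
        feats = torch.cat([fwd.mean(-1), bwd.mean(-1)], dim=-1)
        return self.fuse(feats)                  # (batch, hidden_dim)

class SpatialBlock(nn.Module):
    """Small 2-D convolutional block over a pixel's spatial neighbourhood."""
    def __init__(self, num_bands: int, hidden_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(num_bands, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, patch):                    # patch: (batch, num_bands, H, W)
        return self.conv(patch).flatten(1)       # (batch, hidden_dim)

class HSIClassifierSketch(nn.Module):
    """Concatenates spectral and spatial features for per-pixel classification."""
    def __init__(self, num_bands: int, num_classes: int, hidden_dim: int = 64):
        super().__init__()
        self.spectral = BidirectionalSpectralEncoder(num_bands, hidden_dim)
        self.spatial = SpatialBlock(num_bands, hidden_dim)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, pixel_spectrum, patch):
        feats = torch.cat([self.spectral(pixel_spectrum),
                           self.spatial(patch)], dim=-1)
        return self.head(feats)

# Example: an Indian Pines-style input (200 bands, 16 classes), 7x7 spatial patch.
model = HSIClassifierSketch(num_bands=200, num_classes=16)
logits = model(torch.randn(8, 200), torch.randn(8, 200, 7, 7))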
| Search Query: ArXiv Query: search_query=au:”Jing Wang”&id_list=&start=0&max_results=3