Interpreting hyperspectral remote sensing image classification methods via explainable artificial intelligence


Turan, Deren Ege and Aptoula, Erchan and Erturk, A. and Taskin, G. (2023) Interpreting hyperspectral remote sensing image classification methods via explainable artificial intelligence. In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2023), Pasadena, CA, USA

Full text not available from this repository.

Abstract

This study addresses the explainability challenges of deep-learning models in the context of hyperspectral remote sensing image classification. Three prominent explainable artificial intelligence methods, namely GradCAM, GradCAM++, and Guided Backpropagation, have been employed to shed light on the decision-making process of a typical convolutional neural network model during spatial-spectral hyperspectral image classification. The experiments investigate the impact of pixel patch size on spatial attention, as well as the relative importance of spectral bands. The findings provide insights both into the behavior of convolutional neural networks and into the comparative performance of the explainability techniques.
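For readers unfamiliar with GradCAM, the core of the method is a weighted combination of a convolutional layer's feature maps, where each channel's weight is the global-average-pooled gradient of the class score with respect to that channel. The following is a minimal numpy sketch of that weighting formula only; the `feature_maps` and `gradients` arrays are random stand-ins for values a deep-learning framework would supply (e.g. via backward hooks) for one hyperspectral pixel patch, not the paper's actual model or data.

```python
import numpy as np

# Stand-ins for one forward/backward pass through a conv layer:
# 8 feature-map channels over a 5x5 spatial pixel patch (illustrative sizes).
rng = np.random.default_rng(0)
feature_maps = rng.random((8, 5, 5))        # A_k: layer activations
gradients = rng.standard_normal((8, 5, 5))  # dY_c / dA_k for target class c

# alpha_k: global-average-pool each channel's gradient over space
alphas = gradients.mean(axis=(1, 2))        # shape (8,)

# GradCAM map: ReLU of the alpha-weighted sum of feature maps
cam = np.maximum(np.tensordot(alphas, feature_maps, axes=1), 0.0)  # (5, 5)

# Normalise to [0, 1] so the map can be overlaid as a spatial heatmap
if cam.max() > 0:
    cam = cam / cam.max()
```

GradCAM++ differs only in how the channel weights are computed (using higher-order gradient terms instead of a plain spatial average), while Guided Backpropagation works at the input level rather than on feature maps.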
Item Type: Papers in Conference Proceedings
Uncontrolled Keywords: Explainable artificial intelligence; GradCAM; guided backpropagation; hyperspectral images; interpretability
Divisions: Faculty of Engineering and Natural Sciences
Depositing User: Erchan Aptoula
Date Deposited: 08 Feb 2024 14:08
Last Modified: 08 Feb 2024 14:08
URI: https://research.sabanciuniv.edu/id/eprint/48806
