What does my network learn? Assessing interpretability of deep learning for EEG

Pinar Göktepe-Kavis, F. M. Aellen, Sigurd L. Alnes, Athina Tzovara

Published 2025 in Imaging Neuroscience

ABSTRACT

Electrophysiological studies are profiting from multivariate pattern analysis methods. However, these mostly rely on machine-learning algorithms that assume consistent response latencies across trials and individuals. Deep learning provides high performance without such assumptions, but often at the cost of interpretability of learned features. Here, we evaluated how the interpretability of deep learning for electroencephalography (EEG) data is affected by preprocessing choices, the network’s architecture, and the way the learned features are extracted and visualized. We trained two convolutional neural networks (CNN): (1) ResNet, a residual network, and (2) EEGNet, which leverages spatiotemporal properties of EEG. We trained these networks to decode single-trial EEG responses to three different visual stimuli (visual dataset) and to the presence of a sound (auditory dataset). We then extracted and visualized learned features with two gradient-based techniques: saliency and gradient-weighted activation maps (GradCam). Results showed that EEGNet and ResNet performed at a similar level. Yet, visualization of learned features revealed that different architectures learn different aspects of the data. Between the two CNNs, EEGNet features had a higher similarity to the EEG data than ResNet features. Moreover, the latency and distribution of important electrodes varied depending on the visualization technique. GradCam yielded features more similar to the EEG data than saliency did, emphasizing the impact of the feature extraction method on interpretability. Our results call for careful consideration of network architecture and feature visualization methods to improve interpretability, which is a crucial step for advancing the use of deep learning in EEG research.
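The saliency technique described in the abstract computes the gradient of a class score with respect to the input, so that large gradient magnitudes flag the electrodes and latencies the network relies on. The sketch below illustrates the idea on a toy one-hidden-layer classifier standing in for a trained CNN; the channel and sample counts, the network shape, and the random weights are all hypothetical, not the paper's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial EEG: 32 channels x 256 time samples (hypothetical sizes)
n_ch, n_t, n_classes, n_hidden = 32, 256, 3, 16
x = rng.standard_normal(n_ch * n_t)

# Toy one-hidden-layer classifier standing in for a trained CNN
W1 = rng.standard_normal((n_hidden, n_ch * n_t)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_classes, n_hidden)) * 0.1
b2 = np.zeros(n_classes)

def forward(x):
    h_pre = W1 @ x + b1
    h = np.maximum(h_pre, 0.0)          # ReLU
    logits = W2 @ h + b2
    return h_pre, logits

h_pre, logits = forward(x)
c = int(np.argmax(logits))              # class whose evidence we explain

# Saliency: gradient of the class-c logit w.r.t. the input,
# backpropagated by hand through the ReLU (autograd does this in practice)
grad_h = W2[c]                          # d logit_c / d h
grad_h_pre = grad_h * (h_pre > 0)       # ReLU gate
saliency = np.abs(grad_h_pre @ W1)      # |d logit_c / d x|

# Reshape to channels x time to inspect important electrodes and latencies
saliency_map = saliency.reshape(n_ch, n_t)
print(saliency_map.shape)               # (32, 256)
```

In practice the same quantity is obtained with a framework's automatic differentiation (one backward pass from the chosen logit to the input tensor); GradCam differs in that it weights intermediate convolutional feature maps by their pooled gradients rather than propagating all the way back to the raw input.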
