ADformer: A Multi-Granularity Spatial-Temporal Transformer for EEG-Based Alzheimer Detection

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Yihe Wang

| Summary:

Electroencephalography (EEG) has emerged as a cost-effective and efficient
tool to support neurologists in the detection of Alzheimer’s Disease (AD).
However, most existing approaches rely heavily on manual feature engineering or
data transformation. While such techniques may provide benefits when working
with small-scale datasets, they often lead to information loss and distortion
when applied to large-scale data, ultimately limiting model performance.
Moreover, the limited subject scale and demographic diversity of datasets used
in prior studies hinder comprehensive evaluation of model robustness and
generalizability, thus restricting their applicability in real-world clinical
settings. To address these challenges, we propose ADformer, a novel
multi-granularity spatial-temporal transformer designed to capture both
temporal and spatial features from raw EEG signals, enabling effective
end-to-end representation learning. Our model introduces multi-granularity
embedding strategies across both spatial and temporal dimensions, leveraging a
two-stage intra-inter granularity self-attention mechanism to learn both local
patterns within each granularity and global dependencies across granularities.
We evaluate ADformer on 4 large-scale datasets comprising a total of 1,713
subjects, representing one of the largest corpora for EEG-based AD detection to
date, under a cross-validated, subject-independent setting. Experimental
results demonstrate that ADformer consistently outperforms existing methods,
achieving subject-level F1 scores of 92.82%, 89.83%, 67.99%, and 83.98% on the
4 datasets, respectively, in distinguishing AD from healthy control (HC)
subjects.
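
The multi-granularity embedding and two-stage intra-inter granularity attention described in the summary can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation: the patch lengths, the per-granularity summary ([CLS]) token, and the module names are all hypothetical, and only the temporal branch is sketched (a spatial branch would patch across channel groups analogously).

```python
import torch
import torch.nn as nn


class MultiGranularityTemporalEmbed(nn.Module):
    """Embed raw EEG (batch, channels, time) at several temporal granularities
    by slicing the series into non-overlapping patches of different lengths."""

    def __init__(self, n_channels: int, d_model: int, patch_lens=(8, 16, 32)):
        super().__init__()
        self.patch_lens = patch_lens
        self.projs = nn.ModuleList(
            nn.Linear(n_channels * p, d_model) for p in patch_lens
        )

    def forward(self, x):  # x: (B, C, T)
        tokens = []
        for p, proj in zip(self.patch_lens, self.projs):
            t = x.shape[-1] // p * p                    # drop the ragged tail
            patches = x[..., :t].unfold(2, p, p)        # (B, C, T//p, p)
            patches = patches.permute(0, 2, 1, 3).flatten(2)  # (B, T//p, C*p)
            tokens.append(proj(patches))                # (B, T//p, d_model)
        return tokens  # one token sequence per granularity


class TwoStageAttention(nn.Module):
    """Stage 1: self-attention within each granularity's token sequence.
    Stage 2: self-attention across the per-granularity summary tokens,
    letting granularities exchange global information."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.intra = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=2 * d_model, batch_first=True
            ),
            num_layers=1,
        )
        self.inter = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=2 * d_model, batch_first=True
            ),
            num_layers=1,
        )

    def forward(self, token_seqs):
        summaries = []
        for seq in token_seqs:                          # intra-granularity stage
            cls = self.cls.expand(seq.shape[0], -1, -1)
            out = self.intra(torch.cat([cls, seq], dim=1))
            summaries.append(out[:, 0])                 # per-granularity summary
        stacked = torch.stack(summaries, dim=1)         # (B, n_granularities, d)
        fused = self.inter(stacked)                     # inter-granularity stage
        return fused.mean(dim=1)                        # (B, d_model)
```

With an input of shape (8, 19, 256), for example, the embedding yields three token sequences that the two-stage module fuses into a single (8, d_model) representation, which a linear head could then map to an AD/HC prediction.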
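The cross-validated, subject-independent evaluation can likewise be sketched. The majority-vote aggregation from segment-level predictions to a subject-level label, the 5-fold choice, and the `fit_predict` callback are assumptions for illustration; scikit-learn's `GroupKFold` is one standard way to guarantee that no subject's segments appear in both the train and test folds.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import f1_score


def subject_level_f1(y_true_seg, y_pred_seg, subject_ids):
    """Aggregate per-segment binary predictions to one label per subject
    by majority vote, then score F1 at the subject level."""
    y_true, y_pred = [], []
    for s in np.unique(subject_ids):
        mask = subject_ids == s
        y_true.append(round(y_true_seg[mask].mean()))  # constant per subject
        y_pred.append(round(y_pred_seg[mask].mean()))  # majority vote
    return f1_score(y_true, y_pred)


def cross_validate(X, y, subject_ids, fit_predict, n_splits=5):
    """Subject-independent CV: GroupKFold keeps every subject's segments
    entirely in either the train fold or the test fold."""
    scores = []
    for tr, te in GroupKFold(n_splits).split(X, y, groups=subject_ids):
        y_pred = fit_predict(X[tr], y[tr], X[te])
        scores.append(subject_level_f1(y[te], y_pred, subject_ids[te]))
    return float(np.mean(scores))
```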

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3
