Kavli Affiliate: Zhuo Li
| First 5 Authors: Lefei Shen
| Summary:
Transformer-based models have recently become dominant in Long-term Time
Series Forecasting (LTSF), yet the variations in their architecture, such as
encoder-only, encoder-decoder, and decoder-only designs, raise a crucial
question: What Transformer architecture works best for LTSF tasks? However,
existing models are often tightly coupled with various time-series-specific
designs, making it difficult to isolate the impact of the architecture itself.
To address this, we propose a novel taxonomy that disentangles these designs,
enabling clearer and more unified comparisons of Transformer architectures. Our
taxonomy considers key aspects such as attention mechanisms, forecasting
aggregations, forecasting paradigms, and normalization layers. Through
extensive experiments, we uncover several key insights: bi-directional
attention with joint-attention is most effective; more complete forecasting
aggregation improves performance; and the direct-mapping paradigm outperforms
autoregressive approaches. Furthermore, our combined model, utilizing optimal
architectural choices, consistently outperforms several existing models,
reinforcing the validity of our conclusions. We hope these findings offer
valuable guidance for future research on Transformer architectural designs in
LTSF. Our code is available at https://github.com/HALF111/TSF_architecture.
| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3
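As a rough illustration of the forecasting paradigms compared in the summary above, the sketch below (hypothetical, not taken from the paper's repository) contrasts a direct-mapping head, which projects the encoded look-back window to the full horizon in one shot, with an autoregressive loop that predicts one step at a time and rolls the window forward. All module names, layer sizes, and sequence lengths are illustrative assumptions.

# Minimal sketch, assuming a PyTorch environment; not the paper's actual model.
import torch
import torch.nn as nn


class TinyForecaster(nn.Module):
    def __init__(self, n_vars=7, d_model=64, seq_len=96, pred_len=24):
        super().__init__()
        self.pred_len = pred_len
        self.embed = nn.Linear(n_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # No causal mask, so every position attends bi-directionally.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Direct mapping: one linear head maps the flattened encoding to the whole horizon.
        self.direct_head = nn.Linear(seq_len * d_model, pred_len * n_vars)
        # Autoregressive: a one-step head, applied repeatedly.
        self.step_head = nn.Linear(d_model, n_vars)

    def forward_direct(self, x):                  # x: [batch, seq_len, n_vars]
        h = self.encoder(self.embed(x))           # [batch, seq_len, d_model]
        out = self.direct_head(h.flatten(1))      # all horizon steps in one shot
        return out.view(x.size(0), self.pred_len, -1)

    def forward_autoregressive(self, x):
        preds, window = [], x
        for _ in range(self.pred_len):
            h = self.encoder(self.embed(window))
            step = self.step_head(h[:, -1])       # predict the next step from the last position
            preds.append(step)
            # Roll the look-back window forward by appending the prediction.
            window = torch.cat([window[:, 1:], step.unsqueeze(1)], dim=1)
        return torch.stack(preds, dim=1)


if __name__ == "__main__":
    x = torch.randn(2, 96, 7)                     # [batch, look-back length, variables]
    model = TinyForecaster()
    print(model.forward_direct(x).shape)          # torch.Size([2, 24, 7])
    print(model.forward_autoregressive(x).shape)  # torch.Size([2, 24, 7])

Both heads share the same bi-directional encoder; the contrast is only in how the forecast is produced, which mirrors the direct-mapping versus autoregressive comparison reported in the abstract.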