Kavli Affiliate: Jing Wang
| First 5 Authors: Jiayu Xiong
| Summary:
Recently, Transformers (e.g., the Audio Spectrogram Transformer, AST) and
state-space models (e.g., Audio Mamba, AuM) have achieved remarkable progress
in audio modeling. However, the O(L^2) computational complexity of the
Transformer architecture hinders efficient long-sequence processing, while the
Mamba architecture tends to become unstable when scaling parameters and data.
To address these challenges, this paper proposes AudioRWKV (A-RWKV), a highly
efficient and stable architecture for audio modeling. Specifically, we inherit
the stable and efficient recurrent formulation of RWKV7 and replace its 1D
token-shift operation with a 2D depthwise separable convolution to better
capture local spectro-temporal patterns. Furthermore, we adapt the original
causal WKV kernel into a bidirectional WKV kernel (Bi-WKV), enabling global
context modeling over the entire audio sequence while maintaining linear
computational complexity. Benefiting from the inherent stability of the RWKV7
foundation, A-RWKV scales seamlessly to larger model sizes. Experimental
results demonstrate that, under the same linear-model regime, A-RWKV-S (22M)
achieves performance parity with AuM-B (92M) while exhibiting more stable
throughput than AST; for long-form audio (~5 minutes 28 seconds), the WKV7 kernel
achieves up to a 13.3x processing speedup.
| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
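The sketch below is a minimal, hypothetical illustration (not the authors' code) of the two architectural ideas the summary describes: replacing the 1D token-shift with a 2D depthwise separable convolution over the spectrogram patch grid, and a bidirectional "Bi-WKV" approximated here, purely for illustration, by running a toy causal linear scan forward and backward and averaging the two passes. All class and function names are invented for this example.

```python
# Hypothetical sketch of the ideas in the A-RWKV summary; names and details
# are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class DepthwiseSeparableShift2D(nn.Module):
    """Stand-in for the 1D token-shift: mixes each channel with its local
    spectro-temporal neighborhood via depthwise + pointwise convolution."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.pointwise = nn.Conv2d(dim, dim, 1)

    def forward(self, x, freq_bins, time_steps):
        # x: (batch, seq_len, dim), with seq_len = freq_bins * time_steps
        b, n, d = x.shape
        grid = x.transpose(1, 2).reshape(b, d, freq_bins, time_steps)
        grid = self.pointwise(self.depthwise(grid))
        return grid.reshape(b, d, n).transpose(1, 2)


def causal_scan(k, v, decay=0.9):
    """Toy causal linear recurrence: state_t = decay * state_{t-1} + k_t * v_t.
    Linear in sequence length, unlike O(L^2) self-attention."""
    b, n, d = k.shape
    state = torch.zeros(b, d, device=k.device)
    outs = []
    for t in range(n):
        state = decay * state + k[:, t] * v[:, t]
        outs.append(state)
    return torch.stack(outs, dim=1)


def bi_wkv_like(k, v, decay=0.9):
    """Bidirectional variant: average a forward and a time-reversed scan so
    every position sees the whole sequence while cost stays linear."""
    fwd = causal_scan(k, v, decay)
    bwd = causal_scan(k.flip(1), v.flip(1), decay).flip(1)
    return 0.5 * (fwd + bwd)


if __name__ == "__main__":
    x = torch.randn(2, 8 * 16, 64)            # 8 freq bins x 16 time steps, 64 channels
    mixer = DepthwiseSeparableShift2D(64)
    mixed = mixer(x, freq_bins=8, time_steps=16)
    out = bi_wkv_like(mixed, mixed)
    print(mixed.shape, out.shape)             # both torch.Size([2, 128, 64])
```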