NVR: Vector Runahead on NPUs for Sparse Memory Access

Kavli Affiliate: Jing Wang

| First 5 Authors: Hui Wang, Zhengpeng Zhao, Jing Wang, Yushu Du, Yuan Cheng

| Summary:

Deep Neural Networks increasingly exploit sparsity to curb the growth of
model parameter size. However, translating sparsity and pruning into
wall-clock speedups remains challenging: the resulting irregular memory
access patterns cause frequent cache misses. In this paper, we present NPU
Vector Runahead (NVR), a prefetching mechanism tailored to NPUs that
addresses cache misses in sparse DNN workloads. Rather than optimising memory
access patterns in software, which incurs high overhead and ports poorly, NVR
adapts runahead execution to the unique architecture of NPUs. NVR provides a
general micro-architectural solution for sparse DNN workloads that requires
no compiler or algorithmic support: it operates as a decoupled, speculative,
lightweight hardware sub-thread alongside the NPU, with minimal hardware
overhead (under 5%). NVR achieves an average 90% reduction in cache misses
compared to state-of-the-art prefetching in general-purpose processors,
delivering a 4x average speedup on sparse workloads over NPUs without
prefetching. Moreover, we investigate the benefits of adding a small (16 KB)
cache to the NPU in combination with NVR. Our evaluation shows that enlarging
this modest cache delivers 5x greater performance benefit than increasing the
L2 cache size by the same amount.
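NVR itself is a micro-architectural mechanism with no software API, but the access pattern it targets is easy to see in code. The sketch below is a minimal software analogue, assuming a CSR sparse matrix-vector product: it shows the index-dependent gathers (`x[col_idx[j]]`) that defeat ordinary stride prefetchers, and mimics runahead by issuing prefetches a fixed distance ahead of the compute. Every name here (`spmv_csr_runahead`, `RUNAHEAD_DISTANCE`) is illustrative, not taken from the paper.

```c
/* Software analogue of runahead prefetching for a CSR SpMV kernel.
 * NVR is a hardware sub-thread; this sketch only illustrates the
 * irregular, index-dependent accesses it targets and the idea of
 * running ahead of the compute to hide their latency. */
#include <stddef.h>

#define RUNAHEAD_DISTANCE 16  /* how far the "sub-thread" runs ahead */

void spmv_csr_runahead(size_t n_rows,
                       const size_t *row_ptr,  /* CSR row offsets (n_rows+1) */
                       const int    *col_idx,  /* CSR column indices         */
                       const float  *vals,     /* CSR nonzero values         */
                       const float  *x,        /* dense input vector         */
                       float        *y)        /* dense output vector        */
{
    size_t nnz = row_ptr[n_rows];
    for (size_t i = 0; i < n_rows; i++) {
        float acc = 0.0f;
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; j++) {
            /* Runahead step: speculatively touch the vector entry the
             * compute loop will need RUNAHEAD_DISTANCE iterations later.
             * The address depends on col_idx[], so a stride prefetcher
             * cannot predict it. */
            size_t ahead = j + RUNAHEAD_DISTANCE;
            if (ahead < nnz)
                __builtin_prefetch(&x[col_idx[ahead]], 0 /* read */, 1);

            acc += vals[j] * x[col_idx[j]];  /* irregular gather */
        }
        y[i] = acc;
    }
}
```

In hardware, NVR performs the equivalent of the prefetch line as a decoupled, speculative sub-thread that races ahead of the NPU's pipeline, which is why, per the abstract, it needs no compiler or algorithmic support to insert hints like these.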

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
