Exploring The Neural Burden In Pruned Models: An Insight Inspired By Neuroscience

Kavli Affiliate: Yi Zhou

| First 5 Authors: Zeyu Wang, Weichen Dai, Xiangyu Zhou, Ji Qi, Yi Zhou

| Summary:

Vision Transformers and their variants have been adopted in many visual tasks
due to their powerful capabilities, which also bring significant challenges in
computation and storage. Consequently, researchers have introduced various
compression methods in recent years, among which pruning techniques are widely
used to remove a significant fraction of the network. These methods can thus
reduce a large fraction of FLOPs, but often lead to a decrease in model
performance. To investigate the underlying causes, we focus on pruning methods
belonging to the pruning-during-training category, draw inspiration from
neuroscience, and propose a new concept for artificial neural network models
named Neural Burden. We investigate its impact on the model pruning process,
and subsequently explore a simple yet effective approach to mitigate the
decline in model performance, which can be applied to any
pruning-during-training technique. Extensive experiments indicate that the
neural burden phenomenon indeed exists, and show the potential of our method.
We hope that our findings can provide valuable insights for future research.
Code will be made publicly available after this paper is published.
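The summary does not spell out what pruning-during-training looks like in practice. For readers unfamiliar with the category, below is a minimal, generic sketch of magnitude-based pruning applied periodically while training continues; it illustrates the method family the paper studies, not the authors' Neural Burden approach, and all names, layer sizes, and schedules in it are illustrative assumptions.

```python
# Minimal sketch of generic magnitude-based pruning-during-training.
# Illustrative only: this is NOT the paper's method, and the model,
# data, sparsity level, and pruning interval are placeholder choices.
import torch
import torch.nn as nn


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask that keeps the largest-magnitude (1 - sparsity) weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x = torch.randn(32, 64)          # placeholder input batch
    y = torch.randint(0, 10, (32,))  # placeholder labels
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Prune during training: every 100 steps, zero out the
    # smallest-magnitude weights of each Linear layer in place.
    if step % 100 == 0:
        with torch.no_grad():
            for m in model.modules():
                if isinstance(m, nn.Linear):
                    m.weight.mul_(magnitude_mask(m.weight, sparsity=0.5))
```

Note that in this simple variant, pruned weights may regrow between pruning steps because gradients still update them; production implementations typically register persistent masks (e.g., via torch.nn.utils.prune) to keep pruned weights at zero.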

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
