Kavli Affiliate: Feng Wang
| First 5 Authors: Feng Wang, Yaodong Yu, Guoyizhe Wei, Wei Shao, Yuyin Zhou
| Summary:
Since the introduction of Vision Transformer (ViT), patchification has long
been regarded as a de facto image tokenization approach for plain visual
architectures. By compressing the spatial size of images, this approach can
effectively shorten the token sequence and reduce the computational cost of
ViT-like plain architectures. In this work, we aim to thoroughly examine the
information loss caused by this patchification-based compressive encoding
paradigm and how it affects visual understanding. We conduct extensive patch
size scaling experiments and observe an intriguing scaling law in
patchification: models consistently benefit from decreased patch sizes and
attain improved predictive performance, all the way down to the minimum patch
size of 1×1, i.e., pixel tokenization. This conclusion is broadly applicable
across different vision tasks, various input scales, and diverse architectures
such as ViT and the recent Mamba models. Moreover, as a by-product, we discover
that with smaller patches, task-specific decoder heads become less critical for
dense prediction. In the experiments, we successfully scale up the visual
sequence to an exceptional length of 50,176 tokens, achieving a competitive
test accuracy of 84.6% with a base-sized model on the ImageNet-1k benchmark. We
hope this study can provide insights and theoretical foundations for future
works of building non-compressive vision models. Code is available at
https://github.com/wangf3014/Patch_Scaling.
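
The sequence lengths discussed above follow directly from the patchification arithmetic: a 224×224 image split into non-overlapping p×p patches yields (224/p)² tokens, so p=1 gives the 50,176-token sequence mentioned in the summary. A minimal sketch (not taken from the paper's code; the 224×224 resolution is the standard ImageNet-1k input size, assumed here for illustration):

```python
# Sketch of ViT-style patchification: an H x W image is split into
# (H/p) x (W/p) non-overlapping p x p patches, each flattened into one token.
# Smaller p means less spatial compression but a longer token sequence.

def num_tokens(height: int, width: int, patch: int) -> int:
    """Token count after patchifying an image (patch must divide both dims)."""
    assert height % patch == 0 and width % patch == 0
    return (height // patch) * (width // patch)

# Token counts for a 224x224 input at decreasing patch sizes:
for p in (16, 8, 4, 2, 1):
    print(f"patch {p}x{p}: {num_tokens(224, 224, p)} tokens")
# patch size 16 gives the usual 196 tokens; patch size 1 (pixel
# tokenization) gives 50,176 tokens, the length reported in the summary.
```

Note that the quadratic growth in sequence length as the patch shrinks is what makes pixel tokenization expensive for attention-based models, which motivates the paper's interest in architectures such as Mamba.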
| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3