Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer

Kavli Affiliate: Ke Wang

| First 5 Authors: Yuwen Tan, Qinhao Zhou, Xiang Xiang, Ke Wang, Yuchuan Wu

| Summary:

Class-incremental learning (CIL) aims to enable models to continuously learn
new classes while overcoming catastrophic forgetting. The introduction of
pre-trained models has brought new tuning paradigms to CIL. In this paper, we
revisit different parameter-efficient tuning (PET) methods within the context
of continual learning. We observe that adapter tuning demonstrates superiority
over prompt-based methods, even without parameter expansion in each learning
session. Motivated by this, we propose incrementally tuning the shared adapter
without imposing parameter update constraints, enhancing the learning capacity
of the backbone. Additionally, we retrain a unified classifier on features
sampled from the stored prototypes, which further improves performance.
We estimate the semantic shift of old prototypes without access to past samples
and update stored prototypes session by session. Our proposed method eliminates
model expansion and avoids retaining any image samples. It surpasses previous
pre-trained model-based CIL methods and demonstrates remarkable continual
learning capabilities. Experimental results on five CIL benchmarks validate the
effectiveness of our approach, achieving state-of-the-art (SOTA) performance.
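The two post-hoc steps in the summary (drift-compensated prototype updates and a classifier retrained on prototype-sampled features) can be sketched compactly. The snippet below is a hypothetical illustration, not the authors' released code: the function names, the kernel bandwidth `sigma`, the isotropic Gaussian sampling `scale`, and the logistic-regression head are all assumptions standing in for whatever the paper actually uses; it only conveys the general drift-compensation and prototype-sampling idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_prototype_shift(old_protos, feats_before, feats_after, sigma=0.2):
    """Hypothetical sketch: shift old-class prototypes using the drift of
    current-session features measured before/after adapter tuning, so no
    past images need to be stored. Each prototype moves by a
    similarity-weighted average of the drifts of nearby current features."""
    drift = feats_after - feats_before                 # (N, D) per-sample drift
    shifted = []
    for p in old_protos:                               # p: (D,) old prototype
        d2 = np.sum((feats_before - p) ** 2, axis=1)   # distances in old space
        w = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian kernel weights
        w /= w.sum() + 1e-8
        shifted.append(p + w @ drift)                  # apply weighted drift
    return np.stack(shifted)

def retrain_unified_classifier(prototypes, n_samples=256, scale=0.1):
    """Hypothetical sketch: sample pseudo-features around every stored class
    prototype (an isotropic Gaussian is assumed) and fit a single linear
    head over all classes seen so far."""
    X, y = [], []
    for c, p in enumerate(prototypes):
        X.append(np.random.normal(loc=p, scale=scale, size=(n_samples, len(p))))
        y.append(np.full(n_samples, c))
    X, y = np.concatenate(X), np.concatenate(y)
    return LogisticRegression(max_iter=500).fit(X, y)

# Toy usage: 5 old prototypes in a 16-d feature space, 100 current samples.
rng = np.random.default_rng(0)
old_protos = rng.normal(size=(5, 16))
feats_before = rng.normal(size=(100, 16))
feats_after = feats_before + 0.05          # stand-in for adapter-induced drift
updated = estimate_prototype_shift(old_protos, feats_before, feats_after)
clf = retrain_unified_classifier(updated)
```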

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3
