COCA: Classifier-Oriented Calibration via Textual Prototype for Source-Free Universal Domain Adaptation

Kavli Affiliate: Yi Zhou

| First 5 Authors: Xinghong Liu, Yi Zhou, Tao Zhou, Chun-Mei Feng, Ling Shao

| Summary:

Universal domain adaptation (UniDA) aims to address domain and category
shifts across data sources. Recently, due to more stringent data restrictions,
researchers have introduced source-free UniDA (SF-UniDA). SF-UniDA methods
eliminate the need for direct access to source samples when performing
adaptation to the target domain. However, existing SF-UniDA methods still
require an extensive quantity of labeled source samples to train a source
model, resulting in significant labeling costs. To tackle this issue, we
present a novel plug-and-play classifier-oriented calibration (COCA) method.
COCA, which exploits textual prototypes, is designed for source models built on
few-shot learning with vision-language models (VLMs). It endows the
VLM-powered few-shot learners, which are built for closed-set classification,
with the unknown-aware ability to distinguish common and unknown classes in the
SF-UniDA scenario. Crucially, COCA offers a new paradigm for tackling SF-UniDA
challenges with VLMs, focusing on optimizing the classifier rather than the
image encoder. Experiments show that COCA outperforms state-of-the-art UniDA and
SF-UniDA models.
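
To make the core idea concrete, the sketch below illustrates (under assumptions, not the authors' released code) how textual prototypes from a CLIP-like VLM can serve as a cosine-similarity classifier, with low-similarity target samples rejected as "unknown". The class names, prompt template, threshold, and `classify` helper are all hypothetical placeholders; COCA's actual calibration procedure is more involved.

```python
# Minimal sketch, assuming a CLIP-like VLM: textual prototypes act as classifier
# weights, and a similarity threshold separates common from unknown classes.
# This is illustrative only and is NOT the COCA algorithm itself.
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Example common (source-known) classes; in practice these come from the source label set.
common_classes = ["dog", "cat", "car"]
prompts = [f"a photo of a {c}" for c in common_classes]

with torch.no_grad():
    # Encode prompts into L2-normalized textual prototypes.
    text_protos = model.encode_text(clip.tokenize(prompts).to(device))
    text_protos = text_protos / text_protos.norm(dim=-1, keepdim=True)

def classify(image_tensor: torch.Tensor, unknown_threshold: float = 0.25) -> str:
    """Return a common-class label, or 'unknown' if the best cosine similarity
    to the textual prototypes falls below a (purely illustrative) threshold."""
    with torch.no_grad():
        img = model.encode_image(image_tensor.to(device))
        img = img / img.norm(dim=-1, keepdim=True)
        sims = (img @ text_protos.T).squeeze(0)  # cosine similarities to prototypes
    best = sims.argmax().item()
    return "unknown" if sims[best].item() < unknown_threshold else common_classes[best]

# Usage (hypothetical): image_tensor = preprocess(pil_image).unsqueeze(0)
# print(classify(image_tensor))
```

Note that only the classifier side (the textual prototypes and the decision rule) is touched here, which mirrors the paper's stated focus on classifier rather than image-encoder optimization.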

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3