Learning to Prompt Your Domain for Vision-Language Models

Kavli Affiliate: Feng Wang

| First 5 Authors: Guoyizhe Wei, Feng Wang, Anshul Shah, Rama Chellappa,

| Summary:

Prompt learning has recently become a highly efficient transfer learning
paradigm for Contrastive Language-Image Pretraining (CLIP) models. Compared
with fine-tuning the entire encoder, prompt learning obtains highly
competitive results by optimizing only a small number of parameters, which is
especially attractive for federated learning applications that prioritize
communication efficiency. However, in this work, we identify that directly
transferring prompt learning approaches to federated learning does not yield
favorable results, since the model often suffers from considerable domain
gaps across different clients. To address this issue, we propose ADAPT, a
novel domain-aware prompt learning approach that facilitates both intra- and
inter-domain prompts across federated participants. The basic idea of ADAPT
is that the prompted CLIP should first detect the input image's domain
correspondence before predicting its category. Extensive experiments
demonstrate ADAPT's significant efficiency and effectiveness in federated
learning. For example, by learning and sharing only 0.08M parameters, ADAPT
attains 68.4% average accuracy over the six domains of the DomainNet dataset,
improving over the original CLIP by a large margin of 14.8%.

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3
