OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Wanhua Li, Xiaoke Huang, Zheng Zhu, Yansong Tang, Xiu Li

| Summary:

This paper presents a language-powered paradigm for ordinal regression.
Existing methods usually treat each rank as a category and employ a set of
weights to learn these concepts. Such methods are prone to overfitting and
usually attain unsatisfactory performance, as the learned concepts are derived
mainly from the training set. Recent large pre-trained vision-language models such as
CLIP have shown impressive performance on various visual tasks. In this paper,
we propose to learn the rank concepts from the rich semantic CLIP latent space.
Specifically, we reformulate this task as an image-language matching problem
with a contrastive objective, which regards labels as text and obtains a
language prototype from a text encoder for each rank. Because prompt engineering
for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable
prompting method for adapting CLIP to ordinal regression. OrdinalCLIP consists
of learnable context tokens and learnable rank embeddings; the rank
embeddings are constructed by explicitly modeling numerical continuity,
resulting in well-ordered, compact language prototypes in the CLIP space. Once
trained, we need only save the language prototypes and can discard the large
language model, incurring zero additional computational overhead compared with
the linear-head counterpart. Experimental results show that our paradigm achieves
competitive performance in general ordinal regression tasks, and gains
improvements in few-shot and distribution shift settings for age estimation.
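
A minimal PyTorch sketch may help make the mechanism concrete. This is not the authors' released code: `RankPrompts`, `num_base`, and the `text_encoder` argument are hypothetical names, and the frozen CLIP-style text encoder is assumed to consume token embeddings directly (as in CoOp-style prompt tuning). Here the rank embeddings are built by linearly interpolating a small set of learnable base embeddings, one plausible way to model the numerical continuity described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankPrompts(nn.Module):
    """Learnable context tokens plus interpolated rank embeddings (sketch)."""

    def __init__(self, num_ranks, num_base, ctx_len, embed_dim, text_encoder):
        super().__init__()
        # Learnable context tokens, shared by every rank prompt.
        self.ctx = nn.Parameter(0.02 * torch.randn(ctx_len, embed_dim))
        # Small set of learnable base rank embeddings; the full set of
        # num_ranks embeddings is a linear interpolation of these bases,
        # which explicitly encodes continuity between neighboring ranks.
        self.base = nn.Parameter(0.02 * torch.randn(num_base, embed_dim))
        # Fixed interpolation weights: rank r mixes its two nearest bases.
        pos = torch.linspace(0, num_base - 1, num_ranks)
        lo = pos.floor().long().clamp(max=num_base - 2)
        frac = (pos - lo.float()).unsqueeze(1)
        w = torch.zeros(num_ranks, num_base)
        w.scatter_(1, lo.unsqueeze(1), 1.0 - frac)
        w.scatter_(1, (lo + 1).unsqueeze(1), frac)
        self.register_buffer("interp", w)
        self.text_encoder = text_encoder  # frozen, maps embeddings -> features

    def forward(self):
        rank_emb = self.interp @ self.base                    # (R, D)
        ctx = self.ctx.unsqueeze(0).expand(rank_emb.size(0), -1, -1)
        prompts = torch.cat([ctx, rank_emb.unsqueeze(1)], 1)  # (R, L+1, D)
        protos = self.text_encoder(prompts)                   # (R, feat_dim)
        return F.normalize(protos, dim=-1)                    # rank prototypes

def matching_loss(image_feats, prototypes, labels, tau=0.07):
    """Image-language matching as a softmax over the rank prototypes."""
    logits = F.normalize(image_feats, dim=-1) @ prototypes.t() / tau
    return F.cross_entropy(logits, labels)
```

After training, the output of `forward()` can be cached as a fixed classifier weight matrix and the text encoder dropped, which is why inference costs no more than a linear head.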

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=10
