Kavli Affiliate: Jing Wang
| First 5 Authors: Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, Rajarshi Bhowmik
| Summary:
Language models contain ranking-based knowledge and are powerful solvers of
in-context ranking tasks. For instance, they may have parametric knowledge
about the ordering of countries by size or may be able to rank reviews by
sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting
techniques to elicit a language model’s ranking knowledge. However, we find
that even with careful calibration and constrained decoding, prompting-based
techniques may not always be self-consistent in the rankings they produce. This
motivates us to explore an alternative approach that is inspired by an
unsupervised probing method called Contrast-Consistent Search (CCS). The idea
is to train a probing model guided by a logical constraint: a model’s
representation of a statement and its negation must be mapped to contrastive
true-false poles consistently across multiple statements. We hypothesize that
similar constraints apply to ranking tasks where all items are related via
consistent pairwise or listwise comparisons. To this end, we extend the binary
CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking
methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression
objective. Our results confirm that, for the same language model, CCR probing
outperforms prompting and even performs on a par with prompting much larger
language models.
| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
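
As a rough illustration of the kind of objective the abstract names (a Max-Margin Loss applied to a linear probe over frozen language-model representations), the sketch below is a generic stand-in, not the paper's CCR implementation. The probe class, tensor shapes, and the assumption that items arrive already sorted by rank are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class LinearRankingProbe(nn.Module):
    """Minimal linear probe mapping a frozen hidden state to a scalar rank score."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (n_items, hidden_dim) representations of the item statements
        return self.scorer(h).squeeze(-1)  # (n_items,) scalar scores

def pairwise_max_margin_loss(scores: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Max-margin objective over adjacent pairs: each item (assumed to be ordered
    higher-ranked first) should score at least `margin` above the next item."""
    s_hi, s_lo = scores[:-1], scores[1:]
    return torch.clamp(margin - (s_hi - s_lo), min=0.0).mean()

# Toy usage: random tensors stand in for language-model hidden states.
hidden_dim, n_items = 768, 5
probe = LinearRankingProbe(hidden_dim)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

h = torch.randn(n_items, hidden_dim)  # placeholder representations, rank order assumed
for _ in range(100):
    optimizer.zero_grad()
    loss = pairwise_max_margin_loss(probe(h))
    loss.backward()
    optimizer.step()
```

The same probe could instead be trained with a Triplet Loss or an ordinal-regression objective, as the abstract lists; only the loss function over the scalar scores changes, not the probe itself.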