Unconfounded Propensity Estimation for Unbiased Ranking

Kavli Affiliate: Dan Luo

| First 5 Authors: Dan Luo, Lixin Zou, Qingyao Ai, Zhiyu Chen, Chenliang Li

| Summary:

The goal of unbiased learning to rank (ULTR) is to leverage implicit user
feedback to optimize learning-to-rank systems. Among existing solutions,
automatic ULTR algorithms that jointly learn user bias models (i.e., propensity
models) and unbiased rankers have attracted considerable attention for their
superior performance and low deployment cost in practice. Despite their
theoretical soundness, however, their effectiveness is usually justified only
under a weak logging policy, i.e., one that can barely rank documents according
to their relevance to the query. When the logging policy is strong, e.g., an
industry-deployed ranking policy, the reported effectiveness cannot be
reproduced. In this paper, we first investigate ULTR from a causal perspective
and uncover a negative result: existing ULTR algorithms fail to address
propensity overestimation caused by the query-document relevance confounder.
We then propose a new learning objective based on backdoor adjustment and
highlight its differences from conventional propensity models, differences
that reveal how prevalent propensity overestimation is. On top of that, we
introduce a novel propensity model, the Logging-Policy-aware Propensity (LPP)
model, together with a distinctive two-step optimization strategy that enables
the joint learning of the LPP and ranking models within the automatic ULTR
framework and realizes unconfounded propensity estimation for ULTR. Extensive
experiments on two benchmarks demonstrate the effectiveness and
generalizability of the proposed method.
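
The abstract invokes backdoor adjustment without stating a formula. As a rough
sketch of the idea (the notation here is assumed for illustration and is not
taken from the paper), let K denote the examined position, R the query-document
relevance confounder, and C the click indicator. A conventional propensity model
estimates a conditional click rate, whereas backdoor adjustment blocks the path
through R:

```latex
% Sketch only: notation assumed, not taken from the paper.
% K: examined position, R: query-document relevance (confounder), C: click.
\begin{align}
  \text{conventional propensity:} \quad & P(C = 1 \mid K = k) \\
  \text{backdoor-adjusted:} \quad & P\bigl(C = 1 \mid \mathrm{do}(K = k)\bigr)
      = \sum_{r} P(C = 1 \mid K = k, R = r)\, P(R = r)
\end{align}
```

Under a strong logging policy, top positions are disproportionately occupied by
relevant documents, so the conditional estimate absorbs relevance and
overestimates the examination propensity; the adjusted quantity instead averages
over the marginal relevance distribution.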
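
To see why overestimation matters downstream, here is a minimal, hypothetical
sketch (NumPy; the function and array names are invented, not the paper's
implementation) of the standard inverse-propensity-scored loss used in ULTR:

```python
import numpy as np

def ips_loss(clicks, propensities, pointwise_losses):
    """Inverse-propensity-scored (IPS) risk: each clicked document is
    reweighted by 1 / examination propensity to correct position bias."""
    weights = clicks / np.clip(propensities, 1e-6, 1.0)  # avoid divide-by-zero
    return float(np.sum(weights * pointwise_losses))

# Toy illustration: if a strong logging policy inflates the estimated
# propensity at top ranks (confounded by relevance), the 1/propensity
# weights shrink and the bias correction is weakened.
clicks = np.array([1.0, 0.0, 1.0])
losses = np.array([0.4, 0.9, 0.7])
print(ips_loss(clicks, np.array([0.9, 0.5, 0.3]), losses))  # overestimated
print(ips_loss(clicks, np.array([0.6, 0.5, 0.2]), losses))  # closer to true
```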

| Search Query: ArXiv Query: search_query=au:"Dan Luo"&id_list=&start=0&max_results=3
