Kavli Affiliate: Jing Wang
| First 5 Authors: Zhiqiang Kou, Haoyuan Xuan, Jing Wang, Yuheng Jia, Xin Geng
| Summary:
Label Distribution Learning (LDL) is a novel machine learning paradigm that
addresses the problem of label ambiguity and has found widespread applications.
Obtaining complete label distributions in real-world scenarios is challenging,
which has led to the emergence of Incomplete Label Distribution Learning
(InLDL). However, the existing InLDL methods overlook a crucial aspect of LDL
data: the inherent imbalance in label distributions. To address this
limitation, we propose Incomplete and Imbalanced Label Distribution
Learning (I²LDL), a framework that simultaneously handles incomplete
labels and imbalanced label distributions. Our method decomposes the label
distribution matrix into a low-rank component for frequent labels and a sparse
component for rare labels, effectively capturing the structure of both head and
tail labels. We optimize the model using the Alternating Direction Method of
Multipliers (ADMM) and derive generalization error bounds via Rademacher
complexity, providing strong theoretical guarantees. Extensive experiments on
15 real-world datasets demonstrate the effectiveness and robustness of our
proposed framework compared to existing InLDL methods.
| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
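
The abstract describes recovering the label distribution matrix as a low-rank component (frequent/head labels) plus a sparse component (rare/tail labels) when some entries are missing, with the optimization carried out by ADMM. Below is a minimal, self-contained numpy sketch of that general low-rank-plus-sparse idea; it is not the paper's I²LDL algorithm, and the regularization weight, penalty parameter, iteration count, masking strategy, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' I^2LDL implementation): split a partially
# observed label-distribution matrix D into a low-rank part L (head labels)
# plus a sparse part S (tail labels) with ADMM-style proximal updates.
# All parameter values below are illustrative assumptions.
import numpy as np

def svd_threshold(X, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    """Elementwise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def decompose_incomplete(D, mask, lam=0.05, rho=1.0, n_iters=200):
    """Fit L (low-rank) + S (sparse) to D on the observed entries
    (mask = 1 observed, 0 missing), using a fill-in heuristic for the
    missing entries and scaled-dual ADMM updates."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable
    for _ in range(n_iters):
        # Common heuristic: fill unobserved entries with the current
        # estimate so the constraint is only enforced where D is known.
        D_hat = mask * D + (1.0 - mask) * (L + S)
        # L-update: nuclear-norm proximal step (captures head-label structure)
        L = svd_threshold(D_hat - S + Y / rho, 1.0 / rho)
        # S-update: l1 proximal step (keeps the tail-label component sparse)
        S = soft_threshold(D_hat - L + Y / rho, lam / rho)
        # Dual ascent on the residual of L + S = D_hat
        Y = Y + rho * (D_hat - L - S)
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, c = 100, 10                                           # samples x labels
    head = rng.random((n, 2)) @ rng.random((2, c))           # low-rank head part
    tail = (rng.random((n, c)) < 0.05) * rng.random((n, c))  # sparse tail part
    D = head + tail
    D = D / D.sum(axis=1, keepdims=True)   # rows sum to 1, like label distributions
    mask = (rng.random((n, c)) < 0.7).astype(float)          # ~70% entries observed
    L, S = decompose_incomplete(D, mask)
    err = np.linalg.norm(mask * (D - L - S)) / np.linalg.norm(mask * D)
    print(f"relative reconstruction error on observed entries: {err:.3f}")
```

The singular-value and soft-thresholding operators are the standard proximal maps for the nuclear norm and the l1 norm, which is why this kind of splitting pairs naturally with ADMM; the paper's actual objective, constraints, and generalization analysis are given in the full text.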