Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability

Kavli Affiliate: Li Xin Li

| First 5 Authors: Deng Pan, Xiangrui Li, Xin Li, Dongxiao Zhu

| Summary:

Latent factor collaborative filtering (CF) has been a widely used technique
for recommender systems, learning semantic representations of users and
items. Recently, explainable recommendation has attracted much attention from
the research community. However, a trade-off exists between the explainability
and the performance of a recommendation, and metadata is often needed to
alleviate the dilemma. We present a novel feature mapping approach that maps
uninterpretable general features onto interpretable aspect features, achieving
both satisfactory accuracy and explainability in recommendations by
simultaneously minimizing a rating prediction loss and an interpretation loss.
To evaluate the explainability, we propose two new evaluation metrics
specifically designed for aspect-level explanation using surrogate ground
truth. Experimental results demonstrate strong performance in both
recommendation and explanation, eliminating the need for metadata.
Code is available from https://github.com/pd90506/AMCF.
Code is available from https://github.com/pd90506/AMCF.
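The joint objective described above can be sketched numerically. This is a minimal illustration, assuming a standard dot-product latent-factor rating model and a linear projection of the item embedding onto an aspect basis; the function names, the `aspect_basis` matrix, and the weighting parameter `alpha` are all hypothetical and not the paper's actual formulation (see the linked repository for the real implementation).

```python
import numpy as np

def joint_loss(user_vec, item_vec, aspect_basis, rating, aspect_pref, alpha=0.5):
    """Sketch of a joint objective: rating prediction loss plus an
    interpretation loss that ties the general item embedding to
    interpretable aspect features via a linear mapping."""
    # Rating prediction via dot product (standard latent-factor CF).
    pred = user_vec @ item_vec
    rating_loss = (pred - rating) ** 2

    # Project the general embedding onto the aspect space; the
    # projection coefficients serve as interpretable aspect weights.
    coeffs = aspect_basis @ item_vec  # shape: (n_aspects,)
    interp_loss = np.sum((coeffs - aspect_pref) ** 2)

    # Minimizing both terms together trades off accuracy and
    # explainability through the (illustrative) weight alpha.
    return rating_loss + alpha * interp_loss

# Toy example: a perfect fit on both terms gives zero loss.
user = np.array([1.0, 0.0])
item = np.array([0.5, 0.5])
basis = np.eye(2)                 # hypothetical aspect basis
loss = joint_loss(user, item, basis, rating=0.5,
                  aspect_pref=np.array([0.5, 0.5]))
```

In a real model both losses would be minimized by gradient descent over all users and items; the sketch only evaluates the combined objective for a single interaction.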

| Search Query: ArXiv Query: search_query=au:"Li Xin Li"&id_list=&start=0&max_results=10
