Kavli Affiliate: Xiang Zhang
| First 5 Authors: Shijie Zhou, Zhimeng Guo, Charu Aggarwal, Xiang Zhang, Suhang Wang
| Summary:
Link prediction is an important task that has wide applications in various
domains. The majority of existing link prediction approaches assume the given graph follows the homophily assumption and design similarity-based heuristics or representation learning approaches to predict links. However, many real-world graphs are heterophilic, meaning the homophily assumption does not hold, which challenges existing link prediction methods. Generally, in heterophilic graphs there are many latent factors driving link formation: two linked nodes tend to be similar in one or two factors but may be dissimilar in the others, leading to low overall similarity. Thus, one promising direction is to learn a disentangled representation for each node, with each component vector capturing the node's latent representation on one factor. This paves the way to modeling link formation in heterophilic graphs, resulting in better node representation learning and link prediction performance. However, work in this direction is rather limited. Therefore, in this paper, we study the novel problem of disentangled representation learning for link prediction on heterophilic graphs. We propose DisenLink, a novel framework that learns disentangled representations by modeling link formation and performs factor-aware message passing to facilitate link prediction (a toy sketch of the factor-aware scoring idea appears after this record). Extensive experiments on 13 real-world datasets demonstrate the effectiveness of DisenLink for link prediction on both heterophilic and homophilic graphs. Our code is available at https://github.com/sjz5202/DisenLink
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=10
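
The following is a minimal, illustrative PyTorch sketch of the disentangled, factor-aware link scoring idea summarized above; it is not the authors' DisenLink implementation (which lives in the linked repository) and omits the message-passing component. The class name FactorizedLinkScorer, the number of factors, and the dimensions are illustrative assumptions: each node gets one embedding per latent factor, and a candidate link is scored by a factor-importance-weighted sum of per-factor similarities, so a single strongly matching factor can explain a link even when overall similarity is low.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FactorizedLinkScorer(nn.Module):
        # Toy sketch (not the paper's method): K factor-specific embeddings
        # per node, with a softmax over per-factor similarities deciding
        # which factor most plausibly explains each candidate link.
        def __init__(self, in_dim, num_factors=4, factor_dim=16):
            super().__init__()
            # one projection per assumed latent factor (hypothetical choice)
            self.factor_proj = nn.ModuleList(
                nn.Linear(in_dim, factor_dim) for _ in range(num_factors)
            )

        def forward(self, x, edge_index):
            # x: [num_nodes, in_dim] raw node features
            # edge_index: [2, num_pairs] candidate node pairs to score
            src, dst = edge_index
            # disentangled embeddings z: [num_nodes, K, factor_dim]
            z = torch.stack(
                [F.normalize(proj(x), dim=-1) for proj in self.factor_proj],
                dim=1,
            )
            # per-factor cosine similarity for each pair: [num_pairs, K]
            sim = (z[src] * z[dst]).sum(dim=-1)
            # softmax over factors estimates each factor's importance
            weights = F.softmax(sim, dim=-1)
            # final link score: importance-weighted similarity, [num_pairs]
            return (weights * sim).sum(dim=-1)

    # usage on a random toy graph
    x = torch.randn(100, 32)                # 100 nodes, 32-dim features
    pairs = torch.randint(0, 100, (2, 50))  # 50 candidate links
    scores = FactorizedLinkScorer(in_dim=32)(x, pairs)
    print(scores.shape)                     # torch.Size([50])

The softmax weighting is what distinguishes this from a plain dot-product scorer: under homophily all factors contribute, while under heterophily the weight concentrates on the one or two factors in which the two nodes actually agree.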