Kavli Affiliate: Xiang Zhang
| First 5 Authors: Tianxiang Zhao, Dongsheng Luo, Xiang Zhang, Suhang Wang,
| Summary:
Uncovering rationales behind predictions of graph neural networks (GNNs) has
received increasing attention over recent years. Instance-level GNN explanation
aims to discover critical input elements, like nodes or edges, that the target
GNN relies upon for making predictions. These identified sub-structures can
provide interpretations of the GNN's behavior. Though various algorithms have
been proposed, most of them formalize this task as searching for the minimal
subgraph that preserves the original prediction. However, an inductive bias is
deep-rooted in this framework: several distinct subgraphs can result in the same
or similar outputs as the original graph. Consequently, these methods risk
providing spurious explanations and failing to provide consistent ones. Applying
them to explain poorly performing GNNs would further amplify these
issues. To address this problem, we theoretically examine the predictions of
GNNs from a causal perspective. Two typical sources of spurious
explanations are identified: the confounding effect of latent variables such as
distribution shift, and causal factors distinct from the original input.
Observing that both confounding effects and diverse causal rationales are
encoded in internal representations, we propose a new explanation
framework with an auxiliary alignment loss, which is theoretically proven to
intrinsically optimize a more faithful explanation objective. Concretely, for
this alignment loss, several variants are explored: anchor-based
alignment, distributional alignment based on Gaussian mixture models, and
mutual-information-based alignment. A comprehensive study is conducted
both on the effectiveness of this new framework in terms of explanation
faithfulness/consistency and on the advantages of these variants.
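As a rough illustration of the auxiliary alignment idea summarized above, the sketch below adds an anchor-based alignment term to a standard prediction-preservation objective in a PyTorch-style setup. It is not the authors' implementation: the embedding accessor gnn.embed, the KL-based preservation term, and the weight lambda_align are assumptions made for illustration, and the usual sparsity/size constraint on the explanation subgraph is omitted.

    # Minimal sketch of an anchor-based alignment loss (illustrative, not the paper's code).
    import torch
    import torch.nn.functional as F

    def anchor_alignment_loss(gnn, full_graph, subgraph):
        # Align the internal representation of the candidate explanation subgraph
        # with that of the original (anchor) graph; gnn.embed is a hypothetical
        # hook returning an intermediate graph-level embedding.
        with torch.no_grad():
            anchor_emb = gnn.embed(full_graph)
        sub_emb = gnn.embed(subgraph)
        return F.mse_loss(sub_emb, anchor_emb)

    def explanation_objective(gnn, full_graph, subgraph, lambda_align=1.0):
        # Prediction-preservation term: keep the subgraph's output distribution
        # close to that of the original graph (KL divergence).
        pred_loss = F.kl_div(
            gnn(subgraph).log_softmax(dim=-1),
            gnn(full_graph).softmax(dim=-1),
            reduction="batchmean",
        )
        # Auxiliary alignment term encourages the explanation to share the same
        # internal rationale as the original graph, not just the same output.
        return pred_loss + lambda_align * anchor_alignment_loss(gnn, full_graph, subgraph)

In the distributional and mutual-information variants mentioned above, the mse_loss term would be replaced by a Gaussian-mixture-based likelihood or a mutual-information estimate, respectively.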
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3