Kavli Affiliate: Xiang Zhang
| First 5 Authors: Tianchun Wang, Dongsheng Luo, Wei Cheng, Haifeng Chen, Xiang Zhang
| Summary:
Graph Neural Networks (GNNs) have resurged as a trending research subject
owing to their impressive ability to capture representations from
graph-structured data.
However, the black-box nature of GNNs presents a significant challenge in terms
of comprehending and trusting these models, thereby limiting their practical
applications in mission-critical scenarios. Although there has been substantial
progress in the field of explaining GNNs in recent years, the majority of these
studies are centered on static graphs, leaving the explanation of dynamic GNNs
largely unexplored. Dynamic GNNs, with their ever-evolving graph structures,
pose a unique challenge and require additional effort to effectively capture
temporal dependencies and structural relationships. To address this challenge,
we present DyExplainer, a novel approach to explaining dynamic GNNs on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the
graph at each snapshot, while simultaneously exploring structural
relationships and temporal dependencies through a sparse attention technique,
as in the sketch below.
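(The abstract gives no implementation details; the following is a minimal
PyTorch sketch of how top-k sparse attention can double as an explanation
mask. The class name SparseAttention, the top-k pruning rule, and all
parameter choices are illustrative assumptions, not the authors' code.)

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseAttention(nn.Module):
        # Scores candidate neighbors/snapshots against a target embedding,
        # keeps only the top-k scores, and renormalizes; the resulting
        # sparse weights can be read off as an explanation mask.
        def __init__(self, dim, k=8):
            super().__init__()
            self.query = nn.Linear(dim, dim)
            self.key = nn.Linear(dim, dim)
            self.k = k

        def forward(self, h_target, h_candidates):
            # h_target: (dim,); h_candidates: (n, dim)
            q = self.query(h_target)
            keys = self.key(h_candidates)
            scores = keys @ q / (keys.shape[-1] ** 0.5)   # (n,)
            k = min(self.k, scores.shape[0])
            top = torch.topk(scores, k)
            masked = torch.full_like(scores, float("-inf"))
            masked[top.indices] = top.values
            weights = F.softmax(masked, dim=0)            # zero outside top-k
            return weights, weights @ h_candidates        # mask + context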
To preserve the desired properties of the explanation, such as structural
consistency and temporal continuity, we augment our approach with contrastive
learning techniques to provide prior-guided regularization.
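(Again a hedged sketch: the abstract only names the priors, not the loss.
One plausible InfoNCE-style rendering of the temporal-continuity term, with
adjacent snapshots taken as positive pairs and the function name, tau, and
mask layout all assumed rather than taken from the paper, is:)

    import torch
    import torch.nn.functional as F

    def temporal_continuity_loss(masks, tau=0.5):
        # masks: (T, d), one flattened explanation mask per snapshot.
        # Snapshot t's positive is snapshot t+1; all other snapshots act
        # as negatives, so masks are encouraged to drift slowly over time.
        z = F.normalize(masks, dim=-1)
        sim = z @ z.t() / tau                              # (T, T)
        T = sim.shape[0]
        sim = sim.masked_fill(torch.eye(T, dtype=torch.bool), float("-inf"))
        return F.cross_entropy(sim[:-1], torch.arange(1, T))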
To model longer-term temporal dependencies, we develop a buffer-based
live-updating scheme for training.
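(A minimal sketch of one way such a buffer could work, assuming a FIFO
window over recent snapshots and a gradient step per window; the
SnapshotBuffer class and its methods are hypothetical names, not the
paper's API.)

    from collections import deque
    import torch

    class SnapshotBuffer:
        # Fixed-size FIFO of recent graph snapshots; each live-update step
        # trains over the whole retained window rather than only the newest
        # snapshot, keeping longer-range temporal dependencies in scope.
        def __init__(self, capacity=32):
            self.snapshots = deque(maxlen=capacity)

        def step(self, model, optimizer, loss_fn, new_snapshot):
            self.snapshots.append(new_snapshot)   # oldest snapshot evicted
            optimizer.zero_grad()
            loss = torch.stack(
                [loss_fn(model, s) for s in self.snapshots]).mean()
            loss.backward()
            optimizer.step()
            return loss.item()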
The results of our extensive experiments on various datasets demonstrate the
superiority of DyExplainer: it not only provides faithful explanations of
model predictions but also significantly improves prediction accuracy, as
evidenced on the link prediction task.
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3