Kavli Affiliate: Xiang Zhang
| First 5 Authors: Minhua Lin, Enyan Dai, Junjie Xu, Jinyuan Jia, Xiang Zhang
| Summary:
Graph Neural Networks (GNNs) have shown promising results in modeling graphs
across various tasks. Training GNNs, especially for specialized tasks such as
bioinformatics, demands extensive expert annotations, which are expensive and
often contain sensitive information about data providers. Trained GNN models
are frequently shared for real-world deployment. Because neural networks can
memorize training samples, the parameters of a trained GNN carry a high risk of
leaking private training data.
Our theoretical analysis reveals strong
connections between trained GNN parameters and the graphs used for training,
confirming the training-graph leakage issue. However, explorations of
training data leakage from trained GNNs remain rather limited. We therefore
investigate a novel problem: stealing graphs from trained GNNs. To obtain
high-quality graphs that resemble the target training set, a graph diffusion
model with diffusion noise optimization is deployed as a graph generator.
Furthermore, we propose a selection method that effectively leverages GNN model
parameters to identify training graphs among the samples produced by the graph
diffusion model. Extensive experiments on real-world datasets demonstrate the
effectiveness of the proposed framework in stealing training graphs from
trained GNNs.
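The summary does not specify how the selection method uses the GNN parameters. A minimal sketch of one plausible instantiation, assuming a membership-inference-style heuristic: rank generated candidate graphs by the target model's confidence and keep the top-scoring ones, on the intuition that memorized training graphs score highest. The toy one-layer readout (`gnn_confidence`), the `select_candidates` helper, and the random candidate graphs are all illustrative, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_confidence(adj: np.ndarray, feats: np.ndarray, w: np.ndarray) -> float:
    """Toy one-layer GNN: degree-normalized propagation, mean-pool readout,
    sigmoid score. Stands in for the target model's confidence on a graph."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0          # +1 for the self-loop
    h = (adj + np.eye(len(adj))) @ feats / deg          # propagate features
    logit = float(h.mean(axis=0) @ w)                   # graph-level readout
    return 1.0 / (1.0 + np.exp(-logit))

def select_candidates(graphs, w, k=2):
    """Score every generated graph with the target model and keep the top-k,
    treating high confidence as evidence of training-set membership."""
    scores = [gnn_confidence(a, x, w) for a, x in graphs]
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:k]], scores

def random_graph(n=5, d=3):
    """Hypothetical candidate from a graph generator: (adjacency, features)."""
    a = rng.integers(0, 2, (n, n))
    a = ((a + a.T) > 0).astype(float)                   # symmetric, unweighted
    np.fill_diagonal(a, 0.0)                            # no self-loops stored
    return a, rng.normal(size=(n, d))

graphs = [random_graph() for _ in range(6)]
w = rng.normal(size=3)                                  # stand-in readout weights

top, scores = select_candidates(graphs, w, k=2)
print(top, [round(s, 3) for s in scores])
```

In the paper's actual pipeline the candidates would come from the diffusion model and the scoring would use the real trained GNN; this sketch only illustrates the rank-and-select idea.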
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3