AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, Xiang Zhang

| Summary:

Contrastive learning has been widely applied to graph representation
learning, where view generators play a vital role in producing effective
contrastive samples. Most existing contrastive learning methods employ
pre-defined view generation methods, e.g., node drop or edge perturbation,
which usually cannot adapt to input data or preserve the original semantic
structures well. To address this issue, we propose a novel framework named
Automated Graph Contrastive Learning (AutoGCL) in this paper. Specifically,
AutoGCL employs a set of learnable graph view generators orchestrated by an
auto augmentation strategy, where each graph view generator learns a
probability distribution over graphs conditioned on the input. While the graph
view generators in AutoGCL preserve the most representative structures of the
original graph when generating each contrastive sample, the auto augmentation
strategy learns policies that introduce adequate augmentation variance across
the whole contrastive learning procedure. Furthermore, AutoGCL adopts a joint
training strategy to train the learnable view generators, the graph encoder,
and the classifier in an end-to-end manner, yielding contrastive samples that
are topologically heterogeneous yet semantically similar. Extensive
experiments on semi-supervised learning, unsupervised learning, and transfer
learning demonstrate the superiority of our AutoGCL framework over the state
of the art in graph contrastive learning. In addition, visualization results
further confirm that the learnable view generators deliver more compact and
semantically meaningful contrastive samples than existing view generation
methods.
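
The abstract does not spell out the generators' internals, but one common way to realize a learnable view generator of this kind is a small GNN that predicts, per node, a distribution over augmentation actions and samples from it differentiably via Gumbel-Softmax, so the generator can be trained jointly with the encoder. The sketch below is a minimal illustration in plain PyTorch, not the authors' code; the three-way action set (keep / drop / mask), the network sizes, and the dense-adjacency representation are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the AutoGCL reference implementation):
# a one-layer message-passing network scores each node with logits over
# {keep, drop, mask}; Gumbel-Softmax yields a differentiable one-hot sample.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableViewGenerator(nn.Module):
    """Predicts per-node augmentation actions for one contrastive view."""

    ACTIONS = 3  # 0 = keep, 1 = drop, 2 = mask (hypothetical action set)

    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        # One round of mean-aggregation message passing (stand-in for a GIN layer).
        self.msg = nn.Linear(in_dim, hid_dim)
        self.head = nn.Linear(hid_dim, self.ACTIONS)

    def forward(self, x: torch.Tensor, adj: torch.Tensor, tau: float = 1.0):
        # x: (N, in_dim) node features; adj: (N, N) dense adjacency with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.msg(adj @ x / deg))        # aggregate neighbour features
        logits = self.head(h)                      # (N, ACTIONS) action scores
        # Differentiable one-hot action sample per node (straight-through).
        actions = F.gumbel_softmax(logits, tau=tau, hard=True)
        keep = actions[:, 0:1] + actions[:, 2:3]   # kept or masked nodes survive
        mask = actions[:, 2:3]
        x_view = x * keep * (1.0 - mask)           # masked nodes get zeroed features
        # Remove all edges incident to dropped nodes.
        alive = keep.squeeze(1)
        adj_view = adj * alive.unsqueeze(0) * alive.unsqueeze(1)
        return x_view, adj_view

# Usage: two independent generators yield two views of the same graph; both
# views would then be fed to a shared encoder for a contrastive (e.g. NT-Xent)
# loss, with generator and encoder parameters updated jointly end-to-end.
if __name__ == "__main__":
    N, D = 8, 16
    x = torch.randn(N, D)
    adj = (torch.rand(N, N) > 0.7).float()
    adj = (((adj + adj.T) > 0).float() + torch.eye(N)).clamp(max=1.0)
    gen1, gen2 = LearnableViewGenerator(D), LearnableViewGenerator(D)
    x1, a1 = gen1(x, adj)
    x2, a2 = gen2(x, adj)
    print(x1.shape, a1.shape, x2.shape, a2.shape)
```

Because the action sample is conditioned on the input graph through the message-passing layer, the generator can learn to spare semantically important nodes, which matches the abstract's claim that the views preserve the most representative structures while still injecting augmentation variance.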

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=10
