Kavli Affiliate: Feng Wang
| First 5 Authors: Ruoqi Wang, Haitao Wang, Qiong Luo, Feng Wang, Hejun Wu
| Summary:
Radio telescopes produce visibility data about celestial objects, but these
data are sparse and noisy. As a result, images created from raw visibility data
are of low quality. Recent studies have used deep learning models to
reconstruct visibility data to get cleaner images. However, these methods rely
on a substantial amount of labeled training data, which requires significant
labeling effort from radio astronomers. Addressing this challenge, we propose
VisRec, a model-agnostic semi-supervised learning approach to the
reconstruction of visibility data. Specifically, VisRec consists of both a
supervised learning module and an unsupervised learning module. In the
supervised learning module, we introduce a set of data augmentation functions
to produce diverse training examples. In contrast, the unsupervised learning
module in VisRec augments unlabeled data and uses reconstructions from
non-augmented visibility data as pseudo-labels for training. This hybrid
approach allows VisRec to leverage both labeled and unlabeled data effectively,
so it performs well even when labeled data is scarce. Our evaluation
results show that VisRec outperforms all baseline methods in reconstruction
quality, robustness against common observation perturbations, and
generalizability to different telescope configurations.
| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3
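
The hybrid objective described in the summary can be sketched as a single training step that combines a supervised loss on augmented labeled visibilities with a consistency loss on unlabeled ones, where the model's own reconstruction of the non-augmented input serves as the pseudo-label. The sketch below is not the authors' code: the model, the `augment` function, the loss weighting, and the assumption that augmentations (e.g., noise injection) leave the reconstruction target unchanged are all illustrative choices.

```python
# Minimal sketch of a VisRec-style semi-supervised step, assuming a
# PyTorch reconstruction model and input-side augmentations that do not
# change the reconstruction target (an assumption for illustration).
import torch
import torch.nn.functional as F


def visrec_style_step(model, optimizer, labeled_batch, unlabeled_batch,
                      augment, unsup_weight=1.0):
    """One training step combining supervised and pseudo-label losses.

    labeled_batch:   (noisy_vis, clean_vis) tensors
    unlabeled_batch: noisy_vis tensor without ground truth
    augment:         callable applying a randomly chosen augmentation
    """
    noisy_vis, clean_vis = labeled_batch

    # Supervised module: train on augmented labeled examples.
    sup_pred = model(augment(noisy_vis))
    sup_loss = F.mse_loss(sup_pred, clean_vis)

    # Unsupervised module: the reconstruction of the non-augmented input
    # acts as a pseudo-label for the augmented version of the same input.
    with torch.no_grad():
        pseudo_label = model(unlabeled_batch)
    unsup_pred = model(augment(unlabeled_batch))
    unsup_loss = F.mse_loss(unsup_pred, pseudo_label)

    loss = sup_loss + unsup_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage with random data and a noise-injection augmentation
    # (purely illustrative shapes; real visibilities are complex-valued).
    model = torch.nn.Sequential(torch.nn.Linear(128, 128))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    augment = lambda v: v + 0.05 * torch.randn_like(v)
    labeled = (torch.randn(8, 128), torch.randn(8, 128))
    unlabeled = torch.randn(8, 128)
    print(visrec_style_step(model, opt, labeled, unlabeled, augment))
```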