Kavli Affiliate: Zheng Zhu
| First 5 Authors: Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng
| Summary:
Domain Adaptation of Black-box Predictors (DABP) aims to learn a model on an
unlabeled target domain supervised by a black-box predictor trained on a source
domain. It requires access to neither the source-domain data nor the predictor
parameters, thus addressing the data privacy and portability issues
of standard domain adaptation. Existing DABP approaches mostly rely on model
distillation from the black-box predictor, i.e., training the model with
its noisy target-domain predictions, which, however, inevitably introduces
confirmation bias accumulated from the prediction noise. To mitigate such
bias, we propose a new method, named BETA, to incorporate knowledge
distillation and noisy label learning into one coherent framework. This is
enabled by a new divide-to-adapt strategy. BETA divides the target domain into
an easy-to-adapt subdomain with less noise and a hard-to-adapt subdomain. Then
it deploys mutually teaching twin networks to filter the predictor's errors for
each other and improve progressively, from the easy to the hard subdomain. As
such, BETA effectively purifies the noisy labels and reduces error
accumulation. We theoretically show that the target error of BETA is minimized
by decreasing the noise ratio of the subdomains. Extensive experiments
demonstrate that BETA outperforms existing methods on all DABP benchmarks and is
even comparable with the standard domain adaptation methods that use the
source-domain data.
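
The divide-to-adapt step described in the summary can be pictured with a small
sketch. The snippet below is a minimal, hypothetical illustration, not the
paper's implementation: following a DivideMix-style heuristic, it fits a
two-component Gaussian mixture to each target sample's loss against its
black-box pseudo-label and treats the low-loss component as the easy-to-adapt
subdomain. The function name, the threshold `tau`, and the loss-based split
criterion are all assumptions for illustration.

```python
# Hedged sketch of a divide-to-adapt split (assumed criterion, not BETA's exact one).
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_to_adapt(losses: np.ndarray, tau: float = 0.5):
    """Split target samples into easy-/hard-to-adapt subdomains by fitting a
    2-component GMM to per-sample losses against the black-box pseudo-labels."""
    gmm = GaussianMixture(n_components=2, reg_covar=1e-4, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    # The component with the lower mean loss is taken as the "clean" (easy) one.
    clean = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean]
    easy_idx = np.flatnonzero(p_clean >= tau)   # likely-clean pseudo-labels
    hard_idx = np.flatnonzero(p_clean < tau)    # likely-noisy pseudo-labels
    return easy_idx, hard_idx

# Toy usage: 700 low-loss (mostly clean) and 300 high-loss (mostly noisy) samples.
rng = np.random.default_rng(0)
losses = np.abs(np.concatenate([rng.normal(0.3, 0.1, 700),
                                rng.normal(2.0, 0.5, 300)]))
easy, hard = divide_to_adapt(losses)
print(f"easy: {easy.size}, hard: {hard.size}")
```

In the full framework, two such twin networks would each perform this split and
teach each other on the filtered subsets, progressively migrating samples from
the hard to the easy subdomain; the sketch covers only the division step.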
| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=10