FIXED: Frustratingly Easy Domain Generalization with Mixup

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang

| Summary:

Domain generalization (DG) aims to learn a generalizable model from multiple
training domains such that it can perform well on unseen target domains. A
popular strategy is to augment training data to benefit generalization through
methods such as Mixup (Zhang et al., 2018). While vanilla Mixup can be
directly applied, theoretical and empirical investigations uncover several
shortcomings that limit its performance. First, Mixup cannot effectively
identify the domain and class information needed for learning invariant
representations. Second, Mixup may introduce noisy synthetic data points
through random interpolation, which weakens its discrimination capability.
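
For reference, vanilla Mixup draws a coefficient from a Beta distribution and interpolates randomly paired samples together with their one-hot labels. A minimal PyTorch sketch (illustrative names, not taken from the FIXED codebase):

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=0.2):
    """Mix a batch: inputs and one-hot labels are linearly interpolated."""
    lam = float(np.random.beta(alpha, alpha))  # mixing coefficient ~ Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))           # random pairing, blind to domain and class
    y_onehot = F.one_hot(y, num_classes).float()
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

Note that the pairing is uniformly random: nothing steers it toward domain or class structure, which is exactly the first shortcoming above.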
Based on this analysis, we propose a simple yet effective enhancement for
Mixup-based DG, namely domain-invariant Feature mIXup (FIX), which learns
domain-invariant representations for Mixup.
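
This summary does not detail how FIX obtains its invariant features; the following is a hedged sketch of the general idea only, assuming feature-level Mixup on top of a domain-adversarial encoder, with all names illustrative rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer as used in domain-adversarial training (DANN)."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # reversed gradient pushes the encoder to fool the domain head

def fix_style_step(encoder, classifier, domain_head, x, y_onehot, d, lam):
    z = encoder(x)                          # mix learned features, not raw inputs
    perm = torch.randperm(z.size(0))
    z_mix = lam * z + (1 - lam) * z[perm]   # Mixup in feature space
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    cls_loss = -(y_mix * classifier(z_mix).log_softmax(dim=1)).sum(dim=1).mean()
    dom_loss = F.cross_entropy(domain_head(GradReverse.apply(z)), d)
    return cls_loss + dom_loss              # invariance term + mixed classification term
```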
To further enhance discrimination, we leverage existing techniques to
enlarge the margins among classes, yielding the domain-invariant Feature
MIXup with Enhanced Discrimination (FIXED) approach (see the margin-loss
sketch after this summary). We present theoretical insights and guarantees
on its effectiveness. Extensive experiments on seven public datasets across two
modalities including image classification (Digits-DG, PACS, Office-Home) and
time series (DSADS, PAMAP2, UCI-HAR, and USC-HAD) demonstrate that our approach
significantly outperforms nine state-of-the-art related methods, beating the
best-performing baseline by 6.5% test accuracy on average. Code is
available at:
https://github.com/jindongwang/transferlearning/tree/master/code/deep/fixed.
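
The summary does not name the margin-enlarging technique FIXED leverages. As one illustrative stand-in, additive-margin softmax (Wang et al., 2018) widens inter-class margins by penalizing the true-class cosine logit; a minimal sketch, not necessarily what FIXED uses:

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(features, class_weight, y, margin=0.35, scale=30.0):
    """Additive-margin softmax: subtract a margin from the true-class cosine
    logit, enlarging the decision margins between classes.
    class_weight is a learnable (num_classes, feat_dim) matrix."""
    cos = F.normalize(features, dim=1) @ F.normalize(class_weight, dim=1).t()
    onehot = F.one_hot(y, class_weight.size(0)).float()
    logits = scale * (cos - margin * onehot)  # margin applied only to the true class
    return F.cross_entropy(logits, y)
```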

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3
