DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Zhu, Kaipeng Zhang

| Summary:

Dataset distillation plays a crucial role in creating compact datasets whose
training performance is comparable to that of the original large-scale ones.
This is essential for reducing both data storage and training costs.
Prevalent methods facilitate knowledge transfer by matching the gradients,
embedding distributions, or training trajectories of synthetic images with
those of sampled original images. Although the matching objectives vary, the
strategy for selecting original images remains limited to naive random
sampling. We argue that random sampling overlooks the evenness of the selected
sample distribution, which may result in noisy or biased matching targets, and
that it places no constraint on sample diversity. Moreover, current methods
predominantly focus on single-dimensional matching, leaving the available
information underexploited.
To address these challenges, we propose a novel matching strategy called
Dataset Distillation by Bidirectional REpresentAtive Matching (DREAM+), which
selects representative original images for bidirectional matching. DREAM+ is
applicable to a variety of mainstream dataset distillation frameworks and
reduces the number of distillation iterations more than 15-fold without
affecting performance. Given sufficient training time, DREAM+ further improves
performance and achieves state-of-the-art results. The code is released at
github.com/NUS-HPC-AI-Lab/DREAM+.
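For context on the matching objectives mentioned in the summary, the sketch
below shows one common formulation, gradient matching: the task-loss gradients
produced by a synthetic batch are pulled toward those produced by a batch of
original images via a layer-wise cosine distance. This is a minimal
illustration assuming a PyTorch setup; the function name `gradient_match_loss`
and the cosine distance are our assumptions, not necessarily the paper's exact
objective.

```python
import torch
import torch.nn.functional as F

def gradient_match_loss(net, real_x, real_y, syn_x, syn_y):
    """Distance between parameter gradients on real vs. synthetic batches.

    Illustrative sketch: a layer-wise cosine distance, as used in common
    gradient-matching distillation frameworks.
    """
    # Gradients of the task loss w.r.t. network parameters on real data;
    # these serve as fixed matching targets.
    real_grads = torch.autograd.grad(
        F.cross_entropy(net(real_x), real_y), net.parameters())
    # Same for the synthetic batch, kept in the graph (create_graph=True)
    # so the matching loss can be backpropagated into the synthetic pixels.
    syn_grads = torch.autograd.grad(
        F.cross_entropy(net(syn_x), syn_y), net.parameters(),
        create_graph=True)
    loss = 0.0
    for g_r, g_s in zip(real_grads, syn_grads):
        # 1 - cosine similarity per parameter tensor, flattened.
        loss = loss + (1 - F.cosine_similarity(
            g_r.flatten(), g_s.flatten(), dim=0))
    return loss
```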
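The summary argues that representative, evenly distributed original images
make better matching targets than randomly sampled ones. As a reading aid,
here is a minimal sketch of one way such representatives could be selected:
a small k-means in feature space, keeping the sample nearest each cluster
center. The clustering choice, `select_representatives`, and its signature
are illustrative assumptions rather than the paper's stated procedure.

```python
import torch

def select_representatives(real_feats, n_select, n_iters=10):
    """Pick representative samples as nearest neighbors of k-means centers.

    real_feats: (N, D) features of one class's real images, e.g. from a
    pretrained encoder. Returns indices of the n_select chosen samples.
    """
    N = real_feats.size(0)
    # Initialize centers from randomly chosen samples.
    centers = real_feats[torch.randperm(N)[:n_select]].clone()
    for _ in range(n_iters):
        # Assign each sample to its nearest center.
        dists = torch.cdist(real_feats, centers)   # (N, n_select)
        assign = dists.argmin(dim=1)               # (N,)
        # Recompute each center as the mean of its assigned samples.
        for k in range(n_select):
            mask = assign == k
            if mask.any():
                centers[k] = real_feats[mask].mean(dim=0)
    # Return, for each center, the index of the closest real sample.
    return torch.cdist(centers, real_feats).argmin(dim=1)
```

Matching against such cluster representatives rather than a random batch
keeps the targets spread evenly over the class distribution, which is the
evenness and diversity property the summary says random sampling lacks.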

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3
