DREAM: Efficient Dataset Distillation by Representative Matching

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Zhu, Wei Jiang

| Summary:

Dataset distillation aims to synthesize small datasets that preserve most of
the information in the original large-scale ones, reducing storage and
training costs. Recent state-of-the-art methods mainly constrain the sample
synthesis process by matching the synthetic images to the original ones with
respect to gradients, embedding distributions, or training trajectories.
Despite this variety of matching objectives, the strategy for selecting
original images remains limited to naive random sampling.
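
For context, here is a minimal sketch of one such matching objective
(gradient matching), assuming a PyTorch setup; the toy linear model, batch
sizes, and layer-wise cosine distance are illustrative assumptions, not
details specified in this summary.

```python
# Hedged sketch of gradient matching: the distance between the parameter
# gradients induced by a real batch and a synthetic batch. Minimizing it
# steers the synthetic images so training on them mimics real data.
import torch
import torch.nn as nn

def gradient_match_loss(model, real_x, real_y, syn_x, syn_y):
    criterion = nn.CrossEntropyLoss()
    g_real = torch.autograd.grad(criterion(model(real_x), real_y),
                                 model.parameters())
    g_syn = torch.autograd.grad(criterion(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    loss = 0.0
    for gr, gs in zip(g_real, g_syn):
        gr, gs = gr.flatten(), gs.flatten()
        # Accumulate 1 - cosine similarity per parameter tensor.
        loss = loss + 1.0 - torch.dot(gr, gs) / (gr.norm() * gs.norm() + 1e-8)
    return loss

# Toy usage: the synthetic images are the learnable variables.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
real_x, real_y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(10)
gradient_match_loss(model, real_x, real_y, syn_x, syn_y).backward()
```

In such frameworks, the real batch (real_x, real_y) is what has so far been
drawn by random sampling; this is the selection step DREAM targets.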
We argue that random sampling overlooks the evenness of the selected sample
distribution, which may result in noisy or biased matching targets. Moreover,
random sampling places no constraint on sample diversity. Together, these
factors cause optimization instability during distillation and degrade
training efficiency. Accordingly, we propose a novel matching strategy named
Dataset distillation by REpresentAtive Matching (DREAM), in which only
representative original images are selected for matching. DREAM can be easily
plugged into popular dataset distillation frameworks and reduces the
distillation iterations by more than 8 times without any performance drop.
Given sufficient training time, DREAM further provides significant
improvements and achieves state-of-the-art performance.
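
As a hedged illustration of representative selection, the sketch below picks
the images closest to K-means cluster centers in an embedding space, one
plausible reading of "representative"; the clustering method, feature
dimension, and cluster count are assumptions, not details given in this
summary.

```python
# Hypothetical sketch: select the real sample nearest each K-means center,
# yielding an evenly spread, diverse subset instead of a random one.
import numpy as np
from sklearn.cluster import KMeans

def select_representative_indices(features, n_select, seed=0):
    """Indices of the n_select samples whose embeddings lie closest to the
    centers of n_select K-means clusters."""
    kmeans = KMeans(n_clusters=n_select, n_init=10, random_state=seed)
    labels = kmeans.fit_predict(features)
    selected = []
    for c in range(n_select):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c],
                               axis=1)
        selected.append(members[np.argmin(dists)])  # nearest real sample
    return np.asarray(selected)

# Toy usage: embed one class's images with any feature extractor, then select.
rng = np.random.default_rng(0)
class_features = rng.normal(size=(500, 128))  # stand-in for real embeddings
print(select_representative_indices(class_features, n_select=16))
```

A subset chosen this way would then replace the randomly drawn real batches
in whichever matching objective the host distillation framework uses.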

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3
