Kavli Affiliate: Zheng Zhu | First 5 Authors: Kai Wang, Jianyang Gu, Daquan Zhou, Zheng Zhu, Wei Jiang | Summary: Dataset distillation reduces network training cost by synthesizing small, informative datasets from large-scale ones. Despite the success of recent dataset distillation algorithms, three drawbacks still limit their wider application: i) the synthetic […]
Continue reading: DiM: Distilling Dataset into Generative Model
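For context, the sketch below illustrates the general idea behind dataset distillation that the summary refers to: a small set of learnable synthetic images is optimized so that a network sees similar statistics on synthetic and real data. This is a generic distribution-matching-style baseline under assumed toy data, not the paper's DiM method (which, per the title, distills into a generative model rather than fixed images).

```python
import torch
import torch.nn as nn

# Toy dataset-distillation sketch (distribution matching flavor).
# All shapes and hyperparameters here are illustrative assumptions,
# NOT taken from the DiM paper.

torch.manual_seed(0)
real_images = torch.randn(512, 1, 16, 16)   # stand-in for a large real dataset

# The "distilled" dataset: a few learnable synthetic images.
syn_images = torch.randn(16, 1, 16, 16, requires_grad=True)
opt = torch.optim.Adam([syn_images], lr=0.01)

def make_embedder() -> nn.Module:
    # Randomly initialized feature extractor, re-sampled each step so the
    # synthetic set matches real-data statistics across many random features.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

for step in range(200):
    embed = make_embedder()
    real_batch = real_images[torch.randint(0, len(real_images), (64,))]
    # Match mean feature embeddings of real and synthetic batches.
    loss = (embed(real_batch).mean(0) - embed(syn_images).mean(0)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A network trained on the optimized `syn_images` would then approximate training on the full dataset, which is the cost reduction the summary describes.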