Accelerating Dataset Distillation via Model Augmentation

Lei Zhang · Jie Zhang · Bowen Lei · Subhabrata Mukherjee · Xiang Pan · Bo Zhao · Caiwen Ding · Yao Li · Dongkuan Xu

West Building Exhibit Halls ABC 355
Highlight
Wed 21 Jun 10:30 a.m. PDT — noon PDT


Dataset Distillation (DD), a newly emerging field, aims to generate much smaller yet effective synthetic training datasets from large ones. Existing DD methods based on gradient matching achieve leading performance; however, they are extremely computationally intensive because they require continuously optimizing a dataset across thousands of randomly initialized models. In this paper, we assume that training the synthetic data with diverse models leads to better generalization performance. Thus we propose two model augmentation techniques, i.e., using early-stage models and parameter perturbation, to learn an informative synthetic set with significantly reduced training cost. Extensive experiments demonstrate that our method achieves up to 20× speedup with performance on par with state-of-the-art methods.
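
The abstract names two model augmentation techniques: sampling early-stage (lightly trained) models rather than thousands of fresh random initializations, and perturbing model parameters to increase diversity. The sketch below is a minimal PyTorch illustration of how such augmentation might look; the helper names (`sample_early_stage_model`, `perturb_parameters`, `build_augmented_model`), the additive Gaussian noise, and the scale `sigma` are illustrative assumptions, not the authors' implementation.

```python
import copy
import random
import torch
import torch.nn as nn


def sample_early_stage_model(checkpoints):
    """Pick a lightly trained checkpoint instead of a fresh random init.

    `checkpoints` is assumed to be a list of state_dicts saved during the
    first few epochs of training on the real data (an assumption; the paper
    defines its own notion of "early-stage" models).
    """
    return random.choice(checkpoints)


def perturb_parameters(model: nn.Module, sigma: float = 0.01) -> nn.Module:
    """Return a copy of `model` with Gaussian noise added to every parameter.

    The additive form and the scale `sigma` are illustrative choices,
    not taken from the paper.
    """
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for p in perturbed.parameters():
            p.add_(sigma * torch.randn_like(p))
    return perturbed


def build_augmented_model(base_model: nn.Module, checkpoints, sigma: float = 0.01):
    """Combine both augmentations: load an early-stage checkpoint, then perturb it.

    The resulting model could serve as one member of the diverse model pool
    used for gradient matching, avoiding full training from scratch.
    """
    model = copy.deepcopy(base_model)
    model.load_state_dict(sample_early_stage_model(checkpoints))
    return perturb_parameters(model, sigma)
```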
