DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data

Kavli Affiliate: Jiansheng Chen

| First 5 Authors: Jingyuan Zhu, Huimin Ma, Jiansheng Chen, Jian Yuan,

| Summary:

Denoising diffusion probabilistic models (DDPMs) have been proven capable of
synthesizing high-quality images with remarkable diversity when trained on
large amounts of data. However, typical diffusion models and modern large-scale
conditional generative models, such as text-to-image models, are vulnerable to
overfitting when fine-tuned on extremely limited data. Existing works have
explored subject-driven generation using a reference set containing a few
images, but few prior works explore DDPM-based domain-driven generation, which
aims to learn the common features of target domains while maintaining
diversity. This paper proposes a novel DomainStudio approach to adapt DDPMs
pre-trained on large-scale source datasets to target domains using limited
data. It is designed to preserve the diversity of subjects provided by source
domains and to produce high-quality, diverse adapted samples in target domains.
We propose to preserve the relative distances between adapted samples to
achieve considerable generation diversity, and we further enhance the learning
of high-frequency details for better generation quality. Our approach is
compatible with both unconditional and conditional diffusion models. This work
makes the first attempt to realize unconditional few-shot image generation with
diffusion models, achieving better quality and greater diversity than current
state-of-the-art GAN-based approaches. Moreover, it also significantly
alleviates overfitting in conditional generation and realizes high-quality
domain-driven generation, further expanding the applicable scenarios of modern
large-scale text-to-image models.
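The summary mentions preserving the relative distances between adapted samples to keep generation diverse. As a rough illustrative sketch only (not the paper's exact formulation), the code below shows one common way such a pairwise-similarity objective can be written in PyTorch: the cosine similarities among a batch of samples from the adapted model are pushed, via a KL term over softmax-normalized similarity vectors, to match those of the frozen source model. The function name `pairwise_similarity_loss` and the feature inputs are hypothetical.

```python
# Illustrative sketch (not the paper's exact loss): preserve relative pairwise
# distances between samples from the adapted model, using the frozen source
# model as a reference, to encourage diversity during few-shot adaptation.
import torch
import torch.nn.functional as F


def pairwise_similarity_loss(adapted_feats: torch.Tensor,
                             source_feats: torch.Tensor) -> torch.Tensor:
    """adapted_feats, source_feats: (N, D) features obtained from the same
    noise inputs through the adapted and the frozen source model."""
    n = adapted_feats.size(0)
    # Cosine similarity between every pair of samples in the batch.
    sim_adapted = F.cosine_similarity(adapted_feats.unsqueeze(1),
                                      adapted_feats.unsqueeze(0), dim=-1)
    sim_source = F.cosine_similarity(source_feats.unsqueeze(1),
                                     source_feats.unsqueeze(0), dim=-1)
    # Mask out self-similarities so each row only compares against other samples.
    mask = ~torch.eye(n, dtype=torch.bool, device=adapted_feats.device)
    sim_adapted = sim_adapted[mask].view(n, n - 1)
    sim_source = sim_source[mask].view(n, n - 1)
    # Match the softmax-normalized similarity distributions (KL divergence),
    # so the source model's relative distances survive adaptation.
    log_p = F.log_softmax(sim_adapted, dim=-1)
    q = F.softmax(sim_source, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")


if __name__ == "__main__":
    # Toy usage with random features standing in for model outputs.
    adapted = torch.randn(8, 128, requires_grad=True)
    source = torch.randn(8, 128)
    loss = pairwise_similarity_loss(adapted, source)
    loss.backward()
    print(float(loss))
```

In practice a loss of this kind would be added to the standard diffusion training objective during fine-tuning; the paper's high-frequency detail enhancement is a separate component not sketched here.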

| Search Query: ArXiv Query: search_query=au:"Jiansheng Chen"&id_list=&start=0&max_results=3
