Kavli Affiliate: Jia Liu
| First 5 Authors: Jia Liu, Changlin Li, Qirui Sun, Jiahui Ming, Chen Fang
| Summary:
Fine-tuning advanced diffusion models for high-quality image stylization
usually requires large training datasets and substantial computational
resources, hindering their practical applicability. We propose Ada-Adapter, a
novel framework for few-shot style personalization of diffusion models.
Ada-Adapter leverages off-the-shelf diffusion models and pre-trained image
feature encoders to learn a compact style representation from a limited set of
source images. Our method enables efficient zero-shot style transfer using
a single reference image. Furthermore, with a small number of source images
(three to five are sufficient) and a few minutes of fine-tuning, our method can
capture intricate style details and conceptual characteristics, generating
high-fidelity stylized images that align well with the provided text prompts.
We demonstrate the effectiveness of our approach on various artistic styles,
including flat art, 3D rendering, and logo design. Our experimental results
show that Ada-Adapter outperforms existing zero-shot and few-shot stylization
methods in terms of output quality, diversity, and training efficiency.
| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3
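
For context, the recipe the summary describes (an off-the-shelf diffusion model conditioned through a pre-trained image feature encoder on a single style reference) can be approximated with publicly available components. The sketch below is not Ada-Adapter: it stands in with the IP-Adapter weights and the Hugging Face diffusers API, and the model identifiers, weight file name, and the local `style_reference.png` path are assumptions chosen for illustration.

```python
# Minimal sketch: single-reference, "zero-shot" style conditioning using an
# off-the-shelf diffusion model plus a pre-trained image encoder adapter.
# NOT the Ada-Adapter implementation; checkpoints below are assumptions.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Any Stable Diffusion 1.5 checkpoint works here; substitute one you have access to.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)

# Attach a pre-trained image-prompt adapter so a single reference image can
# condition generation through its image-encoder features.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)  # strength of the reference-style conditioning

style_reference = load_image("style_reference.png")  # hypothetical local file

image = pipe(
    prompt="a cat reading a book, flat art style",
    ip_adapter_image=style_reference,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("stylized_output.png")
```

Lowering the adapter scale favors the text prompt over the reference style, while raising it does the reverse; the few-shot fine-tuning described in the summary is what lets the paper's method push fidelity further than this kind of single-image conditioning alone.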