Isolated Diffusion: Optimizing Multi-Concept Text-to-Image Generation Training-Freely with Isolated Diffusion Guidance

Kavli Affiliate: Jiansheng Chen

| First 5 Authors: Jingyuan Zhu, Huimin Ma, Jiansheng Chen, Jian Yuan

| Summary:

Large-scale text-to-image diffusion models have achieved great success in
synthesizing high-quality, diverse images from target text prompts. Despite
this revolutionary generation ability, current state-of-the-art models still
struggle to handle multi-concept generation accurately in many cases. This
phenomenon, known as "concept bleeding," manifests as the unexpected
overlapping or merging of various concepts. This paper presents a general
approach for text-to-image diffusion models to address the mutual interference
between different subjects and their attachments in complex scenes, pursuing
better text-image consistency. The core idea is to isolate the synthesis
processes of different concepts. We propose binding each attachment to its
corresponding subject separately with split text prompts. In addition, we
introduce a revision method to fix the concept bleeding problem in
multi-subject synthesis: we first rely on pre-trained object detection and
segmentation models to obtain the layouts of subjects, then isolate and
resynthesize each subject individually with its corresponding text prompt to
avoid mutual interference. Overall, we achieve a training-free strategy, named
Isolated Diffusion, to optimize multi-concept text-to-image synthesis. It is
compatible with the latest Stable Diffusion XL (SDXL) and earlier Stable
Diffusion (SD) models. We compare our approach with alternative methods on a
variety of multi-concept text prompts and demonstrate its effectiveness, with
clear advantages in text-image consistency and a user study.
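To make the two-stage, training-free strategy concrete, the sketch below illustrates one plausible realization: a full scene is first synthesized from the complete prompt, and each subject is then resynthesized within its own region using a split, per-subject prompt. It assumes the Hugging Face diffusers library for SDXL generation and approximates the per-subject resynthesis with an off-the-shelf SDXL inpainting pipeline; the `detect_subject_masks` helper, the example prompts, and the `strength` setting are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the two-stage pipeline described in the summary, assuming
# the Hugging Face `diffusers` library. The subject-detection step is stubbed
# with a hypothetical `detect_subject_masks` helper; the paper relies on
# pre-trained detection and segmentation models for this stage.
import torch
from diffusers import StableDiffusionXLPipeline, AutoPipelineForInpainting

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Stage 1: synthesize the full multi-concept scene from the complete prompt.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=dtype
).to(device)

full_prompt = "a red cat wearing sunglasses next to a blue dog wearing a scarf"
image = base(prompt=full_prompt, num_inference_steps=30).images[0]

# Hypothetical helper: return one binary PIL mask per subject word, e.g. from
# an open-vocabulary detector followed by a segmentation model.
def detect_subject_masks(image, subject_words):
    raise NotImplementedError("plug in your detection/segmentation models here")

# Split the prompt so each attachment is bound to its own subject.
subject_prompts = {
    "cat": "a red cat wearing sunglasses",
    "dog": "a blue dog wearing a scarf",
}
masks = detect_subject_masks(image, list(subject_prompts))

# Stage 2: resynthesize each subject in isolation inside its own mask, so the
# other concepts cannot bleed into it.
inpaint = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=dtype
).to(device)

for word, prompt in subject_prompts.items():
    image = inpaint(
        prompt=prompt,
        image=image,
        mask_image=masks[word],
        strength=0.7,          # partial resynthesis keeps the overall layout
        num_inference_steps=30,
    ).images[0]

image.save("isolated_diffusion_sketch.png")
```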

| Search Query: ArXiv Query: search_query=au:"Jiansheng Chen"&id_list=&start=0&max_results=3
