Kavli Affiliate: Feng Wang
| First 5 Authors: Yuxue Yang, Lue Fan, Zuzeng Lin, Feng Wang, Zhaoxiang Zhang
| Summary:
Animation production separates foreground and background elements into
distinct layers, with different processes applied for sketching, refining,
coloring, and in-betweening.
Existing video generation methods typically treat animation as a monolithic
data domain, lacking fine-grained control over individual layers. In this
paper, we introduce LayerAnimate, a novel architectural approach that enhances
fine-grained control over individual animation layers within a video diffusion
model, allowing users to independently manipulate foreground and background
elements in distinct layers. To address the challenge of limited layer-specific
data, we propose a data curation pipeline that features automated element
segmentation, motion-state hierarchical merging, and motion coherence
refinement. Through quantitative and qualitative comparisons and a user study,
we demonstrate that LayerAnimate outperforms current methods in terms of
animation quality, control precision, and usability, making it an ideal tool
for both professional animators and amateur enthusiasts. This framework opens
up new possibilities for layer-specific animation applications and creative
flexibility. Our code is available at https://layeranimate.github.io.
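The abstract only names the curation pipeline's three stages, so the sketch
below is a toy NumPy illustration of the idea rather than the paper's method:
segment a frame into candidate element masks, merge them by motion state into
static (background-like) and dynamic (foreground-like) layers, then keep only
layers whose motion is coherent. All function names, thresholds, and the
brightness-band segmenter are hypothetical stand-ins for the paper's learned
components.

```python
# Toy sketch of a three-stage layer curation pipeline (hypothetical names).
import numpy as np

def segment_elements(frame: np.ndarray, band: float = 0.5) -> list[np.ndarray]:
    """Toy 'element segmentation': one binary mask per brightness band.
    A real pipeline would use a learned segmenter, not intensity bands."""
    masks = []
    for lo in np.arange(0.0, 1.0, band):
        mask = (frame >= lo) & (frame < lo + band)
        if mask.any():
            masks.append(mask)
    return masks

def motion_magnitude(mask: np.ndarray, prev: np.ndarray, cur: np.ndarray) -> float:
    """Mean absolute frame difference inside a mask: a crude motion proxy."""
    return float(np.abs(cur - prev)[mask].mean())

def merge_by_motion_state(masks, prev, cur, static_eps=0.02):
    """Toy 'motion-state hierarchical merging': pool element masks into a
    static layer and a dynamic layer by their motion magnitude."""
    static = np.zeros(prev.shape, dtype=bool)
    dynamic = np.zeros(prev.shape, dtype=bool)
    for m in masks:
        if motion_magnitude(m, prev, cur) < static_eps:
            static |= m
        else:
            dynamic |= m
    return {"static": static, "dynamic": dynamic}

def refine_coherence(layers, prev, cur, max_var=0.05):
    """Toy 'motion coherence refinement': drop layers whose per-pixel motion
    varies too much, i.e., that likely mix independent elements."""
    kept = {}
    for name, mask in layers.items():
        if mask.any() and float(np.abs(cur - prev)[mask].var()) <= max_var:
            kept[name] = mask
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((64, 64))
    cur = prev.copy()
    cur[16:32, 16:32] += 0.3          # a moving "foreground" patch
    cur = cur.clip(0.0, 1.0)
    masks = segment_elements(cur)
    layers = merge_by_motion_state(masks, prev, cur)
    layers = refine_coherence(layers, prev, cur)
    print({name: int(mask.sum()) for name, mask in layers.items()})
```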
| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3