WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Xiaofeng Wang, Zheng Zhu, Guan Huang, Boyuan Wang, Xinze Chen

| Summary:

World models play a crucial role in understanding and predicting the dynamics
of the world, which is essential for video generation. However, existing world
models are confined to specific scenarios such as gaming or driving, limiting
their ability to capture the complexity of dynamic environments in the general
world. We therefore introduce WorldDreamer, a pioneering world model that
fosters a comprehensive understanding of general world physics and motion,
significantly enhancing the capabilities of video generation. Drawing
inspiration from the success of large language models, WorldDreamer frames
world modeling as an unsupervised visual sequence modeling challenge. This is
achieved by mapping visual inputs to discrete tokens and predicting the masked
ones. During this process, we incorporate multi-modal prompts to facilitate
interaction within the world model. Our experiments show that WorldDreamer
excels in generating videos across different scenarios, including natural
scenes and driving environments. WorldDreamer showcases versatility in
executing tasks such as text-to-video conversion, image-to-video synthesis, and
video editing. These results underscore WorldDreamer’s effectiveness in
capturing dynamic elements within diverse general world environments.
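
To make the masked-token objective described above concrete, here is a minimal PyTorch sketch. It assumes visual inputs have already been discretized by a VQ-style tokenizer (not shown); a random subset of tokens is replaced by a reserved [MASK] id, and a transformer is trained to recover the originals. The vocabulary size, sequence length, model dimensions, and masking ratio are illustrative assumptions, not the paper's actual configuration.

```python
# A rough sketch of masked visual-token prediction, not WorldDreamer's
# actual architecture. All sizes below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 8192      # assumed codebook size of a VQ-style tokenizer
SEQ_LEN = 256          # assumed number of discrete tokens per video clip
MASK_ID = VOCAB_SIZE   # reserve one extra id as the [MASK] token


class MaskedTokenPredictor(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, d_model)  # +1 for [MASK]
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):
        # tokens: (batch, SEQ_LEN) integer ids -> (batch, SEQ_LEN, VOCAB_SIZE)
        x = self.embed(tokens) + self.pos
        return self.head(self.encoder(x))


def masked_prediction_loss(model, tokens, mask_ratio=0.5):
    """Mask a random subset of tokens; score predictions only there."""
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    return F.cross_entropy(logits[mask], tokens[mask])


# Toy usage with random ids standing in for real visual tokens.
model = MaskedTokenPredictor()
fake_tokens = torch.randint(0, VOCAB_SIZE, (2, SEQ_LEN))
loss = masked_prediction_loss(model, fake_tokens)
loss.backward()
```

Computing the loss only at masked positions mirrors BERT-style training: the model must infer the hidden visual tokens from their surrounding spatial and temporal context, which is what lets such a model learn world dynamics from unlabeled video.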

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3
