LogoMotion: Visually Grounded Code Generation for Content-Aware Animation

Kavli Affiliate: Matthew Fisher

| First 5 Authors: Vivian Liu, Rubaiat Habib Kazi, Li-Yi Wei, Matthew Fisher, Timothy Langlois

| Summary:

Animated logos are a compelling and ubiquitous way for individuals and brands
to represent themselves online. Manually authoring these logos can require
significant artistic skill and effort. To help novice designers animate logos,
design tools currently offer templates and animation presets. However, these
solutions can be limited in their expressive range. Large language models have
the potential to help novice designers create animated logos by generating
animation code that is tailored to their content. In this paper, we introduce
LogoMotion, an LLM-based system that takes in a layered document and generates
animated logos through visually-grounded program synthesis. We introduce
techniques to create an HTML representation of a canvas, identify primary and
secondary elements, synthesize animation code, and visually debug animation
errors. Compared with an industry-standard tool, we find that LogoMotion
produces animations that are more content-aware and on par in
quality. We conclude with a discussion of the implications of LLM-generated
animation for motion design.
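
As a rough illustration of the kind of output the abstract describes, the sketch below shows a layered logo canvas expressed as HTML, with layers tagged as primary or secondary, followed by a small generated animation script. The element ids (#hero, #tagline, #accent), the data-role attribute, and the use of the anime.js library are illustrative assumptions, not details confirmed by the paper.

```html
<!-- A minimal sketch, assuming the generated animation code targets plain HTML
     plus the anime.js library. Element ids, data-role tags, and the specific
     motion values are illustrative assumptions, not taken from the paper. -->
<!DOCTYPE html>
<html>
<body>
  <!-- Layered canvas: each child corresponds to one layer of the logo document -->
  <div id="canvas" style="position: relative; width: 640px; height: 480px; background: #fff;">
    <!-- Primary element: the main logo mark -->
    <div id="hero" data-role="primary"
         style="position: absolute; left: 240px; top: 160px; font: bold 48px sans-serif;">ACME</div>
    <!-- Secondary elements: supporting text and decoration -->
    <div id="tagline" data-role="secondary"
         style="position: absolute; left: 210px; top: 230px; font: 18px sans-serif;">every moment counts</div>
    <div id="accent" data-role="secondary"
         style="position: absolute; left: 210px; top: 262px; width: 220px; height: 4px; background: #e74c3c;"></div>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/animejs@3.2.1/lib/anime.min.js"></script>
  <script>
    // Example of generated animation code: the primary element enters first,
    // then the secondary elements follow, giving a content-aware ordering.
    const tl = anime.timeline({ easing: 'easeOutQuad', duration: 800 });
    tl.add({ targets: '#hero',    translateY: [-200, 0], opacity: [0, 1] })
      .add({ targets: '#tagline', opacity: [0, 1] }, '-=300')        // overlap by 300 ms
      .add({ targets: '#accent',  scaleX:  [0, 1], opacity: [0, 1] }, '-=200');
  </script>
</body>
</html>
```

A rendering of a page like this is what the system's visual debugging step, as described in the abstract, would inspect for animation errors.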

| Search Query: ArXiv Query: search_query=au:"Matthew Fisher"&id_list=&start=0&max_results=3
