NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation

Kavli Affiliate: Matthew Fisher

| First 5 Authors: Vikas Thamizharasan, Difan Liu, Matthew Fisher, Nanxuan Zhao, Evangelos Kalogerakis

| Summary:

The success of denoising diffusion models in representing rich data
distributions over 2D raster images has prompted research on extending them to
other data representations, such as vector graphics. Unfortunately, due to their
variable structure and the scarcity of vector training data, directly applying
diffusion models to this domain remains a challenging problem. Workarounds such
as optimization via Score Distillation Sampling (SDS) are also fraught with
difficulty, as vector representations are non-trivial to optimize directly and
tend to produce implausible geometries, such as redundant or self-intersecting
shapes. NIVeL addresses these challenges by reinterpreting the problem on an
alternative, intermediate domain that preserves the desirable properties of
vector graphics, namely sparsity of representation and resolution independence.
This alternative domain is based on neural implicit fields expressed in a set
of decomposable, editable layers. Based on our experiments, NIVeL produces
text-to-vector graphics results of significantly better quality than the
state of the art.
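
The abstract's central idea, representing shapes as neural implicit fields organized into a set of decomposable layers, can be pictured with a minimal sketch. The code below is not the authors' architecture; it is an assumption-laden illustration in which a coordinate MLP maps 2D points to K per-layer occupancy values, which can then be sampled at any resolution. The layer count, network width, depth, and sigmoid occupancy head are hypothetical choices.

```python
# Minimal sketch (illustrative, not the NIVeL implementation): a coordinate MLP
# that outputs K per-layer occupancy values for each 2D query point, capturing
# the idea of "neural implicit fields expressed in decomposable, editable layers".
import torch
import torch.nn as nn

class ImplicitLayerField(nn.Module):
    def __init__(self, num_layers_k: int = 4, hidden: int = 128, depth: int = 4):
        super().__init__()
        dims = [2] + [hidden] * depth
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.backbone = nn.Sequential(*blocks)
        # One occupancy value per decomposable layer at each query point.
        self.head = nn.Linear(hidden, num_layers_k)

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (N, 2) coordinates in [0, 1]^2 -> (N, K) occupancies in (0, 1).
        return torch.sigmoid(self.head(self.backbone(xy)))

# Because the field is continuous, the K layers can be rasterized at any
# resolution (resolution independence):
model = ImplicitLayerField()
res = 256
ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                        torch.linspace(0, 1, res), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
occupancy = model(grid).reshape(res, res, -1)  # (H, W, K) per-layer masks
```

Each of the K occupancy fields acts as an editable layer that could later be converted to vector paths; the sketch omits colors, layer compositing, and the SDS-based optimization the abstract refers to.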

| Search Query: ArXiv Query: search_query=au:"Matthew Fisher"&id_list=&start=0&max_results=3
