A Multi-Implicit Neural Representation for Fonts

Kavli Affiliate: Matthew Fisher

| First 5 Authors: Pradyumna Reddy, Zhifei Zhang, Matthew Fisher, Hailin Jin, Zhaowen Wang

| Summary:

Fonts are ubiquitous across documents and come in a variety of styles. They
are either represented in a native vector format or rasterized to produce fixed
resolution images. In the former case, the non-standard representation prevents
fonts from benefiting from the latest network architectures for neural
representations; in the latter case, the rasterized representation, when encoded
via networks, loses data fidelity, as font-specific discontinuities such as
edges and corners are difficult to represent with neural networks. Based on the
observation that complex fonts can be represented by a superposition of a set
of simpler occupancy functions, we introduce multi-implicits to represent fonts
as a permutation-invariant set of learned implicit functions,
without losing features (e.g., edges and corners). However, while
multi-implicits locally preserve font features, obtaining supervision in the
form of ground truth multi-channel signals is a problem in itself. Instead, we
propose to train such a representation with only local supervision, while
the proposed neural architecture directly finds globally consistent
multi-implicits for font families. We extensively evaluate the proposed
representation for various tasks including reconstruction, interpolation, and
synthesis, demonstrating clear advantages over existing alternatives.
Additionally, the representation naturally enables glyph completion, wherein a
single characteristic font is used to synthesize a whole font family in the
target style.
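To make the superposition idea concrete, here is a minimal sketch, an illustration only and not the paper's actual architecture, of how a sharp-cornered region can be composed from several simple occupancy channels. The `half_plane` helper, the channel choices, and all parameters are hypothetical; the point is that corners, which a single smooth network struggles to fit, emerge naturally where channel boundaries intersect under a pointwise min (intersection) or max (union) composition:

```python
import numpy as np

# Hypothetical sketch: a glyph-like region built from simple occupancy
# channels. Each channel is positive inside its region and negative
# outside; composing channels with a pointwise min (intersection) or
# max (union) produces sharp corners where channel boundaries meet.

def half_plane(nx, ny, c):
    """Occupancy of the half-plane nx*x + ny*y <= c (positive inside)."""
    def occ(x, y):
        return c - (nx * x + ny * y)
    return occ

# Three half-planes whose intersection is a sharp-cornered strip:
# |x| <= 0.5 and y <= 0, with corners at (-0.5, 0) and (0.5, 0).
channels = [half_plane(1.0, 0.0, 0.5),
            half_plane(-1.0, 0.0, 0.5),
            half_plane(0.0, 1.0, 0.0)]

def composite(x, y, channels, mode="intersection"):
    """Combine channel occupancies pointwise: min = intersection, max = union."""
    vals = np.stack([ch(x, y) for ch in channels])
    return vals.min(axis=0) if mode == "intersection" else vals.max(axis=0)

# Rasterize at any resolution by thresholding the composite at zero.
xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
inside = composite(xs, ys, channels) >= 0.0
```

Because the composite is an implicit function, the same representation can be sampled at arbitrary resolution, which is the property the abstract contrasts with fixed-resolution rasterization.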
