NeMF: Neural Motion Fields for Kinematic Animation

Kavli Affiliate: Yi Zhou

| First 5 Authors: Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou

| Summary:

We present an implicit neural representation to learn the spatio-temporal
space of kinematic motions. Unlike previous work that represents motion as
discrete sequential samples, we propose to express the vast motion space as a
continuous function over time, hence the name Neural Motion Fields (NeMF).
Specifically, we use a neural network to learn this function for miscellaneous
sets of motions, which is designed to be a generative model conditioned on a
temporal coordinate $t$ and a random vector $z$ for controlling the style. The
model is then trained as a Variational Autoencoder (VAE) with motion encoders
to sample the latent space. We train our model on a diverse human motion
dataset and a quadruped dataset to demonstrate its versatility, and finally deploy it
as a generic motion prior to solve task-agnostic problems and show its
superiority in different motion generation and editing applications, such as
motion interpolation, in-betweening, and re-navigating.
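
To make the described architecture concrete, here is a minimal sketch (not the authors' released code) of the idea in the summary: a decoder network that maps a temporal coordinate $t$ and a style latent $z$ to a pose, paired with a motion encoder and trained as a VAE. The class names, layer sizes, positional encoding, and pose dimension are assumptions for illustration only.

```python
# Minimal NeMF-style sketch: pose = f(t, z), trained as a VAE.
# All hyperparameters and the pose representation are illustrative assumptions.
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """Decoder: maps a temporal coordinate t and a style latent z to a pose."""
    def __init__(self, latent_dim=128, pose_dim=24 * 6, num_freqs=8, hidden=512):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 2 * num_freqs + latent_dim  # sin/cos encoding of t, plus z
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def encode_time(self, t):
        # Sinusoidal positional encoding so the MLP can fit high-frequency motion.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=t.device) * torch.pi
        angles = t.unsqueeze(-1) * freqs              # (..., num_freqs)
        return torch.cat([angles.sin(), angles.cos()], dim=-1)

    def forward(self, t, z):
        # t: (B, T) normalized timestamps; z: (B, latent_dim) style code.
        T = t.shape[-1]
        z = z.unsqueeze(1).expand(-1, T, -1)          # broadcast z over time
        return self.net(torch.cat([self.encode_time(t), z], dim=-1))

class MotionEncoder(nn.Module):
    """Encoder: compresses a motion clip into the parameters of q(z | motion)."""
    def __init__(self, pose_dim=24 * 6, latent_dim=128, hidden=512):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)

    def forward(self, motion):                        # motion: (B, T, pose_dim)
        _, h = self.gru(motion)
        h = h[-1]
        return self.to_mu(h), self.to_logvar(h)

# One VAE training step: reconstruct the clip from (t, z) and regularize z.
decoder, encoder = MotionField(), MotionEncoder()
motion = torch.randn(4, 60, 24 * 6)                   # toy batch: 4 clips of 60 frames
t = torch.linspace(0, 1, 60).expand(4, -1)
mu, logvar = encoder(motion)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
recon = decoder(t, z)
loss = (recon - motion).pow(2).mean() \
       - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss.backward()
```

Because the decoder is a continuous function of $t$, motions can in principle be queried at arbitrary timestamps, which is what enables the interpolation, in-betweening, and re-navigating applications mentioned above.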

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=10
