GRIP: Generating Interaction Poses Using Latent Consistency and Spatial Cues

Kavli Affiliate: Yi Zhou

| First 5 Authors: Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan

| Summary:

Hands are dexterous and highly versatile manipulators that are central to how
humans interact with objects and their environment. Consequently, modeling
realistic hand-object interactions, including the subtle motion of individual
fingers, is critical for applications in computer graphics, computer vision,
and mixed reality. Prior work on capturing and modeling humans interacting with
objects in 3D focuses on the body and object motion, often ignoring hand pose.
In contrast, we introduce GRIP, a learning-based method that takes, as input,
the 3D motion of the body and the object, and synthesizes realistic motion for
both hands before, during, and after object interaction. As a preliminary step
before synthesizing the hand motion, we first use a network, ANet, to denoise
the arm motion. Then, we leverage the spatio-temporal relationship between the
body and the object to extract two types of novel temporal interaction cues,
and use them in a two-stage inference pipeline to generate the hand motion. In
the first stage, we introduce a new approach to enforce motion temporal
consistency in the latent space (LTC), and generate consistent interaction
motions. In the second stage, GRIP generates refined hand poses to avoid
hand-object penetrations. Given sequences of noisy body and object motion, GRIP
upgrades them to include hand-object interaction. Quantitative experiments and
perceptual studies demonstrate that GRIP outperforms baseline methods and
generalizes to unseen objects and motions from different motion-capture
datasets.
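
Below is a minimal sketch (not the authors' released code) of what such a two-stage inference pipeline could look like in PyTorch. Only ANet and the latent temporal-consistency (LTC) idea come from the abstract; every module name, tensor shape, and feature dimension here is an illustrative assumption.

```python
# Hypothetical GRIP-style pipeline: arm denoising, then two-stage hand synthesis.
# All names/shapes are assumptions for illustration, not the paper's actual code.
import torch
import torch.nn as nn

class ANet(nn.Module):
    """Denoises noisy arm motion conditioned on body and object motion features."""
    def __init__(self, dim_in=128, dim_out=14):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                 nn.Linear(256, dim_out))

    def forward(self, body_obj_feats):          # (T, dim_in)
        return self.mlp(body_obj_feats)         # cleaned arm poses, (T, dim_out)

class HandStage1(nn.Module):
    """Stage 1 (hypothetical): predicts a per-frame latent hand code whose
    temporal smoothness is encouraged, reflecting the LTC idea."""
    def __init__(self, dim_in=142, dim_z=32, dim_pose=90):
        super().__init__()
        self.encoder = nn.GRU(dim_in, dim_z, batch_first=True)
        self.decoder = nn.Linear(dim_z, dim_pose)

    def forward(self, interaction_cues):        # (B, T, dim_in)
        z, _ = self.encoder(interaction_cues)   # latent trajectory, (B, T, dim_z)
        coarse_hands = self.decoder(z)          # coarse hand poses, (B, T, dim_pose)
        # LTC-style regularizer: penalize frame-to-frame jumps in latent space.
        ltc_loss = (z[:, 1:] - z[:, :-1]).pow(2).mean()
        return coarse_hands, ltc_loss

class HandStage2(nn.Module):
    """Stage 2 (hypothetical): refines coarse hand poses using hand-object
    distance cues so the fingers avoid penetrating the object."""
    def __init__(self, dim_pose=90, dim_dist=64):
        super().__init__()
        self.refiner = nn.Sequential(nn.Linear(dim_pose + dim_dist, 256), nn.ReLU(),
                                     nn.Linear(256, dim_pose))

    def forward(self, coarse_hands, obj_dist_feats):
        x = torch.cat([coarse_hands, obj_dist_feats], dim=-1)
        return coarse_hands + self.refiner(x)   # residual pose correction

if __name__ == "__main__":
    B, T = 1, 30
    anet, stage1, stage2 = ANet(), HandStage1(), HandStage2()
    arm = anet(torch.randn(T, 128))                   # denoised arm motion
    coarse, ltc = stage1(torch.randn(B, T, 142))      # stage 1 + LTC term
    refined = stage2(coarse, torch.randn(B, T, 64))   # stage 2 refinement
    print(arm.shape, coarse.shape, refined.shape, ltc.item())
```

The point of the sketch is the data flow: denoised arm motion and interaction cues feed a first network that produces temporally consistent coarse hand motion, and a second network refines those poses against object-distance features to remove penetrations.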

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
