Learning Visibility for Robust Dense Human Body Estimation

Kavli Affiliate: Yi Zhou

| First 5 Authors: Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou

| Summary:

Estimating 3D human pose and shape from 2D images is a crucial yet
challenging task. While prior methods with model-based representations can
perform reasonably well on whole-body images, they often fail when parts of the
body are occluded or outside the frame. Moreover, the results usually do not
faithfully capture the human silhouette due to the limited representation
power of the underlying deformable models (e.g., representing only the naked body). An
alternative approach is to estimate dense vertices of a predefined template
body in the image space. Such representations are effective in localizing
vertices within an image but cannot handle out-of-frame body parts. In this
work, we learn dense human body estimation that is robust to partial
observations. We explicitly model the visibility of human joints and vertices
in the x, y, and z axes separately. The visibility in the x and y axes helps
distinguish out-of-frame cases, while the visibility in the depth (z) axis corresponds
to occlusions (either self-occlusions or occlusions by other objects). We
obtain pseudo ground-truths of visibility labels from dense UV correspondences
and train a neural network to predict visibility along with 3D coordinates. We
show that visibility can serve as 1) an additional signal to resolve depth
ordering ambiguities of self-occluded vertices and 2) a regularization term
when fitting a human body model to the predictions. Extensive experiments on
multiple 3D human datasets demonstrate that visibility modeling significantly
improves the accuracy of human body estimation, especially for partial-body
cases. Our project page with code is at: https://github.com/chhankyao/visdb.
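To make the per-axis visibility idea concrete, below is a minimal, hypothetical sketch (Python/NumPy, not taken from the authors' released code) of how visibility labels along the x, y, and z axes could be derived for projected mesh vertices, assuming 2D projections, per-vertex depths, and a rendered depth map are available. The function name and the `depth_tol` threshold are illustrative assumptions.

```python
import numpy as np

def compute_visibility_labels(verts_2d, verts_depth, depth_map, img_w, img_h, depth_tol=0.02):
    """Illustrative per-vertex visibility labels along x, y, and z.

    verts_2d:    (N, 2) projected vertex coordinates in pixels.
    verts_depth: (N,) vertex depths in the camera frame.
    depth_map:   (H, W) rendered depth of the body/scene, np.inf where empty.
    Returns three boolean arrays (vis_x, vis_y, vis_z) of shape (N,).
    """
    x, y = verts_2d[:, 0], verts_2d[:, 1]

    # x/y visibility: the vertex projects inside the image frame (handles truncation).
    vis_x = (x >= 0) & (x < img_w)
    vis_y = (y >= 0) & (y < img_h)

    # z visibility: the vertex is (approximately) the closest surface along its ray,
    # i.e., not occluded by another body part or object in front of it.
    xi = np.clip(np.round(x).astype(int), 0, img_w - 1)
    yi = np.clip(np.round(y).astype(int), 0, img_h - 1)
    vis_z = verts_depth <= depth_map[yi, xi] + depth_tol
    # Out-of-frame vertices have no meaningful depth test; mark them as not z-visible.
    vis_z &= vis_x & vis_y

    return vis_x, vis_y, vis_z
```

In the paper, such labels are instead obtained as pseudo ground-truths from dense UV correspondences, and the predicted visibility is then used both to resolve depth-ordering ambiguities and to regularize the fitting of a parametric body model to the dense predictions.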

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=10
