GaitStrip: Gait Recognition via Effective Strip-based Feature Representations and Multi-Level Framework

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Ming Wang, Beibei Lin, Xianda Guo, Lincheng Li, Zheng Zhu

| Summary:

Many gait recognition methods first partition the human gait into N parts and
then combine them to establish part-based feature representations. Their gait
recognition performance is often affected by the partitioning strategy, which
is chosen empirically for each dataset. However, we observe that strips, the
basic components of parts, are agnostic to the partitioning strategy.
Motivated by this observation, we present a strip-based multi-level
gait recognition network, named GaitStrip, to extract comprehensive gait
information at different levels. To be specific, our high-level branch explores
the context of gait sequences and our low-level one focuses on detailed posture
changes. We introduce a novel StriP-Based feature extractor (SPB) to learn the
strip-based feature representations by directly taking each strip of the human
body as the basic unit. Moreover, we propose a novel multi-branch structure,
called Enhanced Convolution Module (ECM), to extract different representations
of gaits. ECM consists of the Spatial-Temporal feature extractor (ST), the
Frame-Level feature extractor (FL) and SPB, and has two obvious advantages:
First, each branch focuses on a specific representation, which can be used to
improve the robustness of the network. Specifically, ST aims to extract
spatial-temporal features of gait sequences, while FL is used to generate the
feature representation of each frame. Second, the parameters of the ECM can be
reduced at test time via a structural re-parameterization technique.
Extensive experimental results demonstrate that our GaitStrip achieves
state-of-the-art performance under both normal walking and complex conditions.
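
The abstract describes ECM as three parallel branches (ST, FL, SPB) whose outputs are fused into one representation. Below is a minimal, illustrative sketch of such a block, not the authors' implementation: it assumes 5-D silhouette tensors shaped (batch, channels, frames, height, width), and the kernel sizes, the width-pooling used for the strip branch, and the summation fusion are all assumptions.

```python
# Hypothetical ECM-style multi-branch block (sketch only, not the paper's code).
import torch
import torch.nn as nn


class ECMBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # ST branch: spatial-temporal features via a 3x3x3 3-D convolution.
        self.st = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        # FL branch: frame-level features via a 1x3x3 convolution (no temporal mixing).
        self.fl = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # SPB branch: each horizontal strip (one row of the feature map) is treated
        # as the basic unit; here it is aggregated by averaging over the width
        # before a 1x1x1 convolution (an assumed, simplified strip extractor).
        self.spb = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, H, W)
        st_feat = self.st(x)
        fl_feat = self.fl(x)
        strips = x.mean(dim=-1, keepdim=True)            # (N, C, T, H, 1)
        spb_feat = self.spb(strips).expand_as(st_feat)   # broadcast back over W
        # Fuse the three representations by summation. At test time the paper
        # merges branch parameters via structural re-parameterization (not shown).
        return st_feat + fl_feat + spb_feat


if __name__ == "__main__":
    x = torch.randn(2, 32, 8, 64, 44)    # 2 sequences, 8 frames of 64x44 silhouettes
    block = ECMBlock(32, 64)
    print(block(x).shape)                 # torch.Size([2, 64, 8, 64, 44])
```

Because all three branches are linear convolutions fused by addition, a single merged kernel can replace them at inference, which is what makes the parameter reduction mentioned in the abstract possible.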

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=10
