Kavli Affiliate: Zheng Zhu
| First 5 Authors: Jianqiang Xia, DianXi Shi, Ke Song, Linna Song, XiaoLei Wang
| Summary:
Most existing RGB-T tracking networks extract the features of each modality
separately, without interaction or mutual guidance between modalities. This
limits the network's ability to adapt to the diverse dual-modality appearances
of targets and to the dynamic relationships between the modalities. In
addition, the three-stage fusion tracking paradigm these networks follow
significantly restricts tracking speed. To overcome these problems, we
propose a unified single-stage Transformer RGB-T tracking network, namely
USTrack, which unifies the above three stages into a single ViT (Vision
Transformer) backbone with a dual embedding layer through the self-attention
mechanism. With this structure, the network can extract fusion features of the
template and search region under the mutual interaction of modalities.
Simultaneously, relation modeling is performed between these features,
efficiently yielding search-region fusion features with better
target-background discriminability for prediction. Furthermore, we introduce a
novel feature selection mechanism based on modality reliability to mitigate the
influence of invalid modalities on prediction, further improving the tracking
performance. Extensive experiments on three popular RGB-T tracking benchmarks
demonstrate that our method achieves new state-of-the-art performance while
maintaining the fastest inference speed of 84.2 FPS. In particular, MPR/MSR on
the short-term and long-term subsets of the VTUAV dataset increase by
11.1%/11.7% and 11.3%/9.7%, respectively.
| Search Query: ArXiv Query: search_query=au:”Zheng Zhu”&id_list=&start=0&max_results=3
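| Illustration: Below is a minimal, hypothetical PyTorch sketch of the single-stage idea described in the summary: RGB and thermal template/search crops are embedded by per-modality (dual) embedding layers, concatenated into one token sequence, and passed through a shared ViT encoder so that feature extraction, cross-modal fusion, and template-search relation modeling all happen inside the same self-attention layers, followed by a reliability-weighted selection of the two modalities' search features. All module names, dimensions, the 3-channel thermal input, and the reliability head are illustrative assumptions, not the authors' USTrack implementation.

# Minimal sketch (assumptions throughout; not the authors' code) of a
# single-stage RGB-T fusion tracker backbone as described in the abstract.
import torch
import torch.nn as nn


class SingleStageFusionViT(nn.Module):
    def __init__(self, embed_dim=256, depth=4, num_heads=8, patch=16):
        super().__init__()
        # Dual embedding layer: one patch projection per modality
        # (TIR assumed 3-channel here for simplicity).
        self.embed_rgb = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        self.embed_tir = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        # Shared ViT backbone: self-attention over the joint token sequence
        # performs extraction, cross-modal fusion, and relation modeling together.
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        # Hypothetical reliability head: scores each modality's search features
        # so an unreliable modality (e.g. dark RGB, saturated TIR) contributes less.
        self.reliability = nn.Linear(embed_dim, 1)

    def tokens(self, img, embed):
        # (B, 3, H, W) -> (B, N, C) patch tokens
        return embed(img).flatten(2).transpose(1, 2)

    def forward(self, z_rgb, z_tir, x_rgb, x_tir):
        # Template (z) and search-region (x) tokens for both modalities.
        tz_rgb = self.tokens(z_rgb, self.embed_rgb)
        tz_tir = self.tokens(z_tir, self.embed_tir)
        tx_rgb = self.tokens(x_rgb, self.embed_rgb)
        tx_tir = self.tokens(x_tir, self.embed_tir)
        n_z, n_x = tz_rgb.shape[1], tx_rgb.shape[1]
        # One joint sequence: every token attends to every other token, so the
        # three stages of the classic pipeline collapse into a single pass.
        joint = torch.cat([tz_rgb, tz_tir, tx_rgb, tx_tir], dim=1)
        out = self.backbone(joint)
        # Split the two search-region feature sets back out.
        sx_rgb = out[:, 2 * n_z: 2 * n_z + n_x]
        sx_tir = out[:, 2 * n_z + n_x:]
        # Reliability-weighted selection/fusion of the search features
        # (assumption: softmax over per-modality scores from pooled tokens).
        scores = torch.stack(
            [self.reliability(sx_rgb.mean(1)), self.reliability(sx_tir.mean(1))],
            dim=1,
        )  # (B, 2, 1)
        w = scores.softmax(dim=1)
        fused = w[:, 0:1] * sx_rgb + w[:, 1:2] * sx_tir
        return fused  # a full tracker would feed this to a prediction head


if __name__ == "__main__":
    net = SingleStageFusionViT()
    z = torch.randn(1, 3, 128, 128)   # template crops
    x = torch.randn(1, 3, 256, 256)   # search-region crops
    print(net(z, z.clone(), x, x.clone()).shape)  # torch.Size([1, 256, 256])

The key design choice mirrored here is that fusion is not a separate module: concatenating all template and search tokens from both modalities before the encoder lets a single stack of self-attention layers do the cross-modal interaction and the template-search relation modeling at once, which is what enables the single-stage speedup claimed in the summary.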