**Authors:** Xiao, Changcheng, Qiong Cao, Yujie Zhong, Long Lan, Xiang Zhang, Huayue Cai, Zhigang Luo, and Dacheng Tao

Affiliations:

Published: 2023

Venue:

Publisher: arXiv

**Title:** MotionTrack: Learning Motion Predictor for Multiple Object Tracking

Paper link:

MotionTrack: Learning Motion Predictor for Multiple Object Tracking

Code:

Significance:

Personal notes

I. Abstract

Significant advancements have been made in multi-object tracking (MOT) with the development of detection and re-identification (ReID) techniques. Despite these developments, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains challenging, due to the insufficient discriminability of ReID features and the predominant use of linear motion models in MOT. In this context, we present a novel learnable motion predictor, named MotionTrack, which comprehensively incorporates two levels of granularity of motion features to enhance the modeling of temporal dynamics and facilitate accurate future motion prediction of individual objects. Specifically, the proposed approach adopts a self-attention mechanism to capture token-level information and a Dynamic MLP layer to model channel-level features. MotionTrack is a simple, online tracking approach. Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on demanding datasets such as SportsMOT and DanceTrack, which feature highly nonlinear object motion. Notably, without fine-tuning on target datasets, MotionTrack also exhibits competitive performance on conventional benchmarks including MOT17 and MOT20.
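The two granularities mentioned in the abstract, token-level mixing via self-attention over the past motion sequence and channel-level mixing via an input-conditioned (dynamic) MLP, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the weight shapes, the mean-pooled weight generation, and the final offset head are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Token-level mixing: X is (T, d), the past motion tokens of one track
    # (e.g. per-frame bounding-box deltas embedded into d channels).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return A @ V

def dynamic_mlp(X, W_gen):
    # Channel-level mixing with input-conditioned weights (an assumption of
    # how a "Dynamic MLP" may work): a per-sequence weight matrix is
    # generated from the pooled features, then applied across channels.
    ctx = X.mean(axis=0)                               # (d,)
    W_dyn = (W_gen @ ctx).reshape(X.shape[1], X.shape[1])
    return np.maximum(X @ W_dyn, 0.0)                  # ReLU

rng = np.random.default_rng(0)
T, d = 8, 16
X = rng.standard_normal((T, d))                        # past motion tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
W_gen = rng.standard_normal((d * d, d)) * 0.01
W_out = rng.standard_normal((d, 4)) * 0.1              # hypothetical offset head

H = self_attention(X, Wq, Wk, Wv)                      # token-level features
H = dynamic_mlp(H, W_gen)                              # channel-level features
pred = H[-1] @ W_out                                   # predicted (dx, dy, dw, dh)
print(pred.shape)
```

In an online tracker, `pred` would shift each track's last box forward one frame before matching against new detections, replacing the linear (e.g. Kalman) motion prior with this learned one.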

II. Method