**Authors:** Peize Sun, Jinkun Cao, Yi Jiang, Zehuan Yuan, Song Bai, Kris Kitani, Ping Luo

**Affiliations:** The University of Hong Kong, Carnegie Mellon University, ByteDance

Year: 2022

Venue: CVPR

Publisher: IEEE

Full title: DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion

Paper:

CVPR 2022 Open Access Repository

Code:

https://github.com/DanceTrack/DanceTrack

https://github.com/PaddlePaddle/PaddleDetection

https://github.com/open-mmlab/mmtracking

Significance:

Personal notes

1. Abstract

A typical pipeline for multi-object tracking (MOT) is to use a detector for object localization, followed by re-identification (re-ID) for object association. This pipeline is partially motivated by recent progress in both object detection and re-ID, and partially motivated by biases in existing tracking datasets, where most objects tend to have distinguishing appearance and re-ID models are sufficient for establishing associations. In response to such bias, we would like to re-emphasize that methods for multi-object tracking should also work when object appearance is not sufficiently discriminative. To this end, we propose a large-scale dataset for multi-human tracking, where humans have similar appearance, diverse motion and extreme articulation. As the dataset contains mostly group dancing videos, we name it "DanceTrack". We expect DanceTrack to provide a better platform to develop more MOT algorithms that rely less on visual discrimination and depend more on motion analysis. We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack when compared against existing benchmarks. The dataset, project code and competition are released at: https://github.com/DanceTrack.

2. DanceTrack