Authors: Yuang Zhang, Tiancai Wang, Xiangyu Zhang

**Affiliations:** Shanghai Jiao Tong University, MEGVII Technology, Beijing Academy of Artificial Intelligence

Published: 2023

Venue: CVPR 2023

Publisher:

**Title:** MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors

Paper link:

CVPR 2023 Open Access Repository

Code:

https://github.com/megvii-research/MOTRv2

Significance:

Personal notes

一、Abstract

In this paper, we propose MOTRv2, a simple yet effective pipeline to bootstrap end-to-end multi-object tracking with a pretrained object detector. Existing end-to-end methods, e.g. MOTR [43] and TrackFormer [20], are inferior to their tracking-by-detection counterparts mainly due to their poor detection performance. We aim to improve MOTR by elegantly incorporating an extra object detector. We first adopt the anchor formulation of queries and then use an extra object detector to generate proposals as anchors, providing detection prior to MOTR. This simple modification greatly eases the conflict between jointly learning the detection and association tasks in MOTR. MOTRv2 keeps the query propagation feature and scales well on large-scale benchmarks. MOTRv2 ranks 1st place (73.4% HOTA on DanceTrack) in the 1st Multiple People Tracking in Group Dance Challenge. Moreover, MOTRv2 reaches state-of-the-art performance on the BDD100K dataset. We hope this simple and effective pipeline can provide some new insights to the end-to-end MOT community.
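The key idea in the abstract is to replace MOTR's learned static anchors with box proposals from an extra, pretrained detector (YOLOX in the paper), so each query starts from a strong detection prior. Below is a minimal illustrative sketch of that proposal-to-anchor-query step; the function names, array shapes, and the DETR-style sine-cosine positional encoding are my own assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def sine_pos_embed(x, num_feats=64, temperature=10000.0):
    """DETR-style sine-cosine embedding of a scalar coordinate array.

    x: (N,) normalized coordinates in [0, 1].
    Returns: (N, num_feats) embedding. (Illustrative, not MOTRv2's exact code.)
    """
    scale = 2 * np.pi
    dim_t = temperature ** (2 * (np.arange(num_feats) // 2) / num_feats)
    pos = x[..., None] * scale / dim_t        # (N, num_feats) via broadcasting
    emb = np.empty_like(pos)
    emb[..., 0::2] = np.sin(pos[..., 0::2])   # even dims: sine
    emb[..., 1::2] = np.cos(pos[..., 1::2])   # odd dims: cosine
    return emb

def proposals_to_anchor_queries(proposals, num_feats=64):
    """Turn detector proposals (cx, cy, w, h, score) into anchor boxes plus
    positional query embeddings, sketching the abstract's core idea: the
    extra detector supplies anchors so the transformer queries begin from
    a detection prior instead of learned static anchors."""
    anchors = proposals[:, :4]                # (N, 4) normalized boxes
    embeds = np.concatenate(
        [sine_pos_embed(anchors[:, i], num_feats) for i in range(4)],
        axis=-1,
    )                                         # (N, 4 * num_feats)
    return anchors, embeds

# Two hypothetical proposals from the external detector:
proposals = np.array([[0.50, 0.50, 0.20, 0.30, 0.90],
                      [0.10, 0.20, 0.05, 0.10, 0.80]])
anchors, embeds = proposals_to_anchor_queries(proposals)
```

In MOTRv2 the resulting anchors also serve as the reference points that track queries iteratively refine frame by frame, which is what lets the end-to-end association machinery stay unchanged while detection quality comes from the pretrained detector.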

二、Method