Authors: Bin Yan, Yi Jiang, Peize Sun, Dong Wang, Zehuan Yuan, Ping Luo, Huchuan Lu

**Affiliations:** School of Information and Communication Engineering, Dalian University of Technology; ByteDance; The University of Hong Kong; Peng Cheng Laboratory

Publication year: 2022

Journal/Conference: arXiv

Publisher: arXiv

Full title: Towards Grand Unification of Object Tracking

Paper link:

Towards Grand Unification of Object Tracking

Code:

https://github.com/MasterBin-IIAU/Unicorn

Significance:

Personal understanding

1. Abstract

We present a unified method, termed Unicorn, that can simultaneously solve four tracking problems (SOT, MOT, VOS, MOTS) with a single network using the same model parameters. Due to the fragmented definitions of the object tracking problem itself, most existing trackers are developed to address a single or part of tasks and overspecialize on the characteristics of specific tasks. By contrast, Unicorn provides a unified solution, adopting the same input, backbone, embedding, and head across all tracking tasks. For the first time, we accomplish the great unification of the tracking network architecture and learning paradigm. Unicorn performs on-par or better than its task-specific counterparts on 8 tracking datasets, including LaSOT, TrackingNet, MOT17, BDD100K, DAVIS16-17, MOTS20, and BDD100K MOTS. We believe that Unicorn will serve as a solid step towards the general vision model.
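The central claim is architectural: one input format, one backbone, one pixel embedding, and one head serve SOT, MOT, VOS, and MOTS, with a reference frame (a template for SOT/VOS, the previous frame for MOT/MOTS) correlated against the current frame. The sketch below is my own minimal PyTorch illustration of that idea, not Unicorn's actual code: the toy backbone, channel sizes, and the softmax correlation used for target propagation/association are all assumptions standing in for the paper's real components.

```python
# Minimal sketch of the "one network, four tasks" idea (illustrative, not Unicorn's implementation).
import torch
import torch.nn as nn

class UnifiedTracker(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Shared backbone: the same weights are used for every tracking task.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Shared pixel embedding, used both for target propagation (SOT/VOS)
        # and for instance association (MOT/MOTS).
        self.embed = nn.Conv2d(embed_dim, embed_dim, 1)
        # Shared heads: box regression (4 channels) and mask logits (1 channel).
        self.box_head = nn.Conv2d(embed_dim, 4, 1)
        self.mask_head = nn.Conv2d(embed_dim, 1, 1)

    def forward(self, reference, current):
        # Both frames pass through the SAME backbone and embedding.
        ref_feat = self.embed(self.backbone(reference))  # (B, C, H, W)
        cur_feat = self.embed(self.backbone(current))    # (B, C, H, W)
        b, c, h, w = cur_feat.shape
        ref_flat = ref_feat.flatten(2)                   # (B, C, HW)
        cur_flat = cur_feat.flatten(2)                   # (B, C, HW)
        # Dense correspondence between current and reference embeddings; the same
        # correlation serves SOT/VOS propagation and MOT/MOTS association.
        corr = torch.einsum("bck,bcl->bkl", cur_flat, ref_flat).softmax(dim=-1)  # (B, HW, HW)
        # Fuse reference information into the current features, then decode boxes + masks.
        fused = torch.einsum("bkl,bcl->bck", corr, ref_flat).view(b, c, h, w)
        fused = cur_feat + fused
        return {"boxes": self.box_head(fused), "masks": self.mask_head(fused)}

if __name__ == "__main__":
    model = UnifiedTracker()
    ref = torch.randn(1, 3, 128, 128)  # reference frame (template / previous frame)
    cur = torch.randn(1, 3, 128, 128)  # current frame
    out = model(ref, cur)
    print(out["boxes"].shape, out["masks"].shape)  # (1, 4, 32, 32) and (1, 1, 32, 32)
```

The point of the sketch is only that every task-specific output (box for SOT/MOT, mask for VOS/MOTS) is decoded from the same shared features and correlation, which is what "same input, backbone, embedding, and head" means in the abstract.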