Authors: Longyin Wen · Zhen Lei · Ming-Ching Chang · Honggang Qi · Siwei Lyu

**Affiliations:** Department of Computer Science, University at Albany, State University of New York, USA; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; Department of Computer Engineering, University at Albany, State University of New York, USA; School of Computer and Control Engineering, University of Chinese Academy of Sciences

Year of publication: 2017

Journal/Conference: International Journal of Computer Vision (IJCV)

Title: Multi-Camera Multi-Target Tracking with Space-Time-View Hyper-graph

Paper link:

Code:

Significance:

Personal Understanding

1. Abstract

Incorporating multiple cameras is an effective solution to improve the performance and robustness of multi-target tracking to occlusion and appearance ambiguities. In this paper, we propose a new multi-camera multi-target tracking method based on a space-time-view hyper-graph that encodes higher-order constraints (i.e., beyond pairwise relations) on 3D geometry, appearance, motion continuity, and trajectory smoothness among 2D tracklets within and across different camera views. We solve tracking in each single view and reconstruction of tracked trajectories in 3D environment simultaneously by formulating the problem as an efficient search of dense sub-hypergraphs on the space-time-view hyper-graph using a sampling based approach. Experimental results on the PETS 2009 dataset and MOTChallenge 2015 3D benchmark demonstrate that our method performs favorably against the state-of-the-art methods in both single-camera and multi-camera multi-target tracking, while achieving close to real-time running efficiency. We also provide experimental analysis of the influence of various aspects of our method to the final tracking performance.
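To make the "space-time-view hyper-graph" idea in the abstract more concrete, here is a minimal, hypothetical Python sketch of how 2D tracklets from multiple camera views could be organized into hypergraph nodes and higher-order hyperedges. The class and function names (`Tracklet`, `HyperEdge`, `build_hypergraph`) and the appearance-only affinity are illustrative assumptions, not the paper's formulation: the actual method also encodes 3D geometry, motion continuity, and trajectory smoothness in the hyperedge weights and solves tracking by a sampling-based dense sub-hypergraph search, neither of which is implemented below.

```python
"""Illustrative sketch only: a toy space-time-view hyper-graph over 2D tracklets.

Not the authors' implementation; all names and the affinity formula are
assumptions made to visualize the data structure described in the abstract.
"""
from dataclasses import dataclass
from itertools import combinations
from typing import List, Tuple

import numpy as np


@dataclass
class Tracklet:
    """A short 2D track fragment observed in a single camera view."""
    view_id: int            # which camera produced this tracklet
    t_start: int            # first frame index of the tracklet
    positions: np.ndarray   # (T, 2) image coordinates over time
    appearance: np.ndarray  # appearance feature, e.g. a color histogram


@dataclass
class HyperEdge:
    """A hyperedge joins more than two tracklets (within or across views)
    and stores a higher-order affinity among them."""
    members: Tuple[int, ...]  # indices into the tracklet list
    affinity: float


def appearance_affinity(tracklets: List[Tracklet]) -> float:
    """Average pairwise cosine similarity of the appearance features."""
    sims = []
    for a, b in combinations(tracklets, 2):
        num = float(np.dot(a.appearance, b.appearance))
        den = float(np.linalg.norm(a.appearance) * np.linalg.norm(b.appearance)) + 1e-9
        sims.append(num / den)
    return float(np.mean(sims))


def build_hypergraph(tracklets: List[Tracklet], degree: int = 3,
                     min_affinity: float = 0.5) -> List[HyperEdge]:
    """Enumerate candidate hyperedges of a fixed degree and keep those whose
    affinity exceeds a threshold.  A real system would add 3D-geometry and
    trajectory-smoothness terms and prune far more aggressively than this
    brute-force enumeration."""
    edges = []
    for combo in combinations(range(len(tracklets)), degree):
        members = [tracklets[i] for i in combo]
        aff = appearance_affinity(members)
        if aff >= min_affinity:
            edges.append(HyperEdge(members=combo, affinity=aff))
    return edges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base_feat = rng.random(16)
    # Three toy tracklets of the same target: two from view 0, one from view 1.
    toy = [
        Tracklet(0, 0, rng.random((5, 2)), base_feat + 0.01 * rng.random(16)),
        Tracklet(0, 5, rng.random((5, 2)), base_feat + 0.01 * rng.random(16)),
        Tracklet(1, 0, rng.random((5, 2)), base_feat + 0.01 * rng.random(16)),
    ]
    for edge in build_hypergraph(toy):
        print(edge.members, round(edge.affinity, 3))
```

In this toy setup a single degree-3 hyperedge links the two tracklets from camera 0 with the tracklet from camera 1; in the paper, searching for dense sub-hypergraphs over such edges is what jointly resolves single-view association and cross-view (3D) reconstruction.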