Article: Visual Tracking with Multiview Trajectory Prediction
Field | Value |
---|---|
Title | Visual Tracking with Multiview Trajectory Prediction |
Authors | Wu, Minye; Ling, Haibin; Bi, Ning; Gao, Shenghua; Hu, Qiang; Sheng, Hao; Yu, Jingyi |
Keywords | correlation filter; Deep learning; multiview; tracking; trajectory |
Issue Date | 2020 |
Citation | IEEE Transactions on Image Processing, 2020, v. 29, p. 8355-8367 |
Abstract | Recent progress in visual tracking has greatly improved tracking performance. However, challenges such as occlusion and view change remain obstacles in real-world deployment. A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or require camera calibration. To break through these limitations, we propose a generic multiview tracking (GMT) framework that allows camera movement, while requiring neither a specific object model nor camera calibration. A key innovation in our framework is a cross-camera trajectory prediction network (TPN), which implicitly and dynamically encodes camera geometric relations, and hence addresses missing-target issues such as occlusion. Moreover, during tracking, we assemble information across different cameras to dynamically update a novel collaborative correlation filter (CCF), which is shared among cameras to achieve robustness against view change. The two components are integrated into a correlation filter tracking framework, where features are trained offline using existing single-view tracking datasets. For evaluation, we first contribute a new generic multiview tracking dataset (GMTD) with careful annotations, and then run experiments on the GMTD and CAMPUS datasets. The proposed GMT algorithm shows clear advantages in robustness over state-of-the-art trackers. |
Persistent Identifier | http://hdl.handle.net/10722/345010 |
ISSN | 1057-7149 (print); 1941-0042 (online); 2023 Impact Factor: 10.8; 2023 SCImago Journal Rankings: 3.556 |
DOI | 10.1109/TIP.2020.3014952 |
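The abstract describes two components built on top of a correlation filter tracker: a collaborative correlation filter (CCF) updated with information assembled from all cameras, and a trajectory prediction network (TPN) that steps in when the target goes missing in a view (e.g., under occlusion). The sketch below is a minimal, illustrative Python loop in that spirit only. The MOSSE-style filter, the `predict_trajectory` callback, the confidence threshold, and all names are assumptions made for illustration; this is not the paper's GMT/TPN/CCF implementation.

```python
# Illustrative sketch only: a toy multi-camera tracking loop in the spirit of the
# abstract (one appearance model shared across views, plus a trajectory-prediction
# fallback). All names and update rules here are assumptions, not the authors' method.
import numpy as np


class SharedCorrelationFilter:
    """MOSSE-style filter whose numerator/denominator are accumulated across
    all camera views, so every view reads from one shared appearance model."""

    def __init__(self, lam=1e-2, lr=0.125):
        self.A = None      # accumulated numerator:   G * conj(F)
        self.B = None      # accumulated denominator: F * conj(F)
        self.lam = lam     # regularization term
        self.lr = lr       # online learning rate

    def update(self, patches, target_response):
        """Accumulate filter statistics from one target patch per camera view."""
        G = np.fft.fft2(target_response)
        A_new = np.zeros_like(G)
        B_new = np.zeros_like(G)
        for p in patches:                      # each patch comes from a different view
            F = np.fft.fft2(p)
            A_new += G * np.conj(F)
            B_new += F * np.conj(F)
        if self.A is None:
            self.A, self.B = A_new, B_new
        else:                                  # exponential moving average update
            self.A = (1 - self.lr) * self.A + self.lr * A_new
            self.B = (1 - self.lr) * self.B + self.lr * B_new

    def respond(self, patch):
        """Correlation response map for a search patch, computed in the Fourier domain."""
        H = self.A / (self.B + self.lam)
        return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))


def track_step(views, shared_filter, predict_trajectory, conf_thresh=0.2):
    """Localize the target in every view with the shared filter; when the peak
    response is too weak (e.g., occlusion), fall back to a cross-camera
    trajectory prediction (a stand-in for the paper's TPN)."""
    positions = {}
    for cam_id, patch in views.items():
        resp = shared_filter.respond(patch)
        if resp.max() < conf_thresh:
            # hypothetical callback supplied by the caller
            positions[cam_id] = predict_trajectory(cam_id, positions)
        else:
            positions[cam_id] = np.unravel_index(resp.argmax(), resp.shape)
    return positions
```

In this toy version, the shared accumulation of numerator and denominator is what makes the filter "collaborative": every camera contributes evidence to, and localizes the target with, a single model, which is the property the abstract credits for robustness against view change.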
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, Minye | - |
dc.contributor.author | Ling, Haibin | - |
dc.contributor.author | Bi, Ning | - |
dc.contributor.author | Gao, Shenghua | - |
dc.contributor.author | Hu, Qiang | - |
dc.contributor.author | Sheng, Hao | - |
dc.contributor.author | Yu, Jingyi | - |
dc.date.accessioned | 2024-08-15T09:24:39Z | - |
dc.date.available | 2024-08-15T09:24:39Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Transactions on Image Processing, 2020, v. 29, p. 8355-8367 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345010 | - |
dc.description.abstract | Recent progress in visual tracking has greatly improved tracking performance. However, challenges such as occlusion and view change remain obstacles in real-world deployment. A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or require camera calibration. To break through these limitations, we propose a generic multiview tracking (GMT) framework that allows camera movement, while requiring neither a specific object model nor camera calibration. A key innovation in our framework is a cross-camera trajectory prediction network (TPN), which implicitly and dynamically encodes camera geometric relations, and hence addresses missing-target issues such as occlusion. Moreover, during tracking, we assemble information across different cameras to dynamically update a novel collaborative correlation filter (CCF), which is shared among cameras to achieve robustness against view change. The two components are integrated into a correlation filter tracking framework, where features are trained offline using existing single-view tracking datasets. For evaluation, we first contribute a new generic multiview tracking dataset (GMTD) with careful annotations, and then run experiments on the GMTD and CAMPUS datasets. The proposed GMT algorithm shows clear advantages in robustness over state-of-the-art trackers. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Image Processing | - |
dc.subject | correlation filter | - |
dc.subject | Deep learning | - |
dc.subject | multiview | - |
dc.subject | tracking | - |
dc.subject | trajectory | - |
dc.title | Visual Tracking with Multiview Trajectory Prediction | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TIP.2020.3014952 | - |
dc.identifier.scopus | eid_2-s2.0-85090120511 | - |
dc.identifier.volume | 29 | - |
dc.identifier.spage | 8355 | - |
dc.identifier.epage | 8367 | - |
dc.identifier.eissn | 1941-0042 | - |