
Conference Paper: DanceTrack: Multi-object tracking in uniform appearance and diverse motion

Title: DanceTrack: Multi-object tracking in uniform appearance and diverse motion
Authors: SUN, P; JIANG, Y; YUAN, Z; Luo, P
Issue Date: 2022
Publisher: IEEE Computer Society
Citation: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, 19-24 June 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 20993-21002
Abstract: A typical pipeline for multi-object tracking (MOT) is to use a detector for object localization, followed by re-identification (re-ID) for object association. This pipeline is motivated partly by recent progress in both object detection and re-ID, and partly by biases in existing tracking datasets, where most objects have distinguishing appearance and re-ID models suffice for establishing associations. In response to such bias, we re-emphasize that methods for multi-object tracking should also work when object appearance is not sufficiently discriminative. To this end, we propose a large-scale dataset for multi-human tracking, in which humans have similar appearance, diverse motion, and extreme articulation. As the dataset contains mostly group dancing videos, we name it 'DanceTrack'. We expect DanceTrack to provide a better platform to develop MOT algorithms that rely less on visual discrimination and more on motion analysis. We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack compared with existing benchmarks.
Persistent Identifier: http://hdl.handle.net/10722/315858
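The detection-then-association pipeline the abstract describes can be sketched as a single association step: given the boxes of existing tracks and the detector's boxes in the new frame, match them by spatial overlap (IoU) rather than appearance. This is a minimal, hypothetical illustration of that idea using greedy IoU matching — not the authors' method and not the trackers benchmarked in the paper.

```python
# Hypothetical sketch of one detect-then-associate MOT step: match existing
# track boxes to new detection boxes by bounding-box IoU (a spatial/motion
# cue) instead of appearance. Not the DanceTrack authors' method.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detection boxes by descending IoU.

    Returns a list of (track_index, detection_index) pairs; tracks and
    detections left unmatched would be handled by the tracker (e.g. track
    termination or new-track spawning).
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

# Two tracks, two detections shifted by a pixel or two: each track should
# pair with the nearby detection, regardless of appearance.
tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(associate(tracks, detections))
```

The dataset's point is precisely that when dancers look alike, appearance embeddings add little, so the quality of this overlap/motion cue (and of the motion model feeding it) dominates tracking accuracy.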


DC Field | Value | Language
dc.contributor.author | SUN, P | -
dc.contributor.author | JIANG, Y | -
dc.contributor.author | YUAN, Z | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T09:05:43Z | -
dc.date.available | 2022-08-19T09:05:43Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, 19-24 June 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 20993-21002 | -
dc.identifier.uri | http://hdl.handle.net/10722/315858 | -
dc.description.abstract | A typical pipeline for multi-object tracking (MOT) is to use a detector for object localization, followed by re-identification (re-ID) for object association. This pipeline is motivated partly by recent progress in both object detection and re-ID, and partly by biases in existing tracking datasets, where most objects have distinguishing appearance and re-ID models suffice for establishing associations. In response to such bias, we re-emphasize that methods for multi-object tracking should also work when object appearance is not sufficiently discriminative. To this end, we propose a large-scale dataset for multi-human tracking, in which humans have similar appearance, diverse motion, and extreme articulation. As the dataset contains mostly group dancing videos, we name it 'DanceTrack'. We expect DanceTrack to provide a better platform to develop MOT algorithms that rely less on visual discrimination and more on motion analysis. We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack compared with existing benchmarks. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society. | -
dc.relation.ispartof | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 | -
dc.rights | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Copyright © IEEE Computer Society. | -
dc.title | DanceTrack: Multi-object tracking in uniform appearance and diverse motion | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335575 | -
dc.identifier.spage | 20993 | -
dc.identifier.epage | 21002 | -
dc.publisher.place | United States | -
