
Conference Paper: Pose for everything: Towards category-agnostic pose estimation

Title: Pose for everything: Towards category-agnostic pose estimation
Authors: Xu, L; Jin, S; Zeng, W; Liu, W; Qian, C; Ouyang, W; Luo, P; Wang, X
Issue Date: 2022
Publisher: Ortra Ltd.
Citation: European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022
Abstract: Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, and vehicles. However, many application scenarios require detecting the poses/keypoints of unseen classes of objects. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of object given only a few samples with keypoint definitions. To achieve this goal, we formulate the pose estimation problem as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture both the interactions among different keypoints and the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset of 100 object categories containing over 20K instances that is well suited for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches by a large margin.
Description: Oral
Persistent Identifier: http://hdl.handle.net/10722/315798
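
The keypoint-matching formulation described in the abstract can be sketched very roughly as correlating each support keypoint's feature vector with every spatial location of the query image's feature map and taking the best-matching location. All names, shapes, and the dot-product similarity below are illustrative assumptions for this sketch, not details taken from the paper (which uses a learned transformer-based matching head):

```python
import numpy as np

def match_keypoints(support_feats, query_featmap):
    """Toy sketch of pose estimation as keypoint matching.

    support_feats: (K, C) array, one feature vector per support keypoint.
    query_featmap: (C, H, W) array, feature map of the query image.
    Returns a (K, 2) array of predicted (row, col) locations, one per keypoint.
    """
    C, H, W = query_featmap.shape
    q = query_featmap.reshape(C, H * W)   # flatten spatial grid: (C, H*W)
    sims = support_feats @ q              # (K, H*W) similarity heatmap per keypoint
    idx = sims.argmax(axis=1)             # highest-scoring location for each keypoint
    return np.stack([idx // W, idx % W], axis=1)
```

In the actual POMNet framework the interactions among keypoints and between support and query features are modeled by the transformer-based KIM rather than a raw dot product; this sketch only illustrates the overall matching idea.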

 

DC Field: Value
dc.contributor.author: Xu, L
dc.contributor.author: Jin, S
dc.contributor.author: Zeng, W
dc.contributor.author: Liu, W
dc.contributor.author: Qian, C
dc.contributor.author: Ouyang, W
dc.contributor.author: Luo, P
dc.contributor.author: Wang, X
dc.date.accessioned: 2022-08-19T09:04:38Z
dc.date.available: 2022-08-19T09:04:38Z
dc.date.issued: 2022
dc.identifier.citation: European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022
dc.identifier.uri: http://hdl.handle.net/10722/315798
dc.description: Oral
dc.description.abstract: Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, and vehicles. However, many application scenarios require detecting the poses/keypoints of unseen classes of objects. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of object given only a few samples with keypoint definitions. To achieve this goal, we formulate the pose estimation problem as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture both the interactions among different keypoints and the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset of 100 object categories containing over 20K instances that is well suited for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches by a large margin.
dc.language: eng
dc.publisher: Ortra Ltd.
dc.relation.ispartof: Proceedings of the European Conference on Computer Vision (ECCV), 2022
dc.rights: Proceedings of the European Conference on Computer Vision (ECCV). Copyright © IEEE.
dc.title: Pose for everything: Towards category-agnostic pose estimation
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.doi: 10.48550/arXiv.2207.10387
dc.identifier.hkuros: 335583
dc.publisher.place: Israel
