Conference Paper: Pose for everything: Towards category-agnostic pose estimation
Title | Pose for everything: Towards category-agnostic pose estimation |
---|---|
Authors | Xu, L; Jin, S; Zeng, W; Liu, W; Qian, C; Ouyang, W; Luo, P; Wang, X |
Issue Date | 2022 |
Publisher | Ortra Ltd. |
Citation | European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022 |
Abstract | Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, or vehicles. However, many application scenarios require detecting the poses/keypoints of unseen classes of objects. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of object given only a few samples with keypoint definitions. To achieve this goal, we formulate pose estimation as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture both the interactions among different keypoints and the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset of 100 object categories containing over 20K instances, well designed for developing CAPE algorithms. Experiments show that our method outperforms baseline approaches by a large margin. |
Description | Oral |
Persistent Identifier | http://hdl.handle.net/10722/315798 |
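The abstract casts CAPE as keypoint matching: features sampled at support keypoints are matched against query-image features, with a transformer (KIM) modeling interactions among keypoints. The sketch below illustrates that formulation only; the module name `KeypointMatcher`, the tensor shapes, and the single `nn.TransformerEncoder` standing in for KIM are assumptions for illustration, not the authors' released POMNet implementation.

```python
# Illustrative sketch of the keypoint-matching formulation from the abstract.
# All names and shapes are assumptions; the real POMNet/KIM differs in detail.
import torch
import torch.nn as nn

class KeypointMatcher(nn.Module):
    def __init__(self, feat_dim=256, num_heads=8, num_layers=3):
        super().__init__()
        # Stand-in for the Keypoint Interaction Module (KIM): self-attention
        # over support keypoint tokens captures inter-keypoint interactions.
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True
        )
        self.kim = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, support_kpt_feats, query_feat_map):
        # support_kpt_feats: (B, K, C) features sampled at support keypoints
        # query_feat_map:    (B, C, H, W) backbone features of the query image
        B, K, C = support_kpt_feats.shape
        _, _, H, W = query_feat_map.shape
        kpt_tokens = self.kim(support_kpt_feats)   # (B, K, C)
        query = query_feat_map.flatten(2)          # (B, C, H*W)
        # Dot-product matching: each keypoint token scores every query
        # location, yielding one similarity heatmap per keypoint.
        heatmaps = torch.bmm(kpt_tokens, query)    # (B, K, H*W)
        return heatmaps.view(B, K, H, W)

# Usage: match 17 support-defined keypoints against a 64x64 query feature map.
matcher = KeypointMatcher()
support = torch.randn(2, 17, 256)
query = torch.randn(2, 256, 64, 64)
print(matcher(support, query).shape)  # torch.Size([2, 17, 64, 64])
```

The peak of each heatmap would give the predicted keypoint location; because the keypoints are defined by the support samples rather than a fixed category skeleton, the same module applies to any object class.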
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, L | - |
dc.contributor.author | Jin, S | - |
dc.contributor.author | Zeng, W | - |
dc.contributor.author | Liu, W | - |
dc.contributor.author | Qian, C | - |
dc.contributor.author | Ouyang, W | - |
dc.contributor.author | Luo, P | - |
dc.contributor.author | Wang, X | - |
dc.date.accessioned | 2022-08-19T09:04:38Z | - |
dc.date.available | 2022-08-19T09:04:38Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | European Conference on Computer Vision (Hybrid), Tel Aviv, Israel, October 23-27, 2022. In Proceedings of the European Conference on Computer Vision (ECCV), 2022 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315798 | - |
dc.description | Oral | - |
dc.description.abstract | Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, or vehicles. However, many application scenarios require detecting the poses/keypoints of unseen classes of objects. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of object given only a few samples with keypoint definitions. To achieve this goal, we formulate pose estimation as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture both the interactions among different keypoints and the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset of 100 object categories containing over 20K instances, well designed for developing CAPE algorithms. Experiments show that our method outperforms baseline approaches by a large margin. | - |
dc.language | eng | - |
dc.publisher | Ortra Ltd. | - |
dc.relation.ispartof | Proceedings of the European Conference on Computer Vision (ECCV), 2022 | - |
dc.rights | Proceedings of the European Conference on Computer Vision (ECCV). Copyright © IEEE. | - |
dc.title | Pose for everything: Towards category-agnostic pose estimation | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.doi | 10.48550/arXiv.2207.10387 | - |
dc.identifier.hkuros | 335583 | - |
dc.publisher.place | Israel | - |