Links for fulltext (may require subscription):
- Publisher Website: 10.1109/CVPR52733.2024.00075
- Scopus: eid_2-s2.0-85203817500
Citations: Scopus: 0

Appears in Collections: Conference Paper: PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios
| Field | Value |
|---|---|
| Title | PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios |
| Authors | Wang, Jingbo; Luo, Zhengyi; Yuan, Ye; Li, Yixuan; Dai, Bo |
| Issue Date | 2024 |
| Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2024, p. 718-728 |
| Abstract | We address the challenge of content diversity and controllability in pedestrian simulation for driving scenarios. Recent pedestrian animation frameworks have a significant limitation: they primarily focus on either following a given trajectory [48] or the content of a reference video [60], consequently overlooking the potential diversity of human motion within such scenarios. This limitation restricts the ability to generate pedestrian behaviors that exhibit a wider range of variations and realistic motions, and therefore restricts their usage in providing rich motion content for other components of the driving simulation system, e.g., suddenly changed motion to which the autonomous vehicle should respond. In our approach, we strive to surpass this limitation by showcasing diverse human motions obtained from various sources, such as generated human motions, in addition to following the given trajectory. The fundamental contribution of our framework lies in combining the motion tracking task with trajectory following, which enables the tracking of specific motion parts (e.g., the upper body) while simultaneously following the given trajectory with a single policy. In this way, we significantly enhance both the diversity of simulated human motion within the given scenario and the controllability of the content, including language-based control. Our framework facilitates the generation of a wide range of human motions, contributing to greater realism and adaptability in pedestrian simulations for driving scenarios. |
| Persistent Identifier | http://hdl.handle.net/10722/352474 |
| ISSN | 1063-6919 |
| 2023 SCImago Journal Rankings | 10.331 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Jingbo | - |
dc.contributor.author | Luo, Zhengyi | - |
dc.contributor.author | Yuan, Ye | - |
dc.contributor.author | Li, Yixuan | - |
dc.contributor.author | Dai, Bo | - |
dc.date.accessioned | 2024-12-16T03:59:17Z | - |
dc.date.available | 2024-12-16T03:59:17Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2024, p. 718-728 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352474 | - |
dc.description.abstract | We address the challenge of content diversity and controllability in pedestrian simulation for driving scenarios. Recent pedestrian animation frameworks have a significant limitation: they primarily focus on either following a given trajectory [48] or the content of a reference video [60], consequently overlooking the potential diversity of human motion within such scenarios. This limitation restricts the ability to generate pedestrian behaviors that exhibit a wider range of variations and realistic motions, and therefore restricts their usage in providing rich motion content for other components of the driving simulation system, e.g., suddenly changed motion to which the autonomous vehicle should respond. In our approach, we strive to surpass this limitation by showcasing diverse human motions obtained from various sources, such as generated human motions, in addition to following the given trajectory. The fundamental contribution of our framework lies in combining the motion tracking task with trajectory following, which enables the tracking of specific motion parts (e.g., the upper body) while simultaneously following the given trajectory with a single policy. In this way, we significantly enhance both the diversity of simulated human motion within the given scenario and the controllability of the content, including language-based control. Our framework facilitates the generation of a wide range of human motions, contributing to greater realism and adaptability in pedestrian simulations for driving scenarios. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.title | PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR52733.2024.00075 | - |
dc.identifier.scopus | eid_2-s2.0-85203817500 | - |
dc.identifier.spage | 718 | - |
dc.identifier.epage | 728 | - |