Conference Paper: Modeling 3D Shapes by Reinforcement Learning

Title: Modeling 3D Shapes by Reinforcement Learning
Authors: Lin, C; Fan, TT; Wang, WP; Nießner, M
Issue Date: 2020
Publisher: Springer. The Proceedings' web site is located at https://link.springer.com/conference/eccv
Citation: Proceedings of the 16th European Conference on Computer Vision (ECCV), Online, Glasgow, UK, 23-28 August 2020, pt X, p. 545-561
Abstract: We explore how to enable machines to model 3D shapes like human modelers using deep reinforcement learning (RL). In 3D modeling software like Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To effectively train the modeling agents, we introduce a novel training algorithm that combines heuristic policy, imitation learning and reinforcement learning. Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models, which demonstrates the feasibility and effectiveness of the proposed RL framework.
Persistent Identifier: http://hdl.handle.net/10722/289180
ISBN: 9783030586065
Series/Report no.: Lecture Notes in Computer Science (LNCS), v. 12355
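
Note: the abstract describes a two-step pipeline (primitive approximation, then mesh editing) trained with a mix of a heuristic policy, imitation learning and reinforcement learning. The sketch below is a rough, hypothetical illustration of how the first stage of such a framework could be set up; it is not the authors' code, and the toy voxel environment, IoU-based reward, box-primitive action space, heuristic demonstrator, network and hyperparameters are all assumptions made for the example.

    # Illustrative sketch only (assumed design, NOT the paper's implementation):
    # a policy proposes box primitives to cover a voxelized target shape, is first
    # trained by imitating a simple heuristic demonstrator, and is then fine-tuned
    # with a REINFORCE-style update on an IoU-improvement reward.
    import numpy as np
    import torch
    import torch.nn as nn

    GRID = 16  # voxel resolution of the target shape (assumption)

    class ToyShapeEnv:
        """Toy environment: state = (canvas, target) voxels; reward = IoU gain."""
        def __init__(self, target):
            self.target = target.astype(bool)
            self.canvas = np.zeros_like(self.target)

        def iou(self):
            inter = np.logical_and(self.canvas, self.target).sum()
            union = np.logical_or(self.canvas, self.target).sum()
            return float(inter) / max(float(union), 1.0)

        def step_primitive(self, box):
            """Stamp an axis-aligned box (x0, y0, z0, x1, y1, z1); reward = IoU improvement."""
            before = self.iou()
            x0, y0, z0, x1, y1, z1 = np.clip(box, 0, GRID - 1)
            self.canvas[min(x0, x1):max(x0, x1) + 1,
                        min(y0, y1):max(y0, y1) + 1,
                        min(z0, z1):max(z0, z1) + 1] = True
            return self.iou() - before

    class PrimitivePolicy(nn.Module):
        """Tiny policy network mapping the flattened state to 6 box parameters."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * GRID ** 3, 128), nn.ReLU(),
                                     nn.Linear(128, 6))

        def forward(self, state):
            return torch.sigmoid(self.net(state)) * (GRID - 1)

    def heuristic_box(env):
        """Heuristic demonstrator: bounding box of the not-yet-covered target voxels."""
        remaining = np.logical_and(env.target, np.logical_not(env.canvas))
        if not remaining.any():
            return np.zeros(6, dtype=np.float32)
        idx = np.argwhere(remaining)
        return np.concatenate([idx.min(axis=0), idx.max(axis=0)]).astype(np.float32)

    def state_tensor(env):
        s = np.concatenate([env.canvas.ravel(), env.target.ravel()]).astype(np.float32)
        return torch.from_numpy(s)

    target = np.zeros((GRID,) * 3)
    target[4:12, 4:12, 4:12] = 1.0              # toy cube as the target shape
    policy = PrimitivePolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for episode in range(200):
        env = ToyShapeEnv(target)
        imitate = episode < 100                 # phase 1: clone the heuristic; phase 2: RL
        for _ in range(4):                      # place a few primitives per episode
            state = state_tensor(env)
            demo = torch.from_numpy(heuristic_box(env))
            mean = policy(state)
            dist = torch.distributions.Normal(mean, 1.0)
            action = dist.sample()
            reward = env.step_primitive(action.detach().numpy().round().astype(int))
            if imitate:
                loss = ((mean - demo) ** 2).mean()                 # behaviour cloning
            else:
                loss = -(dist.log_prob(action).sum() * reward)     # REINFORCE
            opt.zero_grad()
            loss.backward()
            opt.step()

    # A second, analogous agent would then edit the vertices of the placed primitives
    # (the mesh-editing stage described in the abstract); it is omitted here for brevity.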

 

DC Field | Value | Language
dc.contributor.author | Lin, C | -
dc.contributor.author | Fan, TT | -
dc.contributor.author | Wang, WP | -
dc.contributor.author | Nießner, M | -
dc.date.accessioned | 2020-10-22T08:08:58Z | -
dc.date.available | 2020-10-22T08:08:58Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of the 16th European Conference on Computer Vision (ECCV), Online, Glasgow, UK, 23-28 August 2020, pt X, p. 545-561 | -
dc.identifier.isbn | 9783030586065 | -
dc.identifier.uri | http://hdl.handle.net/10722/289180 | -
dc.description.abstract | We explore how to enable machines to model 3D shapes like human modelers using deep reinforcement learning (RL). In 3D modeling software like Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To effectively train the modeling agents, we introduce a novel training algorithm that combines heuristic policy, imitation learning and reinforcement learning. Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models, which demonstrates the feasibility and effectiveness of the proposed RL framework. | -
dc.language | eng | -
dc.publisher | Springer. The Proceedings' web site is located at https://link.springer.com/conference/eccv | -
dc.relation.ispartof | Proceedings of the European Conference on Computer Vision (ECCV) | -
dc.relation.ispartofseries | Lecture Notes in Computer Science (LNCS), v. 12355 | -
dc.title | Modeling 3D Shapes by Reinforcement Learning | -
dc.type | Conference_Paper | -
dc.identifier.email | Wang, WP: wenping@cs.hku.hk | -
dc.identifier.authority | Wang, WP=rp00186 | -
dc.identifier.doi | 10.1007/978-3-030-58607-2_32 | -
dc.identifier.scopus | eid_2-s2.0-85097386074 | -
dc.identifier.hkuros | 317158 | -
dc.identifier.volume | X | -
dc.identifier.spage | 545 | -
dc.identifier.epage | 561 | -
dc.publisher.place | Cham | -
