
Conference Paper: Dynamic visual reasoning by learning differentiable physics models from video and language

Title: Dynamic visual reasoning by learning differentiable physics models from video and language
Authors: Ding, M; Chen, Z; Du, T; Luo, P; Tenenbaum, J; Gan, C
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 887-899
Abstract: In this work, we propose a unified framework, called Visual Reasoning with Differentiable Physics (VRDP), that can jointly learn visual concepts and infer physics models of objects and their interactions from videos and language. This is achieved by seamlessly integrating three components: a visual perception module, a concept learner, and a differentiable physics engine. The visual perception module parses each video frame into object-centric trajectories and represents them as latent scene representations. The concept learner grounds visual concepts (e.g., color, shape, and material) from these object-centric representations based on the language, thus providing prior knowledge for the physics engine. The differentiable physics model, implemented as an impulse-based differentiable rigid-body simulator, performs differentiable physical simulation based on the grounded concepts to infer physical properties, such as mass, restitution, and velocity, by fitting the simulated trajectories to the video observations. Consequently, these learned concepts and physical models can explain what we have seen and imagine what is about to happen in future and counterfactual scenarios. Integrating differentiable physics into the dynamic reasoning framework offers several appealing benefits. More accurate dynamics prediction in learned physics models enables state-of-the-art performance on both synthetic and real-world benchmarks while still maintaining high transparency and interpretability; most notably, VRDP improves the accuracy of predictive and counterfactual questions by 4.5% and 11.5% compared to its best counterpart. VRDP is also highly data-efficient: physical parameters can be optimized from very few videos, and even a single video can be sufficient. Finally, with all physical parameters inferred, VRDP can quickly learn new concepts from few examples.
Persistent Identifier: http://hdl.handle.net/10722/315680
ISBN: 9781713845393
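The fitting procedure the abstract describes, inferring physical parameters by matching simulated trajectories to observations, can be illustrated with a toy sketch (not the authors' implementation): a 1D impulse-based bouncing ball whose initial velocity and restitution are recovered by gradient descent on trajectory error. Central finite differences stand in here for the analytic gradients a differentiable engine would supply; all names and constants are illustrative.

```python
def simulate(v, e, n_steps=50, dt=0.05, wall=1.0):
    """Impulse-style 1D simulation: a point mass slides toward a wall at
    x = wall; on impact its velocity is reflected and scaled by the
    restitution coefficient e."""
    x, traj = 0.0, []
    for _ in range(n_steps):
        x += v * dt
        if x > wall:                      # collision: reflect and damp
            x = wall - e * (x - wall)
            v = -e * v
        traj.append(x)
    return traj

def loss(params, observed):
    """Mean squared error between simulated and observed trajectories."""
    sim = simulate(*params)
    return sum((s - o) ** 2 for s, o in zip(sim, observed)) / len(observed)

def fit(observed, params=(0.5, 0.9), lr=0.02, iters=2000, h=1e-5):
    """Recover (velocity, restitution) by gradient descent; finite
    differences approximate the gradients a differentiable simulator
    would provide analytically."""
    params = list(params)
    for _ in range(iters):
        grads = []
        for i in range(len(params)):
            hi, lo = params.copy(), params.copy()
            hi[i] += h
            lo[i] -= h
            grads.append((loss(hi, observed) - loss(lo, observed)) / (2 * h))
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

observed = simulate(0.8, 0.5)   # stand-in for a trajectory extracted from video
v_hat, e_hat = fit(observed)
```

In VRDP the same optimization runs through an impulse-based rigid-body simulator in 3D, with trajectories supplied by the perception module rather than generated synthetically.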

 

DC Field: Value
dc.contributor.author: Ding, M
dc.contributor.author: Chen, Z
dc.contributor.author: Du, T
dc.contributor.author: Luo, P
dc.contributor.author: Tenenbaum, J
dc.contributor.author: Gan, C
dc.date.accessioned: 2022-08-19T09:02:26Z
dc.date.available: 2022-08-19T09:02:26Z
dc.date.issued: 2021
dc.identifier.citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 887-899
dc.identifier.isbn: 9781713845393
dc.identifier.uri: http://hdl.handle.net/10722/315680
dc.language: eng
dc.publisher: Neural Information Processing Systems Foundation
dc.relation.ispartof: Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
dc.title: Dynamic visual reasoning by learning differentiable physics models from video and language
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.hkuros: 335591
dc.identifier.volume: 34
dc.identifier.spage: 887
dc.identifier.epage: 899
dc.publisher.place: United States
