
Conference Paper: Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning

Title: Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning
Authors: Chen, Z; Mao, J; Wu, J; Wong, KKY; Tenenbaum, JB; Gan, C
Keywords: Concept Learning; Neuro-Symbolic Learning; Video Reasoning; Visual Reasoning
Issue Date: 2021
Citation: The 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, 3-7 May 2021
Abstract: We study the problem of dynamic visual reasoning on raw videos. This is a challenging problem; currently, state-of-the-art models often require dense supervision on physical object properties and events from simulation, which are impractical to obtain in real life. In this paper, we present the Dynamic Concept Learner (DCL), a unified framework that grounds physical objects and events from video and language. DCL first adopts a trajectory extractor to track each object over time and to represent it as a latent, object-centric feature vector. Building upon this object-centric representation, DCL learns to approximate the dynamic interaction among objects using graph networks. DCL further incorporates a semantic parser to parse questions into semantic programs and, finally, a program executor to run the program to answer the question, leveraging the learned dynamics model. After training, DCL can detect and associate objects across frames, ground visual properties and physical events, understand the causal relationship between events, make future and counterfactual predictions, and leverage these extracted representations for answering queries. DCL achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulations for training. We further test DCL on a newly proposed video-retrieval and event-localization dataset derived from CLEVRER, showing its strong generalization capacity.
Description: Poster Presentation
Persistent Identifier: http://hdl.handle.net/10722/301145
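The abstract describes a four-stage pipeline: trajectory extraction, dynamics prediction, question parsing, and program execution. The control flow of those stages can be sketched as below; all names and the toy logic are illustrative stand-ins (the paper's actual components are learned neural modules, not these hand-written functions).

```python
def extract_trajectories(video_frames):
    """Stage 1: track each object over time (toy stand-in: collect one
    2-D position per object per frame as its 'feature vector')."""
    trajectories = {}
    for frame in video_frames:
        for obj_id, position in frame.items():
            trajectories.setdefault(obj_id, []).append(position)
    return trajectories

def predict_dynamics(trajectories):
    """Stage 2: approximate object interactions (a graph network in the
    paper; here, linear extrapolation of each object's last velocity)."""
    predictions = {}
    for obj_id, traj in trajectories.items():
        (x0, y0), (x1, y1) = traj[-2], traj[-1]
        predictions[obj_id] = (2 * x1 - x0, 2 * y1 - y0)
    return predictions

def parse_question(question):
    """Stage 3: semantic parser mapping a question to a program
    (toy lookup in place of a learned parser)."""
    if question == "How many objects are there?":
        return [("count_objects",)]
    raise ValueError("question not covered by this sketch")

def execute_program(program, trajectories, predictions):
    """Stage 4: run the program over the grounded representations."""
    answer = None
    for op, *args in program:
        if op == "count_objects":
            answer = len(trajectories)
    return answer

# Usage: two frames, two tracked objects moving toward each other.
frames = [{"a": (0, 0), "b": (4, 0)}, {"a": (1, 0), "b": (3, 0)}]
trajs = extract_trajectories(frames)
preds = predict_dynamics(trajs)
print(execute_program(parse_question("How many objects are there?"), trajs, preds))
```

The point of the sketch is the separation of concerns the paper relies on: later stages consume only the object-centric representations from earlier ones, which is what lets DCL answer predictive and counterfactual questions by swapping observed trajectories for predicted ones.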

 

Dublin Core Metadata

dc.contributor.author: Chen, Z
dc.contributor.author: Mao, J
dc.contributor.author: Wu, J
dc.contributor.author: Wong, KKY
dc.contributor.author: Tenenbaum, JB
dc.contributor.author: Gan, C
dc.date.accessioned: 2021-07-27T08:06:47Z
dc.date.available: 2021-07-27T08:06:47Z
dc.date.issued: 2021
dc.identifier.citation: The 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, 3-7 May 2021
dc.identifier.uri: http://hdl.handle.net/10722/301145
dc.description: Poster Presentation
dc.description.abstract: We study the problem of dynamic visual reasoning on raw videos. This is a challenging problem; currently, state-of-the-art models often require dense supervision on physical object properties and events from simulation, which are impractical to obtain in real life. In this paper, we present the Dynamic Concept Learner (DCL), a unified framework that grounds physical objects and events from video and language. DCL first adopts a trajectory extractor to track each object over time and to represent it as a latent, object-centric feature vector. Building upon this object-centric representation, DCL learns to approximate the dynamic interaction among objects using graph networks. DCL further incorporates a semantic parser to parse questions into semantic programs and, finally, a program executor to run the program to answer the question, leveraging the learned dynamics model. After training, DCL can detect and associate objects across frames, ground visual properties and physical events, understand the causal relationship between events, make future and counterfactual predictions, and leverage these extracted representations for answering queries. DCL achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulations for training. We further test DCL on a newly proposed video-retrieval and event-localization dataset derived from CLEVRER, showing its strong generalization capacity.
dc.language: eng
dc.relation.ispartof: International Conference on Learning Representations (ICLR) 2021
dc.subject: Concept Learning
dc.subject: Neuro-Symbolic Learning
dc.subject: Video Reasoning
dc.subject: Visual Reasoning
dc.title: Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning
dc.type: Conference_Paper
dc.identifier.email: Wong, KKY: kykwong@cs.hku.hk
dc.identifier.authority: Wong, KKY=rp01393
dc.identifier.hkuros: 323466
dc.publisher.place: Vienna, Austria
