Article: The concurrent encoding of viewpoint-invariant and viewpoint-dependent information in visual object recognition

Title: The concurrent encoding of viewpoint-invariant and viewpoint-dependent information in visual object recognition
Authors: Tarr, Michael J.; Hayward, William G.
Keywords: object representation; view-invariance; view-dependence; object constancy
Issue Date: 2017
Citation: Visual Cognition, 2017, p. 1-22
Abstract: © 2017 Informa UK Limited, trading as Taylor & Francis Group. A major theme of Glyn Humphreys’ career was object constancy, defined in his paper with Jane Riddoch (1984, p. 385) as “the ability to recognize that an object has the same structure across changes in its retinal projection.” In this seminal neuropsychological work, they posit that there may be two routes to object constancy: one using local distinctive features and one based on global structure. Much of the work following Humphreys has focused on a similar dichotomy: whether recognition is viewpoint-invariant or viewpoint-dependent. This question has been debated at length but never resolved, in that, under different circumstances, both view-dependent and view-invariant recognition behaviour have been observed. We hypothesize that these inconsistent patterns can be accounted for by object representations that encode information supporting both patterns of recognition performance, whereby the nature of the information expressed during recognition depends upon context. Our present results establish that viewpoint-invariant and viewpoint-dependent information is encoded concurrently, even if one information type is not immediately relevant to the encoding context. In Experiment 1, participants, driven by the discrimination context, concurrently recruited viewpoint-invariant information for some objects and viewpoint-dependent information for others. In Experiments 2 and 3, participants learned the identities of novel objects that could be differentiated on the basis of view-invariant information and displayed viewpoint-invariant behaviour over changes in view; however, when recognizing these now-familiar objects in the context of new objects that were visually similar, participants shifted to displaying viewpoint-dependent behaviour. This viewpoint dependence was related to the particular views that had been seen earlier in the experiment, even though at that time recognition was viewpoint invariant. Our results indicate that object representations are neither viewpoint-dependent nor viewpoint-invariant, but rather encode multiple kinds of information that are deployed in a flexible manner appropriate to context and task.
Persistent Identifier: http://hdl.handle.net/10722/244032
ISSN: 1350-6285
2020 Impact Factor: 1.875
2020 SCImago Journal Rankings: 0.797
ISI Accession Number ID: WOS:000423979800009

 

DC Field: Value
dc.contributor.author: Tarr, Michael J.
dc.contributor.author: Hayward, William G.
dc.date.accessioned: 2017-08-31T02:29:27Z
dc.date.available: 2017-08-31T02:29:27Z
dc.date.issued: 2017
dc.identifier.citation: Visual Cognition, 2017, p. 1-22
dc.identifier.issn: 1350-6285
dc.identifier.uri: http://hdl.handle.net/10722/244032
dc.language: eng
dc.relation.ispartof: Visual Cognition
dc.subject: object representation
dc.subject: view-invariance
dc.subject: view-dependence
dc.subject: Object constancy
dc.title: The concurrent encoding of viewpoint-invariant and viewpoint-dependent information in visual object recognition
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1080/13506285.2017.1324933
dc.identifier.scopus: eid_2-s2.0-85022051580
dc.identifier.spage: 1
dc.identifier.epage: 22
dc.identifier.eissn: 1464-0716
dc.identifier.isi: WOS:000423979800009
dc.identifier.issnl: 1350-6285
