Article: Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

Title: Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search
Authors: Chan, LKH; Hayward, WG
Keywords: dimension-based attention; feature integration theory; salience map; visual search
Issue Date: 2009
Publisher: American Psychological Association. The Journal's web site is located at http://www.apa.org/journals/xhp.html
Citation: Journal of Experimental Psychology: Human Perception and Performance, 2009, v. 35 n. 1, p. 119-132
Abstract: In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. © 2009 American Psychological Association.
Persistent Identifier: http://hdl.handle.net/10722/60772
ISSN: 0096-1523
2023 Impact Factor: 2.1
2023 SCImago Journal Rankings: 1.034
ISI Accession Number ID: WOS:000262838300010
Funding Agency / Grant Number:
University of Hong Kong (HKU)
Hong Kong Research Grants Council: HKU 7649/06H

Funding Information:
This research was supported by a University of Hong Kong (HKU) Postgraduate Fellowship to Louis K. H. Chan and by Hong Kong Research Grants Council Grant HKU 7649/06H to William G. Hayward. We would like to thank Hermann J. Müller and Joseph Krummenacher for their helpful comments.
DC Field: Value (Language)

dc.contributor.author: Chan, LKH (en_HK)
dc.contributor.author: Hayward, WG (en_HK)
dc.date.accessioned: 2010-05-31T04:18:09Z
dc.date.available: 2010-05-31T04:18:09Z
dc.date.issued: 2009 (en_HK)
dc.identifier.citation: Journal of Experimental Psychology: Human Perception and Performance, 2009, v. 35 n. 1, p. 119-132 (en_HK)
dc.identifier.issn: 0096-1523 (en_HK)
dc.identifier.uri: http://hdl.handle.net/10722/60772
dc.language: eng (en_HK)
dc.publisher: American Psychological Association. The Journal's web site is located at http://www.apa.org/journals/xhp.html (en_HK)
dc.relation.ispartof: Journal of Experimental Psychology: Human Perception and Performance (en_HK)
dc.rights: Journal of Experimental Psychology: Human Perception and Performance. Copyright © American Psychological Association. (en_HK)
dc.subject: dimension-based attention (en_HK)
dc.subject: feature integration theory (en_HK)
dc.subject: salience map (en_HK)
dc.subject: visual search (en_HK)
dc.title: Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search (en_HK)
dc.type: Article (en_HK)
dc.identifier.openurl: http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=0096-1523&volume=35&spage=119&epage=132&date=2009&atitle=Feature+Integration+Theory+revisited:+Dissociating+feature+detection+and+attentional+guidance+in+visual+search (en_HK)
dc.identifier.email: Chan, LKH: clouis@graduate.hku.hk (en_HK)
dc.identifier.email: Hayward, WG: whayward@hkucc.hku.hk (en_HK)
dc.identifier.authority: Chan, LKH=rp00851 (en_HK)
dc.identifier.authority: Hayward, WG=rp00630 (en_HK)
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1037/0096-1523.35.1.119 (en_HK)
dc.identifier.pmid: 19170475
dc.identifier.scopus: eid_2-s2.0-60349090452 (en_HK)
dc.identifier.hkuros: 156393 (en_HK)
dc.relation.references: http://www.scopus.com/mlt/select.url?eid=2-s2.0-60349090452&selection=ref&src=s&origin=recordpage (en_HK)
dc.identifier.volume: 35 (en_HK)
dc.identifier.issue: 1 (en_HK)
dc.identifier.spage: 119 (en_HK)
dc.identifier.epage: 132 (en_HK)
dc.identifier.eissn: 1939-1277
dc.identifier.isi: WOS:000262838300010
dc.publisher.place: United States (en_HK)
dc.identifier.scopusauthorid: Chan, LKH=37039134300 (en_HK)
dc.identifier.scopusauthorid: Hayward, WG=7006352956 (en_HK)
dc.identifier.citeulike: 4206759
dc.identifier.issnl: 0096-1523
