Article: Scanpath modeling and classification with Hidden Markov Models
Title | Scanpath modeling and classification with Hidden Markov Models |
---|---|
Authors | Coutrot, A; Hsiao, JHW; Chan, AB |
Keywords | Classification Eye movements Hidden Markov models Machine-learning Scanpath Toolbox |
Issue Date | 2018 |
Publisher | Springer Verlag, co-published with Psychonomic Society. The Journal's web site is located at http://brm.psychonomic-journals.org/ |
Citation | Behavior Research Methods, 2018, v. 50 n. 1, p. 362-379 How to Cite? |
Abstract | How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for the simple quantification of gaze behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement. |
Persistent Identifier | http://hdl.handle.net/10722/244722 |
ISSN | 1554-351X (2023 Impact Factor: 4.6; 2023 SCImago Journal Rank: 2.396) |
ISI Accession Number ID | WOS:000424922400024 |
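The abstract describes a pipeline in which per-class HMMs model scanpaths and a discriminant analysis separates the classes. The following is a minimal pure-NumPy sketch of the likelihood half of that idea only; it is not the authors' SMAC with HMM Matlab toolbox (which learns the HMMs variationally and classifies with DA). Hidden states are treated as Gaussian regions of interest (ROIs), a fixation sequence is scored with the forward algorithm under each class's HMM, and the higher log-likelihood wins. The ROI centres, transition matrices, and the synthetic scanpath are illustrative assumptions.

```python
import numpy as np

def logsumexp(a, axis=-1):
    """Numerically stable log-sum-exp along an axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def log_gauss(x, mu, var):
    """Isotropic 2-D Gaussian log-density for each fixation in x (shape (T, 2))."""
    d = x.shape[1]
    return -0.5 * (d * np.log(2 * np.pi * var) + ((x - mu) ** 2).sum(axis=1) / var)

def scanpath_loglik(x, pi, A, mus, var):
    """Forward-algorithm log-likelihood of a fixation sequence under an HMM
    whose hidden states are Gaussian ROIs (prior pi, transitions A)."""
    T, K = x.shape[0], len(pi)
    logB = np.stack([log_gauss(x, mus[k], var) for k in range(K)], axis=1)  # (T, K)
    log_alpha = np.log(pi) + logB[0]
    logA = np.log(A)
    for t in range(1, T):
        # marginalise over the previous state, then add the emission term
        log_alpha = logB[t] + logsumexp(log_alpha[:, None] + logA, axis=0)
    return logsumexp(log_alpha, axis=-1)

# Two hypothetical class models sharing the same ROIs but differing in dynamics:
# class 1 dwells within an ROI, class 2 switches ROIs often.
pi = np.array([0.5, 0.5])
A1 = np.array([[0.9, 0.1], [0.1, 0.9]])            # sticky transitions
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])            # frequent switching
mus = [np.array([0.3, 0.5]), np.array([0.7, 0.5])]  # ROI centres, normalised coords
var = 0.01

# Synthetic scanpath sampled from the class-1 model (illustrative data only)
rng = np.random.default_rng(0)
states = [0]
for _ in range(49):
    states.append(states[-1] if rng.random() < 0.9 else 1 - states[-1])
x = np.array([mus[s] + rng.normal(0.0, np.sqrt(var), 2) for s in states])

ll1 = scanpath_loglik(x, pi, A1, mus, var)
ll2 = scanpath_loglik(x, pi, A2, mus, var)
label = "class 1" if ll1 > ll2 else "class 2"
```

Because both models share the same ROIs and prior, the decision here rests entirely on the transition dynamics, which is the kind of individualistic, dynamic structure the abstract says HMMs capture and simpler aggregate gaze descriptors miss.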
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Coutrot, A | - |
dc.contributor.author | Hsiao, JHW | - |
dc.contributor.author | Chan, AB | - |
dc.date.accessioned | 2017-09-18T01:57:50Z | - |
dc.date.available | 2017-09-18T01:57:50Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Behavior Research Methods, 2018, v. 50 n. 1, p. 362-379 | - |
dc.identifier.issn | 1554-351X | - |
dc.identifier.uri | http://hdl.handle.net/10722/244722 | - |
dc.description.abstract | How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for the simple quantification of gaze behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement. | - |
dc.language | eng | - |
dc.publisher | Springer Verlag, co-published with Psychonomic Society. The Journal's web site is located at http://brm.psychonomic-journals.org/ | - |
dc.relation.ispartof | Behavior Research Methods | - |
dc.rights | The final publication is available at Springer via http://dx.doi.org/10.3758/s13428-017-0876-8 | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Classification | - |
dc.subject | Eye movements | - |
dc.subject | Hidden Markov models | - |
dc.subject | Machine-learning | - |
dc.subject | Scanpath | - |
dc.subject | Toolbox | - |
dc.title | Scanpath modeling and classification with Hidden Markov Models | - |
dc.type | Article | - |
dc.identifier.email | Hsiao, JHW: jhsiao@hku.hk | - |
dc.identifier.authority | Hsiao, JHW=rp00632 | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.3758/s13428-017-0876-8 | - |
dc.identifier.scopus | eid_2-s2.0-85017464983 | - |
dc.identifier.hkuros | 276069 | - |
dc.identifier.volume | 50 | - |
dc.identifier.issue | 1 | - |
dc.identifier.spage | 362 | - |
dc.identifier.epage | 379 | - |
dc.identifier.isi | WOS:000424922400024 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 1554-351X | - |