
Proceedings Paper
LSTM-based eye-movement trajectory analysis for reading behavior classification

Format | Member Price | Non-Member Price
PDF | $17.00 | $21.00
Paper Abstract
Studying eye-movement trajectories with an eye tracker can provide insights into the mechanisms that distinguish memory from reasoning. For given text or images, memory- and reasoning-based observations are useful in several key areas of psychology, cognition, and related fields. In this project, an EyeLink 1000 Plus is used to capture fixations, saccades, blinks, and related eye-tracking events for each participant during the trials. Based on each participant's fixation sequence, our trained deep recurrent neural network, an LSTM, classifies whether the participant performs the given text-based task by memorizing it or by inference. The trial sentences follow the syllogism (deductive reasoning) format. Sixty university students (mature readers) participated in both memory-based and reasoning-based trials. Participants were divided into two equal groups: one group performed the memory task first and the other performed the reasoning task first, after which the task order was reversed. From a pool of sixty-one sentences, sixty randomly selected sentences were presented to each participant. The sequential fixation signals of each participant were then processed to obtain the results. The high accuracy of our trained LSTM model in classifying memory-based versus reasoning-based reading demonstrates the significance of this work, which provides a solid base for future eye-movement research and intelligent techniques in AI-backed psychology, healthcare, and neuromarketing. Our trained model offers (a) very high accuracy for memory-versus-reasoning classification, (b) large time savings in learning from the data, and (c) economical and comfortable data recording via the EyeLink 1000 Plus; (d) finally, possible directions for future work are outlined.
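The abstract does not give implementation details, so the following is only a minimal sketch of the kind of model it describes: an LSTM that takes a variable-length fixation sequence from one trial and outputs a memory-vs.-reasoning label. The per-fixation features (x, y, duration), layer sizes, and training setup are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): an LSTM that classifies a padded,
# variable-length fixation sequence as "memory" (0) or "reasoning" (1) reading.
# Assumed per-fixation features: x position, y position, duration (3 values).
import torch
import torch.nn as nn

class FixationLSTM(nn.Module):
    def __init__(self, n_features=3, hidden_size=64, num_layers=1, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x, lengths):
        # x: (batch, max_seq_len, n_features); lengths: true fixation counts per trial
        packed = nn.utils.rnn.pack_padded_sequence(
            x, lengths.cpu(), batch_first=True, enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)
        return self.head(h_n[-1])  # logits computed from the final hidden state

# Toy usage: a batch of two padded fixation sequences
model = FixationLSTM()
x = torch.randn(2, 50, 3)                  # 2 trials, up to 50 fixations, 3 features each
lengths = torch.tensor([50, 37])           # actual number of fixations per trial
logits = model(x, lengths)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1]))  # 0 = memory, 1 = reasoning
loss.backward()
```

Packing the sequences lets the recurrent layer ignore padding, so trials with different numbers of fixations can share one batch; the last hidden state then summarizes the whole trajectory for the binary classification head.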
Paper Details
Date Published: 29 April 2022
PDF: 6 pages
Proc. SPIE 12247, International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2022), 1224715 (29 April 2022); doi: 10.1117/12.2636952
Published in SPIE Proceedings Vol. 12247:
International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2022)
Michael Opoku Agyeman; Seppo Sirkemaa, Editor(s)
Author Affiliations
Ahmad Hassan, Jiangxi Normal Univ. (China)
Wei Fan, Jiangxi Normal Univ. (China)
Xiaoyu Hu, Jiangxi Normal Univ. (China)
Wenhao Wang, Jiangxi Normal Univ. (China)
Hanxi Li, Jiangxi Normal Univ. (China)
© SPIE.
