Usability and Applied Eye-Tracking Publications
All EyeLink eye-tracker usability and applied research publications up to 2025 (including early 2026) are listed below by year. You can search the eye-tracking publications using keywords such as driving, sport, workload, and so on. You can also search for individual author names. If we have missed any EyeLink usability or applied article, please email us!
2025
Yelda Semizer; Ruth Rosenholtz. The effect of background clutter on visual search in video conferencing. Journal Article. In: Cognitive Research: Principles and Implications, vol. 10, no. 1, pp. 1–16, 2025.
Abstract: The use of video conferencing tools has become increasingly common recently. The visual displays in these tools are highly complex, being composed of multiple faces with varying image quality and lighting conditions. On top of this, users have the ability to choose their own backgrounds. Some choose simple artificial backgrounds, some appear in front of a real or simulated room, and some use something more abstract. How do these choices affect the user's ability to use the tool, for example, finding the current speaker or a reaction symbol? Vision science can certainly provide answers to these questions; however, most search studies use simple displays with a uniform background, or more recently, real-world scenes. How does what we know about search generalize to these more complex displays? The current study sought to examine how our understanding of visual search applies to well-controlled video conferencing displays. Specifically, we investigated the effect of display clutter (i.e., background complexity and variability) on perceptual tasks relevant for video conferencing. In an eye-tracking set-up, participants searched either for the speaker whose image was highlighted (Experiment 1) or for a reaction symbol (raised hand) embedded in one of the attendees' backgrounds. Results showed a significant effect of background complexity and variability, suggesting that search performance declined as the display clutter increased. Image-based analysis showed that the choice of backgrounds mediated these effects, suggesting that some virtual backgrounds were not optimal for perceptual processes.
Mergime Ibrahimi; Anu Masso; Mauro Bellone. Sociotechnical imaginaries of autonomous vehicles: Comparing laboratory and online eye-tracking methods. Journal Article. In: PLoS One, vol. 20, pp. 1–24, 2025.
Abstract: This study investigates sociotechnical imaginaries of autonomous vehicles (AVs) using a dual approach: in-lab and online eye-tracking experiments. We examine how cognitive engagement varies across hypothetical decision-making scenarios involving algorithmic failure of AVs, in comparison with non-AV scenarios. This article highlights the characteristics, advantages, and limitations of each method, emphasizing their complementary contributions to understanding how individuals perceive and engage with emerging technologies. The in-lab experiment revealed high-quality and precise data from a homogeneous sample, while the online experiment enabled us to scale the research and explore diverse sociotechnical imaginaries from a global sample through crowd-sourced platforms. Key findings show that both in-lab and online participants exhibited longer gaze durations at one point, predominantly longer in AV scenarios. However, a deeper analysis of overall cognitive engagement revealed that in-lab participants, with more concentrated sociotechnical imaginaries, were more focused on non-AV scenarios, indicating a stronger emphasis on human decision-making. In contrast, online participants, whose imaginaries may be shaped by global perspectives and diverse experiences with data and algorithms, displayed increased attention toward AV scenarios, with significant visual variations among participants, reflecting global interest or concern over high-stakes algorithmic decisions. These findings contribute to our understanding of how perception of AVs differs globally and offer insights into emerging concerns around algorithmic decision-making in everyday life.
Andi Wang. The integration of auditory and textual input in vocabulary learning from subtitled viewing: An eye-tracking study. Journal Article. In: Language Learning & Technology, vol. 29, no. 3, pp. 70–91, 2025.
Abstract: Numerous studies have documented the benefits of watching audio-visual materials with on-screen text for L2 vocabulary learning (Montero Perez, 2022). The provision of both auditory and textual input allows learners to link auditory and written forms (or L1 meanings) of unknown words during viewing, which could potentially facilitate vocabulary learning. However, little is known about the dynamics of text-audio synchrony in subtitled viewing and how the processing of written words in relation to the audio may lead to vocabulary learning. Eighty-one intermediate-to-advanced Chinese learners of English watched an English documentary with one of three on-screen texts (i.e., captions, L1 subtitles, and bilingual subtitles), while their eye movements were monitored. Participants' awareness of 17 unknown words and vocabulary learning gains were assessed via stimulated recalls and three vocabulary tests. Results revealed that captions facilitated text-audio synchronisation, whereas L1 subtitles generally led to reading ahead and skipping. Bilingual subtitles enabled synchronisation of L1 translations with L2 audio but often resulted in skipping L2 forms. Most text-audio processing behaviours led to moderate predicted probabilities of vocabulary learning and participants' reported awareness, with no significant within-group difference, except for the processing of L2 unknown words in bilingual subtitles.
Paweł Cybulski. Predicting cartographic symbol location with eye-tracking data and machine learning approach. Journal Article. In: Journal of Eye Movement Research, vol. 18, no. 4, pp. 1–75, 2025.
Abstract: Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, were extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces.
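The pipeline Cybulski describes (standardized eye-movement features, polynomial interaction terms, a boosted-tree classifier evaluated with ROC-AUC) can be sketched as follows. This is an illustrative reconstruction, not the study's code: the feature names, the simulated data, and the choice of Gradient Boosting over the eleven other tested algorithms are all assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical features per trial: fixation duration, saccade amplitude,
# horizontal and vertical fixation standard deviation (gaze dispersion).
X = rng.normal(size=(n, 4))
# Simulated label: peripheral targets (1) tend to raise vertical dispersion,
# mirroring the paper's finding that vertical dispersion was most predictive.
y = (X[:, 3] + 0.5 * rng.normal(size=n) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),                 # standardize features
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    GradientBoostingClassifier(random_state=0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(round(scores.mean(), 3))
```

With real data, the same `cross_val_score` call could be repeated per classifier to reproduce the twelve-model comparison.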
Sahar Moradizeyveh; Ambreen Hanif; Sidong Liu; Yuankai Qi; Amin Beheshti; Antonio Di Ieva. Eye-guided multimodal fusion: Toward an adaptive learning framework using explainable artificial intelligence. Journal Article. In: Sensors, vol. 25, no. 15, pp. 1–15, 2025.
Abstract: Interpreting diagnostic imaging and identifying clinically relevant features remain challenging tasks, particularly for novice radiologists who often lack structured guidance and expert feedback. To bridge this gap, we propose an Eye-Gaze Guided Multimodal Fusion framework that leverages expert eye-tracking data to enhance learning and decision-making in medical image interpretation. By integrating chest X-ray (CXR) images with expert fixation maps, our approach captures radiologists' visual attention patterns and highlights regions of interest (ROIs) critical for accurate diagnosis. The fusion model utilizes a shared backbone architecture to jointly process image and gaze modalities, thereby minimizing the impact of noise in fixation data. We validate the system's interpretability using Gradient-weighted Class Activation Mapping (Grad-CAM) and assess both classification performance and explanation alignment with expert annotations. Comprehensive evaluations, including robustness under gaze noise and expert clinical review, demonstrate the framework's effectiveness in improving model reliability and interpretability. This work offers a promising pathway toward intelligent, human-centered AI systems that support both diagnostic accuracy and medical training.
Shan-Mei Chang; Dai-Yi Wang; Zheng-Hong Guan. Craving and attentional bias in gaming: Comparing esports, casual, and high-risk gamers using eye-tracking. Journal Article. In: Computers in Human Behavior, vol. 168, pp. 1–11, 2025.
Abstract: Attentional biases, as measured through eye movements, have been observed in both gaming disorders and substance addictions. However, few studies compare these biases among esports gamers (ESG), high-risk gamers (HRG), and other frequent gamers, despite both ESG and HRG dedicating significant time to gaming. This study included 47 male participants aged 15 to 19. Participants were categorized as ESG, casual gamers (CG), or HRG based on their MOBA experience, esports training, and Internet Gaming Disorder (IGD) scores. Each participant completed a dot-probe task with 56 stimulus conditions based on gaming cues, while eye-tracking technology recorded eye movements. The results indicated that HRG spent more total viewing time on stimulus images than ESG and CG. Additionally, HRG had longer first fixation durations and fewer saccade counts than the other two groups. Furthermore, HRG reported higher impulsivity and lower attentional focusing, suggesting a distinct psychological profile. Although ESG did not exhibit the same attentional biases as HRG, their self-reported gaming time was similar. This may be due to gaming being a career commitment for ESG, while for HRG, it serves as an escape from life pressures. Notably, eye-movement measures can identify high-risk tendencies early and uncover differences missed by self-report scales, including saccade count and attentional shifting. Caution is needed when diagnosing gaming disorder solely based on gaming time and self-reports. Future research could use attentional bias tasks as complementary diagnostic tools and further explore higher depression levels in HRG and ESG compared to CG.
Dries Cavents; July De Wilde; Jelena Vranjes. Towards a multimodal approach for analysing interpreter's management of rapport challenge in onsite and video remote interpreting. Journal Article. In: Journal of Pragmatics, vol. 235, pp. 220–237, 2025.
Abstract: Recently, interpreters' management of rapport is increasingly being investigated. Yet little attention has been directed towards the role of the interpreter's non-verbal behaviour when managing rapport and to the influence of video mediated forms of interpreting on the use of non-verbal behaviour. Therefore, this study proposes a multimodal micro-interactional framework for analysing interpreters' management of rapport challenge in both onsite (OSI) and video remote interpreting (VRI) interaction. The paper introduces a multimodal coding scheme based on Spencer-Oatey's Rapport Management Theory (2008), which is then applied to a dataset of video recorded interpreter-mediated interactions to examine how interpreters employ verbal, paraverbal, and non-verbal resources to multimodally address rapport challenge. Data were collected from simulated interactions involving professional public service interpreters and role-players adopting the role of primary participants in a reception centre for asylum seekers. The findings reveal that in OSI interpreters use a wide range of non-verbal resources when conveying rapport challenges, whereas VRI imposes constraints on non-verbal communication, often necessitating more disruptive verbal strategies to manage rapport. The study underscores the importance of a multimodal approach to interpreting research, highlighting how non-verbal behaviours significantly contribute to the management of interpersonal relations in interpreter-mediated talk.
Jack Dempsey; Anna Tsiola; Nigel Bosch; Kiel Christianson; Mallory Stites. Eye-movement indices of reading while debugging Python source code. Journal Article. In: Journal of Cognitive Psychology, vol. 37, no. 2, pp. 89–107, 2025.
Abstract: Unlike text reading, the eye-movement behaviours associated with reading Python, a computer programming language, are largely understudied through a psycholinguistic lens. A general understanding of the eye movements involved in reading while troubleshooting Python, and how these behaviours compare to proofreading text, is critical for developing educational interventions and interactive tools for helping programmers debug their code. These data may also highlight to what extent humans use their underlying text reading ability when reading source code. The current work provides a profile of global reading behaviours associated with reading Python source code for debugging purposes. To this end, we recorded experienced programmers' eye movements while they determined whether 21 different Python functions would produce the desired output, an incorrect output, or an error message. Some reading behaviours seem to mirror those found in text reading (e.g. effects of stimulus complexity), while others may be specific to reading code. Results suggest that semantic errors that produce undesired outputs in programming source code may influence early stages of processing, likely due to the largely top-down strategy employed by experienced programmers when reading source code. The findings are framed to invigorate discussion and further exploration into psycholinguistic analysis of human source code reading.
Gregory J. DiGirolamo; Federico Sorcini; Zachary Zaniewski; Jonathan B. Kruskal; Max P. Rosen. In: Radiology, vol. 314, no. 2, pp. 1–7, 2025.
Abstract: Background: Diagnostic error rates for detecting small lung nodules on chest CT scans remain high at 50%, despite advances in imaging technology and radiologist training. These failure rates may stem from limitations in conscious recognition processes. However, successful visual processes may be detecting the nodule independent of the radiologist's report. Purpose: To investigate visual processing in radiologists during the assessment of chest nodules to determine if radiologists have successful non-conscious processes that detect lung nodules on chest CT scans even when not consciously recognized or considered, as evidenced by changes in how long they look (dwell time) and pupil size at missed nodules. Materials and Methods: This prospective study, conducted from August 2014 to September 2023, compared six experienced radiologists with six medically naive control participants. Participants viewed 18 chest CT scans (nine abnormal with 16 nodules, nine normal) to detect lung nodules. High-speed video eye tracking measured gaze duration and pupil size (indicating physiological arousal) at missed nodule locations and the same locations on normal CT scans. The reference standard was the known presence or absence of nodules (as determined by a four-radiologist consensus panel) on abnormal and normal CT scans, respectively. Primary outcome measures were detection rates of nodules, and dwell time and pupil size at nodule locations versus normal tissue. Paired t tests were used for statistical analysis. Results: Twelve participants (six radiologists with an average of 9.3 years of radiologic experience and six controls with no radiologic experience) performed the evaluations. Radiologists missed on average 59% (9.5 of 16) of these lung nodules. For the missed nodules, radiologists exhibited longer dwell times (mean, 228 msec vs 175 msec; P = .005) and larger pupil size (mean, 1446 pixels vs 1349 pixels; P = .04) than for normal tissue. Control participants showed no differences in dwell time (mean, 197 msec vs 180 msec; P = .64) or pupil size (mean, 1426 pixels vs 1714 pixels; P = .23) for missed nodules versus normal tissue locations. Conclusion: Radiologists' non-conscious processes during visual assessment of CT scans can detect lung nodules on chest CT scans even when conscious recognition fails, as evidenced by increased dwell time and larger pupil size. This successful non-conscious detection is a result of general radiology training.
Lara Fontana; Javier Albayay; Letizia Zurlo; Viola Ciliberto; Massimiliano Zampini. Olfactory modulation of visual attention and preference towards congruent food products: An eye tracking study. Journal Article. In: Food Quality and Preference, vol. 124, pp. 1–11, 2025.
Abstract: In multisensory environments, odours often accompany visual stimuli, directing attention towards congruent objects. While previous research shows that people fixate longer on objects that match a recently smelled odour, it remains unclear whether odours directly influence product choices. Since odours persist in real-world settings, we investigated the effects of repeated odour exposure on visual attention and product choice, accounting for potential olfactory habituation. In a within-participant design, 30 participants completed a task where either a lemon odour (experimental condition) or clean air (control) was paired with congruent lemon-based food images, which varied to prevent visual habituation. We measured eye movements and choice preferences for these food products. Results revealed that participants exhibited longer gaze durations and more frequent fixations on food products congruent with the lemon odour. Repeated odour exposure had no effect on gaze patterns, as participants consistently focused on odour-congruent products throughout the experiment. The intensity and pleasantness of the lemon odour remained stable over time, suggesting no olfactory habituation occurred with this food-related odour. Despite this stable visual attention and odour intensity and pleasantness, participants began to diversify their product choices, selecting fewer odour-congruent items over time. These findings suggest that while odours continue to direct attention towards matching products, repeated exposure may reduce their influence on product choice, highlighting the complex role of olfactory stimuli in decision-making. The study provides insights into how odours interact with visual cues and influence consumer behaviour in prolonged exposure scenarios.
Qian Huangfu; Qianmei He; Sisi Luo; Weilin Huang; Yahua Yang. Does teacher enthusiasm facilitate students' chemistry learning in video lectures regardless of students' prior chemistry knowledge levels? Journal Article. In: Journal of Computer Assisted Learning, vol. 41, no. 1, pp. 1–14, 2025.
Abstract: Background: Video lectures which include the teachers' presence have become increasingly common. As teacher enthusiasm is a nonverbal cue in video lectures, more and more studies are focusing on this topic. However, little research has been carried out on the interactions between teacher enthusiasm and prior knowledge when learning from video lectures. Objectives: We tested whether prior chemistry knowledge moderated the impact of teacher enthusiasm on students' chemistry learning during video lectures. Methods: One hundred and forty-two Grade 7 (low-prior chemistry knowledge) and Grade 9 (high-prior chemistry knowledge) Chinese students engaged with this research. Each group of students was randomised into viewing a video lecture with either a low or high degree of teacher enthusiasm. Outcomes were assessed by attention allocation, learning performance, cognitive load, learning satisfaction and student engagement. Results and Conclusions: Our findings revealed significant benefits of teacher enthusiasm and also showed that prior chemistry knowledge could moderate the impact of teacher enthusiasm on students' attention and cognitive outcomes (cognitive load and transfer). Visual attention mediates the effects on transfer. For students with low prior knowledge, there is more focus on the learning content, lower extraneous cognitive load, and higher transfer scores when watching videos with high levels of teacher enthusiasm; however, students with high prior knowledge do not show these differences. In addition, high prior chemistry knowledge had a significant beneficial impact on the motivational outcomes of the students (satisfaction and engagement). Implications: The results suggest that teacher enthusiasm in a video lecture may affect students' chemistry learning, and students' prior chemistry knowledge should be considered when choosing whether to display teacher enthusiasm.
Gianna Jeyarajan; Lian Buwadi; Azar Ayaz; Lindsay S. Nagamatsu; Denait Haile; Liye Zou; Matthew Heath. Passive and active exercise do not mitigate mental fatigue during a sustained vigilance task. Journal Article. In: Experimental Brain Research, vol. 243, no. 1, pp. 1–13, 2025.
Abstract: Executive function (EF) is improved following a single bout of exercise and impaired when an individual experiences mental fatigue (MF). These performance outcomes have been linked to a bi-directional change in cerebral blood flow (CBF). Here, we sought to determine whether MF induced by a sustained vigilance task (i.e., psychomotor vigilance task: PVT) is mitigated when preceded by a single bout of exercise. Participants completed 20-min single bouts of active exercise (cycle ergometry involving volitional muscle activation), passive exercise (cycle ergometry involving a mechanical flywheel) and a non-exercise control intervention. EF was assessed pre- and post-intervention via the antisaccade task. Following each intervention, a 20-min PVT was completed to induce and assess MF, and transcranial Doppler ultrasound of middle cerebral artery velocity (MCAv) was used to estimate intervention- and PVT-based changes in CBF. Active and passive exercise provided a post-intervention reduction in antisaccade reaction times; that is, exercise benefitted EF. Notably, however, frequentist and Bayesian statistics indicated the EF benefit did not mitigate MF during the PVT. As well, although exercise (active and passive) and the PVT respectively increased and decreased CBF, these changes were not correlated with behavioral measures of EF or MF. Accordingly, a postexercise EF benefit does not mitigate MF during a sustained vigilance task, and a bi-directional change in CBF does not serve as a primary mechanism associated with EF and MF changes. Such results provide a framework for future work to explore how different exercise types, intensities and durations may impact MF.
Yu Cin Jian; Leo Yuk Ting Cheung. Prediction of text-and-diagram reading comprehension by eye-movement indicators: A longitudinal study in elementary schools. Journal Article. In: European Journal of Psychology of Education, vol. 40, no. 1, pp. 1–25, 2025.
Abstract: Eye-movement technology has often been used to examine reading processes, but research has seldom examined the relationship between the reading process and comprehension performance, and whether the relationships are similar or different across grades. To investigate this, we conducted a 3-year longitudinal study starting at grade 4, with 175 effective samples to track the development data of eye movements on text-and-diagram reading. A series of temporal and spatial eye-movement predictors were identified to predict reading comprehension in various grades. The result of a hierarchical regression model established that total fixation duration measures (reflecting processing level) and mean fixation duration (reflecting decoding efficiency) were relatively better predictors of the post-reading tests at grades 5 and 6. That is, the readers made more mental efforts and had better decoding ability, which predict better post-reading test scores. Meanwhile, in grades 5 and 6, rereading total fixation duration on both the main text and diagrams consistently predicted the post-reading tests, indicating that the readers' self-regulated study time on re-processing some specific information is important for reading comprehension. In addition, a longitudinal structural equation modeling (SEM) revealed that the readers' fixation durations and text-and-diagram regression count at grade 4 could significantly predict the same indicators in the following 2 years. In summary, this study identified the critical eye-movement indicators for predicting reading-test performance, and these predictions were more effective for the readers in upper grades than for those in the lower grades.
Jan Louis Kruger; Sixin Liao. Busting “ghost subtitles” on streaming services. Journal Article. In: Translation and Interpreting, vol. 17, no. 2, pp. 38–54, 2025.
Abstract: In this paper we focus on a phenomenon that all subtitle users experience: “ghost subtitles”. “Ghost subtitles” are subtitles we notice in our peripheral vision, only to find them gone by the time our eyes have moved down to start reading, or disappearing while we are still reading. “Ghost subtitles” often meet the minimum duration and maximum speed requirements set by platforms or broadcasters but disregard the time it takes to move gaze from the image to the subtitle (i.e., processing latency). The one-speed-fits-all approach means that many subtitles are not on screen long enough to allow viewers to finish reading them, which could result in frustrating viewing experiences. To determine how prevalent fast subtitles are on streaming platforms, this paper presents an analysis of the distribution of subtitle speeds based on a corpus of subtitles from one of the major streaming platforms. We further investigated the impact of subtitle speed and audio language on processing latency based on eye-movement data from a total of 109 participants in two separate experiments. We found that almost 15% of subtitles in our corpus were faster than 20 cps, and almost 8% of subtitles were shorter than one second. We also found processing latencies ranging from around 400 ms for fast subtitles to around 700 ms at speeds of 12 cps, and between 580 ms and 760 ms in different audio conditions. This points to the importance of setting subtitle speed and duration in a way that allows viewers enough time to process both the image and the subtitle properly.
Zheng Liang; Riman Ga; Han Bai; Qingbai Zhao; Guixian Wang; Qing Lai; Shi Chen; Quanlei Yu; Zhijin Zhou. Teaching expectancy improves video-based learning: Evidence from eye-movement synchronization. Journal Article. In: British Journal of Educational Technology, vol. 56, pp. 231–249, 2025.
Abstract: Video-based learning (VBL) is popular, yet students tend to learn video material passively. Instilling teaching expectancy is a strategy to promote active processing by learners, but it is unclear how effective it will be in improving VBL. This study examined the role of teaching expectancy on VBL by comparing the learning outcomes and metacognitive monitoring of 94 learners with different expectancies (teaching, test or no expectancy). Results showed that the teaching expectancy group had better learning outcomes, with no significant difference in the metacognitive monitoring of the three groups. We further explored the visual behaviour patterns of learners with different expectancies by using the indicator of eye-movement synchronization. It was found that synchronization was significantly lower in both the teaching and test expectancy groups than in the no expectancy group, and lower in the test expectancy group than in the teaching expectancy group. This result suggests that both teaching and test expectancy enhance the active processing of VBL. However, by sliding window analysis, we found that the teaching expectancy group used a flexible and planned attention allocation. Our findings confirmed the effectiveness of teaching expectancy in VBL. Also, this study provided evidence for the applicability of eye-tracking techniques to assess VBL.
Xingyang Lv; Zixin Yuan; Fang Wan; Tian Lan; Gila Oren. Do tourists experience suffering when they touch the Wailing Wall? Journal Article. In: Tourism Management, vol. 106, pp. 1–21, 2025.
Abstract: Tactile engagement is a critical aspect of tourist experiences. Embodied cognition theory suggests a direct correlation between physical sensations and psychological perceptions. For example, touching the textured stones at the Wailing Wall, a revered religious site in Jerusalem, can evoke intense emotions in tourists. This study explores the impact of rough tactile sensations on dark experiences through six studies. We used content analysis, on-site surveys, eye movement experiments, and scenario experiments to validate these effects. Our findings emphasize the pivotal role of rough tactile sensations in shaping profound emotions and individual experiences while uncovering alternative routes for developing sensory strategies to enrich dark tourism experiences.
Stanford Martinez; Carolina Ramirez-Tamayo; Syed Hasib Akhter Faruqui; Kal Clark; Adel Alaeddini; Nicholas Czarnek; Aarushi Aggarwal; Sahra Emamzadeh; Jeffrey R. Mock; Edward J. Golob. Discrimination of radiologists' experience level using eye-tracking technology and machine learning: Case study. Journal Article. In: JMIR Formative Research, vol. 9, pp. 1–16, 2025.
Abstract: Background: Perception-related errors comprise most diagnostic mistakes in radiology. To mitigate this problem, radiologists use personalized and high-dimensional visual search strategies, otherwise known as search patterns. Qualitative descriptions of these search patterns, which involve the physician verbalizing or annotating the order in which he or she analyzes the image, can be unreliable due to discrepancies in what is reported versus the actual visual patterns. This discrepancy can interfere with quality improvement interventions and negatively impact patient care. Objective: The objective of this study is to provide an alternative method for distinguishing between radiologists by means of captured eye-tracking data such that the raw gaze (or processed fixation data) can be used to discriminate users based on subconscious behavior in visual inspection. Methods: We present a novel discretized feature encoding based on spatiotemporal binning of fixation data for efficient geometric alignment and temporal ordering of eye movement when reading chest x-rays. The encoded features of the eye-fixation data are used by machine learning classifiers to discriminate between faculty and trainee radiologists. A clinical trial case study was conducted using metrics such as the area under the curve, accuracy, F1-score, sensitivity, and specificity to evaluate the discriminability between the 2 groups regarding their level of experience. The classification performance was then compared with state-of-the-art methodologies. In addition, a repeatability experiment using a separate dataset, experimental protocol, and eye tracker was performed with 8 participants to evaluate the robustness of the proposed approach. Results: The numerical results from both experiments demonstrate that classifiers using the proposed feature encoding methods outperform the current state-of-the-art in differentiating between radiologists in terms of experience level. An average performance gain of 6.9% is observed compared with traditional features while classifying experience levels of radiologists. This gain in accuracy is also substantial across different eye tracker–collected datasets, with improvements of 6.41% using the Tobii eye tracker and 7.29% using the EyeLink eye tracker. These results signify the potential impact of the proposed method for identifying radiologists' level of expertise and those who would benefit from additional training. Conclusions: The effectiveness of the proposed spatiotemporal discretization approach, validated across diverse datasets and various classification metrics, underscores its potential for objective evaluation, informing targeted interventions and training strategies in radiology. This research advances reliable assessment tools, addressing challenges in perception-related errors to enhance patient care outcomes.
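The core idea of Martinez et al.'s encoding, discretizing fixations into spatial bins crossed with temporal bins so that sequences of different lengths become fixed-length vectors a classifier can consume, can be illustrated with a minimal sketch. The grid size, number of temporal bins, and screen dimensions below are arbitrary assumptions, not the paper's parameters.

```python
import numpy as np

def encode_fixations(fixations, screen=(1024, 768), grid=(4, 4), t_bins=3):
    """Count fixations in each cell of a spatial grid crossed with temporal
    bins, yielding a fixed-length vector regardless of how many fixations
    a trial contains (an illustrative spatiotemporal discretization)."""
    fixations = np.asarray(fixations, dtype=float)  # rows: (x, y, time)
    gx, gy = grid
    # Map pixel coordinates onto grid cell indices.
    xi = np.clip((fixations[:, 0] / screen[0] * gx).astype(int), 0, gx - 1)
    yi = np.clip((fixations[:, 1] / screen[1] * gy).astype(int), 0, gy - 1)
    # Normalize timestamps onto temporal bin indices (preserves ordering).
    t = fixations[:, 2]
    ti = np.clip(((t - t.min()) / (np.ptp(t) + 1e-9) * t_bins).astype(int),
                 0, t_bins - 1)
    counts = np.zeros((t_bins, gy, gx))
    for x, y, tt in zip(xi, yi, ti):
        counts[tt, y, x] += 1
    return counts.ravel()

# Three hypothetical fixations: (x, y, seconds).
fix = [(100, 100, 0.0), (900, 700, 1.0), (512, 384, 2.0)]
v = encode_fixations(fix)
print(v.shape, v.sum())
```

Vectors like `v` would then be fed to the classifiers compared in the study; the encoding is what makes trials with different fixation counts directly comparable.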
Kate Matsunaga; Kleanthis Avramidis; Mark S. Borchert; Shrikanth Narayanan; Melinda Y. Chang Method for assessing visual saliency in children with cerebral/cortical visual impairment using generative artificial intelligence Journal Article In: Frontiers in Human Neuroscience, vol. 18, pp. 1–9, 2025. @article{Matsunaga2025,Cerebral/cortical visual impairment (CVI) is a leading cause of pediatric visual impairment in the United States and other developed countries, and is increasingly diagnosed in developing nations due to improved care and survival of children who are born premature or have other risk factors for CVI. Despite this, there is currently no objective, standardized method to quantify the diverse visual impairments seen in children with CVI who are young and developmentally delayed. We propose a method that combines eye tracking and an image-based generative artificial intelligence (AI) model (SegCLIP) to assess higher- and lower-level visual characteristics in children with CVI. We will recruit 40 CVI participants (aged 12 months to 12 years) and 40 age-matched controls, who will watch a series of images on a monitor while eye gaze position is recorded using eye tracking. SegCLIP will be prompted to generate saliency maps for each of the images in the experimental protocol. The saliency maps (12 total) will highlight areas of interest that pertain to specific visual features, allowing for analysis of a range of individual visual characteristics. Eye tracking fixation maps will then be compared to the saliency maps to calculate fixation saliency values, which will be assigned based on the intensity of the pixel corresponding to the location of the fixation in the saliency map. Fixation saliency values will be compared between CVI and control participants. Fixation saliency values will also be correlated to corresponding scores on a functional vision assessment, the CVI Range-CR. 
We expect that fixation saliency values on visual characteristics that require higher-level processing will be significantly lower in CVI participants compared to controls, whereas fixation saliency values on lower-level visual characteristics will be similar or higher in CVI participants. Furthermore, we anticipate that fixation saliency values will be significantly correlated to scores on corresponding items on the CVI Range-CR. Together, these findings would suggest that AI-enabled saliency analysis using eye tracking can objectively quantify abnormalities of lower- and higher-order visual processing in children with CVI. This novel technique has the potential to guide individualized interventions and serve as an outcome measure in future clinical trials. |
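The fixation saliency value defined in this abstract — the intensity of the saliency-map pixel at each fixation location — is a simple lookup, which could be sketched as below. This is an illustrative assumption about the computation, not the study's actual analysis code, and the function name is hypothetical.

```python
import numpy as np

def fixation_saliency(fixations, saliency_map):
    """Mean saliency-map intensity at fixation locations.

    fixations: iterable of (x, y) pixel coordinates.
    saliency_map: 2-D array (e.g., normalized to [0, 1]),
    indexed [row, col]. Fixations outside the map are ignored.
    """
    h, w = saliency_map.shape
    vals = [saliency_map[int(y), int(x)]
            for x, y in fixations
            if 0 <= int(x) < w and 0 <= int(y) < h]
    return float(np.mean(vals)) if vals else float("nan")
```

Averaging these values per participant and per saliency map (one map per visual feature) would then allow group comparisons of the kind the study proposes.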
Yunxian Pan; Jie Xu Human-machine plan conflict and conflict resolution in a visual search task Journal Article In: International Journal of Human-Computer Studies, vol. 193, pp. 1–12, 2025. @article{Pan2025,With rapid technological development, humans are increasingly likely to work cooperatively with intelligent systems in everyday life and work. Similar to interpersonal teamwork, the effectiveness of human-machine teams is affected by conflicts. Some human-machine conflict scenarios occur when neither the human nor the system is at fault, for example, when the human and the system formulate different but equally effective plans to achieve the same goal. In this study, we conducted two experiments to explore the effects of human-machine plan conflict and the different conflict resolution approaches (human adapting to the system, system adapting to the human, and transparency design) in a computer-aided visual search task. The results of the first experiment showed that when conflicts occurred, the participants reported higher mental load during the task, performed worse, and provided lower subjective evaluations towards the aid. The second experiment showed that all three conflict resolution approaches were effective in maintaining task performance; however, only the transparency design and the human adapting to the system approaches were effective in reducing mental load and improving subjective evaluations. The results highlighted the need to design appropriate human-machine conflict resolution strategies to optimize system performance and user experience. |
Long Qin; Yi Shi; Xin Zhang; Peichun Liao; Yongjie Li; Xianshi Zhang; Hongmei Yan Salient object detection of dynamic night scenes via bio-inspired spotlight attention and hierarchical edge-texture fusion Journal Article In: IEEE Open Journal of Intelligent Transportation Systems, vol. 6, pp. 1377–1390, 2025. @article{Qin2025b,The perception of night scenes is of crucial importance for driving safety. In the dimly lit night environment, as the visibility of objects decreases, both experienced and inexperienced drivers often struggle to fully notice the objects closely related to the driving task. Moreover, because the contours of many objects are blurred at night, locating and detecting objects is much more difficult than in daytime scenes, especially for small traffic objects, which greatly increases potential road hazards. To date, few studies have focused specifically on night object detection based on driver attention. This research is dedicated to solving the detection problem of salient objects in night scenes, particularly small salient objects. First, we constructed a Night Eye-Tracking Object Detection Dataset (NETOD), which can provide a benchmark for research on attention-driven object detection in night scenes. Then, we proposed a salient object detection model for night traffic scenes, named NS-YOLO. NS-YOLO integrates a Bio-Inspired Spotlight Attention Module (BSAM) that combines bottom-up feature enhancement with top-down semantic guidance to accurately localize salient objects. Additionally, a hierarchical multi-scale detection architecture is introduced, leveraging cross-layer feature pyramid and dynamic upsampling to enhance the detection of small objects. The experimental results on the NETOD dataset show that the proposed salient small object detection model for night traffic scenes achieved a mean Average Precision (mAP) value of 93.0%, outperforming other advanced models. 
It has important potential applications in driver assistance, hazard warning, and related areas, and is expected to significantly improve the safety and intelligence of night driving. Beyond technical advancements, this work highlights the necessity of human-centric attention mechanisms in autonomous systems, paving the way for safer and more interpretable AI-driven vehicles. |
Justine Staal; Jelmer Alsma; Jos Van Der Geest; Sílvia Mamede; Els Jansen; Maarten A. Frens; Walter W. Van Den Broek; Laura Zwaan Selective processing of clinical information related to correct and incorrect diagnoses: An eye-tracking experiment Journal Article In: Medical Education, vol. 59, no. 5, pp. 540–549, 2025. @article{Staal2025,Introduction: Diagnostic errors are often attributed to erroneous selection and interpretation of patients' clinical information, due to either cognitive biases or knowledge deficits. However, whether the selection or processing of clinical information differs between correct and incorrect diagnoses in written clinical cases remains unclear. We hypothesised that residents would spend more time processing clinical information that was relevant to their final diagnosis, regardless of whether their diagnosis was correct. Methods: In this within-subjects eye-tracking experiment, 19 internal or emergency medicine residents diagnosed 12 written cases. Half the cases contained a correct diagnostic suggestion and the others an incorrect suggestion. We measured how often (i.e., number of fixations) and how long (i.e., dwell time) residents attended to clinical information relevant for either suggestion. Additionally, we measured confidence and time to diagnose in each case. Results: Residents looked longer and more often at clinical information relevant for the correct diagnostic suggestion if they received an incorrect suggestion and were able to revise this suggestion to the correct diagnosis (dwell time: M: 6.3 seconds, SD: 5.1 seconds; compared to an average of 4 seconds in other conditions; number of fixations: M: 25 fixations, SD: 20; compared to an average of 16–17 fixations). Accordingly, time to diagnose was longer in cases with an incorrect diagnostic suggestion (M: 86 seconds, SD: 47 seconds; compared to an average of 70 seconds in other conditions). 
Confidence (range: 64%–67%) did not differ depending on residents' accuracy or the diagnostic suggestion. Discussion: Selectivity in information processing was not directly associated with an increase in diagnostic errors but rather seemed related to recognising and revising a biased suggestion in favour of the correct diagnosis. This could indicate an important role for case-specific knowledge in avoiding biases and diagnostic errors. Future research should examine information processing for other types of clinical information. |
Natalie G. Wall; Oliver Smith; Linda Campbell; Carmel Loughland; Ulrich Schall Using EEG and eye tracking to evaluate an emotion recognition iPad app for autistic children Journal Article In: Clinical EEG and Neuroscience, pp. 1–11, 2025. @article{Wall2025c,Autism is a neurodevelopmental condition that impacts individuals' communication and social interaction skills. Autistic children often have smaller N170 amplitudes in response to faces than neurotypical children. Autistic children also avoid the salient areas of the face. Technology-based interventions have been developed to teach autistic children how to recognise facial expressions, but the results have exhibited considerable variability across studies. The current study explored the effectiveness of an iPad app designed to support autistic children in recognising facial expressions by examining how participants process facial information through event-related potentials (ERP) and eye-tracking recordings. ERPs and eye tracking were recorded from 20 neurotypical and 15 autistic children aged between 6 and 12 years. The results replicated previous work, with the autistic group having smaller N170 and Vertex Positive Potential amplitudes and more scan time off the face when compared to non-autistic children. Following the intervention, some changes were observed in facial feature scanning among autistic participants, characterised by increased time spent on the face and decreased fixations. These findings add to the growing body of work indicating that eye tracking may be a valuable biomarker for intervention outcomes in autism. Further research into N170 as a biomarker is needed. |
Chunyu Zhao; Tao Deng; Pengcheng Du; Wenbo Liu; Yi Huang; Fei Yan VP2Net: Visual perception-inspired network for exploring the causes of drivers' attention shift Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 11, pp. 20012–20026, 2025. @article{Zhao2025b,With the rapid development of autonomous driving technology, the recognition/understanding of driving events has become increasingly important for improving road safety. Existing methods for recognizing driving events rely solely on the inherent features of driving scenes, lacking real-time modeling of driver attention and the integration of driver attention for understanding driving events. Research has shown that understanding driver attention will be beneficial for subsequent analysis of driving events. We propose the attention-based driving event dataset (ADED), which includes rich driving scenes, eye movement data, reasons for attention shifts, and event time windows. It enables the use of prior information about driver attention to guide the recognition of driving events. Based on our dataset, we propose a visual dual-perception network, named VP2Net, to explore the reasons behind driver attention shifts. The goal of VP2Net is to use driver attention to guide the recognition of driving events. Inspired by the dual-process mechanism of human visual cognition, we build a bottom-up sequential information encoding branch for extracting spatio-temporal low-level information in the driving scene. Additionally, we establish a top-down attention perceptual encoding branch that simulates the driver's high-level visual cognitive process. It not only captures the driver's spatial attention allocation (“where to focus”) but also performs a temporal dimensional perceptual enhancement (“when to focus”), allowing us to extract the driver's spatial attention enhancement information. 
We use the driver's spatial attention enhancement information to guide the fusion of spatio-temporal information of the driving scene and selectively highlight the core objects/areas in the current driving task/event. Finally, we compare our proposed model with other SOTA networks and visualize the results of the key components of the model. |
2024 |
Jordan C. Abramowitz; Matthew J. Goupell; Kristina DeRoy Milvae Cochlear–implant simulated signal degradation exacerbates listening effort in older listeners Journal Article In: Ear & Hearing, vol. 45, no. 2, pp. 441–450, 2024. @article{Abramowitz2024,Objectives: Individuals with cochlear implants (CIs) often report that listening requires high levels of effort. Listening effort can increase with decreasing spectral resolution, which occurs when listening with a CI, and can also increase with age. What is not clear is whether these factors interact; older CI listeners potentially experience even higher listening effort with greater signal degradation than younger CI listeners. This study used pupillometry as a physiological index of listening effort to examine whether age, spectral resolution, and their interaction affect listening effort in a simulation of CI listening. Design: Fifteen younger normal-hearing listeners (ages 18 to 31 years) and 15 older normal-hearing listeners (ages 65 to 75 years) participated in this experiment; they had normal hearing thresholds from 0.25 to 4 kHz. Participants repeated sentences presented in quiet that were either unprocessed or vocoded, simulating CI listening. Stimuli frequency spectra were limited to below 4 kHz (to control for effects of age-related high-frequency hearing loss), and spectral resolution was decreased by decreasing the number of vocoder channels, with 32-, 16-, and 8-channel conditions. Behavioral speech recognition scores and pupil dilation were recorded during this task. In addition, cognitive measures of working memory and processing speed were obtained to examine if individual differences in these measures predicted changes in pupil dilation. Results: For trials where the sentence was recalled correctly, there was a significant interaction between age and spectral resolution, with significantly greater pupil dilation in the older normal-hearing listeners for the 8- and 32-channel vocoded conditions. 
Cognitive measures did not predict pupil dilation. Conclusions: There was a significant interaction between age and spectral resolution, such that older listeners appear to exert relatively higher listening effort than younger listeners when the signal is highly degraded, with the largest effects observed in the eight-channel condition. The clinical implication is that older listeners may be at higher risk for increased listening effort with a CI. |
Tarek Alakmeh; David Reich; Lena Jäger; Thomas Fritz Predicting code comprehension: A novel approach to align human gaze with code using deep neural networks Journal Article In: Proceedings of the ACM on Software Engineering, vol. 1, pp. 1982–2004, 2024. @article{Alakmeh2024,The better the code quality and the less complex the code, the easier it is for software developers to comprehend and evolve it. Yet, how do we best detect quality concerns in the code? Existing measures to assess code quality, such as McCabe's cyclomatic complexity, are decades old and neglect the human aspect. Research has shown that considering how a developer reads and experiences the code can be an indicator of its quality. In our research, we built on these insights and designed, trained, and evaluated the first deep neural network that aligns a developer's eye gaze with the code tokens the developer looks at to predict code comprehension and perceived difficulty. To train and analyze our approach, we performed an experiment in which 27 participants worked on a range of 16 short code comprehension tasks while we collected fine-grained gaze data using an eye tracker. The results of our evaluation show that our deep neural sequence model that integrates both the human gaze and the stimulus code, can predict (a) code comprehension and (b) the perceived code difficulty significantly better than current state-of-the-art reference methods. We also show that aligning human gaze with code leads to better performance than models that rely solely on either code or human gaze. We discuss potential applications and propose future work to build better human-inclusive code evaluation systems. |
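The alignment step this abstract describes — pairing a developer's fixations with the code tokens they land on — rests on a simple area-of-interest hit test before any deep model is involved. A minimal sketch follows; the function name, data shapes, and bounding-box representation are assumptions for illustration, not the paper's pipeline.

```python
def align_gaze_to_tokens(fixations, token_boxes):
    """Map each fixation to the code token whose bounding box contains it.

    fixations: list of (x, y, duration_ms) gaze points.
    token_boxes: dict token_id -> (x0, y0, x1, y1) screen rectangle.
    Returns a list of (token_id or None, duration_ms) in fixation
    order, ready to be paired with token embeddings in a sequence model.
    """
    aligned = []
    for x, y, dur in fixations:
        # First box containing the point wins; None if the gaze
        # fell outside every token (e.g., whitespace or margins).
        hit = next((tid for tid, (x0, y0, x1, y1) in token_boxes.items()
                    if x0 <= x < x1 and y0 <= y < y1), None)
        aligned.append((hit, dur))
    return aligned
```

A sequence of such (token, duration) pairs is the kind of joint gaze-plus-code input a neural sequence model can consume.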
Robert G. Alexander; Ashwin Venkatakrishnan; Jordi Chanovas; Sophie Ferguson; Stephen L. Macknik; Susana Martinez-Conde Why did Rubens add a parrot to Titian's The Fall of Man? A pictorial manipulation of joint attention Journal Article In: Journal of Vision, vol. 24, no. 4, pp. 1–13, 2024. @article{Alexander2024,Almost 400 years ago, Rubens copied Titian's The Fall of Man, albeit with important changes. Rubens altered Titian's original composition in numerous ways, including by changing the gaze directions of the depicted characters and adding a striking red parrot to the painting. Here, we quantify the impact of Rubens's choices on the viewer's gaze behavior. We displayed digital copies of Rubens's and Titian's artworks—as well as a version of Rubens's painting with the parrot digitally removed—on a computer screen while recording the eye movements produced by observers during free visual exploration of each image. To assess the effects of Rubens's changes to Titian's composition, we directly compared multiple gaze parameters across the different images. We found that participants gazed at Eve's face more frequently in Rubens's painting than in Titian's. In addition, gaze positions were more tightly focused for the former than for the latter, consistent with different allocations of viewer interest. We also investigated how gaze fixation on Eve's face affected the perceptual visibility of the parrot in Rubens's composition and how the parrot's presence versus its absence impacted gaze dynamics. Taken together, our results demonstrate that Rubens's critical deviations from Titian's painting have powerful effects on viewers' oculomotor behavior. |
Zack Carpenter; David DeLiema Linking epistemic stance and problem-solving with self-confidence during play in a puzzle-based video game Journal Article In: Computers & Education, vol. 216, pp. 1–22, 2024. @article{Carpenter2024,Our goal in bridging several fields is to address a gap at the intersection of problem-solving, epistemic stance, and play. Specifically, we investigate how epistemically laminated problem-solving moves connect with self-confidence while students play Baba is You. Our research at this juncture revolves around three research questions: 1. How are problem-solving moves laminated with epistemic stance? 2. How do epistemically laminated problem-solving moves relate to players' self-confidence? 3. How do epistemically laminated problem-solving moves emerge as a dynamic, moment-to-moment interaction between player and game? |
Soazig Casteau; Daniel T. Smith How does contextual information affect aesthetic appreciation and gaze behavior in figurative and abstract artwork? Journal Article In: Journal of Vision, vol. 24, no. 12, pp. 1–15, 2024. @article{Casteau2024,Numerous studies have investigated how providing contextual information with artwork influences gaze behavior, yet the evidence that contextually triggered changes in oculomotor behavior when exploring artworks may be linked to changes in aesthetic experience remains mixed. The aim of this study was to investigate how three levels of contextual information influenced people's aesthetic appreciation and visual exploration of both abstract and figurative art. Participants were presented with an artwork and one of three contextual information levels: a title, title plus information on the aesthetic design of the piece, or title plus information about the semantic meaning of the piece. We measured participants' liking, interest, and understanding of artworks and recorded exploration duration, fixation count, and fixation duration on regions of interest for each piece. Contextual information produced greater aesthetic appreciation and more visual exploration in abstract artworks. In contrast, figurative artworks were highly dependent on liking preferences and less affected by contextual information. Our results suggest that the effect of contextual information on aesthetic ratings arises from an elaboration effect, such that the viewer's aesthetic experience is enhanced by additional information, but only when the meaning of an artwork is not obvious. |
Meijun Chen; Yuyi Chen; Ruoxi Qi; Janet Hui Hsiao; Wendy Wing Tak Lam; Qiuyan Liao In: Journal of Environmental Psychology, vol. 100, pp. 102485, 2024. @article{Chen2024d,Promoting sustainable diets is consistently documented to be beneficial to health, the environment, and long-term food security. There remains limited understanding of the effects of activating the goal of sustainable diets for achieving co-benefits on sustainable food choices and the potential mechanisms. This study was a pre-registered online randomized controlled trial combined with eye tracking to compare the effects of three priming interventions: health-benefit priming (HP), environment-benefit priming (EP), and combined-benefit priming (CoP), on sustainable food choice. Sustainable food choices were assessed by a simulated online shopping task. Participants' eye movement data were tracked while they were choosing foods during simulated online shopping. Participants' executive function (EF), environmental values, health values, and social orientation values were also measured. The results showed a significant difference in sustainable food choices among the four groups, with CoP showing a significant increase compared to the control. The eye-tracking data revealed that attention to sustainable foods with an eco-friendly logo mediated the association between priming and participants' sustainable food choices. Furthermore, priming with the co-benefits of sustainable diets can be more effective for participants with greater delay discounting to increase their sustainable food choices. These findings suggest that priming with co-benefits of sustainable diets can be a promising strategy to support more sustainable food choices, particularly for consumers with more difficulty in delaying their immediate rewards. |
Sijia Chen; Jan-Louis Kruger Visual processing during computer-assisted consecutive interpreting Journal Article In: Interpreting, vol. 26, no. 2, pp. 231–252, 2024. @article{Chen2024g,This study investigates the visual processing patterns during computer-assisted consecutive interpreting (CACI). In phase I of the proposed CACI workflow, the interpreter listens to the source speech and respeaks it into speech recognition (SR) software. In phase II, the interpreter produces target speech supported by the SR text and its machine translation (MT) output. A group of students performed CACI with their eye movements tracked. In phase I, the participants devoted the majority of their attention to listening and respeaking, with very limited attention distributed to the SR text. However, a positive correlation was found between the percentage of dwell time on the SR text and the quality of respeaking, which suggests that active monitoring could be important. In phase II, the participants devoted more visual attention to the MT text than to the SR text and engaged in deeper and more effortful processing when reading the MT text. We identified a positive correlation between the percentage of dwell time on the MT text and interpreting quality in the L2–L1 direction but not in the L1–L2 direction. These results contribute to our understanding of computer-assisted interpreting and can provide insights for future research and training in this area. |
Dina Abdel Salam El-Dakhs; Suhad Sonbul; Ahmed Masrai An eye-tracking study on the processing of L2 collocations: The effect of congruency, proficiency, and transparency Journal Article In: Journal of Psycholinguistic Research, vol. 53, no. 2, pp. 1–30, 2024. @article{ElDakhs2024,The availability of a first language translation equivalent (i.e., congruency) has repeatedly been shown to influence second-language collocation processing in decontextualized tasks. However, no study to date has examined how L2 speakers process congruent/incongruent collocations on-line in a real-world context. The present study aimed to fill this gap by examining the eye-movement behavior of 31 Arabic-English speakers and 30 native English speakers as they read 20 congruent and 20 incongruent collocations (in addition to 40 control phrases) in short contexts. The study also examined possible modulating effects of proficiency level and transparency on congruency effects. Results showed that non-natives (similar to native speakers) showed a processing advantage for collocations over control phrases. However, there was no effect of congruency (i.e., no difference between congruent and incongruent collocations) for either group, and no modulating effect of proficiency or transparency on congruency. We discuss implications of the findings for theories of L2 lexical processing. |
Jonas Frenkel; Anke Cajar; Ralf Engbert; Rebecca Lazarides Exploring the impact of nonverbal social behavior on learning outcomes in instructional video design Journal Article In: Scientific Reports, vol. 14, no. 1, pp. 1–12, 2024. @article{Frenkel2024,Online education has become increasingly popular in recent years, and video lectures have emerged as a common instructional format. While the importance of instructors' nonverbal social cues such as gaze, facial expression, and gestures for learning progress in face-to-face teaching is well-established, their impact on instructional videos is not fully understood. Most studies on nonverbal social cues in instructional videos focus on isolated cues rather than considering multimodal nonverbal behavior patterns and their effects on the learning progress. This study examines the role of instructors' nonverbal immediacy (a construct capturing multimodal nonverbal behaviors that reduce psychological distance) in video lectures with respect to learners' cognitive, affective, and motivational outcomes. We carried out an eye-tracking experiment with 87 participants (Mage = 24.11 |
Beatriz García-Carrión; Francisco Muñoz-Leiva; Salvador Del Barrio-García; Lucia Porcu The effect of online message congruence, destination-positioning, and emojis on users' cognitive effort and affective evaluation Journal Article In: Journal of Destination Marketing and Management, vol. 31, pp. 1–13, 2024. @article{GarciaCarrion2024,In today's digital world, it is crucial that Destination Management Organizations (DMOs) understand how tourists process and assimilate the information they receive through social media, whether this is posted online by the destination itself or by other users. When it comes to understanding the effectiveness of DMOs' integrated marketing communication (IMC) strategies, it is important to examine the extent to which the congruence between those online messages posted by the destination and those posted by other users (electronic word-of-mouth) influences the effectiveness of the communication. Similarly, it is also of value to understand the degree to which the use of emojis in social media messages may enhance the effect of congruence on IMC effectiveness. The scientific literature has found that tourists' responses to the information published online by the destination will depend on the type of positioning it adopts on its social media. The novelty of the present study lies in addressing these issues from a neuroscientific perspective, using eye-tracking technology, to study (i) the user's cognitive effort (based on ocular indicators) when processing social media content and (ii) their affective evaluation of that content. A factorial experiment is conducted on a sample of 58 Facebook users. The results point to the important role played by the level of message congruence in users' information-processing and demonstrate the contextualizing effect exerted by emojis. Additionally, this study highlights the need for further research into the cognitive processing of tourism messages relative to different positioning strategies. |
Jasenia Hartman; Jenny Saffran; Ruth Litovsky Word learning in deaf adults who use cochlear implants: The role of talker variability and attention to the mouth Journal Article In: Ear & Hearing, vol. 45, no. 2, pp. 337–350, 2024. @article{Hartman2024,OBJECTIVES: Although cochlear implants (CIs) facilitate spoken language acquisition, many CI listeners experience difficulty learning new words. Studies have shown that highly variable stimulus input and audiovisual cues improve speech perception in CI listeners. However, less is known about whether these two factors improve perception in a word learning context. Furthermore, few studies have examined how CI listeners direct their gaze to efficiently capture visual information available on a talker's face. The purpose of this study was two-fold: (1) to examine whether talker variability could improve word learning in CI listeners and (2) to examine how CI listeners direct their gaze while viewing a talker speak. DESIGN: Eighteen adults with CIs and 10 adults with normal hearing (NH) learned eight novel word-object pairs spoken by a single talker or six different talkers (multiple talkers). The word learning task consisted of nonsense words following the phonotactic rules of English. Learning was probed using a novel talker in a two-alternative forced-choice eye gaze task. Learners' eye movements to the mouth and the target object (accuracy) were tracked over time. RESULTS: Both groups performed near ceiling during the test phase, regardless of whether they learned from the same talker or different talkers. However, compared to listeners with NH, CI listeners directed their gaze significantly more to the talker's mouth while learning the words. CONCLUSIONS: Unlike NH listeners who can successfully learn words without focusing on the talker's mouth, CI listeners tended to direct their gaze to the talker's mouth, which may facilitate learning. 
This finding is consistent with the hypothesis that CI listeners use a visual processing strategy that efficiently captures redundant audiovisual speech cues available at the mouth. Due to ceiling effects, however, it is unclear whether talker variability facilitated word learning for adult CI listeners, an issue that should be addressed in future work using more difficult listening conditions. |
Moritz Held; Jochem W. Rieger; Jelmer P. Borst Multitasking while driving: Central bottleneck or problem state interference? Journal Article In: Human Factors, vol. 66, no. 5, pp. 1564–1582, 2024. @article{Held2024,Objective: The objective of this work was to investigate if visuospatial attention and working memory load interact at a central control resource or at a task-specific, information processing resource during driving. Background: In previous multitasking driving experiments, interactions between different cognitive concepts (e.g., attention and working memory) have been found. These interactions have been attributed to a central bottleneck or to the so-called problem-state bottleneck, related to working memory usage. Method: We developed two different cognitive models in the cognitive architecture ACT-R, which implement the central vs. problem-state bottleneck. The models performed a driving task, during which we varied visuospatial attention and working memory load. We evaluated the model by conducting an experiment with human participants and compared the behavioral data to the model's behavior. Results: The problem-state-bottleneck model could account for decreased driving performance due to working memory load as well as increased visuospatial attentional demands as compared to the central-bottleneck model, which could not account for effects of increased working memory load. Conclusion: The interaction between working memory and visuospatial attention in our dual tasking experiment can be best characterized by a bottleneck in the working memory. The model results suggest that as working memory load becomes higher, drivers manage to perform fewer control actions, which leads to decreasing driving performance. Application: Predictions about the effect of different mental loads can be used to quantify the contribution of each subtask allowing for precise assessments of the current overall mental load, which automated driving systems may adapt to. |
Scott S. Hsieh; Akitoshi Inoue; Mariana Yalon; David A. Cook; Hao Gong; Parvathy Sudhir Pillai; Matthew P. Johnson; Jeff L. Fidler; Shuai Leng; Lifeng Yu; Rickey E. Carter; David R. Holmes; Cynthia H. McCollough; Joel G. Fletcher Targeted training reduces search errors but not classification errors for hepatic metastasis detection at contrast-enhanced CT Journal Article In: Academic Radiology, vol. 31, no. 2, pp. 448–456, 2024. @article{Hsieh2024,Rationale and Objectives: Methods are needed to improve the detection of hepatic metastases. Errors occur in both lesion detection (search) and decisions of benign versus malignant (classification). Our purpose was to evaluate a training program to reduce search errors and classification errors in the detection of hepatic metastases in contrast-enhanced abdominal computed tomography (CT). Materials and Methods: After Institutional Review Board approval, we conducted a single-group prospective pretest-posttest study. Pretest and posttest were identical and consisted of interpreting 40 contrast-enhanced abdominal CT exams containing 91 liver metastases under eye tracking. Between pretest and posttest, readers completed search training with eye-tracker feedback and coaching to increase interpretation time, use liver windows, and use coronal reformations. They also completed classification training with part-task practice, rating lesions as benign or malignant. The primary outcome was metastases missed due to search errors (<2 seconds gaze under eye tracker) and classification errors (>2 seconds). Jackknife free-response receiver operator characteristic (JAFROC) analysis was also conducted. Results: A total of 31 radiologist readers (8 abdominal subspecialists, 8 nonabdominal subspecialists, 15 senior residents/fellows) participated. Search errors were reduced (pretest 11%, posttest 8%, difference 3% [95% confidence interval, 0.3%-5.1%] |
Qian Huangfu; Hong Li; Yuanyuan Ban; Jiamei He An eye-tracking study on the effects of displayed teacher enthusiasm on students' learning procedural knowledge of chemistry in video lectures Journal Article In: Journal of Chemical Education, vol. 101, no. 2, pp. 259–269, 2024. @article{Huangfu2024,Teacher enthusiasm is known to affect students' learning in traditional classroom environments, but it is unclear how displayed teacher enthusiasm can optimize learning of chemistry procedural knowledge in multimedia learning environments. In this context, the present study used eye-tracking technology and quantitative analysis to examine how displayed teacher enthusiasm in video lectures affects students' positive emotions, visual attention, cognitive load, and learning outcomes. Measures were collected from 128 eighth-grade middle school students. An EyeLink 1000 Plus eye-tracker was used to capture the students' eye movements. The percentage of total fixation duration, percentage of fixation count, mean pupil size, and blink rate were used as metrics to analyze the eye-gaze data. The results showed that an enthusiastic teacher positively affected students' positive emotions, reduced students' cognitive load, and helped students concentrate more on the learning-content area. Additionally, a higher level of displayed teacher enthusiasm improved learners' learning outcomes. |
John Chi Wa Ko; Mandy M. Cheng; Wendy J. Green In: European Accounting Review, vol. 33, no. 4, pp. 2–28, 2024. @article{Ko2024,Using eye-tracking technology, we examine whether the information processing patterns of nonprofessional investors with a directional investment preference are affected by performance information presented based on either a focus on stakeholders (stakeholder format) or on strategic goals (strategic theme format). We find that when a company's financial performance has declined but nonfinancial performance has improved, a strategic theme (stakeholder) format causes investors in a long investment position to focus on negative financial information to a lesser (greater) extent than those in a short investment position. These results indicate that a strategic theme format encourages biased investors to draw on favorable nonfinancial information to support their position, whereas a stakeholder format causes them to closely scrutinize unfavorable financial information. We also find that the level of bias in investors' earnings forecasts is lower when information is presented in a strategic theme format than in a stakeholder format; however, a supplementary experiment finds that this result is reversed when a company's financial performance has improved but its nonfinancial performance has declined. Our results have implications for external report preparers, standard setters, and analysts. |
Aylin Koçak; Nicolas Dirix; Wouter Duyck; Maaike Schellaert; Eva Derous Older and younger job seekers' attention towards metastereotypes in job ads Journal Article In: PLoS ONE, vol. 19, no. 10, pp. 1–23, 2024. @article{Kocak2024,Building on social identity theory and cognitive models on information processing, the present paper considered whether and how stereotyped information in job ads impairs older/younger job seekers' job attraction. Two eye-tracking experiments with older (Study 1) and younger job seekers (Study 2) investigated effects of negatively metastereotyped personality requirements (i.e., traits) on job attraction and whether attention to and memory for negative information mediated these effects. Within-participants analyses showed for both older and younger job seekers that job attraction was lower when ads included negative metastereotypes and that more attention was allocated towards these negative metastereotypes. Older, but not younger, job seekers also better recalled these negative metastereotypes compared to non-negative metastereotypes. The effect of metastereotypes on job attraction was not mediated by attention or recall of information. Organizations should therefore avoid negative metastereotypes in job ads that may capture older/younger job seekers' attention and lower job attraction. |
Dimitrios Liaskos; Vassilios Krassanakis In: Multimodal Technologies and Interaction, vol. 8, no. 6, pp. 1–18, 2024. @article{Liaskos2024,In the present study, a new eye-tracking dataset (OnMapGaze) and a graph-based metric (GraphGazeD) for modeling visual perception differences are introduced. The dataset includes both experimental and analyzed gaze data collected during the observation of different cartographic backgrounds used in five online map services, including Google Maps, Wikimedia, Bing Maps, ESRI, and OSM, at three different zoom levels (12z, 14z, and 16z). The computation of the new metric is based on the utilization of aggregated gaze behavior data. Our dataset aims to serve as an objective ground truth for feeding artificial intelligence (AI) algorithms and developing computational models for predicting visual behavior during map reading. Both the OnMapGaze dataset and the source code for computing the GraphGazeD metric are freely distributed to the scientific community. |
Chenya Ma; Hang Zhou; Ling Wang; Yushi Jiang Who posts the advertisement: The influence of advertising authorship on in-feed advertising effectiveness Journal Article In: Journal of Consumer Behaviour, vol. 23, pp. 3030–3045, 2024. @article{Ma2024,Advertisers typically publish in-feed ads under two types of authorship: brand or influencer, yet little is known about how the effectiveness of in-feed ads differs between these two types of author. In this study, we investigated the interactions and mechanisms of ad authorship (brand vs. influencer) and brand type (luxury vs. mass) on advertising effectiveness, and tested the moderating effect of upward social comparison based on the stereotype content model. A pilot study, by coding the secondary data from Most Liked WeChat Moment Ads, found that a greater proportion of luxury (vs. mass) brands were authored by brands (vs. influencers). Study 1 used an eye-tracking technique to identify the interactive effect of ad authorship and brand type on visual attention. Study 2 further identified perceived competence and warmth as mediators. Study 3 verified the moderating effect of upward social comparison on the above effects. This paper contributes to the theoretical literature on in-feed advertising by showing the interactive effect of advertising authorship and brand types on advertising effectiveness. It also offers valuable insights for luxury or mass brands on strategically leveraging the brand itself or influencer for advertising. |
Tetsuya Sato; Austin Jackson; Yusuke Yamani Number of interrupting events influences response time in multitasking, but not trust in automation Journal Article In: International Journal of Aerospace Psychology, vol. 34, no. 4, pp. 208–224, 2024. @article{Sato2024,Objective: The present study examined how the number of interrupting events (interruption load) influences the effect of task load on human-automation trust and resource allocation in a low-fidelity flight simulation environment. Background: Trust is one critical factor that influences successful human-automation interaction. In previous research, operators reported lower trust scores and made fewer fixations toward an automated system, which assisted a task, when a competing task in the same workspace demanded more attention from the operator. However, it is unclear whether human-automation trust is influenced by frequent shifts of attention away from a task assisted by an automated signaling system. Methods: Participants concurrently performed a tracking task, a system monitoring task, and a communication task. An automated signaling system was employed to assist the system monitoring task with 70% reliability. Task load was manipulated by the difficulty of the tracking task while interruption load was manipulated by varying the frequency of auditory messages in the communication task. Results: Results demonstrated an effect of task load on human-automation trust and resource allocation, replicating previous findings. Further, participants responded faster to an auditory message that occurred less frequently when performing a tracking task at the low difficulty level, but automation trust did not vary. Conclusion: While operators reported higher trust levels in imperfect automation under lower task load, the number of interrupting events does not influence their trust. |
Mustafa Shirzad; James Van Riesen; Nikan Behboodpour; Matthew Heath 10-min exposure to a 2.5% hypercapnic environment increases cerebral blood flow but does not impact executive function Journal Article In: Life Sciences in Space Research, vol. 40, pp. 143–150, 2024. @article{Shirzad2023,Space travel and exploration are associated with increased ambient CO2 (i.e., a hypercapnic environment). Some work reported that the physiological changes (e.g., increased cerebral blood flow [CBF]) associated with a chronic hypercapnic environment contribute to a “space fog” that adversely impacts cognition and psychomotor performance, whereas other work reported no change or a positive change. Here, we employed the antisaccade task to evaluate whether transient exposure to a hypercapnic environment influences top-down executive function (EF). Antisaccades require a goal-directed eye movement mirror-symmetrical to a target and are an ideal tool for identifying subtle EF changes. Healthy young adults (aged 19–25 years) performed blocks of antisaccade trials prior to (i.e., pre-intervention), during (i.e., concurrent) and after (i.e., post-intervention) 10 min of breathing fractional inspired CO2 (FiCO2) of 2.5% (i.e., hypercapnic condition) and during a normocapnic (i.e., control) condition. In both conditions, CBF, ventilatory and cardiorespiratory responses were measured. Results showed that the hypercapnic condition increased CBF, ventilation and end-tidal CO2 and thus demonstrated an expected physiological adaptation to increased FiCO2. Notably, however, null hypothesis and equivalence tests indicated that concurrent and post-intervention antisaccade reaction times were refractory to the hypercapnic environment; that is, transient exposure to a FiCO2 of 2.5% did not produce a real-time or lingering influence on an oculomotor-based measure of EF. 
Accordingly, results provide a framework that – in part – establishes the FiCO2 percentage and timeline by which high-level EF can be maintained. Future work will explore CBF and EF dynamics during chronic hypercapnic exposure as more direct proxy for the challenges of space flight and exploration. |
Agnieszka Szarkowska; Valentina Ragni; David Orrego-Carmona; Sharon Black; Sonia Szkriba; Jan-Louis Kruger; Krzysztof Krejtz; Breno Silva The impact of video and subtitle speed on subtitle reading: An eye-tracking replication study Journal Article In: Journal of Audiovisual Translation, vol. 7, no. 1, pp. 1–23, 2024. @article{Szarkowska2024,We present results of a direct replication of Liao et al.'s (2021) study on how subtitle speed and the presence of concurrent video impact subtitle reading among British and Polish viewers. Our goal was to assess the generalisability of the original study's findings, which were obtained with a cohort of Australian English speakers. The study explored both subtitle-level and word-level effects, considering the presence or absence of concurrent video and three subtitle speeds: 12 characters per second, 20 cps, and 28 cps. Overall, most of the original results were replicated, confirming that the presence of video and the speed of the subtitles have a measurable impact on processing across different viewer groups. Additionally, differences in how native and non-native speakers process subtitles emerged, in particular related to wrap-up, word frequency and word length effects. The paper describes the replication in detail, presents the findings, and discusses some of their implications. Lay summary In our study we were interested in the effects that the presence of video and various subtitle speeds have on how viewers watch subtitled videos and how they understand them. We also wanted to know if the previous results obtained in a study by Liao et al. (2021) in Australia hold true for other viewers living in different locations. With this goal in mind, we repeated Liao et al.'s (2021) study on British and Polish viewers. The study explored both subtitle-level and word-level effects, considering the presence or absence of video and three subtitle speeds: 12 characters per second, 20 cps, and 28 cps. 
Overall, most of the original results were confirmed, showing that the presence of video and the speed of the subtitles have an impact on processing across different viewer groups. Additionally, differences in how native and non-native speakers process subtitles emerged, in particular related to well-known linguistic effects from reading studies, such as wrap-up, word frequency and word length effects. The paper describes the replication in detail, presents the findings, and discusses some of their implications. |
Agnieszka Szarkowska; Valentina Ragni; Sonia Szkriba; Sharon Black; David Orrego-Carmona; Jan Louis Kruger In: PLoS ONE, vol. 19, no. 10, pp. 1–29, 2024. @article{Szarkowska2024a,Every day, millions of viewers worldwide engage with subtitled content, and an increasing number choose to watch without sound. In this mixed-methods study, we examine the impact of sound presence or absence on the viewing experience of both first-language (L1) and second-language (L2) viewers when they watch subtitled videos. We explore this novel phenomenon through comprehension and recall post-tests, self-reported cognitive load, immersion, and enjoyment measures, as well as gaze pattern analysis using eye tracking. We also investigate viewers' motivations for opting for audiovisual content without sound and explore how the absence of sound impacts their viewing experience, using in-depth, semi-structured interviews. Our goal is to ascertain whether these effects are consistent among L2 and L1 speakers from different language varieties. To achieve this, we tested L1-British English, L1-Australian English and L2-English (L1-Polish) language speakers (n = 168) while they watched English-language audiovisual material with English subtitles with and without sound. The findings show that when watching videos without sound, viewers experienced increased cognitive load, along with reduced comprehension, immersion and overall enjoyment. Examination of participants' gaze revealed that the absence of sound significantly affected the viewing experience, increasing the need for subtitles and thus increasing the viewers' propensity to process them more thoroughly. The absence of sound emerged as a global constraint that made reading more effortful. Triangulating data from multiple sources made it possible to tap into some of the metacognitive strategies employed by viewers to maintain comprehension in the absence of sound. 
We discuss the implications within the context of the growing trend of watching subtitled videos without sound, emphasising its potential impact on cognitive processes and the viewing experience. |
Jiahui Wang Mind wandering in videos that integrate instructor's visuals: An eye tracking study Journal Article In: Innovations in Education and Teaching International, vol. 61, no. 5, pp. 972–987, 2024. @article{Wang2024a,With an increasing number of videos integrating instructor's visuals on screen, we know little about the impacts of this design on mind wandering. The study aims to investigate a) how instructor visibility impacts mind wandering; b) the relationship between mind wandering and retention performance; c) how visual behaviour during video-watching influences mind wandering. Each participant watched a video with or without instructor visibility, while their visual behaviour was recorded by an eye tracker. Retention performance was measured at the completion of the video. Mind wandering was inferred via a global self-report measure and an objective eye-tracking measure. Both measures of mind wandering indicated the instructor-visible video resulted in less mind wandering. Findings suggested mind wandering impaired retention performance. Additionally, visual attention to the instructor was associated with less mind wandering. |
Jiahui Wang Does working memory capacity influence learning from video and attentional processing of the instructor's visuals? Journal Article In: Behaviour & Information Technology, vol. 43, no. 1, pp. 95–109, 2024. @article{Wang2024c,Existing evidence suggested learners with differences in attention and cognition might respond to the same media in differential ways. The current study focused on one video design format, instructor visibility, and explored the moderating effects of working memory capacity on learning from such video design and whether learners with high and low working memory capacity attended to the instructor's visuals differently. Participants watched a video either with or without the instructor's visuals on the screen, while their visual attention was recorded simultaneously. After the video, participants responded to a learning test that measured retention and transfer. Although the results did not show working memory capacity moderated the instructor visibility effects on learning or influenced learners' visual attention to the instructor's visuals, the findings did indicate working memory capacity was a positive predictor of retention performance regardless of the video design. Discussions and implications of the findings were provided. |
Pengchao Wang; Wei Mu; Gege Zhan; Aiping Wang; Zuoting Song; Tao Fang; Xueze Zhang; Junkongshuai Wang; Lan Niu; Jianxiong Bin; Lihua Zhang; Jie Jia; Xiaoyang Kang Preference detection of the humanoid robot face based on EEG and eye movement Journal Article In: Neural Computing and Applications, vol. 36, no. 19, pp. 11603–11621, 2024. @article{Wang2024f,The face of a humanoid robot can affect the user experience, and the detection of face preference is particularly important. Preference detection belongs to a branch of emotion recognition that has received much attention from researchers. Most previous preference detection studies have been conducted based on a single modality. In this paper, we detect face preferences of humanoid robots based on electroencephalogram (EEG) signals and eye movement signals, using a single modality, a canonical correlation analysis fusion modality, and a bimodal deep autoencoder (BDAE) fusion modality, respectively. We validated the theory of frontal asymmetry by analyzing the preference patterns of EEG and found that participants had higher alpha wave energy for preferred faces. In addition, hidden preferences extracted by EEG signals were better classified than preferences from participants' subjective feedback, and the classification performance of eye movement data was also improved. Finally, experimental results showed that BDAE multimodal fusion using frontal alpha and beta power spectral densities and eye movement information as features performed best, with the highest average accuracy of 83.13% for the SVM and 71.09% for the KNN. |
Aengus Ward; Shiyu He Medieval reading in the twenty-first century? Journal Article In: Digital Scholarship in the Humanities, vol. 39, no. 4, pp. 1134–1155, 2024. @article{Ward2024,Reading practices in medieval manuscripts have often been the subject of critical analysis in the past. Recent technological developments have extended the range of analytical possibilities; one such development is that of eye tracking. In the present article, we outline the results of an experiment using eye-tracking technologies which was carried out recently in Spain. The analysis points to particular trends in the ways in which modern readers interact with medieval textual forms, and we use this analysis to point to future possibilities in the use of eye tracking to broaden and deepen our understanding of the workings of the medieval page. |
Tiansheng Xia; Yingqi Yan; Jiayue Guo Color in web-advertising: The effect of color hue contrast on web satisfaction and advertising memory Journal Article In: Current Psychology, vol. 43, no. 16, pp. 14645–14658, 2024. @article{Xia2024b,There has been a growth in e-commerce, presenting consumers with varied forms of advertising. A key goal of web advertising is to leave a lasting impression on the user, and web satisfaction is an important measure of the quality and usability of a web page after an ad is placed on it. This experiment manipulated participants' purpose in web browsing (free browsing versus goal oriented) and the color combination of the web background and the vertical-ad background (high or low hue contrast) to predict users' satisfaction with the web page and the degree of ad recall. The psychological mechanisms of this effect were also explored using an eye-tracking device to record and analyze eye movements. The participants were 120 university students, 64.2% of whom were female and 35.8% of whom were male. During free browsing, participants could simulate the daily use of a browser to browse the web and were given 120 s to do so, and in the task-oriented browsing condition, participants were told in advance that they had to summarize the headlines of each news item one at a time within 120 s. The results showed that, in the free-viewing task, the hue contrast between the ad–web background colors negatively affected web satisfaction and ad memory whereas there was no significant difference in this effect in the goal-oriented task. Furthermore, in the free-viewing task, the level of attentional intrusion mediated the effect of ad–web hue contrast on the degree of ad recall; color harmony mediated the effect of hue contrast on the user's evaluation of web satisfaction. These results can act as a new reference for web design research and marketing practice. |
Xinyong Zhang Evaluating target expansion for eye pointing tasks Journal Article In: Interacting with Computers, vol. 36, no. 4, pp. 209–223, 2024. @article{Zhang2024c,The idea of target expansion was proposed two decades ago for manual target acquisition, but it is not feasible to implement this idea in traditional user interfaces as the interactive system cannot know exactly which target is the desired one and should be expanded among several candidates. With the increasing maturity of eye tracking technology, gaze input has moved from an academically promising technique to an input method with built-in support in Windows 10; and target expansion has already become very feasible in the context of gaze input, as the user's eye gaze is inherently an indicator of the desired target due to the natural eye-hand coordination in everyday tasks. However, a comprehensive evaluation is still lacking. In this study, two experiments were conducted, each with a different group of subjects, to investigate the effects of target expansion under different expansion feedback styles (visible vs. invisible), expansion factors, as well as different target appearances (i.e., circular vs. rectangular). The experimental results indicated that (1) the index of difficulty in eye pointing tasks (IDeye) does not depend on the initial size of the target, but on its final size, and that the corresponding human performance can be accurately predicted using the IDeye model instead of Fitts' law; and that (2) the visible expansion style could disrupt the user's fixations, making the measured human performance less efficient to some extent, but overall the theoretical predictions using the IDeye model were almost the same as the baselines. Following the experimental results, this study also provided some practical suggestions for UI design. |
2023 |
Břetislav Andrlík; Stanislav Mokrý; Petr David The indirect administrative burden of road tax proposal: Survey and eye-tracking experiment in the Czech Republic Journal Article In: Acta academica karviniensia, vol. 23, no. 2, pp. 5–17, 2023. @article{Andrlik2023,This paper presents the results of an eye-tracking experiment to uncover the indirect administrative burden related to a general road tax. The aim of the research was to use eye-tracking technology and a supplementary questionnaire survey to identify the time taken to complete the model road tax return form used in the Czech Republic. Simultaneously, the aim was to test the possibility of using neuroscience technology for research in the field of accounting. In the experiment, we focused on the following factors: the effect of the number of vehicles; the effect of gender; the context of educational attainment; previous experience of completing the tax return form; car ownership, and their effect on the dependent variable - total time to complete the form. According to our results, filling in a higher number of vehicles brings time savings of scale for the participants. For this reason, a regressive impact of the administrative burden generated can be assumed, with a more significant impact on owners of a lower number of vehicles. |
Sourish Chakravarty; Jacob Donoghue; Ayan S. Waite; Meredith Mahnke; Indie C. Garwood; Sebastian Gallo; Earl K. Miller; Emery N. Brown Closed-loop control of anesthetic state in nonhuman primates Journal Article In: PNAS Nexus, vol. 2, no. 10, pp. 1–14, 2023. @article{Chakravarty2023,Research in human volunteers and surgical patients has shown that unconsciousness under general anesthesia can be reliably tracked using real-time electroencephalogram processing. Hence, a closed-loop anesthesia delivery (CLAD) system that maintains precisely specified levels of unconsciousness is feasible and would greatly aid intraoperative patient management. The US Food and Drug Administration has approved no CLAD system for human use, due partly to a lack of testing in appropriate animal models. To address this key roadblock, we implement a nonhuman primate (NHP) CLAD system that controls the level of unconsciousness using the anesthetic propofol. The key system components are a local field potential (LFP) recording system; propofol pharmacokinetic and pharmacodynamic models; the control variable (LFP power between 20 and 30 Hz); a programmable infusion system; and a linear quadratic integral controller. Our CLAD system accurately controlled the level of unconsciousness along two different 125-min dynamic target trajectories for 18 h and 45 min in nine experiments in two NHPs. System performance measures were comparable or superior to those in previous CLAD reports. We demonstrate that an NHP CLAD system can reliably and accurately control, in real time, unconsciousness maintained by anesthesia. Our findings establish critical steps for CLAD systems' design and testing prior to human testing. |
Jacky R. Claydon; Matthew C. Fysh; Jonathan E. Prunty; Filipe Cristino; Reuben Moreton; Markus Bindemann Facial comparison behaviour of forensic facial examiners Journal Article In: Applied Cognitive Psychology, vol. 37, no. 1, pp. 6–25, 2023. @article{Claydon2023,Facial examiners make visual comparisons of face images to establish the identities of persons in police investigations. This study utilised eye-tracking and an individual differences approach to investigate whether these experts exhibit specialist viewing behaviours during identification, by comparing facial examiners with forensic fingerprint analysts and untrained novices across three tasks. These comprised face matching under unlimited (Experiment 1) and time-restricted viewing (Experiment 2), and with a feature-comparison protocol derived from examiner casework procedures (Experiment 3). Facial examiners exhibited individual differences in facial comparison accuracy and did not consistently outperform fingerprint analysts and novices. Their behaviour was also marked by similarities to the comparison groups in terms of how faces were viewed, as evidenced from eye movements, and how faces were perceived, based on the feature judgements and identification decisions made. These findings further our understanding of how facial comparisons are performed and clarify the nature of examiner expertise. |
Tao Deng; Lianfang Jiang; Yi Shi; Jiang Wu; Zhangbi Wu; Shun Yan; Xianshi Zhang; Hongmei Yan Driving visual saliency prediction of dynamic night scenes via a spatio-temporal dual-encoder network Journal Article In: IEEE Transactions on Intelligent Transportation Systems, pp. 1–11, 2023. @article{Deng2023a,Driving at night is more challenging and dangerous than driving during the day. Modeling driver eye movement and attention allocation during night driving can help guide unmanned intelligent vehicles and improve safety during similar situations. However, until now, few studies have modeled drivers' true fixations and attention allocation in specific night circumstances. Therefore, we collected an eye tracking dataset from 30 experienced drivers while they viewed night driving videos under a hypothetical driving condition, termed Driver Fixation Dataset in night (DrFixD(night)). Based on DrFixD(night), which includes multiple drivers' attention allocation, we proposed a spatio-temporal dual-encoder network model, named STDE-Net, to improve saliency detection in night driving conditions. The model includes three modules: i) spatio-temporal dual encoding module, ii) fusion module based on attention mechanism, and iii) decoding module. A convolutional LSTM is employed to learn the time connection of video sequences, and a convolutional neural network combined with pyramid dilated convolution is adopted to extract spatial features in the spatio-temporal dual encoding module. The attention mechanism is exploited to fuse the temporal and spatial features together and selectively highlight the significant features in the night traffic scene. We compared the proposed model with other traditional methods and deep learning models, both qualitatively and quantitatively, and found that the proposed model can predict drivers' fixations more accurately. 
Specifically, the proposed model not only predicts the main goals, but also predicts the important sub goals, such as pedestrians, bicycles and so on, showing excellent prediction of dimly lit targets at night. Index |
Rui Fu; Tao Huang; Mingyue Li; Qinyu Sun; Yunxing Chen In: Expert Systems with Applications, vol. 214, pp. 1–12, 2023. @article{Fu2023,The prediction of the driver's focus of attention (DFoA) is becoming essential research for driver distraction detection and intelligent vehicles. Therefore, this work attempts to predict DFoA. However, the traffic driving environment is a complex and dynamically changing scene. Existing methods do not fully utilize driving scene information and ignore the importance of different objects or regions of the driving scene. To alleviate this, we propose a multimodal deep neural network based on an anthropomorphic attention mechanism and prior knowledge (MDNN-AAM-PK). Specifically, more comprehensive driving-scene information (RGB images, semantic images, optical flow images and depth images of successive frames) serves as the input of MDNN-AAM-PK. An anthropomorphic attention mechanism is developed to calculate the importance of each pixel in the driving scene. A graph attention network is adopted to learn semantic context features. A convolutional long short-term memory network (ConvLSTM) is used to achieve the transition of fused features across successive frames. Furthermore, a training method based on prior knowledge is designed to improve the efficiency of training and the performance of DFoA prediction. Experiments, including comparison with state-of-the-art methods, an ablation study of the proposed method, evaluation on different datasets and a visual assessment experiment on a vehicle simulation platform, show that the proposed method can accurately predict DFoA and is better than state-of-the-art methods. |
Zhibing Gao; Ziang Li; Xiangling Zhuang; Guojie Ma Advantages of graphical nutrition facts label: Faster attention capture and improved healthiness judgement Journal Article In: Ergonomics, vol. 66, no. 5, pp. 627–643, 2023. @article{Gao2023a,Consumers have to rely on the traditional back-of-package nutrition facts label (NFL) to obtain nutrition information in many countries. However, traditional NFLs have been criticised for their poor visualisation and low efficiency. This study redesigned back-of-package NFLs integrated with bar graphs (black or coloured) to visually indicate nutrient reference values (NRVs). Two eye movement studies were performed to evaluate the ergonomic advantages of the graphical NFLs. Our findings suggested that the newly designed NFLs led to faster and better healthiness evaluation performance. The newly designed graphical labels led to a shorter time to first fixation and offered a higher percentage of fixation time in the nutrient reference values region compared with that observed using traditional text labels. Nowadays, many chronic diseases are associated with poor eating habits; therefore, policymakers and food manufacturers should pay more attention to visualisation designs that nudge healthier food choices. |
Beatriz García-Carrión; Salvador Del Barrio-García; Francisco Muñoz-Leiva; Lucia Porcu In: Journal of Hospitality and Tourism Management, vol. 55, no. August 2022, pp. 78–90, 2023. @article{GarciaCarrion2023,Social networks are a source of competitive advantage for destination management organizations (DMOs) in promoting user-generated content. In the online environment, the generational cohort to which the user belongs significantly determines their motivations, preferences, and behaviors. Against this backdrop, and in the context of culinary tourism, the present work aims to: (1) examine how the degree of congruence between the messages that tourists receive from DMOs and from other tourists through social network comments affects their attention and affective responses; (2) analyze the effect of generational cohort on user responses; (3) investigate the differences in gastronomy-related messages between generational cohorts according to different levels of congruence. An eye-tracking experiment manipulates message congruence (high vs. low) and the user's generational cohort (Millennials vs. Generation Z). Findings show faster attention-capture and higher cognitive processing for low-congruence gastronomy-related comments in both cohorts, while Generation Z users reported greater attention to culinary visuals. |
Jessica N. Goetz; Mark B. Neider Keep it real, keep it simple: The effects of icon characteristics on visual search Journal Article In: Behaviour & Information Technology, pp. 1–20, 2023. @article{Goetz2023,Previous research examining how icons' concreteness, visual complexity, and distinctiveness influence visual search performance has led to disagreements over which icon characteristic most affects behaviour. These icon characteristics are often poorly defined and interrelated, particularly concreteness. Accordingly, drawing strong inferences about the robustness of concreteness as a factor in search for visual icons is challenging. Here, we operationalised concreteness into three distinct levels: concrete icons were images of real-world objects, photorealistic icons were drawings of the object, and abstract icons were images with no conceptual information. Across two experiments, participants rated each icon on various icon characteristics (e.g. concreteness, visual complexity) to provide a ground truth for these factors and to validate our concreteness manipulation. In a separate study, naive participants performed a visual search task for a target icon. Oculomotor measures were utilised to elucidate how various icon characteristics affected search performance. Although we were unable to fully disassociate concreteness from visual complexity, we found that icons high in concreteness improved search performance, but as visual complexity increased, object identification became slower. This was largely demonstrated through increased verification times for complex targets. The present set of studies indicates that highly concrete and simple icons engender search benefits. |
Pei Hsuan Hsieh; Po I. Hsu Displaying software installation agreements to motivate users' reading Journal Article In: International Journal of Human-Computer Interaction, vol. 39, no. 20, pp. 4006–4023, 2023. @article{Hsieh2023a,The purpose of this study is to identify an effective display mode that best motivates software users to read software installation agreements before downloading, thereby enhancing their understanding of intellectual property rights and preventing potential legal issues. This study randomly assigned participants to either an eye-tracking or a computer-based experiment in which one of three display modes was presented. A computer-based pre-test and post-test related to intellectual property rights were given to the participants. The final results showed that the “keyword mode” was the most effective in keeping users' attention on the key content. The results of a survey about software installation experiences and attitudes toward reading software installation agreements, together with follow-up interviews, confirmed the experimental findings. The study's contribution lies in revealing to software providers the display mode that best enhances software users' understanding of the relevant moral and legal concepts. |
Jia Jin; Chenchen Lin; Fenghua Wang; Ting Xu; Wuke Zhang A study of cognitive effort involved in the framing effect of summary descriptions of online product reviews for search vs. experience products Journal Article In: Electronic Commerce Research, vol. 23, no. 2, pp. 785–806, 2023. @article{Jin2023b,Few studies have focused on summary descriptions of online product reviews regarding purchase decisions, and there is a gap between individual product reviews and summary descriptions of online product reviews. The current study applied eye-tracking to explore how the product type moderates the framing effect of summary descriptions of product reviews on e-consumers' purchase decisions. The results showed that product type moderated the framing effect of summary reviews on e-consumers' purchase intention. Specifically, for search products, compared with a negative frame, a positive frame increased e-consumers' attention to function attributes and led to higher purchase intention. However, with experience products, e-consumers' attention and purchase intention did not vary across framing messages. Referring to information asymmetry theory and signal theory, we posit that the cognitive effort involved in summary review information is high for search products and low for experience products since summary reviews are a more useful signal in reducing information asymmetry for search products than for experience products. The theoretical and practical implications are also discussed. |
Jia Jin; Ailian Wang; Cuicui Wang; Qingguo Ma How do consumers perceive and process online overall vs. individual text-based reviews? Behavioral and eye-tracking evidence Journal Article In: Information and Management, vol. 60, no. 5, pp. 1–13, 2023. @article{Jin2023a,Building on the Heuristic-Systematic model, we use a survey and two eye-tracking experiments to investigate consumers' perceived usefulness of overall and individual text-based reviews (OTRs vs. ITRs) for search vs. experience products, and their information-processing features. Results indicate that OTRs show higher usefulness than ITRs, regardless of product type. ITRs are perceived to be more useful for experience products than for search products. Furthermore, two eye-tracking studies confirm these results from a physiological standpoint and reveal the attentional allocation during information processing. OTRs affect subjects' processing of ITR information differently when purchasing search vs. experience products. |
Clare Kirtley; Christopher Murray; Phillip B. Vaughan; Benjamin W. Tatler Navigating the narrative: An eye-tracking study of readers' strategies when reading comic page layouts Journal Article In: Applied Cognitive Psychology, vol. 37, no. 1, pp. 52–70, 2023. @article{Kirtley2023,In multimedia stimuli (e.g., comics), the reader must follow a narrative in which text and image both contribute information, and artists may use more irregular layouts which must still be followed correctly. While previous work has found that the external structure (outlines) of panels is a major contributor to navigation decisions in comics, other studies have shown that panel content can affect reading order. The present studies use eye-tracking to investigate these contributions further. In Experiment 1, the reading behaviors on six layout variations were compared. The influence of the external structure was replicated, but an effect of text location was also found for one layout type. Experiment 2 focused on variations of this particular layout, manipulating the location of text within critical panels. Panel content had a consistent effect for all variations. While most navigation decisions are made using the external structure, content becomes key when resolving ambiguous layouts. |
Yongkai Li; Shuai Zhang; Gancheng Zhu; Zehao Huang; Rong Wang; Xiaoting Duan; Zhiguo Wang A CNN-based wearable system for driver drowsiness detection Journal Article In: Sensors, vol. 23, no. 7, 2023. @article{Li2023l,Drowsiness poses a serious challenge to road safety and various in-cabin sensing technologies have been experimented with to monitor driver alertness. Cameras offer a convenient means for contactless sensing, but they may violate user privacy and require complex algorithms to accommodate user (e.g., sunglasses) and environmental (e.g., lighting conditions) constraints. This paper presents a lightweight convolutional neural network that measures eye closure based on eye images captured by a wearable glass prototype, which features a hot mirror-based design that allows the camera to be installed on the glass temples. The experimental results showed that the wearable glass prototype, with the neural network at its core, was highly effective in detecting eye blinks. The blink rate derived from the glass output was highly consistent with that from an industry gold-standard EyeLink eye tracker. As eye blink characteristics are sensitive measures of driver drowsiness, the glass prototype and the lightweight neural network presented in this paper provide a computationally efficient yet viable solution for real-world applications. |
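The blink rate reported in the abstract above is derived from a per-frame eye-closure signal. As a hedged illustration of that downstream step (not the authors' code — the function name, threshold, and frame rate are assumptions for the sketch), a blink can be counted as each contiguous run of frames where closure crosses a threshold:

```python
def detect_blinks(closure, threshold=0.5, fps=30.0):
    """Count blinks in a per-frame eye-closure signal.

    closure: sequence of values in [0, 1], where 1.0 means fully closed
             (e.g., the per-frame output of an eye-closure CNN).
    A blink is a contiguous run of frames at or above `threshold`.
    Returns (blink_count, blink_rate_per_minute).
    """
    count = 0
    closed = False
    for value in closure:
        if value >= threshold and not closed:
            count += 1          # rising edge: a new blink starts
            closed = True
        elif value < threshold:
            closed = False      # eye reopened
    minutes = len(closure) / fps / 60.0
    rate = count / minutes if minutes > 0 else 0.0
    return count, rate
```

For example, a 30-frame signal containing two closure episodes yields a count of 2; comparing such counts against a reference eye tracker is how consistency claims like the one above are typically evaluated.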
Sotiris Plainis; Emmanouil Ktistakis; Miltiadis K. Tsilimbaris Presbyopia correction with multifocal contact lenses: Evaluation of silent reading performance using eye movements analysis Journal Article In: Contact Lens and Anterior Eye, vol. 46, no. 4, pp. 1–8, 2023. @article{Plainis2023,Purpose: Many activities of daily living rely on reading, thus it is not surprising that presbyopes' complaints originate in reading difficulties rather than in visual acuity. Here, the effectiveness of presbyopia correction with multifocal contact lenses (CLs) is evaluated using an eye-fixation-based method of silent reading performance. Methods: Visual performance of thirty presbyopic volunteers (age: 50 ± 5 yrs) was assessed monocularly and binocularly following 15 days of wear of monthly disposable CLs (AIR OPTIX™ plus HydraGlyde™, Alcon Laboratories) with: (a) single-vision (SV) lenses, uncorrected for near, and (b) aspheric multifocal (MF) CLs. LogMAR acuity was measured with ETDRS charts. Reading performance was evaluated using standard IReST paragraphs displayed on a screen (0.4 logMAR print size at 40 cm distance). Eye movements were monitored with an infrared eye tracker (EyeLink II, SR Research Ltd). Data analysis included computation of reading speed, fixation duration, fixations per word and percentage of regressions. Results: Average reading speed was 250 ± 68 and 235 ± 70 wpm, binocularly and monocularly, with SV CLs, improving statistically significantly to 280 ± 67 (p = 0.002) and 260 ± 59 wpm (p = 0.01), respectively, with MF CLs. Moreover, fixation duration, fixations per word and the ex-Gaussian parameter of fixation duration, μ, showed a statistically significant improvement when reading with MF CLs, with fixation duration exhibiting the stronger correlation (r = 0.79, p < 0.001) with improvement in reading speed. The correlation between improvement in VA and reading speed was moderate (r = 0.46 |
Kathryn Nicole Sam; K. Jayasankara Reddy The effect of music and editing style on subjective perception of time when watching videos Journal Article In: Projections, vol. 17, no. 2, pp. 41–61, 2023. @article{Sam2023,Arousal, editing style, and eye movements have been implicated in time perception when watching videos. However, little multimodal research has explored how manipulating both the auditory and visual properties of videos affects temporal processing. This study investigated how editing density and music-induced arousal affect viewers' time perception. Thirty-nine participants watched six videos varying in editing density and music while their eye movements were recorded. They estimated the videos' duration and reported their subjective experience of time passage and emotional involvement. Fast-paced editing was associated with the feeling of time passing faster, a relationship mediated by fixation durations. High-arousal background music was also associated with the feeling of time passing faster. The consequences of this study in terms of a possible auditory driving effect are explored. |
Yuanping Shen; Qin Wang; Hongli Liu; Jianye Luo; Qunyue Liu; Yuxiang Lan Landscape design intensity and its associated complexity of forest landscapes in relation to preference and eye movements Journal Article In: Forests, vol. 14, no. 4, pp. 1–16, 2023. @article{Shen2023b,Understanding how people perceive landscapes is essential for the design of forest landscapes. The study investigates how design intensity affects landscape complexity, preference, and eye movements for urban forest settings. Eight groups of pictures (twenty-four in total), representing lawn, path, and waterscape settings in urban forests, were selected, with two groups of pictures per setting type and four pictures per group. The four pictures in each group were classified into slight, low, medium, and high design intensities. A total of 76 students were randomly assigned to observe one group of pictures within each type of landscape with an eye-tracking apparatus and to give ratings of complexity and preference. The results indicate that design intensity was positively associated with subjective landscape complexity but was positively or negatively related to objective landscape complexity in the three types of settings. Subjective landscape complexity was found to significantly contribute to visual preference across landscape types, while objective landscape complexity did not contribute to preference. In addition, the marginal effect of medium design intensity on preference was greater than that of low and high design intensity in most cases. Moreover, although some eye movement metrics were significantly related to preference in lawn settings, none were found to be indicative predictors for preference. The findings enrich research in visual preference and assist landscape designers during the design process to effectively arrange landscape design intensity in urban forests. |
Katrine Falcon Soby; Evelyn Arko Milburn; Line Burholt Kristensen; Valentin Vulchanov; Mila Vulchanova In the native speaker's eye: Online processing of anomalous learner syntax Journal Article In: Applied Psycholinguistics, vol. 44, no. 1, pp. 1–28, 2023. @article{Soby2023,How do native speakers process texts with anomalous learner syntax? Second-language learners of Norwegian, and other verb-second (V2) languages, frequently place the verb in third position (e.g., *Adverbial-Subject-Verb), although it is mandatory for the verb in these languages to appear in second position (Adverbial-Verb-Subject). In an eye-tracking study, native Norwegian speakers read sentences with either grammatical V2 or ungrammatical verb-third (V3) word order. Unlike previous eye-tracking studies of ungrammaticality, which have primarily addressed morphosyntactic anomalies, we exclusively manipulate word order with no morphological or semantic changes. We found that native speakers reacted immediately to ungrammatical V3 word order, indicated by increased fixation durations and more regressions out on the subject, and subsequently on the verb. Participants also recovered quickly, already on the following word. The effects of grammaticality were unaffected by the length of the initial adverbial. The study contributes to future models of sentence processing, which should be able to accommodate various types of noisy input, that is, non-standard variation. Together with new studies of the processing of other L2 anomalies in Norwegian, the current findings can help language instructors and students prioritize which aspects of grammar to focus on. |
Wenfang Song; Xinze Xie; Wenyue Huang; Qianqian Yu The design of automotive interior for Chinese young consumers based on Kansei engineering and eye-tracking technology Journal Article In: Applied Sciences, vol. 13, no. 19, pp. 1–20, 2023. @article{Song2023,Reasonable CMF (Color, Material and Finishing) design for automotive interiors can elicit positive psychophysical and affective responses from customers, providing an important guideline for automobile enterprises making differentiated products. However, current studies mainly focus on one aspect of CMF design or a single style of automotive interior, and examine the design mainly through human visual perception. Systematic studies on the design and evaluation of automobile interior CMF are lacking, and more scientific evaluation of the design through human visual and tactile perception is required. Therefore, this study systematically designed automobile interior CMF based on Kansei engineering and eye-tracking technology. The study consists of five steps: (1) Product positioning: young Chinese consumers, new energy vehicles, and the bridge and seat were selected as the target users, the automotive model and the key interior components, respectively. (2) Kansei physiological measurement: nine groups of Kansei words and thirty-three interior samples were selected, and the interior samples were scored on the Kansei words. (3) Kansei data analysis: three design types were determined, i.e., “hard and stately”, “concise and technological” and “comfortable and safe”. Meanwhile, the CMF design elements of the automotive interiors under the three styles were obtained through mathematical methods. (4) Design practice: four CMF samples under each design style (12 samples) were developed. (5) Kansei evaluation: the design samples were evaluated using eye-tracking technology, and the optimal sample that best satisfies the user's Kansei requirements under each style was obtained. The proposed design process of automotive interior CMF may have great implications for the design of automotive interiors. |
Chaitanya Thammineni; Hemanth Manjunatha; Ehsan T. Esfahani Selective eye-gaze augmentation to enhance imitation learning in Atari games Journal Article In: Neural Computing and Applications, vol. 35, no. 32, pp. 23401–23410, 2023. @article{Thammineni2023,This paper presents the selective use of eye-gaze information in learning human actions in Atari games. Extensive evidence suggests that our eye movements convey a wealth of information about the direction of our attention and mental states and encode the information necessary to complete a task. Based on this evidence, we hypothesize that selective use of eye-gaze, as a clue for attention direction, will enhance learning from demonstration. For this purpose, we propose a selective eye-gaze augmentation (SEA) network that learns when to use the eye-gaze information. The proposed network architecture consists of three sub-networks: gaze prediction, gating, and action prediction networks. Using the prior four game frames, the gaze prediction network predicts a gaze map, which is used to augment the input frame. The gating network determines whether the predicted gaze map should be used in learning; its output is fed to the final network to predict the action at the current frame. To validate this approach, we use the publicly available Atari Human Eye-Tracking And Demonstration (Atari-HEAD) dataset, which consists of 20 Atari games with 28 million human demonstrations and 328 million eye-gazes (over game frames) collected from four subjects. We demonstrate the efficacy of selective eye-gaze augmentation compared to the state-of-the-art Attention Guided Imitation Learning (AGIL) and Behavior Cloning (BC). The results indicate that the selective augmentation approach (the SEA network) performs significantly better than AGIL and BC. Moreover, to demonstrate the significance of selective use of gaze through the gating network, we compare our approach with the random selection of gaze. Even in this case, the SEA network performs significantly better, validating the advantage of selectively using the gaze in demonstration learning. |
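The gating idea described in the abstract above can be illustrated at its simplest: a gate value decides how much of the gaze-weighted frame reaches the action-prediction network. This is a minimal sketch of that step only, not the authors' implementation; the function name, array shapes, and the convex-combination form are assumptions for illustration:

```python
import numpy as np

def gated_gaze_augment(frame, gaze_map, gate):
    """Selectively augment a game frame with a predicted gaze map.

    frame:    (H, W) array of pixel intensities.
    gaze_map: (H, W) array in [0, 1], higher where gaze is predicted.
    gate:     scalar in [0, 1] from a gating network; 1.0 means
              "use the gaze map", 0.0 means "keep the raw frame".
    Returns a convex combination of the raw frame and the
    gaze-weighted frame, so gaze information enters the downstream
    action predictor only when the gate lets it through.
    """
    augmented = frame * gaze_map                      # gaze-weighted frame
    return gate * augmented + (1.0 - gate) * frame    # gated blend
```

With `gate = 0.0` the raw frame passes through unchanged, and with `gate = 1.0` only the gaze-weighted frame remains; in the paper the gate itself is learned rather than fixed.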
Aditya Upadhyayula; John M. Henderson Spatiotemporal jump detection during continuous film viewing Journal Article In: Journal of Vision, vol. 23, no. 2, pp. 1–17, 2023. @article{Upadhyayula2023,Prior research on film viewing has demonstrated that participants frequently fail to notice spatiotemporal disruptions, such as scene edits in movies. Whether such insensitivity to spatiotemporal disruptions extends beyond scene edits in film viewing is not well understood. Across three experiments, we created spatiotemporal disruptions by presenting participants with minute-long movie clips and occasionally jumping the movie clips ahead or backward in time. Participants were instructed to press a button when they noticed any disruptions while watching the clips. The results from Experiments 1 and 2 indicate that participants failed to notice the disruptions in continuity about 10% to 30% of the time, depending on the magnitude of the jump. In addition, detection rates were lower by approximately 10% when the videos jumped ahead in time compared to the backward jumps across all jump magnitudes, suggesting that knowledge about the future affects jump detection. An additional analysis examined optic flow similarity during these disruptions. Our findings suggest that insensitivity to spatiotemporal disruptions during film viewing is influenced by knowledge about future states. |
Jelena Vranjes; Bart Defrancq To repair or not to repair? Repairs and risk taking in video remote interpreting Journal Article In: Perspectives: Studies in Translation Theory and Practice, pp. 1–22, 2023. @article{Vranjes2023,The importance of video remote interpreting (VRI) for providing interpreting services has drastically increased over the last decade. Empirical research has shown, however, that interpreting through video link may have a significant impact on the interaction and the interpreting performance in dialogue interpreting contexts. The present study contributes to this growing field of research by focusing on interpreters' repair initiations in the context of VRI. Although repairs are an important mechanism for addressing problems in communication, very little research has been devoted to the study of interpreter-initiated repair in dialogue settings. Based on a corpus of video-recorded interpreted interactions (Dutch-Russian), where the interpreter is either present onsite or connected through video link, we analyse interpreters' repair initiations and related risk taking behaviour. More specifically, we examine how interpreters manage a specific type of repair initiations in video remote interpreting, namely postponed repairs. The analysis reveals differences in repair patterns between video remote and onsite interpreting and we propose that these differences result from differential risk management in the two settings. |
Ailian Wang; Jing Pan; Caihong Jiang; Jia Jin Create the best first glance: The cross-cultural effect of image background on purchase intention Journal Article In: Decision Support Systems, vol. 170, pp. 1–12, 2023. @article{Wang2023a,As globalization drives more firms toward cross-border e-commerce (CBEC), a well-designed decision support system becomes crucial to gain a competitive edge in the international market. Product images, a vital aspect of the system interface, play a significant role in shaping users' first impressions, facilitating seller-buyer information interaction, and ultimately enhancing users' decision making in the system. Across a series of studies, this research investigates the effect of cultural differences (thinking style: holistic vs. analytic) on responses to image backgrounds and reveals the underlying mechanism. Results show that online consumers from cultures characterized by a holistic thinking style (Chinese sample) are more prone to purchase products presented with contextual backgrounds than those with white backgrounds, while this effect is absent for online consumers from cultures that tend to think in an analytic way (American sample). This effect is also observed when the thinking style is primed within the culture in separate samples from the United States and China. Study 3 employs eye-tracking technology and shows that holistic thinking, compared to analytic thinking, results in an asymmetry in the cognitive effort to purchase the same products framed with contextual and white background images. Specifically, contextual (vs. white) background information greatly assists holistic-thinking consumers in understanding the product, enabling them to spend less cognitive effort on product information processing. In contrast, the cognitive effort that analytic-thinking consumers spend on product information is not affected by the background. Finally, we discuss the theoretical contributions and the practical insights the findings offer CBEC retailers and system designers. |
Craig A. Williamson; Jari J. Morganti; Hannah E. Smithson Bright-light distractions and visual performance Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–11, 2023. @article{Williamson2023,Visual distractions pose a significant risk to transportation safety, with laser attacks against aircraft pilots being a common example. This study used a research-grade High Dynamic Range (HDR) display to produce bright-light distractions for 12 volunteer participants performing a combined visual task across central and peripheral visual fields. The visual scene had an average luminance of 10 cd∙m−2 with targets of approximately 0.5° angular size, while the distractions had a maximum luminance of 9,000 cd∙m−2 and were 3.6° in size. The dependent variables were the mean fixation duration during task execution (representative of information processing time), and the critical stimulus duration required to support a target level of performance (representative of task efficiency). The experiment found a statistically significant increase in mean fixation duration, rising from 192 ms without distractions to 205 ms with bright-light distractions (p = 0.023). This indicates a decrease in visibility of the low contrast targets or an increase in cognitive workload that required greater processing time for each fixation in the presence of the bright-light distractions. Mean critical stimulus duration was not significantly affected by the distraction conditions used in this study. Future experiments are suggested to replicate driving and/or piloting tasks and employ bright-light distractions based on real-world data, and we advocate the use of eye-tracking metrics as sensitive measures of changes in performance. |
Mo Xiaohong; Xie Zhihao; Luh Ding-Bang A hybrid macro and micro method for consumer emotion and behavior research Journal Article In: IEEE Access, vol. 11, pp. 83430–83445, 2023. @article{Xiaohong2023,To investigate the impacts of intelligent and fashion factors of sports bras on consumers' emotions, decision-making and behavior, a quantitative analysis method combining macro affective computing and micro emotion data was proposed. A context in which a consumer purchases sports bras was first simulated. In this process, an eye tracker and a multi-channel physiological recorder were utilized to collect physiological signal data from participants in an experimental setting. Then, big data and machine learning were both adopted to macroscopically perform data pre-processing, build a computational model, fulfill relevant prediction and evaluation, analyze correlations in physiological data features, and explore potential values existing in the data. Furthermore, highly correlated data features were extracted to investigate micro causalities and identify the reasons why consumer behavior and decision-making were supported by data about emotional physiology. The proposed method may provide considerably reliable data support for designers, product service providers, and other practitioners. As an innovative and universal integration approach, it has the potential to be applied in medical science, psychology, management science and other fields. |
Zedong Xie; Meng Zhang; Zunping Ma The impact of mental simulation on subsequent tourist experience–dual evidence from eye tracking and self-reported measurement Journal Article In: Current Issues in Tourism, vol. 26, no. 18, pp. 2915–2930, 2023. @article{Xie2023c,Tourism research has always sought to find ways to improve tourists' experience evaluation and create added value for them. However, the academic community has focused on the on-site and post-travel stages of tourists, and neglected the pre-travel stage. This study examines the influence of guided mental simulation of an upcoming tourist experience on subsequent on-site tourist experience and experience evaluation. The research simulated real-world experience with tour videos shot from the first-person perspective, and measured the variables using both eye movements and self-reporting. Multivariate ANOVA and multigroup analysis were then performed on the data. The results showed that a process simulation of tourists having an engagement experience and an outcome simulation of tourists having a sight-seeing experience resulted in a higher engagement level and higher emotional response during the on-site experience, higher evaluation of the experience, and a greater impact of engagement level on their evaluation. This study expands the research on tourists' psychological experience in the pre-travel stage. Results indicate that the period from the moment consumers book or purchase the tourist product to the moment they actually embark on the tourist experience is a valuable marketing window. |
Ying Xu; Jia-Qiong Xie; Fu-Xing Wang; Rebecca L. Monk; James Gaskin; Jin-Liang Wang The impact of Weibo features on user's information comprehension: The mediating role of cognitive load Journal Article In: Social Science Computer Review, vol. 41, no. 6, pp. 2010–2028, 2023. @article{Xu2023b,Social media, such as Microblogs, have become an important source for people to obtain information. However, we know little about how this would influence our comprehension over online information. Based on the cognitive load theory, this research explores whether and how two important features of Weibo, which are the feedback function and information fragmentation, would increase cognitive load and may in turn hinder users' information comprehension in Weibo. A 2 (feedback or non-feedback) × 2 (strong-interference or weak-interference information) between-participants experimental design was conducted. Our results revealed that the Weibo feedback function and interference information exerted a negative impact over information comprehension via inducing increased cognitive load. Specifically, these results deepened our understanding regarding the impact of Weibo features on online information comprehension and suggest the mechanism by which this occurs. This finding has implications for how to minimize the potential cost of using Weibo and maximize the adaptive development of social media. |
Zhihao Yan; Zeyang Yang; Mark D. Griffiths “Danmu” preference, problematic online video watching, loneliness and personality: An eye-tracking study and survey study Journal Article In: BMC Psychiatry, vol. 23, no. 1, pp. 1–13, 2023. @article{Yan2023b,‘Danmu' (i.e., comments that scroll across online videos), has become popular on several Asian online video platforms. Two studies were conducted to investigate the relationships between Danmu preference, problematic online video watching, loneliness and personality. Study 1 collected self-report data on the study variables from 316 participants. Study 2 collected eye-tracking data of Danmu fixation (duration, count, and the percentages) from 87 participants who watched videos. Results show that fixation on Danmu was significantly correlated with problematic online video watching, loneliness, and neuroticism. Self-reported Danmu preference was positively associated with extraversion, openness, problematic online video watching, and loneliness. The studies indicate the potential negative effects of Danmu preference (e.g., problematic watching and loneliness) during online video watching. The study is one of the first empirical investigations of Danmu and problematic online video watching using eye-tracking software. Online video platforms could consider adding more responsible use messaging relating to Danmu in videos. Such messages may help users to develop healthier online video watching habits. |
2022 |
Sahar Mahdie Klim Al Zaidawi; Martin H. U. Prinzler; Jonas Lührs; Sebastian Maneth An extensive study of user identification via eye movements across multiple datasets Journal Article In: Signal Processing: Image Communication, vol. 108, pp. 1–11, 2022. @article{AlZaidawi2022,Several studies have reported that biometric identification based on eye movement characteristics can be used for authentication. This paper provides an extensive study of user identification via eye movements across multiple datasets based on an improved version of a method originally proposed by George and Routray. We analyzed our method with respect to several factors that affect the identification accuracy, such as the type of stimulus, the Identification by Velocity-Threshold (IVT) parameters (used for segmenting the trajectories into fixation and saccades), adding new features such as higher-order derivatives of eye movements, the inclusion of blink information, template aging, age and gender. We find that three methods namely selecting optimal IVT parameters, adding higher-order derivatives features and including an additional blink classifier have a positive impact on the identification accuracy. When we combine all our methods, we are able to improve the best known accuracy over the BioEye 2015 competition dataset from 86% to 96%. |
Olugbemi Aroke; Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd; Rebecca Brock In: Journal of Construction Engineering and Management, vol. 148, no. 6, pp. 1–16, 2022. @article{Aroke2022,This study investigated the moderating effect of personality traits in the association between worker characteristics (work experience, training, and previous injury exposure) and hazard-identification performance through mechanisms of visual attentional indicators. Through an integrated moderated mediation model, the attentional distribution, search strategy, and hazard-identification performance of participants were examined across 115 fall hazards. Results indicate that individuals with more work experience and safety training were better at hazard identification independent of visual attention and regardless of personality. Furthermore, individual differences in conscientiousness and openness personality dimensions significantly moderated the associations between (1) worker characteristics and visual attention; and (2) visual attention and hazard identification. This study provides empirical evidence for the potentially pivotal role of worker characteristics and dispositional traits with regard to hazard-identification performance on jobsites. These findings can empower safety managers to identify at-risk workers and design personalized intervention strategies to improve the hazard-identification skills of workers. |
Joe Cutting; Paul Cairns Investigating game attention using the Distraction Recognition Paradigm Journal Article In: Behaviour & Information Technology, vol. 41, no. 5, pp. 981–1001, 2022. @article{Cutting2022,Digital games are well known for holding players' attention and stopping them from being distracted by events around them. Being able to quantify how well games hold attention provides a behavioral foundation for measures of game engagement and a link to existing research on attention. We developed a new behavioral measure of how well games hold attention, based on players' post-game recognition of irrelevant distractors which are shown around the game. This is known as the Distractor Recognition Paradigm (DRP). In two studies we show that the DRP is an effective measure of how well self-paced games hold attention. We show that even simple self-paced games can hold players' attention completely and the consistency of attentional focus is moderated by game engagement. We compare the DRP to existing measures of both attention and engagement and consider how practical it is as a measure of game engagement. We find no evidence that eye tracking is a superior measure of attention to distractor recognition. We discuss existing research on attention and consider implications for areas such as motivation to play and serious games. |
Jessica Dawson; Tom Foulsham Your turn to speak? Audiovisual social attention in the lab and in the wild Journal Article In: Visual Cognition, vol. 30, no. 1-2, pp. 116–134, 2022. @article{Dawson2022,In everyday group conversations, we must decide whom to pay attention to and when. This process of dynamic social attention is important for goals both perceptual and social. The present study investigated gaze during a conversation in a realistic group and in a controlled laboratory study where third-party observers watched videos of the same group. In both contexts, we explore how gaze allocation is related to turn-taking in speech. Experimental video clips were edited to either remove the sound, freeze the video, or transition to a blank screen, allowing us to determine how shifts in attention between speakers depend on visual or auditory cues. Gaze behaviour in the real, interactive situation was similar to the fixations made by observers watching a video. Eyetracked participants often fixated the person speaking and shifted gaze in response to changes in speaker, even when sound was removed or the video freeze-framed. These findings suggest we sometimes fixate the location of speakers even when no additional visual information can be gained. Our novel approach offers both a comparison of interactive and third-party viewing and the opportunity for controlled experimental manipulations. This delivers a rich understanding of gaze behaviour and multimodal attention during a conversation. |
Joost Winter; Jimmy Hu; Bastiaan Petermeijer Ipsilateral and contralateral warnings: Effects on decision-making and eye movements in near-collision scenarios Journal Article In: Journal on Multimodal User Interfaces, vol. 16, pp. 303–317, 2022. @article{Winter2022,Cars are increasingly capable of providing drivers with warnings and advice. However, whether drivers should be provided with ipsilateral warnings (signaling the direction to steer towards) or contralateral warnings (signaling the direction to avoid) is inconclusive. Furthermore, how auditory warnings and visual information from the driving environment together contribute to drivers' responses is relatively unexplored. In this study, 34 participants were presented with animated video clips of traffic situations on a three-lane road, while their eye movements were recorded with an eye-tracker. The videos ended with a near collision in front after 1, 3, or 6 s, while either the left or the right lane was safe to swerve into. Participants were instructed to make safe lane-change decisions by pressing the left or right arrow key. Upon the start of each video, participants heard a warning: Go Left/Right (ipsilateral), Danger Left/Right (contralateral), and nondirectional beeps (Baseline), emitted from the spatially corresponding left and right speakers. The results showed no significant differences in response times and accuracy between ipsilateral and contralateral warnings, although participants rated ipsilateral warnings as more satisfactory. Ipsilateral and contralateral warnings both improved response times in situations in which the left/right hazard was not yet manifest or was poorly visible. Participants fixated on salient and relevant vehicles as quickly as 220 ms after the trial started, with no significant differences between the audio types. In conclusion, directional warnings can aid in making a correct left/right evasive decision while not affecting the visual attention distribution. |
Yke Bauke Eisma; Dirk J. Eijssen; Joost C. F. Winter What attracts the driver's eye? Attention as a function of task and events Journal Article In: Information, vol. 13, pp. 1–15, 2022. @article{Eisma2022,This study explores how drivers of an automated vehicle distribute their attention as a function of environmental events and driving task instructions. Twenty participants were asked to monitor pre-recorded videos of a simulated driving trip while their eye movements were recorded using an eye-tracker. The results showed that eye movements are strongly situation-dependent, with areas of interest (windshield, mirrors, and dashboard) attracting attention when events (e.g., passing vehicles) occurred in those areas. Furthermore, the task instructions provided to participants (i.e., speed monitoring or hazard monitoring) affected their attention distribution in an interpretable manner. It is concluded that eye movements while supervising an automated vehicle are strongly ‘top-down', i.e., based on an expected value. The results are discussed in the context of the development of driver availability monitoring systems. |
Chelsea L. Fitzpatrick; Hyoun S. Kim; Christopher R. Sears; Daniel S. Mcgrath Attentional bias in non–smoking electronic cigarette users: An eye-tracking study Journal Article In: Nicotine and Tobacco Research, vol. 24, pp. 1439–1447, 2022. @article{Fitzpatrick2022,Introduction: This study examined attentional bias (AB) to e-cigarette cues among a sample of non–smoking daily e-cigarette users (n = 27), non–smoking occasional e-cigarette users (n = 32), and control participants (n = 61) who did not smoke or use e-cigarettes. The possibility that e-cigarette users develop a transference of cues to traditional cigarettes was also examined. Methods: AB was assessed using a free-viewing eye-gaze tracking methodology, in which participants viewed 180 pairs of images for 4 seconds (e-cigarette and neutral image, e-cigarette and smoking image, smoking and neutral image). Results: Daily and occasional e-cigarette users attended to pairs of e-cigarette and neutral images equally, whereas non–users attended to neutral images significantly more than e-cigarette images. All three groups attended to e-cigarette images significantly more than smoking images, with significantly larger biases for e-cigarette users. There were no between-group differences in attention to pairs of smoking and neutral images. A moderation analysis indicated that for occasional users but not daily users, years of vaping reduced the bias toward neutral images over smoking images. Conclusions: Taken together, the results indicate that the e-cigarette users exhibit heightened attention to e-cigarettes relative to non–users, which may have implications as to how they react to e-cigarette cues in real-world settings. AB for e-cigarettes did not transfer to traditional cigarette cues, which indicates that further research is required to identify the mechanisms involved in the migration of e-cigarettes to traditional cigarettes. 
Implications: This study is the first attempt to examine attentional biases for e-cigarette cues among non–smoking current e-cigarette users using eye-gaze tracking. The results contribute to the growing literature on the correlates of problematic e-cigarette use and indicate that daily and occasional e-cigarette use is associated with attentional biases for e-cigarettes. The existence of attentional biases in e-cigarette users may help to explain the high rate of failure to quit e-cigarettes and provides support for the utility of attentional bias modification in the treatment of problematic e-cigarette use. |
Erin T. Gannon; Michael A. Grubb How filmmakers guide the eye: The effect of average shot length on intersubject attentional synchrony Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 16, no. 1, pp. 125–134, 2022. @article{Gannon2022,As editing technology has advanced, filmmakers have become increasingly skilled at manipulating overt attention such that eye movements are highly synchronized during film viewing. Average shot length (ASL; film length/number of shots) is a quantitative metric in film studies that may help us understand this perceptual phenomenon. Since shorter shots give viewers less time to voluntarily scan images, we predicted that shorter ASLs would yield greater attentional synchrony across viewers. We recorded participants' eye movements as they viewed clips from commercially produced films with varying ASLs, and in line with our hypothesis, we found that ASL and attentional synchrony were negatively related. These findings were replicated in an independent sample of participants who viewed a different set of clips from the same films used in Experiment 1. Comparing across experiments, we found that within the same films, clips with shorter ASLs synchronized eye movements to a greater extent than did clips with longer ASLs. Studies of film perception have long implied that ASL modulates eye movements across viewers, and this study provides robust empirical evidence to support that claim. |
Mairéad Hogan; Chris Barry; Michael Lang Dissecting optional micro-decisions in online transactions: Perceptions, deceptions, and errors Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 29, no. 6, pp. 1–27, 2022. @article{Hogan2022,Online firms frequently increase profit by selling optional extras. However, opt-in rates tend to be low. In response, questionable design practices have emerged to nudge consumers into inadvertent choices. Many of these design constructs are presented using an opt-out design. Using eye-tracking and think-aloud techniques, this research investigates the impact of the framing and optionality of micro-decisions on user perceptions and error rates. Focusing on opt-out decisions, the study found: up to one in three users make errors in decision-making; there is a higher error rate for rejection-framed opt-out decisions; users widely misinterpret decision framing; and failure to read decision text results in rushed and unsighted decisions, even leading users to automatically construe un-ticked checkboxes as opt-in decisions. In talking afterwards about their experiences, users expressed strong negative emotions, feeling confused, manipulated and resentful. Many suggested they would, in practice, steer away from similar encounters toward more unambiguous and honest sites. These findings might alert managers and developers, tempted to use dark patterns, that such a strategy might backfire over time. |
Luke Hsiao; Brooke Krajancich; Philip Levis; Gordon Wetzstein; Keith Winstein Towards retina-quality VR video streaming: 15 ms Could Save You 80% of Your Bandwidth Journal Article In: Computer Communication Review, vol. 52, no. 1, pp. 10–19, 2022. @article{Hsiao2022a,Virtual reality systems today cannot yet stream immersive, retina-quality virtual reality video over a network. One of the greatest challenges to this goal is the sheer data rates required to transmit retina-quality video frames at high resolutions and frame rates. Recent work has leveraged the decay of visual acuity in human perception in novel gaze-contingent video compression techniques. In this paper, we show that reducing the motion-to-photon latency of a system itself is a key method for improving the compression ratio of gaze-contingent compression. Our key finding is that a client and streaming server system with sub-15ms latency can achieve 5x better compression than traditional techniques while also using simpler software algorithms than previous work. |
Fatemeh Jam; Hamid Reza Azemati; Abdulhamid Ghanbaran; Jamal Esmaily; Reza Ebrahimpour The role of expertise in visual exploration and aesthetic judgment of residential building façades: An eye-tracking study Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 16, no. 1, pp. 148–163, 2022. @article{Jam2022,The building façade has considerable effects on the aesthetic experience of observers. However, the experience may differ depending on the observers' expertise. This study was conducted to explore the impact of expertise on preference, visual exploration, and cognitive experience during the aesthetic judgment of designed façades. For this purpose, we developed a paradigm in two separate parts: aesthetic judgment (AJ) and eye movement recording (EMR). Thirty-eight participants participated in this experiment in two groups (21 experts/17 nonexperts). The results revealed significant differences between the two groups in terms of the type and number of preferred façades, as well as eye movement indicators. In addition, based on judgment reaction time and fixation duration as proxy measures of cognitive experience, it was found that expertise might be correlated with cognitive load and task demand. The findings indicate the importance of façades for both groups and suggest that their physical attributes could be effectively manipulated to impact aesthetic experiences in relation to architectural designs. |
Lewis T. Jayes; Gemma Fitzsimmons; Mark J. Weal; Johanna K. Kaakinen; Denis Drieghe The impact of hyperlinks, skim reading and perceived importance when reading on the Web Journal Article In: PLoS ONE, vol. 17, no. 2, pp. 1–28, 2022. @article{Jayes2022,It has previously been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect comprehension of text. Across two experiments, we examine how hyperlinks influence perceived importance of sentences and how perceived importance in turn affects reading behaviour. In Experiment 1, participants rated the importance of sentences across passages of Wikipedia text. In Experiment 2, a different set of participants read these passages while their eye movements were tracked, with the task being either reading for comprehension or skim reading. Reading times of sentences were analysed in relation to the type of task and the importance ratings from Experiment 1. Results from Experiment 1 show readers rated sentences without hyperlinks as being of less importance than sentences that did feature hyperlinks, and this effect is larger when sentences are lower on the page. It was also found that short sentences with more links were rated as more important, but only when they were presented at the top of the page. Long sentences with more links were rated as more important regardless of their position on the page. In Experiment 2, higher importance scores resulted in longer sentence reading times, measured as fixation durations. When skim reading, however, importance ratings had a lesser impact on online reading behaviour than when reading for comprehension. We suggest readers are less able to establish the importance of a sentence when skim reading, even though importance could have been assessed by information that would be fairly easy to extract (i.e. presence of hyperlinks, length of sentences, and position on the screen). |
Rebecca L. Johnson; Devika Nambiar; Gabriella Suman Using eye-movements to assess underlying factors in online purchasing behaviors Journal Article In: International Journal of Consumer Studies, vol. 46, no. 4, pp. 1365–1380, 2022. @article{Johnson2022,The field of consumer neuroscience allows researchers to account for an individual's explicit reported behaviors as well as their implicit behaviors that are reflected in the neural mechanisms that occur during the purchase decision phase of a consumer's online shopping experience. The purpose of the current study was to use eye-tracking technology in conjunction with self-report purchase intention data to observe the relative impact of star rating, price, discount, and time pressure on purchase decisions. The results suggest that purchase intention was most affected by star rating, price, and discount with higher purchase intentions on items with higher star ratings, lower prices, and greater discounts. The eye movement data revealed that these factors, as well as time pressure, influenced where consumers directed their attention in making their purchasing decisions. These findings have significant implications for future ecommerce marketing strategy, especially across efforts to increase purchase intention. |
Nadezhda Kerimova; Pavel Sivokhin; Diana Kodzokova; Karine Nikogosyan; Vasily Klucharev Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study Journal Article In: Urban Forestry and Urban Greening, vol. 68, pp. 127460, 2022. @article{Kerimova2022,We used an eye-tracking technique to investigate the effect of green zones and car ownership on the attractiveness of the courtyards of multistorey apartment buildings. Two interest groups—20 people who owned a car and 20 people who did not own a car—observed 36 images of courtyards. Images were digitally modified to manipulate the spatial arrangement of key courtyard elements: green zones, parking lots, and children's playgrounds. The participants were asked to rate the attractiveness of courtyards during hypothetical renting decisions. Overall, we investigated whether visual exploration and appraisal of courtyards differed between people who owned a car and those who did not. The participants in both interest groups gazed longer at perceptually salient playgrounds and parking lots than at greenery. We also observed that participants gazed significantly longer at the greenery in courtyards rated as most attractive than those rated as least attractive. They gazed significantly longer at parking lots in courtyards rated as least attractive than those rated as most attractive. Using regression analysis, we further investigated the relationship between gaze fixations on courtyard elements and the attractiveness ratings of courtyards. The model confirmed a significant positive relationship between the number and duration of fixations on greenery and the attractiveness estimates of courtyards, while the model showed an opposite relationship for the duration of fixations on parking lots. Interestingly, the positive association between fixations on greenery and the attractiveness of courtyards was significantly stronger for participants who owned cars than for those who did not. 
These findings confirmed that the more people pay attention to green areas, the more positively they evaluate urban areas. The results also indicate that urban greenery may differentially affect the preferences of interest groups. |
Hassen Kerkeni; Dominik Brügger; Georgios Mantokoudis; Mathias Abegg; David S. Zee Pharmacological and behavioral strategies to improve vision in acquired pendular nystagmus Journal Article In: American Journal of Case Reports, vol. 23, pp. 1–5, 2022. @article{Kerkeni2022,Objective: Unusual setting of medical care. Background: Acquired pendular nystagmus (APN) is a back and forth, oscillatory eye movement in which the 2 oppositely directed slow phases have similar waveforms. APN occurs commonly in multiple sclerosis and causes a disabling oscillopsia that impairs vision. Previous studies have proven that symptomatic therapy with gabapentin or memantine can reduce the nystagmus amplitude or frequency. However, the effect of these medications on visual acuity (VA) is less known and to our knowledge the impact of non-pharmacological strategies such as blinking on VA has not been reported. This is a single observational study without controls (Class IV) and is meant to suggest a future strategy for study of vision in patients with disabling nystagmus and impaired vision. Case Report: A 49-year-old woman with primary progressive multiple sclerosis with spastic paraparesis and a history of optic atrophy presented with asymmetrical binocular APN and bothersome oscillopsia. We found that in the eye with greater APN her visual acuity improved by 1 line (from 0.063 to 0.08 decimals) immediately after blinking. During treatment with memantine, her VA without blinking increased by 2 lines, from 0.063 to 0.12, but improved even more (from 0.12 to 0.16) after blinking. In the contralateral eye with a barely visible nystagmus, VA was reduced by 1 line briefly (~500 ms) after blinking. Conclusions: In a patient with APN, blinking transiently improved vision. The combination of pharmacological treatment with memantine and the blinking strategy may induce better VA and less oscillopsia than either alone. |
Hyoun S. Kim; Emma V. Ritchie; Christopher R. Sears; David C. Hodgins; Kristy R. Kowatch; Daniel S. McGrath In: Journal of Behavioral Addictions, vol. 11, no. 2, pp. 386–395, 2022. @article{Kim2022a,Background and aims: Attentional bias to gambling-related stimuli is associated with increased severity of gambling disorder. However, the addiction-related moderators of attentional bias among those who gamble are largely unknown. Impulsivity is associated with attentional bias among those who abuse substances, and we hypothesized that impulsivity would moderate the relationship between disordered electronic gaming machine (EGM) gambling and attentional bias. Methods: We tested whether facets of impulsivity, as measured by the UPPS-P (positive urgency, negative urgency, sensation seeking, lack of perseverance, lack of premeditation) and the Barratt Impulsiveness Scale-11 (cognitive, motor, non-planning) moderated the relationship between increased severity of gambling disorder, as measured by the Problem Gambling Severity Index (PGSI), and attentional bias. Seventy-five EGM players participated in a free-viewing eye-tracking paradigm to measure attentional bias to EGM images. Results: Attentional bias was significantly correlated with Barratt Impulsiveness Scale-11 (BIS-11) motor, positive urgency, and negative urgency. Only positive and negative urgency moderated the relationship between PGSI scores and attentional bias. For participants with high PGSI scores, higher positive and negative urgency were associated with larger attentional biases to EGM stimuli. Discussion: The results indicate that affective impulsivity is an important contributor to the association between gambling disorder and attentional bias. |
Jan-Louis Kruger; Natalia Wisniewska; Sixin Liao Why subtitle speed matters: Evidence from word skipping and rereading Journal Article In: Applied Psycholinguistics, vol. 43, pp. 211–236, 2022. @article{Kruger2022,High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might impact the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing or integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in the subtitle and to read subtitles to completion, is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. It was found that comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed also caused fewer words to be reread following both horizontal eye movements (likely resulting in reduced lexical processing) and vertical eye movements (which would likely reduce higher-level comprehension and integration). |
Meijia Li; Huamao Peng How cues of being watched promote risk seeking in fund investment in older adults Journal Article In: Frontiers in Psychology, vol. 12, pp. 1–15, 2022. @article{Li2022c,Social cues, such as being watched, can subtly alter fund investment choices. This study aimed to investigate how cues of being watched influence decision-making, attention allocation, and risk tendencies. Using decision scenarios adopted from the “Asian Disease Problem,” we examined participants' risk tendency in a financial scenario when they were watched. A total of 63 older and 66 younger adults participated. Eye tracking was used to reveal the decision-maker's attention allocation (fixations and dwell time per word). The results found that both younger and older adults tend to seek risk in the loss frame than in the gain frame (i.e., framing effect). Watching eyes tended to escalate reckless gambling behaviors among older adults, which led them to maintain their share in the depressed fund market, regardless of whether the options were gain or loss framed. The eye-tracking results revealed that older adults gave less attention to the sure option in the eye condition (i.e., fewer fixations and shorter dwell time). However, their attention was maintained on the gamble options. In comparison, images of “watching eyes” did not influence the risk seeking of younger adults but decreased their framing effect. Being watched can affect financial risk preference in decision-making. The exploration of the contextual sensitivity of being watched provides us with insight into developing decision aids to promote rational financial decision-making, such as human-robot interactions. Future research on age differences still requires further replication. |
Wenjie Li; Yuxiao Zhou; Shijian Luo; Yenan Dong Design factors to improve the consistency and sustainable user experience of responsive interface design Journal Article In: Sustainability, vol. 14, pp. 1–26, 2022. @article{Li2022h,Computers have been extended to a variety of devices, such as smartphones, tablets, and smart watches, thereby increasing the importance of responsive interfaces across multi-terminal devices. To ensure a consistent and sustainable user experience for websites and software products, it is important to study the layout, design elements, and users' visual perception of different terminal interfaces. In this paper, the multi-terminal interfaces of 40 existing responsive websites were studied in a hierarchical grouping experiment, and six typical interface layouts were classified and extracted. Then, the main design factors affecting interface consistency of responsive websites were extracted and classified through eye tracking and a questionnaire survey. Finally, taking a sales management software tool (SA) as an example for design application, we successfully created responsive interfaces across multi-terminal devices with a consistent and sustainable experience. |
Chi-Hung Liu; June Hung; Chun-Wei Chang; John J. H. Lin; Elaine Shinwei Huang; Shu-Ling Wang; Li-Ang Lee; Cheng-Ting Hsiao; Pi-Shan Sung; Yi-Ping Chao; Yeu-Jhy Chang Oral presentation assessment and image reading behaviour on brain computed tomography reading in novice clinical learners: An eye-tracking study Journal Article In: BMC Medical Education, vol. 22, pp. 1–10, 2022. @article{Liu2022a,Background: To study whether oral presentation (OP) assessment could reflect novice learners' interpretation skills and reading behaviour in brain computed tomography (CT) reading. Methods: Eighty fifth-year medical students were recruited, received a 2-hour interactive workshop on how to read brain CT, and were assigned to read two brain CT images before and after instruction. We evaluated their image reading behaviour in terms of overall OP post-test rating, lesion identification, and competency in systematic image reading after instruction. Students' reading behaviour in searching for the target lesions was recorded with eye-tracking techniques and used to validate the accuracy of lesion reports. Statistical analyses, including lag sequential analysis (LSA), linear mixed models, and transition entropy (TE), were conducted to reveal the temporal relations and spatial complexity of systematic image reading from the eye movement perspective. Results: The overall OP ratings [pre-test vs. post-test: 0 vs. 1 in case 1, 0 vs. 1 in case 2, p < 0.001] improved after instruction. Both the scores of systematic OP ratings [0 vs. 1 in both cases, p < 0.001] and eye-tracking studies (Case 1: 3.42 ± 0.62 and 3.67 ± 0.37 in TE |
Jing Luan; Jie Xiao; Pengfei Tang; Meng Li Positive effects of negative reviews: An eye-tracking perspective Journal Article In: Internet Research, vol. 32, no. 1, pp. 197–218, 2022. @article{Luan2022,Purpose: A counterintuitive finding of existing research is that negative reviews can produce positive effects; for example, they can increase purchase likelihood and sales by increasing product awareness. It is important to continue highlighting this fact and to develop further insights into this positive effect, as a more thorough analysis can provide online retailers with a more comprehensive understanding of how to effectively manage and use negative reviews. Thus, by using an eye-tracking method, this paper attempts to provide a further thorough analysis of positive effects of negative reviews from a cognitive perspective. Design/methodology/approach: An eye-tracking experiment with two tests over a time delay was performed to examine whether negative reviews have some positive effects. Review valence (positive vs. negative), brand popularity (popular vs. unpopular) and advertising exposure (no repetition vs. repetition) were considered in the experiment. Findings: The results show that a cognitive process of attention allocation happens when consumers deal with brand popularity cues and that arousal evoking and attention allocation occur when handling review valence. Allocation of more attention to unpopular brands helps improve brand awareness and enhance brand memory, and larger arousal from negative reviews narrows attention and leads to a better memory of products and brands. However, with the passage of time, the memory of review valence can dissociate and fade, and the remaining awareness of and familiarity with unpopular brands with negative reviews contribute to a positive reversion, which leads to the production of positive effects from negative reviews. Originality/value: This paper contributes to the literature on online reviews by examining the visual processing of review valence and brand popularity with an eye-tracking method and by revealing the cognitive mechanism of positive effects of negative reviews from a visual attention perspective. |
Beatriz Martín-Luengo; Andriy Myachykov; Yury Shtyrov Deliberative process in sharing information with different audiences: Eye-tracking correlates Journal Article In: Quarterly Journal of Experimental Psychology, vol. 75, no. 4, pp. 730–741, 2022. @article{MartinLuengo2022,Research on conversational pragmatics demonstrates how interlocutors tailor the information they share depending on the audience. Previous research showed that, in informal contexts, speakers often provide several alternative answers, whereas in formal contexts, they tend to give only a single answer; however, the psychological underpinnings of these effects remain obscure. To investigate this answer selection process, we measured participants' eye movements in different experimentally modelled social contexts. Participants answered general knowledge questions by providing responses with either single (one) or plural (three) alternatives. Then, a formal (job interview) or informal (conversation with friends) context was presented, and participants decided either to report or to withdraw their responses after considering the given social context. Growth curve analysis of the eye movements indicated that the selected response option attracted more eye movements. There was a discrepancy between the answer selection likelihood and the proportion of fixations to the corresponding option, but only in the formal context. These findings support more elaborate decision-making processes in formal contexts. They also suggest that eye movements do not necessarily accompany the options considered in the decision-making process. |
