EyeLink Usability / Applied Publications
All EyeLink usability and applied research publications up until 2020 (with some early 2021s) are listed below by year. You can search the publications using keywords such as Driving, Sport, Workload, etc. You can also search for individual author names. If we missed any EyeLink usability or applied article, please email us!
Jia Qiong Xie; Detlef H Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L Monk
In: Information & Management, 58 (2), pp. 1–12, 2021.
Drawing on the scan-and-shift and scattered-attention hypotheses, this article attempted to explore the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interfering information than non-microblog users, resulting in poor performance. Theoretical and practical implications are discussed.
Hanna Brinkmann; Louis Williams; Raphael Rosenberg; Eugene McSorley
In: Art and Perception, 8 (1), pp. 27–48, 2020.
Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's 'action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept by investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations, as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants – 20 laypeople and 20 experts (art students) – and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increased with an increase in perceived dynamism ratings. Meanwhile, saccade duration decreased with an increase in perceived dynamism. Additionally, the analysis showed that experts gave higher dynamism ratings than laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour – something that has long been assumed by art historians but had never been empirically supported.
Jaana Simola; Jarmo Kuisma; Johanna K Kaakinen
In: Journal of Business Research, 111 , pp. 249–261, 2020.
We examined the effectiveness of direct and indirect advertising. Direct ads openly depict advertised products and brands. In indirect ads, the ad message requires elaboration. Eye movements were recorded while consumers viewed direct and indirect advertisements under fixed (5 s) or unlimited exposure time. Recognition of ads, brand logos and preference for brands were tested under two different delays (after 24 h or 45 min) from the ad exposure. The total viewing time was longer for the indirect ads when exposure time was unlimited. Overall, ad pictorials received more fixations and the brand preference was higher in the indirect condition. Recognition improved for brand logos of indirect ads when tested after the shorter delay. Consumers experienced indirect ads as more original, surprising, intellectually challenging and harder to interpret than direct ads. Current results indicate that indirect ads elicit cognitive elaboration that translates into higher preference and memorability for brands.
Sabrina Karl; Magdalena Boch; Zsófia Virányi; Claus Lamm; Ludwig Huber
Training pet dogs for eye-tracking and awake fMRI Journal Article
In: Behavior Research Methods, 52 (2), pp. 838–856, 2020.
In recent years, two well-developed methods of studying mental processes in humans have been successively applied to dogs. First, eye-tracking has been used to study visual cognition without distraction in unrestrained dogs. Second, noninvasive functional magnetic resonance imaging (fMRI) has been used for assessing the brain functions of dogs in vivo. Both methods, however, require dogs to sit, stand, or lie motionless while remaining attentive for several minutes, during which time their brain activity and eye movements are measured. Whereas eye-tracking in dogs is performed in a quiet and, apart from the experimental stimuli, nonstimulating and highly controlled environment, MRI scanning can only be performed in a very noisy and spatially restraining MRI scanner, in which dogs need to feel relaxed and stay motionless in order to study their brain and cognition with high precision. Here we describe in detail a training regime that is well suited to train dogs in the skills required for both methods, with a high success probability, while keeping to the highest ethical standards of animal welfare—that is, without using aversive training methods or any other compromises to the dog's well-being. By reporting data from 41 dogs that successfully participated in eye-tracking training and 24 dogs in fMRI training, we provide robust qualitative and quantitative evidence for the quality and efficiency of our training methods. By documenting and validating our training approach here, we aim to inspire others to use our methods to apply eye-tracking or fMRI for their investigations of canine behavior and cognition.
Sabrina Karl; Magdalena Boch; Anna Zamansky; Dirk van der Linden; Isabella C Wagner; Christoph J Völter; Claus Lamm; Ludwig Huber
In: Scientific Reports, 10 , pp. 1–15, 2020.
Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, while the familiar person elicited relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Josiah P J King; Jia E Loy; Hannah Rohde; Martin Corley
Interpreting nonverbal cues to deception in real time Journal Article
In: PLoS ONE, 15 (3), pp. 1–25, 2020.
When questioning the veracity of an utterance, we perceive certain non-linguistic behaviours to indicate that a speaker is being deceptive. Recent work has highlighted that listeners' associations between speech disfluency and dishonesty are detectable at the earliest stages of reference comprehension, suggesting that the manner of spoken delivery influences pragmatic judgements concurrently with the processing of lexical information. Here, we investigate the integration of a speaker's gestures into judgements of deception, and ask if and when associations between nonverbal cues and deception emerge. Participants saw and heard a video of a potentially dishonest speaker describe treasure hidden behind an object, while also viewing images of both the named object and a distractor object. Their task was to click on the object behind which they believed the treasure to actually be hidden. Eye and mouse movements were recorded. Experiment 1 investigated listeners' associations between visual cues and deception, using a variety of static and dynamic cues. Experiment 2 focused on adaptor gestures. We show that a speaker's nonverbal behaviour can have a rapid and direct influence on listeners' pragmatic judgements, supporting the idea that communication is fundamentally multimodal.
Miguel A Lago; Craig K Abbey; Miguel P Eckstein
Foveated model observers for visual search in 3D medical images Journal Article
In: IEEE Transactions on Medical Imaging, 2020.
Model observers have a long history of success in predicting human observer performance in clinically-relevant detection tasks. New 3D image modalities provide more signal information but vastly increase the search space to be scrutinized. Here, we compared standard linear model observers (ideal observers, non-pre-whitening matched filter with eye filter, and various versions of Channelized Hotelling models) to human performance searching in 3D 1/f^2.8 filtered noise images and assessed its relationship to the more traditional location known exactly detection tasks and 2D search. We investigated two different signal types that vary in their detectability away from the point of fixation (visual periphery). We show that the influence of 3D search on human performance interacts with the signal's detectability in the visual periphery. Detection performance for signals difficult to detect in the visual periphery deteriorates greatly in 3D search but not in 3D location known exactly and 2D search. Standard model observers do not predict the interaction between 3D search and signal type. A proposed extension of the Channelized Hotelling model (foveated search model) that processes the image with reduced spatial detail away from the point of fixation, explores the image through eye movements, and scrolls across slices can successfully predict the interaction observed in humans and also the types of errors in 3D search. Together, the findings highlight the need for foveated model observers for image quality evaluation with 3D search.
Fan Li; Chun Hsien Chen; Gangyan Xu; Li Pheng Khoo
In: IEEE Transactions on Human-Machine Systems, 50 (5), pp. 465–474, 2020.
Eye-tracking-based human fatigue detection at traffic control centers suffers from an unavoidable problem of low-quality eye-tracking data caused by noisy and missing gaze points. In this article, the authors conducted pioneering work by investigating the effects of data quality on eye-tracking-based fatigue indicators and by proposing a hierarchical interpolation approach to extract eye-tracking-based fatigue indicators from low-quality eye-tracking data. This approach adaptively classified the missing gaze points and hierarchically interpolated them based on the temporal-spatial characteristics of the gaze points. In addition, definitions of applicable fixations and saccades for human fatigue detection are proposed. Two experiments were conducted to verify the effectiveness and efficiency of the method in extracting eye-tracking-based fatigue indicators and detecting human fatigue. The results indicate that most eye-tracking parameters are significantly affected by the quality of the eye-tracking data. In addition, the proposed approach achieved much better performance than the classic velocity-threshold identification algorithm (I-VT) and a state-of-the-art method (U'n'Eye) in parsing low-quality eye-tracking data. Specifically, the proposed method attained relatively stable eye-tracking-based fatigue indicators and reported the highest accuracy in human fatigue detection. These results are expected to facilitate the application of eye-movement-based human fatigue detection in practice.
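The classic velocity-threshold (I-VT) algorithm that this approach is benchmarked against can be sketched in a few lines. This is a minimal illustration, assuming a common 30 deg/s threshold and clean, uniformly sampled gaze data; it is not the authors' hierarchical interpolation method.

```python
import numpy as np

def ivt_classify(x, y, t, velocity_threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' with the
    velocity-threshold (I-VT) rule.

    x, y : gaze position in degrees of visual angle
    t    : timestamps in seconds
    velocity_threshold : deg/s; 30 deg/s is a common default here,
                         not necessarily the paper's value
    """
    x, y, t = map(np.asarray, (x, y, t))
    # point-to-point angular velocity (deg/s)
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    labels = np.where(v < velocity_threshold, "fixation", "saccade")
    # the first sample has no velocity estimate; reuse its neighbour's label
    return np.concatenate([labels[:1], labels])
```

Real recordings additionally require handling missing samples and noise, which is exactly where the paper's hierarchical interpolation comes in.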
Sixin Liao; Lili Yu; Erik D Reichle; Jan Louis Kruger
Using eye movements to study the reading of subtitles in video Journal Article
In: Scientific Studies of Reading, pp. 1–19, 2020.
This article reports the first eye-movement experiment to examine how the presence versus absence of concurrent video content and presentation speed affect the reading of subtitles. Results indicated that participants adapted their visual routines to examine video content while simultaneously prioritizing the reading of subtitles, especially when the latter was displayed only briefly. Although decisions about when and where to move the eyes largely remained under local (cognitive) control, this control was also modulated by global task demands, suggesting an integration of local and global eye-movement control. The theoretical and pedagogical implications of these findings are discussed, and we also briefly describe a new theoretical framework for understanding all forms of multimodal reading, including the reading of subtitles in video.
Zhenji Lu; Riender Happee; Joost C F de Winter
In: Transportation Research Part F: Traffic Psychology and Behaviour, 72 , pp. 211–225, 2020.
In highly automated driving, drivers occasionally need to take over control of the car due to limitations of the automated driving system. Research has shown that visually distracted drivers need about 7 s to regain situation awareness (SA). However, it is unknown whether the presence of a hazard affects SA. In the present experiment, 32 participants watched animated video clips from a driver's perspective while their eyes were recorded using eye-tracking equipment. The videos had lengths between 1 and 20 s and contained either no hazard or an impending crash in the form of a stationary car in the ego lane. After each video, participants had to (1) decide (no need to take over, evade left, evade right, brake only), (2) rate the danger of the situation, (3) rebuild the situation from a top-down perspective, and (4) rate the difficulty of the rebuilding task. The results showed that the hazard situations were experienced as more dangerous than the non-hazard situations, as inferred from self-reported danger and pupil diameter. However, there were no major differences in SA: hazard and non-hazard situations yielded equivalent speed and distance errors in the rebuilding task and equivalent self-reported difficulty scores. An exception occurred for the shortest time budget (1 s) videos, where participants showed impaired SA in the hazard condition, presumably because the threat inhibited participants from looking into the rear-view mirror. Correlations between measures of SA and decision-making accuracy were low to moderate. It is concluded that hazards do not substantially affect the global awareness of the traffic situation, except for short time budgets.
Xueer Ma; Xiangling Zhuang; Guojie Ma
In: Frontiers in Psychology, 11 , pp. 1–11, 2020.
Transparent windows on food packaging can effectively highlight the actual food inside. The present study examined whether food packaging with transparent windows (relative to packaging with food‐ and non-food graphic windows in the same position and of the same size) has more advantages in capturing consumer attention and determining consumers' willingness to purchase. In this study, college students were asked to evaluate prepackaged foods presented on a computer screen, and their eye movements were recorded. The results showed salience effects for both packaging with transparent and food-graphic windows, which were also regulated by food category. Both transparent and graphic packaging gained more viewing time than the non-food graphic baseline condition for all three selected products (i.e., nuts, preserved fruits, and instant cereals). However, no significant difference was found between transparent and graphic window conditions. For preserved fruits, time to first fixation was shorter for transparent packaging than for the other conditions. For nuts, the willingness to purchase was higher in both transparent and graphic conditions than in the baseline condition, with packaging attractiveness playing a key role in mediating consumers' willingness to purchase. The implications for stakeholders and future research directions are discussed.
Nadine Matton; Pierre Vincent Paubel; Sébastien Puma
Toward the use of pupillary responses for pilot selection Journal Article
In: Human Factors, pp. 1–13, 2020.
Objective: For selection practitioners, it seems important to assess the level of mental resources invested in order to perform a demanding task. In this study, we investigated the potential of pupil size measurement to discriminate the most proficient pilot students from the less proficient. Background: Cognitive workload is known to influence learning outcome. More specifically, cognitive difficulties observed during pilot training are often related to a lack of efficient mental workload management. Method: Twenty pilot students performed a laboratory multitasking scenario, composed of several stages with increasing workload, while their pupil size was recorded. Two levels of pilot students were compared according to the outcome after 2 years of training: high success and medium success. Results: Our findings suggested that task-evoked pupil size measurements could be a promising predictor of flight training difficulties during the 2-year training. Indeed, high-level pilot students showed greater pupil size changes from low-load to high-load stages of the multitasking scenario than medium-level pilot students. Moreover, average pupil diameters at the low-load stage were smallest for the high-level pilot students. Conclusion: Following the neural efficiency hypothesis framework, the most proficient pilot students supposedly used their mental resources more efficiently than the least proficient while performing the multitasking scenario. Application: These findings might introduce a new way of managing selection processes complemented with ocular measurements. More specifically, pupil size measurement could enable identification of applicants with greater chances of success during pilot training.
Anna Miscenà; Jozsef Arato; Raphael Rosenberg
In: Journal of Eye Movement Research, 13 (2), pp. 1–13, 2020.
Among the most renowned painters of the early twentieth century, Gustav Klimt is often associated – by experts and laymen alike – with a distinctive style of representation: the visual juxtaposition of realistic features and flattened ornamental patterns. Art historical writing suggests that this juxtaposition allows a two-fold experience; the perception of both the realm of art and the realm of life. While Klimt adopted a variety of stylistic choices in his career, this one popularised his work and was hardly ever used by other artists. The following study was designed to observe whether Klimt's distinctive style causes a specific behaviour of the viewer, at the level of eye-movements. Twenty-one portraits were shown to thirty viewers while their eye-movements were recorded. The pictures included artworks by Klimt in both his distinctive and non-distinctive styles, as well as other artists of the same historical period. The recorded data show that only Klimt's distinctive paintings induce a specific eye-movement pattern with alternating longer (“absorbed”) and shorter (“scattered”) fixations. We therefore claim that there is a behavioural correspondence to what art historical interpretations have so far asserted: The perception of “Klimt's style” can be described as two-fold also at a physiological level.
Malik M Naeem Mannan; Ahmad M Kamran; Shinil Kang; Hak Soo Choi; Myung Yung Jeong
In: Sensors, 20 (3), pp. 1–20, 2020.
Steady-state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain-computer interfaces (BCIs) due to their robustness, large number of commands, high classification accuracies, and information transfer rates (ITRs). However, the use of several simultaneously flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose a stimuli-responsive hybrid speller that combines electroencephalography (EEG) and video-based eye-tracking to increase user comfort when large numbers of stimuli flicker simultaneously. A canonical correlation analysis (CCA)-based framework identifies the target frequency from a flickering signal of only 1 s duration. Our proposed BCI speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI spellers use as many frequencies as targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued-spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free-spelling task. Consequently, our proposed speller is superior to other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye-tracking and SSVEP BCI-based system will ultimately enable a truly high-speed communication channel.
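CCA-based SSVEP frequency identification, as described above, correlates the multi-channel EEG segment with sine/cosine reference signals at each candidate flicker frequency and picks the best match. A minimal sketch under assumed names and parameters (two harmonics, reduced-QR canonical correlations), not the authors' exact pipeline:

```python
import numpy as np

def cca_max_corr(X, Y):
    # largest canonical correlation between the column spaces of X and Y
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # singular values of Qx^T Qy are the canonical correlations
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def identify_frequency(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the flicker frequency whose sine/cosine reference set is
    most correlated with the EEG segment. eeg: (n_samples, n_channels)."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in candidate_freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        r = cca_max_corr(eeg, refs)
        if r > best_r:
            best_f, best_r = f, r
    return best_f, best_r
```

With a 1 s segment at, say, 250 Hz, `identify_frequency(eeg, 250, [8.0, 10.0, 12.0])` would return the candidate whose harmonics best explain the recorded signal.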
Diederick C Niehorster; Thiago Santini; Roy S Hessels; Ignace T C Hooge; Enkelejda Kasneci; Marcus Nyström
In: Behavior Research Methods, 52 (3), pp. 1140–1160, 2020.
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
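Accuracy figures like the 0.8–3.1° deviations above are typically computed as the angular distance between recorded gaze samples and a known fixation target. A minimal sketch (the function name is an assumption, and the paper's exact metric may differ):

```python
import numpy as np

def gaze_deviation_deg(gaze_x, gaze_y, target_x, target_y):
    """Mean angular distance (deg) between gaze samples and a fixation
    target, assuming positions are already expressed in degrees of
    visual angle. Slippage effects can then be expressed as the change
    in this value relative to a pre-movement baseline."""
    d = np.hypot(np.asarray(gaze_x) - target_x,
                 np.asarray(gaze_y) - target_y)
    return float(d.mean())
```

Comparing this value before and after a slippage-inducing task gives the "increase in gaze deviation over baseline" reported in the study.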
Paul Henri Prévot; Kevin Gehere; Fabrice Arcizet; Himanshu Akolkar; Mina A Khoei; Kévin Blaize; Omar Oubari; Pierre Daye; Marion Lanoë; Manon Valet; Sami Dalouz; Paul Langlois; Elric Esposito; Valérie Forster; Elisabeth Dubus; Nicolas Wattiez; Elena Brazhnikova; Céline Nouvel-Jaillard; Yannick LeMer; Joanna Demilly; Claire Maëlle Fovet; Philippe Hantraye; Morgane Weissenburger; Henri Lorach; Elodie Bouillet; Martin Deterre; Ralf Hornig; Guillaume Buc; José Alain Sahel; Guillaume Chenegros; Pierre Pouget; Ryad Benosman; Serge Picaud
In: Nature Biomedical Engineering, 4 (2), pp. 172–180, 2020.
Retinal dystrophies and age-related macular degeneration related to photoreceptor degeneration can cause blindness. In blind patients, although the electrical activation of the residual retinal circuit can provide useful artificial visual perception, the resolutions of current retinal prostheses have been limited either by large electrodes or small numbers of pixels. Here we report the evaluation, in three awake non-human primates, of a previously reported near-infrared-light-sensitive photovoltaic subretinal prosthesis. We show that multipixel stimulation of the prosthesis within radiation safety limits enabled eye tracking in the animals, that they responded to stimulations directed at the implant with repeated saccades and that the implant-induced responses were present two years after device implantation. Our findings pave the way for the clinical evaluation of the prosthesis in patients affected by dry atrophic age-related macular degeneration.
David Randall; Sophie Lauren Fox; John Wesley Fenner; Gemma Elizabeth Arblaster; Anne Bjerre; Helen Jane Griffiths
In: Current Eye Research, 45 (12), pp. 1611–1618, 2020.
Purpose: Oscillopsia is a debilitating symptom resulting from involuntary eye movement most commonly associated with acquired nystagmus. Investigating and documenting the effects of oscillopsia severity on visual acuity (VA) is challenging. This paper aims to further the understanding of the effects of oscillopsia using a virtual reality simulation. Methods: Fifteen right-beat horizontal nystagmus waveforms, with different amplitude (1°, 3°, 5°, 8° and 11°) and frequency (1.25 Hz, 2.5 Hz and 5 Hz) combinations, were produced and imported into virtual reality to simulate different severities of oscillopsia. Fifty participants without ocular pathology were recruited to read logMAR charts in virtual reality under stationary conditions (no oscillopsia) and subsequently while experiencing simulated oscillopsia. The change in VA (logMAR) was calculated for each oscillopsia simulation (logMAR VA with oscillopsia minus logMAR VA with no oscillopsia), removing the influence of different baseline VAs between participants. A one-tailed paired t-test was used to assess the statistical significance of the worsening in VA caused by the oscillopsia simulations. Results: VA worsened with each incremental increase in simulated oscillopsia intensity (frequency × amplitude), either by increasing frequency or amplitude, with the exception of statistically insignificant changes at lower intensity simulations. Theoretical understanding predicted a linear relationship between increasing oscillopsia intensity and worsening VA. This was supported by observations at lower intensity simulations but not at higher intensities, with incremental changes in VA gradually levelling off. A potential reason for the difference at higher intensities is the influence of frame rate when using digital simulations in virtual reality. Conclusions: Frequency and amplitude were found to affect VA equally, as predicted.
These results not only consolidate the assumption that VA degrades with oscillopsia but also provide quantitative information that relates these changes to amplitude and frequency of oscillopsia.
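The study's design quantities can be written out directly: the 15 amplitude/frequency combinations, the intensity measure (frequency × amplitude), and the baseline-corrected VA change. A small sketch (function and constant names are assumptions for illustration):

```python
import itertools

AMPLITUDES_DEG = [1, 3, 5, 8, 11]   # amplitudes used in the study
FREQUENCIES_HZ = [1.25, 2.5, 5]     # frequencies used in the study

def oscillopsia_conditions():
    # the 15 (amplitude, frequency, intensity) combinations,
    # with intensity defined as frequency x amplitude
    return [(a, f, a * f)
            for a, f in itertools.product(AMPLITUDES_DEG, FREQUENCIES_HZ)]

def va_change(logmar_with, logmar_without):
    # positive values indicate worse acuity under simulated oscillopsia;
    # subtracting the baseline removes between-participant VA differences
    return logmar_with - logmar_without
```

Under the predicted linear relationship, `va_change` would grow roughly in proportion to the intensity term, which is what the study observed at lower intensities.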
Deirdre A Robertson; Peter D Lunn
In: Appetite, 144 , pp. 1–10, 2020.
We manipulated the presence and spatial location of calorie labels on menus while tracking eye movements. A novel “lab-in-the-field” experimental design allowed eye movements to be recorded while participants chose lunch from a menu, unaware that their choice was part of a study. Participants exposed to calorie information ordered 93 fewer calories (11%) relative to a control group who saw no calorie labels. The difference in number of calories consumed was greater still. The impact was strongest when calorie information was displayed just to the right of the price, in an equivalent font. The effects were mediated by knowledge of the amount of calories in the meal, implying that calorie posting led to more informed decision-making. There was no impact on enjoyment of the meal. The eye-tracking data suggested that the spatial arrangement altered individuals' search strategies while viewing the menu. This research suggests that the spatial location of calories on menus may be an important consideration when designing calorie posting legislation and policy.
Qëndresa Rramani; Ian Krajbich; Laura Enax; Lisa Brustkern; Bernd Weber
In: Nutrition Research, 80 , pp. 106–116, 2020.
Nutrition labels are the most commonly used tools to promote healthy choices. Research has shown that color-coded traffic light (TL) labels are more effective than purely numerical Guideline Daily Amount (GDA) labels at promoting healthy eating. While these effects of TL labels on food choice are hypothesized to rely on attention, how this occurs remains unknown. Based on previous eye-tracking research we hypothesized that TL labels compared to GDA labels will attract more attention, will induce shifts in attention allocation to healthy food items, and will increase the influence of attention to the labels on food choice. To test our hypotheses, we conducted an eye-tracking experiment where participants chose between healthy and unhealthy food items accompanied either by TL or GDA labels. We found that TL labels biased choices towards healthier items because their presence caused participants to allocate more attention to healthy items and less to unhealthy items. Moreover, our data indicated that TL labels were more likely to be looked at, and had a larger effect on choice, despite attracting less dwell time. These results reveal that TL labels increase healthy food choice, relative to GDA labels, by shifting attention and the effects of attention on choice.
Donghyun Ryu; Andrew Cooke; Eduardo Bellomo; Tim Woodman
In: Accident Analysis and Prevention, 146 , pp. 1–13, 2020.
The objectives of this paper were to directly examine the roles of central and peripheral vision in hazard perception and to test whether perceptual training can enhance hazard perception. We also examined putative cortical mechanisms underpinning any effect of perceptual training on performance. To address these objectives, we used the gaze-contingent display paradigm to selectively present information to central and peripheral parts of the visual field. In Experiment 1, we compared hazard perception abilities of experienced and inexperienced drivers while watching video clips in three different viewing conditions (full vision; clear central and blurred peripheral vision; blurred central and clear peripheral vision). Participants' visual search behaviour and cortical activity were simultaneously recorded. In Experiment 2, we determined whether training with clear central and blurred peripheral vision could improve hazard perception among non-licensed drivers. Results demonstrated that (i) information from central vision is more important than information from peripheral vision in identifying hazard situations, for screen-based hazard perception tests, (ii) clear central and blurred peripheral vision viewing helps the alignment of line-of-gaze and attention, (iii) training with clear central and blurred peripheral vision can improve screen-based hazard perception. The findings have important implications for road safety and provide a new training paradigm to improve hazard perception.
Steven W Savage; Douglas D Potter; Benjamin W Tatler
In: Accident Analysis and Prevention, 138 , pp. 1–11, 2020.
Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye-fixation-related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance.
Lisa Schäfer; Ricarda Schmidt; Silke M Müller; Arne Dietrich; Anja Hilbert
In: Journal of Psychiatric Research, 129 , pp. 214–221, 2020.
Research has documented the effectiveness of obesity surgery (OS) for long-term weight loss and improvements in medical and psychosocial sequelae, and general cognitive functioning. However, there is only preliminary evidence for changes in attentional processing of food cues after OS. This study longitudinally investigated visual attention towards food cues from pre- to 1-year post-surgery. Using eye tracking (ET) and a Visual Search Task (VST), attentional processing of food versus non-food cues was assessed in n = 32 patients with OS and n = 31 matched controls without weight-loss treatment at baseline and 1-year follow-up. Associations with experimentally assessed impulsivity and eating disorder psychopathology and the predictive value of changes in visual attention towards food cues for weight loss and eating behaviors were determined. During ET, both groups showed significant gaze duration biases to non-food cues without differences and changes over time. No attentional biases over group and time were found by the VST. Correlations between attentional data and clinical variables were sparse and not robust over time. Changes in visual attention did not predict weight loss and eating disorder psychopathology after OS. The present study provides support for a top-down regulation of visual attention to non-food cues in individuals with severe obesity. No changes in attentional processing of food cues were detected 1-year post-surgery. Further studies are needed with comparable methodology and longer follow-ups to clarify the role of biased visual attention towards food cues for long-term weight outcomes and eating behaviors after OS.
Huiru Shao; Jing Li; Wenbo Wan; Huaxiang Zhang; Jiande Sun
Saccadic trajectory-based identity authentication Journal Article
In: Multimedia Tools and Applications, 79 (7-8), pp. 4891–4905, 2020.
The saccadic trajectory is generated by the extra-ocular muscles of the eyes, a complex mechanism driven by neural signals from the brain. Saccadic trajectories are non-reproducible and can be recorded without contact. In this paper, we propose a saccadic trajectory-based identity authentication method, considering that the saccadic trajectory can be used as a behavior-based biometric. In this method, we adopt the Velocity-Threshold (I-VT) algorithm to extract saccadic trajectories from the whole eye movement data, extract features via the wavelet packet transform, and authenticate identity by classifying these features with an SVM. We verify the proposed method on the EMDBv1.0 dataset for horizontal eye movements. We select one subject to be the host and randomly choose another 50 subjects from the remaining 58 subjects as the attackers. We achieve the best performance by optimizing feature selection and the SVM parameters. The experimental results show that the average accuracy for accepting the host can reach 98.09%, and the average accuracy for rejecting the attackers can reach 99.55%. This demonstrates that saccadic trajectory-based identity authentication is promising for information security.
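The I-VT step described in the abstract separates saccades from fixations by a simple velocity threshold. A minimal sketch of that idea follows; the threshold value and the toy eye trace are illustrative assumptions, not the paper's settings:

```python
# Sketch of the I-VT (Velocity-Threshold) classification idea: samples whose
# point-to-point velocity exceeds a threshold are labelled saccade, the rest
# fixation. The 100 deg/s threshold and the toy trace are illustrative only.

def ivt_classify(x_positions, timestamps, threshold_deg_per_s=100.0):
    """Label each inter-sample interval as 'saccade' or 'fixation'."""
    labels = []
    for i in range(1, len(x_positions)):
        dt = timestamps[i] - timestamps[i - 1]
        velocity = abs(x_positions[i] - x_positions[i - 1]) / dt
        labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
    return labels

# A 500 Hz horizontal trace (degrees): steady gaze, then a rapid 5-degree shift
ts = [i * 0.002 for i in range(6)]
xs = [0.0, 0.01, 0.02, 2.5, 5.0, 5.01]
print(ivt_classify(xs, ts))  # the middle two intervals exceed the threshold
```

Contiguous "saccade" intervals would then be merged into one trajectory before feature extraction.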
Nino Sharvashidze; Alexander C Schütz
Task-dependent eye-movement patterns in viewing art Journal Article
In: Journal of Eye Movement Research, 13 (2), pp. 1–17, 2020.
In art schools and classes for art history, students are trained to pay attention to different aspects of an artwork, such as art movement characteristics and painting techniques. Experts are better at processing style and visual features of an artwork than nonprofessionals. Here we tested the hypothesis that experts in art use different, task-dependent viewing strategies than nonprofessionals when analyzing a piece of art. We compared a group of art history students with a group of students with no art education background, while viewing 36 paintings under three discrimination tasks. Participants were asked to determine the art movement, the date and the medium of the paintings. We analyzed behavioral and eye-movement data of 27 participants. Our observers adjusted their viewing strategies according to the task, resulting in longer fixation durations and shorter saccade amplitudes for the medium detection task. We found higher task accuracy and subjective confidence, less congruence and higher dispersion in fixation locations in experts. Expertise also influenced saccade metrics, biasing them towards larger saccade amplitudes, suggesting a more holistic scanning strategy of experts in all three tasks.
Carlos Sillero-Rejon; Ute Leonards; Marcus R Munafò; Craig Hedge; Janet Hoek; Benjamin Toll; Harry Gove; Isabel Willis; Rose Barry; Abi Robinson; Olivia M Maynard
Avoidance of tobacco health warnings? An eye-tracking approach Journal Article
In: Addiction, 116 , pp. 126–138, 2020.
Aims: Across three eye-tracking studies, we examined how cigarette pack features affected visual attention and self-reported avoidance of and reactance to warnings. Design: Study 1: smoking status × warning immediacy (short-term versus long-term health consequences) × warning location (top versus bottom of pack). Study 2: smoking status × warning framing (gain-framed versus loss-framed) × warning format (text-only versus pictorial). Study 3: smoking status × warning severity (highly severe versus moderately severe consequences of smoking). Setting: University of Bristol, UK, eye-tracking laboratory. Participants: Study 1: non-smokers (n = 25), weekly smokers (n = 25) and daily smokers (n = 25). Study 2: non-smokers (n = 37), smokers contemplating quitting (n = 37) and smokers not contemplating quitting (n = 43). Study 3: non-smokers (n = 27), weekly smokers (n = 26) and daily smokers (n = 26). Measurements: For all studies: visual attention, measured as the ratio of the number of fixations to the warning versus the branding, self-reported predicted avoidance of and reactance to warnings and for study 3, effect of warning on quitting motivation. Findings: Study 1: greater self-reported avoidance [mean difference (MD) = 1.14; 95% confidence interval (CI) = 0.94, 1.35, P < 0.001, ηp² = 0.64] and visual attention (MD = 0.89, 95% CI = 0.09, 1.68
Brenda M Stoesz; Jessica Sutton
In: Canadian Journal of Learning and Technology, 46 (2), pp. 1–21, 2020.
Research has demonstrated that students' learning outcomes and motivation to learn are influenced by the visual design of learning technologies (e.g., learning management systems or LMS). One aspect of LMS design that has not been thoroughly investigated is visual complexity. In two experiments, postsecondary students rated the visual complexity of images of LMS after exposure durations of 50-500 ms. Perceptions of complexity were positively correlated across timed conditions and working memory capacity was associated with complexity ratings. Low-level image metrics were also found to predict perceptions of the LMS complexity. Results demonstrate the importance of the visual design of learning technologies and suggest that additional research on the impact of LMS visual complexity on learning outcomes is warranted.
Byunghoon “Tony” Ahn; Jason M Harley
In: British Journal of Educational Technology, 51 (5), pp. 1563–1576, 2020.
Learning analytics (LA) incorporates analyzing cognitive, social and emotional processes in learning scenarios to make informed decisions regarding instructional design and delivery. Research has highlighted important roles that emotions play in learning. We have extended this field of research by exploring the role of emotions in a relatively uncommon learning scenario: learning about queer history with a multimedia mobile app. Specifically, we used an automatic facial recognition software (FaceReader 7) to measure learners' discrete emotions and a counter-balanced multiple-choice quiz to assess learning. We also used an eye tracker (EyeLink 1000) to identify the emotions learners experienced while they read specific content, as opposed to the emotions they experienced over the course of the entire learning session. A total of 33 out of 57 of the learners' data were eligible to be analyzed. Results revealed that learners expressed more negative-activating emotions (i.e., anger, anxiety) and negative-deactivating emotions (i.e., sadness) than positive-activating emotions (i.e., happiness). Learners with an angry emotion profile had the highest learning gains. The importance of examining typically undesirable emotions in learning, such as anger, is discussed using the control-value theory of achievement emotions. Further, this study describes a multimodal methodology to integrate behavioral trace data into learning analytics research.
Hamidreza Azemati; Fatemeh Jam; Modjtaba Ghorbani; Matthias Dehmer; Reza Ebrahimpour; Abdolhamid Ghanbaran; Frank Emmert-Streib
In: Symmetry, 12 , pp. 1–15, 2020.
Symmetry is an important visual feature for humans and its application in architecture is clearly evident. This paper aims to investigate the role of symmetry in the aesthetic judgment of residential building façades and to study the pattern of eye movement based on the expertise of subjects in architecture. To investigate this, we created images in two categories: symmetrical and asymmetrical façade images. The experimental design allowed us to investigate subjects' preferences and their reaction times when judging the presented images, as well as to record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer's attention, but there is a difference in the points watched and their fixation duration. Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups.
Anissa Boutabla; Samuel Cavuscens; Maurizio Ranieri; Céline Crétallaz; Herman Kingma; Raymond van de Berg; Nils Guinand; Angélica Pérez Fornos
In: Journal of Neurology, 267 (1), pp. S273–S284, 2020.
Background and purpose: Vestibular implants seem to be a promising treatment for patients suffering from severe bilateral vestibulopathy. To optimize outcomes, we need to investigate how, and to which extent, the different vestibular pathways are activated. Here we characterized the simultaneous responses to electrical stimuli of three different vestibular pathways. Methods: Three vestibular implant recipients were included. First, activation thresholds and amplitude growth functions of electrically evoked vestibulo-ocular reflexes (eVOR), cervical myogenic potentials (ecVEMPs) and vestibular percepts (vestibulo-thalamo-cortical, VTC) were recorded upon stimulation with single, biphasic current pulses (200 µs/phase) delivered through five different vestibular electrodes. Latencies of eVOR and ecVEMPs were also characterized. Then we compared the amplitude growth functions of the three pathways using different stimulation profiles (1-pulse, 200 µs/phase; 1-pulse, 50 µs/phase; 4-pulses, 50 µs/phase, 1600 pulses-per-second) in one patient (two electrodes). Results: The median latencies of the eVOR and ecVEMPs were 8 ms (8–9 ms) and 10.2 ms (9.6–11.8 ms), respectively. While the amplitude of eVOR and ecVEMP responses increased with increasing stimulation current, the VTC pathway showed a different, step-like behavior. In this study, the 200 µs/phase paradigm appeared to give the best balance to enhance responses at lower stimulation currents. Conclusions: This study is a first attempt to evaluate the simultaneous activation of different vestibular pathways. However, this issue deserves further and more detailed investigation to determine the actual possibility of selective stimulation of a given pathway, as well as the functional impact of the contribution of each pathway to the overall rehabilitation process.
Christopher D D Cabrall; Riender Happee; Joost C F De Winter
In: Transportation Research Part F: Traffic Psychology and Behaviour, 68 , pp. 187–197, 2020.
For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control.
Andrea Caoli; Silvio P Sabatini; Agostino Gibaldi; Guido Maiello; Anna Kosovicheva; Peter Bex
In: Scientific Reports, 10 , pp. 1–13, 2020.
Strabismus is a prevalent impairment of binocular alignment that is associated with a spectrum of perceptual deficits and social disadvantages. Current treatments for strabismus involve ocular alignment through surgical or optical methods and may include vision therapy exercises. In the present study, we explore the potential of real-time dichoptic visual feedback that may be used to quantify and manipulate interocular alignment. A gaze-contingent ring was presented independently to each eye of 11 normally-sighted observers as they fixated a target dot presented only to their dominant eye. Their task was to center the rings within 2° of the target for at least 1 s, with feedback provided by the sizes of the rings. By offsetting the ring in the non-dominant eye temporally or nasally, this task required convergence or divergence, respectively, of the non-dominant eye. Eight of 11 observers attained 5° asymmetric convergence and 3 of 11 attained 3° asymmetric divergence. The results suggest that real-time gaze-contingent feedback may be used to quantify and transiently simulate strabismus and holds promise as a method to augment existing therapies for oculomotor alignment disorders.
Matthew R Cavanaugh; Lisa M Blanchard; Michael McDermott; Byron L Lam; Madhura Tamhankar; Steven E Feldon
In: Ophthalmology, pp. 1–11, 2020.
Purpose: To evaluate the efficacy of motion discrimination training as a potential therapy for stroke-induced hemianopic visual field defects. Design: Clinical trial. Participants: Forty-eight patients with stroke-induced homonymous hemianopia (HH) were randomized into 2 training arms: intervention and control. Patients were between 21 and 75 years of age and showed no ocular issues at presentation. Methods: Patients were trained on a motion discrimination task previously evidenced to reduce visual field deficits, but not in a randomized clinical trial. Patients were randomized with equal allocation to receive training in either their sighted or deficit visual fields. Training was performed at home for 6 months, consisting of repeated visual discriminations at a single location for 20 to 30 minutes daily. Study staff and patients were masked to training type. Testing before and after training was identical, consisting of Humphrey visual fields (Carl Zeiss Meditech), macular integrity assessment perimetry, OCT, motion discrimination performance, and visual quality-of-life questionnaires. Main Outcome Measures: Primary outcome measures were changes in perimetric mean deviation (PMD) on Humphrey Visual Field Analyzer in both eyes. Results: Mean PMDs improved over 6 months in deficit-trained patients (mean change in the right eye, 0.58 dB; 95% confidence interval, 0.07–1.08 dB; mean change in the left eye 0.84 dB; 95% confidence interval, 0.22–1.47 dB). No improvement was observed in sighted-trained patients (mean change in the right eye, 0.12 dB; 95% confidence interval, −0.38 to 0.62 dB; mean change in the left eye, 0.10 dB; 95% confidence interval, −0.52 to 0.72 dB). However, no significant differences were found between the alternative training methods (right eye
Xianglan Chen; Hulin Ren; Yamin Liu; Bendegul Okumus; Anil Bilgihan
In: International Journal of Hospitality Management, 84 , pp. 1–10, 2020.
Food is as cultural as it is practical, and names of dishes accordingly have cultural nuances. Menus serve as communication tools between restaurants and their guests, representing the culinary philosophy of the chefs and proprietors involved. The purpose of this experimental lab study is to compare differences of attention paid to textual and pictorial elements of menus with metaphorical and/or metonymic names. Eye movement technology was applied in a 2 × 3 between-subject experiment (n = 40), comparing the strength of visual metaphors (e.g., images of menu items on the menu) and direct textual names in Chinese and English with regard to guests' willingness to purchase the dishes in question. Post-test questionnaires were also employed to assess participants' attitudes toward menu designs. Study results suggest that visual metaphors are more efficient when reflecting a product's strength. Images are shown to positively influence consumers' expectations of taste and enjoyment, garnering the most attention under all six conditions studied here, and constitute the most effective format when Chinese-only names are present. The textual claim increases perception of the strength of menu items along with purchase intention. Metaphorical dish names presented bilingually (i.e., in Chinese and English) hold the greatest appeal. This result can be interpreted from the perspective of grounded cognition theory, which suggests that situated simulations and re-enactment of perceptual, motor, and affective processes can support abstract thought. The lab results and survey provide specific theoretical and managerial implications with regard to translating names of Chinese dishes to attract customers' attention to specific menu items.
Agnieszka Chmiel; Przemysław Janikowski; Agnieszka Lijewska
Multimodal processing in simultaneous interpreting with text Journal Article
In: Target, 32 (1), pp. 37–58, 2020.
The present study focuses on (in)congruence of input between the visual and the auditory modality in simultaneous interpreting with text. We asked twenty-four professional conference interpreters to simultaneously interpret an aurally and visually presented text with controlled incongruences in three categories (numbers, names and control words), while measuring interpreting accuracy and eye movements. The results provide evidence for the dominance of the visual modality, which goes against the professional standard of following the auditory modality in the case of incongruence. Numbers enjoyed the greatest accuracy across conditions possibly due to simple cross-language semantic mappings. We found no evidence for a facilitation effect for congruent items, and identified an impeding effect of the presence of the visual text for incongruent items. These results might be interpreted either as evidence for the Colavita effect (in which visual stimuli take precedence over auditory ones) or as strategic behaviour applied by professional interpreters to avoid risk.
Francisco M Costela; José J Castro-Torres
In: Transportation Research Part F: Traffic Psychology and Behaviour, 74 , pp. 511–521, 2020.
Background: Many studies have found that eye movement behavior provides a real-time index of mental activity. Risk management architectures embedded in autonomous vehicles fail to include human cognitive aspects. We set out to evaluate whether eye movements during a risk driving detection task are able to predict risk situations. Methods: Thirty-two normally sighted subjects (15 female) saw 20 clips of recorded driving scenes while their gaze was tracked. They reported when they considered the car should brake, anticipating any hazard. We applied both a mixed-effect logistic regression model and feedforward neural networks between hazard reports and eye movement descriptors. Results: All subjects reported at least one major collision hazard in each video (average 3.5 reports). We found that hazard situations were predicted by larger saccades, more and longer fixations, fewer blinks, and a smaller gaze dispersion in both horizontal and vertical dimensions. Performance between models incorporating different combinations of descriptors was compared by running a test of equality of receiver operating characteristic areas. Feedforward neural networks outperformed logistic regressions in accuracies. The model including saccadic magnitude, fixation duration, dispersion in x, and pupil returned the highest ROC area (0.73). Conclusion: We evaluated each eye movement descriptor successfully and created separate models that predicted hazard events with an average efficacy of 70% using both logistic regressions and feedforward neural networks. The use of driving simulators and hazard detection videos can be considered a reliable methodology to study risk prediction.
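The logistic-regression approach described above maps eye-movement descriptors to a hazard probability. A minimal sketch of that model form follows; the coefficient values are invented for illustration (the paper does not report them here), though their signs follow the directions of effect stated in the abstract:

```python
import math

# Illustrative logistic model of hazard probability from eye-movement
# descriptors, in the spirit of the paper's logistic regression. All
# coefficient values below are made-up assumptions for demonstration:
# larger saccades and longer fixations raise the predicted risk; more
# blinks and wider gaze dispersion lower it.

def hazard_probability(saccade_amp, fixation_dur, blink_rate, gaze_dispersion):
    logit = (-2.0
             + 0.8 * saccade_amp       # deg, positive effect
             + 1.5 * fixation_dur      # s, positive effect
             - 0.6 * blink_rate        # blinks/s, negative effect
             - 0.9 * gaze_dispersion)  # deg, negative effect
    return 1.0 / (1.0 + math.exp(-logit))

p = hazard_probability(saccade_amp=3.0, fixation_dur=0.4,
                       blink_rate=0.2, gaze_dispersion=1.0)
print(round(p, 3))
```

In practice the coefficients would be fitted to the labelled hazard reports, and ROC area computed on held-out clips, as the study describes.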
Joe Cutting; Paul Cairns
In: Behaviour and Information Technology, pp. 1–21, 2020.
Digital games are well known for holding players' attention and stopping them from being distracted by events around them. Being able to quantify how well games hold attention provides a behavioral foundation for measures of game engagement and a link to existing research on attention. We developed a new behavioral measure of how well games hold attention, based on players' post-game recognition of irrelevant distractors which are shown around the game. This is known as the Distractor Recognition Paradigm (DRP). In two studies we show that the DRP is an effective measure of how well self-paced games hold attention. We show that even simple self-paced games can hold players' attention completely and the consistency of attentional focus is moderated by game engagement. We compare the DRP to existing measures of both attention and engagement and consider how practical it is as a measure of game engagement. We find no evidence that eye tracking is a superior measure of attention to distractor recognition. We discuss existing research on attention and consider implications for areas such as motivation to play and serious games.
Giorgia D'Innocenzo; Alexander V Nowicky; Daniel T Bishop
In: Behavioural Brain Research, 379 , pp. 1–13, 2020.
Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise, but also their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action, relative to an everyday one – a reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action as we recorded MEPs from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf swing observation, but it was not modulated by expertise, relative to baseline; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes; in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing of a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.
Trafton Drew; James Guthrie; Isabel Reback
In: Journal of Experimental Psychology: Applied, 26 (4), pp. 659–670, 2020.
Computer-aided detection (CAD) is applied during screening mammography for millions of women each year. Despite its popularity, several large studies have observed no benefit in breast cancer detection for practices that use CAD. This lack of benefit may be driven by how CAD information is conveyed to the radiologist. In the current study, we examined this possibility in an artificial task modeled after screening mammography. Prior work at high (50%) target prevalence suggested that CAD marks might disrupt visual attention: Targets that are missed by the CAD system are more likely to be missed by the user. However, targets are much less common in screening mammography. Moreover, the prior work on this topic has focused on simple binary CAD systems that place marks on likely locations, but some modern CAD systems employ interactive CAD (iCAD) systems that may mitigate the previously observed costs. Here, we examined the effects of target prevalence and CAD system. We found that the costs of binary CAD were exacerbated at low prevalence. Meanwhile, iCAD did not lead to a cost on unmarked targets, which suggests that this sort of CAD implementation may be superior to more traditional binary CAD implementations when targets occur infrequently.
Yke Bauke Eisma; Clark Borst; René van Paassen; Joost de Winter
Augmented visual feedback: Cure or distraction? Journal Article
In: Human Factors, pp. 1–13, 2020.
Objective: The aim of the study was to investigate the effect of augmented feedback on participants' workload, performance, and distribution of visual attention. Background: An important question in human–machine interface design is whether the operator should be provided with direct solutions. We focused on the solution space diagram (SSD), a type of augmented feedback that shows directly whether two aircraft are on conflicting trajectories. Method: One group of novices (n = 13) completed conflict detection tasks with SSD, whereas a second group (n = 11) performed the same tasks without SSD. Eye-tracking was used to measure visual attention distribution. Results: The mean self-reported task difficulty was substantially lower for the SSD group compared to the No-SSD group. The SSD group had a better conflict detection rate than the No-SSD group, whereas false-positive rates were equivalent. High false-positive rates for some scenarios were attributed to participants who misunderstood the SSD. Compared to the No-SSD group, the SSD group spent a large proportion of their time looking at the SSD aircraft while looking less at other areas of interest. Conclusion: Augmented feedback makes the task subjectively easier but has side effects related to visual tunneling and misunderstanding. Application: Caution should be exercised when human operators are expected to reproduce task solutions that are provided by augmented visual feedback.
Camilla E J Elphick; Graham E Pike; Graham J Hole
In: Psychology, Crime and Law, 26 (1), pp. 67–92, 2020.
As pupil size is affected by cognitive processes, we investigated whether it could serve as an independent indicator of target recognition in lineups. Participants saw a simulated crime video, followed by two viewings of either a target-present or target-absent video lineup while pupil size was measured with an eye-tracker. Participants who made correct identifications showed significantly larger pupil sizes when viewing the target compared with distractors. Some participants were uncertain about their choice of face from the lineup, but nevertheless showed pupillary changes when viewing the target, suggesting covert recognition of the target face had occurred. The results suggest that pupillometry might be a useful aid in assessing the accuracy of an eyewitness' identification.
Gemma Fitzsimmons; Lewis T Jayes; Mark J Weal; Denis Drieghe
In: PLoS ONE, 15 (9), pp. 1–23, 2020.
It has been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect lexical processing of words. Across two experiments, we utilised eye tracking methodology to explore how hyperlinks and navigating webpages affect reading behaviour. In Experiment 1, participants read static Webpages either for comprehension or whilst skim reading, while in Experiment 2, participants additionally read through a navigable Web environment. Embedded target words were either hyperlinks or not and were either high-frequency or low-frequency words. Results from Experiment 1 show that when skim reading, readers only fully lexically processed linked words, as evidenced by a frequency effect that was absent for unlinked words; when reading for comprehension, they fully lexically processed both linked and unlinked words. In Experiment 2, which allowed for navigating, readers only fully lexically processed linked words compared to unlinked words, regardless of whether they were skim reading or reading for comprehension. We suggest that readers engage in an efficient reading strategy where they attempt to minimise comprehension loss while maintaining a high reading speed. Readers use hyperlinks as markers to suggest important information and use them to navigate through the text in an efficient and effective way. The task of reading on the Web causes readers to lexically process words in a markedly different way from typical reading experiments.
Mathilda Froesel; Quentin Goudard; Marc Hauser; Maëva Gacoin; Suliann Ben Hamed
In: Scientific Reports, 10 , pp. 1–11, 2020.
Heart rate (HR) is extremely valuable in the study of complex behaviours and their physiological correlates in non-human primates. However, collecting this information is often challenging, involving either invasive implants or tedious behavioural training. In the present study, we implement an Eulerian video magnification (EVM) heart tracking method in the macaque monkey combined with wavelet transform. This is based on a measure of image-to-image fluctuations in skin reflectance due to changes in blood influx. We show a strong temporal coherence and amplitude match between EVM-based heart tracking and ground truth ECG, from both color (RGB) and infrared (IR) videos, in anesthetized macaques, to a level comparable to what can be achieved in humans. We further show that this method allows us to identify consistent HR changes following the presentation of conspecific emotional voices or faces. EVM is used to extract HR in humans but has never been applied to non-human primates. Video photoplethysmography allows awake macaques' HR to be extracted from RGB videos. In contrast, our method extracts awake macaques' HR from both RGB and IR videos and is particularly resilient to the head motion that can be observed in awake behaving monkeys. Overall, we believe that this method can be generalized as a tool to track HR of the awake behaving monkey, for ethological, behavioural, neuroscience or welfare purposes.
Agostino Gibaldi; Silvio P Sabatini
In: Behavior Research Methods, pp. 1–21, 2020.
Saccades are rapid ballistic eye movements that humans make to direct the fovea to an object of interest. Their kinematics is well defined, showing regular relationships between amplitude, duration, and velocity: the saccadic 'main sequence'. Deviations of eye movements from the main sequence can be used as markers of specific neurological disorders. Despite its significance, there is no general methodological consensus for reliable and repeatable measurements of the main sequence. In this work, we propose a novel approach for obtaining standard indicators of oculomotor performance. The obtained measurements are characterized by high repeatability, allowing for fine assessments of inter- and intra-subject variability, and inter-ocular differences. The designed experimental procedure is natural and non-fatiguing, so it is well suited for fragile or non-collaborative subjects such as neurological patients and infants. The method has been released as a software toolbox for public use. This framework lays the foundation for a normative dataset of healthy oculomotor performance for the assessment of oculomotor dysfunctions.
Alexander Goettker; Kevin J MacKenzie; Scott T Murdison
In: Journal of the Society for Information Display, 28 (6), pp. 509–519, 2020.
We used perceptual and oculomotor measures to understand the negative impacts of low (phantom array) and high (motion blur) duty cycles with a high-speed, AR-like head-mounted display prototype. We observed large intersubject variability for the detection of phantom array artifacts but a highly consistent and systematic effect on saccadic eye movement targeting during low duty cycle presentations. This adverse effect on saccade endpoints was also related to an increased error rate in a perceptual discrimination task, showing a direct effect of display duty cycle on the perceptual quality. For high duty cycles, the probability of detecting motion blur increased during head movements, and this effect was elevated at lower refresh rates. We did not find an impact of the temporal display characteristics on compensatory eye movements during head motion (e.g., VOR). Together, our results allow us to quantify the tradeoff of different negative spatiotemporal impacts of user movements and make subsequent recommendations for optimized temporal HMD parameters.
Andrea Grant; Gregory J Metzger; Pierre François Van de Moortele; Gregor Adriany; Cheryl Olman; Lin Zhang; Joseph Koopermeiners; Yiğitcan Eryaman; Margaret Koeritzer; Meredith E Adams; Thomas R Henry; Kamil Uğurbil
In: Magnetic Resonance Imaging, 73 , pp. 163–176, 2020.
Purpose: To perform a pilot study to quantitatively assess cognitive, vestibular, and physiological function during and after exposure to a magnetic resonance imaging (MRI) system with a static field strength of 10.5 Tesla at multiple time scales. Methods: A total of 29 subjects were exposed to a 10.5 T MRI field and underwent vestibular, cognitive, and physiological testing before, during, and after exposure; for 26 subjects, testing and exposure were repeated within 2–4 weeks of the first visit. Subjects also reported sensory perceptions after each exposure. Comparisons were made between short and long term time points in the study with respect to the parameters measured in the study; short term comparison included pre-vs-isocenter and pre-vs-post (1–24 h), while long term compared pre-exposures 2–4 weeks apart. Results: Of the 79 comparisons, 73 parameters were unchanged or had small improvements after magnet exposure. The exceptions to this included lower scores on short term (i.e. same day) executive function testing, greater isocenter spontaneous eye movement during visit 1 (relative to pre-exposure), increased number of abnormalities on videonystagmography visit 2 versus visit 1 and a mix of small increases (short term visit 2) and decreases (short term visit 1) in blood pressure. In addition, more subjects reported metallic taste at 10.5 T in comparison to similar data obtained in previous studies at 7 T and 9.4 T. Conclusion: Initial results of 10.5 T static field exposure indicate that 1) cognitive performance is not compromised at isocenter, 2) subjects experience increased eye movement at isocenter, and 3) subjects experience small changes in vital signs but no field-induced increase in blood pressure. While small but significant differences were found in some comparisons, none were identified as compromising subject safety. 
A modified testing protocol informed by these results was devised with the goal of permitting increased enrollment while providing continued monitoring to evaluate field effects.
Agnes Hardardottir; Mohammed Al-Hamdani; Raymond Klein; Austin Hurst; Sherry H Stewart
In: Nicotine & Tobacco Research, 22 (10), pp. 1788–1794, 2020.
INTRODUCTION: The social and health care costs of smoking are immense. To reduce these costs, several tobacco control policies have been introduced (eg, graphic health warnings [GHWs] on cigarette packs). Previous research has found plain packaging (a homogenized form of packaging), in comparison to branded packaging, effectively increases attention to GHWs using UK packaging prototypes. Past studies have also found that illness sensitivity (IS) protects against health-impairing behaviors. Building on this evidence, the goal of the current study was to assess the effect of packaging type (plain vs. branded), IS level, and their interaction on attention to GHWs on cigarette packages using proposed Canadian prototypes. AIMS AND METHODS: We assessed the dwell time and fixations on the GHW component of 40 cigarette pack stimuli (20 branded; 20 plain). Stimuli were presented in random order to 50 smokers (60.8% male; mean age = 33.1; 92.2% daily smokers) using the EyeLink 1000 system. Participants were divided into low IS (n = 25) and high IS (n = 25) groups based on scores on the Illness Sensitivity Index. RESULTS: Overall, plain packaging relative to branded packaging increased fixations (but not dwell time) on GHWs. Moreover, low IS (but not high IS) smokers showed more fixations to GHWs on plain versus branded packages. CONCLUSIONS: These findings demonstrate that plain packaging is a promising intervention for daily smokers, particularly those low in IS, and contribute evidence in support of impending implementation of plain packaging in Canada. IMPLICATIONS: Our findings have three important implications. First, our study provides controlled experimental evidence that plain packaging is a promising intervention for daily smokers. Second, the findings of this study contribute supportive evidence for the impending plain packaging policy in Canada, and can therefore aid in defense against anticipated challenges from the tobacco industry upon its implementation. 
Third, given its effects in increasing attention to GHWs, plain packaging is an intervention likely to provide smokers enhanced incentive for smoking cessation, particularly among those low in IS who may otherwise be less interested in seeking treatment for tobacco dependence.
Claudia R Hebert; Li Z Sha; Roger W Remington; Yuhong V Jiang
Redundancy gain in visual search of simulated X-ray images
In: Attention, Perception, and Psychophysics, 82 (4), pp. 1669–1681, 2020.
Cancer diagnosis frequently relies on the interpretation of medical images such as chest X-rays and mammography. This process is error prone; misdiagnoses can reach a rate of 15% or higher. Of particular interest are false negatives—tumors that are present but missed. Previous research has identified several perceptual and attentional problems underlying inaccurate perception of these images. But how might these problems be reduced? The psychological literature has shown that presenting multiple, duplicate images can improve performance. Here we explored whether redundant image presentation can improve target detection in simulated X-ray images, by presenting four identical or similar images concurrently. Displays with redundant images, including duplicates of the same image, showed reduced false-negative rates, compared with displays with a single image. This effect held both when the target's prevalence rate was high and when it was low. Eye tracking showed that fixating on two or more images in the redundant condition speeded target detection and prolonged search, and that the latter effect was the key to reducing false negatives. The redundancy gain may result from both perceptual enhancement and an increase in the search quitting threshold.
In: Journal of Medical Imaging, 7 (2), pp. 1–22, 2020.
The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects' response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.
David R Howell; Anna N Brilliant; Christina L Master; William P Meehan
In: Clinical Journal of Sport Medicine, 30 (5), pp. 444–450, 2020.
OBJECTIVE: To determine the test-retest correlation of an objective eye-tracking device among uninjured youth athletes. DESIGN: Repeated-measures study. SETTING: Sports-medicine clinic. PARTICIPANTS: Healthy youth athletes (mean age = 14.6 ± 2.2 years; 39% women) completed a brief, automated, and objective eye-tracking assessment. INDEPENDENT VARIABLES: Participants completed the eye-tracking assessment at 2 different testing sessions. MAIN OUTCOME MEASURES: During the assessment, participants watched a 220-second video clip while it moved around a computer monitor in a clockwise direction as an eye tracker recorded eye movements. We obtained 13 eye movement outcome variables and assessed correlations between the assessments made at the 2 time points using Spearman's Rho (rs). RESULTS: Thirty-one participants completed the eye-tracking evaluation at 2 time points [median = 7 (interquartile range = 6-9) days between tests]. No significant differences in outcomes were found between the 2 testing times. Several eye movement variables demonstrated moderate to moderately high test-retest reliability. Combined eye conjugacy metric (BOX score
Elisa Infanti; Samuel D Schwarzkopf
Mapping sequences can bias population receptive field estimates
In: NeuroImage, 211 , pp. 1–13, 2020.
Population receptive field (pRF) modelling is a common technique for estimating the stimulus-selectivity of populations of neurons using neuroimaging. Here, we aimed to address whether pRF properties estimated with this method depend on the spatio-temporal structure and the predictability of the mapping stimulus. We mapped the polar angle preference and tuning width of voxels in visual cortex (V1–V4) of healthy, adult volunteers. We compared sequences sweeping orderly through the visual field or jumping from location to location, employing stimuli of different width (45° vs 6°) and cycles of variable duration (8 s vs 60 s). While we did not observe any systematic influence of stimulus predictability, the temporal structure of the sequences significantly affected tuning width estimates. Ordered designs with large wedges and short cycles produced systematically smaller estimates than random sequences. Interestingly, when we used small wedges and long cycles, we obtained larger tuning width estimates for ordered than random sequences. We suggest that ordered and random mapping protocols show different susceptibility to other design choices such as stimulus type and duration of the mapping cycle and can produce significantly different pRF results.
Leah A Irish; Allison C Veronda; Amanda E van Lamsweerde; Michael P Mead; Stephen A Wonderlich
In: International Journal of Behavioral Medicine, pp. 3–5, 2020.
Background: Although self-help strategies to improve sleep are widely accessible, little is known about the ways in which individuals interact with these resources and the extent to which people are successful at improving their own sleep based on sleep health recommendations. The present study developed a lab-based model of self-help behavior by observing the development of sleep health improvement plans (SHIPs) and examining factors that may influence SHIP development. Method: Sixty healthy, young adults were identified as poor sleepers during one week of actigraphy baseline and recruited to develop and implement a SHIP. Participants viewed a list of sleep health recommendations through an eye tracker and provided information on their current sleep health habits. Each participant implemented their SHIP for 1 week during which sleep was assessed with actigraphy. Results: Current sleep health habits, but not patterns of visual attention, predicted SHIP goal selection. Sleep duration increased significantly during the week of SHIP implementation. Conclusions: Findings indicate that the SHIP protocol is an effective strategy for observing self-help behavior and examining factors that influence goal selection. The increase in sleep duration suggests that individuals may be successful at extending their own sleep, though causal mechanisms have not yet been established. This study presents a lab-based protocol for studying self-help sleep improvement behavior and takes an initial step toward gaining knowledge required to improve sleep health recommendations.
Ondřej Javora; Tereza Hannemann; Kristina Volná; Filip Děchtěrenko; Tereza Tetourová; Tereza Stárková; Cyril Brom
In: Journal of Computer Assisted Learning, pp. 1–14, 2020.
The present study investigates the affective-motivational, attention, and learning effects of an unexplored emotional design manipulation: contextual animation (animation of contextual elements) in multimedia learning games (MLGs) for children. Participants (N = 134; Mage = 9.25; Grades 3 and 4) learned either from an experimental version of the MLG with a high amount of contextual animation or from an identical MLG with no contextual animation (control). Children strongly preferred (χ² = 87.04, p < .001) and found the experimental version more attractive (p < .001
Anthony J Lambert; Tanvi Sharma; Nathan Ryckman
In: Vision, 4 , pp. 1–13, 2020.
Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than vision for perception. In this preliminary investigation, older and younger adults, with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire), completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible, in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.
Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko
In: Journal of Educational Multimedia and Hypermedia, 29 (3), pp. 265–284, 2020.
This study investigated the effects of the multimedia and modality design principles using a learning intervention about Australia with a sample of college students, employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia principle nor the modality principle held true in this study. However, participants in narration environments focused significantly more visual attention on the "Next" button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.
In: Revista Argentina de Clinica Psicologica, 29 (2), pp. 523–529, 2020.
Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model: pupil diameter, fixation time, re-reading time and retrospective time. A total of 100 college students were selected for an eye movement test in an online teaching environment. The test data were analyzed with SPSS software. The results show that the eye movement parameters are greatly affected by the key points in teaching and the contents that interest the students; these two influencing factors can arouse and attract the students' attention in the teaching process. The research results provide an important reference for the psychological study of online teaching in colleges.
In: International Journal of Frontiers in Sociology, 2 (7), pp. 1–12, 2020.
Online travel agencies (OTAs) depend on marketing cues to reduce consumers' uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided by the consumer has a significant impact on purchasing decisions. This study aims to explore the effect of LBT on consumer visual attention and booking intention, along with the moderating effect of online comment valence (OCV). Since eye movement is bound up with the transfer of visual attention, eye tracking is used to record consumers' visual attention. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design to conduct the experiments. The main findings were as follows: (1) LBT can markedly increase visual attention to the whole advertisement and improve booking intention; (2) OCV moderates the effect of LBT on both visual attention to the whole advertisement and booking intention. Only when OCV is medium or high does LBT markedly improve attention to the whole advertisement and increase consumers' booking intention. The results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect with low-level OCV.
Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu
In: British Journal of Educational Technology, pp. 1–13, 2020.
Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups.
Practitioner Notes. What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort rather than interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.
Liis Uiga; Catherine M Capio; Donghyun Ryu; William R Young; Mark R Wilson; Thomson W L Wong; Andy C Y Tse; Rich S W Masters
In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, 75 (2), pp. 282–292, 2020.
Objectives: The aim of this study was to examine the association between conscious monitoring and control of movements (i.e., movement-specific reinvestment) and visuomotor control during walking by older adults. Method: The Movement-Specific Reinvestment Scale (MSRS) was administered to 92 community-dwelling older adults, aged 65-81 years, who were required to walk along a 4.8-m walkway and step on the middle of a target as accurately as possible. Participants' movement kinematics and gaze behavior were measured during approach to the target and when stepping on it. Results: High scores on the MSRS were associated with prolonged stance and double support times during approach to the stepping target, and less accurate foot placement when stepping on the target. No associations between MSRS and gaze behavior were observed. Discussion: Older adults with a high propensity for movement-specific reinvestment seem to need more time to "plan" future stepping movements, yet show worse stepping accuracy than older adults with a low propensity for movement-specific reinvestment. Future research should examine whether older adults with a higher propensity for reinvestment are more likely to display movement errors that lead to falling.
Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui
In: Computers in Human Behavior, 107 , pp. 1–7, 2020.
Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.
Ye Xia; Mauro Manassi; Ken Nakayama; Karl Zipser; David Whitney
Visual crowding in driving
In: Journal of Vision, 20 (6), pp. 1–17, 2020.
Visual crowding, the deleterious influence of nearby objects on object recognition, is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.
Jorrig Vogels; David M Howcroft; Elli Tourtouri; Vera Demberg
How speakers adapt object descriptions to listeners under load
In: Language, Cognition and Neuroscience, 35 (1), pp. 78–92, 2020.
A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.
In: International Journal of Trend in Research and Development, 7 (3), pp. 146–148, 2020.
Taking the table lamp as the research object, this study uses eye movement analysis and a subjective questionnaire survey to explore college students' aesthetic preferences for table lamp shapes, through a combined analysis of the subjects' eye movement data and the subjective questionnaire data, so as to provide a design reference for enterprises and fellow designers. An SR Research EyeLink head-mounted eye tracker was used to record the eye movement characteristics of 20 subjects while they viewed pictures of different table lamp shapes. The results show that the modern minimalist style is the most popular, followed by the European style and the Chinese style.
Pedro G Vieira; Matthew R Krause; Christopher C Pack
In: PLoS Biology, 18 (10), pp. 1–14, 2020.
Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.
Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew
In: Psychonomic Bulletin & Review, pp. 1–9, 2020.
Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.
Louis Williams; Eugene McSorley; Rachel McCloy
In: i-Perception, 11 (2), pp. 1–25, 2020.
The aesthetic experience of the perceiver of art has been suggested to relate to the art-making process of the artist. The artist's gestures during the creation process have been stated to influence the perceiver's art-viewing experience. However, few studies explore the art-viewing experience in relation to the creative process of the artist. We introduced eye-tracking measures to further establish how actions congruent with the artist's influence the perceiver's gaze behaviour. Experiments 1 and 2 showed that simultaneous congruent and incongruent actions do not influence gaze behaviour. However, brushstroke paintings were found to be more pleasing than pointillism paintings. In Experiment 3, participants were trained to associate painting actions with hand primes to enhance visuomotor and visuovisual associations with the artist's actions. A greater amount of time was spent fixating brushstroke paintings when presented with a congruent prime compared with an incongruent prime, and fewer fixations were made to these styles of paintings when presented with an incongruent prime. The results suggest that explicit links that allow perceivers to resonate with the artist's actions lead to greater exploration of preferred artwork styles.
Victoria I Nicholls; Geraldine Jean-Charles; Junpeng Lao; Peter de Lissa; Roberto Caldara; Sébastien Miellet
In: Scientific Reports, 9 , pp. 4176, 2019.
In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we set out to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children look mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5–10 y/os look more at socially relevant stimuli and attend to moving vehicles further down the trajectory when the traffic density is high. Critically, 5–10 y/o children also make an increased number of crossing decisions compared to 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.
Michele Scaltritti; Aliaksei Miniukovich; Paola Venuti; Remo Job; Antonella De Angeli; Simone Sulpizio
In: Scientific Reports, 9 , pp. 12711, 2019.
Webpage reading is ubiquitous in daily life. As Web technologies allow for a large variety of layouts and visual styles, the many formatting options may lead to poor design choices, including low readability. This research capitalizes on the existing readability guidelines for webpage design to outline several visuo-typographic variables and explore their effect on eye movements during webpage reading. Participants included children and adults, and for both groups typical readers and readers with dyslexia were considered. Actual webpages, rather than artificial ones, served as stimuli. This made it possible to test multiple typographic variables in combination and in their typical ranges rather than in possibly unrealistic configurations. Several typographic variables displayed a significant effect on eye movements and reading performance. The effect was mostly homogeneous across the four groups, with a few exceptions. Besides supporting the notion that a few empirically-driven adjustments to the texts' visual appearance can facilitate reading across different populations, the results also highlight the challenge of making digital texts accessible to readers with dyslexia. Theoretically, the results highlight the importance of low-level visual factors, corroborating the emphasis of recent psychological models on visual attention and crowding in reading.
Katarzyna Stachowiak-Szymczak; Paweł Korpal
In: Across Languages and Cultures, 20 (2), pp. 235–251, 2019.
Simultaneous interpreting is a cognitively demanding task, based on performing several activities concurrently (Gile 1995; Seeber 2011). While multitasking itself is challenging, there are numerous tasks which make interpreting even more difficult, such as the rendering of numbers and proper names, or dealing with a speaker's strong accent (Gile 2009). Among these, number interpreting is cognitively taxing since numerical data cannot be derived from the context and it needs to be rendered in a word-for-word manner (Mazza 2001). In our study, we aimed to examine the cognitive load involved in number interpreting and to verify whether access to visual materials in the form of slides increases number interpreting accuracy in simultaneous interpreting performed by professional interpreters (N = 26) and interpreting trainees (N = 22). We used a remote EyeLink 1000+ eye-tracker to measure fixation count, mean fixation duration, and gaze time. The participants interpreted two short speeches from English into Polish, both containing 10 numerals. Slides were provided for one of the presentations. Our results show that novices are characterised by longer fixations and they provide a less accurate interpretation than professional interpreters. In addition, access to slides increases number interpreting accuracy. The results obtained might be a valuable contribution to studies on visual processing in simultaneous interpreting, number interpreting as a competence, as well as interpreter training.
In: PeerJ, 7 , pp. 1–15, 2019.
This article compares the differences in eye movements between orienteers of different skill levels during map information searches, so as to uncover the visual search patterns of orienteers during precise map reading and the cognitive characteristics of their visual search. We recruited 44 orienteers at different skill levels (experts, advanced beginners, and novices), and recorded their behavioral responses and eye movement data when reading maps of different complexities. We found that the complexity of the map (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, orienteers of higher competency tend to have a better quality of route planning (i.e., a shorter route planning time, a longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated obvious cognitive advantages in the ability to find key information. We also found that in the stage of route planning, expert orienteers and advanced beginners first pay attention to the checkpoint description table. The expert group extracted information faster, and their attention was more concentrated, whereas the novice group paid less attention to the checkpoint description table, and their gaze was scattered. We found that experts regarded the information in the checkpoint description table as the key to the problem and gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering.
Shlomit Yuval-Greenberg; Anat Keren; Rinat Hilo; Adar Paz; Navah Ratzon
In: American Journal of Occupational Therapy, 73 (3), pp. 1–8, 2019.
Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving.
Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD.
Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires.
Setting: University lab.
Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had no history of neurological disorders other than ADHD and had normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. Participants with ADHD had been previously diagnosed by a qualified professional.
Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration.
Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness.
Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving.
What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control.
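Gaze (fixation) entropy, one of the exploration measures compared in the study above, is commonly computed as Shannon entropy over spatially binned fixation locations: lower entropy indicates a more restricted, repetitive scan pattern. The following is a minimal illustrative sketch of that general technique, not the authors' analysis code; the grid size and coordinate format are assumptions for the example.

```python
import math
from collections import Counter

def gaze_entropy(fixations, grid_size=50):
    """Shannon entropy (bits) of fixation locations binned into a spatial grid.

    fixations: list of (x, y) gaze coordinates in pixels.
    grid_size: side length of each square bin, in pixels (assumed value).
    Higher entropy = more dispersed, exploratory gaze;
    lower entropy = restricted, repetitive scanning.
    """
    bins = Counter((int(x // grid_size), int(y // grid_size)) for x, y in fixations)
    n = sum(bins.values())
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A gaze pattern concentrated in one region yields lower entropy
# than one spread across the scene.
focused = [(10, 10), (12, 11), (11, 14), (13, 12)]   # all in one bin -> 0 bits
spread = [(10, 10), (400, 300), (700, 90), (150, 500)]  # four bins -> 2 bits
```

In practice, gaze transition entropy (entropy of the Markov transition matrix between areas of interest) is also widely used; this sketch shows only the stationary, location-based variant.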
Zepeng Wang; Ping Li; Luming Zhang; Ling Shao
In: IEEE Transactions on Multimedia, pp. 1–11, 2019.
Computational photo quality evaluation is a useful technique in many tasks of computer vision and graphics, e.g., photo retargeting, 3D rendering, and fashion recommendation. Conventional photo quality models are designed by characterizing pictures from all communities (e.g., “architecture” and “colorful”) indiscriminately, wherein community-specific features are not encoded explicitly. In this work, we develop a new community-aware photo quality evaluation framework. It uncovers the latent community-specific topics by a regularized latent topic model (LTM), and captures human visual quality perception by exploring multiple attributes. More specifically, given massive-scale online photos from multiple communities, a novel ranking algorithm is proposed to measure the visual/semantic attractiveness of regions inside each photo. Meanwhile, three attributes: photo quality scores, weak semantic tags, and inter-region correlations, are seamlessly and collaboratively incorporated during ranking. Subsequently, we construct a gaze shifting path (GSP) for each photo by sequentially linking the top-ranking regions from each photo, and an aggregation-based deep CNN calculates the deep representation for each GSP. Based on this, an LTM is proposed to model the GSP distribution from multiple communities in the latent space. To mitigate the overfitting problem caused by communities with very few photos, a regularizer is added into our LTM. Finally, given a test photo, we obtain its deep GSP representation and its quality score is determined by the posterior probability of the regularized LTM. Comprehensive comparative studies on four image sets have shown the competitiveness of our method. In addition, eye tracking experiments demonstrated that our ranking-based GSPs are highly consistent with real human gaze movements.
In: Translation, Cognition & Behavior, 2 (1), pp. 79–100, 2019.
This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation, and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high-quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality.
Hongyan Wang; Zhongling Pi; Weiping Hu
In: Journal of Computer Assisted Learning, 35 (1), pp. 42–50, 2019.
Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance.
In: Journal of Contemporary Marketing Science, 2 (1), pp. 23–33, 2019.
Purpose: The purpose of this paper is to investigate the optimal visual search pattern for logo elements in online advertising formats. With advertisement size, background color, and content complexity held constant, a single-factor experimental design was used with the eight matching arrangements of logo and commodity picture elements as the independent variable. The results show that when the picture element is fixed in the center of the advertisement, the logo element should be placed in a middle position parallel to the picture element (left middle and upper left); placing the logo element at the bottom of the picture element, especially at the bottom left, should be avoided. The designer can determine the best online advertising format based on the visual search effect of the logo element and the actual marketing purpose. Design/methodology/approach: In this experiment, a single-factor, repeated-measures design was used. According to the criteria for different types of commodities and the eight matching methods, 20 advertisements were randomly selected from 50 original advertisements as experimental stimulation materials, as shown in Section 2.3. Applying the eight matching methods to each yielded a total of 20×8=160 experimental stimuli. At the same time, in order to minimize the memory effect of the repeated appearance of the same product, all pictures were presented in random order. In addition, in order to prevent participants from guessing the purpose of the experiment, 80 additional filler online advertisements were added.
Therefore, each participant was required to view 160+80=240 pieces of stimulation materials. Findings: On one hand, when the picture elements of an advertisement are fixed, the advertiser should first try to place the logo element in the right middle position parallel to the picture element, because the commodity logo in this arrangement attracts consumers' attention for the longest average time and most frequently. Danaher and Mullarkey (2003) clearly pointed out that as the length of time consumers fixate an online advertisement increases, their memory of the advertisement improves accordingly. Second, the logo element can be placed to the left or upper left of the picture element. In contrast, advertisers should avoid placing the logo element at the bottom of the picture element (lower left and lower right), especially at the lower left, because in this area the logo attracts less attention, resulting in the shortest duration of consumer attention, less than a quarter of consumers' total attention time. This conclusion is consistent with related research results.
Tammy Sue Wynne Liu; Yeu Ting Liu; Chun-Yin Doris Chen
In: Interactive Learning Environments, 27 (2), pp. 181–199, 2019.
This study employed eye-tracking technology to probe the online reading behavior of 52 advanced L2 English learners. These participants read an e-book containing six types of multimedia supports for either vocabulary acquisition or comprehension. The six supports consisted of three micro-level supports that provided information about specific words (glosses, vocabulary focus, and footnotes), and three macro-level supports that provided global or background information (illustrations, infographics, and photos). The participants read the e-book under two presentation modes: (1) a simultaneous mode, in which digital input and supports were presented at the same time; and (2) a sequential mode, in which the digital content and supports were presented incrementally. Analyses showed that when reading for vocabulary acquisition, vocabulary focus and glosses were significantly fixated on, and when reading for comprehension, illustrations were more intensely fixated on. Additionally, when the digital content was incrementally presented, vocabulary focus received significantly higher total fixation duration. This suggests that reading under the sequential mode has the potential to guide L2 learners' focal attention toward micro-level supports. In contrast, under the simultaneous presentation mode, L2 learners seemed to divide their focal attention among both micro-level and macro-level supports. Pedagogical implications are discussed based on the findings of this study.
Sinè McDougall; Judy Edworthy; Deili Sinimeri; Jamie Goodliffe; Daniel Bradley; James Foster
In: Journal of Experimental Psychology: Applied, 26 (1), pp. 1–19, 2019.
Given the ease with which the diverse array of environmental sounds can be understood, the difficulties encountered in using auditory alarm signals on medical devices are surprising. In two experiments, with nonclinical participants, alarm sets which relied on similarities to environmental sounds (concrete alarms, such as a heartbeat sound to indicate "check cardiovascular function") were compared to alarms using abstract tones to represent functions on medical devices. The extent to which alarms were acoustically diverse was also examined: alarm sets were either acoustically different or acoustically similar within each set. In Experiment 1, concrete alarm sets, which were also acoustically different, were learned more quickly than abstract alarms which were acoustically similar. Importantly, the abstract similar alarms were devised using guidelines from the current global medical device standard (International Electrotechnical Commission 60601-1-8, 2012). Experiment 2 replicated these findings. In addition, eye tracking data showed that participants were most likely to fixate first on the correct medical devices in an operating theater scene when presented with concrete acoustically different alarms using real world sounds. A new set of alarms which are related to environmental sounds and differ acoustically have therefore been proposed as a replacement for the current medical device standard.
Zhongling Pi; Jiumin Yang; Weiping Hu; Jianzhong Hong
In: Interactive Learning Environments, pp. 1–9, 2019.
An emerging body of research has focused on students' creativity in group contexts, with the assumption that students could be inspired by peers' ideas. Although students' openness and attention to peers' ideas are claimed to play important roles in their creativity in group settings, there is little empirical research that tests this assumption. This study examined the moderating effect of attention to peers' ideas in the relation between openness and creativity in electronic brainstorming. Participants were 91 undergraduate students who took about 10 min to complete a creative idea generation task during electronic brainstorming. Regression analyses found that students who were characterized by high openness were more creative, but only when they showed more attention to peers' ideas. This suggests that electronic brainstorming can be useful for enhancing the creativity of some students.
Gordy Pleyers; Nicolas Vermeulen
How does interactivity of online media hamper ad effectiveness? Journal Article
In: International Journal of Market Research, pp. 1–18, 2019.
The development of the Internet has increasingly led to advertisements presented on rich and interactive websites offering users a high level of control over the contents they are exposed to—sometimes to the extent of allowing them to skip “unwanted” ads preceding the desired content. While previous studies have shown that such interactivity and control can positively impact users' subjective experience and attitude toward the advertisements, the present study examined their impact on users' attention to the ad (using eye-tracking) and actual ad effectiveness (ad memory). It relied on an experimental design allowing for comparing the effectiveness of similar ads that were presented by realistic interfaces simulating common types of online media (in addition to “traditional television” as a form of passive baseline comparison condition). The interfaces consisted of a news website (including many stimuli surrounding the ads and an “ad countdown timer,” that might detract users' attention from the ads) and YouTube (also including the “skip ad” option). Ad memory correlated positively (negatively) with gaze direction to the ad area (outside the ad area) and was particularly low when users had the opportunity to stop the ad after a few seconds. These results emphasize the scale of ad effectiveness decrease that may occur when the media interfaces offer users easy ways of avoiding video ads by gazing toward surrounding stimuli and by skipping the ads. The implications of these findings for advertisers are addressed, and it is suggested that future studies on the topic should include other measures of ad effectiveness and other distracting factors that might further detract users from online ad video content in real-life contexts.
Victoria A Roach; Graham M Fraser; James H Kryklywy; Derek G V Mitchell; Timothy D Wilson
In: Anatomical Sciences Education, 12 (1), pp. 32–42, 2019.
Research suggests that spatial ability may predict success in complex disciplines including anatomy, where mastery requires a firm understanding of the intricate relationships occurring along the course of veins, arteries, and nerves, as they traverse through and around bones, muscles, and organs. Debate exists on the malleability of spatial ability, and some suggest that spatial ability can be enhanced through training. It is hypothesized that spatial ability can be trained in low-performing individuals through visual guidance. To address this, training was completed through a visual guidance protocol. This protocol was based on eye-movement patterns of high-performing individuals, collected via eye-tracking as they completed an Electronic Mental Rotations Test (EMRT). The effects of guidance were evaluated using 33 individuals with low mental rotation ability, in a counterbalanced crossover design. Individuals were placed in one of two treatment groups (late or early guidance) and completed both a guided, and an unguided EMRT. A third group (no guidance/control) completed two unguided EMRTs. All groups demonstrated an increase in EMRT scores on their second test (P < 0.001); however, an interaction was observed between treatment and test iteration (P = 0.024). The effect of guidance on scores was contingent on when the guidance was applied. When guidance was applied early, scores were significantly greater than expected (P = 0.028). These findings suggest that by guiding individuals with low mental rotation ability “where” to look early in training, better search approaches may be adopted, yielding improvements in spatial reasoning scores. It is proposed that visual guidance may be applied in spatial fields, such as STEMM (science, technology, engineering, mathematics and medicine), surgery, and anatomy to improve students' interpretation of visual content.
Čeněk Šašinka; Zdeněk Stachoň; Petr Kubíček; Sascha Tamm; Aleš Matas; Markéta Kukaňová
In: Cartographic Journal, 56 (2), pp. 175–191, 2019.
The form of visual representation affects both the way in which the visual representation is processed and the effectiveness of this processing. Different forms of visual representation may require the employment of different cognitive strategies in order to solve a particular task; at the same time, the different representations vary as to the extent to which they correspond with an individual's preferred cognitive style. The present study employed a Navon-type task to learn about the occurrence of global/local bias. The research was based on close interdisciplinary cooperation between the domains of both psychology and cartography. Several different types of tasks were made involving avalanche hazard maps with intrinsic/extrinsic visual representations, each of them employing different types of graphic variables representing the level of avalanche hazard and avalanche hazard uncertainty. The research sample consisted of two groups of participants, each of which was provided with a different form of visual representation of identical geographical data, such that the representations could be regarded as ‘informationally equivalent'. The first phase of the research consisted of two correlation studies, the first involving subjects with a high degree of map literacy (students of cartography) (intrinsic method: N = 35; extrinsic method: N = 37). The second study was performed after the results of the first study were analyzed. 
The second group of participants consisted of subjects with a low expected degree of map literacy (students of psychology; intrinsic method: N = 35; extrinsic method: N = 27). The first study revealed a statistically significant moderate correlation between the students' response times in extrinsic visualization tasks and their response times in a global subtest (r = 0.384, p < 0.05); likewise, a statistically significant moderate correlation was found between the students' response times in intrinsic visualization tasks and their response times in the local subtest (r = 0.387, p < 0.05). At the same time, no correlation was found between the students' performance in the local subtest and their performance in extrinsic visualization tasks, or between their scores in the global subtest and their performance in intrinsic visualization tasks. The second correlation study did not confirm the results of the first correlation study (intrinsic visualization/‘small figures test': r = 0.221; extrinsic visualization/‘large figures test': r = 0.135). The first phase of the research, where the data was subjected to statistical analysis, was followed by a comparative eye-tracking study, whose aim was to provide more detailed insight into the cognitive strategies employed when solving map-related tasks. More specifically, the eye-tracking study was expected to be able to detect possible differences between the cognitive patterns employed when solving extrinsic as opposed to intrinsic visualization tasks. The results of an exploratory eye-tracking data analysis support the hypothesis of different strategies of visual information processing being used in reaction to different types of visualization.
Wenxiang Chen; Xiangling Zhuang; Zixin Cui; Guojie Ma
In: Transportation Research Part F: Traffic Psychology and Behaviour, 64 , pp. 552–564, 2019.
Drivers' recognition of pedestrian road crossing intentions is an essential process during driver-pedestrian interaction. However, compared with the rich observational findings on interaction behavior, little is known about drivers' performance in recognizing pedestrian intentions, or about the underlying cognitive processes. To fill in this gap, this study evaluated drivers' performance in making judgments of pedestrians' road crossing intentions in recorded natural driving scenes. Experienced and novice drivers identified pedestrians as “will cross” or “will not cross” at some time-to-arrival while their eye movements were recorded. The results showed that experienced drivers were more conservative in discriminating whether a pedestrian would cross or not (preferred a “pedestrian will cross” judgment) and engaged in a higher level of information processing of pedestrian intention. Regardless of driving experience, drivers had a higher detection rate, earlier detection, a higher level of information processing, and a quicker response for pedestrians who intended to cross than for those who did not. A quicker response was also achieved when the time-to-arrival was smaller. Analysis of eye movements showed an attentional bias to the upper body of pedestrians when recognizing intention. These findings offer an initial understanding of the intention recognition process during driver-pedestrian interaction and inform directions for autonomous driving research on interacting with pedestrians.
Rajib Chowdhury; A F M Saifuddin Saif
In: International Journal of Software Engineering and Computer Systems, 53 (1), pp. 52–56, 2019.
The main purpose of this research is to review prior work on human brain sensor activity and to argue for an efficient method of improving it. Human brain activity is mainly measured through signals acquired from sensor electrodes positioned over several parts of the cerebral cortex. Although previous research has investigated human brain activity in various aspects, the improvement of human brain sensor activity remains an unsolved problem, and there is a crucial need to enhance the brain's sensor activity by feeding an externally improved brain signal back to it. This research presents a comprehensive critical analysis of prior work on human brain activity in support of an efficient method integrated with a proposed neuroheadset device. It offers a comprehensive review of previous methods, existing frameworks, and existing results, with discussion, in order to establish an efficient method for acquiring the human brain signal, improving the acquired signal, and developing the brain's sensor activity using that improved signal. The critical review is expected to help constitute an efficient method for improving maneuverability, visualization, subliminal activities, and other aspects of human brain activity.
Freya Crosby; Frouke Hermens
In: Quarterly Journal of Experimental Psychology, 72 (3), pp. 599–615, 2019.
Studies of fear of crime often focus on demographic and social factors, but these can be difficult to change. Studies of visual aspects have suggested that features reflecting incivilities, such as litter, graffiti, and vandalism increase fear of crime, but methods often rely on participants actively mentioning such aspects, and more subtle, less conscious aspects may be overlooked. To address these concerns, this study examined people's eye movements while they judged scenes for safety. In total, 40 current and former university students were asked to rate images of day-time and night-time scenes of Lincoln, UK (where they studied) and Egham, UK (unfamiliar location) for safety, maintenance, and familiarity while their eye movements were recorded. Another 25 observers not from Lincoln or Egham rated the same images in an Internet survey. Ratings showed a strong association between safety and maintenance and lower safety ratings for night-time scenes for both groups, in agreement with earlier findings. Eye movements of the Lincoln participants showed increased dwell times on buildings, houses, and vehicles during safety judgements and increased dwell times on streets, pavements, and markers of incivilities for maintenance. Results confirm that maintenance plays an important role in perceptions of safety, but eye movements suggest that observers also look for indicators of current or recent presence of people.
Gemma Fitzsimmons; Mark J Weal; Denis Drieghe
The impact of hyperlinks on reading text Journal Article
In: PLoS ONE, 14 (2), pp. e0210900, 2019.
There has been debate about whether blue hyperlinks on the Web cause disruption to reading. A series of eye-tracking experiments was conducted to explore whether coloured words in black text had any impact on reading behaviour outside and inside a Web environment. Experiments 1 and 2 explored the saliency of coloured words embedded in single sentences and their impact on reading behaviour. In Experiment 3, the effects of coloured words/hyperlinks in passages of text in a Web-like environment were explored. Experiments 1 and 2 showed that multiple coloured words in text had no negative impact on reading behaviour. However, if the sentence featured only a single coloured word, a reduction in skipping rates was observed. This suggests that the visual saliency associated with a single coloured word may signal to the reader that the word is important, whereas this signalling is reduced when multiple words are coloured. In Experiment 3, when reading passages of text containing hyperlinks in a Web environment, participants showed a tendency to re-read sentences that contained hyperlinked, uncommon words compared to hyperlinked, common words. Hyperlinks highlight important information and suggest additional content, which for more difficult concepts invites rereading of the preceding text.
Victoria Foglia; Annie Roy-Charland; Dominique Leroux; Suzanne Lemieux; Nicole Yantzi; Tina Skjonsby-McKinnon; Sylvain Fiset; Dominic Guitard
In: Canadian Journal of Experimental Psychology, pp. 1–14, 2019.
This study examined eye-movement patterns of young adults while they were viewing texting and driving prevention advertisements, to determine which format attracts the most attention. As young adults are the most at risk for this public health issue, understanding which format is most successful at maintaining young adults' attention is especially important. Participants viewed nondriving, general distracted driving, and texting and driving advertisements. Each of these advertisement types was edited to contain text-only, image-only, and text-and-image content. Participants were told that they had unlimited time to view each advertisement, while their eye movements were recorded throughout. Participants spent more time viewing the texting and driving advertisements than other types when they comprised text only. When examining differences in attention to the text and image portions of the advertisements, participants spent more time viewing the images than the text for the nondriving and general distracted driving advertisements. However, for texting and driving-specific advertisements, the text-only format resulted in the most attention toward the advertisements. These results indicate that in attracting young adults' attention to texting and driving public health advertisements, the most successful format would be text-based.
Susan M Gass; Paula Winke; Daniel R Isbell; Jieun Ahn
In: Language Learning and Technology, 23 (2), pp. 84–104, 2019.
Captions provide a useful aid to language learners for comprehending videos and learning new vocabulary, aligning with theories of multimedia learning, which predict that a learner's working memory (WM) influences the usefulness of captions. In this study, we present two eye-tracking experiments investigating the role of WM in captioned video viewing behavior and comprehension. In Experiment 1, Spanish-as-a-foreign-language learners differed in caption use according to their level of comprehension and, to a lesser extent, their WM capacities. WM did not impact comprehension. In Experiment 2, English-as-a-second-language learners differed in comprehension according to their WM capacities. Those with high comprehension and high WM used captions less on a second viewing. These findings highlight the effects of potential individual differences and have implications for the integration of multimedia with captions in instructed language learning. We discuss how captions may help neutralize some of working memory's limiting effects on learning.
Hannah Harvey; Stephen J Anderson; Robin Walker
In: Optometry and Vision Science, 96 (8), pp. 609–616, 2019.
SIGNIFICANCE: Scrolling text can be an effective reading aid for those with central vision loss. Our results suggest that increased interword spacing with scrolling text may further improve the reading experience of this population. This conclusion may be of particular interest to low-vision aid developers and visual rehabilitation practitioners. PURPOSE: The dynamic, horizontally scrolling text format has been shown to improve reading performance in individuals with central visual loss. Here, we sought to determine whether reading performance with scrolling text can be further improved by modulating interword spacing to reduce the effects of visual crowding, a factor known to impact negatively on reading with peripheral vision. METHODS: The effects of interword spacing on reading performance (accuracy, memory recall, and speed) were assessed for eccentrically viewed single sentences of scrolling text. Separate experiments were used to determine whether performance measures were affected by any confound between interword spacing and text presentation rate in words per minute. Normally sighted participants were included, with a central vision loss implemented using a gaze-contingent scotoma of 8° diameter. In both experiments, participants read sentences that were presented with an interword spacing of one, two, or three characters. RESULTS: Reading accuracy and memory recall were significantly enhanced with triple-character interword spacing (both measures, P ≤.01). These basic findings were independent of the text presentation rate (in words per minute). CONCLUSIONS: We attribute the improvements in reading performance with increased interword spacing to a reduction in the deleterious effects of visual crowding. We conclude that increased interword spacing may enhance reading experience and ability when using horizontally scrolling text with a central vision loss.
Sogand Hasanzadeh; Bac Dao; Behzad Esmaeili; Michael D Dodd
In: Journal of Construction Engineering and Management, 145 (9), pp. 1–14, 2019.
Workers' attentional failures or inattention toward detecting a hazard can lead to inappropriate decisions and unsafe behaviors. Previous research has shown that individual characteristics such as past injury exposure contribute greatly to skill-based (e.g., attention failure) and perception-based (e.g., failure to identify and misperception) errors and subsequent accident involvement. However, little research has empirically examined how a worker's personality affects his or her attention and hazard identification. This study addresses this knowledge gap by exploring the impact of personality dimensions on the selective attention of workers exposed to fall hazards. To this end, construction workers were recruited to engage in a laboratory eye-tracking experiment that consisted of 115 potential and active fall scenarios in 35 construction images captured from actual projects within the United States. Construction workers' personalities were assessed through self-completion of the Big Five personality questionnaire, and their visual attention was monitored continuously using a wearable eye-tracking apparatus. The results of the study show that workers' personality dimensions - specifically, extraversion, conscientiousness, and openness to experience - significantly relate to and impact attentional allocation and the search strategies of workers exposed to fall hazards. A more detailed investigation of this connection showed that individuals who are introverted, more conscientious, or more open to experience are less prone to injury and return their attention more frequently to hazardous areas. This study is the first attempt to illustrate how examining relationships among personality, attention, and hazard identification can reveal opportunities for the early detection of at-risk workers who are more likely to be involved in accidents. A better understanding of these connections provides valuable insight into both practice and theory regarding the transformation of current training and educational practices by providing appropriate intervention strategies for personalized safety guidelines and effective training materials to transform personality-driven at-risk workers into safer workers.
Marti Hearst; Emily Pedersen; Lekha Priya Patil; Elsie Lee; Paul Laskowski; Steven L Franconeri
An evaluation of semantically grouped word cloud designs Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2019.
Word clouds continue to be a popular tool for summarizing textual information, despite their well-documented deficiencies for analytic tasks. Much of their popularity rests on their playful visual appeal. In this paper, we present the results of a series of controlled experiments that show that layouts in which words are arranged into semantically and visually distinct zones are more effective for understanding the underlying topics than standard word cloud layouts. White space separators and/or spatially grouped color coding led to significantly stronger understanding of the underlying topics compared to a standard Wordle layout, while simultaneously scoring higher on measures of aesthetic appeal. This work is an advance on prior research on semantic layouts for word clouds because that prior work has either not ensured that the different semantic groupings are visually or semantically distinct, or has not performed usability studies. An additional contribution of this work is the development of a dataset for a semantic category identification task that can be used for replication of these results or future evaluations of word cloud designs.
Olivier J Hénaff; Robbe L T Goris; Eero P Simoncelli
Perceptual straightening of natural videos Journal Article
In: Nature Neuroscience, 22 , pp. 984–991, 2019.
Many behaviors rely on predictions derived from recent visual input, but the temporal evolution of those inputs is generally complex and difficult to extrapolate. We propose that the visual system transforms these inputs to follow straighter temporal trajectories. To test this ‘temporal straightening' hypothesis, we develop a methodology for estimating the curvature of an internal trajectory from human perceptual judgments. We use this to test three distinct predictions: natural sequences that are highly curved in the space of pixel intensities should be substantially straighter perceptually; in contrast, artificial sequences that are straight in the intensity domain should be more curved perceptually; finally, naturalistic sequences that are straight in the intensity domain should be relatively less curved. Perceptual data validate all three predictions, as do population models of the early visual system, providing evidence that the visual system specifically straightens natural videos, offering a solution for tasks that rely on prediction.
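The curvature estimates central to this study can be illustrated with a simple discrete measure — a sketch under stated assumptions, not the authors' actual estimation method: the mean angle between successive difference vectors of a trajectory of points (in pixel space or a perceptual embedding), where a straight trajectory has zero curvature.

```python
import numpy as np

def mean_curvature(traj):
    """Average discrete curvature of a trajectory: the mean angle
    (radians) between successive difference vectors of the points."""
    diffs = np.diff(np.asarray(traj, dtype=float), axis=0)
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    cos = np.clip((diffs[:-1] * diffs[1:]).sum(axis=1), -1.0, 1.0)
    return float(np.arccos(cos).mean())
```

Under this measure, a perfectly straight sequence such as `[(0, 0), (1, 0), (2, 0)]` yields 0, while a right-angle turn yields π/2.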
Austin R Hicklin; Bradford T Ulery; Thomas A Busey; Maria Antonia Roberts; Jo Ann Buscaglia
In: Cognitive Research: Principles and Implications, 4 (12), pp. 1–20, 2019.
Background: The comparison of fingerprints by expert latent print examiners generally involves repeating a process in which the examiner selects a small area of distinctive features in one print (a target group), and searches for it in the other print. In order to isolate this key element of fingerprint comparison, we use eye-tracking data to describe the behavior of latent fingerprint examiners on a narrowly defined “find the target” task. Participants were shown a fingerprint image with a target group indicated and asked to find the corresponding area of ridge detail in a second impression of the same finger and state when they found the target location. Target groups were presented on latent and plain exemplar fingerprint images, and as small areas cropped from the plain exemplars, to assess how image quality and the lack of surrounding visual context affected task performance and eye behavior. One hundred and seventeen participants completed a total of 675 trials. Results: The presence or absence of context notably affected the areas viewed and time spent in comparison; differences between latent and plain exemplar tasks were much less significant. In virtually all trials, examiners repeatedly looked back and forth between the images, suggesting constraints on the capacity of visual working memory. On most trials where context was provided, examiners looked immediately at the corresponding location: with context, median time to find the corresponding location was less than 0.3 s (second fixation); however, without context, median time was 1.9 s (five fixations). A few trials resulted in errors in which the examiner did not find the correct target location. Basic gaze measures of overt behaviors, such as speed, areas visited, and back-and-forth behavior, were used in conjunction with the known target area to infer the underlying cognitive state of the examiner. Conclusions: Visual context has a significant effect on the eye behavior of latent print examiners. Localization errors suggest how errors may occur in real comparisons: examiners sometimes compare an incorrect but similar target group and do not continue to search for a better candidate target group. The analytic methods and predictive models developed here can be used to describe the more complex behavior involved in actual fingerprint comparisons.
Aurélie Calabrèse; Carlos Aguilar; Géraldine Faure; Frédéric Matonti; Louis Hoffart; Eric Castet
In: Optometry and Vision Science, 95 (9), pp. 738–746, 2018.
SIGNIFICANCE: The overall goal of this work is to validate a low vision aid system that uses gaze as a pointing tool and provides smart magnification. We conclude that smart visual enhancement techniques as well as gaze contingency should improve the efficiency of assistive technology for the visually impaired. PURPOSE: A low vision aid, using gaze-contingent visual enhancement and primarily intended to help reading with central vision loss, was recently designed and tested with simulated scotoma. Here, we present a validation of this system for face recognition in age-related macular degeneration patients. METHODS: Twelve individuals with binocular central vision loss were recruited and tested on a face identification-matching task. Gaze position was measured in real time using an eye tracker. In the visual enhancement condition, at any time during the screen exploration, the fixated face was segregated from the background and treated as a region of interest that could be magnified into a region of augmented vision by the participant, if desired. In the natural exploration condition, participants also performed the matching task but without the visual aid. Response time and accuracy were analyzed with mixed-effects models to (1) compare the performance with and without visual aid and (2) estimate the usability of the system. RESULTS: On average, the percentage of correct responses in the natural exploration condition was 41%. This value was significantly increased to 63% with visual enhancement (95% confidence interval, 45 to 78%). For the large majority of our participants (83%), this improvement was accompanied by a moderate increase in response time, suggesting a real functional benefit for these individuals. CONCLUSIONS: Without visual enhancement, participants with age-related macular degeneration performed poorly, confirming their struggle with face recognition and the need for efficient visual aids. Our system significantly improved face identification accuracy by 55%, proving to be helpful under laboratory conditions.
Tao Deng; Hongmei Yan; Yong Jie Li
In: IEEE Transactions on Intelligent Transportation Systems, 19 (9), pp. 3059–3067, 2018.
Saliency detection, an important step in many computer vision applications, can, for example, predict where drivers look in a vehicular traffic environment. While many bottom-up and top-down saliency detection models have been proposed for fixation prediction in outdoor scenes, no specific attempt has been made for traffic images. Here, we propose a learning saliency detection model based on a random forest (RF) to predict drivers' fixation positions in a driving environment. First, we extract low-level (color, intensity, orientation, etc.) and high-level (e.g., the vanishing point and center bias) features and then predict the fixation points via RF-based learning. Finally, we evaluate the performance of our saliency prediction model qualitatively and quantitatively. We use quantitative evaluation metrics that include the revised receiver operating characteristic (ROC), the area under the ROC curve value, and the normalized scan-path saliency score. The experimental results on real traffic images indicate that our model can more accurately predict a driver's fixation area while driving than the state-of-the-art bottom-up saliency models.
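The normalized scan-path saliency (NSS) score used in this evaluation has a standard definition that is easy to sketch (variable names are illustrative, not taken from the paper): z-score the saliency map, then average its values at the fixated pixel locations. Values well above 0 indicate that fixations land on high-saliency regions.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scan-path Saliency: z-score the saliency map, then
    average its values at the recorded fixation locations (row, col)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())
```

For example, a fixation on the most salient pixel of a map yields a positive score, while fixations distributed evenly over high and low saliency average toward 0.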
Ashleigh J Filtness; Vanessa Beanland
Sleep loss and change detection in driving scenes Journal Article
In: Transportation Research Part F: Traffic Psychology and Behaviour, 57 , pp. 10–22, 2018.
Driver sleepiness is a significant road safety problem. Sleep-related crashes occur on both urban and rural roads, yet to date driver-sleepiness research has focused on understanding impairment in rural and motorway driving. The ability to detect changes is an attention and awareness skill vital for everyday safe driving. Previous research has demonstrated that person states, such as age or motivation, influence susceptibility to change blindness (i.e., failure or delay in detecting changes). The current work considers whether sleepiness increases the likelihood of change blindness within urban and rural driving contexts. Twenty fully-licenced drivers completed a change detection ‘flicker' task twice in a counterbalanced design: once following a normal night of sleep (7–8 h) and once following sleep restriction (5 h). Change detection accuracy and response time were recorded while eye movements were continuously tracked. Accuracy was not significantly affected by sleep loss; however, following sleep loss there was some evidence of slowed change detection responses to urban images, but faster responses for rural images. Visual scanning across the images remained consistent between sleep conditions, resulting in no difference in the probability of fixating on the change target. Overall, the results suggest that sleep loss has minimal impact on change detection accuracy and visual scanning for changes in driving scenes. However, a subtle difference in response time to change detection between urban and rural images indicates that change blindness may have implications for sleep-related crashes in more visually complex urban environments. Further research is needed to confirm this finding.
Lisena Hasanaj; Sujata P Thawani; Nikki Webb; Julia D Drattell; Liliana Serrano; Rachel C Nolan; Jenelle Raynowska; Todd E Hudson; John-Ross Rizzo; Weiwei Dai; Bryan McComb; Judith D Goldberg; Janet C Rucker; Steven L Galetta; Laura J Balcer
In: Journal of Neuro-Ophthalmology, 38 (1), pp. 24–29, 2018.
Objective: We determined the relation of rapid number naming time scores on the King-Devick (K-D) test to video-oculographic eye movement performance during pre-season baseline assessments in a collegiate ice hockey team cohort. Background: The K-D test is a reliable visual performance measure that is a sensitive sideline indicator of concussion when time scores worsen (lengthen) from pre-season baseline. Methods: Athletes from a collegiate ice hockey team received pre-season baseline testing as part of an ongoing study of rapid sideline/rinkside performance measures for concussion. These included the K-D test (spiral-bound cards and tablet computer versions). Participants also performed a laboratory-based version of the K-D test with simultaneous infrared-based video-oculographic recordings using an EyeLink 1000+. This allowed measurement of temporal and spatial characteristics of eye movements, including saccade velocity, duration and inter-saccadic intervals. Results: Among 13 male athletes, aged 18 to 23 years (mean 20.5+/-1.6 years), prolongation of the inter-saccadic interval (ISI, a combined measure of saccade latency and fixation duration) was the eye movement measure most associated with slower baseline K-D scores (mean 38.2+/-6.2 seconds
Katja I Häuser; Vera Demberg; Jutta Kray
In: Psychology and Aging, 33 (8), pp. 1168–1180, 2018.
Even though older adults are known to have difficulty at language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language under dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information-theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging.
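Surprisal, the information-theoretic quantity manipulated in this study, is simply the negative log probability of a word given its context; a minimal sketch (the example probabilities are illustrative, not from the paper):

```python
import math

def surprisal(prob, base=2):
    """Surprisal of a word whose conditional probability given its
    context is `prob`: -log(prob). Base 2 gives the value in bits."""
    return -math.log(prob, base)

# A highly expected word (p = 0.5) conveys little new information,
# while an implausible word (p = 0.001) conveys far more.
```

A high-surprisal target word in a sentence is thus one the context makes very improbable, which is what produces the integration difficulty described above.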
Claire Louise Heard; Tim Rakow; Tom Foulsham
In: Medical Decision Making, 38 (6), pp. 646–657, 2018.
Background. Past research finds that treatment evaluations are more negative when risks are presented after benefits. This study investigates this order effect: manipulating tabular orientation and order of risk–benefit information, and examining information search order and gaze duration via eye-tracking. Design. 108 (Study 1) and 44 (Study 2) participants viewed information about treatment risks and benefits, in either a horizontal (left-right) or vertical (above-below) orientation, with the benefits or risks presented first (left side or at top). For 4 scenarios, participants answered 6 treatment evaluation questions (1–7 scales) that were combined into overall evaluation scores. In addition, Study 2 collected eye-tracking data during the benefit–risk presentation. Results. Participants tended to read one set of information (i.e., all risks or all benefits) before transitioning to the other. Analysis of order of fixations showed this tendency was stronger in the vertical (standardized mean rank difference further from 0
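The block-wise reading pattern described here is typically quantified by counting switches between areas of interest (AOIs) in the ordered fixation sequence — a minimal sketch with illustrative labels, not the study's actual analysis code:

```python
def aoi_transitions(fixation_aois):
    """Count transitions between areas of interest (e.g. 'risk' vs.
    'benefit') in an ordered sequence of fixation labels. Fewer
    transitions indicate reading one block fully before the other."""
    return sum(a != b for a, b in zip(fixation_aois, fixation_aois[1:]))
```

A reader who scans all benefits and then all risks produces a single transition, whereas frequent back-and-forth comparison produces many.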
Lukáš Hejtmánek; Ivana Oravcová; Jiří Motýl; Jiří Horáček; Iveta Fajnerová
In: International Journal of Human-Computer Studies, 116 , pp. 15–24, 2018.
There is a vibrant debate about the consequences of mobile devices for our cognitive capabilities. Use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual and real-world experiments. Our goal was to investigate how the attention people pay to a GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment and measured participants' eye movements during the experiment. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spend with the GPS-like map, the less accurate spatial knowledge they manifest and the longer paths they travel without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants' subjective evaluation of their own navigation skills, with less confident navigators using GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person's spatial learning, its general use is modulated by one's perception of one's own navigation abilities.
Jiaxin Wu; Sheng hua Zhong; Zheng Ma; Stephen J Heinen; Jianmin Jiang
Foveated convolutional neural networks for video summarization Journal Article
In: Multimedia Tools and Applications, 77 (22), pp. 29245–29267, 2018.
With the proliferation of video data, video summarization is an ideal tool for users to browse video content rapidly. In this paper, we propose novel foveated convolutional neural networks for dynamic video summarization. We are the first to integrate gaze information into a deep learning network for video summarization. Foveated images are constructed based on subjects' eye movements to represent the spatial information of the input video. Multi-frame motion vectors are stacked across several adjacent frames to convey the motion clues. To evaluate the proposed method, experiments are conducted on two video summarization benchmark datasets. The experimental results validate the effectiveness of the gaze information for video summarization despite the fact that the eye movements are collected from subjects different from those who generated the summaries. Empirical validations also demonstrate that our proposed foveated convolutional neural networks for video summarization can achieve state-of-the-art performance on these benchmark datasets.
Jiahui Wang; Pavlo Antonenko; Mehmet Celepkolu; Yerika Jimenez; Ethan Fieldman; Ashley Fieldman
In: International Journal of Human-Computer Interaction, pp. 1–12, 2018.
This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks.