EyeLink Usability / Applied Publications
All EyeLink usability and applied research publications up to 2022 (with some early 2023 publications) are listed below by year. You can search the publications using keywords such as Driving, Sport, Workload, etc. You can also search for individual author names. If we have missed any EyeLink usability or applied article, please email us!
2020
Byunghoon “Tony” Ahn; Jason M. Harley Facial expressions when learning with a Queer History App: Application of the control value theory of achievement emotions Journal Article In: British Journal of Educational Technology, vol. 51, no. 5, pp. 1563–1576, 2020. Learning analytics (LA) incorporates analyzing cognitive, social and emotional processes in learning scenarios to make informed decisions regarding instructional design and delivery. Research has highlighted important roles that emotions play in learning. We have extended this field of research by exploring the role of emotions in a relatively uncommon learning scenario: learning about queer history with a multimedia mobile app. Specifically, we used automatic facial recognition software (FaceReader 7) to measure learners' discrete emotions and a counter-balanced multiple-choice quiz to assess learning. We also used an eye tracker (EyeLink 1000) to identify the emotions learners experienced while they read specific content, as opposed to the emotions they experienced over the course of the entire learning session. A total of 33 out of 57 of the learners' data were eligible to be analyzed. Results revealed that learners expressed more negative-activating emotions (ie, anger, anxiety) and negative-deactivating emotions (ie, sadness) than positive-activating emotions (ie, happiness). Learners with an angry emotion profile had the highest learning gains. The importance of examining typically undesirable emotions in learning, such as anger, is discussed using the control-value theory of achievement emotions. Further, this study describes a multimodal methodology to integrate behavioral trace data into learning analytics research.
Hamidreza Azemati; Fatemeh Jam; Modjtaba Ghorbani; Matthias Dehmer; Reza Ebrahimpour; Abdolhamid Ghanbaran; Frank Emmert-Streib The role of symmetry in the aesthetics of residential building façades using cognitive science methods Journal Article In: Symmetry, vol. 12, pp. 1–15, 2020. Symmetry is an important visual feature for humans and its application in architecture is completely evident. This paper aims to investigate the role of symmetry in the aesthetics judgment of residential building façades and study the pattern of eye movement based on the expertise of subjects in architecture. In order to implement this in the present paper, we have created images in two categories: symmetrical and asymmetrical façade images. The experiment design allows us to investigate the preference of subjects and their reaction time to decide about presented images as well as record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer's attention, but there is a difference in the points watched and their fixation duration. Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups.
Anissa Boutabla; Samuel Cavuscens; Maurizio Ranieri; Céline Crétallaz; Herman Kingma; Raymond Berg; Nils Guinand; Angélica Pérez Fornos Simultaneous activation of multiple vestibular pathways upon electrical stimulation of semicircular canal afferents Journal Article In: Journal of Neurology, vol. 267, no. 1, pp. S273–S284, 2020. Background and purpose: Vestibular implants seem to be a promising treatment for patients suffering from severe bilateral vestibulopathy. To optimize outcomes, we need to investigate how, and to which extent, the different vestibular pathways are activated. Here we characterized the simultaneous responses to electrical stimuli of three different vestibular pathways. Methods: Three vestibular implant recipients were included. First, activation thresholds and amplitude growth functions of electrically evoked vestibulo-ocular reflexes (eVOR), cervical myogenic potentials (ecVEMPs) and vestibular percepts (vestibulo-thalamo-cortical, VTC) were recorded upon stimulation with single, biphasic current pulses (200 µs/phase) delivered through five different vestibular electrodes. Latencies of eVOR and ecVEMPs were also characterized. Then we compared the amplitude growth functions of the three pathways using different stimulation profiles (1-pulse, 200 µs/phase; 1-pulse, 50 µs/phase; 4-pulses, 50 µs/phase, 1600 pulses-per-second) in one patient (two electrodes). Results: The median latencies of the eVOR and ecVEMPs were 8 ms (8–9 ms) and 10.2 ms (9.6–11.8 ms), respectively. While the amplitude of eVOR and ecVEMP responses increased with increasing stimulation current, the VTC pathway showed a different, step-like behavior. In this study, the 200 µs/phase paradigm appeared to give the best balance to enhance responses at lower stimulation currents. Conclusions: This study is a first attempt to evaluate the simultaneous activation of different vestibular pathways. However, this issue deserves further and more detailed investigation to determine the actual possibility of selective stimulation of a given pathway, as well as the functional impact of the contribution of each pathway to the overall rehabilitation process.
Christopher D. D. Cabrall; Riender Happee; Joost C. F. Winter Prediction of effort and eye movement measures from driving scene components Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 68, pp. 187–197, 2020. For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control.
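The following is a minimal sketch, in the spirit of the person-specific regression analysis described above, of predicting per-clip effort ratings from scene-component predictors and cross-validating within one participant. The predictor set, simulated data, and regression weights are illustrative assumptions, not the authors' variables or dataset.

```python
# Hedged sketch: cross-validated, person-specific linear regression of effort
# ratings on scene-component predictors (all values simulated for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_clips = 60
# Hypothetical scene-component predictors per clip (e.g., curve angle, road
# surface area, counts of road users, symbols, infrastructure, vegetation).
X = rng.normal(size=(n_clips, 6))
effort = X @ np.array([0.8, 0.3, 0.2, 0.1, 0.1, 0.05]) + rng.normal(scale=1.0, size=n_clips)

preds = np.empty(n_clips)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], effort[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

# Predictive correlation between cross-validated predictions and ratings.
r = np.corrcoef(preds, effort)[0, 1]
print(f"cross-validated predictive correlation: r = {r:.2f}")
```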
Andrea Caoli; Silvio P. Sabatini; Agostino Gibaldi; Guido Maiello; Anna Kosovicheva; Peter Bex A dichoptic feedback-based oculomotor training method to manipulate interocular alignment Journal Article In: Scientific Reports, vol. 10, pp. 15634, 2020. Strabismus is a prevalent impairment of binocular alignment that is associated with a spectrum of perceptual deficits and social disadvantages. Current treatments for strabismus involve ocular alignment through surgical or optical methods and may include vision therapy exercises. In the present study, we explore the potential of real-time dichoptic visual feedback that may be used to quantify and manipulate interocular alignment. A gaze-contingent ring was presented independently to each eye of 11 normally-sighted observers as they fixated a target dot presented only to their dominant eye. Their task was to center the rings within 2° of the target for at least 1 s, with feedback provided by the sizes of the rings. By offsetting the ring in the non-dominant eye temporally or nasally, this task required convergence or divergence, respectively, of the non-dominant eye. Eight of 11 observers attained 5° asymmetric convergence and 3 of 11 attained 3° asymmetric divergence. The results suggest that real-time gaze-contingent feedback may be used to quantify and transiently simulate strabismus and holds promise as a method to augment existing therapies for oculomotor alignment disorders.
Xianglan Chen; Hulin Ren; Yamin Liu; Bendegul Okumus; Anil Bilgihan Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment Journal Article In: International Journal of Hospitality Management, vol. 84, pp. 1–10, 2020. Food is as cultural as it is practical, and names of dishes accordingly have cultural nuances. Menus serve as communication tools between restaurants and their guests, representing the culinary philosophy of the chefs and proprietors involved. The purpose of this experimental lab study is to compare differences of attention paid to textual and pictorial elements of menus with metaphorical and/or metonymic names. Eye movement technology was applied in a 2 × 3 between-subject experiment (n = 40), comparing the strength of visual metaphors (e.g., images of menu items on the menu) and direct textual names in Chinese and English with regard to guests' willingness to purchase the dishes in question. Post-test questionnaires were also employed to assess participants' attitudes toward menu designs. Study results suggest that visual metaphors are more efficient when reflecting a product's strength. Images are shown to positively influence consumers' expectations of taste and enjoyment, garnering the most attention under all six conditions studied here, and constitute the most effective format when Chinese-only names are present. The textual claim increases perception of the strength of menu items along with purchase intention. Metaphorical dish names with bilingual (i.e., Chinese and English) names hold the greatest appeal. This result can be interpreted from the perspective of grounded cognition theory, which suggests that situated simulations and re-enactment of perceptual, motor, and affective processes can support abstract thought. The lab results and survey provide specific theoretical and managerial implications with regard to translating names of Chinese dishes to attract customers' attention to specific menu items.
Agnieszka Chmiel; Przemysław Janikowski; Agnieszka Lijewska Multimodal processing in simultaneous interpreting with text Journal Article In: Target, vol. 32, no. 1, pp. 37–58, 2020. The present study focuses on (in)congruence of input between the visual and the auditory modality in simultaneous interpreting with text. We asked twenty-four professional conference interpreters to simultaneously interpret an aurally and visually presented text with controlled incongruences in three categories (numbers, names and control words), while measuring interpreting accuracy and eye movements. The results provide evidence for the dominance of the visual modality, which goes against the professional standard of following the auditory modality in the case of incongruence. Numbers enjoyed the greatest accuracy across conditions possibly due to simple cross-language semantic mappings. We found no evidence for a facilitation effect for congruent items, and identified an impeding effect of the presence of the visual text for incongruent items. These results might be interpreted either as evidence for the Colavita effect (in which visual stimuli take precedence over auditory ones) or as strategic behaviour applied by professional interpreters to avoid risk.
Francisco M. Costela; José J. Castro-Torres Risk prediction model using eye movements during simulated driving with logistic regressions and neural networks Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 74, pp. 511–521, 2020. Background: Many studies have found that eye movement behavior provides a real-time index of mental activity. Risk management architectures embedded in autonomous vehicles fail to include human cognitive aspects. We set out to evaluate whether eye movements during a risk driving detection task are able to predict risk situations. Methods: Thirty-two normally sighted subjects (15 female) saw 20 clips of recorded driving scenes while their gaze was tracked. They reported when they considered the car should brake, anticipating any hazard. We applied both a mixed-effect logistic regression model and feedforward neural networks between hazard reports and eye movement descriptors. Results: All subjects reported at least one major collision hazard in each video (average 3.5 reports). We found that hazard situations were predicted by larger saccades, more and longer fixations, fewer blinks, and a smaller gaze dispersion in both horizontal and vertical dimensions. Performance between models incorporating a different combination of descriptors was compared by running a test of equality of receiver operating characteristic areas. Feedforward neural networks outperformed logistic regressions in accuracies. The model including saccadic magnitude, fixation duration, dispersion in x, and pupil returned the highest ROC area (0.73). Conclusion: We evaluated each eye movement descriptor successfully and created separate models that predicted hazard events with an average efficacy of 70% using both logistic regressions and feedforward neural networks. The use of driving simulators and hazard detection videos can be considered a reliable methodology to study risk prediction.
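Below is a minimal sketch of the general analysis idea named above: comparing a logistic regression with a small feedforward network for predicting hazard reports from eye-movement descriptors, scored by ROC area. The feature names, simulated data, and model settings are assumptions for illustration only, not the authors' models.

```python
# Hedged sketch: logistic regression vs. feedforward network on simulated
# eye-movement descriptors, compared by ROC area.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-epoch descriptors: saccade magnitude, fixation duration,
# horizontal gaze dispersion, pupil diameter (standardized, simulated).
X = rng.normal(size=(n, 4))
p_hazard = 1 / (1 + np.exp(-(1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2])))
y = rng.binomial(1, p_hazard)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
logit = LogisticRegression().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X_tr, y_tr)

print("logistic ROC AUC:", round(roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1]), 3))
print("MLP ROC AUC:     ", round(roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1]), 3))
```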
Giorgia D'Innocenzo; Alexander V. Nowicky; Daniel T. Bishop Dynamic task observation: A gaze-mediated complement to traditional action observation treatment? Journal Article In: Behavioural Brain Research, vol. 379, pp. 1–13, 2020. Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise, but also their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action, relative to an everyday one – a reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action as we recorded MEPs from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf swing observation, but it was not modulated by expertise, relative to baseline; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes; in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing of a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.
Trafton Drew; James Guthrie; Isabel Reback Worse in real life: An eye-tracking examination of the cost of CAD at low prevalence Journal Article In: Journal of Experimental Psychology: Applied, vol. 26, no. 4, pp. 659–670, 2020. Computer-aided detection (CAD) is applied during screening mammography for millions of women each year. Despite its popularity, several large studies have observed no benefit in breast cancer detection for practices that use CAD. This lack of benefit may be driven by how CAD information is conveyed to the radiologist. In the current study, we examined this possibility in an artificial task modeled after screening mammography. Prior work at high (50%) target prevalence suggested that CAD marks might disrupt visual attention: Targets that are missed by the CAD system are more likely to be missed by the user. However, targets are much less common in screening mammography. Moreover, the prior work on this topic has focused on simple binary CAD systems that place marks on likely locations, but some modern CAD systems employ interactive CAD (iCAD) systems that may mitigate the previously observed costs. Here, we examined the effects of target prevalence and CAD system. We found that the costs of binary CAD were exacerbated at low prevalence. Meanwhile, iCAD did not lead to a cost on unmarked targets, which suggests that this sort of CAD implementation may be superior to more traditional binary CAD implementations when targets occur infrequently.
Camilla E. J. Elphick; Graham E. Pike; Graham J. Hole You can believe your eyes: Measuring implicit recognition in a lineup with pupillometry Journal Article In: Psychology, Crime and Law, vol. 26, no. 1, pp. 67–92, 2020. As pupil size is affected by cognitive processes, we investigated whether it could serve as an independent indicator of target recognition in lineups. Participants saw a simulated crime video, followed by two viewings of either a target-present or target-absent video lineup while pupil size was measured with an eye-tracker. Participants who made correct identifications showed significantly larger pupil sizes when viewing the target compared with distractors. Some participants were uncertain about their choice of face from the lineup, but nevertheless showed pupillary changes when viewing the target, suggesting covert recognition of the target face had occurred. The results suggest that pupillometry might be a useful aid in assessing the accuracy of an eyewitness' identification.
Gemma Fitzsimmons; Lewis T. Jayes; Mark J. Weal; Denis Drieghe The impact of skim reading and navigation when reading hyperlinks on the web Journal Article In: PLoS ONE, vol. 15, no. 9, pp. e0239134, 2020. It has been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect lexical processing of words. Across two experiments, we utilised eye tracking methodology to explore how hyperlinks and navigating webpages affect reading behaviour. In Experiment 1, participants read static Webpages either for comprehension or whilst skim reading, while in Experiment 2, participants additionally read through a navigable Web environment. Embedded target words were either hyperlinks or not and were either high-frequency or low-frequency words. Results from Experiment 1 show that readers fully lexically process both linked and unlinked words when reading for comprehension, but when skim reading they only fully lexically process linked words, as evidenced by a frequency effect that was absent for the unlinked words. In Experiment 2, which allowed for navigating, readers only fully lexically processed linked words compared to unlinked words, regardless of whether they were skim reading or reading for comprehension. We suggest that readers engage in an efficient reading strategy where they attempt to minimise comprehension loss while maintaining a high reading speed. Readers use hyperlinks as markers to suggest important information and use them to navigate through the text in an efficient and effective way. The task of reading on the Web causes readers to lexically process words in a markedly different way from typical reading experiments.
Mathilda Froesel; Quentin Goudard; Marc Hauser; Maëva Gacoin; Suliann Ben Hamed Automated video-based heart rate tracking for the anesthetized and behaving monkey Journal Article In: Scientific Reports, vol. 10, pp. 17940, 2020. Heart rate (HR) is extremely valuable in the study of complex behaviours and their physiological correlates in non-human primates. However, collecting this information is often challenging, involving either invasive implants or tedious behavioural training. In the present study, we implement a Eulerian video magnification (EVM) heart tracking method in the macaque monkey combined with wavelet transform. This is based on a measure of image to image fluctuations in skin reflectance due to changes in blood influx. We show a strong temporal coherence and amplitude match between EVM-based heart tracking and ground truth ECG, from both color (RGB) and infrared (IR) videos, in anesthetized macaques, to a level comparable to what can be achieved in humans. We further show that this method allows us to identify consistent HR changes following the presentation of conspecific emotional voices or faces. EVM is used to extract HR in humans but has never been applied to non-human primates. Video photoplethysmography allows awake macaques' HR to be extracted from RGB videos. In contrast, our method allows awake macaques' HR to be extracted from both RGB and IR videos and is particularly resilient to the head motion that can be observed in awake behaving monkeys. Overall, we believe that this method can be generalized as a tool to track HR of the awake behaving monkey, for ethological, behavioural, neuroscience or welfare purposes.
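As a rough illustration of the general principle behind video-based heart-rate tracking, here is a toy sketch that band-passes a frame-averaged skin-intensity signal and reads off the dominant spectral peak. This is a simplified photoplethysmography-style estimate on simulated data, not the EVM + wavelet pipeline used in the paper; frame rate, band limits, and the assumed heart rate are placeholders.

```python
# Hedged sketch: estimate heart rate from a simulated frame-mean skin signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                        # assumed video frame rate (Hz)
t = np.arange(0, 60, 1 / fs)     # 60 s of frames
true_hr_hz = 2.0                 # assumed true rate for the simulation (120 bpm)
# Simulated frame-mean skin intensity: small pulsatile component plus noise.
signal = 0.02 * np.sin(2 * np.pi * true_hr_hz * t)
signal += np.random.default_rng(2).normal(scale=0.05, size=t.size)

b, a = butter(3, [0.8, 5.0], btype="band", fs=fs)   # keep roughly 48-300 bpm
filtered = filtfilt(b, a, signal)

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"estimated heart rate: {peak * 60:.1f} bpm")
```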
Alexander Goettker; Kevin J. MacKenzie; T. Scott Murdison Differences between oculomotor and perceptual artifacts for temporally limited head mounted displays Journal Article In: Journal of the Society for Information Display, vol. 28, no. 6, pp. 509–519, 2020. We used perceptual and oculomotor measures to understand the negative impacts of low (phantom array) and high (motion blur) duty cycles with a high-speed, AR-like head-mounted display prototype. We observed large intersubject variability for the detection of phantom array artifacts but a highly consistent and systematic effect on saccadic eye movement targeting during low duty cycle presentations. This adverse effect on saccade endpoints was also related to an increased error rate in a perceptual discrimination task, showing a direct effect of display duty cycle on the perceptual quality. For high duty cycles, the probability of detecting motion blur increased during head movements, and this effect was elevated at lower refresh rates. We did not find an impact of the temporal display characteristics on compensatory eye movements during head motion (e.g., VOR). Together, our results allow us to quantify the tradeoff of different negative spatiotemporal impacts of user movements and make subsequent recommendations for optimized temporal HMD parameters.
Andrea Grant; Gregory J. Metzger; Pierre François Van de Moortele; Gregor Adriany; Cheryl Olman; Lin Zhang; Joseph Koopermeiners; Yiğitcan Eryaman; Margaret Koeritzer; Meredith E. Adams; Thomas R. Henry; Kamil Uğurbil 10.5 T MRI static field effects on human cognitive, vestibular, and physiological function Journal Article In: Magnetic Resonance Imaging, vol. 73, pp. 163–176, 2020. Purpose: To perform a pilot study to quantitatively assess cognitive, vestibular, and physiological function during and after exposure to a magnetic resonance imaging (MRI) system with a static field strength of 10.5 Tesla at multiple time scales. Methods: A total of 29 subjects were exposed to a 10.5 T MRI field and underwent vestibular, cognitive, and physiological testing before, during, and after exposure; for 26 subjects, testing and exposure were repeated within 2–4 weeks of the first visit. Subjects also reported sensory perceptions after each exposure. Comparisons were made between short and long term time points in the study with respect to the parameters measured in the study; short term comparison included pre-vs-isocenter and pre-vs-post (1–24 h), while long term compared pre-exposures 2–4 weeks apart. Results: Of the 79 comparisons, 73 parameters were unchanged or had small improvements after magnet exposure. The exceptions to this included lower scores on short term (i.e. same day) executive function testing, greater isocenter spontaneous eye movement during visit 1 (relative to pre-exposure), increased number of abnormalities on videonystagmography visit 2 versus visit 1 and a mix of small increases (short term visit 2) and decreases (short term visit 1) in blood pressure. In addition, more subjects reported metallic taste at 10.5 T in comparison to similar data obtained in previous studies at 7 T and 9.4 T. Conclusion: Initial results of 10.5 T static field exposure indicate that 1) cognitive performance is not compromised at isocenter, 2) subjects experience increased eye movement at isocenter, and 3) subjects experience small changes in vital signs but no field-induced increase in blood pressure. While small but significant differences were found in some comparisons, none were identified as compromising subject safety. A modified testing protocol informed by these results was devised with the goal of permitting increased enrollment while providing continued monitoring to evaluate field effects.
Agnes Hardardottir; Mohammed Al-Hamdani; Raymond Klein; Austin Hurst; Sherry H. Stewart The effect of cigarette packaging and illness sensitivity on attention to graphic health warnings: A controlled study Journal Article In: Nicotine & Tobacco Research, vol. 22, no. 10, pp. 1788–1794, 2020. INTRODUCTION: The social and health care costs of smoking are immense. To reduce these costs, several tobacco control policies have been introduced (eg, graphic health warnings [GHWs] on cigarette packs). Previous research has found plain packaging (a homogenized form of packaging), in comparison to branded packaging, effectively increases attention to GHWs using UK packaging prototypes. Past studies have also found that illness sensitivity (IS) protects against health-impairing behaviors. Building on this evidence, the goal of the current study was to assess the effect of packaging type (plain vs. branded), IS level, and their interaction on attention to GHWs on cigarette packages using proposed Canadian prototypes. AIMS AND METHODS: We assessed the dwell time and fixations on the GHW component of 40 cigarette pack stimuli (20 branded; 20 plain). Stimuli were presented in random order to 50 smokers (60.8% male; mean age = 33.1; 92.2% daily smokers) using the EyeLink 1000 system. Participants were divided into low IS (n = 25) and high IS (n = 25) groups based on scores on the Illness Sensitivity Index. RESULTS: Overall, plain packaging relative to branded packaging increased fixations (but not dwell time) on GHWs. Moreover, low IS (but not high IS) smokers showed more fixations to GHWs on plain versus branded packages. CONCLUSIONS: These findings demonstrate that plain packaging is a promising intervention for daily smokers, particularly those low in IS, and contribute evidence in support of impending implementation of plain packaging in Canada. IMPLICATIONS: Our findings have three important implications. First, our study provides controlled experimental evidence that plain packaging is a promising intervention for daily smokers. Second, the findings of this study contribute supportive evidence for the impending plain packaging policy in Canada, and can therefore aid in defense against anticipated challenges from the tobacco industry upon its implementation. Third, given its effects in increasing attention to GHWs, plain packaging is an intervention likely to provide smokers enhanced incentive for smoking cessation, particularly among those low in IS who may otherwise be less interested in seeking treatment for tobacco dependence.
Claudia R. Hebert; Li Z. Sha; Roger W. Remington; Yuhong V. Jiang Redundancy gain in visual search of simulated X-ray images Journal Article In: Attention, Perception, and Psychophysics, vol. 82, no. 4, pp. 1669–1681, 2020. Cancer diagnosis frequently relies on the interpretation of medical images such as chest X-rays and mammography. This process is error prone; misdiagnoses can reach a rate of 15% or higher. Of particular interest are false negatives—tumors that are present but missed. Previous research has identified several perceptual and attentional problems underlying inaccurate perception of these images. But how might these problems be reduced? The psychological literature has shown that presenting multiple, duplicate images can improve performance. Here we explored whether redundant image presentation can improve target detection in simulated X-ray images, by presenting four identical or similar images concurrently. Displays with redundant images, including duplicates of the same image, showed reduced false-negative rates, compared with displays with a single image. This effect held both when the target's prevalence rate was high and when it was low. Eye tracking showed that fixating on two or more images in the redundant condition speeded target detection and prolonged search, and that the latter effect was the key to reducing false negatives. The redundancy gain may result from both perceptual enhancement and an increase in the search quitting threshold.
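As a back-of-the-envelope illustration of why redundant presentations can reduce false negatives: if each of N looks at copies of an image missed the target independently with probability m, the chance that all N looks miss it is m^N. The snippet below works through this under an idealizing independence assumption, using a ~15% single-image miss rate purely as a placeholder value.

```python
# Hedged sketch: predicted miss rate under independent redundant looks.
miss_single = 0.15  # placeholder single-image miss probability
for n_images in (1, 2, 4):
    print(f"{n_images} image(s): predicted miss rate = {miss_single ** n_images:.4f}")
```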
Jay Hegdé Journal Article In: Journal of Medical Imaging, vol. 7, no. 2, pp. 1–22, 2020. The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects' response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.
David R. Howell; Anna N. Brilliant; Christina L. Master; William P. Meehan Reliability of objective eye-tracking measures among healthy adolescent athletes Journal Article In: Clinical Journal of Sport Medicine, vol. 30, no. 5, pp. 444–450, 2020. OBJECTIVE: To determine the test-retest correlation of an objective eye-tracking device among uninjured youth athletes. DESIGN: Repeated-measures study. SETTING: Sports-medicine clinic. PARTICIPANTS: Healthy youth athletes (mean age = 14.6 ± 2.2 years; 39% women) completed a brief, automated, and objective eye-tracking assessment. INDEPENDENT VARIABLES: Participants completed the eye-tracking assessment at 2 different testing sessions. MAIN OUTCOME MEASURES: During the assessment, participants watched a 220-second video clip while it moved around a computer monitor in a clockwise direction as an eye tracker recorded eye movements. We obtained 13 eye movement outcome variables and assessed correlations between the assessments made at the 2 time points using Spearman's Rho (rs). RESULTS: Thirty-one participants completed the eye-tracking evaluation at 2 time points [median = 7 (interquartile range = 6-9) days between tests]. No significant differences in outcomes were found between the 2 testing times. Several eye movement variables demonstrated moderate to moderately high test-retest reliability. Combined eye conjugacy metric (BOX score
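A minimal sketch of the kind of test-retest analysis described above: Spearman's rho between the same eye-movement metric measured at two sessions. The data below are simulated placeholders, not the study's measurements, and the "BOX-like score" label is only an assumption for illustration.

```python
# Hedged sketch: test-retest correlation (Spearman's rho) for one metric.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
session1 = rng.normal(loc=1.0, scale=0.2, size=31)      # e.g., a BOX-like conjugacy score
session2 = session1 + rng.normal(scale=0.1, size=31)    # retest with measurement noise

rho, p = spearmanr(session1, session2)
print(f"test-retest Spearman rho = {rho:.2f} (p = {p:.3f})")
```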
Sabrina Karl; Magdalena Boch; Zsófia Virányi; Claus Lamm; Ludwig Huber Training pet dogs for eye-tracking and awake fMRI Journal Article In: Behavior Research Methods, vol. 52, no. 2, pp. 838–856, 2020. In recent years, two well-developed methods of studying mental processes in humans have been successively applied to dogs. First, eye-tracking has been used to study visual cognition without distraction in unrestrained dogs. Second, noninvasive functional magnetic resonance imaging (fMRI) has been used for assessing the brain functions of dogs in vivo. Both methods, however, require dogs to sit, stand, or lie motionless while yet remaining attentive for several minutes, during which time their brain activity and eye movements are measured. Whereas eye-tracking in dogs is performed in a quiet and, apart from the experimental stimuli, nonstimulating and highly controlled environment, MRI scanning can only be performed in a very noisy and spatially restraining MRI scanner, in which dogs need to feel relaxed and stay motionless in order to study their brain and cognition with high precision. Here we describe in detail a training regime that is perfectly suited to train dogs in the required skills, with a high success probability and while keeping to the highest ethical standards of animal welfare—that is, without using aversive training methods or any other compromises to the dog's well-being for both methods. By reporting data from 41 dogs that successfully participated in eye-tracking training and 24 dogs in fMRI training, we provide robust qualitative and quantitative evidence for the quality and efficiency of our training methods. By documenting and validating our training approach here, we aim to inspire others to use our methods to apply eye-tracking or fMRI for their investigations of canine behavior and cognition.
Sabrina Karl; Magdalena Boch; Anna Zamansky; Dirk Linden; Isabella C. Wagner; Christoph J. Völter; Claus Lamm; Ludwig Huber Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures Journal Article In: Scientific Reports, vol. 10, pp. 22273, 2020. Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, and the familiar person relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Josiah P. J. King; Jia E. Loy; Hannah Rohde; Martin Corley Interpreting nonverbal cues to deception in real time Journal Article In: PLoS ONE, vol. 15, no. 3, pp. e0229486, 2020. When questioning the veracity of an utterance, we perceive certain non-linguistic behaviours to indicate that a speaker is being deceptive. Recent work has highlighted that listeners' associations between speech disfluency and dishonesty are detectable at the earliest stages of reference comprehension, suggesting that the manner of spoken delivery influences pragmatic judgements concurrently with the processing of lexical information. Here, we investigate the integration of a speaker's gestures into judgements of deception, and ask if and when associations between nonverbal cues and deception emerge. Participants saw and heard a video of a potentially dishonest speaker describe treasure hidden behind an object, while also viewing images of both the named object and a distractor object. Their task was to click on the object behind which they believed the treasure to actually be hidden. Eye and mouse movements were recorded. Experiment 1 investigated listeners' associations between visual cues and deception, using a variety of static and dynamic cues. Experiment 2 focused on adaptor gestures. We show that a speaker's nonverbal behaviour can have a rapid and direct influence on listeners' pragmatic judgements, supporting the idea that communication is fundamentally multimodal.
Miguel A. Lago; Craig K. Abbey; Miguel P. Eckstein Foveated model observers for visual search in 3D medical images Journal Article In: IEEE Transactions on Medical Imaging, 2020. Model observers have a long history of success in predicting human observer performance in clinically-relevant detection tasks. New 3D image modalities provide more signal information but vastly increase the search space to be scrutinized. Here, we compared standard linear model observers (ideal observers, non-pre-whitening matched filter with eye filter, and various versions of Channelized Hotelling models) to human performance searching in 3D 1/f^2.8 filtered noise images and assessed its relationship to the more traditional location known exactly detection tasks and 2D search. We investigated two different signal types that vary in their detectability away from the point of fixation (visual periphery). We show that the influence of 3D search on human performance interacts with the signal's detectability in the visual periphery. Detection performance for signals difficult to detect in the visual periphery deteriorates greatly in 3D search but not in 3D location known exactly and 2D search. Standard model observers do not predict the interaction between 3D search and signal type. A proposed extension of the Channelized Hotelling model (foveated search model) that processes the image with reduced spatial detail away from the point of fixation, explores the image through eye movements, and scrolls across slices can successfully predict the interaction observed in humans and also the types of errors in 3D search. Together, the findings highlight the need for foveated model observers for image quality evaluation with 3D search.
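For readers unfamiliar with linear model observers, here is a minimal sketch of a non-prewhitening matched-filter observer on a location-known-exactly detection task: correlate a signal template with signal-present and signal-absent images and score detectability by ROC area. White Gaussian noise and a Gaussian blob stand in for the paper's 3D 1/f^2.8 backgrounds and signals; this is only a generic illustration, not the authors' foveated search model.

```python
# Hedged sketch: non-prewhitening matched-filter observer in white noise.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
size = 64
y, x = np.mgrid[:size, :size]
template = np.exp(-(((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * 3.0 ** 2)))
template /= np.linalg.norm(template)   # unit-norm template

def template_response(signal_present):
    image = rng.normal(scale=1.0, size=(size, size))  # white-noise background
    if signal_present:
        image += 2.0 * template                       # weak added signal
    return float(np.sum(image * template))            # matched-filter response

responses = [template_response(s) for s in [True] * 200 + [False] * 200]
labels = [1] * 200 + [0] * 200
print("NPW observer ROC area:", round(roc_auc_score(labels, responses), 3))
```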
Anthony J. Lambert; Tanvi Sharma; Nathan Ryckman Accident vulnerability and vision for action: A pilot investigation Journal Article In: Vision, vol. 4, pp. 1–13, 2020. Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than vision for perception. In this preliminary investigation, older and younger adults, with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire), completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible, in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.
Fan Li; Chun-Hsien Chen; Gangyan Xu; Li Pheng Khoo Hierarchical eye-tracking data analytics for human fatigue detection at a traffic control center Journal Article In: IEEE Transactions on Human-Machine Systems, vol. 50, no. 5, pp. 465–474, 2020. Eye-tracking-based human fatigue detection at traffic control centers suffers from an unavoidable problem of low-quality eye-tracking data caused by noisy and missing gaze points. In this article, the authors conducted pioneering work by investigating the effects of data quality on eye-tracking-based fatigue indicators and by proposing a hierarchical-based interpolation approach to extract the eye-tracking-based fatigue indicators from low-quality eye-tracking data. This approach adaptively classified the missing gaze points and hierarchically interpolated them based on the temporal-spatial characteristics of the gaze points. In addition, definitions of applicable fixations and saccades for human fatigue detection are proposed. Two experiments were conducted to verify the effectiveness and efficiency of the method in extracting eye-tracking-based fatigue indicators and detecting human fatigue. The results indicate that most eye-tracking parameters are significantly affected by the quality of the eye-tracking data. In addition, the proposed approach can achieve much better performance than the classic velocity threshold identification algorithm (I-VT) and a state-of-the-art method (U'n'Eye) in parsing low-quality eye-tracking data. Specifically, the proposed method attained relatively stable eye-tracking-based fatigue indicators and reported the highest accuracy in human fatigue detection. These results are expected to facilitate the application of eye movement-based human fatigue detection in practice.
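For context on the baseline method named above, here is a minimal sketch of a classic velocity-threshold (I-VT) style classification: interpolate short gaps in the gaze trace, compute sample-to-sample velocity, and label samples above a threshold as saccadic. The sampling rate, velocity threshold, gap limit, and simulated one-dimensional gaze data are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch: gap interpolation plus I-VT-style saccade/fixation labelling.
import numpy as np
import pandas as pd

fs = 250.0               # assumed sampling rate (Hz)
vel_threshold = 30.0     # deg/s, a commonly used I-VT threshold

rng = np.random.default_rng(4)
gaze = pd.Series(np.cumsum(rng.normal(scale=0.02, size=1000)))  # gaze x position (deg)
gaze.iloc[200:210] = np.nan                                     # simulated data loss

# Fill only short gaps (here: up to 75 ms) by linear interpolation.
gaze_filled = gaze.interpolate(limit=int(0.075 * fs), limit_area="inside")

velocity = gaze_filled.diff().abs() * fs        # deg/s along one axis
is_saccade = velocity > vel_threshold
print(f"{int(is_saccade.sum())} saccadic samples, "
      f"{int((~is_saccade & velocity.notna()).sum())} fixation samples")
```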
Zhenji Lu; Riender Happee; Joost C. F. Winter Take over! A video-clip study measuring attention, situation awareness, and decision-making in the face of an impending hazard Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 72, pp. 211–225, 2020. In highly automated driving, drivers occasionally need to take over control of the car due to limitations of the automated driving system. Research has shown that visually distracted drivers need about 7 s to regain situation awareness (SA). However, it is unknown whether the presence of a hazard affects SA. In the present experiment, 32 participants watched animated video clips from a driver's perspective while their eyes were recorded using eye-tracking equipment. The videos had lengths between 1 and 20 s and contained either no hazard or an impending crash in the form of a stationary car in the ego lane. After each video, participants had to (1) decide (no need to take over, evade left, evade right, brake only), (2) rate the danger of the situation, (3) rebuild the situation from a top-down perspective, and (4) rate the difficulty of the rebuilding task. The results showed that the hazard situations were experienced as more dangerous than the non-hazard situations, as inferred from self-reported danger and pupil diameter. However, there were no major differences in SA: hazard and non-hazard situations yielded equivalent speed and distance errors in the rebuilding task and equivalent self-reported difficulty scores. An exception occurred for the shortest time budget (1 s) videos, where participants showed impaired SA in the hazard condition, presumably because the threat inhibited participants from looking into the rear-view mirror. Correlations between measures of SA and decision-making accuracy were low to moderate. It is concluded that hazards do not substantially affect the global awareness of the traffic situation, except for short time budgets.
Xueer Ma; Xiangling Zhuang; Guojie Ma Transparent windows on food packaging do not always capture attention and increase purchase intention Journal Article In: Frontiers in Psychology, vol. 11, pp. 593690, 2020. Transparent windows on food packaging can effectively highlight the actual food inside. The present study examined whether food packaging with transparent windows (relative to packaging with food‐ and non-food graphic windows in the same position and of the same size) has more advantages in capturing consumer attention and determining consumers' willingness to purchase. In this study, college students were asked to evaluate prepackaged foods presented on a computer screen, and their eye movements were recorded. The results showed salience effects for both packaging with transparent and food-graphic windows, which were also regulated by food category. Both transparent and graphic packaging gained more viewing time than the non-food graphic baseline condition for all the three selected products (i.e., nuts, preserved fruits, and instant cereals). However, no significant difference was found between transparent and graphic window conditions. For preserved fruits, time to first fixation was shorter in transparent packaging than other conditions. For nuts, the willingness to purchase was higher in both transparent and graphic conditions than the baseline condition, while the packaging attractiveness played a key role in mediating consumers' willingness to purchase. The implications for stakeholders and future research directions are discussed.
Anna Miscenà; Jozsef Arato; Raphael Rosenberg Absorbing the gaze, scattering looks: Klimt's distinctive style and its two-fold effect on the eye of the beholder Journal Article In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–13, 2020. Among the most renowned painters of the early twentieth century, Gustav Klimt is often associated – by experts and laymen alike – with a distinctive style of representation: the visual juxtaposition of realistic features and flattened ornamental patterns. Art historical writing suggests that this juxtaposition allows a two-fold experience; the perception of both the realm of art and the realm of life. While Klimt adopted a variety of stylistic choices in his career, this one popularised his work and was hardly ever used by other artists. The following study was designed to observe whether Klimt's distinctive style causes a specific behaviour of the viewer, at the level of eye-movements. Twenty-one portraits were shown to thirty viewers while their eye-movements were recorded. The pictures included artworks by Klimt in both his distinctive and non-distinctive styles, as well as other artists of the same historical period. The recorded data show that only Klimt's distinctive paintings induce a specific eye-movement pattern with alternating longer (“absorbed”) and shorter (“scattered”) fixations. We therefore claim that there is a behavioural correspondence to what art historical interpretations have so far asserted: The perception of “Klimt's style” can be described as two-fold also at a physiological level.
Malik M. Naeem Mannan; M. Ahmad Kamran; Shinil Kang; Hak Soo Choi; Myung Yung Jeong A hybrid speller design using eye tracking and SSVEP brain–computer interface Journal Article In: Sensors, vol. 20, no. 3, pp. 1–20, 2020. Steady‐state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to the advantages of robustness, large number of commands, high classification accuracies, and information transfer rates (ITRs). However, the use of several simultaneous flickering stimuli often causes high levels of user discomfort, tiredness, annoyingness, and fatigue. Here we propose to design a stimuli‐responsive hybrid speller by using electroencephalography (EEG) and video‐based eye‐tracking to increase user comfortability levels when presented with large numbers of simultaneously flickering stimuli. Interestingly, a canonical correlation analysis (CCA)‐based framework was useful to identify target frequency with a 1 s duration of flickering signal. Our proposed BCI‐speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI‐spellers use a number of frequencies equal to the number of targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued‐spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free‐spelling task. Consequently, our proposed speller is superior to the other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyingness, tiredness and discomfort. Together, our proposed hybrid eye tracking and SSVEP BCI‐based system will ultimately enable a truly high-speed communication channel.
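A minimal sketch of the CCA-based SSVEP frequency identification idea mentioned above: correlate a short multi-channel EEG segment with sine/cosine reference templates at each candidate flicker frequency and pick the best-matching frequency. The EEG is simulated, and the frequency set, channel count, and harmonic count are assumptions, not the authors' configuration.

```python
# Hedged sketch: CCA-based identification of an SSVEP target frequency.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration = 250.0, 1.0
t = np.arange(0, duration, 1 / fs)
candidate_freqs = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]   # assumed stimulus frequencies

rng = np.random.default_rng(5)
target = 10.0
# Eight noisy "channels" carrying a 10 Hz response with random phases.
eeg = np.stack([np.sin(2 * np.pi * target * t + phi) for phi in rng.uniform(0, np.pi, 8)], axis=1)
eeg += rng.normal(scale=1.0, size=eeg.shape)

def reference(freq, n_harmonics=2):
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.stack(comps, axis=1)

scores = []
for f in candidate_freqs:
    cca = CCA(n_components=1).fit(eeg, reference(f))
    u, v = cca.transform(eeg, reference(f))
    scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])   # first canonical correlation

print("identified frequency:", candidate_freqs[int(np.argmax(scores))], "Hz")
```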
Diederick C. Niehorster; Thiago Santini; Roy S. Hessels; Ignace T. C. Hooge; Enkelejda Kasneci; Marcus Nyström The impact of slippage on the data quality of head-worn eye trackers Journal Article In: Behavior Research Methods, vol. 52, no. 3, pp. 1140–1160, 2020. Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil Labs' Pupil in 3D mode, and (iv) Pupil Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
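Here is a minimal sketch of the kind of data-quality measure discussed above: angular deviation of reported gaze from a known validation target, compared between a baseline recording and a recording after headset movement. The gaze samples are simulated and the computation is a generic accuracy metric, not the authors' analysis code.

```python
# Hedged sketch: gaze accuracy (angular deviation) before and after slippage.
import numpy as np

def angular_deviation(gaze_deg, target_deg):
    """Mean Euclidean offset (deg) between 2D gaze samples and a target position."""
    return np.mean(np.linalg.norm(gaze_deg - target_deg, axis=1))

rng = np.random.default_rng(6)
target = np.array([0.0, 0.0])
baseline = rng.normal(loc=[0.3, 0.1], scale=0.4, size=(500, 2))   # pre-task samples
slipped = rng.normal(loc=[1.8, 0.9], scale=0.5, size=(500, 2))    # after glasses movement

dev_base = angular_deviation(baseline, target)
dev_slip = angular_deviation(slipped, target)
print(f"baseline deviation: {dev_base:.2f} deg; after slippage: {dev_slip:.2f} deg; "
      f"increase: {dev_slip - dev_base:.2f} deg")
```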
Paul Henri Prévot; Kevin Gehere; Fabrice Arcizet; Himanshu Akolkar; Mina A. Khoei; Kévin Blaize; Omar Oubari; Pierre Daye; Marion Lanoë; Manon Valet; Sami Dalouz; Paul Langlois; Elric Esposito; Valérie Forster; Elisabeth Dubus; Nicolas Wattiez; Elena Brazhnikova; Céline Nouvel-Jaillard; Yannick LeMer; Joanna Demilly; Claire Maëlle Fovet; Philippe Hantraye; Morgane Weissenburger; Henri Lorach; Elodie Bouillet; Martin Deterre; Ralf Hornig; Guillaume Buc; José Alain Sahel; Guillaume Chenegros; Pierre Pouget; Ryad Benosman; Serge Picaud Behavioural responses to a photovoltaic subretinal prosthesis implanted in non-human primates Journal Article In: Nature Biomedical Engineering, vol. 4, no. 2, pp. 172–180, 2020. Retinal dystrophies and age-related macular degeneration related to photoreceptor degeneration can cause blindness. In blind patients, although the electrical activation of the residual retinal circuit can provide useful artificial visual perception, the resolutions of current retinal prostheses have been limited either by large electrodes or small numbers of pixels. Here we report the evaluation, in three awake non-human primates, of a previously reported near-infrared-light-sensitive photovoltaic subretinal prosthesis. We show that multipixel stimulation of the prosthesis within radiation safety limits enabled eye tracking in the animals, that they responded to stimulations directed at the implant with repeated saccades and that the implant-induced responses were present two years after device implantation. Our findings pave the way for the clinical evaluation of the prosthesis in patients affected by dry atrophic age-related macular degeneration.
David Randall; Sophie Lauren Fox; John Wesley Fenner; Gemma Elizabeth Arblaster; Anne Bjerre; Helen Jane Griffiths Using VR to investigate the relationship between visual acuity and severity of simulated oscillopsia Journal Article In: Current Eye Research, vol. 45, no. 12, pp. 1611–1618, 2020. Purpose: Oscillopsia is a debilitating symptom resulting from involuntary eye movement most commonly associated with acquired nystagmus. Investigating and documenting the effects of oscillopsia severity on visual acuity (VA) is challenging. This paper aims to further understanding of the effects of oscillopsia using a virtual reality simulation. Methods: Fifteen right-beat horizontal nystagmus waveforms, with different amplitude (1°, 3°, 5°, 8° and 11°) and frequency (1.25 Hz, 2.5 Hz and 5 Hz) combinations, were produced and imported into virtual reality to simulate different severities of oscillopsia. Fifty participants without ocular pathology were recruited to read logMAR charts in virtual reality under stationary conditions (no oscillopsia) and subsequently while experiencing simulated oscillopsia. The change in VA (logMAR) was calculated for each oscillopsia simulation (logMAR VA with oscillopsia – logMAR VA with no oscillopsia), removing the influence of different baseline VAs between participants. A one-tailed paired t-test was used to assess statistical significance in the worsening in VA caused by the oscillopsia simulations. Results: VA worsened with each incremental increase in simulated oscillopsia intensity (frequency × amplitude), either by increasing frequency or amplitude, with the exception of statistically insignificant changes at lower intensity simulations. Theoretical understanding predicted a linear relationship between increasing oscillopsia intensity and worsening VA. This was supported by observations at lower intensity simulations but not at higher intensities, with incremental changes in VA gradually levelling off. A potential reason for the difference at higher intensities is the influence of frame rate when using digital simulations in virtual reality. Conclusions: The frequency and amplitude were found to equally affect VA, as predicted. These results not only consolidate the assumption that VA degrades with oscillopsia but also provide quantitative information that relates these changes to amplitude and frequency of oscillopsia.
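As an illustration of the kind of signal behind the simulations above, here is a minimal sketch that generates a right-beat horizontal nystagmus-like waveform (slow drift followed by a fast resetting beat) from an amplitude/frequency pair. The sawtooth idealization, the "positive = rightward" convention, and the peak-to-peak interpretation of amplitude are assumptions, not the authors' waveform definition.

```python
# Hedged sketch: idealized right-beat nystagmus waveform from amplitude and frequency.
import numpy as np
from scipy.signal import sawtooth

def nystagmus_waveform(amplitude_deg, frequency_hz, duration_s=2.0, fs=1000.0):
    t = np.arange(0, duration_s, 1 / fs)
    # width near 0 gives a fast rise (rightward fast phase, positive = right)
    # followed by a slow fall (leftward slow phase) in each cycle.
    eye_position = 0.5 * amplitude_deg * sawtooth(2 * np.pi * frequency_hz * t, width=0.05)
    return t, eye_position

t, pos = nystagmus_waveform(amplitude_deg=5.0, frequency_hz=2.5)
print(f"peak-to-peak amplitude: {pos.max() - pos.min():.1f} deg over {t[-1]:.3f} s")
```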
2019
Victoria I. Nicholls; Geraldine Jean-Charles; Junpeng Lao; Peter Lissa; Roberto Caldara; Sébastien Miellet Developing attentional control in naturalistic dynamic road crossing situations Journal Article In: Scientific Reports, vol. 9, pp. 4176, 2019. In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we propose to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children look mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5–10 y/os look more at socially relevant stimuli and attend to moving vehicles further down the trajectory when the traffic density is high. Critically, 5–10 y/o children also make an increased number of crossing decisions compared to 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.
Michele Scaltritti; Aliaksei Miniukovich; Paola Venuti; Remo Job; Antonella De Angeli; Simone Sulpizio Investigating effects of typographic variables on webpage reading through eye movements Journal Article In: Scientific Reports, vol. 9, pp. 12711, 2019. @article{Scaltritti2019, Webpage reading is ubiquitous in daily life. As Web technologies allow for a large variety of layouts and visual styles, the many formatting options may lead to poor design choices, including low readability. This research capitalizes on the existing readability guidelines for webpage design to outline several visuo-typographic variables and explore their effect on eye movements during webpage reading. Participants included children and adults, and for both groups typical readers and readers with dyslexia were considered. Actual webpages, rather than artificial ones, served as stimuli. This allowed us to test multiple typographic variables in combination and in their typical ranges rather than in possibly unrealistic configurations. Several typographic variables displayed a significant effect on eye movements and reading performance. The effect was mostly homogeneous across the four groups, with a few exceptions. Besides supporting the notion that a few empirically-driven adjustments to the texts' visual appearance can facilitate reading across different populations, the results also highlight the challenge of making digital texts accessible to readers with dyslexia. Theoretically, the results highlight the importance of low-level visual factors, corroborating the emphasis of recent psychological models on visual attention and crowding in reading. |
Katarzyna Stachowiak-Szymczak; Paweł Korpal Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study Journal Article In: Across Languages and Cultures, vol. 20, no. 2, pp. 235–251, 2019. @article{StachowiakSzymczak2019, Simultaneous interpreting is a cognitively demanding task, based on performing several activities concurrently (Gile 1995; Seeber 2011). While multitasking itself is challenging, there are numerous tasks which make interpreting even more difficult, such as rendering of numbers and proper names, or dealing with a speaker's strong accent (Gile 2009). Among these, number interpreting is cognitively taxing since numerical data cannot be derived from the context and it needs to be rendered in a word-to-word manner (Mazza 2001). In our study, we aimed to examine cognitive load involved in number interpreting and to verify whether access to visual materials in the form of slides increases number interpreting accuracy in simultaneous interpreting performed by professional interpreters (N = 26) and interpreting trainees (N = 22). We used a remote EyeLink 1000+ eye-tracker to measure fixation count, mean fixation duration, and gaze time. The participants interpreted two short speeches from English into Polish, both containing 10 numerals. Slides were provided for one of the presentations. Our results show that novices are characterised by longer fixations and they provide a less accurate interpretation than professional interpreters. In addition, access to slides increases number interpreting accuracy. The results obtained might be a valuable contribution to studies on visual processing in simultaneous interpreting, number interpreting as a competence, as well as interpreter training. |
Yang Liu Visual search characteristics of precise map reading by orienteers Journal Article In: PeerJ, vol. 7, pp. 1–15, 2019. @article{Liu2019c, This article compares the differences in eye movements between orienteers of different skill levels on map information searches and explores the visual search patterns of orienteers during precise map reading so as to explore the cognitive characteristics of orienteers' visual search. We recruited 44 orienteers at different skill levels (experts, advanced beginners, and novices), and recorded their behavioral responses and eye movement data when reading maps of different complexities. We found that the complexity of the map (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, orienteers of higher competency tend to have a better quality of route planning (i.e., a shorter route planning time, a longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated obvious cognitive advantages in the ability to find key information. We also found that in the stage of route planning, expert orienteers and advanced beginners first pay attention to the checkpoint description table. The expert group extracted information faster, and their attention was more concentrated, whereas the novice group paid less attention to the checkpoint description table, and their gaze was scattered. We found that experts regarded the information in the checkpoint description table as the key to the problem and they gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering. |
Shlomit Yuval-Greenberg; Anat Keren; Rinat Hilo; Adar Paz; Navah Ratzon Gaze control during simulator driving in adolescents with and without attention deficit hyperactivity disorder Journal Article In: American Journal of Occupational Therapy, vol. 73, no. 3, pp. 1–8, 2019. @article{YuvalGreenberg2019, Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving. Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD. Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires. Setting: University lab. Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had no history of neurological disorders other than ADHD and had normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. Participants with ADHD had been previously diagnosed by a qualified professional. Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration. Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness. Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving. What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control. |
Victoria A. Roach; Graham M. Fraser; James H. Kryklywy; Derek G. V. Mitchell; Timothy D. Wilson Guiding low spatial ability individuals through visual cueing: The dual importance of where and when to look Journal Article In: Anatomical Sciences Education, vol. 12, no. 1, pp. 32–42, 2019. @article{Roach2019, Research suggests that spatial ability may predict success in complex disciplines including anatomy, where mastery requires a firm understanding of the intricate relationships occurring along the course of veins, arteries, and nerves, as they traverse through and around bones, muscles, and organs. Debate exists on the malleability of spatial ability, and some suggest that spatial ability can be enhanced through training. It is hypothesized that spatial ability can be trained in low-performing individuals through visual guidance. To address this, training was completed through a visual guidance protocol. This protocol was based on eye-movement patterns of high-performing individuals, collected via eye-tracking as they completed an Electronic Mental Rotations Test (EMRT). The effects of guidance were evaluated using 33 individuals with low mental rotation ability, in a counterbalanced crossover design. Individuals were placed in one of two treatment groups (late or early guidance) and completed both a guided and an unguided EMRT. A third group (no guidance/control) completed two unguided EMRTs. All groups demonstrated an increase in EMRT scores on their second test (P < 0.001); however, an interaction was observed between treatment and test iteration (P = 0.024). The effect of guidance on scores was contingent on when the guidance was applied. When guidance was applied early, scores were significantly greater than expected (P = 0.028). These findings suggest that by guiding individuals with low mental rotation ability "where" to look early in training, better search approaches may be adopted, yielding improvements in spatial reasoning scores. It is proposed that visual guidance may be applied in spatial fields, such as STEMM (science, technology, engineering, mathematics and medicine), surgery, and anatomy to improve students' interpretation of visual content. |
Čeněk Šašinka; Zdeněk Stachoň; Petr Kubíček; Sascha Tamm; Aleš Matas; Markéta Kukaňová The impact of global/local bias on task-solving in map-related tasks employing extrinsic and intrinsic visualization of risk uncertainty maps Journal Article In: The Cartographic Journal, vol. 56, no. 2, pp. 175–191, 2019. @article{Sasinka2019, The form of visual representation affects both the way in which the visual representation is processed and the effectiveness of this processing. Different forms of visual representation may require the employment of different cognitive strategies in order to solve a particular task; at the same time, the different representations vary as to the extent to which they correspond with an individual's preferred cognitive style. The present study employed a Navon-type task to learn about the occurrence of global/local bias. The research was based on close interdisciplinary cooperation between the domains of both psychology and cartography. Several different types of tasks were created involving avalanche hazard maps with intrinsic/extrinsic visual representations, each of them employing different types of graphic variables representing the level of avalanche hazard and avalanche hazard uncertainty. The research sample consisted of two groups of participants, each of which was provided with a different form of visual representation of identical geographical data, such that the representations could be regarded as ‘informationally equivalent'. The first phase of the research consisted of two correlation studies, the first involving subjects with a high degree of map literacy (students of cartography) (intrinsic method: N = 35; extrinsic method: N = 37). The second study was performed after the results of the first study were analyzed. The second group of participants consisted of subjects with a low expected degree of map literacy (students of psychology; intrinsic method: N = 35; extrinsic method: N = 27). The first study revealed a statistically significant moderate correlation between the students' response times in extrinsic visualization tasks and their response times in a global subtest (r = 0.384, p < 0.05); likewise, a statistically significant moderate correlation was found between the students' response times in intrinsic visualization tasks and their response times in the local subtest (r = 0.387, p < 0.05). At the same time, no correlation was found between the students' performance in the local subtest and their performance in extrinsic visualization tasks, or between their scores in the global subtest and their performance in intrinsic visualization tasks. The second correlation study did not confirm the results of the first correlation study (intrinsic visualization/‘small figures test': r = 0.221; extrinsic visualization/‘large figures test': r = 0.135). The first phase of the research, where the data was subjected to statistical analysis, was followed by a comparative eye-tracking study, whose aim was to provide more detailed insight into the cognitive strategies employed when solving map-related tasks. More specifically, the eye-tracking study was expected to be able to detect possible differences between the cognitive patterns employed when solving extrinsic- as opposed to intrinsic visualization tasks. The results of an exploratory eye-tracking data analysis support the hypothesis of different strategies of visual information processing being used in reaction to different types of visualization. |
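As a minimal illustration of the correlation analyses reported above (response times in the visualization tasks versus response times in the global/local subtests), the following sketch uses SciPy's Pearson correlation on hypothetical response-time vectors; the numbers are invented and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical mean response times (seconds) for each participant
rt_extrinsic_tasks = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 6.3])
rt_global_subtest  = np.array([1.9, 2.4, 1.7, 2.9, 2.2, 2.6, 2.0, 3.1])

# Pearson correlation, as used for the r and p values reported in the abstract
r, p = stats.pearsonr(rt_extrinsic_tasks, rt_global_subtest)
print(f"r = {r:.3f}, p = {p:.3f}")
```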
Hongyan Wang; Zhongling Pi; Weiping Hu The instructor's gaze guidance in video lectures improves learning Journal Article In: Journal of Computer Assisted Learning, vol. 35, no. 1, pp. 42–50, 2019. @article{Wang2019c, Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance. |
Zepeng Wang; Ping Li; Luming Zhang; Ling Shao Community-aware photo quality evaluation by deeply encoding human perception Journal Article In: IEEE Transactions on Multimedia, pp. 1–11, 2019. @article{Wang2019l, Computational photo quality evaluation is a useful technique in many tasks of computer vision and graphics, e.g., photo retargeting, 3D rendering, and fashion recommendation. Conventional photo quality models are designed by characterizing pictures from all communities (e.g., "architecture" and "colorful") indiscriminately, wherein community-specific features are not encoded explicitly. In this work, we develop a new community-aware photo quality evaluation framework. It uncovers the latent community-specific topics by a regularized latent topic model (LTM), and captures human visual quality perception by exploring multiple attributes. More specifically, given massive-scale online photos from multiple communities, a novel ranking algorithm is proposed to measure the visual/semantic attractiveness of regions inside each photo. Meanwhile, three attributes: photo quality scores, weak semantic tags, and inter-region correlations, are seamlessly and collaboratively incorporated during ranking. Subsequently, we construct a gaze shifting path (GSP) for each photo by sequentially linking the top-ranking regions from each photo, and an aggregation-based deep CNN calculates the deep representation for each GSP. Based on this, an LTM is proposed to model the GSP distribution from multiple communities in the latent space. To mitigate the overfitting problem caused by communities with very few photos, a regularizer is added into our LTM. Finally, given a test photo, we obtain its deep GSP representation and its quality score is determined by the posterior probability of the regularized LTM. Comprehensive comparative studies on four image sets have shown the competitiveness of our method. Besides, eye tracking experiments demonstrated that our ranking-based GSPs are highly consistent with real human gaze movements. |
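The "gaze shifting path" idea described above (sequentially linking each photo's top-ranked regions) can be sketched in a few lines; the region names, scores, and the simple score-ordered linking below are illustrative assumptions, not the paper's actual ranking algorithm or CNN aggregation.

```python
from typing import List, Tuple

Region = Tuple[str, float]  # (region id, attractiveness score from a ranking step)

def gaze_shifting_path(regions: List[Region], k: int = 4) -> List[str]:
    """Keep the k top-ranked regions and link them in descending score order,
    approximating the construction of a gaze shifting path (GSP)."""
    top = sorted(regions, key=lambda r: r[1], reverse=True)[:k]
    return [region_id for region_id, _ in top]

# Hypothetical per-photo region scores
photo_regions = [("sky", 0.21), ("face", 0.92), ("logo", 0.55), ("grass", 0.10), ("car", 0.73)]
print(gaze_shifting_path(photo_regions))  # ['face', 'car', 'logo', 'grass']
```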
Bogusława Whyatt In search of directionality effects in the translation process and in the end product Journal Article In: Translation, Cognition and Behavior, vol. 2, no. 1, pp. 79–100, 2019. @article{Whyatt2019, This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high-quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality. |
Jiang Yushi Research on the best visual search effect of logo elements in internet advertising layout Journal Article In: Journal of Contemporary Marketing Science, vol. 2, no. 1, pp. 23–33, 2019. @article{Yushi2019, Purpose: The purpose of this paper is to investigate the optimal visual search pattern for logo elements in online advertising layouts, using a single-factor experimental design with eight ways of matching logo and commodity picture elements as the independent variable, while advertisement size, background color, and content complexity are held constant. The results show that when the picture element is fixed at the center of the advertisement, the logo element should be placed in a middle position parallel to the picture element (middle left or upper left); placing the logo element below the picture element, especially at the lower left, should be avoided. Designers can determine the best online advertising layout based on the visual search effect of the logo element and the actual marketing purpose. Design/methodology/approach: A single-factor, repeated-measures experimental design was used. Based on criteria covering different commodity types and the eight matching methods, 20 advertisements were randomly selected from 50 original advertisements as experimental stimuli (as shown in Section 2.3), and each was processed with the eight matching methods to obtain 20 × 8 = 160 experimental stimuli. To minimize memory effects from the repeated appearance of the same product, all pictures were presented in random order. In addition, to prevent participants from guessing the purpose of the experiment, 80 filler online advertisements were added, so each participant viewed 160 + 80 = 240 stimuli. Findings: First, when the picture element of an advertisement is fixed, advertisers should try to place the logo element in the middle-right position parallel to the picture element, because the commodity logo in this arrangement attracts consumers' attention for the longest average time and the greatest total duration. Danaher and Mullarkey (2003) pointed out that as the time consumers spend fixating an online advertisement increases, memory for the advertisement also improves. Second, the logo element can be placed to the middle left or upper left of the picture element. In contrast, advertisers should avoid placing the logo element below the picture element (lower left or lower right), especially at the lower left, because the logo attracts the least attention in this area, receiving less than a quarter of consumers' total attention time. This conclusion is consistent with related research findings. |
Olivier J. Hénaff; Robbe L. T. Goris; Eero P. Simoncelli Perceptual straightening of natural videos Journal Article In: Nature Neuroscience, vol. 22, pp. 984–991, 2019. @article{Henaff2019, Many behaviors rely on predictions derived from recent visual input, but the temporal evolution of those inputs is generally complex and difficult to extrapolate. We propose that the visual system transforms these inputs to follow straighter temporal trajectories. To test this ‘temporal straightening' hypothesis, we develop a methodology for estimating the curvature of an internal trajectory from human perceptual judgments. We use this to test three distinct predictions: natural sequences that are highly curved in the space of pixel intensities should be substantially straighter perceptually; in contrast, artificial sequences that are straight in the intensity domain should be more curved perceptually; finally, naturalistic sequences that are straight in the intensity domain should be relatively less curved. Perceptual data validate all three predictions, as do population models of the early visual system, providing evidence that the visual system specifically straightens natural videos, offering a solution for tasks that rely on prediction. |
R. Austin Hicklin; Bradford T. Ulery; Thomas A. Busey; Maria Antonia Roberts; Jo Ann Buscaglia Gaze behavior and cognitive states during fingerprint target group localization Journal Article In: Cognitive Research: Principles and Implications, vol. 4, no. 12, pp. 1–20, 2019. @article{Hicklin2019, Background: The comparison of fingerprints by expert latent print examiners generally involves repeating a process in which the examiner selects a small area of distinctive features in one print (a target group), and searches for it in the other print. In order to isolate this key element of fingerprint comparison, we use eye-tracking data to describe the behavior of latent fingerprint examiners on a narrowly defined “find the target” task. Participants were shown a fingerprint image with a target group indicated and asked to find the corresponding area of ridge detail in a second impression of the same finger and state when they found the target location. Target groups were presented on latent and plain exemplar fingerprint images, and as small areas cropped from the plain exemplars, to assess how image quality and the lack of surrounding visual context affected task performance and eye behavior. One hundred and seventeen participants completed a total of 675 trials. Results: The presence or absence of context notably affected the areas viewed and time spent in comparison; differences between latent and plain exemplar tasks were much less significant. In virtually all trials, examiners repeatedly looked back and forth between the images, suggesting constraints on the capacity of visual working memory. On most trials where context was provided, examiners looked immediately at the corresponding location: with context, median time to find the corresponding location was less than 0.3 s (second fixation); however, without context, median time was 1.9 s (five fixations). A few trials resulted in errors in which the examiner did not find the correct target location. Basic gaze measures of overt behaviors, such as speed, areas visited, and back-and-forth behavior, were used in conjunction with the known target area to infer the underlying cognitive state of the examiner. Conclusions: Visual context has a significant effect on the eye behavior of latent print examiners. Localization errors suggest how errors may occur in real comparisons: examiners sometimes compare an incorrect but similar target group and do not continue to search for a better candidate target group. The analytic methods and predictive models developed here can be used to describe the more complex behavior involved in actual fingerprint comparisons. |
Tammy Sue-Wynne Liu; Yeu-Ting Liu; Chun-Yin Doris Chen Meaningfulness is in the eye of the reader: Eye-tracking insights of L2 learners reading e-books and their pedagogical implications Journal Article In: Interactive Learning Environments, vol. 27, no. 2, pp. 181–199, 2019. @article{Liu2019a, This study employed eye-tracking technology to probe the online reading behavior of 52 advanced L2 English learners. These participants read an e-book containing six types of multimedia supports for either vocabulary acquisition or comprehension. The six supports consisted of three micro-level supports that provided information about specific words (glosses, vocabulary focus, and footnotes), and three macro-level supports that provided global or background information (illustrations, infographics, and photos). The participants read the e-book under two presentation modes: (1) simultaneous mode: where digital input and supports were presented at the same time; and (2) sequential mode: where the digital content and supports were incrementally presented. Analyses showed that when reading for vocabulary acquisition, vocabulary focus and glosses were significantly fixated on, and when reading for comprehension, illustrations were more intensely fixated on. Additionally, when the digital content was incrementally presented, vocabulary focus received significantly higher total fixation duration. This suggests that reading under the sequential mode has the potential to guide L2 learners' focal attention toward micro-level supports. In contrast, under the simultaneous presentation mode, L2 learners seemed to divide their focal attention among both micro-level and macro-level supports. Pedagogical implications are discussed based on the findings of this study. |
Sinè McDougall; Judy Edworthy; Deili Sinimeri; Jamie Goodliffe; Daniel Bradley; James Foster Searching for meaning in sound: Learning and interpreting alarm signals in visual environments Journal Article In: Journal of Experimental Psychology: Applied, vol. 26, no. 1, pp. 1–19, 2019. @article{McDougall2019, Given the ease with which the diverse array of environmental sounds can be understood, the difficulties encountered in using auditory alarm signals on medical devices are surprising. In two experiments, with nonclinical participants, alarm sets which relied on similarities to environmental sounds (concrete alarms, such as a heartbeat sound to indicate "check cardiovascular function") were compared to alarms using abstract tones to represent functions on medical devices. The extent to which alarms were acoustically diverse was also examined: alarm sets were either acoustically different or acoustically similar within each set. In Experiment 1, concrete alarm sets, which were also acoustically different, were learned more quickly than abstract alarms which were acoustically similar. Importantly, the abstract similar alarms were devised using guidelines from the current global medical device standard (International Electrotechnical Commission 60601-1-8, 2012). Experiment 2 replicated these findings. In addition, eye tracking data showed that participants were most likely to fixate first on the correct medical devices in an operating theater scene when presented with concrete acoustically different alarms using real world sounds. A new set of alarms which are related to environmental sounds and differ acoustically have therefore been proposed as a replacement for the current medical device standard. |
Wenxiang Chen; Xiangling Zhuang; Zixin Cui; Guojie Ma Drivers' recognition of pedestrian road-crossing intentions: Performance and process Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 64, pp. 552–564, 2019. @article{Chen2019a, Drivers' recognition of pedestrian road crossing intentions is an essential process during driver-pedestrian interaction. However, compared with the rich observational findings on interaction behavior, little is known about drivers' performance in recognizing pedestrian intentions, or about the underlying cognitive processes. To fill this gap, this study evaluated drivers' performance in making judgments of pedestrians' road crossing intentions in recorded natural driving scenes. Experienced and novice drivers identified pedestrians as "will cross" or "will not cross" at various times-to-arrival while their eye movements were recorded. The results showed that experienced drivers were more conservative in discriminating whether a pedestrian would cross or not (they preferred a "pedestrian will cross" judgment) and engaged in a higher level of information processing of pedestrian intention. Regardless of driving experience, drivers had a higher detection rate, earlier detection, a higher level of information processing, and quicker responses for pedestrians who intended to cross than for those who did not intend to cross. Responses were also quicker when the time-to-arrival was smaller. Analysis of eye movements showed an attentional bias toward the upper body of pedestrians when recognizing intention. These findings offer an initial understanding of the intention recognition process during driver-pedestrian interaction and inform directions for autonomous driving research on interacting with pedestrians. |
Rajib Chowdhury; A. F. M. Saifuddin Saif Efficient method to improve human brain sensor activities using proposed neuroheadset device embedded with sensors: A comprehensive study Journal Article In: International Journal of Software Engineering and Computer Systems, vol. 53, no. 1, pp. 52–56, 2019. @article{Chowdhury2019, The main purpose of this research is to review prior work on human brain sensor activities and to argue for an efficient method to improve them. Human brain activity is mainly measured from brain signals acquired by sensor electrodes positioned over several parts of the cortex. Although previous studies have investigated human brain activity from various perspectives, improving the brain's sensor activities remains an unsolved problem, and there is a pressing need to do so by using the improved brain signal externally. This research presents a comprehensive critical analysis of prior work on human brain activity to make the case for an efficient method integrated with the proposed sensor-embedded neuroheadset device. It reviews previous methods, existing frameworks, and existing results, and discusses how to establish an efficient method for acquiring the human brain signal, improving the acquired signal, and developing the brain's sensor activities using that improved signal. The critical review is expected to help constitute an efficient method for improving performance in maneuverability, visualization, subliminal activities, and other aspects of human brain activity. |
Freya Crosby; Frouke Hermens Does it look safe? An eye tracking study into the visual aspects of fear of crime Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 3, pp. 599–615, 2019. @article{Crosby2019, Studies of fear of crime often focus on demographic and social factors, but these can be difficult to change. Studies of visual aspects have suggested that features reflecting incivilities, such as litter, graffiti, and vandalism increase fear of crime, but methods often rely on participants actively mentioning such aspects, and more subtle, less conscious aspects may be overlooked. To address these concerns, this study examined people's eye movements while they judged scenes for safety. In total, 40 current and former university students were asked to rate images of day-time and night-time scenes of Lincoln, UK (where they studied) and Egham, UK (unfamiliar location) for safety, maintenance, and familiarity while their eye movements were recorded. Another 25 observers not from Lincoln or Egham rated the same images in an Internet survey. Ratings showed a strong association between safety and maintenance and lower safety ratings for night-time scenes for both groups, in agreement with earlier findings. Eye movements of the Lincoln participants showed increased dwell times on buildings, houses, and vehicles during safety judgements and increased dwell times on streets, pavements, and markers of incivilities for maintenance. Results confirm that maintenance plays an important role in perceptions of safety, but eye movements suggest that observers also look for indicators of current or recent presence of people. |
Gemma Fitzsimmons; Mark J. Weal; Denis Drieghe The impact of hyperlinks on reading text Journal Article In: PLoS ONE, vol. 14, no. 2, pp. e0210900, 2019. @article{Fitzsimmons2019, There has been debate about whether blue hyperlinks on the Web cause disruption to reading. A series of eye tracking experiments were conducted to explore if coloured words in black text had any impact on reading behaviour outside and inside a Web environment. Experiment 1 and 2 explored the saliency of coloured words embedded in single sentences and the impact on reading behaviour. In Experiment 3, the effects of coloured words/hyperlinks in passages of text in a Web-like environment was explored. Experiment 1 and 2 showed that multiple coloured words in text had no negative impact on reading behaviour. However, if the sentence featured only a single coloured word, a reduction in skipping rates was observed. This suggests that the visual saliency associated with a single coloured word may signal to the reader that the word is important, whereas this signalling is reduced when multiple words are coloured. In Experiment 3, when reading passages of text containing hyperlinks in a Web environment, participants showed a tendency to re-read sentences that contained hyperlinked, uncommon words compared to hyperlinked, common words. Hyperlinks highlight important information and suggest additional content, which for more difficult concepts, invites rereading of the preceding text. |
Victoria Foglia; Annie Roy-Charland; Dominique Leroux; Suzanne Lemieux; Nicole Yantzi; Tina Skjonsby-McKinnon; Sylvain Fiset; Dominic Guitard When pictures take away from the message: An examination of young adults' attention to texting and driving advertisements Journal Article In: Canadian Journal of Experimental Psychology, pp. 1–14, 2019. @article{Foglia2019, This study examined eye-movement patterns of young adults, while they were viewing texting and driving prevention advertisements, to determine which format attracts the most attention. As young adults are the most at risk for this public health issue, understanding which format is most successful at maintaining young adults' attention is especially important. Participants viewed nondriving, general distracted driving, and texting and driving advertisements. Each of these advertisement types was edited to contain text-only, image-only, and text and image content. Participants were told that they had unlimited time to view each advertisement, while their eye-movements were recorded throughout. Participants spent more time viewing the texting and driving advertisements than other types when they comprised text only. When examining differences in attention to the text and image portions of the advertisements, participants spent more time viewing the images than the text for the nondriving and general distracted driving advertisements. However, for texting and driving-specific advertisements the text-only format resulted in the most attention toward the advertisements. These results indicate that in attracting young adults' attention to texting and driving public health advertisements, the most successful format would be text-based. |
Susan M. Gass; Paula Winke; Daniel R. Isbell; Jieun Ahn How captions help people learn languages: A working-memory, eye-tracking study Journal Article In: Language Learning and Technology, vol. 23, no. 2, pp. 84–104, 2019. @article{Gass2019, Captions provide a useful aid to language learners for comprehending videos and learning new vocabulary, aligning with theories of multimedia learning. Multimedia learning predicts that a learner's working memory (WM) influences the usefulness of captions. In this study, we present two eye-tracking experiments investigating the role of WM in captioned video viewing behavior and comprehension. In Experiment 1, Spanish-as-a-foreign-language learners differed in caption use according to their level of comprehension and to a lesser extent, their WM capacities. WM did not impact comprehension. In Experiment 2, English-as-a-second-language learners differed in comprehension according to their WM capacities. Those with high comprehension and high WM used captions less on a second viewing. These findings highlight the effects of potential individual differences and have implications for the integration of multimedia with captions in instructed language learning. We discuss how captions may help neutralize some of working memory's limiting effects on learning. |
Hannah Harvey; Stephen J. Anderson; Robin Walker Increased word spacing improves performance for reading scrolling text with central vision loss Journal Article In: Optometry and Vision Science, vol. 96, no. 8, pp. 609–616, 2019. @article{Harvey2019, SIGNIFICANCE: Scrolling text can be an effective reading aid for those with central vision loss. Our results suggest that increased interword spacing with scrolling text may further improve the reading experience of this population. This conclusion may be of particular interest to low-vision aid developers and visual rehabilitation practitioners. PURPOSE: The dynamic, horizontally scrolling text format has been shown to improve reading performance in individuals with central visual loss. Here, we sought to determine whether reading performance with scrolling text can be further improved by modulating interword spacing to reduce the effects of visual crowding, a factor known to impact negatively on reading with peripheral vision. METHODS: The effects of interword spacing on reading performance (accuracy, memory recall, and speed) were assessed for eccentrically viewed single sentences of scrolling text. Separate experiments were used to determine whether performance measures were affected by any confound between interword spacing and text presentation rate in words per minute. Normally sighted participants were included, with a central vision loss implemented using a gaze-contingent scotoma of 8° diameter. In both experiments, participants read sentences that were presented with an interword spacing of one, two, or three characters. RESULTS: Reading accuracy and memory recall were significantly enhanced with triple-character interword spacing (both measures, P ≤.01). These basic findings were independent of the text presentation rate (in words per minute). CONCLUSIONS: We attribute the improvements in reading performance with increased interword spacing to a reduction in the deleterious effects of visual crowding. We conclude that increased interword spacing may enhance reading experience and ability when using horizontally scrolling text with a central vision loss. |
Sogand Hasanzadeh; Bac Dao; Behzad Esmaeili; Michael D. Dodd In: Journal of Construction Engineering and Management, vol. 145, no. 9, pp. 1–14, 2019. @article{Hasanzadeh2019, Workers' attentional failures or inattention toward detecting a hazard can lead to inappropriate decisions and unsafe behaviors. Previous research has shown that individual characteristics such as past injury exposure contribute greatly to skill-based (e.g., attention failure) and perception-based (e.g., failure to identify and misperception) errors and subsequent accident involvement. However, little research has empirically examined how a worker's personality affects his or her attention and hazard identification. This study addresses this knowledge gap by exploring the impacts of the personality dimensions on the selective attention of workers exposed to fall hazards. To this end, construction workers were recruited to engage in a laboratory eye-tracking experiment that consisted of 115 potential and active fall scenarios in 35 construction images captured from actual projects within the United States. Construction workers' personalities were assessed through the self-completion of the Big Five personality questionnaire, and their visual attention was monitored continuously using a wearable eye-tracking apparatus. The results of the study show that workers' personality dimensions (specifically, extraversion, conscientiousness, and openness to experience) significantly relate to and impact attentional allocations and the search strategies of workers exposed to fall hazards. A more detailed investigation of this connection showed that individuals who are introverted, more conscientious, or more open to experience are less prone to injury and return their attention more frequently to hazardous areas. This study is the first attempt to illustrate how examining relationships among personality, attention, and hazard identification can reveal opportunities for the early detection of at-risk workers who are more likely to be involved in accidents. A better understanding of these connections provides valuable insight into both practice and theory regarding the transformation of current training and educational practices by providing appropriate intervention strategies for personalized safety guidelines and effective training materials to transform personality-driven at-risk workers into safer workers. |
Marti Hearst; Emily Pedersen; Lekha Priya Patil; Elsie Lee; Paul Laskowski; Steven L. Franconeri An evaluation of semantically grouped word cloud designs Journal Article In: IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2019. @article{Hearst2019, Word clouds continue to be a popular tool for summarizing textual information, despite their well-documented deficiencies for analytic tasks. Much of their popularity rests on their playful visual appeal. In this paper, we present the results of a series of controlled experiments that show that layouts in which words are arranged into semantically and visually distinct zones are more effective for understanding the underlying topics than standard word cloud layouts. White space separators and/or spatially grouped color coding led to significantly stronger understanding of the underlying topics compared to a standard Wordle layout, while simultaneously scoring higher on measures of aesthetic appeal. This work is an advance on prior research on semantic layouts for word clouds because that prior work has either not ensured that the different semantic groupings are visually or semantically distinct, or has not performed usability studies. An additional contribution of this work is the development of a dataset for a semantic category identification task that can be used for replication of these results or future evaluations of word cloud designs. |
2018 |
Aurélie Calabrèse; Carlos Aguilar; Géraldine Faure; Frédéric Matonti; Louis Hoffart; Eric Castet A vision enhancement system to improve face recognition with central vision loss Journal Article In: Optometry and Vision Science, vol. 95, no. 9, pp. 738–746, 2018. @article{Calabrese2018, SIGNIFICANCE: The overall goal of this work is to validate a low vision aid system that uses gaze as a pointing tool and provides smart magnification. We conclude that smart visual enhancement techniques as well as gaze contingency should improve the efficiency of assistive technology for the visually impaired. PURPOSE: A low vision aid, using gaze-contingent visual enhancement and primarily intended to help reading with central vision loss, was recently designed and tested with simulated scotoma. Here, we present a validation of this system for face recognition in age-related macular degeneration patients. METHODS: Twelve individuals with binocular central vision loss were recruited and tested on a face identification-matching task. Gaze position was measured in real time, thanks to an eye tracker. In the visual enhancement condition, at any time during the screen exploration, the fixated face was segregated from background and considered as a region of interest that could be magnified into a region of augmented vision by the participant, if desired. In the natural exploration condition, participants also performed the matching task but without the visual aid. Response time and accuracy were analyzed with mixed-effects models to (1) compare the performance with and without visual aid and (2) estimate the usability of the system. RESULTS: On average, the percentage of correct response for the natural exploration condition was 41%. This value was significantly increased to 63% with visual enhancement (95% confidence interval, 45 to 78%). For the large majority of our participants (83%), this improvement was accompanied by a moderate increase in response time, suggesting a real functional benefit for these individuals. CONCLUSIONS: Without visual enhancement, participants with age-related macular degeneration performed poorly, confirming their struggle with face recognition and the need to use efficient visual aids. Our system significantly improved face identification accuracy by 55%, proving to be helpful under laboratory conditions. |
Tao Deng; Hongmei Yan; Yong-Jie Li Learning to boost bottom-up fixation prediction in driving environments via random forest Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 9, pp. 3059–3067, 2018. @article{Deng2018, Saliency detection, an important step in many computer vision applications, can, for example, predict where drivers look in a vehicular traffic environment. While many bottom-up and top-down saliency detection models have been proposed for fixation prediction in outdoor scenes, no specific attempt has been made for traffic images. Here, we propose a learning saliency detection model based on a random forest (RF) to predict drivers' fixation positions in a driving environment. First, we extract low-level (color, intensity, orientation, etc.) and high-level (e.g., the vanishing point and center bias) features and then predict the fixation points via RF-based learning. Finally, we evaluate the performance of our saliency prediction model qualitatively and quantitatively. We use quantitative evaluation metrics that include the revised receiver operating characteristic (ROC), the area under the ROC curve, and the normalized scan-path saliency score. The experimental results on real traffic images indicate that our model can predict a driver's fixation area while driving more accurately than state-of-the-art bottom-up saliency models. |
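One of the evaluation metrics named above, the normalized scan-path saliency (NSS) score, is straightforward to compute. Here is a sketch assuming NumPy arrays for the predicted saliency map and a binary human-fixation map; the random data is used purely for illustration and is not tied to the paper's implementation.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized scan-path saliency: mean of the z-scored saliency values
    sampled at the pixels that human observers actually fixated."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return float(z[fixation_map.astype(bool)].mean())

# Illustration with random data: a dummy prediction and three fixated pixels
pred = np.random.rand(480, 640)
fixations = np.zeros((480, 640))
fixations[240, 320] = fixations[100, 200] = fixations[400, 500] = 1
print(f"NSS = {nss(pred, fixations):.3f}")
```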
Ashleigh J. Filtness; Vanessa Beanland Sleep loss and change detection in driving scenes Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 57, pp. 10–22, 2018. @article{Filtness2018, Driver sleepiness is a significant road safety problem. Sleep-related crashes occur on both urban and rural roads, yet to date driver-sleepiness research has focused on understanding impairment in rural and motorway driving. The ability to detect changes is an attention and awareness skill vital for everyday safe driving. Previous research has demonstrated that person states, such as age or motivation, influence susceptibility to change blindness (i.e., failure or delay in detecting changes). The current work considers whether sleepiness increases the likelihood of change blindness within urban and rural driving contexts. Twenty fully-licenced drivers completed a change detection ‘flicker' task twice in a counterbalanced design: once following a normal night of sleep (7–8 h) and once following sleep restriction (5 h). Change detection accuracy and response time were recorded while eye movements were continuously tracked. Accuracy was not significantly affected by sleep loss; however, following sleep loss there was some evidence of slowed change detection responses to urban images, but faster responses for rural images. Visual scanning across the images remained consistent between sleep conditions, resulting in no difference in the probability of fixating on the change target. Overall, the results suggest that sleep loss has minimal impact on change detection accuracy and visual scanning for changes in driving scenes. However, a subtle difference in response time to change detection between urban and rural images indicates that change blindness may have implications for sleep-related crashes in more visually complex urban environments. Further research is needed to confirm this finding. |
Lisena Hasanaj; Sujata P. Thawani; Nikki Webb; Julia D. Drattell; Liliana Serrano; Rachel C. Nolan; Jenelle Raynowska; Todd E. Hudson; John-Ross Rizzo; Weiwei Dai; Bryan McComb; Judith D. Goldberg; Janet C. Rucker; Steven L. Galetta; Laura J. Balcer Rapid number naming and quantitative eye movements may reflect contact sport exposure in a collegiate ice hockey cohort Journal Article In: Journal of Neuro-Ophthalmology, vol. 38, no. 1, pp. 24–29, 2018. @article{Hasanaj2018, Objective: We determined the relation of rapid number naming time scores on the King-Devick (K-D) test to video-oculographic eye movement performance during pre-season baseline assessments in a collegiate ice hockey team cohort. Background: The K-D test is a reliable visual performance measure that is a sensitive sideline indicator of concussion when time scores worsen (lengthen) from pre-season baseline. Methods: Athletes from a collegiate ice hockey team received pre-season baseline testing as part of an ongoing study of rapid sideline/rinkside performance measures for concussion. These included the K-D test (spiral-bound cards and tablet computer versions). Participants also performed a laboratory-based version of the K-D test with simultaneous infrared-based video-oculographic recordings using an EyeLink 1000+. This allowed measurement of temporal and spatial characteristics of eye movements, including saccade velocity, duration and inter-saccadic intervals. Results: Among 13 male athletes, aged 18 to 23 years (mean 20.5+/-1.6 years), prolongation of the inter-saccadic interval (ISI, a combined measure of saccade latency and fixation duration) was the eye movement measure most associated with slower baseline K-D scores (mean 38.2+/-6.2 seconds |
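The inter-saccadic interval (ISI) described above is typically taken as the time from the end of one saccade to the onset of the next. A small sketch follows, assuming saccade onset/offset times (in ms) have already been parsed from the eye-tracker recording; the event times are made up for illustration.

```python
import numpy as np

def inter_saccadic_intervals(saccade_onsets_ms, saccade_offsets_ms):
    """ISI: time from the end of one saccade to the start of the next, which
    jointly reflects fixation duration and saccade latency."""
    onsets = np.asarray(saccade_onsets_ms, dtype=float)
    offsets = np.asarray(saccade_offsets_ms, dtype=float)
    return onsets[1:] - offsets[:-1]

# Hypothetical saccade event times (ms)
onsets = [1000, 1320, 1650, 2010]
offsets = [1045, 1365, 1700, 2055]
print(inter_saccadic_intervals(onsets, offsets))  # [275. 285. 310.]
```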
Katja I. Häuser; Vera Demberg; Jutta Kray In: Psychology and Aging, vol. 33, no. 8, pp. 1168–1180, 2018. @article{Haeuser2018, Even though older adults are known to have difficulty with language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language under dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging. |
Claire Louise Heard; Tim Rakow; Tom Foulsham Understanding the effect of information presentation order and orientation on information search and treatment evaluation Journal Article In: Medical Decision Making, vol. 38, no. 6, pp. 646–657, 2018. @article{Heard2018, Background. Past research finds that treatment evaluations are more negative when risks are presented after benefits. This study investigates this order effect: manipulating tabular orientation and order of risk–benefit information, and examining information search order and gaze duration via eye-tracking. Design. 108 (Study 1) and 44 (Study 2) participants viewed information about treatment risks and benefits, in either a horizontal (left-right) or vertical (above-below) orientation, with the benefits or risks presented first (left side or at top). For 4 scenarios, participants answered 6 treatment evaluation questions (1–7 scales) that were combined into overall evaluation scores. In addition, Study 2 collected eye-tracking data during the benefit–risk presentation. Results. Participants tended to read one set of information (i.e., all risks or all benefits) before transitioning to the other. Analysis of order of fixations showed this tendency was stronger in the vertical (standardized mean rank difference further from 0 |
James H. Smith-Spark; Hillary B. Katz; Thomas D. W. Wilcockson; Alexander P. Marchant Optimal approaches to the quality control checking of product labels Journal Article In: International Journal of Industrial Ergonomics, vol. 68, pp. 118–124, 2018. @article{SmithSpark2018, Quality control checkers at fresh produce packaging facilities occasionally fail to detect incorrect information presented on labels. Despite being infrequent, such errors have significant financial and environmental repercussions. To understand why label-checking errors occur, observations and interviews were undertaken at a large packaging facility and followed up with a laboratory-based label-checking task. The observations highlighted the dynamic, complex environment in which label-checking took place, whilst the interviews revealed that operatives had not received formal training in label-checking. On the laboratory-based task, overall error detection accuracy was high but considerable individual differences were found between professional label-checkers. Response times were shorter when participants failed to detect label errors, suggesting incomplete checking or ineffective checking strategies. Furthermore, eye movement recordings indicated that checkers who adopted a systematic approach to checking were more successful in detecting errors. The extent to which a label checker adopted a systematic approach was not found to correlate with the number of years of experience that they had accrued in label-checking. To minimize the chances of label errors going undetected, explicit instruction and training, personnel selection and/or the use of software to guide performance towards a more systematic approach is recommended. |
Jiahui Wang; Pavlo Antonenko; Mehmet Celepkolu; Yerika Jimenez; Ethan Fieldman; Ashley Fieldman Exploring relationships between eye tracking and traditional usability testing data Journal Article In: International Journal of Human-Computer Interaction, pp. 1–12, 2018. @article{Wang2018d, This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks. |
Jiaxin Wu; Sheng-hua Zhong; Zheng Ma; Stephen J. Heinen; Jianmin Jiang Foveated convolutional neural networks for video summarization Journal Article In: Multimedia Tools and Applications, vol. 77, no. 22, pp. 29245–29267, 2018. @article{Wu2018b, With the proliferation of video data, video summarization is an ideal tool for users to browse video content rapidly. In this paper, we propose novel foveated convolutional neural networks for dynamic video summarization. We are the first to integrate gaze information into a deep learning network for video summarization. Foveated images are constructed based on subjects' eye movements to represent the spatial information of the input video. Multi-frame motion vectors are stacked across several adjacent frames to convey the motion clues. To evaluate the proposed method, experiments are conducted on two video summarization benchmark datasets. The experimental results validate the effectiveness of the gaze information for video summarization despite the fact that the eye movements are collected from different subjects from those who generated the summaries. Empirical validations also demonstrate that our proposed foveated convolutional neural networks for video summarization can achieve state-of-the-art performances on these benchmark datasets. |
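The foveation step described in the Wu et al. (2018) abstract above lends itself to a short sketch. The following is a minimal illustration (not the authors' network or code) of constructing a foveated frame from a single gaze sample; the fovea radius and blur kernel size are arbitrary assumptions.

```python
# Minimal sketch (not the authors' implementation) of building a foveated frame:
# the region around the gaze point stays sharp, the periphery is blurred.
import numpy as np
import cv2

def foveate(frame_bgr, gaze_xy, fovea_radius_px=80, blur_ksize=31):
    """Blend a sharp frame and a blurred copy with a gaze-centred soft mask."""
    h, w = frame_bgr.shape[:2]
    blurred = cv2.GaussianBlur(frame_bgr, (blur_ksize, blur_ksize), 0)

    # Soft circular mask: 1.0 at the gaze point, falling to 0.0 in the periphery.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2)
    mask = np.clip(1.0 - (dist - fovea_radius_px) / fovea_radius_px, 0.0, 1.0)
    mask = mask[..., None]  # broadcast over colour channels

    return (mask * frame_bgr + (1.0 - mask) * blurred).astype(np.uint8)

# Example: foveate a synthetic frame at an assumed gaze position (320, 240).
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
out = foveate(frame, gaze_xy=(320, 240))
```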
Lukáš Hejtmánek; Ivana Oravcová; Jiří Motýl; Jiří Horáček; Iveta Fajnerová Spatial knowledge impairment after GPS guided navigation: Eye-tracking study in a virtual town Journal Article In: International Journal of Human-Computer Studies, vol. 116, pp. 15–24, 2018. @article{Hejtmanek2018, There is a vibrant debate about the consequences of mobile devices on our cognitive capabilities. Use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual and real world experiments. Our goal was to investigate how the attention people pay to the GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment and during the experiment, we measured participants' eye movements. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spend with the GPS-like map, the less accurate spatial knowledge they manifest and the longer paths they travel without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants' subjective evaluation of their own navigation skills, with less confident navigators using GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person's spatial learning, its general use is modulated by a perception of one's own navigation abilities. |
Archonteia Kyroudi; Kristoffer Petersson; Mahmut Ozsahin; Jean Bourhis; François Bochud; Raphaël Moeckli Analysis of the treatment plan evaluation process in radiotherapy through eye tracking Journal Article In: Zeitschrift fur Medizinische Physik, vol. 28, no. 4, pp. 318–324, 2018. @article{Kyroudi2018, Background and purpose: Treatment plan evaluation is a clinical decision-making problem that involves visual search and analysis in a contextually rich environment, including delineated structures and isodose lines superposed on CT data. It is a two-step process that includes visual analysis and clinical reasoning. In this work, we used eye tracking methods to gain more knowledge about the treatment plan evaluation process in radiation therapy. Materials and methods: Dose distributions on a single transverse slice of ten prostate cancer treatment plans were presented to eight decision makers. Their eye movements and fixations were recorded with an EyeLink1000 remote eye-tracker. Total evaluation time, dwell time, number and duration of fixations on pre-segmented areas of interest were measured. Results: The main structures receiving more and longer fixations (PTV, rectum, bladder) correspond to the main trade-offs evaluated in a typical prostate plan. Radiation oncologists made more fixations on the main structures compared to the medical physicists. Radiation oncologists fixated longer on the rectum when visited for the first time, while medical physicists fixated longer on the bladder. Conclusion: Our results quantify differences in the visual evaluation patterns between radiation oncologists and medical physicists, which indicate differences in their decision making strategies. |
Allison M. Londerée; Megan E. Roberts; Mary E. Wewers; Ellen Peters; Amy K. Ferketich; Dylan D. Wagner Adolescent attentional bias toward real-world flavored e-cigarette marketing Journal Article In: Tobacco Regulatory Science, vol. 4, no. 6, pp. 57–65, 2018. @article{Londeree2018, Objectives: E-cigarettes are now the most commonly-used tobacco product among adolescents; yet, little work has examined how the appealing food and flavor cues used in their marketing might attract adolescents' attention, thereby increasing willingness to try these products. In the present study, we tested whether advertisements for fruit/sweet/savory-flavored (“flavored”) e-cigarettes attracted adolescent attention in real-world scenes more than tobacco flavored (“unflavored”) e-cigarettes. Additionally, we examined the relationship between adolescent attentional bias and willingness to try flavored e-cigarettes. Methods: Participants were 46 adolescents (age range: 16-18 years). All participants took part in an eye-tracking paradigm that examined attentional bias to flavored and unflavored e-cigarette advertisements embedded in pictures of real-world storefront scenes. Afterwards, participants' willingness to try flavored and unflavored e-cigarettes was assessed. Results: In support of our primary hypothesis, adolescents looked longer and fixated more frequently on flavored (vs unflavored) e-cigarette advertisements. Moreover, this attentional bias towards flavored e-cigarette advertisements predicted a greater willingness to try flavored vs unflavored e-cigarettes. Conclusions: These findings suggest that flavored e-cigarette marketing attracts the attention of adolescents, increases their willingness to try flavored e-cigarette products, and could, therefore, put them at greater risk for tobacco initiation. |
Daniel S. McGrath; Amadeus Meitner; Christopher R. Sears The specificity of attentional biases by type of gambling: An eye-tracking study Journal Article In: PLoS ONE, vol. 13, no. 1, pp. e0190614, 2018. @article{McGrath2018, A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers. |
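A minimal sketch of the kind of area-of-interest (AOI) fixation summary described in the McGrath et al. (2018) entry above, in which four image types are shown simultaneously and preferential attention is quantified. The toy fixation data, the assumed quadrant layout, and the 1024x768 screen size are illustrative assumptions, not the authors' analysis scripts.

```python
# Minimal sketch (toy data, assumed layout) of per-AOI dwell-time summaries
# when four images (poker, VLTs/slots, bingo, board games) fill the quadrants.
import pandas as pd

# Hypothetical fixation report: one row per fixation on a 1024x768 display.
fixations = pd.DataFrame({
    "x":   [200, 800, 250, 700, 300],
    "y":   [200, 200, 600, 600, 250],
    "dur": [310, 180, 420, 260, 200],   # fixation duration in ms
})

def quadrant(row, mid_x=512, mid_y=384):
    """Assign a fixation to the image occupying that screen quadrant."""
    if row.x < mid_x:
        return "poker" if row.y < mid_y else "bingo"
    return "vlt_slots" if row.y < mid_y else "board_games"

fixations["aoi"] = fixations.apply(quadrant, axis=1)

# Total dwell time and fixation count per area of interest.
summary = fixations.groupby("aoi")["dur"].agg(dwell_ms="sum", n_fixations="count")
print(summary)
```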
Bettina Olk; Alina Dinu; David J. Zielinski; Regis Kopper Measuring visual search and distraction in immersive virtual reality Journal Article In: Royal Society Open Science, vol. 5, pp. 1–15, 2018. @article{Olk2018, An important issue of psychological research is how experiments conducted in the laboratory or theories based on such experiments relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to accomplish a transfer of traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity we explored attentional effects with daily objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target–distractor discriminability was varied and the displays were accompanied by a peripheral flanker that was congruent or incongruent to the target. Reaction time was slower when target–distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we provide an outline for future studies that should use features of VR that are not available in traditional laboratory research. |
David Randall; Helen Griffiths; Gemma Arblaster; Anne Bjerre; John Fenner Simulation of oscillopsia in virtual reality Journal Article In: British and Irish Orthoptic Journal, vol. 14, no. 1, pp. 1–5, 2018. @article{Randall2018, PURPOSE: Nystagmus is characterised by involuntary eye movement. A proportion of those with nystagmus experience the world constantly in motion as their eyes move: a symptom known as oscillopsia. Individuals with oscillopsia can be incapacitated and often feel neglected due to limited treatment options. Effective communication of the condition is challenging and no tools to aid communication exist. This paper describes a virtual reality (VR) application that recreates the effects of oscillopsia, enabling others to appreciate the condition. METHODS: Eye tracking data was incorporated into a VR oscillopsia simulator and released as a smartphone app - "Nystagmus Oscillopsia Sim VR". When a smartphone is used in conjunction with a Google Cardboard headset, it presents an erratic image consistent with oscillopsia. The oscillopsia simulation was appraised by six participants for its representativeness. These individuals have nystagmus and had previously experienced oscillopsia but were not currently symptomatic; they were therefore uniquely placed to judge the app. The participants filled in a questionnaire to record impressions and the usefulness of the app. RESULTS: The published app has been downloaded approximately 3700 times (as of 28/02/2018) and received positive feedback from the nystagmus community. The validation study questionnaire scored the accuracy of the simulation an average of 7.8/10 while its ability to aid communication received 9.2/10. CONCLUSION: The evidence indicates that the simulation can effectively recreate the sensation of oscillopsia and facilitate effective communication of the symptoms associated with the condition. This has implications for communication of other visual conditions. |
2017 |
Nicola Binetti; Charlotte Harrison; Isabelle Mareschal; Alan Johnston Pupil response hazard rates predict perceived gaze durations Journal Article In: Scientific Reports, vol. 7, pp. 3969, 2017. @article{Binetti2017, We investigated the mechanisms for evaluating perceived gaze-shift duration. Timing relies on the accumulation of endogenous physiological signals. Here we focused on arousal, measured through pupil dilation, as a candidate timing signal. Participants timed gaze-shifts performed by face stimuli in a Standard/Probe comparison task. Pupil responses were binned according to “Longer/Shorter” judgements in trials where Standard and Probe were identical. This ensured that pupil responses reflected endogenous arousal fluctuations as opposed to differences in stimulus content. We found that pupil hazard rates predicted the classification of sub-second intervals (steeper dilation = “Longer” classifications). This shows that the accumulation of endogenous arousal signals informs gaze-shift timing judgements. We also found that participants relied exclusively on the 2nd stimulus to perform the classification, providing insights into timing strategies under conditions of maximum uncertainty. We observed no dissociation in pupil responses when timing equivalent neutral spatial displacements, indicating that a stimulus-dependent timer exploits arousal to time gaze-shifts. |
Avigael M. Aizenman; Trafton Drew; Krista A. Ehinger; Dianne Georgian-Smith; Jeremy M. Wolfe Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: An eye tracking study Journal Article In: Journal of Medical Imaging, vol. 4, no. 4, pp. 1–22, 2017. @article{Aizenman2017, As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified “driller” or “scanner” strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy. |
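The co-registration of 2-D gaze with the displayed slice described in the Aizenman et al. (2017) entry above can be sketched as a simple timestamp join. This is an illustrative assumption about the data layout (timestamps, column names, and values are invented), not the authors' pipeline.

```python
# Minimal sketch: attach the DBT slice on screen to each gaze sample, giving a
# 3-D (x, y, slice) gaze trace. Values and column names are illustrative.
import pandas as pd

# Gaze samples (1000 Hz) and scroll events logged by the presentation software.
gaze = pd.DataFrame({"t_ms": [0, 1, 2, 3, 4, 5],
                     "x": [512, 514, 515, 700, 702, 703],
                     "y": [400, 401, 402, 380, 379, 381]})
scroll = pd.DataFrame({"t_ms": [0, 3], "slice_idx": [10, 11]})

# Each gaze sample inherits the most recent slice shown at or before its timestamp.
gaze3d = pd.merge_asof(gaze.sort_values("t_ms"),
                       scroll.sort_values("t_ms"),
                       on="t_ms", direction="backward")
print(gaze3d[["t_ms", "x", "y", "slice_idx"]])
```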
Elham Azizi; Larry Allen Abel; Matthew J. Stainer The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 2, pp. 484–497, 2017. @article{Azizi2017, Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes. |
Jan W. Brascamp; Marnix Naber Eye tracking under dichoptic viewing conditions: A practical solution Journal Article In: Behavior Research Methods, vol. 49, no. 4, pp. 1303–1309, 2017. @article{Brascamp2017, In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. Although the method we propose is straightforward, affordable (on the order of US$1,000) and easy to implement, for many purposes it makes for an improvement over existing methods, which tend to require specialized equipment and often compromise on the quality of the visual stimulus and/or the eye tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories. |
Etzel Cardeña; Barbara Nordhjem; David Marcusson-Clavertz; Kenneth Holmqvist The "hypnotic state" and eye movements: Less there than meets the eye? Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0182546, 2017. @article{Cardena2017, Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks had been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis. Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state." Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors. |
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Translation methods and experience: A comparative analysis of human translation and post-editing with students and professional translators Journal Article In: Meta, vol. 62, no. 2, pp. 245–270, 2017. @article{Daems2017, While the benefits of using post-editing for technical texts have been more or less acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and how they are performed by students as well as professionals, so that pitfalls can be determined and translator training can be adapted accordingly. In this article, we aim to get a better understanding of the differences between human translation and post-editing for newspaper articles. Processes are registered by means of eye tracking and keystroke logging, which allows us to study translation speed, cognitive load, and the use of external resources. We also look at the final quality of the product as well as translators' attitude towards both methods of translation. Studying these different aspects shows that both methods and groups are more similar than anticipated. |
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Identifying the machine translation error types with the greatest impact on post-editing effort Journal Article In: Frontiers in Psychology, vol. 8, pp. 1282, 2017. @article{Daems2017a, Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools automatically predict whether it is more effortful to post-edit than to translate from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems heavily rely on automatic metrics, even though they do not accurately capture actual post-editing effort. In addition, these systems do not take translator experience into account, even though novices' translation processes are different from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with that on various process effort indicators. The translation and post-editing process of student translators and professional translators was logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected. |
Ewa Domaradzka; Maksymilian Bielecki Deadly attraction - attentional bias toward preferred cigarette brand in smokers Journal Article In: Frontiers in Psychology, vol. 8, pp. 1365, 2017. @article{Domaradzka2017, Numerous studies have shown that biases in visual attention might be evoked by affective and personally relevant stimuli, for example addiction-related objects. Despite the fact that addiction is often linked to specific products and systematic purchase behaviors, no studies focused directly on the existence of bias evoked by brands. Smokers are characterized by high levels of brand loyalty and everyday contact with cigarette packaging. Using the incentive-salience mechanism as a theoretical framework, we hypothesized that this group might exhibit a bias toward the preferred cigarette brand. In our study, a group of smokers (N = 40) performed a dot probe task while their eye movements were recorded. In every trial a pair of pictures was presented – each of them showed a single cigarette pack. The visual properties of stimuli were carefully controlled, so branding information was the key factor affecting subjects' reactions. For each participant, we compared gaze behavior related to the preferred vs. other brands. The analyses revealed no attentional bias in the early, orienting phase of the stimulus processing and strong differences in maintenance and disengagement. Participants spent more time looking at the preferred cigarettes and saccades starting at the preferred brand location had longer latencies. In sum, our data shows that attentional bias toward brands might be found in situations not involving choice or decision making. These results provide important insights into the mechanisms of formation and maintenance of attentional biases to stimuli of personal relevance and might serve as a first step toward developing new attitude measurement techniques. |
Mackenzie G. Glaholt; Grace Sim Gaze-contingent center-surround fusion of infrared images to facilitate visual search for human targets Journal Article In: Journal of Imaging Science and Technology, vol. 61, no. 1, pp. 230–235, 2017. @article{Glaholt2017, We investigated gaze-contingent fusion of infrared imagery during visual search. Eye movements were monitored while subjects searched for and identified human targets in images captured simultaneously in the short-wave (SWIR) and long-wave (LWIR) infrared bands. Based on the subject's gaze position, the search display was updated such that imagery from one sensor was continuously presented to the subject's central visual field (“center”) and another sensor was presented to the subject's non-central visual field (“surround”). Analysis of performance data indicated that, compared to the other combinations, the scheme featuring SWIR imagery in the center region and LWIR imagery in the surround region constituted an optimal combination of the SWIR and LWIR information: it inherited the superior target detection performance of LWIR imagery and the superior target identification performance of SWIR imagery. This demonstrates a novel method for efficiently combining imagery from two infrared sources as an alternative to conventional image fusion. |
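The gaze-contingent center-surround combination described in the Glaholt and Sim (2017) entry above can be sketched as a per-frame compositing step. The window radius and the synthetic imagery below are assumptions for illustration, not the study's display software.

```python
# Minimal sketch of a gaze-contingent composite frame: SWIR imagery fills a
# circular window around the current gaze position, LWIR fills the surround.
import numpy as np

def center_surround(swir, lwir, gaze_xy, radius_px=100):
    """Return a composite frame: SWIR inside the gaze window, LWIR outside."""
    h, w = swir.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    composite = lwir.copy()
    composite[inside] = swir[inside]
    return composite

# Example with synthetic 8-bit imagery; in practice this would run every frame,
# driven by the latest gaze sample from the eye tracker.
swir = np.full((480, 640), 180, dtype=np.uint8)
lwir = np.full((480, 640), 60, dtype=np.uint8)
frame = center_surround(swir, lwir, gaze_xy=(320, 240))
```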
Elise Grison; Valérie Gyselinck; Jean Marie Burkhardt; Jan M. Wiener Route planning with transportation network maps: An eye-tracking study Journal Article In: Psychological Research, vol. 81, no. 5, pp. 1020–1034, 2017. @article{Grison2017, Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate psychological processes and mechanisms involved in such a route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to the current theories of route planning. |
Jessica Hanley; David E. Warren; Natalie Glass; Daniel Tranel; Matthew Karam; Joseph Buckwalter Visual interpretation of plain radiographs in orthopaedics using eye-tracking technology Journal Article In: The Iowa Orthopaedic Journal, vol. 37, pp. 225–231, 2017. @article{Hanley2017, BACKGROUND: Despite the importance of radiographic interpretation in orthopaedics, there is not a clear understanding of the specific visual strategies used while analyzing a plain film. Eyetracking technology allows for the objective study of eye movements while performing a dynamic task, such as reading X-rays. Our study looks to elucidate objective differences in image interpretation between novice and experienced orthopaedic trainees using this novel technology. METHODS: Novice and experienced orthopaedic trainees (N=23) were asked to interpret AP pelvis films, searching for unilateral acetabular fractures while eye-movements were assessed for pattern of gaze, fixation on regions of interest, and time of fixation at regions of interest. Participants were asked to label radiographs as "fractured" or "not fractured." If "fractured", the participant was asked to determine the fracture pattern. A control condition employed Ekman faces and participants judged gender and facial emotion. Data were analyzed for variation in eye movements between participants, accuracy of responses, and response time. RESULTS: Accuracy: There was no significant difference by level of training for accurately identifying fracture images (p=0.3255). There was a significant association between higher level of training and correctly identifying non-fractured images (p=0.0155); greater training was also associated with more success in identifying the correct Judet-Letournel classification (p=0.0029). Response Time: Greater training was associated with faster response times (p=0.0009 for fracture images and 0.0012 for non-fractured images). Fixation Duration: There was no correlation of average fixation duration with experience (p=0.9632). Regions of Interest (ROIs): More experience was associated with an average of two fewer fixated ROIs (p=0.0047). Number of Fixations: Increased experience was associated with fewer fixations overall (p=0.0007). CONCLUSIONS: Experience has a significant impact on both accuracy and efficiency in interpreting plain films. Greater training is associated with a shift toward a more efficient and thorough assessment of plain radiographs. Eyetracking is a useful descriptive tool in the setting of plain film interpretation. CLINICAL RELEVANCE: We propose further assessment of eye movements in larger populations of orthopaedic surgeons, including staff orthopaedists. Describing the differences between novice and expert interpretation may provide insight into ways to accelerate the learning process in young orthopaedists. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017. @article{Hasanzadeh2017a, Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs. |
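The Hasanzadeh et al. (2017) entry above assesses group differences in eye-tracking metrics with permutation simulations. Below is a minimal sketch of one such test on invented data (a hypothetical dwell-time measure for two worker groups); it illustrates the general technique only, not the study's analysis code or results.

```python
# Minimal sketch of a two-sided permutation test comparing an eye-tracking
# metric (e.g., dwell time on hazards, in seconds) between two worker groups.
import numpy as np

rng = np.random.default_rng(0)
trained   = np.array([2.1, 2.8, 3.0, 2.5, 3.4])   # hypothetical values
untrained = np.array([1.6, 1.9, 2.2, 1.8, 2.0])   # hypothetical values

observed = trained.mean() - untrained.mean()
pooled = np.concatenate([trained, untrained])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                       # random reassignment to groups
    diff = pooled[:len(trained)].mean() - pooled[len(trained):].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed difference = {observed:.2f} s, p = {p_value:.4f}")
```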
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd Impact of construction workers' hazard identification skills on their visual attention Journal Article In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017. @article{Hasanzadeh2017, Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors--such as failure to identify a hazard--are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics--fixation count, dwell-time percentage, and run count--were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes were followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills and at-risk workers. Because discriminant function analysis is similar to regression, discriminant function (linear combinations of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye-movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time. |
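The discriminant function analysis named in the entry above combines the three fixation-related metrics into a linear classifier of hazard-identification skill. The sketch below shows the general idea with invented data and group labels; it is an illustration under those assumptions, not the study's dataset or coefficients.

```python
# Minimal sketch of a linear discriminant function over the three metrics
# named in the abstract: fixation count, dwell-time percentage, run count.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: fixation count, dwell-time %, run count (one row per worker; toy data).
X = np.array([[42, 18.0, 9], [55, 22.5, 12], [60, 25.0, 14],
              [30, 12.0, 6], [28, 10.5, 5], [35, 14.0, 7]])
y = np.array(["high", "high", "high", "low", "low", "low"])  # hazard-ID skill

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", lda.coef_)
print("predicted skill for a new worker:", lda.predict([[50, 20.0, 11]]))
```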
Matthew Heath; Erin M. Shellington; Sam Titheridge; Dawn P. Gill; Robert J. Petrella In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017. @article{Heath2017, Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., eye movement mirror-symmetrical to a target). Antisaccades are an ideal tool for the study of individuals with subtle executive deficits because of its hands- and language-free nature and because the task's neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention and the magnitude of the decrease was consistent across groups. Thus, multi-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC. |
Yu-Cin Jian Eye-movement patterns and reader characteristics of students with good and poor performance when reading scientific text with diagrams Journal Article In: Reading and Writing, vol. 30, no. 7, pp. 1447–1472, 2017. @article{Jian2017a, This study investigated the cognitive processes and reader characteristics of sixth graders who had good and poor performance when reading scientific text with diagrams. We first measured the reading ability and reading self-efficacy of sixth-grade participants, and then recorded their eye movements while they were reading an illustrated scientific text and scored their answers to content-related questions. Finally, the participants evaluated the difficulty of the article, the attractiveness of the content and diagram, and their learning performance. The participants were then classified into groups based on how many correct responses they gave to questions related to reading. The results showed that readers with good performance had better character recognition ability and reading self-efficacy, were more attracted to the diagrams, and had higher self-evaluated learning levels than the readers with poor performance did. Eye-movement data indicated that readers with good performance spent significantly more reading time on the whole article, the text section, and the diagram section than the readers with poor performance did. Interestingly, readers with good performance had significantly longer mean fixation duration on the diagrams than readers with poor performance did; further, readers with good performance made more saccades between the text and the diagrams. Additionally, sequential analysis of eye movements showed that readers with good performance preferred to observe the diagram rather than the text after reading the title, but this tendency was not present in readers with poor performance. In sum, using eye-tracking technology and several reading tests and questionnaires, we found that various cognitive aspects (reading strategy, diagram utilization) and affective aspects (reading self-efficacy, article likeness, diagram attraction, and self-evaluation of learning) affected sixth graders' reading performance in this study. |
Yu-Cin Jian; Hwa-Wei Ko Influences of text difficulty and reading ability on learning illustrated science texts for children: An eye movement study Journal Article In: Computers and Education, vol. 113, pp. 263–279, 2017. @article{Jian2017, In this study, eye movement recordings and comprehension tests were used to investigate children's cognitive processes and comprehension when reading illustrated science texts. Ten-year-old children (N = 42) who were beginning to read to learn, with high and low reading ability read two illustrated science texts in Chinese (one medium-difficult article, one difficult article), and then answered questions that measured comprehension of textual and pictorial information as well as text-and-picture integration. The high-ability group outperformed the low-ability group on all questions. Eye movement analyses showed that both groups of students spent roughly the same amount of time reading both articles, but had different methods of reading them. The low-ability group was inclined to read what seemed easier to them and read the text more. The high-ability group attended more to the difficult article and made an effort to integrate the textual and pictorial information. During a first-pass reading of the difficult article, high- but not low-ability readers returned to the previous paragraph. The low-ability readers spent more time reading the less difficult article and not the difficult one that required teachers' attention. Suggestions for classroom instruction are proposed accordingly. |
Shijian Luo; Yi Hu; Yuxiao Zhou Factors attracting Chinese Generation Y in the smartphone application marketplace Journal Article In: Frontiers of Computer Science, vol. 11, no. 2, pp. 290–306, 2017. @article{Luo2017, Smartphone applications (apps) are becoming increasingly popular all over the world, particularly in the Chinese Generation Y population; however, surprisingly, only a small number of studies on app factors valued by this important group have been conducted. Because the competition among app developers is increasing, app factors that attract users' attention are worth studying for sales promotion. This paper examines these factors through two separate studies. In the first study, i.e., Experiment 1, which consists of a survey, perceptual rating and verbal protocol methods are employed, and 90 randomly selected app websites are rated by 169 experienced smartphone users according to app attraction. Twelve of the most rated apps (six highest rated and six lowest rated) are selected for further investigation, and 11 influential factors that Generation Y members value are listed. A second study, i.e., Experiment 2, is conducted using the most and least rated app websites from Experiment 1, and eye tracking and verbal protocol methods are used. The eye movements of 45 participants are tracked while browsing these websites, providing evidence about what attracts these users' attention and the order in which the app components are viewed. The results of these two studies suggest that Chinese Generation Y is a content-centric group when they browse the smartphone app marketplace. Icon, screenshot, price, rating, and name are the dominant and indispensable factors that influence purchase intentions, among which icon and screenshot should be meticulously designed. Price is another key factor that drives Chinese Generation Y's attention. The recommended apps are the least dominant element. Design suggestions for app websites are also proposed. This research has important implications. |
Min-Yuan Ma; Hsien-Chih Chuang An exploratory study of the effect of enclosed structure on type design with fixation dispersion: Evidence from eye movements Journal Article In: International Journal of Technology and Design Education, vol. 27, no. 1, pp. 149–164, 2017. @article{Ma2017, Type design is the process of re-organizing visual elements and their corresponding meanings into a new organic entity, particularly for the highly logographic Chinese characters whose intrinsic features are retained even after reorganization. Due to this advantage, designers believe that such a re-organization process will not affect Chinese character recognition. However, not having an effect on recognition is not the same as not affecting the viewing process, especially when the character is so highly deconstructed that, along with the viewing process, the original intention of the design and its efficacy are both indirectly affected. Therefore, besides capturing the changes of character features, a good type designer should understand how characters are viewed. Past studies have found that character structure will affect character recognition, particularly for enclosed and non-enclosed characters whose differences are significant, although the interpretation of such differences remains open for discussion. This study explored the viewing process of Chinese characters with eye-tracking methods and calculated the concentration and saccadic amplitude of fixation in the viewing process in terms of the descriptive approach in a geographic information system, so as to investigate the differences among types of character modules with the spatial dispersion index. This study found that the overall vision when viewing enclosed structures is more concentrated than non-enclosed structures. |
Andrew K. Mackenzie; Julie M. Harris A link between attentional function, effective eye movements, and driving ability Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 2, pp. 381–394, 2017. @article{Mackenzie2017, The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limits the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed two variations of a multiple object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual's attentional function, effective eye movements and driving ability. We also discuss the use of a visuomotor task in assessing driving behaviour. |
Yousri Marzouki; Valériane Dusaucy; Myriam Chanceaux; Sebastiaan Mathôt The World (of Warcraft) through the eyes of an expert Journal Article In: PeerJ, vol. 5, pp. 1–21, 2017. @article{Marzouki2017, Negative correlations between pupil size and the tendency to look at salient locations were found in recent studies (e.g., Mathôt et al., 2015). It is hypothesized that this negative correlation might be explained by the mental effort put by participants in the task that leads in return to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because there is no available standard tool to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge about WoW and acquired skills through completed raids, highest rated battlegrounds, Skill Points, etc. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differ with regard to their content (i.e., informative locations) and visual complexity (i.e., salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts |
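The pupil-size/salience relationship reported in the Marzouki et al. (2017) entry above boils down to correlating pupil size with the saliency of fixated locations. The sketch below uses entirely synthetic values to show the form of that computation; it is not the authors' analysis and the numbers carry no empirical meaning.

```python
# Minimal sketch: correlate pupil size with the saliency value of the fixated
# pixel, one row per fixation. All values are synthetic.
import numpy as np
from scipy.stats import pearsonr

pupil_size = np.array([900, 950, 870, 1020, 880, 990, 940, 860])        # a.u.
saliency_at_fix = np.array([0.62, 0.48, 0.71, 0.30, 0.66, 0.41, 0.52, 0.74])

r, p = pearsonr(pupil_size, saliency_at_fix)
print(f"r = {r:.2f}, p = {p:.3f}")   # a negative r mirrors the reported pattern
```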
Olivia M. Maynard; Jonathan C. W. Brooks; Marcus R. Munafò; Ute Leonards Neural mechanisms underlying visual attention to health warnings on branded and plain cigarette packs Journal Article In: Addiction, vol. 112, no. 4, pp. 662–672, 2017. @article{Maynard2017, Aims: To (1) test if activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differ when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. Design: Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. Setting: Clinical Research and Imaging Centre, University of Bristol, UK. Participants: Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. Measurements: Brain activity was assessed in whole brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. On-line eye-tracking during scanning recorded visual attention to health warnings. Findings: There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). Conclusions: Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on ‘plain' cigarette packs than branded packs. |
Rebecca L. Monk; J. Westwood; Derek Heim; Adam W. Qureshi The effect of pictorial content on attention levels and alcohol-related beliefs: An eye-tracking study Journal Article In: Journal of Applied Social Psychology, vol. 47, no. 3, pp. 158–164, 2017. @article{Monk2017, To examine attention levels to different types of alcohol warning labels. Twenty-two participants viewed neutral or graphic warning messages while dwell times for text and image components of messages were assessed. Pre and postexposure outcome expectancies were assessed in order to compute change scores. Dwell times were significantly higher for the image, as opposed to the text, components of warnings, irrespective of image type. Participants whose expectancies increased after exposure to the warnings spent longer looking at the image than did those whose positive expectancies remained static or decreased. Images in alcohol warnings appear beneficial for drawing attention, although findings may suggest that this is also associated with heightened positive alcohol-related beliefs. Implications for health intervention are discussed and future research in this area is recommended. |
Parashkev Nachev; Geoff E. Rose; David H. Verity; Sanjay G. Manohar; Kelly MacKenzie; Gill Adams; Maria Theodorou; Quentin A. Pankhurst; Christopher Kennard Magnetic oculomotor prosthetics for acquired nystagmus Journal Article In: Ophthalmology, vol. 124, no. 10, pp. 1556–1564, 2017. @article{Nachev2017, Purpose: Acquired nystagmus, a highly symptomatic consequence of damage to the substrates of oculomotor control, often is resistant to pharmacotherapy. Although heterogeneous in its neural cause, its expression is unified at the effector—the eye muscles themselves—where physical damping of the oscillation offers an alternative approach. Because direct surgical fixation would immobilize the globe, action at a distance is required to damp the oscillation at the point of fixation, allowing unhindered gaze shifts at other times. Implementing this idea magnetically, herein we describe the successful implantation of a novel magnetic oculomotor prosthesis in a patient. Design: Case report of a pilot, experimental intervention. Participant: A 49-year-old man with longstanding, medication-resistant, upbeat nystagmus resulting from a paraneoplastic syndrome caused by stage 2A, grade I, nodular sclerosing Hodgkin's lymphoma. Methods: We designed a 2-part, titanium-encased, rare-earth magnet oculomotor prosthesis, powered to damp nystagmus without interfering with the larger forces involved in saccades. Its damping effects were confirmed when applied externally. We proceeded to implant the device in the patient, comparing visual functions and high-resolution oculography before and after implantation and monitoring the patient for more than 4 years after surgery. Main Outcome Measures: We recorded Snellen visual acuity before and after intervention, as well as the amplitude, drift velocity, frequency, and intensity of the nystagmus in each eye. Results: The patient reported a clinically significant improvement of 1 line of Snellen acuity (from 6/9 bilaterally to 6/6 on the left and 6/5–2 on the right), reflecting an objectively measured reduction in the amplitude, drift velocity, frequency, and intensity of the nystagmus. These improvements were maintained throughout a follow-up of 4 years and enabled him to return to paid employment. Conclusions: This work opens a new field of implantable therapeutic devices—oculomotor prosthetics—designed to modify eye movements dynamically by physical means in cases where a purely neural approach is ineffective. Applied to acquired nystagmus refractory to all other interventions, it is shown successfully to damp pathologic eye oscillations while allowing normal saccadic shifts of gaze. |
Andrew D. Ogle; Dan J. Graham; Rachel G. Lucas-Thompson; Christina A. Roberto Influence of cartoon media characters on children's attention to and preference for food and beverage products Journal Article In: Journal of the Academy of Nutrition and Dietetics, vol. 117, no. 2, pp. 265–270, 2017. @article{Ogle2017, Background: Over-consuming unhealthful foods and beverages contributes to pediatric obesity and associated diseases. Food marketing influences children's food preferences, choices, and intake. Objective: To examine whether adding licensed media characters to healthful food/beverage packages increases children's attention to and preference for these products. We hypothesized that children prefer less- (vs more-) healthful foods, and pay greater attention to and preferentially select products with (vs without) media characters regardless of nutritional quality. We also hypothesized that children prefer more-healthful products when characters are present over less-healthful products without characters. Design: On a computer, participants viewed food/beverage pairs of more-healthful and less-healthful versions of similar products. The same products were shown with and without licensed characters on the packaging. An eye-tracking camera monitored participant gaze, and participants chose which product they preferred from each of 60 pairs. Participants/setting: Six- to 9-year-old children (n=149; mean age=7.36, standard deviation=1.12) recruited from the Twin Cities, MN, area in 2012-2013. Main outcome measures: Visual attention and product choice. Statistical analyses performed: Attention to products was compared using paired-samples t tests, and product choice was analyzed with single-sample t tests. Analyses of variance were conducted to test for interaction effects of specific characters and child sex and age. Results: Children paid more attention to products with characters and preferred less-healthful products. Contrary to our prediction, children chose products without characters approximately 62% of the time. Children's choices significantly differed based on age, sex, and the specific cartoon character displayed, with characters in this study being preferred by younger boys. Conclusions: Results suggest that putting licensed media characters on more-healthful food/beverage products might not encourage all children to make healthier food choices, but could increase selection of healthy foods among some, particularly younger children, boys, and those who like the featured character(s). Effective use likely requires careful demographic targeting. |
Cheng S. Qian; Jan W. Brascamp How to build a dichoptic presentation system that includes an eye tracker Journal Article In: Journal of Visualized Experiments, no. 127, pp. 1–9, 2017. @article{Qian2017, The presentation of different stimuli to the two eyes, dichoptic presentation, is essential for studies involving 3D vision and interocular suppression. There is a growing literature on the unique experimental value of pupillary and oculomotor measures, especially for research on interocular suppression. Although obtaining eye-tracking measures would thus benefit studies that use dichoptic presentation, the hardware essential for dichoptic presentation (e.g., mirrors) often interferes with high-quality eye tracking, especially when using a video-based eye tracker. We recently described an experimental setup that combines a standard dichoptic presentation system with an infrared eye tracker by using infrared-transparent mirrors. The setup is compatible with standard monitors and eye trackers, easy to implement, and affordable (on the order of US$1,000). Relative to existing methods, it has the benefits of not requiring special equipment and posing few limits on the nature and quality of the visual stimulus. Here we provide a visual guide to the construction and use of our setup. |
Ioannis Rigas; Oleg V. Komogortsev Current research in eye movement biometrics: An analysis based on BioEye 2015 competition Journal Article In: Image and Vision Computing, vol. 58, pp. 129–141, 2017. @article{Rigas2017a, At the onset of the second decade of research in eye movement biometrics, the results demonstrated so far strongly support the field's promise. This paper presents a description of the research conducted in eye movement biometrics based on an extended analysis of the characteristics and results of the “BioEye 2015: Competition on Biometrics via Eye Movements.” This extended presentation can contribute to an understanding of the current state of research in eye movement biometrics, covering areas such as previous work in the field, the procedures for creating a database of eye movement recordings, and the different approaches that can be used for the analysis of eye movements. The presented results from the BioEye 2015 competition also demonstrate the identification accuracy that can be achieved under easier and more difficult scenarios. Based on this presentation, we discuss topics related to the current status of eye movement biometrics and suggest possible directions for future research in the field. |
Sergei L. Shishkin; Darisii G. Zhao; Andrei V. Isachenko; Boris M. Velichkovsky Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction Journal Article In: Psychology in Russia: State of the Art, vol. 10, no. 3, pp. 120–137, 2017. @article{Shishkin2017, Background. Human-machine interaction technology has evolved greatly during the last decades, but the manual and speech modalities remain the principal output channels, with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of the superiority of an “eye mouse” over the computer mouse, here in the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. Current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when employed by healthy users, depends critically on how gaze is used to select spatial locations. The new approaches discussed show high potential for creating alternative output pathways for the human brain. When support from passive BCIs matures, the hybrid eye-brain-computer interface (EBCI) technology will have a chance to enable natural, fluent, and effortless interaction with machines in various fields of application. |
Jan-Philipp Tauscher; Maryam Mustafa; Marcus Magnor Comparative analysis of three different modalities for perception of artifacts in videos Journal Article In: ACM Transactions on Applied Perception, vol. 14, no. 4, pp. 1–12, 2017. @article{Tauscher2017, This study compares three popular modalities for analyzing perceived video quality: user ratings, eye tracking, and EEG. We contrast these three modalities for a given video sequence to determine whether there is a gap between what humans consciously see and what we implicitly perceive. Participants are shown a video sequence with different artifacts appearing at specific distances in their field of vision: near foveal, middle peripheral, and far peripheral. Our results show distinct differences between what we saccade to (eye tracking), how we consciously rate video quality, and our neural responses (EEG data). Our findings indicate that the measurement of perceived quality depends on the specific modality used. |
Philip R. K. Turnbull; John R. Phillips Ocular effects of virtual reality headset wear in young adults Journal Article In: Scientific Reports, vol. 7, pp. 16172, 2017. @article{Turnbull2017a, Virtual Reality (VR) headsets create immersion by displaying images on screens placed very close to the eyes, which are viewed through high-powered lenses. Here we investigate whether this viewing arrangement alters the binocular status of the eyes, and whether it is likely to provide a stimulus for myopia development. We compared binocular status after 40-minute trials in indoor and outdoor environments, in both real and virtual worlds. We also measured the change in thickness of the ocular choroid, to assess the likely presence of signals for ocular growth and myopia development. We found that changes in binocular posture at distance and near, gaze stability, amplitude of accommodation, and stereopsis did not differ after exposure to each of the 4 environments. Thus, we found no evidence that the VR optical arrangement had an adverse effect on the binocular status of the eyes in the short term. Choroidal thickness did not change after either real-world trial, but there was a significant thickening (≈10 microns) after each VR trial (p < 0.001). The choroidal thickening that we observed suggests that a VR headset may not be a myopiagenic stimulus, despite the very close viewing distances involved. |