EyeLink Usability / Applied Publications
All EyeLink usability and applied research publications up until 2021 (with some early 2022s) are listed below by year. You can search the publications using keywords such as Driving, Sport, Workload, etc. You can also search for individual author names. If we missed any EyeLink usability or applied article, please email us!
Lisa Schäfer; Ricarda Schmidt; Silke M. Müller; Arne Dietrich; Anja Hilbert
In: Journal of Psychiatric Research, vol. 129, pp. 214–221, 2020.
Research documented the effectiveness of obesity surgery (OS) for long-term weight loss and improvements in medical and psychosocial sequelae, and general cognitive functioning. However, there is only preliminary evidence for changes in attentional processing of food cues after OS. This study longitudinally investigated visual attention towards food cues from pre- to 1-year post-surgery. Using eye tracking (ET) and a Visual Search Task (VST), attentional processing of food versus non-food cues was assessed in n = 32 patients with OS and n = 31 matched controls without weight-loss treatment at baseline and 1-year follow-up. Associations with experimentally assessed impulsivity and eating disorder psychopathology and the predictive value of changes in visual attention towards food cues for weight loss and eating behaviors were determined. During ET, both groups showed significant gaze duration biases to non-food cues without differences and changes over time. No attentional biases over group and time were found by the VST. Correlations between attentional data and clinical variables were sparse and not robust over time. Changes in visual attention did not predict weight loss and eating disorder psychopathology after OS. The present study provides support for a top-down regulation of visual attention to non-food cues in individuals with severe obesity. No changes in attentional processing of food cues were detected 1-year post-surgery. Further studies are needed with comparable methodology and longer follow-ups to clarify the role of biased visual attention towards food cues for long-term weight outcomes and eating behaviors after OS.
Victoria I. Nicholls; Geraldine Jean-Charles; Junpeng Lao; Peter Lissa; Roberto Caldara; Sébastien Miellet
In: Scientific Reports, vol. 9, pp. 4176, 2019.
In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we propose to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children look mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5–10 y/os look more at socially relevant stimuli and attend to moving vehicles further down the trajectory when the traffic density is high. Critically, 5–10 y/o children also make an increased number of crossing decisions compared to 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.
Michele Scaltritti; Aliaksei Miniukovich; Paola Venuti; Remo Job; Antonella De Angeli; Simone Sulpizio
In: Scientific Reports, vol. 9, pp. 12711, 2019.
Webpage reading is ubiquitous in daily life. As Web technologies allow for a large variety of layouts and visual styles, the many formatting options may lead to poor design choices, including low readability. This research capitalizes on the existing readability guidelines for webpage design to outline several visuo-typographic variables and explore their effect on eye movements during webpage reading. Participants included children and adults, and for both groups typical readers and readers with dyslexia were considered. Actual webpages, rather than artificial ones, served as stimuli. This allowed us to test multiple typographic variables in combination and in their typical ranges rather than in possibly unrealistic configurations. Several typographic variables displayed a significant effect on eye movements and reading performance. The effect was mostly homogeneous across the four groups, with a few exceptions. Besides supporting the notion that a few empirically-driven adjustments to the texts' visual appearance can facilitate reading across different populations, the results also highlight the challenge of making digital texts accessible to readers with dyslexia. Theoretically, the results highlight the importance of low-level visual factors, corroborating the emphasis of recent psychological models on visual attention and crowding in reading.
Katarzyna Stachowiak-Szymczak; Paweł Korpal
In: Across Languages and Cultures, vol. 20, no. 2, pp. 235–251, 2019.
Simultaneous interpreting is a cognitively demanding task, based on performing several activities concurrently (Gile 1995; Seeber 2011). While multitasking itself is challenging, there are numerous tasks which make interpreting even more difficult, such as rendering of numbers and proper names, or dealing with a speaker's strong accent (Gile 2009). Among these, number interpreting is cognitively taxing since numerical data cannot be derived from the context and it needs to be rendered in a word-to-word manner (Mazza 2001). In our study, we aimed to examine cognitive load involved in number interpreting and to verify whether access to visual materials in the form of slides increases number interpreting accuracy in simultaneous interpreting performed by professional interpreters (N = 26) and interpreting trainees (N = 22). We used a remote EyeLink 1000+ eye-tracker to measure fixation count, mean fixation duration, and gaze time. The participants interpreted two short speeches from English into Polish, both containing 10 numerals. Slides were provided for one of the presentations. Our results show that novices are characterised by longer fixations and they provide a less accurate interpretation than professional interpreters. In addition, access to slides increases number interpreting accuracy. The results obtained might be a valuable contribution to studies on visual processing in simultaneous interpreting, number interpreting as a competence, as well as interpreter training.
In: PeerJ, vol. 7, pp. 1–15, 2019.
This article compares the differences in eye movements between orienteers of different skill levels on map information searches and explores the visual search patterns of orienteers during precise map reading so as to explore the cognitive characteristics of orienteers' visual search. We recruited 44 orienteers at different skill levels (experts, advanced beginners, and novices), and recorded their behavioral responses and eye movement data when reading maps of different complexities. We found that the complexity of map (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, orienteers of higher competency tend to have a better quality of route planning (i.e., a shorter route planning time, a longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated obvious cognitive advantages in the ability to find key information. We also found that in the stage of route planning, expert orienteers and advanced beginners first pay attention to the checkpoint description table. The expert group extracted information faster, and their attention was more concentrated, whereas the novice group paid less attention to the checkpoint description table, and their gaze was scattered. We found that experts regarded the information in the checkpoint description table as the key to the problem and gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering.
Shlomit Yuval-Greenberg; Anat Keren; Rinat Hilo; Adar Paz; Navah Ratzon
In: American Journal of Occupational Therapy, vol. 73, no. 3, pp. 1–8, 2019.
Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving.
Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD.
Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires.
Setting: University lab.
Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had no history of neurological disorders other than ADHD and normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. Participants with ADHD had been previously diagnosed by a qualified professional.
Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration.
Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness.
Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving.
What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control.
In: Journal of Contemporary Marketing Science, vol. 2, no. 1, pp. 23–33, 2019.
Purpose: The purpose of this paper is to investigate, with advertisement size controlled, the optimal visual search pattern for logo elements in online advertising. A single-factor experimental design was used, with the eight matching arrangements of logo and commodity picture elements as the independent variable, holding background color and content complexity constant. The results show that when the picture element is fixed in the center of the advertisement, the logo element should be placed in a middle position parallel to the picture element (left middle and upper left); placing the logo element at the bottom of the picture element, especially at the bottom left, should be avoided. The designer can determine the best online advertising format based on the visual search effect of the logo element and the actual marketing purpose. Design/methodology/approach: In this experiment, a repeated-measures, single-factor design was used. According to the criteria of different types of commodities and eight matching methods, 20 advertisements were randomly selected from 50 original advertisements as experimental stimulus materials, as shown in Section 2.3. Each advertisement was processed according to the eight matching methods to obtain a total of 20×8=160 experimental stimuli. At the same time, to minimize memory effects from the repeated appearance of the same product, all stimuli were presented in random order. In addition, to prevent participants from anticipating the purpose of the experiment, 80 additional filler online advertisements were added.
Therefore, each participant was required to view 160+80=240 stimuli. Findings: On one hand, when the image elements are fixed for an advertisement, the advertiser should first try to place the logo element in the right middle position parallel to the picture element, because the commodity logo in this arrangement receives the longest average fixation time and the greatest total attention from consumers. Danaher and Mullarkey (2003) pointed out that as consumers' fixation time on an online advertisement increases, their memory of the advertisement improves accordingly. Second, advertisers can consider placing the logo element to the left or upper left of the picture element. In contrast, advertisers should avoid placing the logo element at the bottom of the picture element (lower left and lower right), especially at the lower left, because in this area the logo attracts less attention, resulting in the shortest duration of consumer attention, less than a quarter of consumers' total attention. This conclusion is consistent with related research results.
Hongyan Wang; Zhongling Pi; Weiping Hu
In: Journal of Computer Assisted Learning, vol. 35, no. 1, pp. 42–50, 2019.
Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance.
Zepeng Wang; Ping Li; Luming Zhang; Ling Shao
In: IEEE Transactions on Multimedia, pp. 1–11, 2019.
Computational photo quality evaluation is a useful technique in many tasks of computer vision and graphics, e.g., photo retargeting, 3D rendering, and fashion recommendation. Conventional photo quality models are designed by characterizing pictures from all communities (e.g., “architecture” and “colorful”) indiscriminately, wherein community-specific features are not encoded explicitly. In this work, we develop a new community-aware photo quality evaluation framework. It uncovers the latent community-specific topics by a regularized latent topic model (LTM), and captures human visual quality perception by exploring multiple attributes. More specifically, given massive-scale online photos from multiple communities, a novel ranking algorithm is proposed to measure the visual/semantic attractiveness of regions inside each photo. Meanwhile, three attributes: photo quality scores, weak semantic tags, and inter-region correlations, are seamlessly and collaboratively incorporated during ranking. Subsequently, we construct gaze shifting path (GSP) for each photo by sequentially linking the top-ranking regions from each photo, and an aggregation-based deep CNN calculates the deep representation for each GSP. Based on this, an LTM is proposed to model the GSP distribution from multiple communities in the latent space. To mitigate the overfitting problem caused by communities with very few photos, a regularizer is added into our LTM. Finally, given a test photo, we obtain its deep GSP representation and its quality score is determined by the posterior probability of the regularized LTM. Comprehensive comparative studies on four image sets have shown the competitiveness of our method. In addition, eye tracking experiments demonstrated that our ranking-based GSPs are highly consistent with real human gaze movements.
In: Translation, Cognition and Behavior, vol. 2, no. 1, pp. 79–100, 2019.
This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality.
R. Austin Hicklin; Bradford T. Ulery; Thomas A. Busey; Maria Antonia Roberts; Jo Ann Buscaglia
In: Cognitive Research: Principles and Implications, vol. 4, no. 12, pp. 1–20, 2019.
Background: The comparison of fingerprints by expert latent print examiners generally involves repeating a process in which the examiner selects a small area of distinctive features in one print (a target group), and searches for it in the other print. In order to isolate this key element of fingerprint comparison, we use eye-tracking data to describe the behavior of latent fingerprint examiners on a narrowly defined “find the target” task. Participants were shown a fingerprint image with a target group indicated and asked to find the corresponding area of ridge detail in a second impression of the same finger and state when they found the target location. Target groups were presented on latent and plain exemplar fingerprint images, and as small areas cropped from the plain exemplars, to assess how image quality and the lack of surrounding visual context affected task performance and eye behavior. One hundred and seventeen participants completed a total of 675 trials. Results: The presence or absence of context notably affected the areas viewed and time spent in comparison; differences between latent and plain exemplar tasks were much less significant. In virtually all trials, examiners repeatedly looked back and forth between the images, suggesting constraints on the capacity of visual working memory. On most trials where context was provided, examiners looked immediately at the corresponding location: with context, median time to find the corresponding location was less than 0.3 s (second fixation); however, without context, median time was 1.9 s (five fixations). A few trials resulted in errors in which the examiner did not find the correct target location. Basic gaze measures of overt behaviors, such as speed, areas visited, and back-and-forth behavior, were used in conjunction with the known target area to infer the underlying cognitive state of the examiner. Conclusions: Visual context has a significant effect on the eye behavior of latent print examiners. 
Localization errors suggest how errors may occur in real comparisons: examiners sometimes compare an incorrect but similar target group and do not continue to search for a better candidate target group. The analytic methods and predictive models developed here can be used to describe the more complex behavior involved in actual fingerprint comparisons.
Tammy Sue Wynne Liu; Yeu Ting Liu; Chun-Yin Doris Chen
In: Interactive Learning Environments, vol. 27, no. 2, pp. 181–199, 2019.
This study employed eye-tracking technology to probe the online reading behavior of 52 advanced L2 English learners. These participants read an e-book containing six types of multimedia supports for either vocabulary acquisition or comprehension. The six supports consisted of three micro-level supports that provided information about specific words (glosses, vocabulary focus, and footnotes), and three macro-level supports that provided global or background information (illustrations, infographics, and photos). The participants read the e-book under two presentation modes: (1) simultaneous mode: where digital input and supports were presented at the same time; and (2) sequential mode: where the digital content and supports were incrementally presented. Analyses showed that when reading for vocabulary acquisition, vocabulary focus and glosses were significantly fixated on, and when reading for comprehension, illustrations were more intensely fixated on. Additionally, when the digital content was incrementally presented, vocabulary focus received significantly higher total fixation duration. This suggests that reading under the sequential mode has the potential to guide L2 learners' focal attention toward micro-level supports. In contrast, under the simultaneous presentation mode, L2 learners seemed to divide their focal attention among both micro-level and macro-level supports. Pedagogical implications are discussed based on the findings of this study.
Sinè McDougall; Judy Edworthy; Deili Sinimeri; Jamie Goodliffe; Daniel Bradley; James Foster
In: Journal of Experimental Psychology: Applied, vol. 26, no. 1, pp. 1–19, 2019.
Given the ease with which the diverse array of environmental sounds can be understood, the difficulties encountered in using auditory alarm signals on medical devices are surprising. In two experiments, with nonclinical participants, alarm sets which relied on similarities to environmental sounds (concrete alarms, such as a heartbeat sound to indicate "check cardiovascular function") were compared to alarms using abstract tones to represent functions on medical devices. The extent to which alarms were acoustically diverse was also examined: alarm sets were either acoustically different or acoustically similar within each set. In Experiment 1, concrete alarm sets, which were also acoustically different, were learned more quickly than abstract alarms which were acoustically similar. Importantly, the abstract similar alarms were devised using guidelines from the current global medical device standard (International Electrotechnical Commission 60601-1-8, 2012). Experiment 2 replicated these findings. In addition, eye tracking data showed that participants were most likely to fixate first on the correct medical devices in an operating theater scene when presented with concrete acoustically different alarms using real world sounds. A new set of alarms which are related to environmental sounds and differ acoustically have therefore been proposed as a replacement for the current medical device standard.
Zhongling Pi; Jiumin Yang; Weiping Hu; Jianzhong Hong
In: Interactive Learning Environments, pp. 1–9, 2019.
An emerging body of research has focused on students' creativity in group contexts, with the assumption that students could be inspired by peers' ideas. Although students' openness and attention to peers' ideas are claimed to play important roles in their creativity in group settings, there is little empirical research that tests this assumption. This study examined the moderating effect of attention to peers' ideas in the relation between openness and creativity in electronic brainstorming. Participants were 91 undergraduate students who took about 10 min to complete a creative idea generation task during electronic brainstorming. Regression analyses found that students who were characterized by high openness were more creative, but only when they showed more attention to peers' ideas. This suggests that electronic brainstorming can be useful for enhancing the creativity of some students.
Victoria A. Roach; Graham M. Fraser; James H. Kryklywy; Derek G. V. Mitchell; Timothy D. Wilson
In: Anatomical Sciences Education, vol. 12, no. 1, pp. 32–42, 2019.
Research suggests that spatial ability may predict success in complex disciplines including anatomy, where mastery requires a firm understanding of the intricate relationships occurring along the course of veins, arteries, and nerves, as they traverse through and around bones, muscles, and organs. Debate exists on the malleability of spatial ability, and some suggest that spatial ability can be enhanced through training. It is hypothesized that spatial ability can be trained in low-performing individuals through visual guidance. To address this, training was completed through a visual guidance protocol. This protocol was based on eye-movement patterns of high-performing individuals, collected via eye-tracking as they completed an Electronic Mental Rotations Test (EMRT). The effects of guidance were evaluated using 33 individuals with low mental rotation ability, in a counterbalanced crossover design. Individuals were placed in one of two treatment groups (late or early guidance) and completed both a guided, and an unguided EMRT. A third group (no guidance/control) completed two unguided EMRTs. All groups demonstrated an increase in EMRT scores on their second test (P < 0.001); however, an interaction was observed between treatment and test iteration (P = 0.024). The effect of guidance on scores was contingent on when the guidance was applied. When guidance was applied early, scores were significantly greater than expected (P = 0.028). These findings suggest that by guiding individuals with low mental rotation ability “where” to look early in training, better search approaches may be adopted, yielding improvements in spatial reasoning scores. It is proposed that visual guidance may be applied in spatial fields, such as STEMM (science, technology, engineering, mathematics and medicine), surgery, and anatomy to improve students' interpretation of visual content.
Čeněk Šašinka; Zdeněk Stachoň; Petr Kubíček; Sascha Tamm; Aleš Matas; Markéta Kukaňová
In: The Cartographic Journal, vol. 56, no. 2, pp. 175–191, 2019.
The form of visual representation affects both the way in which the visual representation is processed and the effectiveness of this processing. Different forms of visual representation may require the employment of different cognitive strategies in order to solve a particular task; at the same time, the different representations vary as to the extent to which they correspond with an individual's preferred cognitive style. The present study employed a Navon-type task to learn about the occurrence of global/local bias. The research was based on close interdisciplinary cooperation between the domains of both psychology and cartography. Several different types of tasks were designed involving avalanche hazard maps with intrinsic/extrinsic visual representations, each of them employing different types of graphic variables representing the level of avalanche hazard and avalanche hazard uncertainty. The research sample consisted of two groups of participants, each of which was provided with a different form of visual representation of identical geographical data, such that the representations could be regarded as ‘informationally equivalent'. The first phase of the research consisted of two correlation studies, the first involving subjects with a high degree of map literacy (students of cartography) (intrinsic method: N = 35; extrinsic method: N = 37). The second study was performed after the results of the first study were analyzed.
The second group of participants consisted of subjects with a low expected degree of map literacy (students of psychology; intrinsic method: N = 35; extrinsic method: N = 27). The first study revealed a statistically significant moderate correlation between the students' response times in extrinsic visualization tasks and their response times in a global subtest (r = 0.384, p < 0.05); likewise, a statistically significant moderate correlation was found between the students' response times in intrinsic visualization tasks and their response times in the local subtest (r = 0.387, p < 0.05). At the same time, no correlation was found between the students' performance in the local subtest and their performance in extrinsic visualization tasks, or between their scores in the global subtest and their performance in intrinsic visualization tasks. The second correlation study did not confirm the results of the first correlation study (intrinsic visualization/‘small figures test': r = 0.221; extrinsic visualization/‘large figures test': r = 0.135). The first phase of the research, where the data was subjected to statistical analysis, was followed by a comparative eye-tracking study, whose aim was to provide more detailed insight into the cognitive strategies employed when solving map-related tasks. More specifically, the eye-tracking study was expected to be able to detect possible differences between the cognitive patterns employed when solving extrinsic as opposed to intrinsic visualization tasks. The results of an exploratory eye-tracking data analysis support the hypothesis of different strategies of visual information processing being used in reaction to different types of visualization.
Wenxiang Chen; Xiangling Zhuang; Zixin Cui; Guojie Ma
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 64, pp. 552–564, 2019.
Drivers' recognition of pedestrian road crossing intentions is an essential process during driver-pedestrian interaction. However, compared with the rich observational findings on interaction behavior, little is known about drivers' performance in recognizing pedestrian intentions, or about the underlying cognitive processes. To fill this gap, this study evaluated drivers' performance in judging pedestrians' road crossing intentions in recorded natural driving scenes. Experienced and novice drivers identified pedestrians as “will cross” or “will not cross” at various times-to-arrival while their eye movements were recorded. The results showed that experienced drivers were more conservative in discriminating whether a pedestrian would cross or not (they preferred a “pedestrian will cross” judgment) and processed pedestrian intention at a deeper level. Regardless of driving experience, drivers had a higher detection rate, earlier detection, deeper information processing, and quicker responses for pedestrians who intended to cross than for those who did not. Responses were also quicker when the time-to-arrival was smaller. Analysis of eye movements showed an attentional bias toward the upper body of pedestrians when recognizing intention. These findings offer an initial understanding of the intention recognition process during driver-pedestrian interaction and inform directions for autonomous driving research on interacting with pedestrians.
Rajib Chowdhury; A F M Saifuddin Saif
In: International Journal of Software Engineering and Computer Systems, vol. 53, no. 1, pp. 52–56, 2019.
The main purpose of this research is to review prior studies of human brain sensor activity and to establish the need for an efficient method of improving it. Brain activity is mainly measured via signals acquired from sensor electrodes positioned over several parts of the cerebral cortex. Although previous research has investigated brain activity in various respects, improving the performance of brain sensing remains an unsolved problem, and there is a pressing need to enhance brain sensor activity using externally improved brain signals. This research presents a comprehensive critical analysis of prior work in support of an efficient method integrated with a proposed neuroheadset device. It reviews previous methods, existing frameworks, and published results, with discussion, in order to establish an efficient method for acquiring brain signals, improving the acquired signals, and developing brain sensor performance. This critical review is expected to inform an efficient method for improving maneuverability, visualization, subliminal activity, and other aspects of human brain activity.
Freya Crosby; Frouke Hermens
In: Quarterly Journal of Experimental Psychology, vol. 72, no. 3, pp. 599–615, 2019.
Studies of fear of crime often focus on demographic and social factors, but these can be difficult to change. Studies of visual aspects have suggested that features reflecting incivilities, such as litter, graffiti, and vandalism, increase fear of crime, but methods often rely on participants actively mentioning such aspects, and more subtle, less conscious aspects may be overlooked. To address these concerns, this study examined people's eye movements while they judged scenes for safety. In total, 40 current and former university students were asked to rate images of day-time and night-time scenes of Lincoln, UK (where they studied) and Egham, UK (an unfamiliar location) for safety, maintenance, and familiarity while their eye movements were recorded. Another 25 observers not from Lincoln or Egham rated the same images in an Internet survey. Ratings showed a strong association between safety and maintenance and lower safety ratings for night-time scenes for both groups, in agreement with earlier findings. Eye movements of the Lincoln participants showed increased dwell times on buildings, houses, and vehicles during safety judgements and increased dwell times on streets, pavements, and markers of incivilities for maintenance judgements. Results confirm that maintenance plays an important role in perceptions of safety, but eye movements suggest that observers also look for indicators of current or recent presence of people.
Gemma Fitzsimmons; Mark J. Weal; Denis Drieghe
The impact of hyperlinks on reading text Journal Article
In: PLoS ONE, vol. 14, no. 2, pp. e0210900, 2019.
There has been debate about whether blue hyperlinks on the Web cause disruption to reading. A series of eye tracking experiments were conducted to explore whether coloured words in black text had any impact on reading behaviour outside and inside a Web environment. Experiments 1 and 2 explored the saliency of coloured words embedded in single sentences and their impact on reading behaviour. In Experiment 3, the effects of coloured words/hyperlinks in passages of text in a Web-like environment were explored. Experiments 1 and 2 showed that multiple coloured words in text had no negative impact on reading behaviour. However, if the sentence featured only a single coloured word, a reduction in skipping rates was observed. This suggests that the visual saliency associated with a single coloured word may signal to the reader that the word is important, whereas this signalling is reduced when multiple words are coloured. In Experiment 3, when reading passages of text containing hyperlinks in a Web environment, participants showed a tendency to re-read sentences that contained hyperlinked, uncommon words compared to hyperlinked, common words. Hyperlinks highlight important information and suggest additional content, which for more difficult concepts, invites rereading of the preceding text.
Victoria Foglia; Annie Roy-Charland; Dominique Leroux; Suzanne Lemieux; Nicole Yantzi; Tina Skjonsby-McKinnon; Sylvain Fiset; Dominic Guitard
In: Canadian Journal of Experimental Psychology, pp. 1–14, 2019.
This study examined the eye-movement patterns of young adults while they viewed texting and driving prevention advertisements, to determine which format attracts the most attention. As young adults are the most at risk for this public health issue, understanding which format is most successful at maintaining young adults' attention is especially important. Participants viewed nondriving, general distracted driving, and texting and driving advertisements. Each of these advertisement types was edited to contain text-only, image-only, and text-and-image content. Participants were told that they had unlimited time to view each advertisement, and their eye movements were recorded throughout. Participants spent more time viewing the texting and driving advertisements than other types when these comprised text only. When examining differences in attention to the text and image portions of the advertisements, participants spent more time viewing the images than the text for the nondriving and general distracted driving advertisements. However, for texting and driving-specific advertisements, the text-only format resulted in the most attention toward the advertisements. These results indicate that in attracting young adults' attention to texting and driving public health advertisements, the most successful format would be text-based.
Susan M. Gass; Paula Winke; Daniel R. Isbell; Jieun Ahn
In: Language Learning and Technology, vol. 23, no. 2, pp. 84–104, 2019.
Captions provide a useful aid to language learners for comprehending videos and learning new vocabulary, aligning with theories of multimedia learning, which predict that a learner's working memory (WM) influences the usefulness of captions. In this study, we present two eye-tracking experiments investigating the role of WM in captioned video viewing behavior and comprehension. In Experiment 1, Spanish-as-a-foreign-language learners differed in caption use according to their level of comprehension and, to a lesser extent, their WM capacities. WM did not impact comprehension. In Experiment 2, English-as-a-second-language learners differed in comprehension according to their WM capacities. Those with high comprehension and high WM used captions less on a second viewing. These findings highlight the effects of potential individual differences and have implications for the integration of multimedia with captions in instructed language learning. We discuss how captions may help neutralize some of working memory's limiting effects on learning.
Hannah Harvey; Stephen J. Anderson; Robin Walker
In: Optometry and Vision Science, vol. 96, no. 8, pp. 609–616, 2019.
SIGNIFICANCE: Scrolling text can be an effective reading aid for those with central vision loss. Our results suggest that increased interword spacing with scrolling text may further improve the reading experience of this population. This conclusion may be of particular interest to low-vision aid developers and visual rehabilitation practitioners. PURPOSE: The dynamic, horizontally scrolling text format has been shown to improve reading performance in individuals with central visual loss. Here, we sought to determine whether reading performance with scrolling text can be further improved by modulating interword spacing to reduce the effects of visual crowding, a factor known to impact negatively on reading with peripheral vision. METHODS: The effects of interword spacing on reading performance (accuracy, memory recall, and speed) were assessed for eccentrically viewed single sentences of scrolling text. Separate experiments were used to determine whether performance measures were affected by any confound between interword spacing and text presentation rate in words per minute. Normally sighted participants were included, with a central vision loss implemented using a gaze-contingent scotoma of 8° diameter. In both experiments, participants read sentences that were presented with an interword spacing of one, two, or three characters. RESULTS: Reading accuracy and memory recall were significantly enhanced with triple-character interword spacing (both measures, P ≤ .01). These basic findings were independent of the text presentation rate (in words per minute). CONCLUSIONS: We attribute the improvements in reading performance with increased interword spacing to a reduction in the deleterious effects of visual crowding. We conclude that increased interword spacing may enhance reading experience and ability when using horizontally scrolling text with a central vision loss.
Sogand Hasanzadeh; Bac Dao; Behzad Esmaeili; Michael D. Dodd
In: Journal of Construction Engineering and Management, vol. 145, no. 9, pp. 1–14, 2019.
Workers' attentional failures or inattention toward detecting a hazard can lead to inappropriate decisions and unsafe behaviors. Previous research has shown that individual characteristics such as past injury exposure contribute greatly to skill-based (e.g., attention failure) and perception-based (e.g., failure to identify and misperception) errors and subsequent accident involvement. However, a dearth of research has empirically examined how a worker's personality affects his or her attention and hazard identification. This study addresses this knowledge gap by exploring the impacts of personality dimensions on the selective attention of workers exposed to fall hazards. To this end, construction workers were recruited to engage in a laboratory eye-tracking experiment that consisted of 115 potential and active fall scenarios in 35 construction images captured from actual projects within the United States. Construction workers' personalities were assessed through self-completion of the Big Five personality questionnaire, and their visual attention was monitored continuously using a wearable eye-tracking apparatus. The results of the study show that workers' personality dimensions - specifically, extraversion, conscientiousness, and openness to experience - significantly relate to and impact the attentional allocation and search strategies of workers exposed to fall hazards. A more detailed investigation of this connection showed that individuals who are introverted, more conscientious, or more open to experience are less prone to injury and return their attention more frequently to hazardous areas. This study is the first attempt to illustrate how examining relationships among personality, attention, and hazard identification can reveal opportunities for the early detection of at-risk workers who are more likely to be involved in accidents. A better understanding of these connections provides valuable insight into both practice and theory regarding the transformation of current training and educational practices by providing appropriate intervention strategies for personalized safety guidelines and effective training materials to transform personality-driven at-risk workers into safer workers.
Marti Hearst; Emily Pedersen; Lekha Priya Patil; Elsie Lee; Paul Laskowski; Steven L. Franconeri
An evaluation of semantically grouped word cloud designs Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2019.
Word clouds continue to be a popular tool for summarizing textual information, despite their well-documented deficiencies for analytic tasks. Much of their popularity rests on their playful visual appeal. In this paper, we present the results of a series of controlled experiments that show that layouts in which words are arranged into semantically and visually distinct zones are more effective for understanding the underlying topics than standard word cloud layouts. White space separators and/or spatially grouped color coding led to significantly stronger understanding of the underlying topics compared to a standard Wordle layout, while simultaneously scoring higher on measures of aesthetic appeal. This work is an advance on prior research on semantic layouts for word clouds because that prior work has either not ensured that the different semantic groupings are visually or semantically distinct, or has not performed usability studies. An additional contribution of this work is the development of a dataset for a semantic category identification task that can be used for replication of these results or future evaluations of word cloud designs.
Olivier J. Hénaff; Robbe L. T. Goris; Eero P. Simoncelli
Perceptual straightening of natural videos Journal Article
In: Nature Neuroscience, vol. 22, pp. 984–991, 2019.
Many behaviors rely on predictions derived from recent visual input, but the temporal evolution of those inputs is generally complex and difficult to extrapolate. We propose that the visual system transforms these inputs to follow straighter temporal trajectories. To test this 'temporal straightening' hypothesis, we develop a methodology for estimating the curvature of an internal trajectory from human perceptual judgments. We use this to test three distinct predictions: natural sequences that are highly curved in the space of pixel intensities should be substantially straighter perceptually; in contrast, artificial sequences that are straight in the intensity domain should be more curved perceptually; finally, naturalistic sequences that are straight in the intensity domain should be relatively less curved. Perceptual data validate all three predictions, as do population models of the early visual system, providing evidence that the visual system specifically straightens natural videos, offering a solution for tasks that rely on prediction.
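The pixel-domain half of this analysis has a simple computational analogue: a sequence's curvature in intensity space can be estimated as the mean angle between successive frame-difference vectors. The NumPy sketch below is illustrative only, not the authors' code; frames are assumed to be equally sized grayscale arrays.

```python
import numpy as np

def trajectory_curvature(frames):
    """Mean discrete curvature (in degrees) of a frame sequence,
    treating each frame as one point in pixel-intensity space."""
    x = np.asarray([f.ravel().astype(float) for f in frames])
    diffs = np.diff(x, axis=0)                     # displacement vectors
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return float(angles.mean())
```

A linear fade between two frames comes out at zero degrees, while unrelated frames come out highly curved; the perceptual-domain curvature reported in the paper instead requires the authors' psychophysical estimation procedure.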
Aurélie Calabrèse; Carlos Aguilar; Géraldine Faure; Frédéric Matonti; Louis Hoffart; Eric Castet
In: Optometry and Vision Science, vol. 95, no. 9, pp. 738–746, 2018.
SIGNIFICANCE: The overall goal of this work is to validate a low vision aid system that uses gaze as a pointing tool and provides smart magnification. We conclude that smart visual enhancement techniques as well as gaze contingency should improve the efficiency of assistive technology for the visually impaired. PURPOSE: A low vision aid, using gaze-contingent visual enhancement and primarily intended to help reading with central vision loss, was recently designed and tested with a simulated scotoma. Here, we present a validation of this system for face recognition in age-related macular degeneration patients. METHODS: Twelve individuals with binocular central vision loss were recruited and tested on a face identification-matching task. Gaze position was measured in real time with an eye tracker. In the visual enhancement condition, at any time during screen exploration, the fixated face was segregated from the background and treated as a region of interest that could be magnified into a region of augmented vision by the participant, if desired. In the natural exploration condition, participants performed the matching task without the visual aid. Response time and accuracy were analyzed with mixed-effects models to (1) compare performance with and without the visual aid and (2) estimate the usability of the system. RESULTS: On average, the percentage of correct responses in the natural exploration condition was 41%. This value increased significantly to 63% with visual enhancement (95% confidence interval, 45 to 78%). For the large majority of our participants (83%), this improvement was accompanied by a moderate increase in response time, suggesting a real functional benefit for these individuals. CONCLUSIONS: Without visual enhancement, participants with age-related macular degeneration performed poorly, confirming their struggle with face recognition and the need for efficient visual aids. Our system significantly improved face identification accuracy by 55%, proving to be helpful under laboratory conditions.
Tao Deng; Hongmei Yan; Yong Jie Li
In: IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 9, pp. 3059–3067, 2018.
Saliency detection, an important step in many computer vision applications, can, for example, predict where drivers look in a vehicular traffic environment. While many bottom-up and top-down saliency detection models have been proposed for fixation prediction in outdoor scenes, no specific attempt has been made for traffic images. Here, we propose a learning-based saliency detection model built on a random forest (RF) to predict drivers' fixation positions in a driving environment. First, we extract low-level (color, intensity, orientation, etc.) and high-level (e.g., the vanishing point and center bias) features and then predict the fixation points via RF-based learning. Finally, we evaluate the performance of our saliency prediction model qualitatively and quantitatively. We use quantitative evaluation metrics that include the revised receiver operating characteristic (ROC), the area under the ROC curve, and the normalized scan-path saliency score. The experimental results on real traffic images indicate that our model can more accurately predict a driver's fixation area while driving than state-of-the-art bottom-up saliency models.
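Of the metrics listed, the normalized scan-path saliency (NSS) score is the most direct to compute: z-score the predicted saliency map, then average it at the recorded human fixation locations, so chance performance is 0 and higher is better. A minimal NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized scan-path saliency: z-score the predicted map and
    average it at the human fixation locations ((row, col) pairs)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())
```

A map that concentrates mass where observers actually fixated scores well above zero; a map unrelated to the fixations scores near zero.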
Ashleigh J. Filtness; Vanessa Beanland
Sleep loss and change detection in driving scenes Journal Article
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 57, pp. 10–22, 2018.
Driver sleepiness is a significant road safety problem. Sleep-related crashes occur on both urban and rural roads, yet to date driver-sleepiness research has focused on understanding impairment in rural and motorway driving. The ability to detect changes is an attention and awareness skill vital for everyday safe driving. Previous research has demonstrated that person states, such as age or motivation, influence susceptibility to change blindness (i.e., failure or delay in detecting changes). The current work considers whether sleepiness increases the likelihood of change blindness within urban and rural driving contexts. Twenty fully-licenced drivers completed a change detection 'flicker' task twice in a counterbalanced design: once following a normal night of sleep (7–8 h) and once following sleep restriction (5 h). Change detection accuracy and response time were recorded while eye movements were continuously tracked. Accuracy was not significantly affected by sleep loss; however, following sleep loss there was some evidence of slowed change detection responses to urban images, but faster responses for rural images. Visual scanning across the images remained consistent between sleep conditions, resulting in no difference in the probability of fixating on the change target. Overall, the results suggest that sleep loss has minimal impact on change detection accuracy and visual scanning for changes in driving scenes. However, a subtle difference in response time to change detection between urban and rural images indicates that change blindness may have implications for sleep-related crashes in more visually complex urban environments. Further research is needed to confirm this finding.
Lisena Hasanaj; Sujata P. Thawani; Nikki Webb; Julia D. Drattell; Liliana Serrano; Rachel C. Nolan; Jenelle Raynowska; Todd E. Hudson; John-Ross Rizzo; Weiwei Dai; Bryan McComb; Judith D. Goldberg; Janet C. Rucker; Steven L. Galetta; Laura J. Balcer
In: Journal of Neuro-Ophthalmology, vol. 38, no. 1, pp. 24–29, 2018.
Objective: We determined the relation of rapid number naming time scores on the King-Devick (K-D) test to video-oculographic eye movement performance during pre-season baseline assessments in a collegiate ice hockey team cohort. Background: The K-D test is a reliable visual performance measure that is a sensitive sideline indicator of concussion when time scores worsen (lengthen) from pre-season baseline. Methods: Athletes from a collegiate ice hockey team received pre-season baseline testing as part of an ongoing study of rapid sideline/rinkside performance measures for concussion. These included the K-D test (spiral-bound cards and tablet computer versions). Participants also performed a laboratory-based version of the K-D test with simultaneous infrared-based video-oculographic recordings using an EyeLink 1000+. This allowed measurement of temporal and spatial characteristics of eye movements, including saccade velocity, duration, and inter-saccadic intervals. Results: Among 13 male athletes, aged 18 to 23 years (mean 20.5 ± 1.6 years), prolongation of the inter-saccadic interval (ISI, a combined measure of saccade latency and fixation duration) was the eye movement measure most associated with slower baseline K-D scores (mean 38.2 ± 6.2 seconds).
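The inter-saccadic interval used here (the time from the end of one saccade to the onset of the next, combining saccade latency and fixation duration) can be computed directly once saccade events have been parsed from the recording. A minimal sketch, assuming the events are already available as time-ordered (onset, offset) pairs in milliseconds:

```python
def inter_saccadic_intervals(saccades):
    """ISIs in ms: the gap between each saccade's offset and the next
    saccade's onset; `saccades` is time-ordered (onset, offset) pairs."""
    return [nxt_on - cur_off
            for (_, cur_off), (nxt_on, _) in zip(saccades, saccades[1:])]
```

For example, saccades ending at 30 ms and 260 ms with the next onsets at 230 ms and 500 ms yield ISIs of 200 ms and 240 ms.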
Katja I. Häuser; Vera Demberg; Jutta Kray
In: Psychology and Aging, vol. 33, no. 8, pp. 1168–1180, 2018.
Even though older adults are known to have difficulty at language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language in dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging.
Claire Louise Heard; Tim Rakow; Tom Foulsham
In: Medical Decision Making, vol. 38, no. 6, pp. 646–657, 2018.
Background. Past research finds that treatment evaluations are more negative when risks are presented after benefits. This study investigates this order effect: manipulating tabular orientation and order of risk–benefit information, and examining information search order and gaze duration via eye-tracking. Design. 108 (Study 1) and 44 (Study 2) participants viewed information about treatment risks and benefits, in either a horizontal (left-right) or vertical (above-below) orientation, with the benefits or risks presented first (left side or at top). For 4 scenarios, participants answered 6 treatment evaluation questions (1–7 scales) that were combined into overall evaluation scores. In addition, Study 2 collected eye-tracking data during the benefit–risk presentation. Results. Participants tended to read one set of information (i.e., all risks or all benefits) before transitioning to the other. Analysis of order of fixations showed this tendency was stronger in the vertical orientation (standardized mean rank difference further from 0).
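The block-wise reading tendency described in the Results can be quantified by counting switches between the risk and benefit areas of interest in the ordered fixation sequence: the fewer the switches, the more a reader finished one block before moving to the other. A hypothetical sketch (the AOI labels are illustrative, not the authors' coding scheme):

```python
def aoi_transitions(fixation_aois):
    """Count switches between 'risk' and 'benefit' fixations,
    ignoring fixations that fall outside either information block."""
    seq = [a for a in fixation_aois if a in ("risk", "benefit")]
    return sum(prev != cur for prev, cur in zip(seq, seq[1:]))
```

Purely block-wise reading of a two-block table produces exactly one transition; frequent back-and-forth comparison produces many more.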
Lukáš Hejtmánek; Ivana Oravcová; Jiří Motýl; Jiří Horáček; Iveta Fajnerová
In: International Journal of Human-Computer Studies, vol. 116, pp. 15–24, 2018.
There is a vibrant debate about the consequences of mobile devices for our cognitive capabilities. Use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual and real-world experiments. Our goal was to investigate how the attention people pay to a GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment, and during the experiment we measured participants' eye movements. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spend with the GPS-like map, the less accurate spatial knowledge they manifest and the longer the paths they travel without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants' subjective evaluation of their own navigation skills, with less confident navigators using the GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person's spatial learning, their general use is modulated by a perception of one's own navigation abilities.
James H. Smith-Spark; Hillary B. Katz; Thomas D. W. Wilcockson; Alexander P. Marchant
In: International Journal of Industrial Ergonomics, vol. 68, pp. 118–124, 2018.
Quality control checkers at fresh produce packaging facilities occasionally fail to detect incorrect information presented on labels. Despite being infrequent, such errors have significant financial and environmental repercussions. To understand why label-checking errors occur, observations and interviews were undertaken at a large packaging facility and followed up with a laboratory-based label-checking task. The observations highlighted the dynamic, complex environment in which label-checking took place, whilst the interviews revealed that operatives had not received formal training in label-checking. On the laboratory-based task, overall error detection accuracy was high but considerable individual differences were found between professional label-checkers. Response times were shorter when participants failed to detect label errors, suggesting incomplete checking or ineffective checking strategies. Furthermore, eye movement recordings indicated that checkers who adopted a systematic approach to checking were more successful in detecting errors. The extent to which a label checker adopted a systematic approach was not found to correlate with the number of years of experience that they had accrued in label-checking. To minimize the chances of label errors going undetected, explicit instruction and training, personnel selection and/or the use of software to guide performance towards a more systematic approach is recommended.
Jiaxin Wu; Sheng Zhong; Zheng Ma; Stephen J. Heinen; Jianmin Jiang
Foveated convolutional neural networks for video summarization Journal Article
In: Multimedia Tools and Applications, vol. 77, no. 22, pp. 29245–29267, 2018.
With the proliferation of video data, video summarization is an ideal tool for users to browse video content rapidly. In this paper, we propose novel foveated convolutional neural networks for dynamic video summarization. We are the first to integrate gaze information into a deep learning network for video summarization. Foveated images are constructed based on subjects' eye movements to represent the spatial information of the input video. Multi-frame motion vectors are stacked across several adjacent frames to convey the motion clues. To evaluate the proposed method, experiments are conducted on two video summarization benchmark datasets. The experimental results validate the effectiveness of the gaze information for video summarization, despite the fact that the eye movements were collected from different subjects than those who generated the summaries. Empirical validations also demonstrate that our proposed foveated convolutional neural networks for video summarization can achieve state-of-the-art performance on these benchmark datasets.
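The foveation step, building gaze-weighted input images from subjects' eye movements, can be approximated by blending a sharp and a blurred copy of each frame under a Gaussian acuity mask centred on the gaze position. The NumPy sketch below is an illustrative approximation for single-channel frames (a box blur standing in for peripheral resolution fall-off), not the authors' pipeline:

```python
import numpy as np

def box_blur(img, k):
    """Naive (2k+1) x (2k+1) box blur with edge padding."""
    p = np.pad(img.astype(float), k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:k + dy + img.shape[0],
                     k + dx:k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def foveate(img, gaze, sigma=20.0, k=3):
    """Keep the image sharp near `gaze` (row, col) and blurred in the
    periphery, via a Gaussian blending mask of width `sigma` pixels."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-((yy - gaze[0]) ** 2 + (xx - gaze[1]) ** 2)
                  / (2 * sigma ** 2))
    return mask * img + (1.0 - mask) * box_blur(img, k)
```

Pixels at the gaze point pass through unchanged, while distant pixels converge to the blurred copy, mimicking the retinal acuity gradient the paper exploits.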
Jiahui Wang; Pavlo Antonenko; Mehmet Celepkolu; Yerika Jimenez; Ethan Fieldman; Ashley Fieldman
In: International Journal of Human-Computer Interaction, pp. 1–12, 2018.
This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks.
Archonteia Kyroudi; Kristoffer Petersson; Mahmut Ozsahin; Jean Bourhis; François Bochud; Raphaël Moeckli
In: Zeitschrift fur Medizinische Physik, vol. 28, no. 4, pp. 318–324, 2018.
Background and purpose: Treatment plan evaluation is a clinical decision-making problem that involves visual search and analysis in a contextually rich environment, including delineated structures and isodose lines superposed on CT data. It is a two-step process that includes visual analysis and clinical reasoning. In this work, we used eye tracking methods to gain more knowledge about the treatment plan evaluation process in radiation therapy. Materials and methods: Dose distributions on a single transverse slice of ten prostate cancer treatment plans were presented to eight decision makers. Their eye movements and fixations were recorded with an EyeLink1000 remote eye-tracker. Total evaluation time, dwell time, number and duration of fixations on pre-segmented areas of interest were measured. Results: The main structures receiving more and longer fixations (PTV, rectum, bladder) correspond to the main trade-offs evaluated in a typical prostate plan. Radiation oncologists made more fixations on the main structures compared to the medical physicists. Radiation oncologists fixated longer on the rectum when visited for the first time, while medical physicists fixated longer on the bladder. Conclusion: Our results quantify differences in the visual evaluation patterns between radiation oncologists and medical physicists, which indicate differences in their decision making strategies.
Allison M. Londerée; Megan E. Roberts; Mary E. Wewers; Ellen Peters; Amy K. Ferketich; Dylan D. Wagner
In: Tobacco Regulatory Science, vol. 4, no. 6, pp. 57–65, 2018.
Objectives: E-cigarettes are now the most commonly-used tobacco product among adolescents; yet, little work has examined how the appealing food and flavor cues used in their marketing might attract adolescents' attention, thereby increasing willingness to try these products. In the present study, we tested whether advertisements for fruit/sweet/savory-flavored (“flavored”) e-cigarettes attracted adolescent attention in real-world scenes more than tobacco flavored (“unflavored”) e-cigarettes. Additionally, we examined the relationship between adolescent attentional bias and willingness to try flavored e-cigarettes. Methods: Participants were 46 adolescents (age range: 16-18 years). All participants took part in an eye-tracking paradigm that examined attentional bias to flavored and unflavored e-cigarette advertisements embedded in pictures of real-world storefront scenes. Afterwards, participants' willingness to try flavored and unflavored e-cigarettes was assessed. Results: In support of our primary hypothesis, adolescents looked longer and fixated more frequently on flavored (vs unflavored) e-cigarette advertisements. Moreover, this attentional bias towards flavored e-cigarette advertisements predicted a greater willingness to try flavored vs unflavored e-cigarettes. Conclusions: These findings suggest that flavored e-cigarette marketing attracts the attention of adolescents, increases their willingness to try flavored e-cigarette products, and could, therefore, put them at greater risk for tobacco initiation.
Daniel S. McGrath; Amadeus Meitner; Christopher R. Sears
In: PLoS ONE, vol. 13, no. 1, pp. e0190614, 2018.
A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.
Bettina Olk; Alina Dinu; David J. Zielinski; Regis Kopper
In: Royal Society Open Science, vol. 5, pp. 1–15, 2018.
An important issue of psychological research is how experiments conducted in the laboratory or theories based on such experiments relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to accomplish a transfer of traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity we explored attentional effects with daily objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target–distractor discriminability was varied and the displays were accompanied by a peripheral flanker that was congruent or incongruent to the target. Reaction time was slower when target–distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we provide an outline for future studies that should use features of VR that are not available in traditional laboratory research.
David Randall; Helen Griffiths; Gemma Arblaster; Anne Bjerre; John Fenner
Simulation of oscillopsia in virtual reality
In: British and Irish Orthoptic Journal, vol. 14, no. 1, pp. 1–5, 2018.
PURPOSE: Nystagmus is characterised by involuntary eye movement. A proportion of those with nystagmus experience the world constantly in motion as their eyes move: a symptom known as oscillopsia. Individuals with oscillopsia can be incapacitated and often feel neglected due to limited treatment options. Effective communication of the condition is challenging and no tools to aid communication exist. This paper describes a virtual reality (VR) application that recreates the effects of oscillopsia, enabling others to appreciate the condition. METHODS: Eye tracking data was incorporated into a VR oscillopsia simulator and released as a smartphone app - "Nystagmus Oscillopsia Sim VR". When a smartphone is used in conjunction with a Google Cardboard headset, it presents an erratic image consistent with oscillopsia. The oscillopsia simulation was appraised by six participants for its representativeness. These individuals have nystagmus and had previously experienced oscillopsia but were not currently symptomatic; they were therefore uniquely placed to judge the app. The participants filled in a questionnaire to record impressions and the usefulness of the app. RESULTS: The published app has been downloaded ~3700 times (28/02/2018) and received positive feedback from the nystagmus community. The validation study questionnaire scored the accuracy of the simulation an average of 7.8/10 while its ability to aid communication received 9.2/10. CONCLUSION: The evidence indicates that the simulation can effectively recreate the sensation of oscillopsia and facilitate effective communication of the symptoms associated with the condition. This has implications for communication of other visual conditions.
Nicola Binetti; Charlotte Harrison; Isabelle Mareschal; Alan Johnston
Pupil response hazard rates predict perceived gaze durations
In: Scientific Reports, vol. 7, pp. 3969, 2017.
We investigated the mechanisms for evaluating perceived gaze-shift duration. Timing relies on the accumulation of endogenous physiological signals. Here we focused on arousal, measured through pupil dilation, as a candidate timing signal. Participants timed gaze-shifts performed by face stimuli in a Standard/Probe comparison task. Pupil responses were binned according to "Longer/Shorter" judgements in trials where Standard and Probe were identical. This ensured that pupil responses reflected endogenous arousal fluctuations as opposed to differences in stimulus content. We found that pupil hazard rates predicted the classification of sub-second intervals (steeper dilation = "Longer" classifications). This shows that the accumulation of endogenous arousal signals informs gaze-shift timing judgements. We also found that participants relied exclusively on the 2nd stimulus to perform the classification, providing insights into timing strategies under conditions of maximum uncertainty. We observed no dissociation in pupil responses when timing equivalent neutral spatial displacements, indicating that a stimulus-dependent timer exploits arousal to time gaze-shifts.
Avigael M. Aizenman; Trafton Drew; Krista A. Ehinger; Dianne Georgian-Smith; Jeremy M. Wolfe
In: Journal of Medical Imaging, vol. 4, no. 4, pp. 1–22, 2017.
As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified "driller" or "scanner" strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy.
Elham Azizi; Larry Allen Abel; Matthew J. Stainer
In: Attention, Perception, and Psychophysics, vol. 79, no. 2, pp. 484–497, 2017.
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
Jan W. Brascamp; Marnix Naber
In: Behavior Research Methods, vol. 49, no. 4, pp. 1303–1309, 2017.
In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. Although the method we propose is straightforward, affordable (on the order of US$1,000) and easy to implement, for many purposes it makes for an improvement over existing methods, which tend to require specialized equipment and often compromise on the quality of the visual stimulus and/or the eye tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories.
Etzel Cardeña; Barbara Nordhjem; David Marcusson-Clavertz; Kenneth Holmqvist
In: PLoS ONE, vol. 12, no. 8, pp. e0182546, 2017.
Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks had been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis. Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state." Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors.
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken
In: Meta, vol. 62, no. 2, pp. 245–270, 2017.
While the benefits of using post-editing for technical texts have been more or less acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and how they are performed by students as well as professionals, so that pitfalls can be determined and translator training can be adapted accordingly. In this article, we aim to get a better understanding of the differences between human translation and post-editing for newspaper articles. Processes are registered by means of eye tracking and keystroke logging, which allows us to study translation speed, cognitive load, and the use of external resources. We also look at the final quality of the product as well as translators' attitude towards both methods of translation. Studying these different aspects shows that both methods and groups are more similar than anticipated.
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken
In: Frontiers in Psychology, vol. 8, pp. 1282, 2017.
Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools automatically predict whether it is more effortful to post-edit than to translate from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems heavily rely on automatic metrics, even though they do not accurately capture actual post-editing effort. In addition, these systems do not take translator experience into account, even though novices' translation processes are different from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with that on various process effort indicators. The translation and post-editing process of student translators and professional translators was logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected.
Ewa Domaradzka; Maksymilian Bielecki
In: Frontiers in Psychology, vol. 8, pp. 1365, 2017.
Numerous studies have shown that biases in visual attention might be evoked by affective and personally relevant stimuli, for example addiction-related objects. Despite the fact that addiction is often linked to specific products and systematic purchase behaviors, no studies focused directly on the existence of bias evoked by brands. Smokers are characterized by high levels of brand loyalty and everyday contact with cigarette packaging. Using the incentive-salience mechanism as a theoretical framework, we hypothesized that this group might exhibit a bias toward the preferred cigarette brand. In our study, a group of smokers (N = 40) performed a dot probe task while their eye movements were recorded. In every trial a pair of pictures was presented – each of them showed a single cigarette pack. The visual properties of stimuli were carefully controlled, so branding information was the key factor affecting subjects' reactions. For each participant, we compared gaze behavior related to the preferred vs. other brands. The analyses revealed no attentional bias in the early, orienting phase of the stimulus processing and strong differences in maintenance and disengagement. Participants spent more time looking at the preferred cigarettes and saccades starting at the preferred brand location had longer latencies. In sum, our data shows that attentional bias toward brands might be found in situations not involving choice or decision making. These results provide important insights into the mechanisms of formation and maintenance of attentional biases to stimuli of personal relevance and might serve as a first step toward developing new attitude measurement techniques.
Mackenzie G. Glaholt; Grace Sim
In: Journal of Imaging Science and Technology, vol. 61, no. 1, pp. 230–235, 2017.
We investigated gaze-contingent fusion of infrared imagery during visual search. Eye movements were monitored while subjects searched for and identified human targets in images captured simultaneously in the short-wave (SWIR) and long-wave (LWIR) infrared bands. Based on the subject's gaze position, the search display was updated such that imagery from one sensor was continuously presented to the subject's central visual field ("center") and another sensor was presented to the subject's non-central visual field ("surround"). Analysis of performance data indicated that, compared to the other combinations, the scheme featuring SWIR imagery in the center region and LWIR imagery in the surround region constituted an optimal combination of the SWIR and LWIR information: it inherited the superior target detection performance of LWIR imagery and the superior target identification performance of SWIR imagery. This demonstrates a novel method for efficiently combining imagery from two infrared sources as an alternative to conventional image fusion.
Elise Grison; Valérie Gyselinck; Jean Marie Burkhardt; Jan M. Wiener
In: Psychological Research, vol. 81, no. 5, pp. 1020–1034, 2017.
Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate psychological processes and mechanisms involved in such route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to the current theories of route planning.
Jessica Hanley; David E. Warren; Natalie Glass; Daniel Tranel; Matthew Karam; Joseph Buckwalter
In: The Iowa Orthopaedic Journal, vol. 37, pp. 225–231, 2017.
BACKGROUND: Despite the importance of radiographic interpretation in orthopaedics, there is not a clear understanding of the specific visual strategies used while analyzing a plain film. Eyetracking technology allows for the objective study of eye movements while performing a dynamic task, such as reading X-rays. Our study looks to elucidate objective differences in image interpretation between novice and experienced orthopaedic trainees using this novel technology. METHODS: Novice and experienced orthopaedic trainees (N=23) were asked to interpret AP pelvis films, searching for unilateral acetabular fractures while eye-movements were assessed for pattern of gaze, fixation on regions of interest, and time of fixation at regions of interest. Participants were asked to label radiographs as "fractured" or "not fractured." If "fractured", the participant was asked to determine the fracture pattern. A control condition employed Ekman faces and participants judged gender and facial emotion. Data were analyzed for variation in eye movements between participants, accuracy of responses, and response time. RESULTS: Accuracy: There was no significant difference by level of training for accurately identifying fracture images (p=0.3255). There was a significant association between higher level of training and correctly identifying non-fractured images (p=0.0155); greater training was also associated with more success in identifying the correct Judet-Letournel classification (p=0.0029). Response Time: Greater training was associated with faster response times (p=0.0009 for fracture images and 0.0012 for non-fractured images). Fixation Duration: There was no correlation of average fixation duration with experience (p=0.9632). Regions of Interest (ROIs): More experience was associated with an average of two fewer fixated ROIs (p=0.0047). Number of Fixations: Increased experience was associated with fewer fixations overall (p=0.0007).
CONCLUSIONS: Experience has a significant impact on both accuracy and efficiency in interpreting plain films. Greater training is associated with a shift toward a more efficient and thorough assessment of plain radiographs. Eyetracking is a useful descriptive tool in the setting of plain film interpretation. CLINICAL RELEVANCE: We propose further assessment of eye movements in larger populations of orthopaedic surgeons, including staff orthopaedists. Describing the differences between novice and expert interpretation may provide insight into ways to accelerate the learning process in young orthopaedists.
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd
In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017.
Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. 
The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs.
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd
In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017.
Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors--such as failure to identify a hazard--are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics--fixation count, dwell-time percentage, and run count--were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. 
The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills and at-risk workers. Because discriminant function analysis is similar to regression, discriminant function (linear combinations of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye- movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time.
Matthew Heath; Erin M. Shellington; Sam Titheridge; Dawn P. Gill; Robert J. Petrella
In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017.
Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., eye movement mirror-symmetrical to a target). Antisaccades are an ideal tool for the study of individuals with subtle executive deficits because of its hands- and language-free nature and because the task's neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention and the magnitude of the decrease was consistent across groups. Thus, multi-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC.
Yu Cin Jian
In: Reading and Writing, vol. 30, no. 7, pp. 1447–1472, 2017.
This study investigated the cognitive processes and reader characteristics of sixth graders who had good and poor performance when reading scientific text with diagrams. We first measured the reading ability and reading self-efficacy of sixth-grade participants, and then recorded their eye movements while they were reading an illustrated scientific text and scored their answers to content-related questions. Finally, the participants evaluated the difficulty of the article, the attractiveness of the content and diagram, and their learning performance. The participants were then classified into groups based on how many correct responses they gave to questions related to reading. The results showed that readers with good performance had better character recognition ability and reading self-efficacy, were more attracted to the diagrams, and had higher self-evaluated learning levels than the readers with poor performance did. Eye-movement data indicated that readers with good performance spent significantly more reading time on the whole article, the text section, and the diagram section than the readers with poor performance did. Interestingly, readers with good performance had significantly longer mean fixation duration on the diagrams than readers with poor performance did; further, readers with good performance made more saccades between the text and the diagrams. Additionally, sequential analysis of eye movements showed that readers with good performance preferred to observe the diagram rather than the text after reading the title, but this tendency was not present in readers with poor performance. In sum, using eye-tracking technology and several reading tests and questionnaires, we found that various cognitive aspects (reading strategy, diagram utilization) and affective aspects (reading self-efficacy, article likeness, diagram attraction, and self-evaluation of learning) affected sixth graders' reading performance in this study.
Yu Cin Jian; Hwa Wei Ko
In: Computers and Education, vol. 113, pp. 263–279, 2017.
In this study, eye movement recordings and comprehension tests were used to investigate children's cognitive processes and comprehension when reading illustrated science texts. Ten-year-old children (N = 42) with high or low reading ability, who were beginning to read to learn, read two illustrated science texts in Chinese (one medium-difficult article, one difficult article), and then answered questions that measured comprehension of textual and pictorial information as well as text-and-picture integration. The high-ability group outperformed the low-ability group on all questions. Eye movement analyses showed that both groups of students spent roughly the same amount of time reading both articles, but read them in different ways. The low-ability group was inclined to read what seemed easier to them and read the text more. The high-ability group attended more to the difficult article and made an effort to integrate the textual and pictorial information. During a first-pass reading of the difficult article, high- but not low-ability readers returned to the previous paragraph. The low-ability readers spent more time reading the less difficult article rather than the difficult one, a pattern that requires teachers' attention. Suggestions for classroom instruction are proposed accordingly.
Shijian Luo; Yi Hu; Yuxiao Zhou
In: Frontiers of Computer Science, vol. 11, no. 2, pp. 290–306, 2017.
Smartphone applications (apps) are becoming increasingly popular all over the world, particularly in the Chinese Generation Y population; however, surprisingly, only a small number of studies on app factors valued by this important group have been conducted. Because the competition among app developers is increasing, app factors that attract users' attention are worth studying for sales promotion. This paper examines these factors through two separate studies. In the first study, i.e., Experiment 1, which consists of a survey, perceptual rating and verbal protocol methods are employed, and 90 randomly selected app websites are rated by 169 experienced smartphone users according to app attraction. Twelve of the most rated apps (six highest rated and six lowest rated) are selected for further investigation, and 11 influential factors that Generation Y members value are listed. A second study, i.e., Experiment 2, is conducted using the most and least rated app websites from Experiment 1, and eye tracking and verbal protocol methods are used. The eye movements of 45 participants are tracked while browsing these websites, providing evidence about what attracts these users' attention and the order in which the app components are viewed. The results of these two studies suggest that Chinese Generation Y is a content-centric group when they browse the smartphone app marketplace. Icon, screenshot, price, rating, and name are the dominant and indispensable factors that influence purchase intentions, among which icon and screenshot should be meticulously designed. Price is another key factor that drives Chinese Generation Y's attention. The recommended apps are the least dominant element. Design suggestions for app websites are also proposed. This research has important implications.
Min-Yuan Ma; Hsien-Chih Chuang
In: International Journal of Technology and Design Education, vol. 27, no. 1, pp. 149–164, 2017.
Type design is the process of re-organizing visual elements and their corresponding meanings into a new organic entity, particularly for the highly logographic Chinese characters whose intrinsic features are retained even after reorganization. Due to this advantage, designers believe that such a re-organization process will not affect Chinese character recognition. However, not having an effect on recognition is not the same as not affecting the viewing process, especially when the character is so highly deconstructed that, along with the viewing process, the original intention of the design and its efficacy are both indirectly affected. Therefore, besides capturing the changes of character features, a good type designer should understand how characters are viewed. Past studies have found that character structure will affect character recognition, particularly for enclosed and non-enclosed characters whose differences are significant, although the interpretation of such differences remains open for discussion. This study explored the viewing process of Chinese characters with eye-tracking methods and calculated the concentration and saccadic amplitude of fixation in the viewing process in terms of the descriptive approach in a geographic information system, so as to investigate the differences among types of character modules with the spatial dispersion index. This study found that the overall vision when viewing enclosed structures is more concentrated than non-enclosed structures.
Andrew K. Mackenzie; Julie M. Harris
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 2, pp. 381–394, 2017.
The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limit the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed two variations of a multiple object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual's attentional function, effective eye movements and driving ability. We also discuss the use of a visuomotor task in assessing driving behaviour.
Yousri Marzouki; Valériane Dusaucy; Myriam Chanceaux; Sebastiaan Mathôt
The World (of Warcraft) through the eyes of an expert Journal Article
In: PeerJ, vol. 5, pp. 1–21, 2017.
Negative correlations between pupil size and the tendency to look at salient locations have been found in recent studies (e.g., Mathôt et al., 2015). It has been hypothesized that this negative correlation might be explained by the mental effort participants put into the task, which in turn leads to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because there is no standard tool available to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge about WoW and skills acquired through completed raids, highest-rated battlegrounds, Skill Points, etc. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differed with regard to their content (i.e., informative locations) and visual complexity (i.e., salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts
Olivia M. Maynard; Jonathan C. W. Brooks; Marcus R. Munafò; Ute Leonards
In: Addiction, vol. 112, no. 4, pp. 662–672, 2017.
Aims: To (1) test if activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differ when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. Design: Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. Setting: Clinical Research and Imaging Centre, University of Bristol, UK. Participants: Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. Measurements: Brain activity was assessed in whole brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. On-line eye-tracking during scanning recorded visual attention to health warnings. Findings: There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). Conclusions: Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on ‘plain' cigarette packs than branded packs.
Rebecca L. Monk; J. Westwood; Derek Heim; Adam W. Qureshi
In: Journal of Applied Social Psychology, vol. 47, no. 3, pp. 158–164, 2017.
To examine attention levels to different types of alcohol warning labels. Twenty-two participants viewed neutral or graphic warning messages while dwell times for the text and image components of the messages were assessed. Pre- and post-exposure outcome expectancies were assessed in order to compute change scores. Dwell times were significantly higher for the image, as opposed to the text, components of warnings, irrespective of image type. Participants whose expectancies increased after exposure to the warnings spent longer looking at the image than did those whose positive expectancies remained static or decreased. Images in alcohol warnings appear beneficial for drawing attention, although the findings may suggest that this is also associated with heightened positive alcohol-related beliefs. Implications for health intervention are discussed and future research in this area is recommended.
Parashkev Nachev; Geoff E. Rose; David H. Verity; Sanjay G. Manohar; Kelly MacKenzie; Gill Adams; Maria Theodorou; Quentin A. Pankhurst; Christopher Kennard
Magnetic oculomotor prosthetics for acquired nystagmus Journal Article
In: Ophthalmology, vol. 124, no. 10, pp. 1556–1564, 2017.
Purpose: Acquired nystagmus, a highly symptomatic consequence of damage to the substrates of oculomotor control, often is resistant to pharmacotherapy. Although heterogeneous in its neural cause, its expression is unified at the effector—the eye muscles themselves—where physical damping of the oscillation offers an alternative approach. Because direct surgical fixation would immobilize the globe, action at a distance is required to damp the oscillation at the point of fixation, allowing unhindered gaze shifts at other times. Implementing this idea magnetically, herein we describe the successful implantation of a novel magnetic oculomotor prosthesis in a patient. Design: Case report of a pilot, experimental intervention. Participant: A 49-year-old man with longstanding, medication-resistant, upbeat nystagmus resulting from a paraneoplastic syndrome caused by stage 2A, grade I, nodular sclerosing Hodgkin's lymphoma. Methods: We designed a 2-part, titanium-encased, rare-earth magnet oculomotor prosthesis, powered to damp nystagmus without interfering with the larger forces involved in saccades. Its damping effects were confirmed when applied externally. We proceeded to implant the device in the patient, comparing visual functions and high-resolution oculography before and after implantation and monitoring the patient for more than 4 years after surgery. Main Outcome Measures: We recorded Snellen visual acuity before and after intervention, as well as the amplitude, drift velocity, frequency, and intensity of the nystagmus in each eye. Results: The patient reported a clinically significant improvement of 1 line of Snellen acuity (from 6/9 bilaterally to 6/6 on the left and 6/5–2 on the right), reflecting an objectively measured reduction in the amplitude, drift velocity, frequency, and intensity of the nystagmus. These improvements were maintained throughout a follow-up of 4 years and enabled him to return to paid employment.
Conclusions: This work opens a new field of implantable therapeutic devices—oculomotor prosthetics—designed to modify eye movements dynamically by physical means in cases where a purely neural approach is ineffective. Applied to acquired nystagmus refractory to all other interventions, it is shown successfully to damp pathologic eye oscillations while allowing normal saccadic shifts of gaze.
Andrew D. Ogle; Dan J. Graham; Rachel G. Lucas-Thompson; Christina A. Roberto
In: Journal of the Academy of Nutrition and Dietetics, vol. 117, no. 2, pp. 265–270, 2017.
Background: Over-consuming unhealthful foods and beverages contributes to pediatric obesity and associated diseases. Food marketing influences children's food preferences, choices, and intake. Objective: To examine whether adding licensed media characters to healthful food/beverage packages increases children's attention to and preference for these products. We hypothesized that children prefer less- (vs more-) healthful foods, and pay greater attention to and preferentially select products with (vs without) media characters regardless of nutritional quality. We also hypothesized that children prefer more-healthful products when characters are present over less-healthful products without characters. Design: On a computer, participants viewed food/beverage pairs of more-healthful and less-healthful versions of similar products. The same products were shown with and without licensed characters on the packaging. An eye-tracking camera monitored participant gaze, and participants chose which product they preferred from each of 60 pairs. Participants/setting: Six- to 9-year-old children (n=149; mean age=7.36, standard deviation=1.12) recruited from the Twin Cities, MN, area in 2012-2013. Main outcome measures: Visual attention and product choice. Statistical analyses performed: Attention to products was compared using paired-samples t tests, and product choice was analyzed with single-sample t tests. Analyses of variance were conducted to test for interaction effects of specific characters and child sex and age. Results: Children paid more attention to products with characters and preferred less-healthful products. Contrary to our prediction, children chose products without characters approximately 62% of the time. Children's choices significantly differed based on age, sex, and the specific cartoon character displayed, with characters in this study being preferred by younger boys.
Conclusions: Results suggest that putting licensed media characters on more-healthful food/beverage products might not encourage all children to make healthier food choices, but could increase selection of healthy foods among some, particularly younger children, boys, and those who like the featured character(s). Effective use likely requires careful demographic targeting.
Cheng S. Qian; Jan W. Brascamp
In: Journal of Visualized Experiments, no. 127, pp. 1–9, 2017.
The presentation of different stimuli to the two eyes, dichoptic presentation, is essential for studies involving 3D vision and interocular suppression. There is a growing literature on the unique experimental value of pupillary and oculomotor measures, especially for research on interocular suppression. Although obtaining eye-tracking measures would thus benefit studies that use dichoptic presentation, the hardware essential for dichoptic presentation (e.g., mirrors) often interferes with high-quality eye tracking, especially when using a video-based eye tracker. We recently described an experimental setup that combines a standard dichoptic presentation system with an infrared eye tracker by using infrared-transparent mirrors. The setup is compatible with standard monitors and eye trackers, easy to implement, and affordable (on the order of US$1,000). Relative to existing methods it has the benefits of not requiring special equipment and posing few limits on the nature and quality of the visual stimulus. Here we provide a visual guide to the construction and use of our setup.
Ioannis Rigas; Oleg V. Komogortsev
In: Image and Vision Computing, vol. 58, pp. 129–141, 2017.
At the onset of the second decade of research in eye movement biometrics, the results demonstrated so far strongly support the promising prospects of the field. This paper presents a description of the research conducted in eye movement biometrics based on an extended analysis of the characteristics and results of the "BioEye 2015: Competition on Biometrics via Eye Movements." This extended presentation can contribute to the understanding of the current level of research in eye movement biometrics, covering areas such as previous work in the field, the procedures for the creation of a database of eye movement recordings, and the different approaches that can be used for the analysis of eye movements. Also, the presented results from the BioEye 2015 competition demonstrate the potential identification accuracy that can be achieved under easier and more difficult scenarios. Based on this presentation, we discuss topics related to the current status of eye movement biometrics and suggest possible directions for future research in the field.
Aiping Xiong; Robert W. Proctor; Weining Yang; Ninghui Li
In: Human Factors, vol. 59, no. 4, pp. 640–660, 2017.
OBJECTIVE: To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. BACKGROUND: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. METHOD: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. RESULTS: Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. CONCLUSION: Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. APPLICATION: Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.
Philip R. K. Turnbull; John R. Phillips
Ocular effects of virtual reality headset wear in young adults Journal Article
In: Scientific Reports, vol. 7, pp. 16172, 2017.
Virtual Reality (VR) headsets create immersion by displaying images on screens placed very close to the eyes, which are viewed through high-powered lenses. Here we investigate whether this viewing arrangement alters the binocular status of the eyes, and whether it is likely to provide a stimulus for myopia development. We compared binocular status after 40-minute trials in indoor and outdoor environments, in both real and virtual worlds. We also measured the change in thickness of the ocular choroid, to assess the likely presence of signals for ocular growth and myopia development. We found that changes in binocular posture at distance and near, gaze stability, amplitude of accommodation and stereopsis were not different after exposure to each of the 4 environments. Thus, we found no evidence that the VR optical arrangement had an adverse effect on the binocular status of the eyes in the short term. Choroidal thickness did not change after either real-world trial, but there was a significant thickening (≈10 microns) after each VR trial (p < 0.001). The choroidal thickening which we observed suggests that a VR headset may not be a myopiagenic stimulus, despite the very close viewing distances involved.
Lauren H. Williams; Trafton Drew
In: Cognitive Research: Principles and Implications, vol. 2, no. 1, pp. 12, 2017.
Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design in order to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. In half of the cases, search was interrupted by a series of true or false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye-tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions will be discussed in the context of these results.
Sergei L. Shishkin; Darisii G. Zhao; Andrei V. Isachenko; Boris M. Velichkovsky
In: Psychology in Russia: State of the Art, vol. 10, no. 3, pp. 120–137, 2017.
Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, "communicative" patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual "eye-to-eye" exchange of looks between human and robot. Further, we provide an example of "eye mouse" superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain.
When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and effortless interaction with machines in various fields of application.
Thomas Zawisza; Ray Garza
In: Journal of Police and Criminal Psychology, vol. 32, no. 3, pp. 203–213, 2017.
This research examines the extent to which visual cues influence a person's decision to burglarize. Participants in this study (n = 65) viewed ten houses through an eye-tracking device and were asked whether or not they thought each house was vulnerable to burglary. The eye-tracking device recorded where a person looked and for how long (in milliseconds). Our findings showed that windows and doors were two of the most important visual stimuli. Results from our follow-up questionnaire revealed that stimuli such as fencing, "beware of pet" signs, cars in driveways, and alarm systems are also considered. There are a number of implications for future research and policy.
Jan-Philipp Tauscher; Maryam Mustafa; Marcus Magnor; T. U. Braunschweig
In: ACM Transactions on Applied Perception, vol. 14, no. 4, pp. 1–12, 2017.
This study compares three popular modalities for analyzing perceived video quality: user ratings, eye tracking, and EEG. We contrast these three modalities for a given video sequence to determine if there is a gap between what humans consciously see and what we implicitly perceive. Participants are shown a video sequence with different artifacts appearing at specific distances in their field of vision: near foveal, middle peripheral, and far peripheral. Our results show distinct differences between what we saccade to (eye tracking), how we consciously rate video quality, and our neural responses (EEG data). Our findings indicate that the measurement of perceived quality depends on the specific modality used.
Julia A. Wolfson; Dan J. Graham; Sara N. Bleich
In: Journal of Nutrition Education and Behavior, vol. 49, no. 1, pp. 35–42.e1, 2017.
Objective: Investigate attention to Nutrition Facts Labels (NFLs) with numeric-only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design: An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting: Participants came to the Behavioral Medicine Lab at Colorado State University in spring 2015. Participants: The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s): Attention to and attitudes about activity-equivalent calorie information. Analysis: Differences by experimental condition and weight-loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results: Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Conclusions and Implications: Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions.
Ying Yan; Xiaofei Wang; Ludan Shi; Haoxue Liu
In: Traffic Injury Prevention, vol. 18, no. 1, pp. 102–110, 2017.
OBJECTIVE: The special light zone is a new illumination technique that promises to improve the visual environment and traffic safety in extra-long tunnels. The purpose of this study is to identify how light zones affect the dynamic visual characteristics and information perception of drivers as they pass through extra-long tunnels on highways. METHODS: Thirty-two subjects were recruited for this study, and fixation data were recorded using eye movement tracking devices. A back-propagation artificial neural network was employed to predict and analyze the influence of special light zones on the variations in the fixation duration and pupil area of drivers. The analytic coordinates of focus points at different light zones were clustered to obtain different visual fixation regions using dynamic cluster theory. RESULTS: The findings of this study indicated that the special light zones had different influences on fixation duration and pupil area compared to other sections. Drivers gradually changed their fixation points from a scattered pattern to a narrow and zonal distribution that mainly focused on the main visual area at the center, the road just ahead, and the right side of the main visual area while approaching the special light zones. The results also showed that the variation in illumination and landscape in light zones was more important than driving experience in producing changes in visual cognition and driving behavior. CONCLUSIONS: It can be concluded that the special light zones can help relieve drivers' vision fatigue to some extent and provide visual stimulation that can enhance drivers' attention. The study provides a scientific basis for implementing safety measures in extra-long tunnels.
John-Ross Rizzo; Todd E. Hudson; Weiwei Dai; Ninad Desai; Arash Yousefi; Dhaval Palsana; Ivan Selesnick; Laura J. Balcer; Steven L. Galetta; Janet C. Rucker
In: Journal of the Neurological Sciences, vol. 362, pp. 232–239, 2016.
Objective: Concussion is a major public health problem and considerable efforts are focused on sideline-based diagnostic testing to guide return-to-play decision-making and clinical care. The King-Devick (K-D) test, a sensitive sideline performance measure for concussion detection, reveals slowed reading times in acutely concussed subjects, as compared to healthy controls; however, the normal behavior of eye movements during the task and the deficits underlying the slowing have not been defined. Methods: Twelve healthy control subjects underwent quantitative eye tracking during digitized K-D testing. Results: The total K-D reading time was 51.24 (± 9.7) seconds. A total of 145 saccades (± 15) per subject were generated, with average peak velocity 299.5°/s and average amplitude 8.2°. The average inter-saccadic interval was 248.4 ms. Task-specific horizontal and oblique saccades per subject numbered, respectively, 102 (± 10) and 17 (± 4). Subjects with the fewest saccades tended to blink more, resulting in a larger amount of missing data, whereas subjects with the most saccades tended to make extra saccades during line transitions. Conclusions: Establishment of normal and objective ocular motor behavior during the K-D test is a critical first step towards defining the range of deficits underlying abnormal testing in concussion. Further, it sets the groundwork for exploration of K-D correlations with cognitive dysfunction and saccadic paradigms that may reflect specific neuroanatomic deficits in the concussed brain.
Lynn Huestegge; Anne Böckler
In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016.
Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards.
Yu Cin Jian; Chao Jung Wu
In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016.
Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed that the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is independent of that conveyed via text in some areas.
Ioanna Katidioti; Jelmer P. Borst; Douwe J. Bierens de Haan; Tamara Pepping; Marieke K. van Vugt; Niels A. Taatgen
In: International Journal of Human-Computer Interaction, vol. 32, no. 10, pp. 791–801, 2016.
Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: Interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent—it does not require a task analysis—it can be broadly applied.
Ellen M. Kok; Halszka Jarodzka; Anique B. H. de Bruin; Hussain A. N. BinAmir; Simon G. F. Robben; Jeroen J. G. van Merriënboer
Systematic viewing in radiology: Seeing more, missing less? Journal Article
In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016.
To prevent radiologists from overlooking lesions, radiology textbooks recommend "systematic viewing," a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo a training in either systematic, full-coverage (without being systematic) or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage
Oleg V. Komogortsev; Alexey Karpov
In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016.
This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al.
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark
In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016.
The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 was a laboratory simulation that recreated in the laboratory the effects that previous literature suggested are likely happening in the field, namely that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers get and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered.
Tsu Chiang Lei; Shih Chieh Wu; Chi Wen Chao; Su Hsin Lee
In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016.
With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. 
This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace.
Qian Li; Zhuowei Joy Huang; Kiel Christianson
In: Tourism Management, vol. 54, pp. 243–258, 2016.
This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language.
Joan López-Moliner; Eli Brenner
Flexible timing of eye movements when catching a ball Journal Article
In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016.
In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task.
Bob McMurray; Ashley Farris-Trimble; Michael Seedorff; Hannah Rigler
In: Ear and Hearing, vol. 37, no. 1, pp. e37–e51, 2016.
OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. 
Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions.
Zhongling Pi; Jianzhong Hong
In: Innovations in Education and Teaching International, vol. 53, no. 2, pp. 135–144, 2016.
Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presenting mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire and prior knowledge test. The learning process was investigated by eye-tracking and the learning outcome by a learning test. The results revealed that the participants using the video podcast with both the instructor and PPT slides gained the best learning outcomes. It was noted that they allocated much more visual attention to the instructor than to the PPT slides. It was additionally found that 22 min was the time at which participants reached the peak of mental fatigue. The results of our study imply that the use of educational technology is culture bound.
Alessandro Piras; Ivan M. Lanzoni; Milena Raffi; Michela Persiani; Salvatore Squatrito
In: International Journal of Sports Science & Coaching, vol. 11, no. 4, pp. 523–531, 2016.
The aim of this study was to examine the differences in visual search behaviour between a group of expert-level and one of novice table tennis players, to determine the temporal and spatial aspects of gaze orientation associated with correct responses. Expert players were classified as successful or unsuccessful depending on their performance in a video-based test of anticipation skill involving two kinds of stroke techniques: forehand top spin and backhand drive. Eye movements were recorded binocularly with a video-based eye tracking system. Successful experts were more effective than novices and unsuccessful experts in accurately anticipating both type and direction of stroke, showing fewer fixations of longer duration. Participants fixated mainly on arm area during forehand top spin, and on hand–racket and trunk areas during backhand drive. This study can help to develop interventions that facilitate the acquisition of anticipatory skills by improving visual search strategies.
Ioannis Rigas; Evgeniy Abdulin; Oleg V. Komogortsev
In: Information Fusion, vol. 32, pp. 13–25, 2016.
This paper presents research on the use of multi-source information fusion in the field of eye movement biometrics. In the current state of the art, different techniques have been developed to extract the physical and the behavioral biometric characteristics of eye movements. In this work, we explore the effects of the multi-source fusion of the heterogeneous information extracted by different biometric algorithms under the presence of diverse visual stimuli. We propose a two-stage fusion approach with the employment of stimulus-specific and algorithm-specific weights for fusing the information from different matchers based on their identification efficacy. The experimental evaluation performed on a large database of 320 subjects reveals a considerable improvement in biometric recognition accuracy, with a minimal equal error rate (EER) of 5.8% and a best-case Rank-1 identification rate (Rank-1 IR) of 88.6%. It should also be emphasized that although the concept of multi-stimulus fusion is currently evaluated specifically for eye movement biometrics, it can be adopted by other biometric modalities too, in cases when an exogenous stimulus affects the extraction of the biometric features.
Ioannis Rigas; Oleg V. Komogortsev; Reza Shadmehr
In: ACM Transactions on Applied Perception, vol. 13, no. 2, pp. 1–21, 2016.
Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system and they also can open a window to the neural functions and cognitive mechanisms related to visual attention and perception. The research field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study for the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, the biometric accuracy presents a relative improvement in the range of 31.6–33.5% for the verification scenario, and in the range of 22.3–53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimulus (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues.
Donghyun Ryu; David L. Mann; Bruce Abernethy; Jamie M. Poolton
Gaze-contingent training enhances perceptual skill acquisition Journal Article
In: Journal of Vision, vol. 16, no. 2, pp. 1–21, 2016.
The purpose of this study was to determine whether decision-making skill in perceptual-cognitive tasks could be enhanced using a training technique that impaired selective areas of the visual field. Recreational basketball players performed perceptual training over 3 days while viewing with a gaze-contingent manipulation that displayed either (a) a moving window (clear central and blurred peripheral vision), (b) a moving mask (blurred central and clear peripheral vision), or (c) full (unrestricted) vision. During the training, participants watched video clips of basketball play and at the conclusion of each clip made a decision about to which teammate the player in possession of the ball should pass. A further control group watched unrelated videos with full vision. The effects of training were assessed using separate tests of decision-making skill conducted in a pretest, posttest, and 2-week retention test. The accuracy of decision making was greater in the posttest than in the pretest for all three intervention groups when compared with the control group. Remarkably, training with blurred peripheral vision resulted in a further improvement in performance from posttest to retention test that was not apparent for the other groups. The type of training had no measurable impact on the visual search strategies of the participants, and so the training improvements appear to be grounded in changes in information pickup. The findings show that learning with impaired peripheral vision offers a promising form of training to support improvements in perceptual skill.
Sameer Saproo; Victor Shih; David C. Jangraw; Paul Sajda
In: Journal of Neural Engineering, vol. 13, pp. 1–12, 2016.
Objective. We investigated the neural correlates of workload buildup in a fine visuomotor task called the boundary avoidance task (BAT). The BAT has been known to induce naturally occurring failures of human–machine coupling in high performance aircraft that can potentially lead to a crash—these failures are termed pilot induced oscillations (PIOs). Approach. We recorded EEG and pupillometry data from human subjects engaged in a flight BAT simulated within a virtual 3D environment. Main results. We find that workload buildup in a BAT can be successfully decoded from oscillatory features in the electroencephalogram (EEG). Information in delta, theta, alpha, beta, and gamma spectral bands of the EEG all contribute to successful decoding, however gamma band activity with a lateralized somatosensory topography has the highest contribution, while theta band activity with a fronto-central topography has the most robust contribution in terms of real-world usability. We show that the output of the spectral decoder can be used to predict PIO susceptibility. We also find that workload buildup in the task induces pupil dilation, the magnitude of which is significantly correlated with the magnitude of the decoded EEG signals. These results suggest that PIOs may result from the dysregulation of cortical networks such as the locus coeruleus (LC)—anterior cingulate cortex (ACC) circuit. Significance. Our findings may generalize to similar control failures in other cases of tight man–machine coupling where gains and latencies in the control system must be inferred and compensated for by the human operators. A closed-loop intervention using neurophysiological decoding of workload buildup that targets the LC-ACC circuit may positively impact operator performance in such situations.
Hosam Al-Samarraie; Samer Muthana Sarsam; Hans Guesgen
In: Behaviour and Information Technology, vol. 35, no. 8, pp. 644–653, 2016.
It is a well-known fact that users vary in their preferences and needs. Therefore, it is crucial to provide customisation or personalisation for users in certain usage conditions that are more associated with their preferences. With the current limitation in adopting perceptual processing into user interface personalisation, we introduced the possibility of inferring interface design preferences from the user's eye-movement behaviour. We first captured the user's preferences for graphic design elements using an eye tracker. Then we diagnosed these preferences towards the regions of interest to build a prediction model for interface customisation. The prediction models from eye-movement behaviour showed a high potential for predicting users' preferences of interface design based on the parallel relation between their fixation and saccadic movement. This mechanism provides a novel way of user interface design customisation and opens the door for new research in the areas of human–computer interaction and decision-making.
Joseph E. Barton; Anindo Roy; John D. Sorkin; Mark W. Rogers; Richard F. Macko
In: Journal of Biomechanical Engineering, vol. 138, no. 1, pp. 1–11, 2016.
We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS), an older high fall risk (HFR) subject before participating in a balance training intervention, and the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system; and system adaptations to changes in these elements' performance and capabilities.
In: Nordidactica – Journal of Humanities and Social Science Education, vol. 1, pp. 38–62, 2016.
This paper investigates how textbook design may influence students' visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was utilised to precisely monitor students' eye movements while observing geography textbook spreads. In an exploratory study utilising random sampling, the eye movements of 20 students (secondary school students 15–17 years of age and university students 20–24 years of age) were recorded. The research entities were double-page spreads of current German geography textbooks covering an identical topic, taken from five separate textbooks. A two-stage test was developed. Each participant was given the task of first looking at the entire textbook spread to determine what was being explained on the pages. In the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results may verify textbook design as one crucial factor for successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to educational psychology insights and findings from visual communication and textbook analysis.
Palash Bera; Louis Philippe Sirois
Displaying background maps in business intelligence dashboards Journal Article
In: IT Professional, vol. 18, no. 5, pp. 58–65, 2016.
Business data in geographic maps, called data maps, can be displayed via business intelligence dashboards. An important emerging feature is the use of background maps that overlap with existing data maps. Here, the authors examine the usefulness of background maps in dashboards and investigate how much cognitive effort users put in when they use dashboards with background maps as compared to dashboards without them. To test the extent of cognitive effort, the authors conducted an eye-tracking study in which users performed a decision-making task with maps in dashboards. In a separate study, users were asked directly about the mental effort required to perform tasks with the dashboards. Both studies identified that when users use background maps, they required less cognitive effort than users who use dashboards in which the information on the background map is represented in another form, such as a bar chart.
Raymond Bertram; Johanna K. Kaakinen; Frank Bensch; Laura Helle; Eila Lantto; Pekka Niemi; Nina Lundbom
In: Radiology, vol. 281, no. 3, pp. 805–815, 2016.
PURPOSE: To establish potential markers of visual expertise in eye movement (EM) patterns of early residents, advanced residents, and specialists who interpret abdominal computed tomography (CT) studies. MATERIAL AND METHODS: The institutional review board approved use of anonymized CT studies as research materials and to obtain anonymized eye-tracking data from volunteers. Participants gave written informed consent. Early residents (n = 15), advanced residents (n = 14), and specialists (n = 12) viewed 26 abdominal CT studies as a sequence of images at either 3 or 5 frames per second while EMs were recorded. Data were analyzed by using linear mixed-effects models. RESULTS: Early residents' detection rate decreased with working hours (odds ratio, 0.81; 95% confidence interval [CI]: 0.73, 0.91; P = .001). They detected less of the low visual contrast (but not of the high visual contrast) lesions (45% [13 of 29]) than did specialists (62% [18 of 29]) (odds ratio, 0.39; 95% CI: 0.25, 0.61; P < .001) or advanced residents (56% [16 of 29]) (odds ratio, 0.55; 95% CI: 0.33, 0.93; P = .024). Specialists and advanced residents had longer fixation durations at 5 than at 3 frames per second (specialists: b = .01; 95% CI: .004, .026; P = .008; advanced residents: b = .04; 95% CI: .03, .05; P < .001). In the presence of lesions, saccade lengths of specialists shortened more than those of advanced residents (b = .02; 95% CI: .007, .04; P = .003) and of early residents (b = .02; 95% CI: .008, .04; P = .003). Irrespective of expertise, high detection rate correlated with greater reduction of saccade length in the presence of lesions (b = −.10; 95% CI: −.16, −.04; P = .002) and greater increase at higher presentation speed (b = .11; 95% CI: .04, .17; P = .001). CONCLUSION: Expertise in CT reading is characterized by greater adaptivity in EM patterns in response to the demands of the task and environment.
Federica Bianchi; Sébastien Santurette; Dorothea Wendt; Torsten Dau
In: JARO - Journal of the Association for Research in Otolaryngology, vol. 17, no. 1, pp. 69–79, 2016.
Musicians typically show enhanced pitch discrimination abilities compared to non-musicians. The present study investigated this perceptual enhancement behaviorally and objectively for resolved and unresolved complex tones to clarify whether the enhanced performance in musicians can be ascribed to increased peripheral frequency selectivity and/or to a different processing effort in performing the task. In a first experiment, pitch discrimination thresholds were obtained for harmonic complex tones with fundamental frequencies (F0s) between 100 and 500 Hz, filtered in either a low- or a high-frequency region, leading to variations in the resolvability of audible harmonics. The results showed that pitch discrimination performance in musicians was enhanced for resolved and unresolved complexes to a similar extent. Additionally, the harmonics became resolved at a similar F0 in musicians and non-musicians, suggesting similar peripheral frequency selectivity in the two groups of listeners. In a follow-up experiment, listeners' pupil dilations were measured as an indicator of the required effort in performing the same pitch discrimination task for conditions of varying resolvability and task difficulty. Pupillometry responses indicated a lower processing effort in the musicians versus the non-musicians, although the processing demand imposed by the pitch discrimination task was individually adjusted according to the behavioral thresholds. Overall, these findings indicate that the enhanced pitch discrimination abilities in musicians are unlikely to be related to higher peripheral frequency selectivity and may suggest an enhanced pitch representation at more central stages of the auditory system in musically trained listeners.
Indu P. Bodala; Junhua Li; Nitish V. Thakor; Hasan Al-Nashash
In: Frontiers in Human Neuroscience, vol. 10, pp. 273, 2016.
Maintaining vigilance is possibly the first requirement for surveillance tasks, where personnel are faced with monotonous yet intensive monitoring. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life, and system failure. In this paper, we investigate the possibility of enhancing vigilance, or sustained attention, using 'challenge integration', a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye-tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression over the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity, and blink rate obtained from eye-tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement, along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge-integrated phase (RT mean = 3.65 ± 1.4 s) than for the initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 s). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration.
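The engagement index used in this study (frontal theta power over parietal alpha power) can be sketched with standard spectral tools. The sampling rate, band edges, and synthetic signals below are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean power spectral density of signal x in [f_lo, f_hi] Hz,
    estimated with Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic channels: a "frontal" trace dominated by 6 Hz theta and a
# "parietal" trace dominated by 10 Hz alpha, each plus white noise.
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
parietal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

theta = band_power(frontal, fs, 4, 8)    # frontal midline theta
alpha = band_power(parietal, fs, 8, 13)  # parietal alpha
engagement = theta / alpha  # higher ratio ~ greater engagement
```

In practice the two band powers would be averaged over the corresponding frontal and parietal electrode clusters rather than taken from single synthetic traces.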
Tom Bullock; James C. Elliott; John T. Serences; Barry Giesbrecht
In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 605–618, 2016.
An organism's current behavioral state influences ongoing brain activity. Nonhuman mammalian and invertebrate brains exhibit large increases in the gain of feature-selective neural responses in sensory cortex during locomotion, suggesting that the visual system becomes more sensitive when actively exploring the environment. This raises the possibility that human vision is also more sensitive during active movement. To investigate this possibility, we used an inverted encoding model technique to estimate feature-selective neural response profiles from EEG data acquired from participants performing an orientation discrimination task. Participants (n = 18) fixated at the center of a flickering (15 Hz) circular grating presented at one of nine different orientations and monitored for a brief shift in orientation that occurred on every trial. Participants completed the task while seated on a stationary exercise bike at rest and during low- and high-intensity cycling. We found evidence for an inverted-U effect, such that the peak of the reconstructed feature-selective tuning profiles was highest during low-intensity exercise compared with those estimated during rest and high-intensity exercise. When modeled, these effects were driven by changes in the gain of the tuning curve and in the profile bandwidth during low-intensity exercise relative to rest. Thus, despite profound differences in visual pathways across species, these data show that sensitivity in human visual cortex is also enhanced during locomotive behavior. Our results reveal the nature of exercise-induced gain on feature-selective coding in human sensory cortex and provide valuable evidence linking the neural mechanisms of behavioral state across species.
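An inverted encoding model of the kind described reduces to two least-squares steps: estimate channel-to-electrode weights on training data, then invert those weights on held-out data to reconstruct a tuning profile. A minimal sketch on simulated data; the basis shape, channel count, and noise level below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

# Channel centers tiling orientation space (0-180 deg); nine basis
# channels, matching the nine stimulus orientations in the task.
centers = np.arange(0, 180, 20.0)

def channel_responses(orients):
    """Idealized orientation tuning: |cos| of the angular distance to each
    channel center, raised to a power. Returns (n_channels, n_orients)."""
    d = np.deg2rad(np.subtract.outer(centers, orients))
    return np.abs(np.cos(d)) ** 7

rng = np.random.default_rng(1)

# Simulated training set: electrode responses are a linear mixture of the
# channel responses plus noise (B = W @ C).
n_trials, n_elec = 200, 32
true_W = rng.standard_normal((n_elec, centers.size))
stim = rng.uniform(0, 180, n_trials)
C_train = channel_responses(stim)
B_train = true_W @ C_train + 0.1 * rng.standard_normal((n_elec, n_trials))

# Step 1: estimate the weights by least squares.
W_hat = B_train @ np.linalg.pinv(C_train)
# Step 2: invert the estimated weights on held-out data to reconstruct
# the channel (tuning) profile for a stimulus at 90 deg.
B_test = true_W @ channel_responses(np.array([90.0]))
C_hat = (np.linalg.pinv(W_hat) @ B_test).ravel()
peak = centers[C_hat.argmax()]  # should fall near 90 deg
```

Gain and bandwidth effects like those reported would then be quantified by fitting a parametric curve to the reconstructed profile `C_hat` in each exercise condition.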