Usability and Applied Eye-Tracking Publications
All EyeLink eye tracker usability and applied research publications up until 2024 (with some early 2025s) are listed below by year. You can search the eye-tracking publications using keywords such as Driving, Sport, Workload, etc. You can also search for individual author names. If we missed any EyeLink usability or applied article, please email us!
2016 |
Tao Deng; Kaifu Yang; Yongjie Li; Hongmei Yan Where does the driver look? Top-down-based saliency detection in a traffic driving environment Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2051–2062, 2016. @article{Deng2016a, A traffic driving environment is a complex and dynamically changing scene. When driving, drivers always allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and prior areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could be useful in supporting autonomous driving, traffic sign detection, driving training, car collision warning, and other tasks. Recently, advances in visual attention models have provided substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, there exists no computational framework that can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, when viewing 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We proposed that the vanishing point of the road can be regarded as valuable top-down guidance in a traffic saliency detection model. Subsequently, we built a framework of a classic bottom-up and top-down combined traffic saliency detection model. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment. |
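The paper's central idea, biasing a bottom-up saliency map toward the road's vanishing point, can be pictured with a short sketch. This is illustrative Python, not the authors' implementation; the Gaussian prior, its width sigma_px, and the blending weight alpha are assumptions.

    import numpy as np

    def vp_topdown_saliency(bottom_up, vp_xy, sigma_px=80.0, alpha=0.5):
        # Blend a bottom-up saliency map with a Gaussian prior centered on
        # the vanishing point of the road (illustrative sketch only).
        h, w = bottom_up.shape
        ys, xs = np.mgrid[0:h, 0:w]
        vp_prior = np.exp(-((xs - vp_xy[0]) ** 2 + (ys - vp_xy[1]) ** 2)
                          / (2 * sigma_px ** 2))
        combined = alpha * vp_prior + (1 - alpha) * bottom_up
        return combined / combined.max()  # normalize to [0, 1]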
Leandro L. Di Stasi; Michael B. McCamy; Susana Martinez-Conde; Ellis Gayles; Chad Hoare; Michael Foster; Andrés Catena; Stephen L. Macknik Effects of long and short simulated flights on the saccadic eye movement velocity of aviators Journal Article In: Physiology and Behavior, vol. 153, pp. 91–96, 2016. @article{DiStasi2016, Aircrew fatigue is a major contributor to operational errors in civil and military aviation. Objective detection of pilot fatigue is thus critical to prevent aviation catastrophes. Previous work has linked fatigue to changes in oculomotor dynamics, but few studies have studied this relationship in critical safety environments. Here we measured the eye movements of US Marine Corps combat helicopter pilots before and after simulated flight missions of different durations. We found a decrease in saccadic velocities after long simulated flights compared to short simulated flights. These results suggest that saccadic velocity could serve as a biomarker of aviator fatigue. |
Carolina Diaz-Piedra; Héctor Rieiro; Juan Suárez; Francisco Rios-Tejada; Andrés Catena; Leandro Luigi Di Stasi Fatigue in the military: Towards a fatigue detection test based on the saccadic velocity Journal Article In: Physiological Measurement, vol. 37, no. 9, pp. N62–N75, 2016. @article{DiazPiedra2016, Fatigue is a major contributing factor to operational errors. Therefore, the validation of objective and sensitive indices to detect fatigue is critical to prevent accidents and catastrophes. Whereas tests based on saccadic velocity (SV) have become popular, their sensitivity in the military is not yet clear, since most research has been conducted in laboratory settings using not fully validated instruments. Field studies remain scarce, especially in extreme conditions such as real flights. Here, we investigated the effects of real, long flights on SV. We assessed five newly commissioned military helicopter pilots during their aviation training. Pilots flew Sikorsky S-76C helicopters, under instrumental flight rules, for more than 2 h (ca. 150 min). Eye movements were recorded before and after the flight with an eye tracker using a standard guided-saccade task. We also collected subjective ratings of fatigue. SV significantly decreased from the Pre-Flight to the Post-Flight session in all pilots by around 3% (range: 1-4%). Subjective ratings showed the same tendency. We provide conclusive evidence about the high sensitivity of fatigue tests based on SV in real flight conditions, even in small samples. This result might offer military medical departments a valid and useful biomarker of warfighter physiological state. |
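Both saccadic-velocity studies above rest on the same measurement: the peak angular velocity of guided saccades recorded before and after the flight. A minimal Python sketch of that computation follows; the 1000 Hz sampling rate and the simple two-point numerical differentiation are assumptions, not the instrument settings or algorithm used in the studies.

    import numpy as np

    def peak_saccadic_velocity(x_deg, y_deg, fs=1000.0):
        # x_deg, y_deg: gaze position of one saccade in degrees; fs: sampling rate (Hz).
        vx = np.gradient(x_deg) * fs
        vy = np.gradient(y_deg) * fs
        return np.hypot(vx, vy).max()  # peak angular velocity in deg/s

    # Fatigue comparisons then reduce to comparing mean peak velocities of
    # amplitude-matched saccades between the pre- and post-flight sessions.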
Benjamin Gagl Blue hypertext is a good design decision: No perceptual disadvantage in reading and successful highlighting of relevant information Journal Article In: PeerJ, vol. 4, pp. 1–11, 2016. @article{Gagl2016, BACKGROUND: Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics was heavily questioned by applied research, and empirical tests yielded inconclusive results. The ability to recognize blue text in foveal and parafoveal vision was identified as potentially constrained by the low number of foveally centered blue light sensitive retinal cells. The present study investigates if foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading. METHODS: A silent-sentence reading study with simultaneous eye movement recordings and the invisible boundary paradigm, which allows the investigation of foveal and parafoveal perceptibility, separately, was realized (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented in either black or blue and either underlined or normal. RESULTS: No effect of color and underlining, but a preview benefit could be detected for first pass reading measures. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, a reduced fixation time for not highlighted (black not underlined) in contrast to highlighted target words (either blue or underlined or both). DISCUSSION: The present pattern reflects no detectable perceptual disadvantage of hyperlink stimuli but increased attraction of attention resources, after first pass reading, through highlighting. Blue or underlined text allows readers to easily perceive hypertext, and at the same time readers spent longer re-visiting highlighted words. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use. |
Ziad M. Hafed; Katarina Stingl; Karl Ulrich Bartz-Schmidt; Florian Gekeler; Eberhart Zrenner Oculomotor behavior of blind patients seeing with a subretinal visual implant Journal Article In: Vision Research, vol. 118, pp. 119–131, 2016. @article{Hafed2016, Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ~1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life. |
Lynn Huestegge; Anne Böckler Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016. @article{Huestegge2016, Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards. |
Yu-Cin Jian; Chao-Jung Wu In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016. @article{Jian2016a, Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and administering reading tests. Participants read two diagrams depicting how a flushing system works with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is independent of that conveyed via text in some areas. |
Ioanna Katidioti; Jelmer P. Borst; Douwe J. Bierens de Haan; Tamara Pepping; Marieke K. Vugt; Niels A. Taatgen Interrupted by your pupil: An interruption management system based on pupil dilation Journal Article In: International Journal of Human-Computer Interaction, vol. 32, no. 10, pp. 791–801, 2016. @article{Katidioti2016a, Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: Interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent—it does not require a task analysis—it can be broadly applied. |
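A toy illustration of the idea behind the IMS, flagging a low-workload moment when the recent, baseline-corrected pupil size drops sufficiently, is sketched below. The window length and threshold are assumptions chosen for illustration, not the parameters of the published system.

    import numpy as np

    def low_workload_now(pupil_trace, baseline, window=120, z_thresh=-0.5):
        # pupil_trace: most recent pupil-size samples; baseline: samples from a rest period.
        recent = np.mean(pupil_trace[-window:])
        z = (recent - np.mean(baseline)) / np.std(baseline)
        return z < z_thresh  # True -> low-workload moment, deliver the pending interruption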
Ellen M. Kok; Halszka Jarodzka; Anique B. H. Bruin; Hussain A. N. BinAmir; Simon G. F. Robben; Jeroen J. G. Merriënboer Systematic viewing in radiology: Seeing more, missing less? Journal Article In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016. @article{Kok2016, To prevent radiologists from overlooking lesions, radiology textbooks recommend "systematic viewing," a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo training in either systematic, full-coverage (without being systematic), or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage |
Oleg V. Komogortsev; Alexey Karpov Oculomotor plant characteristics: The effects of environment and stimulus Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016. @article{Komogortsev2016, This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al. [1] |
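One way to obtain recording-quality tiers like those evaluated above is to degrade a high-quality gaze signal offline. The sketch below (decimation plus additive Gaussian position noise) is only an assumed degradation procedure for illustration, not necessarily the one used in the study.

    import numpy as np

    def degrade_gaze(gaze_deg, fs_in=1000, fs_out=250, precision_deg=0.21, seed=0):
        # gaze_deg: (n, 2) gaze positions in degrees recorded at fs_in Hz.
        rng = np.random.default_rng(seed)
        step = int(round(fs_in / fs_out))
        decimated = gaze_deg[::step]                        # lower temporal resolution
        noise = rng.normal(0.0, precision_deg, decimated.shape)
        return decimated + noise                            # lower spatial precision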
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark Eye see what you are saying: Testing conversational influences on the information gleaned from home-loan disclosure forms Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016. @article{LeBoeuf2016, The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 was a laboratory simulation that recreated, in the laboratory, the effects that previous literature suggests are likely happening in the field; namely, that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers get and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered. |
Tsu-Chiang Lei; Shih-Chieh Wu; Chi-Wen Chao; Su-Hsin Lee Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps Journal Article In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016. @article{Lei2016, With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace. |
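The spatial-autocorrelation analysis of fixation clustering used above can be illustrated with a global Moran's I computed on a grid of fixation counts. This is a generic Python sketch with binary rook-contiguity weights, not the specific spatial autocorrelation model applied in the study.

    import numpy as np

    def morans_i(counts):
        # counts: 2-D grid of fixation counts; binary 4-neighbour (rook) weights.
        z = counts.astype(float) - counts.mean()
        cross = 2 * np.sum(z[1:, :] * z[:-1, :]) + 2 * np.sum(z[:, 1:] * z[:, :-1])
        w_sum = 2 * z[1:, :].size + 2 * z[:, 1:].size       # number of neighbour links
        return (z.size / w_sum) * cross / np.sum(z ** 2)    # > 0 indicates clustering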
Qian Li; Zhuowei Joy Huang; Kiel Christianson Visual attention toward tourism photographs with text: An eye-tracking study Journal Article In: Tourism Management, vol. 54, pp. 243–258, 2016. @article{Li2016b, This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language. |
Joan López-Moliner; Eli Brenner Flexible timing of eye movements when catching a ball Journal Article In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016. @article{LopezMoliner2016, In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task. |
Bob McMurray; Ashley Farris-Trimble; Michael Seedorff; Hannah Rigler The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users: Evidence from eye-tracking Journal Article In: Ear & Hearing, vol. 37, no. 1, pp. e37–e51, 2016. @article{McMurray2016, OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. 
CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions. |
2015 |
Seung Kweon Hong Comparison of vertical and horizontal eye movement times in the selection of visual targets by an eye input device Journal Article In: Journal of the Ergonomics Society of Korea, vol. 34, no. 1, pp. 19–27, 2015. @article{Hong2015, Objective: The aim of this study is to investigate how well eye movement times in visual target selection tasks by an eye input device follow the typical Fitts' Law and to compare vertical and horizontal eye movement times. Background: Typically, manual pointing provides an excellent fit to the Fitts' Law model. However, when an eye input device is used for visual target selection tasks, there have been some debates on whether the eye movement times can be described by Fitts' Law. More empirical studies should be added to resolve these debates; this study is an empirical study for resolving this debate. On the other hand, many researchers have reported that the direction of movement in typical manual pointing has some effects on the movement times. The other question in this study is whether the direction of eye movement also affects the eye movement times. Method: Cursor movement times in visual target selection tasks with both input devices were collected. The layout of visual targets was set up in two types. Cursor starting positions for vertical movements were at the top of the monitor and visual targets were located at the bottom, while cursor starting positions for horizontal movements were at the right of the monitor and visual targets were located at the left. Results: Although eye movement time was described by the Fitts' Law, the error rate was high and correlation was relatively low (R2 = 0.80 for horizontal movements and R2 = 0.66 for vertical movements), compared to those of manual movement. According to the movement direction, manual movement times were not significantly different, but eye movement times were significantly different. Conclusion: Eye movement times in the selection of visual targets by an eye-gaze input device could be described and predicted by the Fitts' Law. Eye movement times were significantly different according to the direction of eye movement. Application: The results of this study might help to understand eye movement times in visual target selection tasks by eye input devices. |
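Fitts' Law as tested above relates movement time MT to the index of difficulty ID = log2(D/W + 1). A small Python sketch of fitting the law by linear regression is shown below; the target distances, widths, and movement times are hypothetical values for illustration, not the study's data.

    import numpy as np

    # Hypothetical target distances D, widths W (pixels), and movement times MT (ms).
    D  = np.array([200, 400, 600, 200, 400, 600])
    W  = np.array([ 40,  40,  40,  80,  80,  80])
    MT = np.array([310, 380, 450, 260, 330, 390])

    ID = np.log2(D / W + 1)                  # index of difficulty (bits)
    b, a = np.polyfit(ID, MT, 1)             # MT = a + b * ID
    pred = a + b * ID
    r2 = 1 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
    print(f"a = {a:.1f} ms, b = {b:.1f} ms/bit, R^2 = {r2:.2f}")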
Oleg V. Komogortsev; Alexey Karpov; Corey D. Holland Attack of mechanical replicas: Liveness detection with eye movements Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 716–725, 2015. @article{Komogortsev2015, This paper investigates liveness detection techniques in the area of eye movement biometrics. We investigate a specific scenario, in which an impostor constructs an artificial replica of the human eye. Two attack scenarios are considered: 1) the impostor does not have access to the biometric templates representing authentic users, and instead utilizes average anatomical values from the relevant literature and 2) the impostor gains access to the complete biometric database, and is able to employ exact anatomical values for each individual. In this paper, liveness detection is performed at the feature and match score levels for several existing forms of eye movement biometric, based on different aspects of the human visual system. The ability of each technique to differentiate between live and artificial recordings is measured by its corresponding false spoof acceptance rate, false live rejection rate, and classification rate. The results suggest that eye movement biometrics are highly resistant to circumvention by artificial recordings when liveness detection is performed at the feature level. Unfortunately, not all techniques provide feature vectors that are suitable for liveness detection at the feature level. At the match score level, the accuracy of liveness detection depends highly on the biometric techniques employed. |
Moritz Köster; Marco Rüth; Kai Christoph Hamborg; Kai Kaspar Effects of personalized banner ads on visual attention and recognition memory Journal Article In: Applied Cognitive Psychology, vol. 29, no. 2, pp. 181–192, 2015. @article{Koester2015, Internet companies collect a vast amount of data about their users in order to personalize banner ads. However, very little is known about the effects of personalized banners on attention and memory. In the present study, 48 subjects performed search tasks on web pages containing personalized or nonpersonalized banners. Overt attention was measured by an eye-tracker, and recognition of banner and task-relevant information was subsequently examined. The entropy of fixations served as a measure for the overall exploration of web pages. Results confirm the hypotheses that personalization enhances recognition for the content of banners while the effect on attention was weaker and partially nonsignificant. In contrast, overall exploration of web pages and recognition of task-relevant information was not influenced. The temporal course of fixations revealed that visual exploration of banners typically proceeds from the picture to the logo and finally to the slogan. We discuss theoretical and practical implications. |
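The entropy measure used above to quantify overall page exploration can be computed from a spatial histogram of fixation locations. A minimal sketch follows; the 16 x 16 binning grid is an assumption, not the binning used in the paper.

    import numpy as np

    def fixation_entropy(fix_x, fix_y, screen_w, screen_h, bins=16):
        # Shannon entropy (bits) of the spatial distribution of fixations.
        hist, _, _ = np.histogram2d(fix_x, fix_y, bins=bins,
                                    range=[[0, screen_w], [0, screen_h]])
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))       # higher = broader exploration of the page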
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh Detection of fixations and smooth pursuit movements in high-speed eye-tracking data Journal Article In: Biomedical Signal Processing and Control, vol. 18, pp. 145–152, 2015. @article{Larsson2015, A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in [11]), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in [11] (average Cohen's kappa 0.16), when compared to the experts' annotations. |
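The Cohen's kappa values reported above compare algorithmic and expert event labels while correcting for chance agreement. A generic sample-by-sample implementation is sketched below; it is illustrative, and the paper's exact evaluation procedure may differ.

    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        # labels_a, labels_b: per-sample event labels (e.g. fixation / pursuit / saccade).
        a, b = np.asarray(labels_a), np.asarray(labels_b)
        po = np.mean(a == b)                                  # observed agreement
        pe = sum(np.mean(a == c) * np.mean(b == c)
                 for c in np.union1d(a, b))                   # chance agreement
        return (po - pe) / (1 - pe)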
Minyoung Lee; Randolph Blake; Sujin Kim; Chai-Youn Kim Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 27, pp. 8493–8498, 2015. @article{Lee2015b, Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry. |
Yan Luo; Ming Jiang; Yongkang Wong; Qi Zhao Multi-camera saliency Journal Article In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2057–2070, 2015. @article{Luo2015a, A significant body of literature on saliency modeling predicts where humans look in a single image or video. Besides the scientific goal of understanding how information is fused from multiple visual sources to identify regions of interest in a holistic manner, there are tremendous engineering applications of multi-camera saliency due to the widespread use of cameras. This paper proposes a principled framework to smoothly integrate visual information from multiple views to a global scene map, and to employ a saliency algorithm incorporating high-level features to identify the most important regions by fusing visual information. The proposed method has the following key distinguishing features compared with its counterparts: (1) the proposed saliency detection is global (salient regions from one local view may not be important in a global context), (2) it does not require special ways for camera deployment or overlapping field of view, and (3) the key saliency algorithm is effective in highlighting interesting object regions even though no single dedicated detector is used. Experiments on several data sets confirm the effectiveness of the proposed principled framework. |
Andrew K. Mackenzie; Julie M. Harris Eye movements and hazard perception in active and passive driving Journal Article In: Visual Cognition, vol. 23, no. 6, pp. 736–757, 2015. @article{Mackenzie2015, Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attention systems than simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment. |
Ying Yan; Huazhi Yuan; Xiaofei Wang; Ting Xu; Haoxue Liu Study on driver's fixation variation at entrance and inside sections of tunnel on highway Journal Article In: Advances in Mechanical Engineering, vol. 7, no. 1, pp. 1–10, 2015. @article{Yan2015d, How drivers' visual characteristics change as they pass through tunnels was studied. Firstly, test data from nine drivers at tunnel entrance and inside sections were recorded using eye movement tracking devices. Then the transfer function of a BP artificial neural network was employed to simulate and analyze the variation of the drivers' eye movement parameters. Relation models between eye movement parameters and the distance along the tunnels were established. In the analysis of the fixation point distributions, the analytic coordinates of fixations in the visual field were clustered to obtain different visual areas of fixations by utilizing dynamic cluster theory. The results indicated that, at 100 meters before the entrance, the average fixation duration increased, but the number of fixations decreased substantially. After 100 meters into the tunnel, the fixation duration started to decrease first and then increased. The variations of drivers' fixation points demonstrated a pattern of scatter, focus, and scatter again. While driving through the tunnels, drivers exhibited long fixation durations; nearly 61.5% of subjects' average fixation durations increased significantly. In the tunnel, drivers paid attention to seven fixation-point areas, from the car dashboard area to the road area in front of the car. |
Luming Zhang; Meng Wang; Liqiang Nie; Liang Hong; Yong Rui; Qi Tian Retargeting semantically-rich photos Journal Article In: IEEE Transactions on Multimedia, vol. 17, no. 9, pp. 1538–1549, 2015. @article{Zhang2015a, Semantically-rich photos contain a rich variety of semantic objects (e.g., pedestrians and bicycles). Retargeting these photos is a challenging task since each semantic object has fixed geometric characteristics. Shrinking these objects simultaneously during retargeting is prone to distortion. In this paper, we propose to retarget semantically-rich photos by detecting photo semantics from image tags, which are predicted by a multi-label SVM. The key technique is a generative model termed latent stability discovery (LSD). It can robustly localize various semantic objects in a photo by making use of the predicted noisy image tags. Based on LSD, a feature fusion algorithm is proposed to detect salient regions at both the low-level and high-level. These salient regions are linked into a path sequentially to simulate human visual perception. Finally, we learn the prior distribution of such paths from aesthetically pleasing training photos. The prior enforces the path of a retargeted photo to be maximally similar to those from the training photos. In the experiment, we collect 217 1600x1200 photos, each containing over seven salient objects. Comprehensive user studies demonstrate the competitiveness of our method. |
Shu-Fei Yang An eye-tracking study of the Elaboration Likelihood Model in online shopping Journal Article In: Electronic Commerce Research and Applications, vol. 14, no. 4, pp. 233–240, 2015. @article{Yang2015a, This study uses eye-tracking to explore the Elaboration Likelihood Model (ELM) in online shopping. The results show that the peripheral cue did not have a moderating effect on purchase intention, but had a moderating effect on eye movement. Regarding purchase intention, the high elaboration group had higher purchase intention than the low elaboration group with a positive peripheral cue, but there was no difference in purchase intention between the high and low elaboration groups with a negative peripheral cue. Regarding eye movement, with a positive peripheral cue, the high elaboration group was observed to have longer fixation duration than the low elaboration group in two areas of interest (AOIs); however, with a negative peripheral cue, the low elaboration group had longer fixation on the whole page and the two AOIs. In addition, the relationship between purchase intention and eye movement in the AOIs is more significant in the high elaboration group when given a negative peripheral cue and in the low elaboration group when given a positive peripheral cue. This study not only examines the postulates of the ELM, but also contributes to a better understanding of the cognitive processes of the ELM. These findings have practical implications for e-sellers in identifying characteristics of consumers' elaboration from eye movements and designing customization and persuasive contexts for different elaboration groups in e-commerce. |
Youming Zhang; Jorma Laurikkala; Martti Juhola Biometric verification with eye movements: Results from a long-term recording series Journal Article In: IET Biometrics, vol. 4, no. 3, pp. 162–168, 2015. @article{Zhang2015c, The authors present their results of using saccadic eye movements for biometric user verification. The method can be applied to computers or other devices, in which it is possible to include an eye movement camera system. Thus far, this idea has been little researched. As they have extensively studied eye movement signals for medical applications, they saw an opportunity for the biometric use of saccades. Saccades are the fastest of all eye movements, and are easy to stimulate and detect from signals. As signals measured from a physiological origin, the properties of eye movements (e.g. latency and maximum angular velocity) may contain considerable variability between different times of day, between days or weeks and so on. Since such variability might impair biometric verification based on saccades, they attempted to tackle this issue. In contrast to their earlier results, where they did not include such long intervals between sessions of eye movement recordings as in the present research, their results showed that – notwithstanding some variability present in saccadic variables – this variability was not considerable enough to essentially disturb or impair verification results. The only exception was a test series of very long intervals ∼16 or 32 months in length. For the best results obtained with various classification methods, false rejection and false acceptance rates were <5%. Thus, they conclude that saccadic eye movements can provide a realistic basis for biometric user verification. |
Ishan Nigam; Mayank Vatsa; Richa Singh Ocular biometrics: A survey of modalities and fusion approaches Journal Article In: Information Fusion, vol. 26, pp. 1–35, 2015. @article{Nigam2015, Biometrics, an integral component of Identity Science, is widely used in several large-scale, country-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in the Unique Identification Authority of India's Aadhaar Program and the United Arab Emirates' border security programs, whereas periocular recognition is used to augment the performance of face or iris when only the ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. |
Kristien Ooms; Arzu Coltekin; Philippe De Maeyer; Lien Dupont; Sara I. Fabrikant; Annelies Incoul; Matthias Kuhn; Hendrik Slabbinck; Pieter Vansteenkiste; Lise Van der Haegen Combining user logging with eye tracking for interactive and dynamic applications Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 977–993, 2015. @article{Ooms2015, User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions cause changes in the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze is at a particular screen position (gaze position) over a period of time, the information contained in this particular position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants. Existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) with cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields. |
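The referencing step described above, translating gaze screen coordinates into geographic coordinates using the logged map state, can be pictured as a simple viewport mapping. The unprojected (plate carree) viewport below is an assumption for illustration; a real implementation would apply the map's actual projection (e.g. Web Mercator).

    def screen_to_geo(gx, gy, viewport, screen_w, screen_h):
        # gx, gy: gaze position in pixels; viewport = (lon_min, lat_min, lon_max, lat_max),
        # reconstructed from the logged pan/zoom actions at the time of the gaze sample.
        lon_min, lat_min, lon_max, lat_max = viewport
        lon = lon_min + (gx / screen_w) * (lon_max - lon_min)
        lat = lat_max - (gy / screen_h) * (lat_max - lat_min)   # screen y grows downward
        return lon, lat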
Ioannis Rigas; Oleg V. Komogortsev Eye movement-driven defense against iris print-attacks Journal Article In: Pattern Recognition Letters, vol. 68, no. 2, pp. 316–326, 2015. @article{Rigas2015, This paper proposes a methodology for the utilization of eye movement cues for the task of iris print-attack detection. We investigate the fundamental distortions arising in the eye movement signal during an iris print-attack, due to the structural and functional discrepancies between a paper-printed iris and a natural eye iris. The performed experiments involve the execution of practical print-attacks against an eye-tracking device, and the collection of the resulting eye movement signals. The developed methodology for the detection of print-attack signal distortions is evaluated on a large database collected from 200 subjects, which contains both the real (‘live') eye movement signals and the print-attack (‘spoof') eye movement signals. The suggested methodology provides a sufficiently high detection performance, with a maximum average classification rate (ACR) of 96.5% and a minimum equal error rate (EER) of 3.4%. Due to the hardware similarities between eye tracking and iris capturing systems, we hypothesize that the proposed methodology can be adopted into the existing iris recognition systems with minimal cost. To further support this hypothesis we experimentally investigate the robustness of our scheme by simulating conditions of reduced sampling resolution (temporal and spatial), and of limited duration of the eye movement signals. |
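The equal error rate (EER) reported above is the operating point where the false live rejection rate equals the false spoof acceptance rate. A generic threshold sweep for computing it from detector scores is sketched below; it is illustrative only, and assumes higher scores mean "more likely live".

    import numpy as np

    def equal_error_rate(live_scores, spoof_scores):
        # Sweep decision thresholds and return the point where the two error rates meet.
        best_gap, eer = np.inf, None
        for t in np.unique(np.concatenate([live_scores, spoof_scores])):
            flr = np.mean(np.asarray(live_scores) < t)     # false live rejection rate
            fsa = np.mean(np.asarray(spoof_scores) >= t)   # false spoof acceptance rate
            if abs(flr - fsa) < best_gap:
                best_gap, eer = abs(flr - fsa), (flr + fsa) / 2
        return eer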
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 167–183, 2015. @article{Ryu2015, The main purpose of this study was to examine the relative roles of central and peripheral vision when performing a dynamic forced-choice task. We did so by using a gaze-contingent display with different levels of blur in an effort to (a) test the limit of visual resolution necessary for information pick-up in each of these sectors of the visual field and, as a result, to (b) develop a more natural means of gaze-contingent display using a blurred central or peripheral visual field. The expert advantage seen in usual whole field visual presentation persists despite surprisingly high levels of impairment to central or peripheral vision. Consistent with the well-established central/peripheral differences in sensitivity to spatial frequency, high levels of blur did not prevent better-than-chance performance by skilled players when peripheral information was blurred, but they did affect response accuracy when impairing central vision. Blur was found to always alter the pattern of eye movements before it decreased task performance. The evidence accumulated across the 4 experiments provides new insights into several key questions surrounding the role that different sectors of the visual field play in expertise in dynamic, time-constrained tasks. |
Chengyao Shen; Xun Huang; Qi Zhao Predicting eye fixations in webpages with multi-scale features and high-level representations from deep networks Journal Article In: IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2084–2093, 2015. @article{Shen2015, In recent decades, webpages have become an increasingly important visual information source. Compared with natural images, webpages are different in many ways. For example, webpages are usually rich in semantically meaningful visual media (text, pictures, logos, and animations), which make the direct application of some traditional low-level saliency models ineffective. Besides, distinct web-viewing patterns such as top-left bias and banner blindness suggest different ways for predicting attention deployment on a webpage. In this study, we utilize a new scheme of low-level feature extraction pipeline and combine it with high-level representations from deep neural networks. The proposed model is evaluated on a newly published webpage saliency dataset with three popular evaluation metrics. Results show that our model outperforms other existing saliency models by a large margin and that both low- and high-level features play an important role in predicting fixations on webpages. |
Lisa M. Soederberg Miller; Diana L. Cassady; Elizabeth A. Applegate; Laurel A. Beckett; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood Relationships among food label use, motivation, and dietary quality Journal Article In: Nutrients, vol. 7, no. 2, pp. 1068–1080, 2015. @article{SoederbergMiller2015, Nutrition information on packaged foods supplies information that aids consumers in meeting the recommendations put forth in the US Dietary Guidelines for Americans, such as reducing intake of solid fats and added sugars. It is important to understand how food label use is related to dietary intake. However, prior work is based only on self-reported use of food labels, making it unclear if subjective assessments are biased toward motivational influences. We assessed food label use using both self-reported and objective measures, the stage of change, and dietary quality in a sample of 392 participants stratified by income. Self-reported food label use was assessed using a questionnaire. Objective use was assessed using a mock shopping task in which participants viewed food labels and decided which foods to purchase. Eye movements were monitored to assess attention to nutrition information on the food labels. Individuals paid attention to nutrition information when selecting foods to buy. Self-reported and objective measures of label use showed some overlap with each other (r=0.29, p<0.001), and both predicted dietary quality (p<0.001 for both). The stage of change diminished the predictive power of subjective (p<0.09), but not objective (p<0.01), food label use. These data show both self-reported and objective measures of food label use are positively associated with dietary quality. However, self-reported measures appear to capture a greater motivational component of food label use than do more objective measures. |
Lisa M. Soederberg Miller; Diana L. Cassady; Laurel A. Beckett; Elizabeth A. Applegate; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood Misunderstanding of front-of-package nutrition information on us food products Journal Article In: PLoS ONE, vol. 10, no. 4, pp. e0125306, 2015. @article{SoederbergMiller2015a, Front-of-package nutrition symbols (FOPs) are presumably readily noticeable and require minimal prior nutrition knowledge to use. Although there is evidence to support this notion, few studies have focused on Facts Up Front type symbols which are used in the US. Participants with varying levels of prior knowledge were asked to view two products and decide which was more healthful. FOPs on packages were manipulated so that one product was more healthful, allowing us to assess accuracy. Attention to nutrition information was assessed via eye tracking to determine what if any FOP information was used to make their decisions. Results showed that accuracy was below chance on half of the comparisons despite consulting FOPs. Negative correlations between attention to calories, fat, and sodium and accuracy indicated that consumers over-relied on these nutrients. Although relatively little attention was allocated to fiber and sugar, associations between attention and accuracy were positive. Attention to vitamin D showed no association to accuracy, indicating confusion surrounding what constitutes a meaningful change across products. Greater nutrition knowledge was associated with greater accuracy, even when less attention was paid. Individuals, particularly those with less knowledge, are misled by calorie, sodium, and fat information on FOPs. |
Miguel A. Vadillo; Chris N. H. Street; Tom Beesley; David R. Shanks A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 1365–1376, 2015. @article{Vadillo2015, Poor calibration and inaccurate drift correction can pose severe problems for eye-tracking experiments requiring high levels of accuracy and precision. We describe an algorithm for the offline correction of eye-tracking data. The algorithm conducts a linear transformation of the coordinates of fixations that minimizes the distance between each fixation and its closest stimulus. A simple implementation in MATLAB is also presented. We explore the performance of the correction algorithm under several conditions using simulated and real data, and show that it is particularly likely to improve data quality when many fixations are included in the fitting process. |
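The recalibration idea, fitting the linear transformation of fixation coordinates that minimizes the distance from each fixation to its closest stimulus, can be sketched as iterative nearest-stimulus matching followed by a least-squares affine fit. This Python sketch is not the authors' MATLAB implementation; the affine parameterization and iteration count are assumptions.

    import numpy as np

    def recalibrate(fixations, stimuli, n_iter=10):
        # fixations: (n, 2) fixation coordinates; stimuli: (m, 2) stimulus coordinates.
        fx = np.asarray(fixations, float)
        st = np.asarray(stimuli, float)
        for _ in range(n_iter):
            d = np.linalg.norm(fx[:, None, :] - st[None, :, :], axis=2)
            targets = st[d.argmin(axis=1)]              # closest stimulus per fixation
            A = np.hstack([fx, np.ones((len(fx), 1))])
            T, *_ = np.linalg.lstsq(A, targets, rcond=None)   # best-fitting affine map
            fx = A @ T                                  # apply the fitted transform
        return fx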
Juan D. Velásquez; Pablo Loyola; Gustavo Martinez; Kristofher Munoz; Andrés Couve; Pedro E. Maldonado Combining eye tracking and pupillary dilation analysis to identify website key objects Journal Article In: Neurocomputing, vol. 168, pp. 179–189, 2015. @article{Velasquez2015, Identifying the salient zones from Web interfaces, namely the Website Key Objects, is an essential part of the personalization process that current Web systems perform to increase user engagement. While several techniques have been proposed, most of them are focused on the use of Web usage logs. Only recently has the use of data from users' biological responses emerged as an alternative to enrich the analysis. In this work, a model is proposed to identify Website Key Objects that not only takes into account visual gaze activity, such as fixation time, but also the impact of pupil dilation. Our main hypothesis is that there is a strong relationship in terms of the pupil dynamics and the Web user preferences on a Web page. An empirical study was conducted on a real Website, from which the navigational activity of 23 subjects was captured using an eye tracking device. Results showed that the inclusion of pupillary activity, although not conclusively, allows us to extract a more robust Web Object classification, achieving a 14% increment in the overall accuracy. |
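A simple way to picture the fusion of fixation time and pupil dilation into a per-object salience score is a weighted blend of the two normalized measures, as sketched below. The linear fusion and the 0.5 weight are assumptions used only for illustration, not the model proposed in the paper.

    import numpy as np

    def rank_key_objects(fixation_ms, pupil_z, w_pupil=0.5):
        # fixation_ms: total fixation time per web object;
        # pupil_z: mean baseline-corrected pupil dilation per web object.
        f = np.asarray(fixation_ms, float)
        p = np.asarray(pupil_z, float)
        f = (f - f.min()) / (np.ptp(f) or 1.0)
        p = (p - p.min()) / (np.ptp(p) or 1.0)
        score = (1 - w_pupil) * f + w_pupil * p
        return np.argsort(-score)            # object indices, most salient first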
Jian Wang; Ryoichi Ohtsuka; Kimihiro Yamanaka Relation between mental workload and visual information processing Journal Article In: Procedia Manufacturing, vol. 3, pp. 5308–5312, 2015. @article{Wang2015, The aim of this study is to clarify the relation between mental workload and the function of visual information processing. To examine the mental workload (MWL) relative to the size of the useful field of view (UFOV), an experiment was conducted with 12 participants (ages 21–23). In the primary task, participants responded to visual markers appearing in a computer display. The UFOV and the results of the secondary task for MWL were measured. In the MWL task, participants solved numerical operations designed to increase MWL. The experimental conditions in this task were divided into three categories (Repeat Aloud, Addition, and No Task), where No Task meant no mental task was given. MWL was changed in a stepwise manner. The quantitative assessment confirmed that the UFOV narrows with the increase in the MWL. |
Sheng-Ming Wang Integrating service design and eye tracking insight for designing smart TV user interfaces Journal Article In: International Journal of Advanced Computer Science and Applications, vol. 6, no. 7, pp. 163–171, 2015. @article{Wang2015a, This research proposes a process that integrates the service design method with eye-tracking insight for designing a Smart TV user interface. The service design method, which guides the combination of quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including effectiveness and efficiency testing data obtained from eye-tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining, and evaluating Smart TV user interfaces. It can also help relate the design of Smart TV user interfaces to users' behaviors and needs, and thereby improve the affordance of the design. Future studies may analyse the data derived from eye-tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface. |
Hani Alers; Judith A. Redi; Ingrid Heynderickx Quantifying the importance of preserving video quality in visually important regions at the expense of background content Journal Article In: Signal Processing: Image Communication, vol. 32, pp. 69–80, 2015. @article{Alers2015, Advances in digital technology have allowed us to embed significant processing power in everyday video consumption devices. At the same time, we have placed high demands on the video content itself by continuing to increase spatial resolution while trying to limit the allocated file size and bandwidth as much as possible. The result is typically a trade-off between perceptual quality and fulfillment of technological limitations. To bring this trade-off to its optimum, it is necessary to understand better how people perceive video quality. In this work, we focus on understanding how the spatial location of compression artifacts impacts visual quality perception, specifically in relation to visual attention. In particular, we investigate how changing the quality of the region of interest of a video affects its overall perceived quality, and we quantify the importance of the visual quality of the region of interest to the overall quality judgment. A three-stage experiment was conducted where viewers were shown videos with different quality levels in different parts of the scene. By asking them to score the overall quality we found that the quality of the region of interest has 10 times more impact than the quality of the rest of the scene. These results are in line with similar effects observed in still images, yet in videos the relevance of the visual quality of the region of interest is twice as high as in images. The latter finding is directly relevant for the design of more accurate objective quality metrics for videos that are based on the estimation of local distortion visibility. |
Benedetta Cesqui; Maura Mezzetti; Francesco Lacquaniti; Andrea D'Avella Gaze behavior in one-handed catching and its relation with interceptive performance: What the eyes can't tell Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0119445, 2015. @article{Cesqui2015, In ball sports, it is usually acknowledged that expert athletes track the ball more accurately than novices. However, there is also evidence that keeping the eyes on the ball is not always necessary for interception. Here we aimed at gaining new insights into the extent to which ocular pursuit performance is related to catching performance. To this end, we analyzed eye and head movements of nine subjects catching a ball projected by an actuated launching apparatus. Four different ball flight durations and two different ball arrival heights were tested, and the quality of ocular pursuit was characterized by means of several timing and accuracy parameters. Catching performance differed across subjects and depended on ball flight characteristics. All subjects showed a similar sequence of eye movement events and a similar modulation of the timing of these events in relation to the characteristics of the ball trajectory. On a trial-by-trial basis there was a significant relationship only between pursuit duration and catching performance, confirming that keeping the eyes on the ball longer increases catching success probability. Ocular pursuit parameter values and their dependence on flight conditions, as well as the eye and head contributions to gaze shift, differed across subjects. However, the observed average individual ocular behavior and the eye-head coordination patterns were not directly related to the individual catching performance. These results suggest that several oculomotor strategies may be used to gather information on ball motion, and that factors unrelated to eye movements may underlie the observed differences in interceptive performance. |
Leandro Luigi Di Stasi; Michael B. McCamy; Sebastian Pannasch; Rebekka Renner; Andrés Catena; José J. Cañas; Boris M. Velichkovsky; Susana Martinez-Conde Effects of driving time on microsaccadic dynamics Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 599–605, 2015. @article{DiStasi2015, Driver fatigue is a common cause of car accidents. Thus, the objective detection of driver fatigue is a first step toward the effective management of fatigue-related traffic accidents. Here, we investigated the effects of driving time, a common inducer of driver fatigue, on the dynamics of fixational eye movements. Participants drove for 2 h in a virtual driving environment while we recorded their eye movements. Microsaccade velocities decreased with driving time, suggesting a potential effect of fatigue on microsaccades during driving. |
Ivan Diaz; Sabine Schmidt; Francis R. Verdun; François O. Bochud Eye-tracking of nodule detection in lung CT volumetric data Journal Article In: Medical Physics, vol. 42, no. 6, pp. 2925–2932, 2015. @article{Diaz2015, PURPOSE: Signal detection on 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal and background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets. They gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, showing a method to visually display the behavior of observers as the search is made as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed in lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that when a decision was made, a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to read the images at three fixed speeds of image scrolling and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will with no time or gaze limits. In both static-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second. The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, the performance of the observers was not affected by the contrast or experience of the observer. However, the naïve observers exhibited a different pattern of scrolling than the radiologists, which included a tendency toward a higher number of direction changes and number of slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The speed information that was measured will be useful in the development of 3D model observers, especially anthropomorphic model observers which try to mimic human behavior. |
Hayward J. Godwin; Simon P. Liversedge; Julie A. Kirkby; Michael Boardman; Katherine Cornes; Nick Donnelly The influence of experience upon information-sampling and decision-making behaviour during risk assessment in military personnel Journal Article In: Visual Cognition, vol. 23, no. 4, pp. 415–431, 2015. @article{Godwin2015, We examined the influence of experience upon information-sampling and decision-making behaviour in a group of military personnel as they conducted risk assessments of scenes photographed from patrol routes during the recent conflict in Afghanistan. Their risk assessment was based on an evaluation of Potential Risk Indicators (PRIs) during examination of each scene. We found that both participant groups were equally likely to fixate PRIs, demonstrating similarity in the selectivity of their information-sampling. However, the inexperienced participants made more revisits to PRIs, had longer response times, and were more likely to decide that the scenes contained a high level of risk. Together, these results suggest that experience primarily modulates decision-making behaviour. We discuss potential routes to train personnel to conduct risk assessments in a more similar manner to experienced participants. |
2014 |
Janice Attard; Markus Bindemann Establishing the duration of crimes: An individual differences and eye-tracking investigation into time estimation Journal Article In: Applied Cognitive Psychology, vol. 28, no. 2, pp. 215–225, 2014. @article{Attard2014, The time available for viewing a perpetrator at a crime scene predicts successful person recognition in subsequent identity line-ups. This time is usually unknown and must be derived from eyewitnesses' duration estimates. This study therefore compared the estimates that different individuals provide for crimes. We then attempted to determine the accuracy of these durations by measuring observers' general time estimation ability with a set of estimator videos. Observers differed greatly in their ability to estimate time, but individual duration estimates correlated strongly for crime and estimator materials. This indicates that it might be possible to infer unknown durations of events, such as criminal incidents, from a person's ability to estimate known durations. We also measured observers' eye movements to a perpetrator during crimes. Only fixations on a perpetrator's face related to eyewitness accuracy, but these fixations did not correlate with exposure estimates for this person. The implications of these findings are discussed. |
Y. Behnke Visual qualities of future geography books Journal Article In: European Journal of Geography, vol. 5, no. 4, pp. 56–66, 2014. @article{Behnke2014, The capacity for spatial orientation and associated faculties are closely related to visual competencies. Consequently, the practice and acquisition of visual competencies are vital prerequisites to successful learning and teaching of geography. Today, geography can be understood as a visual discipline and as such may develop strong links to visual communication. In geography, textbooks may establish this link in an everyday context. This Ph.D. project aims to build the bridge between subject content and design. The result will be a visually convincing geography textbook prototype. Fifty-six geography textbooks from different European countries were analysed, focussing on the design concept. Furthermore, double-page spreads of current German geography textbooks were evaluated by observing students' textbook usage via eye tracking. Eye tracking monitors students' reactions to varying contents and designs. Findings from both analyses form the basis for the textbook concept, which is to be developed. |
Erin Berenbaum; Amy E. Latimer-Cheung Examining the link between framed physical activity ads and behavior among women Journal Article In: Journal of Sport and Exercise Psychology, vol. 36, no. 3, pp. 271–280, 2014. @article{Berenbaum2014, Gain-framed messages are more effective at promoting physical activity than loss-framed messages. However, the mechanism through which this effect occurs is unclear. The current experiment examined the effects of message framing on variables described in the communication behavior change model (McGuire, 1989), as well as the mediating effects of these variables on the message-frame-behavior relationship. Sixty low-to-moderately active women viewed 20 gain- or loss-framed ads and five control ads while their eye movements were recorded via eye tracking. The gain-framed ads attracted greater attention, ps < .05; produced more positive attitudes |
Daniel Bishop; Gustav Kuhn; Claire Maton Telling people where to look in a soccer-based decision task: A nomothetic approach Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–13, 2014. @article{Bishop2014, Research has shown that identifiable visual search patterns characterize skilled performance of anticipation and decision-making tasks in sport. However, to date, the use of experts' gaze patterns to entrain novices' performance has been confined to aiming activities. Accordingly, in a first experiment, 40 participants of varying soccer experience viewed static images of oncoming soccer players and attempted to predict the direction in which those players were about to move. Multiple regression analyses showed that the sole predictor of decision-making efficiency was the time taken to initiate a saccade to the ball. In a follow-up experiment, soccer novices undertook the same task as in Experiment 1. Two experimental groups were instructed to either look at the ball, or the player's head, as quickly as possible; a control group received no instructions. The experimental groups were fastest to make a saccade to the ball or head, respectively, but decision-making efficiency was equivalent across all three groups. The fallibility of a nomothetic approach to training eye movements is discussed. |
Hanneke Bouwsema; Corry K. Sluis; Raoul M. Bongers Changes in performance over time while learning to use a myoelectric prosthesis Journal Article In: Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, pp. 1–15, 2014. @article{Bouwsema2014, BACKGROUND: Training increases the functional use of an upper limb prosthesis, but little is known about how people learn to use their prosthesis. The aim of this study was to describe the changes in performance with an upper limb myoelectric prosthesis during practice. The results provide a basis to develop an evidence-based training program. METHODS: Thirty-one able-bodied participants took part in an experiment as well as thirty-one age- and gender-matched controls. Participants in the experimental condition, randomly assigned to one of four groups, practiced with a myoelectric simulator for five sessions in a two-weeks period. Group 1 practiced direct grasping, Group 2 practiced indirect grasping, Group 3 practiced fixating, and Group 4 practiced a combination of all three tasks. The Southampton Hand Assessment Procedure (SHAP) was assessed in a pretest, posttest, and two retention tests. Participants in the control condition performed SHAP two times, two weeks apart with no practice in between. Compressible objects were used in the grasping tasks. Changes in end-point kinematics, joint angles, and grip force control, the latter measured by magnitude of object compression, were examined. RESULTS: The experimental groups improved more on SHAP than the control group. Interestingly, the fixation group improved comparable to the other training groups on the SHAP. Improvement in global position of the prosthesis leveled off after three practice sessions, whereas learning to control grip force required more time. The indirect grasping group had the smallest object compression in the beginning and this did not change over time, whereas the direct grasping and the combination group had a decrease in compression over time. Moreover, the indirect grasping group had the smallest grasping time that did not vary over object rigidity, while for the other two groups the grasping time decreased with an increase in object rigidity. CONCLUSIONS: A training program should spend more time on learning fine control aspects of the prosthetic hand during rehabilitation. Moreover, training should start with the indirect grasping task that has the best performance, which is probably due to the higher amount of useful information available from the sound hand. |
Myriam Chanceaux; Anne Guérin-Dugué; Benoît Lemaire; Thierry Baccino A computational cognitive model of information search in textual materials Journal Article In: Cognitive Computation, vol. 6, no. 1, pp. 1–17, 2014. @article{Chanceaux2014a, Document foraging for information is a crucial and increasingly prevalent activity nowadays. We designed a computational cognitive model to simulate the oculomotor scanpath of an average web user searching for specific information from textual materials. In particular, the developed model dynamically combines visual, semantic, and memory processes to predict the user's focus of attention during information seeking from paragraphs of text. A series of psychological experiments was conducted using eye-tracking techniques in order to validate and refine the proposed model. Comparisons between model simulations and human data are reported and discussed taking into account the strengths and shortcomings of the model. The proposed model provides a unique contribution to the investigation of the cognitive processes involved during information search and bears significant implications for web page design and evaluation. |
Samuel G. Charlton; Nicola J. Starkey; John A. Perrone; Robert B. Isler What's the risk? A comparison of actual and perceived driving risk Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 25, no. A, pp. 50–64, 2014. @article{Charlton2014, It has long been presumed that drivers' perceptions of risk play an important role in guiding on-road behaviour. The answer to how accurately drivers perceive the momentary risk of a driving situation, however, is unknown. This research compared drivers' perceptions of the momentary risk for a range of roads to the objective risk associated with those roads. Videos of rural roads, filmed from the drivers' perspective, were presented to 69 participants seated in a driving simulator while they indicated the momentary levels of risk they were experiencing by moving a risk meter mounted on the steering wheel. Estimates of the objective levels of risk for the roads were calculated using road protection scores from the KiwiRAP database (part of the International Road Assessment Programme). Subsequently, the participants also provided risk estimates for still photos taken from the videos. Another group of 10 participants viewed the videos and photos while their eye movements and fixations were recorded. In a third experiment, 14 participants drove a subset of the roads in a car while providing risk ratings at selected points of interest. Results showed a high degree of consistency across the different methods. Certain road situations were rated as being riskier than the objective risk, and perhaps more importantly, the risk of other situations was significantly under-rated. Horizontal curves and narrow lanes were associated with over-rated risk estimates, while intersections and roadside hazards such as narrow road shoulders, power poles and ditches were significantly under-rated. Analysis of eye movements indicated that drivers did not fixate these features and that the spread of fixations, pupil size and eye blinks were significantly correlated with the risk ratings. An analysis of the road design elements at 77 locations in the video revealed five road characteristics that predicted nearly 80% of the variance in drivers' risk perceptions; horizontal curvature, lane and shoulder width, gradient, and the presence of median barriers. |
Mina Choi; Joel Wang; Wei Chung Cheng; Giovanni Ramponi; Luigi Albani; Aldo Badano Effect of veiling glare on detectability in high-dynamic-range medical images Journal Article In: IEEE/OSA Journal of Display Technology, vol. 10, no. 5, pp. 420–428, 2014. @article{Choi2014a, We describe a methodology for predicting the detectability of subtle targets in dark regions of high-dynamic-range (HDR) images in the presence of veiling glare in the human eye. The method relies on predictions of contrast detection thresholds for the human visual system within a HDR image based on psychophysics measurements and modeling of the HDR display device characteristics. We present experimental results used to construct the model and discuss an image-dependent empirical veiling glare model and the validation of the model predictions with test patterns, natural scenes, and medical images. The model predictions are compared to a previously reported model (HDR-VDP2) for predicting HDR image quality accounting for glare effects. |
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier Video viewing: Do auditory salient events capture visual attention? Journal Article In: Annals of Telecommunications, vol. 69, no. 1-2, pp. 89–97, 2014. @article{Coutrot2014a, We assess whether salient auditory events contained in soundtracks modify eye movements when exploring videos. In a previous study, we found that, on average, nonspatial sound contained in video soundtracks impacts on eye movements. This result indicates that sound could play a leading part in visual attention models to predict eye movements. In this research, we go further and test whether the effect of sound on eye movements is stronger just after salient auditory events. To automatically spot salient auditory events, we used two auditory saliency models: the discrete energy separation algorithm and the energy model. Both models provide a saliency time curve, based on the fusion of several elementary audio features. The most salient auditory events were extracted by thresholding these curves. We examined some eye movement parameters just after these events rather than on all the video frames. We showed that the effect of sound on eye movements (variability between eye positions, saccade amplitude, and fixation duration) was not stronger after salient auditory events than on average over entire videos. Thus, we suggest that sound could impact on visual exploration not only after salient events but in a more global way. |
Leandro Luigi Di Stasi; Michael B. McCamy; Stephen L. Macknik; James A. Mankin; Nicole Hooft; Andrés Catena; Susana Martinez-Conde Saccadic eye movement metrics reflect surgical residents' fatigue Journal Article In: Annals of Surgery, vol. 259, no. 4, pp. 824–829, 2014. @article{DiStasi2014a, OBJECTIVE: Little is known about the effects of surgical residents' fatigue on patient safety. We monitored surgical residents' fatigue levels during their call day using (1) eye movement metrics, (2) objective measures of laparoscopic surgical performance, and (3) subjective reports based on standardized questionnaires. BACKGROUND: Prior attempts to investigate the effects of fatigue on surgical performance have suffered from methodological limitations, including inconsistent definitions and lack of objective measures of fatigue, and nonstandardized measures of surgical performance. Recent research has shown that fatigue can affect the characteristics of saccadic (fast ballistic) eye movements in nonsurgical scenarios. Here we asked whether fatigue induced by time-on-duty (∼24 hours) might affect saccadic metrics in surgical residents. Because saccadic velocity is not under voluntary control, a fatigue index based on saccadic velocity has the potential to provide an accurate and unbiased measure of the resident's fatigue level. METHODS: We measured the eye movements of members of the general surgery resident team at St. Joseph's Hospital and Medical Center (Phoenix, AZ) (6 males and 6 females), using a head-mounted video eye tracker (similar configuration to a surgical headlight), during the performance of 3 tasks: 2 simulated laparoscopic surgery tasks (peg transfer and precision cutting) and a guided saccade task, before and after their call day. Residents rated their perceived fatigue level every 3 hours throughout their 24-hour shift, using a standardized scale. RESULTS: Time-on-duty decreased saccadic velocity and increased subjective fatigue but did not affect laparoscopic performance. These results support the hypothesis that saccadic indices reflect graded changes in fatigue. They also indicate that fatigue due to prolonged time-on-duty does not necessarily result in medical error, highlighting the complicated relationship among continuity of care, patient safety, and fatigued providers. CONCLUSIONS: Our data show, for the first time, that saccadic velocity is a reliable indicator of the subjective fatigue of health care professionals during prolonged time-on-duty. These findings have potential impacts for the development of neuroergonomic tools to detect fatigue among health professionals and in the specifications of future guidelines regarding residents' duty hours. |
Lien Dupont; Marc Antrop; Veerle Van Eetvelde Eye-tracking analysis in landscape perception research: Influence of photograph properties and landscape characteristics Journal Article In: Landscape Research, vol. 39, no. 4, pp. 417–432, 2014. @article{Dupont2014, The European Landscape Convention emphasises the need for public participation in landscape planning and management. This demands understanding of how people perceive and observe landscapes. This can objectively be measured using eye tracking, a system recording eye movements and fixations while observing images. In this study, 23 participants were asked to observe 90 landscape photographs, representing 18 landscape character types in Flanders (Belgium) differing in degree of openness and heterogeneity. For each landscape, five types of photographs were shown, varying in view angle. This experiment design allowed testing the effect of the landscape characteristics and photograph types on the observation pattern, measured by Eye-tracking Metrics (ETM). The results show that panoramic and detail photographs are observed differently than the other types. The degree of openness and heterogeneity also seems to exert a significant influence on the observation of the landscape. |
Daniel Frings; John Parkin; Anne M. Ridley The effects of cycle lanes, vehicle to kerb distance and vehicle type on cyclists' attention allocation during junction negotiation Journal Article In: Accident Analysis and Prevention, vol. 72, pp. 411–421, 2014. @article{Frings2014, Increased frequency of cycle journeys has led to an escalation in collisions between cyclists and vehicles, particularly at shared junctions. Risks associated with passing decisions have been shown to influence cyclists' behavioural intentions. The current study extended this research by linking not only risk perception but also attention allocation (via tracking the eye movements of twenty cyclists viewing junction approaches presented on video) to behavioural intentions. These constructs were measured in a variety of contexts: junctions featuring cycle lanes, large vs. small vehicles, and differing kerb-to-vehicle distances. Overall, cyclists devoted the majority of their attention to the nearside (side closest to the kerb) of vehicles, and perceived nearside and offside (side furthest from the kerb) passing as most risky. Waiting behind was the most frequent behavioural intention, followed by nearside and then offside passing. While cycle lane presence did not affect behaviour, it did lead to nearside passing being perceived as less risky, and to less attention being devoted to the offside. Large vehicles led to increased perceived risk of passing, more attention directed towards the rear of vehicles, reduced offside passing, and increased intentions to remain behind the vehicle. Whether the vehicle was large or small, nearside passing was preferred around 30% of the time. Wide kerb distances increased nearside passing intentions and lowered the associated perceptions of risk. Additionally, relationships between attention and both risk evaluations and behaviours were observed. These results are discussed in relation to the cyclists' situational awareness and the biases that various contextual factors can introduce. From these, recommendations for road safety and training are suggested. |
Jingwen Yang; Frederic Hamelin; Dominique Sauter Fault detection observer design using time and frequency domain specifications Journal Article In: IFAC Proceedings Volumes, vol. 19, no. 1, pp. 8564–8569, 2014. @article{Yang2014, Several scholars have proposed personalization models based on product variety breadth and the intensity of customer-firm interaction with a focus on marketing strategies ranging from basic product versioning to customerization and reverse marketing. However, some studies have shown that the explosion of product variety may generate information overload. Moreover, customers are highly heterogeneous in willingness and ability to interact with firms in personalization processes. This often results in consumer confusion and wasteful investments. To address this problem, we propose a conceptual framework of e-customer profiling for interactive personalization by distinguishing content (that is, expected customer benefits) and process (that is, expected degree of interaction) issues. The framework focuses on four general dimensions suggested by previous research as significant drivers of online customer heterogeneity: VALUE, KNOWLEDGE, ORIENTATION, and RELATIONSHIP QUALITY. We also present a preliminary test of the framework and derive directions for customer relationship management and future research. |
Kristin J. Heaton; Alexis L. Maule; Jun Maruta; Elisabeth M. Kryskow; Jamshid Ghajar Attention and visual tracking degradation during acute sleep deprivation in a military sample Journal Article In: Aviation Space and Environmental Medicine, vol. 85, no. 5, pp. 497–503, 2014. @article{Heaton2014, Background: Fatigue due to sleep restriction places individuals at elevated risk for accidents, degraded health, and impaired physical and mental performance. Early detection of fatigue-related performance decrements is an important component of injury prevention and can help to ensure optimal performance and mission readiness. This study used a predictive visual tracking task and a computer-based measure of attention to characterize fatigue-related attention decrements in healthy Army personnel during acute sleep deprivation. Methods: Serving as subjects in this laboratory-based study were 87 male and female service members between the ages of 18 and 50 with no history of brain injury with loss of consciousness, substance abuse, or significant psychiatric or neurologic diagnoses. Subjects underwent 26 h of sleep deprivation, during which eye movement measures from a continuous circular visual tracking task and attention measures (reaction time, accuracy) from the Attention Network Test (ANT) were collected at baseline, 20 h awake, and between 24 to 26 h awake. Results: Increases in the variability of gaze positional errors (46-47%), as well as reaction time-based ANT measures (9-65%), were observed across 26 h of sleep deprivation. Accuracy of ANT responses declined across this same period (11%). Discussion: Performance measures of predictive visual tracking accurately reflect impaired attention due to acute sleep deprivation and provide a promising approach for assessing readiness in personnel serving in diverse occupational areas, including flight and ground support crews. |
Benedetta Heimler; Francesco Pavani; Mieke Donk; Wieske Zoest Stimulus- and goal-driven control of eye movements: Action videogame players are faster but not better Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 8, pp. 2398–2412, 2014. @article{Heimler2014, Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs. |
Oleg V. Komogortsev; Corey D. Holland; Alex Karpov; Larry R. Price Biometrics via oculomotor plant characteristics: Impact of parameters in oculomotor plant model Journal Article In: ACM Transactions on Applied Perception, vol. 11, no. 4, pp. 1–17, 2014. @article{Komogortsev2014, This article proposes and evaluates a novel biometric approach utilizing the internal, nonvisible, anatomical structure of the human eye. The proposed method estimates the anatomical properties of the human oculomotor plant from the measurable properties of human eye movements, utilizing a two-dimensional linear homeomorphic model of the oculomotor plant. The derived properties are evaluated within a biometric framework to determine their efficacy in both verification and identification scenarios. The results suggest that the physical properties derived from the oculomotor plant model are capable of achieving 20.3% equal error rate and 65.7% rank-1 identification rate on high-resolution equipment involving 32 subjects, with biometric samples taken over four recording sessions; or 22.2% equal error rate and 12.6% rank-1 identification rate on low-resolution equipment involving 172 subjects, with biometric samples taken over two recording sessions. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Yung-Hui Lee Evaluating camouflage design using eye movement data Journal Article In: Applied Ergonomics, vol. 45, no. 3, pp. 714–723, 2014. @article{Lin2014d, This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect the eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of the camouflage assessment. We hypothesized that the assessment could be made with regard to the differences in detectability and discriminability of the camouflage patterns. These could explain less efficient search behavior in eye movements. Overall, data obtained from eye movements can be used to significantly enhance the interpretation of the effects of different camouflage design. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Bor-Shong Liu Developing and evaluating a target-background similarity metric for camouflage detection Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e87310, 2014. @article{Lin2014e, BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and that it could be a potential camouflage assessment tool. METHODOLOGY: In this study, we want to quantify the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. |
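For context on the index named in the entry above: the Universal Image Quality Index combines luminance, contrast, and structural similarity between two image patches in a single score. The Python sketch below computes the global (single-window) form of the index over a target patch and a background patch; the paper does not specify window sizes or preprocessing, so the function signature and the whole-patch computation are assumptions made for illustration only.

```python
import numpy as np

def uiqi(target_patch, background_patch):
    """Global (single-window) Universal Image Quality Index between two
    equally sized grayscale patches; values near 1 indicate the target
    closely resembles the background.  This sketch includes no guard
    against degenerate (constant or zero-mean) patches."""
    x = np.asarray(target_patch, dtype=float).ravel()
    y = np.asarray(background_patch, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```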
Hsin-Hui Lin; Shu-Fei Yang An eye movement study of attribute framing in online shopping Journal Article In: Journal of Marketing Analytics, vol. 2, no. 2, pp. 72–80, 2014. @article{Lin2014c, This study uses an eye-tracking method to explore the framing effect on observed eye movements and purchase intention in online shopping. The results show that negative framing induces more active eye movements. Functional and non-functional attributes attract more eye movements, and with higher intensity. The scanpath over the areas of interest also reveals a certain pattern. These findings have practical implications for e-sellers seeking to improve communication with customers. |
John J. H. Lin; Sunny S. J. Lin Cognitive load for configuration comprehension in computer-supported geometry problem solving: An eye movement perspective Journal Article In: International Journal of Science and Mathematics Education, vol. 12, no. 3, pp. 605–627, 2014. @article{Lin2014a, The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations involving the number of informational elements, the number of element interactivities and the level of mental operations were assumed to account for the increasing difficulty. A sample of 311 9th grade students solved five geometry problems that required knowledge of similar triangles in a computer-supported environment. In the second experiment, 63 participants solved the same problems and eye movements were recorded. The results indicated that (1) the five problems differed in pass rate and in self-reported cognitive load; (2) because the successful solvers were very swift in pattern recognition and visual integration, their fixation did not clearly show valuable information; (3) more attention and more time (shown by the heat maps, dwell time and fixation counts) were given to read the more difficult configurations than to the intermediate or easier configurations; and (4) in addition to number of elements and element interactivities, the level of mental operations accounts for the major cognitive load sources of configuration comprehension. The results derived some implications for design principles of geometry diagrams in secondary school mathematics textbooks. |
John J. H. Lin; Sunny S. J. Lin Tracking eye movements when solving geometry problems with handwriting devices Journal Article In: Journal of Eye Movement Research, vol. 7, no. 1, pp. 1–15, 2014. @article{Lin2014b, The present study investigated the following issues: (1) whether differences are evident in the eye movement measures of successful and unsuccessful problem-solvers; (2) what is the relationship between perceived difficulty and eye movement measures; and (3) whether eye movements in various AOIs differ when solving problems. Sixty-three 11th grade students solved five geometry problems about the properties of similar triangles. A digital drawing tablet and sensitive pressure pen were used to record the responses. The results indicated that unsuccessful solvers tended to have more fixation counts, run counts, and longer dwell time on the problem area, whereas successful solvers focused more on the calculation area. In addition, fixation counts, dwell time, and run counts in the diagram area were positively correlated with the perceived difficulty, suggesting that understanding similar triangles may require translation or mental rotation. We argue that three eye movement measures (i.e., fixation counts, dwell time, and run counts) are appropriate for use in examining problem solving given that they differentiate successful from unsuccessful solvers and correlate with perceived difficulty. Furthermore, the eye-tracking technique provides objective measures of students' cognitive load for instructional designers. |
Tzu Chien Liu; Melissa Hui Mei Fan; Fred Paas In: Computers & Education, vol. 70, pp. 9–20, 2014. @article{Liu2014b, Recent research has shown that students involved in computer-based second language learning prefer to use a digital dictionary in which a word can be looked up by clicking on it with a mouse (i.e., click-on dictionary) over a digital dictionary in which a word can be looked up by typing it on a keyboard (i.e., key-in dictionary). This study investigated whether digital dictionary format also differentially affects students' incidental acquisition of spelling knowledge and cognitive load during second language learning. A comparison between a click-on dictionary condition, a key-in dictionary condition, and a non-dictionary control condition for 45 Taiwanese students learning English as a foreign language revealed that learners who used a key-in dictionary invested more time in dictionary consultation than learners who used a click-on dictionary. However, on a subsequent unexpected spelling test the key-in group invested less time and performed better than the click-on group. The theoretical and practical implications of the results are discussed. |
W. Joseph MacInnes; Amelia R. Hunt; Matthew D. Hilchey; Raymond M. Klein Driving forces in free visual search: An ethology Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 280–295, 2014. @article{MacInnes2014a, Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses. |
Olivia M. Maynard; Angela Attwood; Laura O'Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R. Munafò Avoidance of cigarette pack health warnings among regular cigarette smokers Journal Article In: Drug and Alcohol Dependence, vol. 136, no. 1, pp. 170–174, 2014. @article{Maynard2014, Background: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology. Method: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types were recorded. Results: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location. Conclusion: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. These findings have implications for cigarette packaging and health warning policy. |
K. Ooms; Philippe De Maeyer; V. Fack Study of the attentive behavior of novice and expert map users using eye tracking Journal Article In: Cartography and Geographic Information Science, vol. 41, no. 1, pp. 37–54, 2014. @article{Ooms2014, The aim of this paper is to gain better understanding of the way map users read and interpret the visual stimuli presented to them and how this can be influenced. In particular, the difference between expert and novice map users is considered. In a user study, the participants studied four screen maps which had been manipulated to introduce deviations. The eye movements of 24 expert and novice participants were tracked, recorded, and analyzed (both visually and statistically) based on a grid of Areas of Interest. These visual analyses are essential for studying the spatial dimension of maps to identify problems in design. In this research, we used visualization of eye movement metrics (fixation count and duration) in a 2D and 3D grid and a statistical comparison of the grid cells. The results show that the users' eye movements clearly reflect the main elements on the map. The users' attentive behavior is influenced by deviating colors, as their attention is drawn to it. This could also influence the users' interpretation process. Both user groups encountered difficulties when trying to interpret and store map objects that were mirrored. Insights into how different types of map users read and interpret map content are essential in this fast-evolving era of digital cartographic products. |
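The grid-based analysis of fixation count and fixation duration described in the entry above can be reproduced with a simple aggregation over grid cells. The Python sketch below is a generic illustration, not the authors' code; the screen resolution, grid dimensions, and the (x, y, duration) fixation format are placeholder assumptions.

```python
import numpy as np

def grid_aoi_metrics(fixations, screen_size=(1024, 768), grid=(8, 6)):
    """Aggregate fixation count and total dwell time into a regular grid of
    AOIs.  `fixations` is an iterable of (x, y, duration_ms) tuples; the
    screen resolution and grid dimensions are arbitrary placeholders."""
    cols, rows = grid
    width, height = screen_size
    counts = np.zeros((rows, cols), dtype=int)
    dwell = np.zeros((rows, cols), dtype=float)
    for x, y, duration in fixations:
        c = min(int(x / width * cols), cols - 1)    # clamp to the last column
        r = min(int(y / height * rows), rows - 1)   # clamp to the last row
        counts[r, c] += 1
        dwell[r, c] += duration
    return counts, dwell
```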
Alessandro Piras; Roberto Lobietti; Salvatore Squatrito Response time, visual search strategy, and anticipatory skills in volleyball players Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–10, 2014. @article{Piras2014, This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing offensive action were presented to participants, while their eye movements were recorded by a head-mounted video based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter's toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter's hand-ball contact to the button pressed by the participants. Experts were faster and more accurate in predicting the direction of the setting than novices, showing accurate predictions when they used a search strategy involving fewer fixations of longer duration, as well as spending less time in fixating all display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task. |
Alessandro Piras; Emanuela Pierantozzi; Salvatore Squatrito Visual search strategy in judo fighters during the execution of the first grip Journal Article In: International Journal of Sports Science & Coaching, vol. 9, no. 1, pp. 185–198, 2014. @article{Piras2014a, Visual search behaviour is believed to be very relevant for athlete performance, especially for sports requiring refined visuo-motor coordination skills. Modern coaches believe that optimal visuo-motor strategy may be part of advanced training programs. Gaze behaviour of expert and novice judo fighters was investigated while they were doing a real sport-specific task. The athletes were tested while they performed a first grip either in an attack or defence condition. The results showed that expert judo fighters use a search strategy involving fewer fixations of longer duration than their novice counterparts. Experts spent a greater percentage of their time fixating on lapel and face with respect to other areas of the scene. On the contrary, the most frequently fixed cue for novice group was the sleeve area. It can be concluded that experts orient their gaze in the middle of the scene, both in attack and in defence, in order to gather more information at once, perhaps using parafoveal vision. |
Frederik Platten; Maximilian Schwalm; Julia Hülsmann; Josef Krems Analysis of compensative behavior in demanding driving situations Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 26, no. A, pp. 38–48, 2014. @article{Platten2014, Drivers usually perform a range of different activities while driving. Following a classical workload approach, additional activities are expected to increase the demand on the driver. Nevertheless, drivers can usually manage even demanding situations successfully. They seem to be able to compensate for demands by behavior adaptations, mainly in the following factors: in the driving task itself, in an additional (secondary) task, and in their mental workload. It is suggested that by analyzing these three factors in temporal coherence, compensative interactions between them become measurable. Additionally, a reduction of activity in the secondary task is expected to be influenced by the characteristics of this task. To analyze these effects, a driving simulator study with 33 participants was conducted. It could be shown that if a secondary task can be interrupted without a perceived decline in performance, it is interrupted in demanding driving situations. If an interruption causes a perceived performance loss, efforts are increased and the workload is heightened (measured with a high-resolution physiological measurement based on pupillometry). Thus, drivers compensate for their current demands by behavior adaptations in different factors, depending on the characteristics of the secondary task. |
Ioannis Rigas; Oleg V. Komogortsev Biometric recognition via probabilistic spatial projection of eye movement trajectories in dynamic visual environments Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 9, no. 10, pp. 1743–1754, 2014. @article{Rigas2014, This paper proposes a method for the extraction of biometric features from the spatial patterns formed by eye movements during an inspection of dynamic visual stimulus. In the suggested framework, each eye movement signal is transformed into a time-constrained decomposition by using a probabilistic representation of spatial and temporal features related to eye fixations and called fixation density map (FDM). The results for a large collection of eye movements recorded from 200 individuals indicate the best equal error rate of 10.8% and Rank-1 identification rate as high as 51%, which is a significant improvement over existing eye movement-driven biometric methods. In addition, our experiments reveal that a person recognition approach based on the FDM performs well even in cases when eye movement data are captured at lower than optimum sampling frequencies. This property is very important for the future ocular biometric systems where existing iris recognition devices could be employed to combine eye movement traits with iris information for increased security and accuracy. Considering that commercial iris recognition devices are able to implement eye image sampling usually at a relatively low rate, the ability to perform eye movement-driven biometrics at such rates is of great significance. |
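A fixation density map of the kind used in the entry above is, in essence, a spatial distribution of fixation locations smoothed with a Gaussian kernel and normalised to sum to one. The Python sketch below shows that basic construction; the screen size and kernel width are illustrative assumptions, and the paper's temporal decomposition and probabilistic matching steps are not reproduced here.

```python
import numpy as np

def fixation_density_map(fixations, screen=(1024, 768), sigma_px=30.0):
    """Place a 2-D Gaussian at every fixation location and normalise the
    result to a probability distribution over screen pixels.  Screen size
    and kernel width are illustrative, not the paper's parameters."""
    width, height = screen
    ys, xs = np.mgrid[0:height, 0:width]
    fdm = np.zeros((height, width), dtype=float)
    for fx, fy in fixations:
        fdm += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma_px ** 2))
    return fdm / fdm.sum()  # sums to 1; assumes at least one fixation
```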
Germán Sanchis-Trilles; Vicent Alabau; Christian Buck; Michael Carl; Francisco Casacuberta; Mercedes García-Martínez; Ulrich Germann; Jesús González-Rubio; Robin L. Hill; Philipp Koehn; Luis A. Leiva; Bartolomé Mesa-Lao; Daniel Ortiz-Martínez; Herve Saint-Amand; Chara Tsoukala; Enrique Vidal Interactive translation prediction versus conventional post-editing in practice: A study with the CasMaCat workbench Journal Article In: Machine Translation, vol. 28, no. 3-4, pp. 217–235, 2014. @article{SanchisTrilles2014, We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence post-editing), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer key strokes to arrive at the final version of their translation. |
Lutz Schega; Daniel Hamacher; Sandra Erfuth; Wolfgang Behrens-Baumann; Juliane Reupsch; Michael B. Hoffmann Differential effects of head-mounted displays on visual performance Journal Article In: Ergonomics, vol. 57, no. 1, pp. 1–11, 2014. @article{Schega2014, Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look around (LA); optical see-through with organic light-emitting diodes; and virtual retinal display] to determine whether LA, leaving the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of HMDs on the visual search and Go-NoGo task were small, but display-switching costs for the Go-NoGo task were pronounced for LA. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements and absent stereo-vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect. |
Jennifer G. Tichon; Timothy Mavin; Guy Wallis; Troy A. W. Visser; Stephan Riek Using pupillometry and electromyography to track positive and negative affect during flight simulation Journal Article In: Aviation Psychology and Applied Human Factors, vol. 4, no. 1, pp. 23–32, 2014. @article{Tichon2014, Affect is a key determinant of performance, due to its influence on cognitive processing. Negative emotions such as anxiety are recognized cognitive stressors shown to degrade decision making and situation awareness. Conversely, positive affect can improve problem solving and facilitate recall. This exploratory pilot study used electromyography and pupillometry measures to track pilots' levels of negative and positive affect while training in a flight simulator. Fixation duration and saccade rate were found to correspond reliably to pilot self-reports of anxiety. Additionally, large increases in muscle activation were also recorded when higher anxiety was reported. Decreases in positive affect correlated significantly with saccade rate, fixation duration, and mean saccade velocity. Results are discussed in terms of using psychophysiological measures to provide a continuous, objective measure of pilot affective levels as an additional evaluation method to support assessment of pilot performance in simulation training environments. |
Yusuke Uchida; Nobuaki Mizuguchi; Masaaki Honda; Kazuyuki Kanosue Prediction of shot success for basketball free throws: Visual search strategy Journal Article In: European Journal of Sport Science, vol. 14, no. 5, pp. 426–432, 2014. @article{Uchida2014, In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed. |
Boris M. Velichkovsky; Mikhail A. Rumyantsev; Mikhail A. Morozov New solution to the Midas Touch Problem: Identification of visual commands via extraction of focal fixations Journal Article In: Procedia Computer Science, vol. 39, pp. 75–82, 2014. @article{Velichkovsky2014, Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed. |
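The focal-fixation criterion is typically operationalized as a long fixation followed by a short subsequent saccade. The sketch below encodes that rule of thumb; the duration and amplitude thresholds are illustrative assumptions, not the values calibrated by Velichkovsky et al.

```python
# Sketch: flag "focal" fixations (long duration followed by a short saccade), the cue
# proposed for separating intentional gaze commands from ambient viewing.
# Thresholds below are illustrative, not the paper's calibrated values.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # degrees of visual angle
    y: float
    duration: float  # milliseconds

def is_focal(fix: Fixation, next_fix: Fixation,
             min_duration=300.0, max_next_saccade=2.0) -> bool:
    """Focal if the fixation is long and the following saccade stays local."""
    saccade_amp = ((next_fix.x - fix.x) ** 2 + (next_fix.y - fix.y) ** 2) ** 0.5
    return fix.duration >= min_duration and saccade_amp <= max_next_saccade

scanpath = [Fixation(1.0, 1.0, 180), Fixation(8.0, 2.0, 450), Fixation(8.5, 2.2, 320)]
commands = [f for f, nxt in zip(scanpath, scanpath[1:]) if is_focal(f, nxt)]
print(len(commands))  # 1 -> the long fixation followed by a small saccade
```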
2013 |
Pierre-Vincent Paubel; Philippe Averty; Éric Raufaste Effects of an automated conflict solver on the visual activity of air traffic controllers Journal Article In: International Journal of Aviation Psychology, vol. 23, no. 2, pp. 181–196, 2013. @article{Paubel2013, ERASMUS is a "subliminal" automated aid system designed to reduce air traffic controllers' workload. Prior experiments showed that ERASMUS reduced subjective ratings of mental workload. In this article, the effect of ERASMUS on objective measures of controllers' visual activity was tested in a fully realistic simulation environment. The eye movements of 7 controllers were recorded during experimental traffic sequences, with and without ERASMUS. Consistent with a reduced workload hypothesis, results showed medium to large effects of ERASMUS on the amplitude of saccades, on the time spent gazing at aircraft, and on the distribution of attention over the visual scene. |
Adam M. Perkins; Ulrich Ettinger; K. Weaver; Anne Schmechtig; A. Schrantee; P. D. Morrison; A. Sapara; V. Kumari; Steve C. R. Williams; P. J. Corr In: Translational Psychiatry, vol. 3, pp. e246, 2013. @article{Perkins2013, Clinically effective drugs against human anxiety and fear systematically alter the innate defensive behavior of rodents, suggesting that in humans these emotions reflect defensive adaptations. Compelling experimental human evidence for this theory is yet to be obtained. We report the clearest test to date by investigating the effects of 1 and 2 mg of the anti-anxiety drug lorazepam on the intensity of threat-avoidance behavior in 40 healthy adult volunteers (20 females). We found lorazepam modulated the intensity of participants' threat-avoidance behavior in a dose-dependent manner. However, the pattern of effects depended upon two factors: type of threat-avoidance behavior and theoretically relevant measures of personality. In the case of flight behavior (one-way active avoidance), lorazepam increased intensity in low scorers on the Fear Survey Schedule tissue-damage fear but reduced it in high scorers. Conversely, in the case of risk-assessment behavior (two-way active avoidance), lorazepam reduced intensity in low scorers on the Spielberger trait anxiety but increased it in high scorers. Anti-anxiety drugs do not systematically affect rodent flight behavior; therefore, we interpret this new finding as suggesting that lorazepam has a broader effect on defense in humans than in rodents, perhaps by modulating general perceptions of threat intensity. The different patterning of lorazepam effects on the two behaviors implies that human perceptions of threat intensity are nevertheless distributed across two different neural streams, whose influence is observed in the one-way or two-way active avoidance demanded by the situation. |
Judith Peth; Johann S. C. Kim; Matthias Gamer Fixations and eye-blinks allow for detecting concealed crime related memories Journal Article In: International Journal of Psychophysiology, vol. 88, no. 1, pp. 96–103, 2013. @article{Peth2013, The Concealed Information Test (CIT) is a method of forensic psychophysiology that allows for revealing concealed crime related knowledge. Such detection is usually based on autonomic responses but there is a huge interest in other measures that can be acquired unobtrusively. Eye movements and blinks might be such measures but their validity is unclear. Using a mock crime procedure with a manipulation of the arousal during the crime as well as the delay between crime and CIT, we tested whether eye tracking measures allow for detecting concealed knowledge. Guilty participants showed fewer but longer fixations on central crime details and this effect was even present after stimulus offset and accompanied by a reduced blink rate. These ocular measures were partly sensitive for induction of emotional arousal and time of testing. Validity estimates were moderate but indicate that a significant differentiation between guilty and innocent subjects is possible. Future research should further investigate validity differences between gaze measures during a CIT and explore the underlying mechanisms. |
Hector Rieiro; Susana Martinez-Conde; Stephen L. Macknik Perceptual elements in Penn & Teller's “Cups and Balls” magic trick Journal Article In: PeerJ, vol. 1, pp. 1–12, 2013. @article{Rieiro2013, Magic illusions provide the perceptual and cognitive scientist with a toolbox of experimental manipulations and testable hypotheses about the building blocks of conscious experience. Here we studied several sleight-of-hand manipulations in the performance of the classic "Cups and Balls" magic trick (where balls appear and disappear inside upside-down opaque cups). We examined a version inspired by the entertainment duo Penn & Teller, conducted with three opaque and subsequently with three transparent cups. Magician Teller used his right hand to load (i.e. introduce surreptitiously) a small ball inside each of two upside-down cups, one at a time, while using his left hand to remove a different ball from the upside-down bottom of the cup. The sleight at the third cup involved one of six manipulations: (a) standard maneuver, (b) standard maneuver without a third ball, (c) ball placed on the table, (d) ball lifted, (e) ball dropped to the floor, and (f) ball stuck to the cup. Seven subjects watched the videos of the performances while reporting, via button press, whenever balls were removed from the cups/table (button "1") or placed inside the cups/on the table (button "2"). Subjects' perception was more accurate with transparent than with opaque cups. Perceptual performance was worse for the conditions where the ball was placed on the table, or stuck to the cup, than for the standard maneuver. The condition in which the ball was lifted displaced the subjects' gaze position the most, whereas the condition in which there was no ball caused the smallest gaze displacement. Training improved the subjects' perceptual performance. Occlusion of the magician's face did not affect the subjects' perception, suggesting that gaze misdirection does not play a strong role in the Cups and Balls illusion. Our results have implications for how to optimize the performance of this classic magic trick, and for the types of hand and object motion that maximize magic misdirection. |
Yasuhiro Seya; Hidetoshi Nakayasu; Tadasu Yagi Useful field of view in simulated driving: Reaction times and eye movements of drivers Journal Article In: i-Perception, vol. 4, no. 4, pp. 285–298, 2013. @article{Seya2013, To examine the spatial distribution of a useful field of view (UFOV) in driving, reaction times (RTs) and eye movements were measured in simulated driving. In the experiment, a normal or mirror-reversed letter "E" was presented on driving images with different eccentricities and directions from the current gaze position. The results showed significantly slower RTs in the upper and upper left directions than in the other directions. The RTs were significantly slower in the left directions than in the right directions. These results suggest that the UFOV in driving may be asymmetrical among the meridians in the visual field. |
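The UFOV analysis above bins probe stimuli by eccentricity and direction relative to the current gaze position. A minimal sketch of that bookkeeping follows, assuming screen coordinates with y increasing upward; the eight direction labels and 45-degree bins are illustrative, not Seya et al.'s exact design.

```python
# Sketch: express a probe location relative to the current gaze position as
# (eccentricity, direction bin), the kind of computation a gaze-contingent
# UFOV probe task needs. Assumes y increases upward; bin labels are illustrative.
import math

DIRECTIONS = ["right", "upper right", "up", "upper left",
              "left", "lower left", "down", "lower right"]

def probe_relative_to_gaze(gaze_deg, probe_deg):
    dx, dy = probe_deg[0] - gaze_deg[0], probe_deg[1] - gaze_deg[1]
    eccentricity = math.hypot(dx, dy)                # degrees of visual angle
    angle = math.degrees(math.atan2(dy, dx)) % 360   # 0 deg = right, counterclockwise
    direction = DIRECTIONS[int(((angle + 22.5) % 360) // 45)]
    return eccentricity, direction

print(probe_relative_to_gaze((0.0, 0.0), (-5.0, 5.0)))  # (~7.07, 'upper left')
```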
Heather Sheridan; Eyal M. Reingold The mechanisms and boundary conditions of the Einstellung Effect in chess: Evidence from eye movements Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75796, 2013. @article{Sheridan2013, In a wide range of problem-solving settings, the presence of a familiar solution can block the discovery of better solutions (i.e., the Einstellung effect). To investigate this effect, we monitored the eye movements of expert and novice chess players while they solved chess problems that contained a familiar move (i.e., the Einstellung move), as well as an optimal move that was located in a different region of the board. When the Einstellung move was an advantageous (but suboptimal) move, both the expert and novice chess players who chose the Einstellung move continued to look at this move throughout the trial, whereas the subset of expert players who chose the optimal move were able to gradually disengage their attention from the Einstellung move. However, when the Einstellung move was a blunder, all of the experts and the majority of the novices were able to avoid selecting the Einstellung move, and both the experts and novices gradually disengaged their attention from the Einstellung move. These findings shed light on the boundary conditions of the Einstellung effect, and provide convergent evidence for Bilalić, McLeod, & Gobet (2008)'s conclusion that the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution. |
Michael Stengel; Martin Eisemann; Stephan Wenger; Benjamin Hell; Marcus Magnor Optimizing apparent display resolution enhancement for arbitrary videos Journal Article In: IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3604–3613, 2013. @article{Stengel2013, Display resolution is frequently exceeded by available image resolution. Recently, apparent display resolution enhancement (ADRE) techniques show how characteristics of the human visual system can be exploited to provide super-resolution on high refresh rate displays. In this paper, we address the problem of generalizing the ADRE technique to conventional videos of arbitrary content. We propose an optimization-based approach to continuously translate the video frames in such a way that the added motion enables apparent resolution enhancement for the salient image region. The optimization considers the optimal velocity, smoothness, and similarity to compute an appropriate trajectory. In addition, we provide an intuitive user interface that allows the user to guide the algorithm interactively and preserves important compositions within the video. We present a user study evaluating apparent rendering quality and show the versatility of our method on a variety of general test scenes. |
Feng-Yi Tseng; Chin-Jung Chao; Wen-Yang Feng; Sheue-Ling Hwang Effects of display modality on critical battlefield e-map search performance Journal Article In: Behaviour & Information Technology, vol. 32, no. 9, pp. 888–901, 2013. @article{Tseng2013, Visual search performance in visual display terminals can be affected by several changeable display parameters, such as screen dimensions, target size and background clutter. We found that when operators were under time pressure to execute critical battlefield map searches in a control room, efficient visual search became more important. We investigated visual search performance in a simulated radar interface, which included the warrior symbology. Thirty-six participants were recruited and a three-factor mixed design was used in which the independent variables were three screen dimensions (7, 15 and 21 in.), five icon sizes (visual angle 40, 50, 60, 70 and 80 min of arc) and two map background clutter types (topography displayed [TD] and topography not displayed [TND]). The five dependent variables were completion time, accuracy, fixation duration, fixation count and saccade amplitude. The results showed that the best icon sizes were 80 and 70 min. The 21 in. screen dimension was chosen as the superior screen for search tasks. The TND map background, with less clutter, produced higher accuracy than the TD background with clutter. The results of this research can be used in control room design to promote operators' visual search performance. |
Lisa Stockhausen; Sara Koeser; Sabine Sczesny The gender typicality of faces and its impact on visual processing and on hiring decisions Journal Article In: Experimental Psychology, vol. 60, no. 6, pp. 444–452, 2013. @article{Stockhausen2013, Past research has shown that the gender typicality of applicants' faces affects leadership selection irrespective of a candidate's gender: A masculine facial appearance is congruent with masculine-typed leadership roles, thus masculine-looking applicants are hired more certainly than feminine-looking ones. In the present study, we extended this line of research by investigating hiring decisions for both masculine- and feminine-typed professional roles. Furthermore, we used eye tracking to examine the visual exploration of applicants' portraits. Our results indicate that masculine-looking applicants were favored for the masculine-typed role (leader) and feminine-looking applicants for the feminine-typed role (team member). Eye movement patterns showed that information about gender category and facial appearance was integrated during first fixations of the portraits. Hiring decisions, however, were not based on this initial analysis, but occurred at a second stage, when the portrait was viewed in the context of considering the applicant for a specific job. |
Robin Walker An iPad app as a low-vision aid for people with macular disease Journal Article In: British Journal of Ophthalmology, vol. 97, no. 1, pp. 110–112, 2013. @article{Walker2013, Age-related macular degeneration (AMD) is the single most common cause of vision loss in people over the age of 50. Individuals with low vision caused by macular disease experience severe difficulty with everyday tasks such as reading, which has profound detrimental consequences for their quality of life. We have developed an app for the iPad (the MD evReader) that aims to improve reading (of electronic books) by enhancing the effectiveness of the eccentric viewing technique (EV) using dynamic text presentation. Eccentric viewing is a simple strategy adopted by individuals with AMD that involves using the relatively preserved peripheral region of their retina in order to see. A limiting factor of the EV technique is that it relies on the individual holding their gaze away from the focus of interest and suppressing the natural and strong tendency to make eye movements (saccades). During normal reading, for example, a stereotypical pattern of horizontal saccades is made, from left to right, enabling fixations to be made on each word. The natural inclination to make saccades is, however, difficult to suppress and limits the effectiveness of eccentric viewing in people with macular disease. |
Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B. Jonas Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis Journal Article In: British Journal of Ophthalmology, vol. 97, no. 7, pp. 941–942, 2013. @article{Zhang2013a, With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) viewing versus two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue. |
Li Zhang; Ya-Qin Zhang; Jing-Shang Zhang; Liang Xu; Jost B. Jonas Visual fatigue and discomfort after stereoscopic display viewing Journal Article In: Acta Ophthalmologica, vol. 91, no. 2, pp. 149–153, 2013. @article{Zhang2013b, Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that either used the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology using shutter glasses.; Methods: During this observational cross-over study performed at two subsequent days, a video was watched by 30 subjects (age: 20-30 years). Half of the participants watched the screen with a pattern retard three-dimensional display at the first day and a shutter glasses three-dimensional display at the second day, and reverse. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after viewing the movie. Additionally, a subjective score for visual fatigue was given.; Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recover point (p = 0.001), negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02) after viewing the shutter glasses three-dimensional display than after viewing the pattern retard three-dimensional display.; Conclusions: Stereoscopic viewing using pattern retard (polarized) three-dimensional displays as compared with stereoscopic viewing using shutter glasses three-dimensional displays resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively. |
Chenjiang Xie; Tong Zhu; Chunlin Guo; Yimin Zhang Measuring IVIS impact to driver by on-road test and simulator experiment Journal Article In: Procedia Social and Behavioral Sciences, vol. 96, pp. 1566–1577, 2013. @article{Xie2013, This work examined the effects of using in-vehicle information systems (IVIS) on drivers in an on-road test and a simulator experiment. Twelve participants took part in the test. In the on-road test, drivers performed a driving task with voice-prompt and non-voice-prompt navigation devices mounted in different positions. In the simulator experiment, secondary tasks, including cognitive, visual and manual tasks, were performed in a driving simulator. Subjective ratings were used to assess drivers' mental workload in both the on-road test and the simulator experiment. The impact of task complexity and reaction mode was also investigated. The results of the test and the simulation showed that position 1 was more comfortable for drivers than the other two positions and caused less mental load. Drivers' subjective ratings supported this result. An IVIS with voice prompts places less visual demand on drivers. Mental load grows as task difficulty increases. A cognitive task requiring a manual reaction causes higher mental load than a cognitive task that does not. These results may have practical implications for in-vehicle information system design. |
D. A. Baker; N. J. Schweitzer; Evan F. Risko; Jillian M. Ware Visual attention and the neuroimage bias Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e74449, 2013. @article{Baker2013, Several highly-cited experiments have presented evidence suggesting that neuroimages may unduly bias laypeople's judgments of scientific research. This finding has been especially worrisome to the legal community in which neuroimage techniques may be used to produce evidence of a person's mental state. However, a more recent body of work has looked directly at the independent impact of neuroimages on layperson decision-making (both in legal and more general arenas) and has failed to find evidence of bias. To help resolve these conflicting findings, this research uses eye tracking technology to provide a measure of attention to different visual representations of neuroscientific data. Finding an effect of neuroimages on the distribution of attention would provide a potential mechanism for the influence of neuroimages on higher-level decisions. In the present experiment, a sample of laypeople viewed a vignette that briefly described a court case in which the defendant's actions might have been explained by a neurological defect. Accompanying these vignettes was either an MRI image of the defendant's brain, or a bar graph depicting levels of brain activity: two competing visualizations that have been the focus of much of the previous research on the neuroimage bias. We found that, while laypeople differentially attended to neuroimagery relative to the bar graph, this did not translate into differential judgments in a way that would support the idea of a neuroimage bias. |
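The attention measure at issue here reduces, at its core, to dwell time within areas of interest (AOIs). The sketch below computes dwell-time proportions per AOI; the AOI names and rectangles are hypothetical and not taken from the study.

```python
# Sketch: proportion of dwell time in each area of interest (AOI), the basic
# quantity behind "differential attention" comparisons such as the neuroimage
# vs. bar-graph contrast. AOI rectangles here are hypothetical.
def dwell_proportions(fixations, aois):
    """fixations: list of (x, y, duration); aois: dict name -> (x0, y0, x1, y1)."""
    totals = {name: 0.0 for name in aois}
    grand_total = 0.0
    for x, y, dur in fixations:
        grand_total += dur
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
    return {name: (t / grand_total if grand_total else 0.0) for name, t in totals.items()}

aois = {"vignette_text": (0, 0, 600, 768), "brain_image": (620, 100, 1000, 500)}
fixations = [(300, 400, 250), (700, 300, 400), (710, 320, 350), (200, 600, 300)]
print(dwell_proportions(fixations, aois))  # e.g. {'vignette_text': 0.42, 'brain_image': 0.58}
```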
Ana Margarida Barreto Do users look at banner ads on Facebook? Journal Article In: Journal of Research in Interactive Marketing, vol. 7, no. 2, pp. 119–139, 2013. @article{Barreto2013, Purpose – The main purpose of this study was to determine whether users of the online social network site, Facebook, actually look at the ads displayed (briefly, to test the existence of the phenomenon known as "banner blindness" on this website), thus ascertaining the effectiveness of paid advertising, and comparing it with the number of friends' recommendations seen. Design/methodology/approach – In order to achieve this goal, an experiment using eye-tracking technology was administered to a total of 20 participants from a major university in the USA, followed by a questionnaire. Findings – Findings show that online ads attract lower levels of attention than friends' recommendations. A possible explanation for this phenomenon may be related to the fact that ads on Facebook are outside of the F-shaped visual pattern range, causing a state of "banner blindness". Results also show that statistically there is no difference in ads seen and clicked between women and men. Research limitations/implications – The sample type (undergraduate and graduate students) and the sample size (20 participants) inhibit the generalization of the findings to other populations. Practical implications – The paper includes implications for the development of an effective online advertising campaign, as well as some proposed conceptualizations of the terms social network site and advertising, which can be used as platforms for discussion or as standards for future definitions. Originality/value – This study fulfils some identified needs to study advertising effectiveness based on empirical data and to assess banner blindness in other contexts, representative of current internet users' habits. |
Raymond Bertram; Laura Helle; Johanna K. Kaakinen; Erkki Svedström The effect of expertise on eye movement behaviour in medical image perception Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e66169, 2013. @article{Bertram2013, The present eye-movement study assessed the effect of expertise on eye-movement behaviour during image perception in the medical domain. To this end, radiologists, computed-tomography radiographers and psychology students were exposed to nine volumes of multi-slice, stack-view, axial computed-tomography images from the upper to the lower part of the abdomen with or without abnormality. The images were presented in succession at low, medium or high speed, while the participants had to detect enlarged lymph nodes or other visually more salient abnormalities. The radiologists outperformed both other groups in the detection of enlarged lymph nodes and their eye-movement behaviour also differed from the other groups. Their general strategy was to use saccades of shorter amplitude than the two other participant groups. In the presence of enlarged lymph nodes, they increased the number of fixations on the relevant areas and reverted to even shorter saccades. In volumes containing enlarged lymph nodes, radiologists' fixation durations were longer in comparison to their fixation durations in volumes without enlarged lymph nodes. More salient abnormalities were detected equally well by radiologists and radiographers, with both groups outperforming psychology students. However, to accomplish this, radiologists actually needed fewer fixations on the relevant areas than the radiographers. On the basis of these results, we argue that expert behaviour is manifested in distinct eye-movement patterns of proactivity, reactivity and suppression, depending on the nature of the task and the presence of abnormalities at any given moment. |
Leandro Luigi Di Stasi; Adoración Antolí; José J. Cañas Evaluating mental workload while interacting with computer-generated artificial environments Journal Article In: Entertainment Computing, vol. 4, no. 1, pp. 63–69, 2013. @article{DiStasi2013a, The need to evaluate user behaviour and cognitive efforts when interacting with complex simulations plays a crucial role in many information and communications technologies. The aim of this paper is to propose the use of eye-related measures as indices of mental workload in complex tasks. An experiment was conducted using the FireChief® microworld in which user mental workload was manipulated by changing the interaction strategy required to perform a common task. There were significant effects of the attentional state of users on visual scanning behavior. Longer fixations were found for the more demanding strategy, slower saccades were found as the time-on-task increased, and pupil diameter decreased when an environmental change was introduced. Questionnaire and performance data converged with the psychophysiological ones. These results provide additional empirical support for the ability of some eye-related indices to discriminate variations in the attentional state of the user in visual-dynamic complex tasks and show their potential diagnostic capacity in the field of applied ergonomics. |
Leandro Luigi Di Stasi; Andrés Catena; José J. Cañas; Stephen L. Macknik; Susana Martinez-Conde Saccadic velocity as an arousal index in naturalistic tasks Journal Article In: Neuroscience and Biobehavioral Reviews, vol. 37, no. 5, pp. 968–975, 2013. @article{DiStasi2013b, Experimental evidence indicates that saccadic metrics vary with task difficulty and time-on-task in naturalistic scenarios. We explore historical and recent findings on the correlation of saccadic velocity with task parameters in clinical, military, and everyday situations, and its potential role in ergonomics. We moreover discuss the hypothesis that changes in saccadic velocity indicate variations in sympathetic nervous system activation; that is, variations in arousal. |
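As a rough illustration of the quantity reviewed above, the sketch below computes peak saccadic velocity from a position trace. It is an unfiltered toy calculation on simulated data; real pipelines (including the EyeLink parser) use calibrated, filtered velocity estimates, so treat this only as a schematic.

```python
# Sketch: peak saccadic velocity (deg/s) from a position trace, the raw quantity
# behind saccadic-velocity arousal/fatigue indices. Unfiltered toy version.
import numpy as np

def peak_saccadic_velocity(x_deg, y_deg, fs_hz, onset, offset):
    """Max 2-D velocity between sample indices [onset, offset) of one saccade."""
    vx = np.gradient(np.asarray(x_deg, float)) * fs_hz
    vy = np.gradient(np.asarray(y_deg, float)) * fs_hz
    speed = np.hypot(vx, vy)
    return speed[onset:offset].max()

# A 10-degree rightward saccade simulated at 1000 Hz with a sigmoid position profile
fs = 1000
t = np.arange(60) / fs
x = 10 / (1 + np.exp(-(t - 0.03) * 300))
y = np.zeros_like(x)
print(round(peak_saccadic_velocity(x, y, fs, 0, 60)))  # several hundred deg/s
```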
Trafton Drew; Melissa L. -H. Võ; Alex Olwal; Francine Jacobson; Steven E. Seltzer; Jeremy M. Wolfe Scanners and drillers: Characterizing expert visual search through volumetric images Journal Article In: Journal of Vision, vol. 13, no. 10, pp. 1–13, 2013. @article{Drew2013, Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. |
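A crude way to make the driller/scanner distinction concrete is to measure how much of the image plane a reader's gaze covers while scrolling. The heuristic and threshold below are illustrative assumptions, not the authors' classification procedure, which also takes scroll behavior through depth into account.

```python
# Sketch: a crude "driller vs. scanner" heuristic from gaze samples in the image
# plane: drillers confine gaze to a small 2-D region (while scrolling in depth),
# scanners cover each slice widely. The coverage threshold is illustrative only.
import numpy as np

def classify_search_style(x, y, image_size=(1024, 768), coverage_threshold=0.25):
    """'driller' if the gaze bounding box covers a small fraction of the slice."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    bbox_area = (x.max() - x.min()) * (y.max() - y.min())
    coverage = bbox_area / (image_size[0] * image_size[1])
    return "driller" if coverage < coverage_threshold else "scanner"

rng = np.random.default_rng(1)
driller_x, driller_y = 500 + rng.normal(0, 15, 400), 400 + rng.normal(0, 15, 400)
scanner_x, scanner_y = rng.uniform(0, 1024, 400), rng.uniform(0, 768, 400)
print(classify_search_style(driller_x, driller_y))  # 'driller'
print(classify_search_style(scanner_x, scanner_y))  # 'scanner'
```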
Hayward J. Godwin; Stuart Hyde; Dominic Taunton; James Calver; James I. R. Blake; Simon P. Liversedge The influence of expertise on maritime driving behaviour Journal Article In: Applied Cognitive Psychology, vol. 27, no. 4, pp. 483–492, 2013. @article{Godwin2013a, We compared expert and novice behaviour in a group of participants as they engaged in a simulated maritime driving task. We varied the difficulty of the driving task by controlling the severity of the sea state in which they were driving their craft. Increases in sea severity increased the size of the upcoming waves while also increasing the length of the waves. Expert participants drove their craft at a higher speed than novices and decreased their fixation durations as wave severity increased. Furthermore, the expert participants increased the horizontal spread of their fixation positions as wave severity increased to a greater degree than novices. Conversely, novice participants showed evidence of a greater vertical spread of fixations than experts. By connecting our findings with previous research investigating eye movement behaviour and road driving, we suggest that novice or inexperienced drivers show inflexibility in adaptation to changing driving conditions. |
David J. Hancock; Diane M. Ste-Marie Gaze behaviors and decision making accuracy of higher- and lower-level ice hockey referees Journal Article In: Psychology of Sport & Exercise, vol. 14, no. 1, pp. 66–71, 2013. @article{Hancock2013, Background: Gaze behaviors are often studied in athletes, but infrequently for sport officials. There is a need to better understand gaze behavior in refereeing in order to improve training and education related to visual search patterns, which have been argued to be related to decision making (Abernethy & Russell, 1987a). Objective: To examine gaze behaviors, decision accuracy, and decision sensitivity (using signal detection analysis) of ice hockey referees of varying skill levels in a laboratory setting. Design: Using an experimental design, we conducted multiple t-tests. Method: Higher-level (N = 15) and lower-level ice hockey referees (N = 15) wore a head-mounted eye movement recorder and made penalty/no penalty decisions related to ice hockey video clips on a computer screen. We recorded gaze behaviors, decision accuracy, and decision sensitivity for each participant. Results: Results of the t-tests indicated no group differences in gaze behaviors; however, higher-level referees made significantly more accurate decisions (both accuracy and sensitivity) than lower-level referees. Conclusion: Higher-level ice hockey referees are superior to lower-level referees on decision making, but referees do not differ on gaze behaviors. Possibly, higher-level referees process relevant decision making information more effectively. |
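The signal detection analysis mentioned above reduces, at its core, to computing sensitivity (d') and criterion from hit and false-alarm rates. A minimal sketch follows; the log-linear correction for extreme rates is one common convention and not necessarily the one used in the paper.

```python
# Sketch: decision sensitivity (d') and criterion from hits and false alarms,
# the signal-detection quantities reported for referees' penalty decisions.
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity and criterion with a log-linear correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Example: a referee judging 60 clips (30 penalties, 30 non-penalties)
d, c = dprime(hits=24, misses=6, false_alarms=8, correct_rejections=22)
print(round(d, 2), round(c, 2))
```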
Alistair J. Harvey; Wendy Kneller; Alison C. Campbell The effects of alcohol intoxication on attention and memory for visual scenes. Journal Article In: Memory, vol. 21, no. 8, pp. 969–980, 2013. @article{Harvey2013, This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes. |
Corey D. Holland; Oleg V. Komogortsev Complex eye movement pattern biometrics: The effects of environment and stimulus Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, pp. 2115–2126, 2013. @article{Holland2013, This paper presents an objective evaluation of the effects of eye tracking specification and stimulus presentation on the biometric viability of complex eye movement patterns. Six spatial accuracy tiers (0.5°, 1.0°, 1.5°, 2.0°, 2.5°, 3.0°), six temporal resolution tiers (1000, 500, 250, 120, 75, 30 Hz), and five stimulus types (simple, complex, cognitive, textual, random) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment capable of at least 0.5° spatial accuracy and 250 Hz temporal resolution for biometric purposes, whereas stimulus had little effect on the biometric viability of eye movements. |
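One way to picture the spatial-accuracy and sampling-rate tiers is to degrade a high-quality recording and re-run a biometric pipeline on the result. The sketch below simulates such degradation by decimation plus Gaussian spatial noise; this degradation model is an assumption for illustration, not necessarily the procedure used by Holland and Komogortsev.

```python
# Sketch: degrade a high-quality gaze recording to simulate lower spatial-accuracy
# and temporal-resolution tiers. Uniform decimation + Gaussian spatial noise is an
# assumed degradation model, not necessarily the paper's procedure.
import numpy as np

def degrade_recording(x_deg, y_deg, fs_in=1000, fs_out=250, spatial_error_deg=0.5, seed=0):
    """Decimate to fs_out and add zero-mean Gaussian noise scaled to the accuracy tier."""
    step = int(round(fs_in / fs_out))
    rng = np.random.default_rng(seed)
    x_low = np.asarray(x_deg, float)[::step]
    y_low = np.asarray(y_deg, float)[::step]
    x_low = x_low + rng.normal(0, spatial_error_deg, x_low.size)
    y_low = y_low + rng.normal(0, spatial_error_deg, y_low.size)
    return x_low, y_low

x = np.linspace(0, 10, 1000)   # 1 s of 1000 Hz data sweeping 10 degrees
y = np.zeros_like(x)
x30, y30 = degrade_recording(x, y, fs_out=30, spatial_error_deg=1.0)
print(x30.size)  # ~31 samples left at the 30 Hz tier
```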
Olivia M. Maynard; Marcus R. Munafò; Ute Leonards Visual attention to health warnings on plain tobacco packaging in adolescent smokers and non-smokers Journal Article In: Addiction, vol. 108, no. 2, pp. 413–419, 2013. @article{Maynard2013, AIMS: Previous research with adults indicates that plain packaging increases visual attention to health warnings in adult non-smokers and weekly smokers, but not daily smokers. The present research extends this study to adolescents aged 14-19 years. DESIGN: Mixed-model experimental design, with smoking status as a between-subjects factor and pack type (branded or plain pack) and eye gaze location (health warning or branding) as within-subjects factors. SETTING: Three secondary schools in Bristol, UK. PARTICIPANTS: A convenience sample of adolescents comprising never-smokers (n = 26), experimenters (n = 34), weekly smokers (n = 13) and daily smokers (n = 14). MEASUREMENTS: Number of eye movements to health warnings and branding on plain and branded packs. FINDINGS: Analysis of variance revealed that, irrespective of smoking status, there were more eye movements to health warnings than to branding on plain packs, but an equal number of eye movements to both regions on branded packs (P = 0.033). This was observed among experimenters (P < 0.001) and weekly smokers (P = 0.047), but not among never-smokers or daily smokers. CONCLUSION: Among experimenters and weekly smokers, plain packaging increases visual attention to health warnings and away from branding. Daily smokers, even relatively early in their smoking careers, seem to avoid the health warnings on cigarette packs. Adolescent never-smokers attend the health warnings preferentially on both types of packs, a finding which may reflect their decision not to smoke. |