EyeLink Usability / Applied Publications
All EyeLink usability and applied research publications through 2022 (with some early 2023 publications) are listed below by year. You can search the publications using keywords such as Driving, Sport, Workload, etc. You can also search for individual author names. If we missed any EyeLink usability or applied article, please email us!
Lauren H. Williams; Trafton Drew
Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions? Journal Article
In: Cognitive Research: Principles and Implications, vol. 2, no. 1, pp. 12, 2017.
Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. In half of the cases, search was interrupted by a series of true-or-false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions are discussed in the context of these results.
Julia A. Wolfson; Dan J. Graham; Sara N. Bleich
Attention to physical activity–equivalent calorie information on nutrition facts labels: An eye-tracking investigation Journal Article
In: Journal of Nutrition Education and Behavior, vol. 49, no. 1, pp. 35–42.e1, 2017.
Objective Investigate attention to Nutrition Facts Labels (NFLs) with numeric-only vs. both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting Participants came to the Behavioral Medicine Lab at Colorado State University in spring 2015. Participants The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s) Attention to and attitudes about activity-equivalent calorie information. Analysis Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Conclusions and Implications Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions.
Aiping Xiong; Robert W. Proctor; Weining Yang; Ninghui Li
Is domain highlighting actually helpful in identifying phishing web pages? Journal Article
In: Human Factors, vol. 59, no. 4, pp. 640–660, 2017.
OBJECTIVE: To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. BACKGROUND: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. METHOD: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. RESULTS: Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. CONCLUSION: Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. APPLICATION: Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.
Ying Yan; Xiaofei Wang; Ludan Shi; Haoxue Liu
Influence of light zones on drivers' visual fixation characteristics and traffic safety in extra-long tunnels Journal Article
In: Traffic Injury Prevention, vol. 18, no. 1, pp. 102–110, 2017.
OBJECTIVE: The special light zone is a new illumination technique that promises to improve the visual environment and traffic safety in extra-long tunnels. The purpose of this study is to identify how light zones affect the dynamic visual characteristics and information perception of drivers as they pass through extra-long tunnels on highways. METHODS: Thirty-two subjects were recruited for this study, and fixation data were recorded using eye movement tracking devices. A back-propagation artificial neural network was employed to predict and analyze the influence of special light zones on the variations in the fixation duration and pupil area of drivers. The coordinates of focus points in different light zones were clustered into distinct visual fixation regions using dynamic cluster theory. RESULTS: The findings of this study indicated that the special light zones influenced fixation duration and pupil area differently than other tunnel sections. Drivers gradually changed their fixation points from a scattered pattern to a narrow, zonal distribution that mainly focused on the main visual area at the center, the road just ahead, and the right side of the main visual area while approaching the special light zones. The results also showed that the variation in illumination and landscape in light zones was more important than driving experience in producing changes in visual cognition and driving behavior. CONCLUSIONS: The special light zones can help relieve drivers' visual fatigue to some extent and provide visual stimuli that enhance drivers' attention. The study provides a scientific basis for implementing safety measures in extra-long tunnels.
Thomas Zawisza; Ray Garza
Using an eye tracking device to assess vulnerabilities to burglary Journal Article
In: Journal of Police and Criminal Psychology, vol. 32, no. 3, pp. 203–213, 2017.
This research examines the extent to which visual cues influence a person's decision to burglarize. Participants in this study (n = 65) viewed ten houses through an eye tracking device and were asked whether or not they thought each house was vulnerable to burglary. The eye tracking device recorded where a person looked and for how long they looked (in milliseconds). Our findings showed that windows and doors were two of the most important visual stimuli. Results from our follow-up questionnaire revealed that stimuli such as fencing, beware of pet signs, cars in driveways, and alarm systems are also considered. There are a number of implications for future research and policy.
John-Ross Rizzo; Todd E. Hudson; Weiwei Dai; Ninad Desai; Arash Yousefi; Dhaval Palsana; Ivan Selesnick; Laura J. Balcer; Steven L. Galetta; Janet C. Rucker
Objectifying eye movements during rapid number naming: Methodology for assessment of normative data for the King-Devick test Journal Article
In: Journal of the Neurological Sciences, vol. 362, pp. 232–239, 2016.
Objective Concussion is a major public health problem and considerable efforts are focused on sideline-based diagnostic testing to guide return-to-play decision-making and clinical care. The King-Devick (K-D) test, a sensitive sideline performance measure for concussion detection, reveals slowed reading times in acutely concussed subjects, as compared to healthy controls; however, the normal behavior of eye movements during the task and deficits underlying the slowing have not been defined. Methods Twelve healthy control subjects underwent quantitative eye tracking during digitized K-D testing. Results The total K-D reading time was 51.24 (± 9.7) seconds. A total of 145 saccades (± 15) per subject were generated, with average peak velocity 299.5°/s and average amplitude 8.2°. The average inter-saccadic interval was 248.4 ms. Task-specific horizontal and oblique saccades per subject numbered, respectively, 102 (± 10) and 17 (± 4). Subjects with the fewest saccades tended to blink more, resulting in a larger amount of missing data, whereas subjects with the most saccades tended to make extra saccades during line transitions. Conclusions Establishment of normal and objective ocular motor behavior during the K-D test is a critical first step towards defining the range of deficits underlying abnormal testing in concussion. Further, it sets the groundwork for exploration of K-D correlations with cognitive dysfunction and saccadic paradigms that may reflect specific neuroanatomic deficits in the concussed brain.
Donghyun Ryu; David L. Mann; Bruce Abernethy; Jamie M. Poolton
Gaze-contingent training enhances perceptual skill acquisition Journal Article
In: Journal of Vision, vol. 16, no. 2, pp. 1–21, 2016.
The purpose of this study was to determine whether decision-making skill in perceptual-cognitive tasks could be enhanced using a training technique that impaired selective areas of the visual field. Recreational basketball players performed perceptual training over 3 days while viewing with a gaze-contingent manipulation that displayed either (a) a moving window (clear central and blurred peripheral vision), (b) a moving mask (blurred central and clear peripheral vision), or (c) full (unrestricted) vision. During the training, participants watched video clips of basketball play and at the conclusion of each clip decided to which teammate the player in possession of the ball should pass. A further control group watched unrelated videos with full vision. The effects of training were assessed using separate tests of decision-making skill conducted in a pretest, posttest, and 2-week retention test. The accuracy of decision making was greater in the posttest than in the pretest for all three intervention groups when compared with the control group. Remarkably, training with blurred peripheral vision resulted in a further improvement in performance from posttest to retention test that was not apparent for the other groups. The type of training had no measurable impact on the visual search strategies of the participants, and so the training improvements appear to be grounded in changes in information pickup. The findings show that learning with impaired peripheral vision offers a promising form of training to support improvements in perceptual skill.
Sameer Saproo; Victor Shih; David C. Jangraw; Paul Sajda
Neural mechanisms underlying catastrophic failure in human-machine interaction during aerial navigation Journal Article
In: Journal of Neural Engineering, vol. 13, pp. 1–12, 2016.
Objective. We investigated the neural correlates of workload buildup in a fine visuomotor task called the boundary avoidance task (BAT). The BAT has been known to induce naturally occurring failures of human–machine coupling in high performance aircraft that can potentially lead to a crash; these failures are termed pilot induced oscillations (PIOs). Approach. We recorded EEG and pupillometry data from human subjects engaged in a flight BAT simulated within a virtual 3D environment. Main results. We find that workload buildup in a BAT can be successfully decoded from oscillatory features in the electroencephalogram (EEG). Information in delta, theta, alpha, beta, and gamma spectral bands of the EEG all contribute to successful decoding; however, gamma band activity with a lateralized somatosensory topography has the highest contribution, while theta band activity with a fronto-central topography has the most robust contribution in terms of real-world usability. We show that the output of the spectral decoder can be used to predict PIO susceptibility. We also find that workload buildup in the task induces pupil dilation, the magnitude of which is significantly correlated with the magnitude of the decoded EEG signals. These results suggest that PIOs may result from the dysregulation of cortical networks such as the locus coeruleus (LC)–anterior cingulate cortex (ACC) circuit. Significance. Our findings may generalize to similar control failures in other cases of tight man–machine coupling where gains and latencies in the control system must be inferred and compensated for by the human operators. A closed-loop intervention using neurophysiological decoding of workload buildup that targets the LC-ACC circuit may positively impact operator performance in such situations.
Graham G. Scott; Christopher J. Hand
Motivation determines Facebook viewing strategy: An eye movement analysis Journal Article
In: Computers in Human Behavior, vol. 56, pp. 267–280, 2016.
Individuals' Social Networking Site (SNS) profiles are central to online impression formation. Distinct profile elements (e.g., Profile Picture) experimentally manipulated in isolation can alter perception of profile owners, but it is not known which elements are focused on and attributed most importance when profiles are viewed naturally. The current study recorded the eye movement behaviour of 70 participants who viewed experimenter-generated Facebook timelines of male and female targets carefully controlled for content. Participants were instructed to process the targets either as potential friends or as potential employees. Target timelines were delineated into Regions of Interest (RoIs) prior to data collection. We found pronounced effects of target gender, viewer motivation and interactions between these factors on processing. Global processing patterns differed based on whether a 'social' or a 'professional' viewing motivation was used. Both patterns were distinct from the 'F'-shaped patterns observed in previous research. When viewing potential employees, viewers focused on the text content of timelines; when viewing potential friends, image content was more important. Viewing patterns provide insight into the characteristics and abilities of targets most valued by viewers with distinct motivations. These results can inform future research, and allow new perspectives on previous findings.
Sergei L. Shishkin; Yuri O. Nuzhdin; Evgeny P. Svirin; Alexander G. Trofimov; Anastasia A. Fedorova; Bogdan L. Kozyrskiy; Boris M. Velichkovsky
EEG negativity in fixations used for gaze-based control: Toward converting intentions into actions with an eye-brain-computer interface Journal Article
In: Frontiers in Neuroscience, vol. 10, pp. 528, 2016.
We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to the stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms segments free from electrooculogram contamination (200–500 ms relative to fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained with a shrinkage LDA classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between intended and spontaneous fixations.
If this direction of BCI development proves successful, such a multimodal interface may improve the fluency of interaction and could become the basis for a new input device for paralyzed and healthy users alike, the EBCI “Wish Mouse”.
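The specificity and sensitivity values reported above come from binary classification of intentional vs. spontaneous fixations. As a minimal illustrative sketch (the labels and predictions below are invented, not the study's data), both metrics can be computed from raw predictions like this:

```python
# Compute specificity and sensitivity for a binary classifier that labels
# fixations as "intentional" (positive class) or "spontaneous".
# All data here are invented for illustration.

def specificity_sensitivity(y_true, y_pred, positive="intentional"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    # specificity = true-negative rate, sensitivity = true-positive rate
    return tn / (tn + fp), tp / (tp + fn)

truth = ["intentional"] * 4 + ["spontaneous"] * 6
preds = ["intentional", "intentional", "spontaneous", "spontaneous",
         "spontaneous", "spontaneous", "spontaneous", "spontaneous",
         "spontaneous", "intentional"]
spec, sens = specificity_sensitivity(truth, preds)
print(spec, sens)
```

A high-specificity, moderate-sensitivity operating point like the one reported means the interface rarely triggers on spontaneous fixations at the cost of missing some intentional ones.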
Tarkeshwar Singh; Christopher M. Perry; Troy M. Herter
A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment Journal Article
In: Journal of NeuroEngineering and Rehabilitation, vol. 13, pp. 1–17, 2016.
BACKGROUND: Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS: Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS: The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
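The velocity-threshold idea at the core of the gaze-event classification above can be sketched as follows. The threshold value and the synthetic samples are illustrative assumptions, and the paper's geometric correction for vergence in the transverse plane is not reproduced here.

```python
# Minimal velocity-threshold sketch for separating saccades from fixations.
# Threshold and sample data are illustrative, not the authors' parameters.

def classify_gaze_events(angles_deg, timestamps_s, saccade_thresh_deg_s=100.0):
    """Label each inter-sample interval 'saccade' or 'fixation' by angular velocity."""
    labels = []
    for i in range(1, len(angles_deg)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        velocity = abs(angles_deg[i] - angles_deg[i - 1]) / dt  # deg/s
        labels.append("saccade" if velocity > saccade_thresh_deg_s else "fixation")
    return labels

# Synthetic trace: slow drift (fixation), a rapid 3-degree jump (saccade), then stable.
ts = [i * 0.002 for i in range(6)]           # 500 Hz sampling
ang = [0.00, 0.01, 0.02, 1.50, 3.00, 3.01]   # gaze angle in degrees
print(classify_gaze_events(ang, ts))
```

In practice, the thresholds would be derived from the computed ocular kinematics rather than fixed by hand, and smooth pursuit requires an intermediate velocity band.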
Mathew Stange; Amanda Barry; Jolene Smyth; Kristen Olson
Effects of smiley face scales on visual processing of satisfaction questions in web surveys Journal Article
In: Social Science Computer Review, vol. 36, no. 6, pp. 756–766, 2016.
Web surveys permit researchers to use graphic or symbolic elements alongside the text of response options to help respondents process the categories. Smiley faces are one example used to communicate positive and negative domains. How respondents visually process these smiley faces, including whether they detract from the question's text, is understudied. We report the results of two eye-tracking experiments in which satisfaction questions were asked with and without smiley faces. Respondents to the questions with smiley faces spent less time reading the question stem and response option text than respondents to the questions without smiley faces, but the response distributions did not differ by version. We also find evidence that lower-literacy respondents rely on the smiley faces more than higher-literacy respondents do.
John Sustersic; Brad Wyble; Siddharth Advani; Vijaykrishnan Narayanan
Towards a unified multiresolution vision model for autonomous ground robots Journal Article
In: Robotics and Autonomous Systems, vol. 75, pp. 221–232, 2016.
While remotely operated unmanned vehicles are increasingly a part of everyday life, truly autonomous robots capable of independent operation in dynamic environments have yet to be realized, particularly in the case of ground robots required to interact with humans and their environment. We present a unified multiresolution vision model for this application designed to provide the wide field of view required to maintain situational awareness and sufficient visual acuity to recognize elements of the environment while permitting feasible implementations in real-time vision applications. The model features color-constant processing through single-opponent color channels and contrast-invariant oriented edge detection using a novel implementation of the Combination of Receptive Fields model. The model provides color- and edge-based salience assessment, as well as a compressed color image representation suitable for subsequent object identification. We show that bottom-up visual saliency computed using this model is competitive with the current state of the art while allowing computation in a compressed domain and mimicking the human visual system, with nearly half (45%) of computational effort focused within the fovea. This method reduces the storage requirement of the image pyramid to less than 5% of the full image, and computation in this domain reduces model complexity in terms of both computational costs and memory requirements accordingly. We also quantitatively evaluate the model for its application domain by using it with a camera/lens system with a 185° field of view capturing 3.5-megapixel color images, using a tuned salience model to predict human fixations.
Vijay Vitthal Thitme; Akanksha Varghese
Image retrieval using vector of locally aggregated descriptors Journal Article
In: International Journal of Advance Research in Computer Science and Management Studies, vol. 4, no. 2, pp. 97–104, 2016.
Partial-duplicate image retrieval is a powerful and important task in real-world applications such as landmark search, copyright protection, and fake-image identification. Users continuously upload images to social sites such as Orkut and Facebook, and many of these images may be partial duplicates. A partial image is a segment of a whole image; common transformations include changes in scale, resolution, illumination, rotation, and viewpoint. Object-based image retrieval methods generally use the whole image as the query and, following text-retrieval systems, represent it with a bag of visual words (BOV). Because images typically contain substantial noise and such representations discard spatial information, these approaches do not scale well to large image datasets. State-of-the-art retrieval methods represent an image with a high-dimensional vector of visual words by quantizing local features, such as Scale-Invariant Feature Transform (SIFT) descriptors, solely in descriptor space. In this work, local features are quantized to visual words first in descriptor space and then in orientation space, and a Local Self-Similarity Descriptor (LSSD) is used to capture the internal geometric layout of locally self-similar regions near interest points.
Margarita Vinnikov; Robert S. Allison; Suzette Fernandes
Impact of depth of field simulation on visual fatigue: Who are impacted? and how? Journal Article
In: International Journal of Human-Computer Studies, vol. 91, pp. 37–51, 2016.
While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because stereoscopic content on displays viewed at a short distance has been associated with symptoms such as eye strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms experienced when viewing stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates the depth of field (DOF) associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short-duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all.
Andrej Vlasenko; Tadas Limba; Mindaugas Kiškis; Gintarė Gulevičiūtė
Research on human emotion while playing a computer game using pupil recognition technology. Journal Article
In: TEM Journal, vol. 5, no. 4, pp. 417–423, 2016.
The article presents the results of an experiment in which participants played an online game (poker) while a video camera recorded the diameters of their eye pupils. Diameter data and calculations were derived from these recordings with the aid of a computer program, and diagrams of the changes in the players' pupil diameters were then created as a function of the game situation. The study was conducted in a real-life setting, with the players playing online poker. The results point to a connection between changes in the psycho-emotional state of the players and changes in their pupil diameters, where the emotional state is a critical factor affecting the operation of such systems.
Xi Wang; Bin Cai; Yang Cao; Chen Zhou; Le Yang; Runzhong Liu; Xiaojing Long; Weicai Wang; Dingguo Gao; Baicheng Bao
Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study Journal Article
In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 150, no. 4, pp. 601–610, 2016.
Introduction Currently, few methods are available to measure orthodontic treatment need and treatment outcome from the lay perspective. The objective of this study was to explore the function of an eye-tracking method for evaluating orthodontic treatment need and treatment outcome from the lay perspective, as a novel and objective alternative to traditional assessments. Methods The scanpaths of 88 laypersons observing the repose and smiling photographs of normal subjects and pretreatment and posttreatment malocclusion patients were recorded by an eye-tracking device. The total fixation time and the first fixation time on the areas of interest (eyes, nose, and mouth) for each group of faces were compared and analyzed using mixed-effects linear regression and a support vector machine. The aesthetic component of the Index of Orthodontic Treatment Need was used to categorize treatment need and outcome levels to determine the accuracy of the support vector machine in identifying these variables. Results Significant deviations in the scanpaths of laypersons viewing pretreatment smiling faces were noted, with less fixation time (P <0.05) and later attention capture (P <0.05) on the eyes, and more fixation time (P <0.05) and earlier attention capture (P <0.05) on the mouth than for the scanpaths of laypersons viewing normal smiling subjects. The same results were obtained when comparing posttreatment smiling patients, with less fixation time (P <0.05) and later attention capture on the eyes (P <0.05), and more fixation time (P <0.05) and earlier attention capture on the mouth (P <0.05). The pretreatment repose faces exhibited an earlier attention capture on the mouth than did the normal subjects (P <0.05) and posttreatment patients (P <0.05).
Linear support vector machine classification showed accuracies of 97.2% and 93.4% in distinguishing pretreatment patients from normal subjects (treatment need), and pretreatment patients from posttreatment patients (treatment outcome), respectively. Conclusions The eye-tracking device was able to objectively quantify the effect of malocclusion on facial perception and the impact of orthodontic treatment on malocclusion from the lay perspective. The support vector machine for classification of selected features achieved high accuracy of judging treatment need and treatment outcome. This approach may represent a new method for objectively evaluating orthodontic treatment need and treatment outcome from the perspective of laypersons.
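The study's classifier maps fixation features (such as total fixation time on the eyes and mouth) to categories. The sketch below substitutes a simple nearest-centroid rule for the paper's linear support vector machine, with invented feature values, purely to illustrate the feature-to-label pipeline.

```python
# Nearest-centroid stand-in for the SVM: classify a face by which class's mean
# (eyes_ms, mouth_ms) fixation profile it is closest to. Data are invented.

def centroid(points):
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical total fixation times (eyes_ms, mouth_ms) per viewed face:
# normal faces draw gaze to the eyes, pretreatment faces to the mouth.
normal = [(900, 300), (850, 350), (950, 280)]
pretreatment = [(500, 700), (450, 750), (520, 680)]
cents = {"normal": centroid(normal), "pretreatment": centroid(pretreatment)}

print(classify((880, 320), cents))
print(classify((480, 720), cents))
```

A linear SVM would instead learn a maximum-margin separating hyperplane over the same features, but the input/output shape of the classification step is the same.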
Matthew B. Winn
Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants Journal Article
In: Trends in Hearing, vol. 20, 2016.
People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability.
Hosam Al-Samarraie; Samer Muthana Sarsam; Hans Guesgen
Predicting user preferences of environment design: A perceptual mechanism of user interface customisation Journal Article
In: Behaviour and Information Technology, vol. 35, no. 8, pp. 644–653, 2016.
It is well known that users vary in their preferences and needs. Therefore, it is crucial to provide customisation or personalisation for users in usage conditions that are associated with their preferences. Given the current limitations in adopting perceptual processing into user interface personalisation, we introduce the possibility of inferring interface design preferences from the user's eye-movement behaviour. We first captured users' preferences for graphic design elements using an eye tracker. We then related these preferences to regions of interest to build a prediction model for interface customisation. The prediction models built from eye-movement behaviour showed high potential for predicting users' preferences for interface design based on the parallel relation between their fixations and saccadic movements. This mechanism provides a novel way of customising user interface design and opens the door for new research in the areas of human–computer interaction and decision-making.
Joseph E. Barton; Anindo Roy; John D. Sorkin; Mark W. Rogers; Richard F. Macko
An engineering model of human balance control—Part I: Biomechanical model Journal Article
In: Journal of Biomechanical Engineering, vol. 138, no. 1, pp. 1–11, 2016.
We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS) and an older, high-fall-risk (HFR) subject before the latter participated in a balance training intervention, as well as in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system, and into system adaptations to changes in these elements' performance and capabilities.
How textbook design may influence learning with geography textbooks Journal Article
In: Nordidactica – Journal of Humanities and Social Science Education, vol. 1, pp. 38–62, 2016.
This paper investigates how textbook design may influence students' visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was utilised to precisely monitor students' eye movements while observing geography textbook spreads. In an exploratory study utilising random sampling, the eye movements of 20 students (secondary school students 15–17 years of age and university students 20–24 years of age) were recorded. The research entities were double-page spreads of current German geography textbooks covering an identical topic, taken from five separate textbooks. A two-stage test was developed. Each participant was given the task of first looking at the entire textbook spread to determine what was being explained on the pages. In the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results may verify textbook design as one crucial factor for successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to educational psychology insights and findings from visual communication and textbook analysis.
Palash Bera; Louis Philippe Sirois
Displaying background maps in business intelligence dashboards Journal Article
In: Iranian Journal of Psychiatry, vol. 18, no. 5, pp. 58–65, 2016.
Business data in geographic maps, called data maps, can be displayed via business intelligence dashboards. An important emerging feature is the use of background maps that overlap with existing data maps. Here, the authors examine the usefulness of background maps in dashboards and investigate how much cognitive effort users put in when they use dashboards with background maps as compared to dashboards without them. To test the extent of cognitive effort, the authors conducted an eye-tracking study in which users performed a decision-making task with maps in dashboards. In a separate study, users were asked directly about the mental effort required to perform tasks with the dashboards. Both studies identified that when users use background maps, they required less cognitive effort than users who use dashboards in which the information on the background map is represented in another form, such as a bar chart.
Raymond Bertram; Johanna K. Kaakinen; Frank Bensch; Laura Helle; Eila Lantto; Pekka Niemi; Nina Lundbom
Eye movements of radiologists reflect expertise in CT study interpretation: A potential tool to measure resident development Journal Article
In: Radiology, vol. 281, no. 3, pp. 805–815, 2016.
PURPOSE: To establish potential markers of visual expertise in eye movement (EM) patterns of early residents, advanced residents, and specialists who interpret abdominal computed tomography (CT) studies. MATERIAL AND METHODS: The institutional review board approved use of anonymized CT studies as research materials and to obtain anonymized eye-tracking data from volunteers. Participants gave written informed consent. Early residents (n = 15), advanced residents (n = 14), and specialists (n = 12) viewed 26 abdominal CT studies as a sequence of images at either 3 or 5 frames per second while EMs were recorded. Data were analyzed by using linear mixed-effects models. RESULTS: Early residents' detection rate decreased with working hours (odds ratio, 0.81; 95% confidence interval [CI]: 0.73, 0.91; P = .001). They detected fewer of the low-visual-contrast (but not of the high-visual-contrast) lesions (45% [13 of 29]) than did specialists (62% [18 of 29]) (odds ratio, 0.39; 95% CI: 0.25, 0.61; P < .001) or advanced residents (56% [16 of 29]) (odds ratio, 0.55; 95% CI: 0.33, 0.93; P = .024). Specialists and advanced residents had longer fixation durations at 5 than at 3 frames per second (specialists: b = .01; 95% CI: .004, .026; P = .008; advanced residents: b = .04; 95% CI: .03, .05; P < .001). In the presence of lesions, saccade lengths of specialists shortened more than those of advanced (b = .02; 95% CI: .007, .04; P = .003) and of early residents (b = .02; 95% CI: .008, .04; P = .003). Irrespective of expertise, high detection rate correlated with greater reduction of saccade length in the presence of lesions (b = −.10; 95% CI: −.16, −.04; P = .002) and greater increase at higher presentation speed (b = .11; 95% CI: .04, .17; P = .001). CONCLUSION: Expertise in CT reading is characterized by greater adaptivity in EM patterns in response to the demands of the task and environment.
Federica Bianchi; Sébastien Santurette; Dorothea Wendt; Torsten Dau
Pitch discrimination in musicians and non-musicians: Effects of harmonic resolvability and processing effort Journal Article
In: JARO - Journal of the Association for Research in Otolaryngology, vol. 17, no. 1, pp. 69–79, 2016.
Musicians typically show enhanced pitch discrimination abilities compared to non-musicians. The present study investigated this perceptual enhancement behaviorally and objectively for resolved and unresolved complex tones to clarify whether the enhanced performance in musicians can be ascribed to increased peripheral frequency selectivity and/or to a different processing effort in performing the task. In a first experiment, pitch discrimination thresholds were obtained for harmonic complex tones with fundamental frequencies (F0s) between 100 and 500 Hz, filtered in either a low- or a high-frequency region, leading to variations in the resolvability of audible harmonics. The results showed that pitch discrimination performance in musicians was enhanced for resolved and unresolved complexes to a similar extent. Additionally, the harmonics became resolved at a similar F0 in musicians and non-musicians, suggesting similar peripheral frequency selectivity in the two groups of listeners. In a follow-up experiment, listeners' pupil dilations were measured as an indicator of the required effort in performing the same pitch discrimination task for conditions of varying resolvability and task difficulty. Pupillometry responses indicated a lower processing effort in the musicians versus the non-musicians, although the processing demand imposed by the pitch discrimination task was individually adjusted according to the behavioral thresholds. Overall, these findings indicate that the enhanced pitch discrimination abilities in musicians are unlikely to be related to higher peripheral frequency selectivity and may suggest an enhanced pitch representation at more central stages of the auditory system in musically trained listeners.
Indu P. Bodala; Junhua Li; Nitish V. Thakor; Hasan Al-Nashash
EEG and eye tracking demonstrate vigilance enhancement with challenge integration Journal Article
In: Frontiers in Human Neuroscience, vol. 10, pp. 273, 2016.
Maintaining vigilance is possibly the first requirement for surveillance tasks, where personnel are faced with monotonous yet intensive monitoring. Decrements in vigilance in such situations could result in dangerous consequences such as accidents, loss of life, and system failure. In this paper, we investigate the possibility of enhancing vigilance, or sustained attention, using 'challenge integration', a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye-tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, used as measures of engagement and attention allocation, showed an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of the EEG also showed statistically significant suppression over the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity, and blink rate obtained from the eye-tracking data exhibited statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant eye-tracking and EEG measures, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement, along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time was lower for the challenge-integrated phase (RT mean = 3.65 ± 1.4 s) compared with the initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 s). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration.
Tom Bullock; James C. Elliott; John T. Serences; Barry Giesbrecht
Acute exercise modulates feature-selective responses in human cortex Journal Article
In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 605–618, 2016.
An organism's current behavioral state influences ongoing brain activity. Nonhuman mammalian and invertebrate brains exhibit large increases in the gain of feature-selective neural responses in sensory cortex during locomotion, suggesting that the visual system becomes more sensitive when actively exploring the environment. This raises the possibility that human vision is also more sensitive during active movement. To investigate this possibility, we used an inverted encoding model technique to estimate feature-selective neural response profiles from EEG data acquired from participants performing an orientation discrimination task. Participants (n = 18) fixated at the center of a flickering (15 Hz) circular grating presented at one of nine different orientations and monitored for a brief shift in orientation that occurred on every trial. Participants completed the task while seated on a stationary exercise bike at rest and during low- and high-intensity cycling. We found evidence for inverted-U effects, such that the peak of the reconstructed feature-selective tuning profiles was highest during low-intensity exercise compared with those estimated during rest and high-intensity exercise. When modeled, these effects were driven by changes in the gain of the tuning curve and in the profile bandwidth during low-intensity exercise relative to rest. Thus, despite profound differences in visual pathways across species, these data show that sensitivity in human visual cortex is also enhanced during locomotive behavior. Our results reveal the nature of exercise-induced gain on feature-selective coding in human sensory cortex and provide valuable evidence linking the neural mechanisms of behavioral state across species.
Rong-Fuh Day; Peng-Yeng Yin; Yu-Chi Wang; Ching-Hui Chao
A new hybrid multi-start tabu search for finding hidden purchase decision strategies in WWW based on eye-movements Journal Article
In: Applied Soft Computing, vol. 48, pp. 217–229, 2016.
It is known that the decision strategy performed by a subject is implicit in his/her external behaviors. Eye movement is one of the observable external behaviors when humans are performing decision activities. Due to the dramatic increase of e-commerce volume on the WWW, it is beneficial for companies to know where customers focus their attention on the webpage when deciding to make a purchase. This study proposes a new hybrid multi-start tabu search (HMTS) algorithm for finding the hidden decision strategies by clustering the eye-movement data obtained during the decision activities. The HMTS uses adaptive memory and employs both multi-start and local search strategies. An empirical dataset containing 294 eye-fixation sequences and a synthetic dataset consisting of 360 sequences were used in the experiments. We conducted a Sign test, and the results show that the proposed HMTS method significantly outperforms its variants that implement just one strategy, and that the HMTS algorithm improves over the genetic algorithm, particle swarm optimization, and K-means, with a level of significance α = 0.01. The scalability and robustness of the HMTS are validated through a series of statistical tests.
Jelmer P. De Vries; Britta K. Ischebeck; L. P. Voogt; Malou Janssen; Maarten A. Frens; Gert Jan Kleinrensink; Josef N. Geest
Cervico-ocular reflex is increased in people with nonspecific neck pain Journal Article
In: Physical Therapy, vol. 96, no. 8, pp. 1190–1195, 2016.
Background: Neck pain is a widespread complaint. People experiencing neck pain often present an altered timing in contraction of cervical muscles. This altered afferent information elicits the cervico-ocular reflex (COR), which stabilizes the eye in response to trunk-to-head movements. The vestibulo-ocular reflex (VOR), elicited by the vestibulum, is thought to be unaffected by afferent information from the cervical spine. Objective: The aim of the study was to measure the COR and VOR in people with nonspecific neck pain. Design: This study utilized a cross-sectional design in accordance with the STROBE statement. Methods: An infrared eye-tracking device was used to record the COR and the VOR while the participant was sitting on a rotating chair in darkness. Eye velocity was calculated by taking the derivative of the horizontal eye position. Parametric statistics were performed. Results: The mean COR gain in the control group (n=30) was 0.26 (SD=0.15) compared with 0.38 (SD=0.16) in the nonspecific neck pain group (n=37). Analyses of covariance were performed to analyze differences in COR and VOR gains, with age and sex as covariates. Analyses of covariance showed a significantly increased COR in participants with neck pain. The VOR was not significantly different between the control group, with a mean VOR of 0.67 (SD=0.17), and the nonspecific neck pain group, with a mean VOR of 0.66 (SD=0.22). Limitations: Measuring eye movements while the participant is sitting on a rotating chair in complete darkness is technically complicated. Conclusions: This study suggests that people with nonspecific neck pain have an increased COR (an objective, nonvoluntary eye reflex) and an unaltered VOR. It also shows that an increased COR is not restricted to patients with traumatic neck pain.
Tao Deng; Kaifu Yang; Yongjie Li; Hongmei Yan
Where does the driver look? Top-down-based saliency detection in a traffic driving environment Journal Article
In: IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2051–2062, 2016.
A traffic driving environment is a complex and dynamically changing scene. When driving, drivers always allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and prior areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could be useful in supporting autonomous driving, traffic sign detection, driving training, car collision warning, and other tasks. Recently, advances in visual attention models have provided substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, there exists no computational framework that can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, when viewing 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We proposed that the vanishing point of the road can be regarded as valuable top-down guidance in a traffic saliency detection model. Subsequently, we built a framework of a classic bottom-up and top-down combined traffic saliency detection model. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment.
Leandro L. Di Stasi; Michael B. McCamy; Susana Martinez-Conde; Ellis Gayles; Chad Hoare; Michael Foster; Andrés Catena; Stephen L. Macknik
Effects of long and short simulated flights on the saccadic eye movement velocity of aviators Journal Article
In: Physiology and Behavior, vol. 153, pp. 91–96, 2016.
Aircrew fatigue is a major contributor to operational errors in civil and military aviation. Objective detection of pilot fatigue is thus critical to prevent aviation catastrophes. Previous work has linked fatigue to changes in oculomotor dynamics, but few studies have studied this relationship in critical safety environments. Here we measured the eye movements of US Marine Corps combat helicopter pilots before and after simulated flight missions of different durations. We found a decrease in saccadic velocities after long simulated flights compared to short simulated flights. These results suggest that saccadic velocity could serve as a biomarker of aviator fatigue.
Carolina Diaz-Piedra; Héctor Rieiro; Juan Suárez; Francisco Rios-Tejada; Andrés Catena; Leandro Luigi Di Stasi
Fatigue in the military: Towards a fatigue detection test based on the saccadic velocity Journal Article
In: Physiological Measurement, vol. 37, no. 9, pp. N62–N75, 2016.
Fatigue is a major contributing factor to operational errors. Therefore, the validation of objective and sensitive indices to detect fatigue is critical to prevent accidents and catastrophes. Whereas tests based on saccadic velocity (SV) have become popular, their sensitivity in the military is not yet clear, since most research has been conducted in laboratory settings using not fully validated instruments. Field studies remain scarce, especially in extreme conditions such as real flights. Here, we investigated the effects of real, long flights on SV. We assessed five newly commissioned military helicopter pilots during their aviation training. Pilots flew Sikorsky S-76C helicopters, under instrumental flight rules, for more than 2 h (ca. 150 min). Eye movements were recorded before and after the flight with an eye tracker using a standard guided-saccade task. We also collected subjective ratings of fatigue. SV significantly decreased from the Pre-Flight to the Post-Flight session in all pilots by around 3% (range: 1–4%). Subjective ratings showed the same tendency. We provide conclusive evidence about the high sensitivity of fatigue tests based on SV in real flight conditions, even in small samples. This result might offer military medical departments a valid and useful biomarker of warfighter physiological state.
Blue hypertext is a good design decision: No perceptual disadvantage in reading and successful highlighting of relevant information Journal Article
In: PeerJ, vol. 4, pp. 1–11, 2016.
BACKGROUND: Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics has been heavily questioned in applied research, and empirical tests have produced inconclusive results. The ability to recognize blue text in foveal and parafoveal vision was identified as potentially constrained by the low number of foveally centered blue-light-sensitive retinal cells. The present study investigates whether foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading. METHODS: A silent-sentence reading study with simultaneous eye movement recordings and the invisible boundary paradigm, which allows the investigation of foveal and parafoveal perceptibility separately, was realized (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented in either black or blue and either underlined or normal. RESULTS: No effect of color and underlining, but a preview benefit, could be detected for first-pass reading measures. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, a reduced fixation time for not highlighted (black, not underlined) in contrast to highlighted target words (either blue or underlined or both). DISCUSSION: The present pattern reflects no detectable perceptual disadvantage of hyperlink stimuli, but increased attraction of attentional resources, after first-pass reading, through highlighting. Blue or underlined text allows readers to easily perceive hypertext, and at the same time readers revisited highlighted words for longer. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use.
Ziad M. Hafed; Katarina Stingl; Karl Ulrich Bartz-Schmidt; Florian Gekeler; Eberhart Zrenner
Oculomotor behavior of blind patients seeing with a subretinal visual implant Journal Article
In: Vision Research, vol. 118, pp. 119–131, 2016.
Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ~1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life.
Lynn Huestegge; Anne Böckler
Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes Journal Article
In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016.
Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards.
Yu-Cin Jian; Chao-Jung Wu
The function of diagram with numbered arrows and text in helping readers construct kinematic representations: Evidenced from eye movements and reading tests Journal Article
In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016.
Eye-tracking technology can reflect readers' sophisticated cognitive processes and, to some extent, explain the psychological meaning of reading behavior. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works, with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed that the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed that the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found that learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is, in some areas, independent of that conveyed via text.
Ioanna Katidioti; Jelmer P. Borst; Douwe J. Bierens de Haan; Tamara Pepping; Marieke K. Vugt; Niels A. Taatgen
Interrupted by your pupil: An interruption management system based on pupil dilation Journal Article
In: International Journal of Human-Computer Interaction, vol. 32, no. 10, pp. 791–801, 2016.
Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: Interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent—it does not require a task analysis—it can be broadly applied.
Ellen M. Kok; Halszka Jarodzka; Anique B. H. Bruin; Hussain A. N. BinAmir; Simon G. F. Robben; Jeroen J. G. Merriënboer
Systematic viewing in radiology: Seeing more, missing less? Journal Article
In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016.
To prevent radiologists from overlooking lesions, radiology textbooks recommend ''systematic viewing,'' a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo a training in either systematic, full-coverage (without being systematic) or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage
Oleg V. Komogortsev; Alexey Karpov
Oculomotor plant characteristics: The effects of environment and stimulus Journal Article
In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016.
This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and a 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al.
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark
Eye see what you are saying: Testing conversational influences on the information gleaned from home-loan disclosure forms Journal Article
In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016.
The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 was a laboratory simulation that recreated, in the laboratory, the effects that previous literature suggests are likely happening in the field: namely, that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers obtain, and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered.
Tsu-Chiang Lei; Shih-Chieh Wu; Chi-Wen Chao; Su-Hsin Lee
Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps Journal Article
In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016.
With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. 
This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace.
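The spatial-autocorrelation analysis of fixation clustering described above can be illustrated with Moran's I, a standard statistic for spatial aggregation. This is a generic sketch, not the authors' analysis code; the cell values and neighbour weights are made up.

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation.

    values: observations per spatial unit (e.g., fixation counts per map cell).
    weights: n x n spatial weight matrix (weights[i][j] > 0 for neighbours).
    I = (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where m is the mean of the values and S0 is the sum of all weights.
    """
    n = len(values)
    m = sum(values) / n
    s0 = sum(sum(row) for row in weights)
    num = sum(
        weights[i][j] * (values[i] - m) * (values[j] - m)
        for i in range(n)
        for j in range(n)
    )
    den = sum((v - m) ** 2 for v in values)
    return (n / s0) * num / den

# Four cells on a line; neighbours are adjacent cells. High fixation
# counts cluster on the left, so neighbouring values are similar and
# Moran's I comes out positive (clustered rather than dispersed).
vals = [8, 7, 1, 0]
w = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
i_stat = morans_i(vals, w)
```

Positive values of I indicate clustering of fixations, values near zero indicate spatial randomness, and negative values indicate dispersion.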
Qian Li; Zhuowei Joy Huang; Kiel Christianson
Visual attention toward tourism photographs with text: An eye-tracking study Journal Article
In: Tourism Management, vol. 54, pp. 243–258, 2016.
This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language.
Joan López-Moliner; Eli Brenner
Flexible timing of eye movements when catching a ball Journal Article
In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016.
In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task.
Bob McMurray; Ashley Farris-Trimble; Michael Seedorff; Hannah Rigler
The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users: Evidence from eye-tracking Journal Article
In: Ear and Hearing, vol. 37, no. 1, pp. e37–e51, 2016.
OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. 
Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions.
Zhongling Pi; Jianzhong Hong
Learning process and learning outcomes of video podcasts including the instructor and PPT slides: A Chinese case Journal Article
In: Innovations in Education and Teaching International, vol. 53, no. 2, pp. 135–144, 2016.
Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presentation mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire and a prior knowledge test. The learning process was investigated by eye-tracking and the learning outcome by a learning test. The results revealed that the participants using the video podcast with both the instructor and PPT slides gained the best learning outcomes. It was noted that they allocated much more visual attention to the instructor than to the PPT slides. It was additionally found that 22 min was the point at which participants reached peak mental fatigue. The results of our study imply that the use of educational technology is culture bound.
Alessandro Piras; Ivan M. Lanzoni; Milena Raffi; Michela Persiani; Salvatore Squatrito
The within-task criterion to determine successful and unsuccessful table tennis players Journal Article
In: International Journal of Sports Science & Coaching, vol. 11, no. 4, pp. 523–531, 2016.
The aim of this study was to examine the differences in visual search behaviour between a group of expert-level and one of novice table tennis players, to determine the temporal and spatial aspects of gaze orientation associated with correct responses. Expert players were classified as successful or unsuccessful depending on their performance in a video-based test of anticipation skill involving two kinds of stroke techniques: forehand top spin and backhand drive. Eye movements were recorded binocularly with a video-based eye tracking system. Successful experts were more effective than novices and unsuccessful experts in accurately anticipating both type and direction of stroke, showing fewer fixations of longer duration. Participants fixated mainly on arm area during forehand top spin, and on hand–racket and trunk areas during backhand drive. This study can help to develop interventions that facilitate the acquisition of anticipatory skills by improving visual search strategies.
Ioannis Rigas; Evgeniy Abdulin; Oleg V. Komogortsev
Towards a multi-source fusion approach for eye movement-driven recognition Journal Article
In: Information Fusion, vol. 32, pp. 13–25, 2016.
This paper presents research on the use of multi-source information fusion in the field of eye movement biometrics. In the current state-of-the-art, different techniques have been developed to extract the physical and the behavioral biometric characteristics of eye movements. In this work, we explore the effects of multi-source fusion of the heterogeneous information extracted by different biometric algorithms under the presence of diverse visual stimuli. We propose a two-stage fusion approach that employs stimulus-specific and algorithm-specific weights for fusing the information from different matchers based on their identification efficacy. The experimental evaluation performed on a large database of 320 subjects reveals a considerable improvement in biometric recognition accuracy, with a minimal equal error rate (EER) of 5.8% and a best-case Rank-1 identification rate (Rank-1 IR) of 88.6%. It should also be emphasized that although the concept of multi-stimulus fusion is currently evaluated specifically for eye movement biometrics, it can be adopted by other biometric modalities too, in cases where an exogenous stimulus affects the extraction of the biometric features.
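The two-stage weighted fusion described above (algorithm-specific weights within each stimulus, then stimulus-specific weights across stimuli) can be sketched as a pair of weighted averages. This is an illustrative sketch only: the scores and weight values are made up, whereas the paper derives its weights from each matcher's identification efficacy.

```python
def two_stage_fusion(scores, algo_weights, stim_weights):
    """Two-stage weighted score-level fusion.

    scores[stimulus][algorithm] -> match score in [0, 1].
    Stage 1: within each stimulus, combine algorithm scores using
    algorithm-specific weights. Stage 2: combine the per-stimulus
    results using stimulus-specific weights.
    """
    per_stimulus = {}
    for stim, algo_scores in scores.items():
        w = algo_weights[stim]
        per_stimulus[stim] = sum(
            w[a] * s for a, s in algo_scores.items()
        ) / sum(w.values())
    total_w = sum(stim_weights.values())
    return sum(stim_weights[s] * v for s, v in per_stimulus.items()) / total_w

# Hypothetical scores from two matchers (A, B) under two stimuli.
scores = {"text": {"A": 0.8, "B": 0.6}, "video": {"A": 0.4, "B": 0.2}}
algo_weights = {"text": {"A": 1.0, "B": 1.0}, "video": {"A": 1.0, "B": 1.0}}
stim_weights = {"text": 3.0, "video": 1.0}  # text weighted as more reliable
fused = two_stage_fusion(scores, algo_weights, stim_weights)
```

Upweighting the more discriminative stimulus lets a strong matcher-stimulus pair dominate the fused score without discarding the weaker sources entirely.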
Ioannis Rigas; Oleg V. Komogortsev; Reza Shadmehr
Biometric recognition via eye movements: Saccadic vigor and acceleration cues Journal Article
In: ACM Transactions on Applied Perception, vol. 13, no. 2, pp. 1–21, 2016.
Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system, and that they can open a window onto the neural functions and cognitive mechanisms related to visual attention and perception. The research field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study on the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, biometric accuracy presents a relative improvement in the range of 31.6–33.5% for the verification scenario, and in the range of 22.3–53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimuli (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues.
Seung Kweon Hong
Comparison of vertical and horizontal eye movement times in the selection of visual targets by an eye input device Journal Article
In: Journal of the Ergonomics Society of Korea, vol. 34, no. 1, pp. 19–27, 2015.
Objective: The aim of this study is to investigate how well eye movement times in visual target selection tasks performed with an eye input device follow the typical Fitts' Law model, and to compare vertical and horizontal eye movement times. Background: Manual pointing typically provides an excellent fit to the Fitts' Law model. However, when an eye input device is used for visual target selection tasks, there has been debate over whether eye movement times can be described by Fitts' Law, and more empirical studies are needed to resolve it; this study addresses that debate empirically. In addition, many researchers have reported that the direction of movement in manual pointing affects movement times; the second question in this study is whether the direction of eye movement likewise affects eye movement times. Method: Cursor movement times in visual target selection tasks with both input devices were collected. The visual targets were laid out in two configurations: cursor starting positions for vertical movements were at the top of the monitor with targets at the bottom, while starting positions for horizontal movements were at the right of the monitor with targets at the left. Results: Although eye movement time was described by Fitts' Law, the error rate was high and the correlation was relatively low (R² = 0.80 for horizontal movements and R² = 0.66 for vertical movements) compared to manual movement. Manual movement times did not differ significantly by movement direction, but eye movement times did. Conclusion: Eye movement times in the selection of visual targets by an eye-gaze input device could be described and predicted by Fitts' Law, and differed significantly according to the direction of eye movement.
Application: The results of this study might help researchers understand eye movement times in visual target selection tasks performed with eye input devices.
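For readers unfamiliar with the Fitts' Law model fitted in the study above, the Shannon formulation can be sketched as follows. The coefficients a and b here are illustrative placeholders, not the study's fitted values.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) from the Shannon formulation of Fitts' Law.

    MT = a + b * log2(D / W + 1), where D is the distance to the target,
    W is the target width, and a, b are empirically fitted constants.
    log2(D / W + 1) is the index of difficulty (ID) in bits.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A farther or smaller target has a higher index of difficulty,
# so the predicted movement time increases.
near_large = fitts_mt(distance=100, width=50)  # ID = log2(3) ≈ 1.58 bits
far_small = fitts_mt(distance=400, width=25)   # ID = log2(17) ≈ 4.09 bits
```

In a study like the one above, a and b are obtained by linear regression of measured movement times against ID, and R² of that regression quantifies how well the law describes the data.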
Oleg V. Komogortsev; Alexey Karpov; Corey D. Holland
Attack of mechanical replicas: Liveness detection with eye movements Journal Article
In: IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 716–725, 2015.
This paper investigates liveness detection techniques in the area of eye movement biometrics. We investigate a specific scenario, in which an impostor constructs an artificial replica of the human eye. Two attack scenarios are considered: 1) the impostor does not have access to the biometric templates representing authentic users, and instead utilizes average anatomical values from the relevant literature and 2) the impostor gains access to the complete biometric database, and is able to employ exact anatomical values for each individual. In this paper, liveness detection is performed at the feature and match score levels for several existing forms of eye movement biometric, based on different aspects of the human visual system. The ability of each technique to differentiate between live and artificial recordings is measured by its corresponding false spoof acceptance rate, false live rejection rate, and classification rate. The results suggest that eye movement biometrics are highly resistant to circumvention by artificial recordings when liveness detection is performed at the feature level. Unfortunately, not all techniques provide feature vectors that are suitable for liveness detection at the feature level. At the match score level, the accuracy of liveness detection depends highly on the biometric techniques employed.
Moritz Köster; Marco Rüth; Kai Christoph Hamborg; Kai Kaspar
Effects of personalized banner ads on visual attention and recognition memory Journal Article
In: Applied Cognitive Psychology, vol. 29, no. 2, pp. 181–192, 2015.
Internet companies collect a vast amount of data about their users in order to personalize banner ads. However, very little is known about the effects of personalized banners on attention and memory. In the present study, 48 subjects performed search tasks on web pages containing personalized or nonpersonalized banners. Overt attention was measured by an eye-tracker, and recognition of banner and task-relevant information was subsequently examined. The entropy of fixations served as a measure for the overall exploration of web pages. Results confirm the hypotheses that personalization enhances recognition for the content of banners while the effect on attention was weaker and partially nonsignificant. In contrast, overall exploration of web pages and recognition of task-relevant information was not influenced. The temporal course of fixations revealed that visual exploration of banners typically proceeds from the picture to the logo and finally to the slogan. We discuss theoretical and practical implications.
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh
Detection of fixations and smooth pursuit movements in high-speed eye-tracking data Journal Article
In: Biomedical Signal Processing and Control, vol. 18, pp. 145–152, 2015.
A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in ), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in  (average Cohen's kappa 0.16), when compared to the experts' annotations.
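The Cohen's kappa values used above to benchmark the event-detection algorithms against expert annotations can be computed as follows. This is a generic sketch, not the paper's evaluation code, and the sample labels are made up.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' event labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance given each annotator's
    marginal label distribution.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    p_e = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sample-by-sample labels: F = fixation, P = smooth pursuit.
expert = ["F", "F", "P", "P", "F", "P"]
algo = ["F", "F", "P", "F", "F", "P"]
kappa = cohens_kappa(expert, algo)
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, which is why it is a stricter benchmark than raw percent agreement for imbalanced event classes.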
Minyoung Lee; Randolph Blake; Sujin Kim; Chai-Youn Kim
Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music Journal Article
In: Proceedings of the National Academy of Sciences, vol. 112, no. 27, pp. 8493–8498, 2015.
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Yan Luo; Ming Jiang; Yongkang Wong; Qi Zhao
Multi-camera saliency Journal Article
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2057–2070, 2015.
A significant body of literature on saliency modeling predicts where humans look in a single image or video. Besides the scientific goal of understanding how information is fused from multiple visual sources to identify regions of interest in a holistic manner, there are tremendous engineering applications of multi-camera saliency due to the widespread deployment of cameras. This paper proposes a principled framework to smoothly integrate visual information from multiple views into a global scene map, and to employ a saliency algorithm incorporating high-level features to identify the most important regions by fusing visual information. The proposed method has the following key distinguishing features compared with its counterparts: (1) the proposed saliency detection is global (salient regions from one local view may not be important in a global context), (2) it does not require special camera deployment or overlapping fields of view, and (3) the key saliency algorithm is effective in highlighting interesting object regions even though no single object detector is used. Experiments on several data sets confirm the effectiveness of the proposed principled framework.
Andrew K. Mackenzie; Julie M. Harris
Eye movements and hazard perception in active and passive driving Journal Article
In: Visual Cognition, vol. 23, no. 6, pp. 736–757, 2015.
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.
Ishan Nigam; Mayank Vatsa; Richa Singh
Ocular biometrics: A survey of modalities and fusion approaches Journal Article
In: Information Fusion, vol. 26, pp. 1–35, 2015.
Biometrics, an integral component of Identity Science, is widely used in several large-scale country-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in the Unique Identification Authority of India's Aadhaar Program and the United Arab Emirates' border security programs, whereas periocular recognition is used to augment the performance of face or iris recognition when only the ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms, the limitations of each of the biometric traits, and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development.
Kristien Ooms; Arzu Coltekin; Philippe De Maeyer; Lien Dupont; Sara I. Fabrikant; Annelies Incoul; Matthias Kuhn; Hendrik Slabbinck; Pieter Vansteenkiste; Lise Van der Haegen
Combining user logging with eye tracking for interactive and dynamic applications Journal Article
In: Behavior Research Methods, vol. 47, no. 4, pp. 977–993, 2015.
User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions cause changes in the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze is at a particular screen position (gaze position) over a period of time, the information contained in this particular position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants. Existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) for cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields.
Ioannis Rigas; Oleg V. Komogortsev
Eye movement-driven defense against iris print-attacks Journal Article
In: Pattern Recognition Letters, vol. 68, no. 2, pp. 316–326, 2015.
This paper proposes a methodology for the utilization of eye movement cues for the task of iris print-attack detection. We investigate the fundamental distortions arising in the eye movement signal during an iris print-attack, due to the structural and functional discrepancies between a paper-printed iris and a natural eye iris. The performed experiments involve the execution of practical print-attacks against an eye-tracking device, and the collection of the resulting eye movement signals. The developed methodology for the detection of print-attack signal distortions is evaluated on a large database collected from 200 subjects, which contains both the real (‘live') eye movement signals and the print-attack (‘spoof') eye movement signals. The suggested methodology provides a sufficiently high detection performance, with a maximum average classification rate (ACR) of 96.5% and a minimum equal error rate (EER) of 3.4%. Due to the hardware similarities between eye tracking and iris capturing systems, we hypothesize that the proposed methodology can be adopted into the existing iris recognition systems with minimal cost. To further support this hypothesis we experimentally investigate the robustness of our scheme by simulating conditions of reduced sampling resolution (temporal and spatial), and of limited duration of the eye movement signals.
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton
The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 167–183, 2015.
The main purpose of this study was to examine the relative roles of central and peripheral vision when performing a dynamic forced-choice task. We did so by using a gaze-contingent display with different levels of blur in an effort to (a) test the limit of visual resolution necessary for information pick-up in each of these sectors of the visual field and, as a result, to (b) develop a more natural means of gaze-contingent display using a blurred central or peripheral visual field. The expert advantage seen under usual whole-field visual presentation persists despite surprisingly high levels of impairment to central or peripheral vision. Consistent with the well-established central/peripheral differences in sensitivity to spatial frequency, high levels of blur did not prevent better-than-chance performance by skilled players when peripheral information was blurred, but they did affect response accuracy when impairing central vision. Blur was found to always alter the pattern of eye movements before it decreased task performance. The evidence accumulated across the 4 experiments provides new insights into several key questions surrounding the role that different sectors of the visual field play in expertise in dynamic, time-constrained tasks.
Chengyao Shen; Xun Huang; Qi Zhao
Predicting eye fixations in webpages with multi-scale features and high-level representations from deep networks Journal Article
In: IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2084–2093, 2015.
In recent decades, webpages have become an increasingly important source of visual information. Compared with natural images, webpages differ in many ways. For example, they are usually rich in semantically meaningful visual media (text, pictures, logos, and animations), which makes the direct application of traditional low-level saliency models ineffective. In addition, distinct web-viewing patterns such as top-left bias and banner blindness suggest different ways of predicting attention deployment on a webpage. In this study, we utilize a new low-level feature extraction pipeline and combine it with high-level representations from deep neural networks. The proposed model is evaluated on a newly published webpage saliency dataset with three popular evaluation metrics. Results show that our model outperforms existing saliency models by a large margin and that both low- and high-level features play an important role in predicting fixations on webpages.
Lisa M. Soederberg Miller; Diana L. Cassady; Elizabeth A. Applegate; Laurel A. Beckett; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood
Relationships among food label use, motivation, and dietary quality Journal Article
In: Nutrients, vol. 7, no. 2, pp. 1068–1080, 2015.
Nutrition information on packaged foods supplies information that aids consumers in meeting the recommendations put forth in the US Dietary Guidelines for Americans, such as reducing intake of solid fats and added sugars. It is important to understand how food label use is related to dietary intake. However, prior work is based only on self-reported use of food labels, making it unclear whether subjective assessments are biased toward motivational influences. We assessed food label use using both self-reported and objective measures, the stage of change, and dietary quality in a sample of 392 participants stratified by income. Self-reported food label use was assessed using a questionnaire. Objective use was assessed using a mock shopping task in which participants viewed food labels and decided which foods to purchase. Eye movements were monitored to assess attention to nutrition information on the food labels. Individuals paid attention to nutrition information when selecting foods to buy. Self-reported and objective measures of label use showed some overlap with each other (r=0.29, p<0.001), and both predicted dietary quality (p<0.001 for both). The stage of change diminished the predictive power of subjective (p<0.09), but not objective (p<0.01), food label use. These data show both self-reported and objective measures of food label use are positively associated with dietary quality. However, self-reported measures appear to capture a greater motivational component of food label use than do more objective measures.
Lisa M. Soederberg Miller; Diana L. Cassady; Laurel A. Beckett; Elizabeth A. Applegate; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood
Misunderstanding of front-of-package nutrition information on us food products Journal Article
In: PLoS ONE, vol. 10, no. 4, pp. e0125306, 2015.
Front-of-package nutrition symbols (FOPs) are presumably readily noticeable and require minimal prior nutrition knowledge to use. Although there is evidence to support this notion, few studies have focused on Facts Up Front-type symbols, which are used in the US. Participants with varying levels of prior knowledge were asked to view two products and decide which was more healthful. FOPs on packages were manipulated so that one product was more healthful, allowing us to assess accuracy. Attention to nutrition information was assessed via eye tracking to determine what, if any, FOP information participants used to make their decisions. Results showed that accuracy was below chance on half of the comparisons despite consultation of the FOPs. Negative correlations between accuracy and attention to calories, fat, and sodium indicated that consumers over-relied on these nutrients. Although relatively little attention was allocated to fiber and sugar, associations between attention and accuracy were positive. Attention to vitamin D showed no association with accuracy, indicating confusion surrounding what constitutes a meaningful change across products. Greater nutrition knowledge was associated with greater accuracy, even when less attention was paid. Individuals, particularly those with less knowledge, are misled by calorie, sodium, and fat information on FOPs.
Miguel A. Vadillo; Chris N. H. Street; Tom Beesley; David R. Shanks
A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation Journal Article
In: Behavior Research Methods, vol. 47, no. 4, pp. 1365–1376, 2015.
Poor calibration and inaccurate drift correction can pose severe problems for eye-tracking experiments requiring high levels of accuracy and precision. We describe an algorithm for the offline correction of eye-tracking data. The algorithm conducts a linear transformation of the coordinates of fixations that minimizes the distance between each fixation and its closest stimulus. A simple implementation in MATLAB is also presented. We explore the performance of the correction algorithm under several conditions using simulated and real data, and show that it is particularly likely to improve data quality when many fixations are included in the fitting process.
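The recalibration idea in the abstract above can be sketched compactly: iteratively match each fixation to its nearest stimulus, then solve for the best-fitting linear (affine) transform in a least-squares sense. The sketch below is an illustrative Python translation of that general scheme, not the authors' published MATLAB implementation; the function name and iteration count are assumptions:

```python
import numpy as np

def recalibrate(fixations, stimuli, n_iters=10):
    """Offline recalibration in the spirit of Vadillo et al. (2015):
    find an affine transform of fixation coordinates that minimizes
    the distance from each fixation to its closest stimulus.

    fixations : (N, 2) array of recorded fixation positions (pixels)
    stimuli   : (M, 2) array of known stimulus positions (pixels)
    Returns the (N, 2) array of corrected fixation positions.
    """
    pts = fixations.astype(float)
    # Augment with a column of ones so the solve yields an affine map.
    A = np.hstack([fixations.astype(float), np.ones((len(fixations), 1))])
    for _ in range(n_iters):
        # 1. Match each (current) fixation to its closest stimulus.
        dists = np.linalg.norm(pts[:, None, :] - stimuli[None, :, :], axis=2)
        targets = stimuli[dists.argmin(axis=1)]
        # 2. Least-squares affine transform mapping fixations -> targets.
        coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
        pts = A @ coef
    return pts
```

As in any nearest-neighbour fitting scheme, the result depends on the initial matching being roughly correct, which is why the authors report that the method works best when many fixations are included in the fitting process.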
Juan D. Velásquez; Pablo Loyola; Gustavo Martinez; Kristofher Munoz; Andrés Couve; Pedro E. Maldonado
Combining eye tracking and pupillary dilation analysis to identify website key objects Journal Article
In: Neurocomputing, vol. 168, pp. 179–189, 2015.
Identifying the salient zones from Web interfaces, namely the Website Key Objects, is an essential part of the personalization process that current Web systems perform to increase user engagement. While several techniques have been proposed, most of them are focused on the use of Web usage logs. Only recently has the use of data from users' biological responses emerged as an alternative to enrich the analysis. In this work, a model is proposed to identify Website Key Objects that not only takes into account visual gaze activity, such as fixation time, but also the impact of pupil dilation. Our main hypothesis is that there is a strong relationship in terms of the pupil dynamics and the Web user preferences on a Web page. An empirical study was conducted on a real Website, from which the navigational activity of 23 subjects was captured using an eye tracking device. Results showed that the inclusion of pupillary activity, although not conclusively, allows us to extract a more robust Web Object classification, achieving a 14% increment in the overall accuracy.
Jian Wang; Ryoichi Ohtsuka; Kimihiro Yamanaka
Relation between mental workload and visual information processing Journal Article
In: Procedia Manufacturing, vol. 3, pp. 5308–5312, 2015.
The aim of this study is to clarify the relation between mental workload and the function of visual information processing. To examine the mental workload (MWL) relative to the size of the useful field of view (UFOV), an experiment was conducted with 12 participants (ages 21–23). In the primary task, participants responded to visual markers appearing in a computer display. The UFOV and the results of the secondary task for MWL were measured. In the MWL task, participants solved numerical operations designed to increase MWL. The experimental conditions in this task were divided into three categories (Repeat Aloud, Addition, and No Task), where No Task meant no mental task was given. MWL was changed in a stepwise manner. The quantitative assessment confirmed that the UFOV narrows with the increase in the MWL.
Integrating service design and eye tracking insight for designing smart TV user interfaces Journal Article
In: International Journal of Advanced Computer Science and Applications, vol. 6, no. 7, pp. 163–171, 2015.
This research proposes a process that integrates service design methods and eye tracking insights for designing a Smart TV user interface. The service design method, which guides the combination of quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including effectiveness and efficiency data obtained from eye tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results of this research demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces. It can also help relate the design of Smart TV user interfaces to users' behaviors and needs, so as to improve design affordance. Future studies may analyse the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface.
Ying Yan; Huazhi Yuan; Xiaofei Wang; Ting Xu; Haoxue Liu
Study on driver's fixation variation at entrance and inside sections of tunnel on highway Journal Article
In: Advances in Mechanical Engineering, vol. 7, no. 1, pp. 1–10, 2015.
This study examined how drivers' visual characteristics change as they pass through highway tunnels. First, eye movement data from nine drivers at tunnel entrance and inside sections were recorded using eye movement tracking devices. The transfer function of a BP artificial neural network was then employed to simulate and analyze the variation of the drivers' eye movement parameters, and relation models between eye movement parameters and distance along the tunnels were established. In the analysis of fixation point distributions, the coordinates of fixations in the visual field were clustered into different visual areas using dynamic cluster theory. The results indicated that, at 100 meters before the entrance, the average fixation duration increased, but the number of fixations decreased substantially. After 100 meters into the tunnel, fixation duration first decreased and then increased. The distribution of drivers' fixation points followed a pattern of scatter, focus, and scatter again. While driving through the tunnels, drivers exhibited long fixations; nearly 61.5% of subjects' average fixation durations increased significantly. Inside the tunnel, drivers attended to seven fixation areas, ranging from the car dashboard to the road area in front of the car.
An eye-tracking study of the Elaboration Likelihood Model in online shopping Journal Article
In: Electronic Commerce Research and Applications, vol. 14, no. 4, pp. 233–240, 2015.
This study uses eye-tracking to explore the Elaboration Likelihood Model (ELM) in online shopping. The results show that the peripheral cue had no moderating effect on purchase intention, but did have a moderating effect on eye movement. Regarding purchase intention, the high-elaboration group had higher purchase intention than the low-elaboration group with a positive peripheral cue, but there was no difference between the high- and low-elaboration groups with a negative peripheral cue. Regarding eye movement, with a positive peripheral cue the high-elaboration group had longer fixation durations than the low-elaboration group in two areas of interest (AOIs); with a negative peripheral cue, however, the low-elaboration group fixated longer on the whole page and on the two AOIs. In addition, the relationship between purchase intention and eye movement in the AOIs was more significant in the high-elaboration group when given a negative peripheral cue, and in the low-elaboration group when given a positive peripheral cue. This study not only examines the postulates of the ELM, but also contributes to a better understanding of its underlying cognitive processes. These findings have practical implications for e-sellers in identifying characteristics of consumers' elaboration from eye movements and in designing customized, persuasive content for different elaboration groups in e-commerce.
Luming Zhang; Meng Wang; Liqiang Nie; Liang Hong; Yong Rui; Qi Tian
Retargeting semantically-rich photos Journal Article
In: IEEE Transactions on Multimedia, vol. 17, no. 9, pp. 1538–1549, 2015.
Semantically-rich photos contain a rich variety of semantic objects (e.g., pedestrians and bicycles). Retargeting these photos is a challenging task since each semantic object has fixed geometric characteristics. Shrinking these objects simultaneously during retargeting is prone to distortion. In this paper, we propose to retarget semantically-rich photos by detecting photo semantics from image tags, which are predicted by a multi-label SVM. The key technique is a generative model termed latent stability discovery (LSD). It can robustly localize various semantic objects in a photo by making use of the predicted noisy image tags. Based on LSD, a feature fusion algorithm is proposed to detect salient regions at both the low-level and high-level. These salient regions are linked into a path sequentially to simulate human visual perception. Finally, we learn the prior distribution of such paths from aesthetically pleasing training photos. The prior enforces the path of a retargeted photo to be maximally similar to those from the training photos. In the experiment, we collect 217 1600×1200 photos, each containing over seven salient objects. Comprehensive user studies demonstrate the competitiveness of our method.
Youming Zhang; Jorma Laurikkala; Martti Juhola
Biometric verification with eye movements: Results from a long-term recording series Journal Article
In: IET Biometrics, vol. 4, no. 3, pp. 162–168, 2015.
The authors present their results on the use of saccadic eye movements for biometric user verification. The method can be applied to computers or other devices in which an eye movement camera system can be included. Thus far, this idea has been little researched. Having extensively studied eye movement signals for medical applications, the authors saw an opportunity for the biometric use of saccades. Saccades are the fastest of all eye movements, and are easy to stimulate and detect from signals. As signals of physiological origin, the properties of eye movements (e.g. latency and maximum angular velocity) may vary considerably between different times of day, between days or weeks, and so on. Since such variability might impair biometric verification based on saccades, the authors attempted to tackle this issue. In contrast to their earlier work, which did not include intervals between recording sessions as long as those in the present research, their results showed that, notwithstanding some variability in saccadic variables, this variability was not considerable enough to essentially disturb or impair verification results. The only exception was a test series with very long intervals of ∼16 or 32 months. For the best results obtained with various classification methods, false rejection and false acceptance rates were <5%. Thus, the authors conclude that saccadic eye movements can provide a realistic basis for biometric user verification.
Hani Alers; Judith A. Redi; Ingrid Heynderickx
Quantifying the importance of preserving video quality in visually important regions at the expense of background content Journal Article
In: Signal Processing: Image Communication, vol. 32, pp. 69–80, 2015.
Advances in digital technology have allowed us to embed significant processing power in everyday video consumption devices. At the same time, we have placed high demands on the video content itself by continuing to increase spatial resolution while trying to limit the allocated file size and bandwidth as much as possible. The result is typically a trade-off between perceptual quality and fulfillment of technological limitations. To bring this trade-off to its optimum, it is necessary to better understand how people perceive video quality. In this work, we focus on understanding how the spatial location of compression artifacts impacts visual quality perception, specifically in relation to visual attention. In particular, we investigate how changing the quality of the region of interest of a video affects its overall perceived quality, and we quantify the importance of the visual quality of the region of interest to the overall quality judgment. A three-stage experiment was conducted in which viewers were shown videos with different quality levels in different parts of the scene. By asking them to score the overall quality, we found that the quality of the region of interest has 10 times more impact than the quality of the rest of the scene. These results are in line with similar effects observed in still images, yet in videos the relevance of the visual quality of the region of interest is twice as high as in images. The latter finding is directly relevant for the design of more accurate objective quality metrics for videos that are based on the estimation of local distortion visibility.
Benedetta Cesqui; Maura Mezzetti; Francesco Lacquaniti; Andrea D'Avella
Gaze behavior in one-handed catching and its relation with interceptive performance: What the eyes can't tell Journal Article
In: PLoS ONE, vol. 10, no. 3, pp. e0119445, 2015.
In ball sports, it is usually acknowledged that expert athletes track the ball more accurately than novices. However, there is also evidence that keeping the eyes on the ball is not always necessary for interception. Here we aimed at gaining new insights on the extent to which ocular pursuit performance is related to catching performance. To this end, we analyzed eye and head movements of nine subjects catching a ball projected by an actuated launching apparatus. Four different ball flight durations and two different ball arrival heights were tested and the quality of ocular pursuit was characterized by means of several timing and accuracy parameters. Catching performance differed across subjects and depended on ball flight characteristics. All subjects showed a similar sequence of eye movement events and a similar modulation of the timing of these events in relation to the characteristics of the ball trajectory. On a trial-by-trial basis there was a significant relationship only between pursuit duration and catching performance, confirming that keeping the eyes on the ball longer increases catching success probability. Ocular pursuit parameters values and their dependence on flight conditions as well as the eye and head contributions to gaze shift differed across subjects. However, the observed average individual ocular behavior and the eye-head coordination patterns were not directly related to the individual catching performance. These results suggest that several oculomotor strategies may be used to gather information on ball motion, and that factors unrelated to eye movements may underlie the observed differences in interceptive performance.
Leandro Luigi Di Stasi; Michael B. McCamy; Sebastian Pannasch; Rebekka Renner; Andrés Catena; José J. Cañas; Boris M. Velichkovsky; Susana Martinez-Conde
Effects of driving time on microsaccadic dynamics Journal Article
In: Experimental Brain Research, vol. 233, no. 2, pp. 599–605, 2015.
Driver fatigue is a common cause of car accidents. Thus, the objective detection of driver fatigue is a first step toward the effective management of fatigue-related traffic accidents. Here, we investigated the effects of driving time, a common inducer of driver fatigue, on the dynamics of fixational eye movements. Participants drove for 2 h in a virtual driving environment while we recorded their eye movements. Microsaccade velocities decreased with driving time, suggesting a potential effect of fatigue on microsaccades during driving.
Ivan Diaz; Sabine Schmidt; Francis R. Verdun; François O. Bochud
Eye‐tracking of nodule detection in lung CT volumetric data Journal Article
In: Medical Physics, vol. 42, no. 6, pp. 2925–2932, 2015.
PURPOSE: Signal detection in 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, showing a method to visually display observer behavior during search as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed in lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye tracker in order to establish where their gaze was fixed during the experiment and to verify that a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to reading the images at three fixed scrolling speeds and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both static-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second. The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, the performance of the observers was not affected by the contrast or by the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, including a tendency toward a higher number of direction changes and slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers that try to mimic human behavior.
Hayward J. Godwin; Simon P. Liversedge; Julie A. Kirkby; Michael Boardman; Katherine Cornes; Nick Donnelly
The influence of experience upon information-sampling and decision-making behaviour during risk assessment in military personnel Journal Article
In: Visual Cognition, vol. 23, no. 4, pp. 415–431, 2015.
We examined the influence of experience upon information-sampling and decision-making behaviour in a group of military personnel as they conducted risk assessments of scenes photographed from patrol routes during the recent conflict in Afghanistan. Their risk assessment was based on an evaluation of Potential Risk Indicators (PRIs) during examination of each scene. We found that experienced and inexperienced participants were equally likely to fixate PRIs, demonstrating similarity in the selectivity of their information-sampling. However, the inexperienced participants made more revisits to PRIs, had longer response times, and were more likely to decide that the scenes contained a high level of risk. Together, these results suggest that experience primarily modulates decision-making behaviour. We discuss potential routes to train inexperienced personnel to conduct risk assessments in a manner more similar to that of experienced participants.
Janice Attard; Markus Bindemann
Establishing the duration of crimes: An individual differences and eye-tracking investigation into time estimation Journal Article
In: Applied Cognitive Psychology, vol. 28, no. 2, pp. 215–225, 2014.
The time available for viewing a perpetrator at a crime scene predicts successful person recognition in subsequent identity line-ups. This time is usually unknown and must be derived from eyewitnesses' duration estimates. This study therefore compared the estimates that different individuals provide for crimes. We then attempted to determine the accuracy of these durations by measuring observers' general time estimation ability with a set of estimator videos. Observers differed greatly in their ability to estimate time, but individual duration estimates correlated strongly for crime and estimator materials. This indicates that it might be possible to infer unknown durations of events, such as criminal incidents, from a person's ability to estimate known durations. We also measured observers' eye movements to a perpetrator during crimes. Only fixations on a perpetrator's face related to eyewitness accuracy, but these fixations did not correlate with exposure estimates for this person. The implications of these findings are discussed.
Visual qualities of future geography books Journal Article
In: European Journal of Geography, vol. 5, no. 4, pp. 56–66, 2014.
The capacity for spatial orientation and associated faculties are closely related to visual competencies. Consequently, the practice and acquisition of visual competencies are vital prerequisites to successful learning and teaching of geography. Today, geography can be understood as a visual discipline and as such may develop strong links to visual communication. In geography, textbooks may establish this link in an everyday context. This Ph.D. project aims to build the bridge between subject content and design. The result will be a visually convincing geography textbook prototype. Fifty-six geography textbooks from different European countries were analysed, focussing on the design concept. Furthermore, double-page spreads of current German geography textbooks were evaluated by observing students' textbook usage via eye tracking. Eye tracking monitors students' reactions to varying contents and designs. Findings from both analyses form the basis for the textbook concept, which is to be developed.
Erin Berenbaum; Amy E. Latimer-Cheung
Examining the link between framed physical activity ads and behavior among women Journal Article
In: Journal of Sport and Exercise Psychology, vol. 36, no. 3, pp. 271–280, 2014.
Gain-framed messages are more effective at promoting physical activity than loss-framed messages. However, the mechanism through which this effect occurs is unclear. The current experiment examined the effects of message framing on variables described in the communication behavior change model (McGuire, 1989), as well as the mediating effects of these variables on the message-frame-behavior relationship. Sixty low-to-moderately active women viewed 20 gain- or loss-framed ads and five control ads while their eye movements were recorded via eye tracking. The gain-framed ads attracted greater attention, ps < .05; produced more positive attitudes
Daniel Bishop; Gustav Kuhn; Claire Maton
Telling people where to look in a soccer-based decision task: A nomothetic approach Journal Article
In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–13, 2014.
Research has shown that identifiable visual search patterns characterize skilled performance of anticipation and decision-making tasks in sport. However, to date, the use of experts' gaze patterns to entrain novices' performance has been confined to aiming activities. Accordingly, in a first experiment, 40 participants of varying soccer experience viewed static images of oncoming soccer players and attempted to predict the direction in which those players were about to move. Multiple regression analyses showed that the sole predictor of decision-making efficiency was the time taken to initiate a saccade to the ball. In a follow-up experiment, soccer novices undertook the same task as in Experiment 1. Two experimental groups were instructed to either look at the ball, or the player's head, as quickly as possible; a control group received no instructions. The experimental groups were fastest to make a saccade to the ball or head, respectively, but decision-making efficiency was equivalent across all three groups. The fallibility of a nomothetic approach to training eye movements is discussed.
Hanneke Bouwsema; Corry K. van der Sluis; Raoul M. Bongers
Changes in performance over time while learning to use a myoelectric prosthesis Journal Article
In: Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, pp. 1–15, 2014.
BACKGROUND: Training increases the functional use of an upper limb prosthesis, but little is known about how people learn to use their prosthesis. The aim of this study was to describe the changes in performance with an upper limb myoelectric prosthesis during practice. The results provide a basis to develop an evidence-based training program. METHODS: Thirty-one able-bodied participants took part in an experiment as well as thirty-one age- and gender-matched controls. Participants in the experimental condition, randomly assigned to one of four groups, practiced with a myoelectric simulator for five sessions in a two-weeks period. Group 1 practiced direct grasping, Group 2 practiced indirect grasping, Group 3 practiced fixating, and Group 4 practiced a combination of all three tasks. The Southampton Hand Assessment Procedure (SHAP) was assessed in a pretest, posttest, and two retention tests. Participants in the control condition performed SHAP two times, two weeks apart with no practice in between. Compressible objects were used in the grasping tasks. Changes in end-point kinematics, joint angles, and grip force control, the latter measured by magnitude of object compression, were examined. RESULTS: The experimental groups improved more on SHAP than the control group. Interestingly, the fixation group improved comparable to the other training groups on the SHAP. Improvement in global position of the prosthesis leveled off after three practice sessions, whereas learning to control grip force required more time. The indirect grasping group had the smallest object compression in the beginning and this did not change over time, whereas the direct grasping and the combination group had a decrease in compression over time. Moreover, the indirect grasping group had the smallest grasping time that did not vary over object rigidity, while for the other two groups the grasping time decreased with an increase in object rigidity. 
CONCLUSIONS: A training program should devote more time to learning the fine control aspects of the prosthetic hand during rehabilitation. Moreover, training should start with the indirect grasping task, which yielded the best performance, probably due to the greater amount of useful information available from the sound hand.
Myriam Chanceaux; Anne Guérin-Dugué; Benoît Lemaire; Thierry Baccino
A computational cognitive model of information search in textual materials Journal Article
In: Cognitive Computation, vol. 6, no. 1, pp. 1–17, 2014.
Document foraging for information is a crucial and increasingly prevalent activity nowadays. We designed a computational cognitive model to simulate the oculomotor scanpath of an average web user searching for specific information from textual materials. In particular, the developed model dynamically combines visual, semantic, and memory processes to predict the user's focus of attention during information seeking from paragraphs of text. A series of psychological experiments was conducted using eye-tracking techniques in order to validate and refine the proposed model. Comparisons between model simulations and human data are reported and discussed taking into account the strengths and shortcomings of the model. The proposed model provides a unique contribution to the investigation of the cognitive processes involved during information search and bears significant implications for web page design and evaluation.
Samuel G. Charlton; Nicola J. Starkey; John A. Perrone; Robert B. Isler
What's the risk? A comparison of actual and perceived driving risk Journal Article
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 25, no. A, pp. 50–64, 2014.
It has long been presumed that drivers' perceptions of risk play an important role in guiding on-road behaviour. How accurately drivers perceive the momentary risk of a driving situation, however, is unknown. This research compared drivers' perceptions of the momentary risk for a range of roads to the objective risk associated with those roads. Videos of rural roads, filmed from the drivers' perspective, were presented to 69 participants seated in a driving simulator while they indicated the momentary levels of risk they were experiencing by moving a risk meter mounted on the steering wheel. Estimates of the objective levels of risk for the roads were calculated using road protection scores from the KiwiRAP database (part of the International Road Assessment Programme). Subsequently, the participants also provided risk estimates for still photos taken from the videos. Another group of 10 participants viewed the videos and photos while their eye movements and fixations were recorded. In a third experiment, 14 participants drove a subset of the roads in a car while providing risk ratings at selected points of interest. Results showed a high degree of consistency across the different methods. Certain road situations were rated as being riskier than the objective risk, and perhaps more importantly, the risk of other situations was significantly under-rated. Horizontal curves and narrow lanes were associated with over-rated risk estimates, while intersections and roadside hazards such as narrow road shoulders, power poles and ditches were significantly under-rated. Analysis of eye movements indicated that drivers did not fixate these features and that the spread of fixations, pupil size and eye blinks were significantly correlated with the risk ratings.
An analysis of the road design elements at 77 locations in the video revealed five road characteristics that predicted nearly 80% of the variance in drivers' risk perceptions: horizontal curvature, lane and shoulder width, gradient, and the presence of median barriers.
Mina Choi; Joel Wang; Wei Chung Cheng; Giovanni Ramponi; Luigi Albani; Aldo Badano
Effect of veiling glare on detectability in high-dynamic-range medical images Journal Article
In: IEEE/OSA Journal of Display Technology, vol. 10, no. 5, pp. 420–428, 2014.
We describe a methodology for predicting the detectability of subtle targets in dark regions of high-dynamic-range (HDR) images in the presence of veiling glare in the human eye. The method relies on predictions of contrast detection thresholds for the human visual system within an HDR image based on psychophysics measurements and modeling of the HDR display device characteristics. We present experimental results used to construct the model and discuss an image-dependent empirical veiling glare model and the validation of the model predictions with test patterns, natural scenes, and medical images. The model predictions are compared to a previously reported model (HDR-VDP2) for predicting HDR image quality accounting for glare effects.
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier
Video viewing: Do auditory salient events capture visual attention? Journal Article
In: Annals of Telecommunications, vol. 69, no. 1-2, pp. 89–97, 2014.
We assess whether salient auditory events contained in soundtracks modify eye movements when exploring videos. In a previous study, we found that, on average, nonspatial sound contained in video soundtracks impacts on eye movements. This result indicates that sound could play a leading part in visual attention models to predict eye movements. In this research, we go further and test whether the effect of sound on eye movements is stronger just after salient auditory events. To automatically spot salient auditory events, we used two auditory saliency models: the discrete energy separation algorithm and the energy model. Both models provide a saliency time curve, based on the fusion of several elementary audio features. The most salient auditory events were extracted by thresholding these curves. We examined some eye movement parameters just after these events rather than on all the video frames. We showed that the effect of sound on eye movements (variability between eye positions, saccade amplitude, and fixation duration) was not stronger after salient auditory events than on average over entire videos. Thus, we suggest that sound could impact on visual exploration not only after salient events but in a more global way.
Leandro Luigi Di Stasi; Michael B. McCamy; Stephen L. Macknik; James A. Mankin; Nicole Hooft; Andrés Catena; Susana Martinez-Conde
Saccadic eye movement metrics reflect surgical residents′ fatigue Journal Article
In: Annals of Surgery, vol. 259, no. 4, pp. 824–829, 2014.
OBJECTIVE: Little is known about the effects of surgical residents' fatigue on patient safety. We monitored surgical residents' fatigue levels during their call day using (1) eye movement metrics, (2) objective measures of laparoscopic surgical performance, and (3) subjective reports based on standardized questionnaires. BACKGROUND: Prior attempts to investigate the effects of fatigue on surgical performance have suffered from methodological limitations, including inconsistent definitions and lack of objective measures of fatigue, and nonstandardized measures of surgical performance. Recent research has shown that fatigue can affect the characteristics of saccadic (fast ballistic) eye movements in nonsurgical scenarios. Here we asked whether fatigue induced by time-on-duty (∼24 hours) might affect saccadic metrics in surgical residents. Because saccadic velocity is not under voluntary control, a fatigue index based on saccadic velocity has the potential to provide an accurate and unbiased measure of the resident's fatigue level. METHODS: We measured the eye movements of members of the general surgery resident team at St. Joseph's Hospital and Medical Center (Phoenix, AZ) (6 males and 6 females), using a head-mounted video eye tracker (similar in configuration to a surgical headlight), during the performance of 3 tasks: 2 simulated laparoscopic surgery tasks (peg transfer and precision cutting) and a guided saccade task, before and after their call day. Residents rated their perceived fatigue level every 3 hours throughout their 24-hour shift, using a standardized scale. RESULTS: Time-on-duty decreased saccadic velocity and increased subjective fatigue but did not affect laparoscopic performance. These results support the hypothesis that saccadic indices reflect graded changes in fatigue.
They also indicate that fatigue due to prolonged time-on-duty does not necessarily result in medical error, highlighting the complicated relationship among continuity of care, patient safety, and fatigued providers. CONCLUSIONS: Our data show, for the first time, that saccadic velocity is a reliable indicator of the subjective fatigue of health care professionals during prolonged time-on-duty. These findings have potential impact on the development of neuroergonomic tools to detect fatigue among health professionals and on the specifications of future guidelines regarding residents' duty hours.
Lien Dupont; Marc Antrop; Veerle Van Eetvelde
Eye-tracking analysis in landscape perception research: Influence of photograph properties and landscape characteristics Journal Article
In: Landscape Research, vol. 39, no. 4, pp. 417–432, 2014.
The European Landscape Convention emphasises the need for public participation in landscape planning and management. This demands understanding of how people perceive and observe landscapes. This can objectively be measured using eye tracking, a system recording eye movements and fixations while observing images. In this study, 23 participants were asked to observe 90 landscape photographs, representing 18 landscape character types in Flanders (Belgium) differing in degree of openness and heterogeneity. For each landscape, five types of photographs were shown, varying in view angle. This experimental design allowed testing the effect of the landscape characteristics and photograph types on the observation pattern, measured by Eye-tracking Metrics (ETM). The results show that panoramic and detail photographs are observed differently than the other types. The degree of openness and heterogeneity also seems to exert a significant influence on the observation of the landscape.
Daniel Frings; John Parkin; Anne M. Ridley
The effects of cycle lanes, vehicle to kerb distance and vehicle type on cyclists' attention allocation during junction negotiation Journal Article
In: Accident Analysis and Prevention, vol. 72, pp. 411–421, 2014.
Increased frequency of cycle journeys has led to an escalation in collisions between cyclists and vehicles, particularly at shared junctions. Risks associated with passing decisions have been shown to influence cyclists' behavioural intentions. The current study extended this research by linking not only risk perception but also attention allocation (via tracking the eye movements of twenty cyclists viewing junction approaches presented on video) to behavioural intentions. These constructs were measured in a variety of contexts: junctions featuring cycle lanes, large vs. small vehicles, and differing kerb-to-vehicle distances. Overall, cyclists devoted the majority of their attention to the nearside (side closest to the kerb) of vehicles, and perceived nearside and offside (side furthest from the kerb) passing as most risky. Waiting behind was the most frequent behavioural intention, followed by nearside and then offside passing. While cycle lane presence did not affect behaviour, it did lead to nearside passing being perceived as less risky, and to less attention being devoted to the offside. Large vehicles led to increased perceived risk of passing, more attention directed towards the rear of vehicles, reduced offside passing, and increased intentions to remain behind the vehicle. Whether the vehicle was large or small, nearside passing was preferred around 30% of the time. Wide kerb distances increased nearside passing intentions and lowered the associated perceptions of risk. Additionally, relationships between attention and both risk evaluations and behaviours were observed. These results are discussed in relation to cyclists' situational awareness and the biases that various contextual factors can introduce. From these, recommendations for road safety and training are suggested.
Kristin J. Heaton; Alexis L. Maule; Jun Maruta; Elisabeth M. Kryskow; Jamshid Ghajar
Attention and visual tracking degradation during acute sleep deprivation in a military sample Journal Article
In: Aviation Space and Environmental Medicine, vol. 85, no. 5, pp. 497–503, 2014.
Background: Fatigue due to sleep restriction places individuals at elevated risk for accidents, degraded health, and impaired physical and mental performance. Early detection of fatigue-related performance decrements is an important component of injury prevention and can help to ensure optimal performance and mission readiness. This study used a predictive visual tracking task and a computer-based measure of attention to characterize fatigue-related attention decrements in healthy Army personnel during acute sleep deprivation. Methods: Serving as subjects in this laboratory-based study were 87 male and female service members between the ages of 18 and 50 with no history of brain injury with loss of consciousness, substance abuse, or significant psychiatric or neurologic diagnoses. Subjects underwent 26 h of sleep deprivation, during which eye movement measures from a continuous circular visual tracking task and attention measures (reaction time, accuracy) from the Attention Network Test (ANT) were collected at baseline, at 20 h awake, and between 24 and 26 h awake. Results: Increases in the variability of gaze positional errors (46-47%), as well as reaction time-based ANT measures (9-65%), were observed across 26 h of sleep deprivation. Accuracy of ANT responses declined across this same period (11%). Discussion: Performance measures of predictive visual tracking accurately reflect impaired attention due to acute sleep deprivation and provide a promising approach for assessing readiness in personnel serving in diverse occupational areas, including flight and ground support crews.
Benedetta Heimler; Francesco Pavani; Mieke Donk; Wieske Zoest
Stimulus- and goal-driven control of eye movements: Action videogame players are faster but not better Journal Article
In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2398–2412, 2014.
Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.
Oleg V. Komogortsev; Corey D. Holland; Alex Karpov; Larry R. Price
Biometrics via oculomotor plant characteristics: Impact of parameters in oculomotor plant model Journal Article
In: ACM Transactions on Applied Perception, vol. 11, no. 4, pp. 1–17, 2014.
This article proposes and evaluates a novel biometric approach utilizing the internal, nonvisible, anatomical structure of the human eye. The proposed method estimates the anatomical properties of the human oculomotor plant from the measurable properties of human eye movements, utilizing a two-dimensional linear homeomorphic model of the oculomotor plant. The derived properties are evaluated within a biometric framework to determine their efficacy in both verification and identification scenarios. The results suggest that the physical properties derived from the oculomotor plant model are capable of achieving 20.3% equal error rate and 65.7% rank-1 identification rate on high-resolution equipment involving 32 subjects, with biometric samples taken over four recording sessions; or 22.2% equal error rate and 12.6% rank-1 identification rate on low-resolution equipment involving 172 subjects, with biometric samples taken over two recording sessions.
Chiuhsiang Joe Lin; Chi-Chan Chang; Yung-Hui Lee
Evaluating camouflage design using eye movement data Journal Article
In: Applied Ergonomics, vol. 45, no. 3, pp. 714–723, 2014.
This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect the eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of the camouflage assessment. We hypothesized that the assessment could be made with regard to the differences in detectability and discriminability of the camouflage patterns. These could explain less efficient search behavior in eye movements. Overall, data obtained from eye movements can be used to significantly enhance the interpretation of the effects of different camouflage design.
Chiuhsiang Joe Lin; Chi-Chan Chang; Bor-Shong Liu
Developing and evaluating a target-background similarity metric for camouflage detection Journal Article
In: PLoS ONE, vol. 9, no. 2, pp. e87310, 2014.
BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could potentially serve as a camouflage assessment tool. METHODOLOGY: In this study, we quantify the relationship between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms, and analyze the strengths and weaknesses of these algorithms. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. The approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
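The Universal Image Quality Index referenced in this abstract combines correlation, luminance similarity, and contrast similarity into a single score in [-1, 1]. As a rough illustration only (not the authors' implementation, which, per the standard formulation, averages the index over sliding windows), a single-window version can be sketched in Python:

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Single-window Universal Image Quality Index between two
    equally sized grayscale images. Q = 1 only when x == y;
    the published index averages this quantity over local windows."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()          # mean luminance of each image
    vx, vy = x.var(), y.var()            # contrast (variance) of each image
    cov = ((x - mx) * (y - my)).mean()   # cross-covariance (structure term)
    # 4*cov*mx*my factors into correlation * luminance * contrast distortion
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

For a camouflage application, `x` would be a patch of background and `y` the same patch with the camouflaged target present; values closer to 1 indicate the target blends with the background.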
Hsin-Hui Lin; Shu-Fei Yang
An eye movement study of attribute framing in online shopping Journal Article
In: Journal of Marketing Analytics, vol. 2, no. 2, pp. 72–80, 2014.
This study uses an eye-tracking method to explore the framing effect on observed eye movements and purchase intention in online shopping. The results show that negative framing induces more active eye movements. Function attributes and non-function attributes attract more eye movements, with higher intensity, and the scanpath across the areas of interest reveals a consistent pattern. These findings have practical implications for e-sellers seeking to improve communication with customers.
John J. H. Lin; Sunny S. J. Lin
Tracking eye movements when solving geometry problems with handwriting devices Journal Article
In: Journal of Eye Movement Research, vol. 7, no. 1, pp. 1–15, 2014.
The present study investigated the following issues: (1) whether differences are evident in the eye movement measures of successful and unsuccessful problem-solvers; (2) what is the relationship between perceived difficulty and eye movement measures; and (3) whether eye movements in various AOIs differ when solving problems. Sixty-three 11th grade students solved five geometry problems about the properties of similar triangles. A digital drawing tablet and sensitive pressure pen were used to record the responses. The results indicated that unsuccessful solvers tended to have more fixation counts, run counts, and longer dwell time on the problem area, whereas successful solvers focused more on the calculation area. In addition, fixation counts, dwell time, and run counts in the diagram area were positively correlated with the perceived difficulty, suggesting that understanding similar triangles may require translation or mental rotation. We argue that three eye movement measures (i.e., fixation counts, dwell time, and run counts) are appropriate for use in examining problem solving given that they differentiate successful from unsuccessful solvers and correlate with perceived difficulty. Furthermore, the eye-tracking technique provides objective measures of students' cognitive load for instructional designers.
John J. H. Lin; Sunny S. J. Lin
Cognitive load for configuration comprehension in computer-supported geometry problem solving: An eye movement perspective Journal Article
In: International Journal of Science and Mathematics Education, vol. 12, no. 3, pp. 605–627, 2014.
The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations involving the number of informational elements, the number of element interactivities and the level of mental operations were assumed to account for the increasing difficulty. A sample of 311 9th grade students solved five geometry problems that required knowledge of similar triangles in a computer-supported environment. In the second experiment, 63 participants solved the same problems and eye movements were recorded. The results indicated that (1) the five problems differed in pass rate and in self-reported cognitive load; (2) because the successful solvers were very swift in pattern recognition and visual integration, their fixation did not clearly show valuable information; (3) more attention and more time (shown by the heat maps, dwell time and fixation counts) were given to read the more difficult configurations than to the intermediate or easier configurations; and (4) in addition to number of elements and element interactivities, the level of mental operations accounts for the major cognitive load sources of configuration comprehension. The results derived some implications for design principles of geometry diagrams in secondary school mathematics textbooks.
Tzu Chien Liu; Melissa Hui Mei Fan; Fred Paas
Effects of digital dictionary format on incidental acquisition of spelling knowledge and cognitive load during second language learning: Click-on vs. key-in dictionaries Journal Article
In: Computers and Education, vol. 70, pp. 9–20, 2014.
Recent research has shown that students involved in computer-based second language learning prefer to use a digital dictionary in which a word can be looked up by clicking on it with a mouse (i.e., click-on dictionary) to a digital dictionary in which a word can be looked up by typing it on a keyboard (i.e., key-in dictionary). This study investigated whether digital dictionary format also differentially affects students' incidental acquisition of spelling knowledge and cognitive load during second language learning. A comparison between a click-on dictionary condition, a key-in dictionary condition, and a non-dictionary control condition for 45 Taiwanese students learning English as a foreign language revealed that learners who used a key-in dictionary invested more time in dictionary consultation than learners who used a click-on dictionary. However, on a subsequent unexpected spelling test, the key-in group invested less time and performed better than the click-on group. The theoretical and practical implications of the results are discussed.
W. Joseph MacInnes; Amelia R. Hunt; Matthew D. Hilchey; Raymond M. Klein
Driving forces in free visual search: An ethology Journal Article
In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 280–295, 2014.
Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses.
Olivia M. Maynard; Angela Attwood; Laura O'Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R. Munafò
Avoidance of cigarette pack health warnings among regular cigarette smokers Journal Article
In: Drug and Alcohol Dependence, vol. 136, no. 1, pp. 170–174, 2014.
Background: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology. Method: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types were recorded. Results: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location. Conclusion: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. These findings have implications for cigarette packaging and health warning policy.
K. Ooms; Philippe De Maeyer; V. Fack
Study of the attentive behavior of novice and expert map users using eye tracking Journal Article
In: Cartography and Geographic Information Science, vol. 41, no. 1, pp. 37–54, 2014.
The aim of this paper is to gain better understanding of the way map users read and interpret the visual stimuli presented to them and how this can be influenced. In particular, the difference between expert and novice map users is considered. In a user study, the participants studied four screen maps which had been manipulated to introduce deviations. The eye movements of 24 expert and novice participants were tracked, recorded, and analyzed (both visually and statistically) based on a grid of Areas of Interest. These visual analyses are essential for studying the spatial dimension of maps to identify problems in design. In this research, we used visualization of eye movement metrics (fixation count and duration) in a 2D and 3D grid and a statistical comparison of the grid cells. The results show that the users' eye movements clearly reflect the main elements on the map. The users' attentive behavior is influenced by deviating colors, as their attention is drawn to them. This could also influence the users' interpretation process. Both user groups encountered difficulties when trying to interpret and store map objects that were mirrored. Insights into how different types of map users read and interpret map content are essential in this fast-evolving era of digital cartographic products.
Alessandro Piras; Roberto Lobietti; Salvatore Squatrito
Response time, visual search strategy, and anticipatory skills in volleyball players Journal Article
In: Journal of Ophthalmology, vol. 2014, pp. 1–10, 2014.
This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing an offensive action were presented to participants, while their eye movements were recorded by a head-mounted video-based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter's toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter's hand-ball contact to the participants' button press. Experts were faster and more accurate in predicting the direction of the setting than novices, making accurate predictions when they used a search strategy involving fewer fixations of longer duration, and spending less time fixating the display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task.
Alessandro Piras; Emanuela Pierantozzi; Salvatore Squatrito
Visual search strategy in judo fighters during the execution of the first grip Journal Article
In: International Journal of Sports Science & Coaching, vol. 9, no. 1, pp. 185–198, 2014.
Visual search behaviour is believed to be very relevant for athlete performance, especially for sports requiring refined visuo-motor coordination skills. Modern coaches believe that optimal visuo-motor strategy may be part of advanced training programs. Gaze behaviour of expert and novice judo fighters was investigated while they performed a real sport-specific task. The athletes were tested while they performed a first grip either in an attack or a defence condition. The results showed that expert judo fighters use a search strategy involving fewer fixations of longer duration than their novice counterparts. Experts spent a greater percentage of their time fixating on the lapel and face with respect to other areas of the scene. In contrast, the most frequently fixated cue for the novice group was the sleeve area. It can be concluded that experts orient their gaze towards the middle of the scene, both in attack and in defence, in order to gather more information at once, perhaps using parafoveal vision.