EyeLink Usability / Applied Publications
All EyeLink usability and applied research publications up until 2023 (with some early 2024 papers) are listed below. You can search the publications using keywords such as driving, sport, workload, etc. You can also search for individual author names. If we missed any EyeLink usability or applied article, please email us.
2017 |
Rebecca L. Monk; J. Westwood; Derek Heim; Adam W. Qureshi The effect of pictorial content on attention levels and alcohol-related beliefs: An eye-tracking study Journal Article In: Journal of Applied Social Psychology, vol. 47, no. 3, pp. 158–164, 2017. @article{Monk2017, To examine attention levels to different types of alcohol warning labels. Twenty-two participants viewed neutral or graphic warning messages while dwell times for text and image components of messages were assessed. Pre and postexposure outcome expectancies were assessed in order to compute change scores. Dwell times were significantly higher for the image, as opposed to the text, components of warnings, irrespective of image type. Participants whose expectancies increased after exposure to the warnings spent longer looking at the image than did those whose positive expectancies remained static or decreased. Images in alcohol warnings appear beneficial for drawing attention, although findings may suggest that this is also associated with heightened positive alcohol-related beliefs. Implications for health intervention are discussed and future research in this area is recommended. |
Parashkev Nachev; Geoff E. Rose; David H. Verity; Sanjay G. Manohar; Kelly MacKenzie; Gill Adams; Maria Theodorou; Quentin A. Pankhurst; Christopher Kennard Magnetic oculomotor prosthetics for acquired nystagmus Journal Article In: Ophthalmology, vol. 124, no. 10, pp. 1556–1564, 2017. @article{Nachev2017, Purpose: Acquired nystagmus, a highly symptomatic consequence of damage to the substrates of oculomotor control, often is resistant to pharmacotherapy. Although heterogeneous in its neural cause, its expression is unified at the effector—the eye muscles themselves—where physical damping of the oscillation offers an alternative approach. Because direct surgical fixation would immobilize the globe, action at a distance is required to damp the oscillation at the point of fixation, allowing unhindered gaze shifts at other times. Implementing this idea magnetically, herein we describe the successful implantation of a novel magnetic oculomotor prosthesis in a patient. Design: Case report of a pilot, experimental intervention. Participant: A 49-year-old man with longstanding, medication-resistant, upbeat nystagmus resulting from a paraneoplastic syndrome caused by stage 2A, grade I, nodular sclerosing Hodgkin's lymphoma. Methods: We designed a 2-part, titanium-encased, rare-earth magnet oculomotor prosthesis, powered to damp nystagmus without interfering with the larger forces involved in saccades. Its damping effects were confirmed when applied externally. We proceeded to implant the device in the patient, comparing visual functions and high-resolution oculography before and after implantation and monitoring the patient for more than 4 years after surgery. Main Outcome Measures: We recorded Snellen visual acuity before and after intervention, as well as the amplitude, drift velocity, frequency, and intensity of the nystagmus in each eye. Results: The patient reported a clinically significant improvement of 1 line of Snellen acuity (from 6/9 bilaterally to 6/6 on the left and 6/5–2 on the right), reflecting an objectively measured reduction in the amplitude, drift velocity, frequency, and intensity of the nystagmus. These improvements were maintained throughout a follow-up of 4 years and enabled him to return to paid employment. Conclusions: This work opens a new field of implantable therapeutic devices—oculomotor prosthetics—designed to modify eye movements dynamically by physical means in cases where a purely neural approach is ineffective. Applied to acquired nystagmus refractory to all other interventions, it is shown successfully to damp pathologic eye oscillations while allowing normal saccadic shifts of gaze. |
Andrew D. Ogle; Dan J. Graham; Rachel G. Lucas-Thompson; Christina A. Roberto Influence of cartoon media characters on children's attention to and preference for food and beverage products Journal Article In: Journal of the Academy of Nutrition and Dietetics, vol. 117, no. 2, pp. 265–270, 2017. @article{Ogle2017, Background: Over-consuming unhealthful foods and beverages contributes to pediatric obesity and associated diseases. Food marketing influences children's food preferences, choices, and intake. Objective: To examine whether adding licensed media characters to healthful food/beverage packages increases children's attention to and preference for these products. We hypothesized that children prefer less- (vs more-) healthful foods, and pay greater attention to and preferentially select products with (vs without) media characters regardless of nutritional quality. We also hypothesized that children prefer more-healthful products when characters are present over less-healthful products without characters. Design: On a computer, participants viewed food/beverage pairs of more-healthful and less-healthful versions of similar products. The same products were shown with and without licensed characters on the packaging. An eye-tracking camera monitored participant gaze, and participants chose which product they preferred from each of 60 pairs. Participants/setting: Six- to 9-year-old children (n=149; mean age=7.36, standard deviation=1.12) recruited from the Twin Cities, MN, area in 2012-2013. Main outcome measures: Visual attention and product choice. Statistical analyses performed: Attention to products was compared using paired-samples t tests, and product choice was analyzed with single-sample t tests. Analyses of variance were conducted to test for interaction effects of specific characters and child sex and age. Results: Children paid more attention to products with characters and preferred less-healthful products. Contrary to our prediction, children chose products without characters approximately 62% of the time. Children's choices significantly differed based on age, sex, and the specific cartoon character displayed, with characters in this study being preferred by younger boys. Conclusions: Results suggest that putting licensed media characters on more-healthful food/beverage products might not encourage all children to make healthier food choices, but could increase selection of healthy foods among some, particularly younger children, boys, and those who like the featured character(s). Effective use likely requires careful demographic targeting. |
Avigael M. Aizenman; Trafton Drew; Krista A. Ehinger; Dianne Georgian-Smith; Jeremy M. Wolfe Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: An eye tracking study Journal Article In: Journal of Medical Imaging, vol. 4, no. 4, pp. 1–22, 2017. @article{Aizenman2017, As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified “driller” or “scanner” strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy. |
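Aizenman et al. coregister 1000 Hz gaze samples with the DBT slice on screen to build a 3-D representation of eye position. A minimal sketch of that coregistration step is below; the data layout (sample timestamps plus a log of scroll events) and the function name are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def coregister_gaze_with_slices(gaze_t, gaze_x, gaze_y, scroll_t, scroll_slice):
    """Assign to each gaze sample the DBT slice displayed at that moment.

    gaze_t         : (N,) gaze sample timestamps in ms (e.g., 1000 Hz recording)
    gaze_x, gaze_y : (N,) gaze position in screen pixels
    scroll_t       : (M,) ascending timestamps of slice-change (scroll) events
    scroll_slice   : (M,) slice index shown from each scroll event onward
    Returns an (N, 3) array of (x, y, slice), i.e., a 3-D gaze representation.
    """
    # Index of the most recent scroll event at or before each gaze sample
    idx = np.searchsorted(scroll_t, gaze_t, side="right") - 1
    idx = np.clip(idx, 0, len(scroll_slice) - 1)
    return np.column_stack([gaze_x, gaze_y, scroll_slice[idx]])

# Toy usage: five 1 ms samples, with a scroll from slice 10 to 11 at t = 3 ms
print(coregister_gaze_with_slices(
    gaze_t=np.array([0, 1, 2, 3, 4]),
    gaze_x=np.array([512, 513, 514, 515, 516]),
    gaze_y=np.array([400, 401, 402, 403, 404]),
    scroll_t=np.array([0, 3]),
    scroll_slice=np.array([10, 11])))
```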
Elham Azizi; Larry Allen Abel; Matthew J. Stainer The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 2, pp. 484–497, 2017. @article{Azizi2017, Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes. |
Jan W. Brascamp; Marnix Naber Eye tracking under dichoptic viewing conditions: A practical solution Journal Article In: Behavior Research Methods, vol. 49, no. 4, pp. 1303–1309, 2017. @article{Brascamp2017, In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. Although the method we propose is straightforward, affordable (on the order of US$1,000) and easy to implement, for many purposes it makes for an improvement over existing methods, which tend to require specialized equipment and often compromise on the quality of the visual stimulus and/or the eye tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories. |
Etzel Cardeña; Barbara Nordhjem; David Marcusson-Clavertz; Kenneth Holmqvist The "hypnotic state" and eye movements: Less there than meets the eye? Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0182546, 2017. @article{Cardena2017, Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks had been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis. Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state." Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors. |
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Identifying the machine translation error types with the greatest impact on post-editing effort Journal Article In: Frontiers in Psychology, vol. 8, pp. 1282, 2017. @article{Daems2017a, Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools automatically predict whether it is more effortful to post-edit than to translate from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems heavily rely on automatic metrics, even though they do not accurately capture actual post-editing effort. In addition, these systems do not take translator experience into account, even though novices' translation processes are different from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with that on various process effort indicators. The translation and post-editing process of student translators and professional translators was logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected. |
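Daems et al. compare process-based effort measures against HTER, the human-targeted translation edit rate: the number of word-level edits needed to turn the machine translation into its post-edited version, divided by the length of the post-edited reference. The sketch below is a simplified illustration of that metric (it omits the block-shift operation of full TER and is not the tooling used in the study).

```python
def simple_hter(mt_output: str, post_edited: str) -> float:
    """Simplified HTER: word-level edit distance (insertions, deletions,
    substitutions) from MT output to the post-edited reference, divided by
    the reference length. Full TER additionally allows block shifts."""
    hyp, ref = mt_output.split(), post_edited.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[-1][-1] / max(len(ref), 1)

print(simple_hter("the cat sat at the mat", "the cat sat on the mat"))  # ≈ 0.167
```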
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Translation methods and experience: A comparative analysis of human translation and post-editing with students and professional translators Journal Article In: Meta, vol. 62, no. 2, pp. 245–270, 2017. @article{Daems2017, While the benefits of using post-editing for technical texts have been more or less acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and how they are performed by students as well as professionals, so that pitfalls can be determined and translator training can be adapted accordingly. In this article, we aim to get a better understanding of the differences between human translation and post-editing for newspaper articles. Processes are registered by means of eye tracking and keystroke logging, which allows us to study translation speed, cognitive load, and the use of external resources. We also look at the final quality of the product as well as translators' attitude towards both methods of translation. Studying these different aspects shows that both methods and groups are more similar than anticipated. |
Ewa Domaradzka; Maksymilian Bielecki Deadly attraction - attentional bias toward preferred cigarette brand in smokers Journal Article In: Frontiers in Psychology, vol. 8, pp. 1365, 2017. @article{Domaradzka2017, Numerous studies have shown that biases in visual attention might be evoked by affective and personally relevant stimuli, for example addiction-related objects. Despite the fact that addiction is often linked to specific products and systematic purchase behaviors, no studies focused directly on the existence of bias evoked by brands. Smokers are characterized by high levels of brand loyalty and everyday contact with cigarette packaging. Using the incentive-salience mechanism as a theoretical framework, we hypothesized that this group might exhibit a bias toward the preferred cigarette brand. In our study, a group of smokers (N = 40) performed a dot probe task while their eye movements were recorded. In every trial a pair of pictures was presented – each of them showed a single cigarette pack. The visual properties of stimuli were carefully controlled, so branding information was the key factor affecting subjects' reactions. For each participant, we compared gaze behavior related to the preferred vs. other brands. The analyses revealed no attentional bias in the early, orienting phase of the stimulus processing and strong differences in maintenance and disengagement. Participants spent more time looking at the preferred cigarettes and saccades starting at the preferred brand location had longer latencies. In sum, our data shows that attentional bias toward brands might be found in situations not involving choice or decision making. These results provide important insights into the mechanisms of formation and maintenance of attentional biases to stimuli of personal relevance and might serve as a first step toward developing new attitude measurement techniques. |
Mackenzie G. Glaholt; Grace Sim Gaze-contingent center-surround fusion of infrared images to facilitate visual search for human targets Journal Article In: Journal of Imaging Science and Technology, vol. 61, no. 1, pp. 230–235, 2017. @article{Glaholt2017, We investigated gaze-contingent fusion of infrared imagery during visual search. Eye movements were monitored while subjects searched for and identified human targets in images captured simultaneously in the short-wave (SWIR) and long-wave (LWIR) infrared bands. Based on the subject's gaze position, the search display was updated such that imagery from one sensor was continuously presented to the subject's central visual field (“center”) and another sensor was presented to the subject's non-central visual field (“surround”). Analysis of performance data indicated that, compared to the other combinations, the scheme featuring SWIR imagery in the center region and LWIR imagery in the surround region constituted an optimal combination of the SWIR and LWIR information: it inherited the superior target detection performance of LWIR imagery and the superior target identification performance of SWIR imagery. This demonstrates a novel method for efficiently combining imagery from two infrared sources as an alternative to conventional image fusion. |
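The gaze-contingent center-surround scheme described by Glaholt and Sim can be sketched as a simple compositing step that runs whenever the tracker reports a new gaze sample: show SWIR pixels inside a window centred on gaze and LWIR pixels everywhere else. The window shape, radius, and function below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_center_surround(swir, lwir, gaze_xy, radius=100):
    """Composite a display frame with SWIR imagery in a circular window
    around the gaze position ("center") and LWIR imagery outside it
    ("surround"). swir and lwir are co-registered (H, W) arrays."""
    h, w = swir.shape
    yy, xx = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    center_mask = (xx - gx) ** 2 + (yy - gy) ** 2 <= radius ** 2
    frame = lwir.copy()
    frame[center_mask] = swir[center_mask]
    return frame

# In an experiment this would sit inside the display loop, recomputing the
# frame on every new gaze sample reported by the eye tracker.
swir = np.random.rand(480, 640)
lwir = np.random.rand(480, 640)
frame = fuse_center_surround(swir, lwir, gaze_xy=(320, 240))
```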
Cheng S. Qian; Jan W. Brascamp How to build a dichoptic presentation system that includes an eye tracker Journal Article In: Journal of Visualized Experiments, no. 127, pp. 1–9, 2017. @article{Qian2017, The presentation of different stimuli to the two eyes, dichoptic presentation, is essential for studies involving 3D vision and interocular suppression. There is a growing literature on the unique experimental value of pupillary and oculomotor measures, especially for research on interocular suppression. Although obtaining eye-tracking measures would thus benefit studies that use dichoptic presentation, the hardware essential for dichoptic presentation (e.g. mirrors) often interferes with high-quality eye tracking, especially when using a video-based eye tracker. We recently described an experimental setup that combines a standard dichoptic presentation system with an infrared eye tracker by using infrared-transparent mirrors1. The setup is compatible with standard monitors and eye trackers, easy to implement, and affordable (on the order of US$1,000). Relative to existing methods it has the benefits of not requiring special equipment and posing few limits on the nature and quality of the visual stimulus. Here we provide a visual guide to the construction and use of our setup. |
Ioannis Rigas; Oleg V. Komogortsev Current research in eye movement biometrics: An analysis based on BioEye 2015 competition Journal Article In: Image and Vision Computing, vol. 58, pp. 129–141, 2017. @article{Rigas2017a, On the onset of the second decade of research in eye movement biometrics, the already demonstrated results strongly support the promising perspectives of the field. This paper presents a description of the research conducted in eye movement biometrics based on an extended analysis of the characteristics and results of the “BioEye 2015: Competition on Biometrics via Eye Movements.” This extended presentation can contribute to the understanding of the current level of research in eye movement biometrics, covering areas such as the previous work in the field, the procedures for the creation of a database of eye movement recordings, and the different approaches that can be used for the analysis of eye movements. Also, the presented results from BioEye 2015 competition can demonstrate the potential identification accuracy that can be achieved under easier and more difficult scenarios. Based on the provided presentation, we discuss topics related to the current status in eye movement biometrics and suggest possible directions for the future research in the field. |
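A common baseline in eye-movement biometrics (including several approaches surveyed around BioEye 2015) is to summarise each recording with a small set of oculomotor statistics and then match recordings to enrolled identities. The sketch below shows that idea with made-up features and a nearest-neighbour classifier; the specific features and classifier are assumptions for illustration, not a description of any competing method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def recording_features(fix_durations, sac_amplitudes, sac_peak_velocities):
    """Summarise one eye-movement recording as a small feature vector."""
    return np.array([np.mean(fix_durations), np.std(fix_durations),
                     np.mean(sac_amplitudes), np.std(sac_amplitudes),
                     np.mean(sac_peak_velocities)])

# Enrolment set: one feature vector per recording, with subject identities
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))        # placeholder feature vectors
y = np.repeat(np.arange(10), 4)     # 10 subjects, 4 recordings each
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

probe = rng.normal(size=(1, 5))     # an unseen recording to identify
print("predicted subject:", clf.predict(probe)[0])
```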
Sergei L. Shishkin; Darisii G. Zhao; Andrei V. Isachenko; Boris M. Velichkovsky Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction Journal Article In: Psychology in Russia: State of the Art, vol. 10, no. 3, pp. 120–137, 2017. @article{Shishkin2017, Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and effortless interaction with machines in various fields of application. |
Jan-Philipp Tauscher; Maryam Mustafa; Marcus Magnor; T. U. Braunschweig Comparative analysis of three different modalities for perception of artifacts in videos Journal Article In: ACM Transactions on Applied Perception, vol. 14, no. 4, pp. 1–12, 2017. @article{Tauscher2017, This study compares three popular modalities for analyzing perceived video quality: user ratings, eye tracking, and EEG. We contrast these three modalities for a given video sequence to determine if there is a gap between what humans consciously see and what we implicitly perceive. Participants are shown a video sequence with different artifacts appearing at specific distances in their field of vision: near foveal, middle peripheral, and far peripheral. Our results show distinct differences between what we saccade to (eye tracking), how we consciously rate video quality, and our neural responses (EEG data). Our findings indicate that the measurement of perceived quality depends on the specific modality used. |
Philip R. K. Turnbull; John R. Phillips Ocular effects of virtual reality headset wear in young adults Journal Article In: Scientific Reports, vol. 7, pp. 16172, 2017. @article{Turnbull2017a, Virtual Reality (VR) headsets create immersion by displaying images on screens placed very close to the eyes, which are viewed through high powered lenses. Here we investigate whether this viewing arrangement alters the binocular status of the eyes, and whether it is likely to provide a stimulus for myopia development. We compared binocular status after 40-minute trials in indoor and outdoor environments, in both real and virtual worlds. We also measured the change in thickness of the ocular choroid, to assess the likely presence of signals for ocular growth and myopia development. We found that changes in binocular posture at distance and near, gaze stability, amplitude of accommodation and stereopsis were not different after exposure to each of the 4 environments. Thus, we found no evidence that the VR optical arrangement had an adverse effect on the binocular status of the eyes in the short term. Choroidal thickness did not change after either real world trial, but there was a significant thickening (≈10 microns) after each VR trial (p < 0.001). The choroidal thickening which we observed suggests that a VR headset may not be a myopiagenic stimulus, despite the very close viewing distances involved. |
Lauren H. Williams; Trafton Drew Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions? Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 1, pp. 12, 2017. @article{Williams2017, Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design in order to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. In half of the cases, search was interrupted by a series of true or false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye-tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions will be discussed in the context of these results. |
Julia A. Wolfson; Dan J. Graham; Sara N. Bleich Attention to physical activity–equivalent calorie information on nutrition facts labels: An eye-tracking investigation Journal Article In: Journal of Nutrition Education and Behavior, vol. 49, no. 1, pp. 35–42.e1, 2017. @article{Wolfson2017, Objective Investigate attention to Nutrition Facts Labels (NFLs) with numeric only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting Participants came to the Behavioral Medicine Lab at Colorado State University in spring, 2015. Participants The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s) Attention to and attitudes about activity-equivalent calorie information. Analysis Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Conclusions and Implications Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. |
Aiping Xiong; Robert W. Proctor; Weining Yang; Ninghui Li Is domain highlighting actually helpful in identifying phishing web pages? Journal Article In: Human Factors, vol. 59, no. 4, pp. 640–660, 2017. @article{Xiong2017, OBJECTIVE: To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. BACKGROUND: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. METHOD: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. RESULTS: Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. CONCLUSION: Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. APPLICATION: Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages. |
Ying Yan; Xiaofei Wang; Ludan Shi; Haoxue Liu Influence of light zones on drivers' visual fixation characteristics and traffic safety in extra-long tunnels Journal Article In: Traffic Injury Prevention, vol. 18, no. 1, pp. 102–110, 2017. @article{Yan2017, OBJECTIVE: Special light zone is a new illumination technique that promises to improve the visual environment and improve traffic safety in extra-long tunnels. The purpose of this study is to identify how light zones affect the dynamic visual characteristics and information perception of drivers as they pass through extra-long tunnels on highways. METHODS: Thirty-two subjects were recruited for this study, and fixation data were recorded using eye movement tracking devices. A back-propagation artificial neural network was employed to predict and analyze the influence of special light zones on the variations in the fixation duration and pupil area of drivers. The analytic coordinates of focus points at different light zones were clustered to obtain different visual fixation regions using dynamic cluster theory. RESULTS: The findings of this study indicated that the special light zones had different influences on fixation duration and pupil area compared to other sections. Drivers gradually changed their fixation points from a scattered pattern to a narrow and zonal distribution that mainly focused on the main visual area at the center, the road just ahead, and the right side of the main visual area while approaching the special light zones. The results also showed that the variation in illumination and landscape in light zones was more important than driving experience to yield changes in visual cognition and driving behavior. CONCLUSIONS: It can be concluded that the special light zones can help relieve drivers' vision fatigue to some extent and further develop certain visual stimulus that can enhance drivers' attention. The study would provide a scientific basis for safety measurement implementation in extra-long tunnels. |
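Two analysis steps in Yan et al.'s abstract (a back-propagation network predicting fixation duration and pupil area, and clustering of fixation coordinates into visual regions) map onto standard tools. The sketch below uses scikit-learn stand-ins (MLPRegressor for the back-propagation network, KMeans in place of the dynamic cluster analysis) on synthetic data; the predictor layout is an assumption, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Predictors (e.g., tunnel section, illuminance, speed) -> fixation duration and pupil area
X = rng.normal(size=(200, 3))
y = rng.normal(size=(200, 2))
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1).fit(X, y)
print("training R^2:", ann.score(X, y))

# Group fixation coordinates within a light zone into fixation regions
fixations_xy = rng.uniform(0, 1, size=(300, 2))
regions = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(fixations_xy)
print("fixations per region:", np.bincount(regions))
```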
Thomas Zawisza; Ray Garza Using an eye tracking device to assess vulnerabilities to burglary Journal Article In: Journal of Police and Criminal Psychology, vol. 32, no. 3, pp. 203–213, 2017. @article{Zawisza2017, This research examines the extent to which visual cues influence a person's decision to burglarize. Participants in this study (n = 65) viewed ten houses through an eye tracking device and were asked whether or not they thought each house was vulnerable to burglary. The eye tracking device recorded where a person looked and for how long they looked (in milliseconds). Our findings showed that windows and doors were two of the most important visual stimuli. Results from our follow-up questionnaire revealed that stimuli such as fencing, beware of pet signs, cars in driveways, and alarm systems are also considered. There are a number of implications for future research and policy. |
Elise Grison; Valérie Gyselinck; Jean Marie Burkhardt; Jan M. Wiener Route planning with transportation network maps: An eye-tracking study Journal Article In: Psychological Research, vol. 81, no. 5, pp. 1020–1034, 2017. @article{Grison2017, Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate psychological processes and mechanisms involved in such a route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to the current theories of route planning. |
Jessica Hanley; David E. Warren; Natalie Glass; Daniel Tranel; Matthew Karam; Joseph Buckwalter Visual interpretation of plain radiographs in orthopaedics using eye-tracking technology Journal Article In: The Iowa Orthopaedic Journal, vol. 37, pp. 225–231, 2017. @article{Hanley2017, BACKGROUND: Despite the importance of radiographic interpretation in orthopaedics, there is not a clear understanding of the specific visual strategies used while analyzing a plain film. Eyetracking technology allows for the objective study of eye movements while performing a dynamic task, such as reading X-rays. Our study looks to elucidate objective differences in image interpretation between novice and experienced orthopaedic trainees using this novel technology. METHODS: Novice and experienced orthopaedic trainees (N=23) were asked to interpret AP pelvis films, searching for unilateral acetabular fractures while eye-movements were assessed for pattern of gaze, fixation on regions of interest, and time of fixation at regions of interest. Participants were asked to label radiographs as "fractured" or "not fractured." If "fractured", the participant was asked to determine the fracture pattern. A control condition employed Ekman faces and participants judged gender and facial emotion. Data were analyzed for variation in eye movements between participants, accuracy of responses, and response time. RESULTS: Accuracy: There was no significant difference by level of training for accurately identifying fracture images (p=0.3255). There was a significant association between higher level of training and correctly identifying non-fractured images (p=0.0155); greater training was also associated with more success in identifying the correct Judet-Letournel classification (p=0.0029). Response Time: Greater training was associated with faster response times (p=0.0009 for fracture images and 0.0012 for non-fractured images). Fixation Duration: There was no correlation of average fixation duration with experience (p=0.9632). Regions of Interest (ROIs): More experience was associated with an average of two fewer fixated ROIs (p=0.0047). Number of Fixations: Increased experience was associated with fewer fixations overall (p=0.0007). CONCLUSIONS: Experience has a significant impact on both accuracy and efficiency in interpreting plain films. Greater training is associated with a shift toward a more efficient and thorough assessment of plain radiographs. Eyetracking is a useful descriptive tool in the setting of plain film interpretation. CLINICAL RELEVANCE: We propose further assessment of eye movements in larger populations of orthopaedic surgeons, including staff orthopaedists. Describing the differences between novice and expert interpretation may provide insight into ways to accelerate the learning process in young orthopaedists. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017. @article{Hasanzadeh2017a, Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd Impact of construction workers' hazard identification skills on their visual attention Journal Article In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017. @article{Hasanzadeh2017, Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors–such as failure to identify a hazard–are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics–fixation count, dwell-time percentage, and run count–were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills and at-risk workers. Because discriminant function analysis is similar to regression, discriminant function (linear combinations of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye-movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time. |
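The statistical pipeline described here (a MANOVA on the three fixation metrics across skill groups, followed by separate ANOVAs and a discriminant function analysis) can be reproduced in outline with standard Python libraries. The sketch below runs on made-up data; the column names and group structure are assumptions chosen to mirror the abstract, not the authors' dataset or code.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 90
df = pd.DataFrame({
    "skill_group": np.repeat(["low", "medium", "high"], n // 3),
    "fixation_count": rng.poisson(20, n),
    "dwell_time_pct": rng.uniform(0, 100, n),
    "run_count": rng.poisson(5, n),
})

# Multivariate test of group differences across the three eye-movement metrics
manova = MANOVA.from_formula(
    "fixation_count + dwell_time_pct + run_count ~ skill_group", data=df)
print(manova.mv_test())

# Discriminant functions: linear combinations of the metrics that best separate groups
lda = LinearDiscriminantAnalysis()
lda.fit(df[["fixation_count", "dwell_time_pct", "run_count"]], df["skill_group"])
print(lda.scalings_)  # coefficients of the discriminant functions
```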
Matthew Heath; Erin M. Shellington; Sam Titheridge; Dawn P. Gill; Robert J. Petrella In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017. @article{Heath2017, Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., eye movement mirror-symmetrical to a target). Antisaccades are an ideal tool for the study of individuals with subtle executive deficits because of its hands- and language-free nature and because the task's neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention and the magnitude of the decrease was consistent across groups. Thus, multi-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC. |
Yu-Cin Jian Eye-movement patterns and reader characteristics of students with good and poor performance when reading scientific text with diagrams Journal Article In: Reading and Writing, vol. 30, no. 7, pp. 1447–1472, 2017. @article{Jian2017a, This study investigated the cognitive processes and reader characteristics of sixth graders who had good and poor performance when reading scientific text with diagrams. We first measured the reading ability and reading self-efficacy of sixth-grade participants, and then recorded their eye movements while they were reading an illustrated scientific text and scored their answers to content-related questions. Finally, the participants evaluated the difficulty of the article, the attractiveness of the content and diagram, and their learning performance. The participants were then classified into groups based on how many correct responses they gave to questions related to reading. The results showed that readers with good performance had better character recognition ability and reading self-efficacy, were more attracted to the diagrams, and had higher self-evaluated learning levels than the readers with poor performance did. Eye-movement data indicated that readers with good performance spent significantly more reading time on the whole article, the text section, and the diagram section than the readers with poor performance did. Interestingly, readers with good performance had significantly longer mean fixation duration on the diagrams than readers with poor performance did; further, readers with good performance made more saccades between the text and the diagrams. Additionally, sequential analysis of eye movements showed that readers with good performance preferred to observe the diagram rather than the text after reading the title, but this tendency was not present in readers with poor performance. In sum, using eye-tracking technology and several reading tests and questionnaires, we found that various cognitive aspects (reading strategy, diagram utilization) and affective aspects (reading self-efficacy, article likeness, diagram attraction, and self-evaluation of learning) affected sixth graders' reading performance in this study. |
Yu-Cin Jian; Hwa-Wei Ko Influences of text difficulty and reading ability on learning illustrated science texts for children: An eye movement study Journal Article In: Computers and Education, vol. 113, pp. 263–279, 2017. @article{Jian2017, In this study, eye movement recordings and comprehension tests were used to investigate children's cognitive processes and comprehension when reading illustrated science texts. Ten-year-old children (N = 42) who were beginning to read to learn, with high and low reading ability read two illustrated science texts in Chinese (one medium-difficult article, one difficult article), and then answered questions that measured comprehension of textual and pictorial information as well as text-and-picture integration. The high-ability group outperformed the low-ability group on all questions. Eye movement analyses showed that both groups of students spent roughly the same amount of time reading both articles, but had different methods of reading them. The low-ability group was inclined to read what seemed easier to them and read the text more. The high-ability group attended more to the difficult article and made an effort to integrate the textual and pictorial information. During a first-pass reading of the difficult article, high- but not low-ability readers returned to the previous paragraph. The low-ability readers spent more time reading the less difficult article and not the difficult one that required teachers' attention. Suggestions for classroom instruction are proposed accordingly. |
Shijian Luo; Yi Hu; Yuxiao Zhou Factors attracting Chinese Generation Y in the smartphone application marketplace Journal Article In: Frontiers of Computer Science, vol. 11, no. 2, pp. 290–306, 2017. @article{Luo2017, Smartphone applications (apps) are becoming increasingly popular all over the world, particularly in the Chinese Generation Y population; however, surprisingly, only a small number of studies on app factors valued by this important group have been conducted. Because the competition among app developers is increasing, app factors that attract users' attention are worth studying for sales promotion. This paper examines these factors through two separate studies. In the first study, i.e., Experiment 1, which consists of a survey, perceptual rating and verbal protocol methods are employed, and 90 randomly selected app websites are rated by 169 experienced smartphone users according to app attraction. Twelve of the most rated apps (six highest rated and six lowest rated) are selected for further investigation, and 11 influential factors that Generation Y members value are listed. A second study, i.e., Experiment 2, is conducted using the most and least rated app websites from Experiment 1, and eye tracking and verbal protocol methods are used. The eye movements of 45 participants are tracked while browsing these websites, providing evidence about what attracts these users' attention and the order in which the app components are viewed. The results of these two studies suggest that Chinese Generation Y is a content-centric group when they browse the smartphone app marketplace. Icon, screenshot, price, rating, and name are the dominant and indispensable factors that influence purchase intentions, among which icon and screenshot should be meticulously designed. Price is another key factor that drives Chinese Generation Y's attention. The recommended apps are the least dominant element. Design suggestions for app websites are also proposed. This research has important implications. |
Min-Yuan Ma; Hsien-Chih Chuang An exploratory study of the effect of enclosed structure on type design with fixation dispersion: Evidence from eye movements Journal Article In: International Journal of Technology and Design Education, vol. 27, no. 1, pp. 149–164, 2017. @article{Ma2017, Type design is the process of re-organizing visual elements and their corresponding meanings into a new organic entity, particularly for the highly logographic Chinese characters whose intrinsic features are retained even after reorganization. Due to this advantage, designers believe that such a re-organization process will not affect Chinese character recognition. However, not having an effect on recognition is not the same as not affecting the viewing process, especially when the character is so highly deconstructed that, along with the viewing process, the original intention of the design and its efficacy are both indirectly affected. Therefore, besides capturing the changes of character features, a good type designer should understand how characters are viewed. Past studies have found that character structure will affect character recognition, particularly for enclosed and non-enclosed characters whose differences are significant, although the interpretation of such differences remains open for discussion. This study explored the viewing process of Chinese characters with eye-tracking methods and calculated the concentration and saccadic amplitude of fixation in the viewing process in terms of the descriptive approach in a geographic information system, so as to investigate the differences among types of character modules with the spatial dispersion index. This study found that the overall vision when viewing enclosed structures is more concentrated than non-enclosed structures. |
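The fixation-concentration comparison in Ma and Chuang's study rests on a spatial dispersion measure for fixation coordinates. One simple stand-in for such an index is the mean pairwise distance between fixations, sketched below; this is an illustrative substitute, not the GIS-based descriptive approach used in the paper.

```python
import numpy as np

def fixation_dispersion(fix_xy):
    """Mean pairwise Euclidean distance between fixation points.
    Smaller values indicate more concentrated viewing; fix_xy is an
    (N, 2) array of fixation coordinates."""
    fix_xy = np.asarray(fix_xy, dtype=float)
    diffs = fix_xy[:, None, :] - fix_xy[None, :, :]
    dists = np.hypot(diffs[..., 0], diffs[..., 1])
    n = len(fix_xy)
    return dists.sum() / (n * (n - 1)) if n > 1 else 0.0

print(fixation_dispersion([[0, 0], [3, 4], [6, 8]]))  # ≈ 6.67
```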
Andrew K. Mackenzie; Julie M. Harris A link between attentional function, effective eye movements, and driving ability Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 2, pp. 381–394, 2017. @article{Mackenzie2017, The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limits the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed two variations of a multiple object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual's attentional function, effective eye movements and driving ability. We also discuss the use of a visuomotor task in assessing driving behaviour. |
Yousri Marzouki; Valériane Dusaucy; Myriam Chanceaux; Sebastiaan Mathôt The World (of Warcraft) through the eyes of an expert Journal Article In: PeerJ, vol. 5, pp. 1–21, 2017. @article{Marzouki2017, Negative correlations between pupil size and the tendency to look at salient locations were found in recent studies (e.g., Mathôt et al., 2015). It is hypothesized that this negative correlation might be explained by the mental effort participants put into the task, which in turn leads to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differed with regard to their content (i.e., informative locations) and visual complexity (i.e., salient locations). Because there is no available standard tool to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge about WoW and acquired skills through completed raids, highest rated battlegrounds, Skill Points, etc. Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts |
Olivia M. Maynard; Jonathan C. W. Brooks; Marcus R. Munafò; Ute Leonards Neural mechanisms underlying visual attention to health warnings on branded and plain cigarette packs Journal Article In: Addiction, vol. 112, no. 4, pp. 662–672, 2017. @article{Maynard2017, Aims: To (1) test if activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differ when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. Design: Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. Setting: Clinical Research and Imaging Centre, University of Bristol, UK. Participants: Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. Measurements: Brain activity was assessed in whole brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. On-line eye-tracking during scanning recorded visual attention to health warnings. Findings: There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). Conclusions: Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on ‘plain' cigarette packs than branded packs. |
2016 |
John-Ross Rizzo; Todd E. Hudson; Weiwei Dai; Ninad Desai; Arash Yousefi; Dhaval Palsana; Ivan Selesnick; Laura J. Balcer; Steven L. Galetta; Janet C. Rucker Objectifying eye movements during rapid number naming: Methodology for assessment of normative data for the King-Devick test Journal Article In: Journal of the Neurological Sciences, vol. 362, pp. 232–239, 2016. @article{Rizzo2016a, Objective: Concussion is a major public health problem and considerable efforts are focused on sideline-based diagnostic testing to guide return-to-play decision-making and clinical care. The King-Devick (K-D) test, a sensitive sideline performance measure for concussion detection, reveals slowed reading times in acutely concussed subjects, as compared to healthy controls; however, the normal behavior of eye movements during the task and deficits underlying the slowing have not been defined. Methods: Twelve healthy control subjects underwent quantitative eye tracking during digitized K-D testing. Results: The total K-D reading time was 51.24 (± 9.7) seconds. A total of 145 saccades (± 15) per subject were generated, with average peak velocity 299.5°/s and average amplitude 8.2°. The average inter-saccadic interval was 248.4 ms. Task-specific horizontal and oblique saccades per subject numbered, respectively, 102 (± 10) and 17 (± 4). Subjects with the fewest saccades tended to blink more, resulting in a larger amount of missing data, whereas subjects with the most saccades tended to make extra saccades during line transitions. Conclusions: Establishment of normal and objective ocular motor behavior during the K-D test is a critical first step towards defining the range of deficits underlying abnormal testing in concussion. Further, it sets the groundwork for exploration of K-D correlations with cognitive dysfunction and saccadic paradigms that may reflect specific neuroanatomic deficits in the concussed brain. |
Ioannis Rigas; Evgeniy Abdulin; Oleg V. Komogortsev Towards a multi-source fusion approach for eye movement-driven recognition Journal Article In: Information Fusion, vol. 32, pp. 13–25, 2016. @article{Rigas2016, This paper presents research on the use of multi-source information fusion in the field of eye movement biometrics. In the current state-of-the-art, there are different techniques developed to extract the physical and the behavioral biometric characteristics of the eye movements. In this work, we explore the effects from the multi-source fusion of the heterogeneous information extracted by different biometric algorithms under the presence of diverse visual stimuli. We propose a two-stage fusion approach with the employment of stimulus-specific and algorithm-specific weights for fusing the information from different matchers based on their identification efficacy. The experimental evaluation performed on a large database of 320 subjects reveals a considerable improvement in biometric recognition accuracy, with minimal equal error rate (EER) of 5.8%, and best case Rank-1 identification rate (Rank-1 IR) of 88.6%. It should also be emphasized that although the concept of multi-stimulus fusion is currently evaluated specifically for the eye movement biometrics, it can be adopted by other biometric modalities too, in cases when an exogenous stimulus affects the extraction of the biometric features. |
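The two-stage weighting idea described above can be illustrated with a minimal score-level fusion sketch. This is not the authors' implementation: the array shapes, the function name `fuse_scores`, and the weight values are illustrative assumptions, with the weights standing in for the stimulus-specific and algorithm-specific identification efficacies the paper estimates from data.

```python
import numpy as np

def fuse_scores(scores, algo_w, stim_w):
    """Two-stage weighted score fusion (hypothetical shapes and names).

    scores : (n_stimuli, n_algorithms, n_gallery) similarity scores of one
             probe recording against every enrolled subject.
    algo_w : (n_algorithms,) weights, e.g. from each matcher's standalone accuracy.
    stim_w : (n_stimuli,) weights, e.g. from each stimulus type's efficacy.
    Returns fused (n_gallery,) scores; the Rank-1 decision is the argmax.
    """
    algo_w = algo_w / algo_w.sum()                            # normalise weights
    stim_w = stim_w / stim_w.sum()
    per_stimulus = np.einsum('a,sag->sg', algo_w, scores)     # stage 1: fuse algorithms
    return np.einsum('s,sg->g', stim_w, per_stimulus)         # stage 2: fuse stimuli

# toy usage: 3 stimulus types, 2 algorithms, 5 enrolled subjects
rng = np.random.default_rng(0)
scores = rng.random((3, 2, 5))
print(np.argmax(fuse_scores(scores, np.array([0.6, 0.4]), np.array([1.0, 1.0, 1.0]))))
```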
Ioannis Rigas; Oleg V. Komogortsev; Reza Shadmehr Biometric recognition via eye movements: Saccadic vigor and acceleration cues Journal Article In: ACM Transactions on Applied Perception, vol. 13, no. 2, pp. 1–21, 2016. @article{Rigas2016a, Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system and they also can open a window to the neural functions and cognitive mechanisms related to visual attention and perception. The research field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study for the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, the biometric accuracy presents a relative improvement in the range of 31.6–33.5% for the verification scenario, and in the range of 22.3–53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimuli (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues. |
Donghyun Ryu; David L. Mann; Bruce Abernethy; Jamie M. Poolton Gaze-contingent training enhances perceptual skill acquisition Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–21, 2016. @article{Ryu2016, The purpose of this study was to determine whether decision-making skill in perceptual-cognitive tasks could be enhanced using a training technique that impaired selective areas of the visual field. Recreational basketball players performed perceptual training over 3 days while viewing with a gaze-contingent manipulation that displayed either (a) a moving window (clear central and blurred peripheral vision), (b) a moving mask (blurred central and clear peripheral vision), or (c) full (unrestricted) vision. During the training, participants watched video clips of basketball play and at the conclusion of each clip made a decision about to which teammate the player in possession of the ball should pass. A further control group watched unrelated videos with full vision. The effects of training were assessed using separate tests of decision-making skill conducted in a pretest, posttest, and 2-week retention test. The accuracy of decision making was greater in the posttest than in the pretest for all three intervention groups when compared with the control group. Remarkably, training with blurred peripheral vision resulted in a further improvement in performance from posttest to retention test that was not apparent for the other groups. The type of training had no measurable impact on the visual search strategies of the participants, and so the training improvements appear to be grounded in changes in information pickup. The findings show that learning with impaired peripheral vision offers a promising form of training to support improvements in perceptual skill. |
Sameer Saproo; Victor Shih; David C. Jangraw; Paul Sajda Neural mechanisms underlying catastrophic failure in human-machine interaction during aerial navigation Journal Article In: Journal of Neural Engineering, vol. 13, pp. 1–12, 2016. @article{Saproo2016, Objective. We investigated the neural correlates of workload buildup in a fine visuomotor task called the boundary avoidance task (BAT). The BAT has been known to induce naturally occurring failures of human–machine coupling in high performance aircraft that can potentially lead to a crash—these failures are termed pilot induced oscillations (PIOs). Approach. We recorded EEG and pupillometry data from human subjects engaged in a flight BAT simulated within a virtual 3D environment. Main results. We find that workload buildup in a BAT can be successfully decoded from oscillatory features in the electroencephalogram (EEG). Information in delta, theta, alpha, beta, and gamma spectral bands of the EEG all contribute to successful decoding, however gamma band activity with a lateralized somatosensory topography has the highest contribution, while theta band activity with a fronto-central topography has the most robust contribution in terms of real-world usability. We show that the output of the spectral decoder can be used to predict PIO susceptibility. We also find that workload buildup in the task induces pupil dilation, the magnitude of which is significantly correlated with the magnitude of the decoded EEG signals. These results suggest that PIOs may result from the dysregulation of cortical networks such as the locus coeruleus (LC)–anterior cingulate cortex (ACC) circuit. Significance. Our findings may generalize to similar control failures in other cases of tight man–machine coupling where gains and latencies in the control system must be inferred and compensated for by the human operators. A closed-loop intervention using neurophysiological decoding of workload buildup that targets the LC-ACC circuit may positively impact operator performance in such situations. |
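As a rough illustration of the kind of spectral workload decoding the study describes, the sketch below extracts log band power in the delta-to-gamma bands and fits a linear classifier. It is a generic sketch under stated assumptions (synthetic epochs, an illustrative sampling rate, and textbook band edges), not the paper's EEG pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 45)}

def band_power_features(epochs, fs=256):
    """epochs: (n_epochs, n_channels, n_samples) EEG segments.
    Returns log band power per channel and band: (n_epochs, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=min(fs, epochs.shape[-1]), axis=-1)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)

# toy usage with synthetic stand-ins for real BAT epochs and workload labels
rng = np.random.default_rng(1)
epochs = rng.standard_normal((40, 8, 512))   # 40 epochs, 8 channels, 2 s at 256 Hz
labels = rng.integers(0, 2, 40)              # 0 = low workload, 1 = high workload
decoder = LogisticRegression(max_iter=1000).fit(band_power_features(epochs), labels)
print(decoder.predict_proba(band_power_features(epochs))[:5, 1])  # decoded workload probability
```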
Graham G. Scott; Christopher J. Hand Motivation determines Facebook viewing strategy: An eye movement analysis Journal Article In: Computers in Human Behavior, vol. 56, pp. 267–280, 2016. @article{Scott2016, Individuals' Social Networking Site (SNS) profiles are central to online impression formation. Distinct profile elements (e.g., Profile Picture) experimentally manipulated in isolation can alter perception of profile owners, but it is not known which elements are focused on and attributed most importance when profiles are viewed naturally. The current study recorded the eye movement behaviour of 70 participants who viewed experimenter-generated Facebook timelines of male and female targets carefully controlled for content. Participants were instructed to process the targets either as potential friends or as potential employees. Target timelines were delineated into Regions of Interest (RoIs) prior to data collection. We found pronounced effects of target gender, viewer motivation and interactions between these factors on processing. Global processing patterns differed based on whether a 'social' or a 'professional' viewing motivation was used. Both patterns were distinct from the 'F'-shaped patterns observed in previous research. When viewing potential employees, viewers focused on the text content of timelines, and when viewing potential friends, image content was more important. Viewing patterns provide insight into the characteristics and abilities of targets most valued by viewers with distinct motivations. These results can inform future research, and allow new perspectives on previous findings. |
Sergei L. Shishkin; Yuri O. Nuzhdin; Evgeny P. Svirin; Alexander G. Trofimov; Anastasia A. Fedorova; Bogdan L. Kozyrskiy; Boris M. Velichkovsky EEG negativity in fixations used for gaze-based control: Toward converting intentions into actions with an eye-brain-computer interface Journal Article In: Frontiers in Neuroscience, vol. 10, pp. 528, 2016. @article{Shishkin2016, We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms length segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage LDA classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development is successful, such a multimodal interface may improve the fluency of interaction and could become the basis for a new input device for paralyzed and healthy users, the EBCI “Wish Mouse”. |
Tarkeshwar Singh; Christopher M. Perry; Troy M. Herter A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment Journal Article In: Journal of NeuroEngineering and Rehabilitation, vol. 13, pp. 1–17, 2016. @article{Singh2016, BACKGROUND: Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS: Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS: The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth. |
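The paper's geometric correction for vergence in the transverse plane is not reproduced here; the sketch below only shows the general velocity-threshold logic used to separate fixations, smooth pursuits, and saccades once angular kinematics are available. The threshold values and function name are illustrative assumptions, not the thresholds derived in the paper.

```python
import numpy as np

def classify_gaze_events(x_deg, y_deg, t_s, sacc_thresh=100.0, pursuit_thresh=20.0):
    """Label each gaze sample as 'fixation', 'pursuit', or 'saccade' from angular
    position (degrees) and timestamps (seconds), using plain velocity thresholds
    in deg/s. Thresholds here are illustrative defaults only."""
    vx = np.gradient(x_deg, t_s)
    vy = np.gradient(y_deg, t_s)
    speed = np.hypot(vx, vy)                      # angular speed per sample
    return np.where(speed >= sacc_thresh, 'saccade',
                    np.where(speed >= pursuit_thresh, 'pursuit', 'fixation'))

# toy usage: 1 s of 500 Hz data with a synthetic saccade in the middle
t = np.arange(0, 1, 0.002)
x = np.where(t < 0.5, 0.0, 8.0) + 0.01 * np.random.randn(t.size)
y = np.zeros_like(t)
print(classify_gaze_events(x, y, t)[245:255])
```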
Mathew Stange; Amanda Barry; Jolene Smyth; Kristen Olson Effects of smiley face scales on visual processing of satisfaction questions in web surveys Journal Article In: Social Science Computer Review, vol. 36, no. 6, pp. 756–766, 2016. @article{Stange2016, Web surveys permit researchers to use graphic or symbolic elements alongside the text of response options to help respondents process the categories. Smiley faces are one example used to communicate positive and negative domains. How respondents visually process these smiley faces, including whether they detract from the question's text, is understudied. We report the results of two eye-tracking experiments in which satisfaction questions were asked with and without smiley faces. Respondents to the questions with smiley faces spent less time reading the question stem and response option text than respondents to the questions without smiley faces, but the response distributions did not differ by version. We also find support that lower literacy respondents rely more on the smiley faces than higher literacy respondents. |
John Sustersic; Brad Wyble; Siddharth Advani; Vijaykrishnan Narayanan Towards a unified multiresolution vision model for autonomous ground robots Journal Article In: Robotics and Autonomous Systems, vol. 75, pp. 221–232, 2016. @article{Sustersic2016, While remotely operated unmanned vehicles are increasingly a part of everyday life, truly autonomous robots capable of independent operation in dynamic environments have yet to be realized-particularly in the case of ground robots required to interact with humans and their environment. We present a unified multiresolution vision model for this application designed to provide the wide field of view required to maintain situational awareness and sufficient visual acuity to recognize elements of the environment while permitting feasible implementations in real-time vision applications. The model features a kind of color-constant processing through single-opponent color channels and contrast invariant oriented edge detection using a novel implementation of the Combination of Receptive Fields model. The model provides color and edge-based salience assessment, as well as a compressed color image representation suitable for subsequent object identification. We show that bottom-up visual saliency computed using this model is competitive with the current state-of-the-art while allowing computation in a compressed domain and mimicking the human visual system with nearly half (45%) of computational effort focused within the fovea. This method reduces storage requirement of the image pyramid to less than 5% of the full image, and computation in this domain reduces model complexity in terms of both computational costs and memory requirements accordingly. We also quantitatively evaluate the model for its application domain by using it with a camera/lens system with a 185° field of view capturing 3.5M pixel color images by using a tuned salience model to predict human fixations. |
Vijay Vitthal Thitme; Akanksha Varghese Image retrieval using vector of locally aggregated descriptors Journal Article In: International Journal of Advance Research in Computer Science and Management Studies, vol. 4, no. 2, pp. 97–104, 2016. @article{Thitme2016, Partial-duplicate image retrieval is a powerful and important task in real-world applications such as landmark search, copyright protection, and fake image identification. In Internet applications, users continuously upload images that may be partial duplicates of one another, for example on social sites such as Orkut and Facebook. A partial image is a segment of a whole image, and the transformations it may undergo include changes in scale, resolution, illumination, rotation, and viewpoint; the practical value of handling such cases motivated this study. Object-based image retrieval methods generally use the whole image as the query and are often compared with text retrieval systems through the bag-of-visual-words (BOV) model. Because images typically contain considerable noise, and because BOV discards spatial information, this approach scales poorly to large image datasets and may fail to retrieve the most relevant images. State-of-the-art retrieval methods represent an image with a high-dimensional vector of visual words obtained by quantizing local features, such as Scale Invariant Feature Transform descriptors, solely in descriptor space. In the presented approach, local features are quantized to visual words first in descriptor space and then in orientation space, and a Local Self-Similarity Descriptor (LSSD) is used to capture the internal geometric layout of locally self-similar regions near interest points. |
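For readers unfamiliar with the VLAD representation named in the title, the following is a minimal sketch of VLAD aggregation over a k-means codebook, assuming precomputed local descriptors. It is a textbook-style illustration, not the authors' system (which additionally quantizes in orientation space and uses the LSSD).

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad(descriptors, kmeans):
    """Vector of Locally Aggregated Descriptors for one image.
    descriptors: (n, d) local features (e.g., SIFT); kmeans: fitted codebook."""
    centers = kmeans.cluster_centers_              # (k, d)
    assignments = kmeans.predict(descriptors)      # nearest visual word per descriptor
    v = np.zeros_like(centers)
    for i, c in enumerate(centers):
        members = descriptors[assignments == i]
        if len(members):
            v[i] = (members - c).sum(axis=0)       # accumulate residuals per word
    v = np.sign(v) * np.sqrt(np.abs(v))            # signed square-root (power) normalisation
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)         # L2 normalisation

# toy usage: 64-dim descriptors, 16 visual words -> 1024-dim VLAD vector
rng = np.random.default_rng(0)
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(rng.standard_normal((1000, 64)))
print(vlad(rng.standard_normal((200, 64)), codebook).shape)
```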
Margarita Vinnikov; Robert S. Allison; Suzette Fernandes Impact of depth of field simulation on visual fatigue: Who are impacted? and how? Journal Article In: International Journal of Human-Computer Studies, vol. 91, pp. 37–51, 2016. @article{Vinnikov2016, While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all. |
Andrej Vlasenko; Tadas Limba; Mindaugas Kiškis; Gintarė Gulevičiūtė Research on human emotion while playing a computer game using pupil recognition technology Journal Article In: TEM Journal, vol. 5, no. 4, pp. 417–423, 2016. @article{Vlasenko2016, The article presents the results of an experiment in which participants played an online game (poker) while a video camera recorded the diameters of the players' eye pupils. Pupil-diameter data were extracted from these recordings with the aid of a computer program, and diagrams of the diameter changes in the players' pupils were created for different game situations. The study was conducted in a real-life setting, with the players playing online poker. The results of the study point to a connection between changes in the players' psycho-emotional state and changes in their pupil diameters; emotional state is thus a critical factor affecting the operation of such pupil-based systems. |
Xi Wang; Bin Cai; Yang Cao; Chen Zhou; Le Yang; Runzhong Liu; Xiaojing Long; Weicai Wang; Dingguo Gao; Baicheng Bao Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study Journal Article In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 150, no. 4, pp. 601–610, 2016. @article{Wang2016b, Introduction: Currently, few methods are available to measure orthodontic treatment need and treatment outcome from the lay perspective. The objective of this study was to explore the function of an eye-tracking method to evaluate orthodontic treatment need and treatment outcome from the lay perspective as a novel and objective way when compared with traditional assessments. Methods: The scanpaths of 88 laypersons observing the repose and smiling photographs of normal subjects and pretreatment and posttreatment malocclusion patients were recorded by an eye-tracking device. The total fixation time and the first fixation time on the areas of interest (eyes, nose, and mouth) for each group of faces were compared and analyzed using mixed-effects linear regression and a support vector machine. The aesthetic component of the Index of Orthodontic Treatment Need was used to categorize treatment need and outcome levels to determine the accuracy of the support vector machine in identifying these variables. Results: Significant deviations in the scanpaths of laypersons viewing pretreatment smiling faces were noted, with less fixation time (P <0.05) and later attention capture (P <0.05) on the eyes, and more fixation time (P <0.05) and earlier attention capture (P <0.05) on the mouth than for the scanpaths of laypersons viewing normal smiling subjects. The same results were obtained when comparing posttreatment smiling patients, with less fixation time (P <0.05) and later attention capture on the eyes (P <0.05), and more fixation time (P <0.05) and earlier attention capture on the mouth (P <0.05). The pretreatment repose faces exhibited an earlier attention capture on the mouth than did the normal subjects (P <0.05) and posttreatment patients (P <0.05). Linear support vector machine classification showed accuracies of 97.2% and 93.4% in distinguishing pretreatment patients from normal subjects (treatment need), and pretreatment patients from posttreatment patients (treatment outcome), respectively. Conclusions: The eye-tracking device was able to objectively quantify the effect of malocclusion on facial perception and the impact of orthodontic treatment on malocclusion from the lay perspective. The support vector machine for classification of selected features achieved high accuracy of judging treatment need and treatment outcome. This approach may represent a new method for objectively evaluating orthodontic treatment need and treatment outcome from the perspective of laypersons. |
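A minimal sketch of the kind of linear SVM classification reported above, assuming per-viewer fixation-time and time-to-first-fixation features on the eye, nose, and mouth areas of interest; the data are synthetic placeholders, and the feature set and labels are assumptions rather than the study's actual variables.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# synthetic stand-in: 88 viewers x 6 eye-movement features (total fixation time and
# time-to-first-fixation on eyes, nose, mouth), binary label such as
# "pretreatment patient" vs. "normal subject"
rng = np.random.default_rng(0)
X = rng.random((88, 6))
y = rng.integers(0, 2, 88)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy estimate
```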
Matthew B. Winn Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants Journal Article In: Trends in Hearing, vol. 20, 2016. @article{Winn2016, People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. |
Ziad M. Hafed; Katarina Stingl; Karl Ulrich Bartz-Schmidt; Florian Gekeler; Eberhart Zrenner Oculomotor behavior of blind patients seeing with a subretinal visual implant Journal Article In: Vision Research, vol. 118, pp. 119–131, 2016. @article{Hafed2016, Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ~1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life. |
Lynn Huestegge; Anne Böckler Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016. @article{Huestegge2016, Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards. |
Yu-Cin Jian; Chao-Jung Wu In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016. @article{Jian2016a, Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and administering reading tests. Participants read two diagrams depicting how a flushing system works, with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed a less complicated concept, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is independent of that conveyed via text in some areas. |
Ioanna Katidioti; Jelmer P. Borst; Douwe J. Bierens de Haan; Tamara Pepping; Marieke K. Vugt; Niels A. Taatgen Interrupted by your pupil: An interruption management system based on pupil dilation Journal Article In: International Journal of Human-Computer Interaction, vol. 32, no. 10, pp. 791–801, 2016. @article{Katidioti2016a, Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: Interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent—it does not require a task analysis—it can be broadly applied. |
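The published IMS is not specified in code here; the sketch below shows one plausible way to flag a low-workload moment from a running pupil-diameter trace by comparing the most recent window against the user's earlier baseline. The window length, sampling rate, and threshold are illustrative assumptions, not the system's actual criteria.

```python
import numpy as np

def is_low_workload(pupil_trace, fs=60, window_s=2.0, z_thresh=-0.5):
    """Return True if the most recent window of pupil diameter is small relative
    to this user's running baseline (a crude workload proxy). Values illustrative."""
    n = int(window_s * fs)
    samples = np.asarray(pupil_trace, dtype=float)
    if samples.size < 2 * n:
        return False                                 # not enough data yet
    baseline, recent = samples[:-n], samples[-n:]
    z = (recent.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return z < z_thresh                              # small pupil ~ low workload

# usage inside an acquisition loop (pseudo-real-time):
# if is_low_workload(samples_so_far):
#     deliver_interruption()    # hypothetical callback in the hosting application
```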
Ellen M. Kok; Halszka Jarodzka; Anique B. H. Bruin; Hussain A. N. BinAmir; Simon G. F. Robben; Jeroen J. G. Merriënboer Systematic viewing in radiology: Seeing more, missing less? Journal Article In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016. @article{Kok2016, To prevent radiologists from overlooking lesions, radiology textbooks recommend "systematic viewing," a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo a training in either systematic, full-coverage (without being systematic) or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage |
Oleg V. Komogortsev; Alexey Karpov Oculomotor plant characteristics: The effects of environment and stimulus Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016. @article{Komogortsev2016, This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al. [1]. |
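To make the precision and sampling-rate tiers concrete, the sketch below shows one simple way to degrade a high-quality recording to a lower tier by downsampling and adding Gaussian positional noise. This is an assumption-laden approximation (spatial precision is usually quantified as sample-to-sample RMS rather than as a noise standard deviation), not the authors' degradation procedure.

```python
import numpy as np

def degrade_recording(x_deg, y_deg, t_s, fs_orig=1000, fs_target=250,
                      precision_deg=0.21, seed=0):
    """Crudely simulate a lower-tier eye tracker from a high-quality recording:
    downsample from fs_orig to fs_target and add Gaussian positional noise at the
    target spatial-precision tier. Values and method are illustrative only."""
    step = int(round(fs_orig / fs_target))
    x_d, y_d, t_d = x_deg[::step], y_deg[::step], t_s[::step]
    rng = np.random.default_rng(seed)
    return (x_d + rng.normal(0, precision_deg, x_d.size),
            y_d + rng.normal(0, precision_deg, y_d.size),
            t_d)
```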
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark Eye see what you are saying: Testing conversational influences on the information gleaned from home-loan disclosure forms Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016. @article{LeBoeuf2016, The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 was a laboratory simulation that recreated in the laboratory the effects that previous literature suggests are likely happening in the field, namely, that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers get and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered. |
Tsu-Chiang Lei; Shih-Chieh Wu; Chi-Wen Chao; Su-Hsin Lee Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps Journal Article In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016. @article{Lei2016, With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace. |
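The spatial-autocorrelation analysis mentioned above can be illustrated with a global Moran's I computed over fixation locations. In this sketch, inverse-distance weights and fixation duration as the attribute are assumptions for illustration, not the study's exact specification.

```python
import numpy as np
from scipy.spatial.distance import cdist

def morans_i(coords, values):
    """Global Moran's I of an attribute (e.g., fixation duration) observed at point
    locations (e.g., fixation x/y), using inverse-distance spatial weights.
    Values near +1 indicate spatial clustering of similar attribute values."""
    d = cdist(coords, coords)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-9), 0.0)   # zero diagonal, inverse-distance weights
    z = values - values.mean()
    num = (w * np.outer(z, z)).sum()
    return (len(values) / w.sum()) * num / (z @ z)

# toy usage: 50 fixations with positions (deg) and durations (ms)
rng = np.random.default_rng(0)
pts = rng.random((50, 2)) * 20
dur = rng.random(50) * 400
print(morans_i(pts, dur))
```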
Qian Li; Zhuowei Joy Huang; Kiel Christianson Visual attention toward tourism photographs with text: An eye-tracking study Journal Article In: Tourism Management, vol. 54, pp. 243–258, 2016. @article{Li2016b, This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language. |
Joan López-Moliner; Eli Brenner Flexible timing of eye movements when catching a ball Journal Article In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016. @article{LopezMoliner2016, In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task. |
Bob McMurray; Ashley Farris-Trimble; Michael Seedorff; Hannah Rigler The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users: Evidence from eye-tracking Journal Article In: Ear & Hearing, vol. 37, no. 1, pp. e37–e51, 2016. @article{McMurray2016, OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. 
CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions. |
Zhongling Pi; Jianzhong Hong Learning process and learning outcomes of video podcasts including the instructor and PPT slides: A Chinese case Journal Article In: Innovations in Education and Teaching International, vol. 53, no. 2, pp. 135–144, 2016. @article{Pi2016, Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presenting mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire and prior knowledge test. The learning process was investigated by eye-tracking and the learning outcome by a learning test. The results revealed that the participants using the video podcast with both the instructor and PPT slides gained the best learning outcomes. It was noted that they allocated much more visual attention to the instructor than to the PPT slides. It was additionally found that 22 min was the time at which the participants reached the peak of mental fatigue. The results of our study imply that the use of educational technology is culture-bound. |
Alessandro Piras; Ivan M. Lanzoni; Milena Raffi; Michela Persiani; Salvatore Squatrito The within-task criterion to determine successful and unsuccessful table tennis players Journal Article In: International Journal of Sports Science & Coaching, vol. 11, no. 4, pp. 523–531, 2016. @article{Piras2016, The aim of this study was to examine the differences in visual search behaviour between a group of expert-level and one of novice table tennis players, to determine the temporal and spatial aspects of gaze orientation associated with correct responses. Expert players were classified as successful or unsuccessful depending on their performance in a video-based test of anticipation skill involving two kinds of stroke techniques: forehand top spin and backhand drive. Eye movements were recorded binocularly with a video-based eye tracking system. Successful experts were more effective than novices and unsuccessful experts in accurately anticipating both type and direction of stroke, showing fewer fixations of longer duration. Participants fixated mainly on arm area during forehand top spin, and on hand–racket and trunk areas during backhand drive. This study can help to develop interventions that facilitate the acquisition of anticipatory skills by improving visual search strategies. |
Hosam Al-Samarraie; Samer Muthana Sarsam; Hans Guesgen Predicting user preferences of environment design: A perceptual mechanism of user interface customisation Journal Article In: Behaviour & Information Technology, vol. 35, no. 8, pp. 644–653, 2016. @article{AlSamarraie2016, It is a well-known fact that users vary in their preferences and needs. Therefore, it is very crucial to provide the customisation or personalisation for users in certain usage conditions that are more associated with their preferences. With the current limitation in adopting perceptual processing into user interface personalisation, we introduced the possibility of inferring interface design preferences from the user's eye-movement behaviour. We firstly captured the user's preferences of graphic design elements using an eye-tracker. Then we diagnosed these preferences towards the region of interests to build a prediction model for interface customisation. The prediction models from eye-movement behaviour showed a high potential for predicting users' preferences of interface design based on the paralleled relation between their fixation and saccadic movement. This mechanism provides a novel way of user interface design customisation and opens the door for new research in the areas of human–computer interaction and decision-making. |
Joseph E. Barton; Anindo Roy; John D. Sorkin; Mark W. Rogers; Richard F. Macko An engineering model of human balance control—Part I: Biomechanical model Journal Article In: Journal of Biomechanical Engineering, vol. 138, no. 1, pp. 1–11, 2016. @article{Barton2016, We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS) and an older high fall risk (HFR) subject before the latter participated in a balance training intervention, and in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system, and into system adaptations to changes in these elements' performance and capabilities. |
Yvonne Behnke How textbook design may influence learning with geography textbooks Journal Article In: Nordidactica – Journal of Humanities and Social Science Education, vol. 1, pp. 38–62, 2016. @article{Behnke2016, This paper investigates how textbook design may influence students' visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was utilised to precisely monitor students' eye movements while observing geography textbook spreads. In an exploratory study utilising random sampling, the eye movements of 20 students (secondary school students 15–17 years of age and university students 20–24 years of age) were recorded. The research entities were double-page spreads of current German geography textbooks covering an identical topic, taken from five separate textbooks. A two-stage test was developed. Each participant was given the task of first looking at the entire textbook spread to determine what was being explained on the pages. In the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results may verify textbook design as one crucial factor for successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to educational psychology insights and findings from visual communication and textbook analysis. |
Palash Bera; Louis Philippe Sirois Displaying background maps in business intelligence dashboards Journal Article In: Iranian Journal of Psychiatry, vol. 18, no. 5, pp. 58–65, 2016. @article{Bera2016, Business data in geographic maps, called data maps, can be displayed via business intelligence dashboards. An important emerging feature is the use of background maps that overlap with existing data maps. Here, the authors examine the usefulness of background maps in dashboards and investigate how much cognitive effort users put in when they use dashboards with background maps as compared to dashboards without them. To test the extent of cognitive effort, the authors conducted an eye-tracking study in which users performed a decision-making task with maps in dashboards. In a separate study, users were asked directly about the mental effort required to perform tasks with the dashboards. Both studies identified that when users use background maps, they required less cognitive effort than users who use dashboards in which the information on the background map is represented in another form, such as a bar chart. |
Raymond Bertram; Johanna K. Kaakinen; Frank Bensch; Laura Helle; Eila Lantto; Pekka Niemi; Nina Lundbom Eye movements of radiologists reflect expertise in CT study interpretation: A potential tool to measure resident development Journal Article In: Radiology, vol. 281, no. 3, pp. 805–815, 2016. @article{Bertram2016, PURPOSE: To establish potential markers of visual expertise in eye movement (EM) patterns of early residents, advanced residents, and specialists who interpret abdominal computed tomography (CT) studies. MATERIALS AND METHODS: The institutional review board approved use of anonymized CT studies as research materials and to obtain anonymized eye-tracking data from volunteers. Participants gave written informed consent. Early residents (n = 15), advanced residents (n = 14), and specialists (n = 12) viewed 26 abdominal CT studies as a sequence of images at either 3 or 5 frames per second while EMs were recorded. Data were analyzed by using linear mixed-effects models. RESULTS: Early residents' detection rate decreased with working hours (odds ratio, 0.81; 95% confidence interval [CI]: 0.73, 0.91; P = .001). They detected fewer of the low visual contrast (but not of the high visual contrast) lesions (45% [13 of 29]) than did specialists (62% [18 of 29]) (odds ratio, 0.39; 95% CI: 0.25, 0.61; P < .001) or advanced residents (56% [16 of 29]) (odds ratio, 0.55; 95% CI: 0.33, 0.93; P = .024). Specialists and advanced residents had longer fixation durations at 5 than at 3 frames per second (specialists: b = .01; 95% CI: .004, .026; P = .008; advanced residents: b = .04; 95% CI: .03, .05; P < .001). In the presence of lesions, saccade lengths of specialists shortened more than those of advanced (b = .02; 95% CI: .007, .04; P = .003) and of early residents (b = .02; 95% CI: .008, .04; P = .003). Irrespective of expertise, high detection rate correlated with greater reduction of saccade length in the presence of lesions (b = -.10; 95% CI: -.16, -.04; P = .002) and greater increase at higher presentation speed (b = .11; 95% CI: .04, .17; P = .001). CONCLUSION: Expertise in CT reading is characterized by greater adaptivity in EM patterns in response to the demands of the task and environment. |
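The Bertram et al. abstract above reports linear mixed-effects analyses of fixation behaviour across expertise groups and presentation speeds. As a loose, hedged illustration only (not the study's dataset, variables, or exact model specification), a fit of fixation duration on group and speed with a random intercept per reader could be sketched in Python with statsmodels as follows; the synthetic data, effect sizes, and formula are assumptions.

```python
# Illustrative sketch of a linear mixed-effects analysis of fixation durations;
# the data are synthetic and the formula is an assumption, not the study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for group, base in [("early", 220.0), ("advanced", 240.0), ("specialist", 250.0)]:
    for reader in range(10):                       # hypothetical readers per group
        reader_offset = rng.normal(0, 10)          # random intercept per reader
        for speed in (3, 5):                       # presentation speed (frames/s)
            for _ in range(20):                    # hypothetical fixations
                dur = base + reader_offset + 8.0 * (speed == 5) + rng.normal(0, 25)
                rows.append({"group": group, "reader": f"{group}_{reader}",
                             "speed": speed, "fix_dur_ms": dur})
data = pd.DataFrame(rows)

# Fixation duration ~ group x speed, with a random intercept for each reader.
model = smf.mixedlm("fix_dur_ms ~ C(group) * C(speed)", data, groups=data["reader"])
print(model.fit().summary())
```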
Federica Bianchi; Sébastien Santurette; Dorothea Wendt; Torsten Dau Pitch discrimination in musicians and non-musicians: Effects of harmonic resolvability and processing effort Journal Article In: JARO - Journal of the Association for Research in Otolaryngology, vol. 17, no. 1, pp. 69–79, 2016. @article{Bianchi2016, Musicians typically show enhanced pitch discrimination abilities compared to non-musicians. The present study investigated this perceptual enhancement behaviorally and objectively for resolved and unresolved complex tones to clarify whether the enhanced performance in musicians can be ascribed to increased peripheral frequency selectivity and/or to a different processing effort in performing the task. In a first experiment, pitch discrimination thresholds were obtained for harmonic complex tones with fundamental frequencies (F0s) between 100 and 500 Hz, filtered in either a low- or a high-frequency region, leading to variations in the resolvability of audible harmonics. The results showed that pitch discrimination performance in musicians was enhanced for resolved and unresolved complexes to a similar extent. Additionally, the harmonics became resolved at a similar F0 in musicians and non-musicians, suggesting similar peripheral frequency selectivity in the two groups of listeners. In a follow-up experiment, listeners' pupil dilations were measured as an indicator of the required effort in performing the same pitch discrimination task for conditions of varying resolvability and task difficulty. Pupillometry responses indicated a lower processing effort in the musicians versus the non-musicians, although the processing demand imposed by the pitch discrimination task was individually adjusted according to the behavioral thresholds. Overall, these findings indicate that the enhanced pitch discrimination abilities in musicians are unlikely to be related to higher peripheral frequency selectivity and may suggest an enhanced pitch representation at more central stages of the auditory system in musically trained listeners. |
Indu P. Bodala; Junhua Li; Nitish V. Thakor; Hasan Al-Nashash EEG and eye tracking demonstrate vigilance enhancement with challenge integration Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 273, 2016. @article{Bodala2016, Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility to enhance vigilance or sustained attention using 'challenge integration', a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression on the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement, along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge integrated phase (RT mean = 3.65 ± 1.4 secs) compared to the initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 secs). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration. |
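Several of the EEG indices above are spectral band-power measures. A minimal sketch (not the authors' pipeline) of frontal midline theta power and the frontal-theta to parietal-alpha ratio, computed with Welch's method on single channels, might look like this; the channel choice, band edges, and synthetic signals are assumptions.

```python
# Minimal sketch of band-power engagement indices; channels, bands, and data are
# illustrative assumptions, not the authors' recording or analysis pipeline.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
frontal = rng.standard_normal(fs * 60)   # stand-in for 60 s of a frontal channel (e.g., Fz)
parietal = rng.standard_normal(fs * 60)  # stand-in for a parietal channel (e.g., Pz)

def band_power(x, fs, lo, hi):
    """Integrated power spectral density between lo and hi Hz (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

theta_frontal = band_power(frontal, fs, 4, 8)     # frontal midline theta
alpha_parietal = band_power(parietal, fs, 8, 13)  # parietal alpha
print(f"frontal theta / parietal alpha ratio: {theta_frontal / alpha_parietal:.3f}")
```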
Tom Bullock; James C. Elliott; John T. Serences; Barry Giesbrecht Acute exercise modulates feature-selective responses in human cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 605–618, 2016. @article{Bullock2016, An organism's current behavioral state influences ongoing brain activity. Nonhuman mammalian and invertebrate brains exhibit large increases in the gain of feature-selective neural responses in sensory cortex during locomotion, suggesting that the visual system becomes more sensitive when actively exploring the environment. This raises the possibility that human vision is also more sensitive during active movement. To investigate this possibility, we used an inverted encoding model technique to estimate feature-selective neural response profiles from EEG data acquired from participants performing an orientation discrimination task. Participants (n = 18) fixated at the center of a flickering (15 Hz) circular grating presented at one of nine different orientations and monitored for a brief shift in orientation that occurred on every trial. Participants completed the task while seated on a stationary exercise bike at rest and during low- and high-intensity cycling. We found evidence for inverted-U effects, such that the peak of the reconstructed feature-selective tuning profiles was highest during low-intensity exercise compared with those estimated during rest and high-intensity exercise. When modeled, these effects were driven by changes in the gain of the tuning curve and in the profile bandwidth during low-intensity exercise relative to rest. Thus, despite profound differences in visual pathways across species, these data show that sensitivity in human visual cortex is also enhanced during locomotive behavior. Our results reveal the nature of exercise-induced gain on feature-selective coding in human sensory cortex and provide valuable evidence linking the neural mechanisms of behavior state across species. |
Rong-Fuh Day; Peng-Yeng Yin; Yu-Chi Wang; Ching-Hui Chao A new hybrid multi-start tabu search for finding hidden purchase decision strategies in WWW based on eye-movements Journal Article In: Applied Soft Computing, vol. 48, pp. 217–229, 2016. @article{Day2016, It is known that the decision strategy performed by a subject is implicit in his/her external behaviors. Eye movement is one of the observable external behaviors when humans are performing decision activities. Due to the dramatic increase of e-commerce volume on the WWW, it is beneficial for companies to know where customers focus their attention on the webpage when deciding to make a purchase. This study proposes a new hybrid multi-start tabu search (HMTS) algorithm for finding the hidden decision strategies by clustering the eye-movement data obtained during the decision activities. The HMTS uses adaptive memory and employs both multi-start and local search strategies. An empirical dataset containing 294 eye-fixation sequences and a synthetic dataset consisting of 360 sequences were used in the experiments. We conducted the Sign test, and the results show that the proposed HMTS method significantly outperforms its variants which implement just one strategy, and the HMTS algorithm shows an improvement over genetic algorithm, particle swarm optimization, and K-means, with a level of significance α = 0.01. The scalability and robustness of the HMTS is validated through a series of statistical tests. |
Jelmer P. De Vries; Britta K. Ischebeck; L. P. Voogt; Malou Janssen; Maarten A. Frens; Gert Jan Kleinrensink; Josef N. Geest Cervico-ocular reflex is increased in people with nonspecific neck pain Journal Article In: Physical Therapy, vol. 96, no. 8, pp. 1190–1195, 2016. @article{DeVries2016, Background: Neck pain is a widespread complaint. People experiencing neck pain often present an altered timing in contraction of cervical muscles. This altered afferent information elicits the cervico-ocular reflex (COR), which stabilizes the eye in response to trunk-to-head movements. The vestibulo-ocular reflex (VOR), elicited by the vestibulum, is thought to be unaffected by afferent information from the cervical spine. Objective: The aim of the study was to measure the COR and VOR in people with nonspecific neck pain. Design: This study utilized a cross-sectional design in accordance with the STROBE statement. Methods: An infrared eye-tracking device was used to record the COR and the VOR while the participant was sitting on a rotating chair in darkness. Eye velocity was calculated by taking the derivative of the horizontal eye position. Parametric statistics were performed. Results: The mean COR gain in the control group (n=30) was 0.26 (SD=0.15) compared with 0.38 (SD=0.16) in the nonspecific neck pain group (n=37). Analyses of covariance were performed to analyze differences in COR and VOR gains, with age and sex as covariates. Analyses of covariance showed a significantly increased COR in participants with neck pain. The VOR did not differ significantly between the control group, with a mean VOR gain of 0.67 (SD=0.17), and the nonspecific neck pain group, with a mean VOR gain of 0.66 (SD=0.22). Limitations: Measuring eye movements while the participant is sitting on a rotating chair in complete darkness is technically complicated. Conclusions: This study suggests that people with nonspecific neck pain have an increased COR, an objective, nonvoluntary eye reflex, and an unaltered VOR. It also shows that an increased COR is not restricted to patients with traumatic neck pain. |
Tao Deng; Kaifu Yang; Yongjie Li; Hongmei Yan Where does the driver look? Top-down-based saliency detection in a traffic driving environment Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2051–2062, 2016. @article{Deng2016a, A traffic driving environment is a complex and dynamically changing scene. When driving, drivers always allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and prior areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could be useful in supporting autonomous driving, traffic sign detection, driving training, car collision warning, and other tasks. Recently, advances in visual attention models have provided substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, there exists no computational framework that can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, viewing 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We proposed that the vanishing point of the road can be regarded as valuable top-down guidance in a traffic saliency detection model. Subsequently, we built a framework for a traffic saliency detection model that combines classic bottom-up saliency with this top-down guidance. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment. |
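The vanishing-point idea above lends itself to a simple fusion rule: weight a bottom-up saliency map by a prior centred on the vanishing point. The sketch below is only an illustration of that idea under assumed parameters (Gaussian prior width, fusion weight, a random stand-in for the bottom-up map), not the authors' model.

```python
# Illustration only: combining a bottom-up saliency map with a Gaussian prior
# centred on the road's vanishing point. Parameters and inputs are assumptions.
import numpy as np

def vanishing_point_prior(shape, vp_xy, sigma=60.0):
    """2-D Gaussian weighting map centred on the vanishing point (x, y) in pixels."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - vp_xy[0]) ** 2 + (ys - vp_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def combined_saliency(bottom_up, vp_xy, alpha=0.5):
    """Convex combination of normalised bottom-up saliency and the top-down prior."""
    bu = bottom_up / (bottom_up.max() + 1e-9)
    prior = vanishing_point_prior(bottom_up.shape, vp_xy)
    return alpha * bu + (1.0 - alpha) * prior

# Usage with a random stand-in for a bottom-up map and an assumed vanishing point.
rng = np.random.default_rng(1)
saliency = combined_saliency(rng.random((240, 320)), vp_xy=(160, 90))
print(saliency.shape, float(saliency.max()))
```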
Leandro L. Di Stasi; Michael B. McCamy; Susana Martinez-Conde; Ellis Gayles; Chad Hoare; Michael Foster; Andrés Catena; Stephen L. Macknik Effects of long and short simulated flights on the saccadic eye movement velocity of aviators Journal Article In: Physiology and Behavior, vol. 153, pp. 91–96, 2016. @article{DiStasi2016, Aircrew fatigue is a major contributor to operational errors in civil and military aviation. Objective detection of pilot fatigue is thus critical to prevent aviation catastrophes. Previous work has linked fatigue to changes in oculomotor dynamics, but few studies have studied this relationship in critical safety environments. Here we measured the eye movements of US Marine Corps combat helicopter pilots before and after simulated flight missions of different durations. We found a decrease in saccadic velocities after long simulated flights compared to short simulated flights. These results suggest that saccadic velocity could serve as a biomarker of aviator fatigue. |
Carolina Diaz-Piedra; Héctor Rieiro; Juan Suárez; Francisco Rios-Tejada; Andrés Catena; Leandro Luigi Di Stasi Fatigue in the military: Towards a fatigue detection test based on the saccadic velocity Journal Article In: Physiological Measurement, vol. 37, no. 9, pp. N62–N75, 2016. @article{DiazPiedra2016, Fatigue is a major contributing factor to operational errors. Therefore, the validation of objective and sensitive indices to detect fatigue is critical to prevent accidents and catastrophes. Whereas tests based on saccadic velocity (SV) have become popular, their sensitivity in the military is not yet clear, since most research has been conducted in laboratory settings using instruments that are not fully validated. Field studies remain scarce, especially in extreme conditions such as real flights. Here, we investigated the effects of real, long flights on SV. We assessed five newly commissioned military helicopter pilots during their navigation training. Pilots flew Sikorsky S-76C helicopters, under instrumental flight rules, for more than 2 h (ca. 150 min). Eye movements were recorded before and after the flight with an eye tracker using a standard guided-saccade task. We also collected subjective ratings of fatigue. SV significantly decreased from the Pre-Flight to the Post-Flight session in all pilots by around 3% (range: 1-4%). Subjective ratings showed the same tendency. We provide conclusive evidence about the high sensitivity of fatigue tests based on SV in real flight conditions, even in small samples. This result might offer military medical departments a valid and useful biomarker of warfighter physiological state. |
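Tests of the kind described above and in the preceding entry reduce to comparing peak saccadic velocity before and after a session. A minimal, illustrative calculation (not the authors' software) from gaze-position samples of a guided-saccade task could look like the following; the synthetic saccade profiles and timing are assumptions.

```python
# Minimal sketch: peak saccadic velocity from position samples, and the
# Pre- vs Post-Flight percentage change used as a fatigue index.
# The synthetic saccade profiles below are illustrative assumptions.
import numpy as np

def peak_velocity(positions_deg, t_s):
    """Peak angular velocity (deg/s) of one saccade from position samples."""
    velocity = np.gradient(positions_deg, t_s)
    return float(np.max(np.abs(velocity)))

t = np.linspace(0.0, 0.05, 51)                        # 50 ms of samples
pre_saccade = 10 / (1 + np.exp(-(t - 0.025) * 300))   # synthetic 10-deg saccade
post_saccade = 10 / (1 + np.exp(-(t - 0.025) * 250))  # slightly slower profile

pre_sv = peak_velocity(pre_saccade, t)
post_sv = peak_velocity(post_saccade, t)
print(f"Pre {pre_sv:.0f} deg/s, Post {post_sv:.0f} deg/s, "
      f"change {100 * (post_sv - pre_sv) / pre_sv:+.1f}%")
```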
Benjamin Gagl Blue hypertext is a good design decision: No perceptual disadvantage in reading and successful highlighting of relevant information Journal Article In: PeerJ, vol. 4, pp. 1–11, 2016. @article{Gagl2016, BACKGROUND: Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics has been questioned in applied research, and empirical tests have produced inconclusive results. The ability to recognize blue text in foveal and parafoveal vision was identified as potentially constrained by the low number of foveally centered blue light sensitive retinal cells. The present study investigates whether foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading. METHODS: A silent-sentence reading study with simultaneous eye movement recordings and the invisible boundary paradigm, which allows the investigation of foveal and parafoveal perceptibility separately, was realized (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented in either black or blue and either underlined or normal. RESULTS: No effect of color and underlining, but a preview benefit, could be detected for first pass reading measures. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, reduced fixation times for non-highlighted target words (black, not underlined) in contrast to highlighted target words (blue, underlined, or both). DISCUSSION: The present pattern reflects no detectable perceptual disadvantage of hyperlink stimuli but increased attraction of attention resources, after first pass reading, through highlighting. Blue or underlined text allows readers to easily perceive hypertext, and at the same time readers re-visited highlighted words for longer. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use. |
2015 |
Ishan Nigam; Mayank Vatsa; Richa Singh Ocular biometrics: A survey of modalities and fusion approaches Journal Article In: Information Fusion, vol. 26, pp. 1–35, 2015. @article{Nigam2015, Biometrics, an integral component of Identity Science, is widely used in several large-scale, country-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in the Unique Identification Authority of India's Aadhaar Program and the United Arab Emirates' border security programs, whereas periocular recognition is used to augment the performance of face or iris when only the ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. |
Kristien Ooms; Arzu Coltekin; Philippe De Maeyer; Lien Dupont; Sara I. Fabrikant; Annelies Incoul; Matthias Kuhn; Hendrik Slabbinck; Pieter Vansteenkiste; Lise Van der Haegen Combining user logging with eye tracking for interactive and dynamic applications Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 977–993, 2015. @article{Ooms2015, User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions cause changes in the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze is at a particular screen position (gaze position) over a period of time, the information contained in this particular position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants. Existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) with cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields. |
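The core of the referencing step described above is a mapping from screen pixels to geographic coordinates, given the map extent reconstructed from the pan/zoom log at the moment of each fixation. A bare-bones sketch of that conversion, assuming a simple unprojected map and an example extent, might look like this; the function and variable names are illustrative, not the authors' code.

```python
# Illustrative sketch only: convert a gaze position in screen pixels to
# geographic coordinates, given the map extent visible at that moment
# (as reconstructed from logged pan/zoom actions). Names and values are assumptions.
def screen_to_geo(gaze_px, screen_size_px, map_extent):
    """map_extent = (lon_min, lat_min, lon_max, lat_max) currently on screen."""
    x, y = gaze_px
    w, h = screen_size_px
    lon_min, lat_min, lon_max, lat_max = map_extent
    lon = lon_min + (x / w) * (lon_max - lon_min)
    lat = lat_max - (y / h) * (lat_max - lat_min)  # screen y grows downward
    return lon, lat

# Gaze at the screen centre while the map shows (roughly) Belgium.
print(screen_to_geo((960, 540), (1920, 1080), (2.5, 49.5, 6.4, 51.5)))
```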
Hani Alers; Judith A. Redi; Ingrid Heynderickx Quantifying the importance of preserving video quality in visually important regions at the expense of background content Journal Article In: Signal Processing: Image Communication, vol. 32, pp. 69–80, 2015. @article{Alers2015, Advances in digital technology have allowed us to embed significant processing power in everyday video consumption devices. At the same time, we have placed high demands on the video content itself by continuing to increase spatial resolution while trying to limit the allocated file size and bandwidth as much as possible. The result is typically a trade-off between perceptual quality and fulfillment of technological limitations. To bring this trade-off to its optimum, it is necessary to understand better how people perceive video quality. In this work, we particularly focus on understanding how the spatial location of compression artifacts impacts visual quality perception, specifically in relation to visual attention. In particular, we investigate how changing the quality of the region of interest of a video affects its overall perceived quality, and we quantify the importance of the visual quality of the region of interest to the overall quality judgment. A three-stage experiment was conducted in which viewers were shown videos with different quality levels in different parts of the scene. By asking them to score the overall quality, we found that the quality of the region of interest has 10 times more impact than the quality of the rest of the scene. These results are in line with similar effects observed in still images, yet in videos the relevance of the visual quality of the region of interest is twice as high as in images. The latter finding is directly relevant for the design of more accurate objective quality metrics for videos that are based on the estimation of local distortion visibility. |
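Read naively, the headline result above suggests a pooled quality score in which the region of interest is weighted roughly ten times as heavily as the background. The toy function below illustrates that reading only; the 10:1 weight and the example scores are assumptions, not the authors' published metric.

```python
# Toy illustration of ROI-weighted quality pooling; the weight is an assumption
# based on the reported 10:1 impact, not the authors' metric.
def pooled_quality(roi_quality, background_quality, roi_weight=10.0):
    """Weighted mean of ROI and background quality scores (e.g., on a 1-5 scale)."""
    return (roi_weight * roi_quality + background_quality) / (roi_weight + 1.0)

print(round(pooled_quality(roi_quality=4.5, background_quality=2.0), 2))  # ~4.27
```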
Benedetta Cesqui; Maura Mezzetti; Francesco Lacquaniti; Andrea D'Avella Gaze behavior in one-handed catching and its relation with interceptive performance: What the eyes can't tell Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0119445, 2015. @article{Cesqui2015, In ball sports, it is usually acknowledged that expert athletes track the ball more accurately than novices. However, there is also evidence that keeping the eyes on the ball is not always necessary for interception. Here we aimed at gaining new insights on the extent to which ocular pursuit performance is related to catching performance. To this end, we analyzed eye and head movements of nine subjects catching a ball projected by an actuated launching apparatus. Four different ball flight durations and two different ball arrival heights were tested and the quality of ocular pursuit was characterized by means of several timing and accuracy parameters. Catching performance differed across subjects and depended on ball flight characteristics. All subjects showed a similar sequence of eye movement events and a similar modulation of the timing of these events in relation to the characteristics of the ball trajectory. On a trial-by-trial basis there was a significant relationship only between pursuit duration and catching performance, confirming that keeping the eyes on the ball longer increases catching success probability. Ocular pursuit parameters values and their dependence on flight conditions as well as the eye and head contributions to gaze shift differed across subjects. However, the observed average individual ocular behavior and the eye-head coordination patterns were not directly related to the individual catching performance. These results suggest that several oculomotor strategies may be used to gather information on ball motion, and that factors unrelated to eye movements may underlie the observed differences in interceptive performance. |
Leandro Luigi Di Stasi; Michael B. McCamy; Sebastian Pannasch; Rebekka Renner; Andrés Catena; José J. Cañas; Boris M. Velichkovsky; Susana Martinez-Conde Effects of driving time on microsaccadic dynamics Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 599–605, 2015. @article{DiStasi2015, Driver fatigue is a common cause of car accidents. Thus, the objective detection of driver fatigue is a first step toward the effective management of fatigue-related traffic accidents. Here, we investigated the effects of driving time, a common inducer of driver fatigue, on the dynamics of fixational eye movements. Participants drove for 2 h in a virtual driving environment while we recorded their eye movements. Microsaccade velocities decreased with driving time, suggesting a potential effect of fatigue on microsaccades during driving. |
Ivan Diaz; Sabine Schmidt; Francis R. Verdun; François O. Bochud Eye-tracking of nodule detection in lung CT volumetric data Journal Article In: Medical Physics, vol. 42, no. 6, pp. 2925–2932, 2015. @article{Diaz2015, PURPOSE: Signal detection on 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets. They gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, showing a method to visually display the behavior of observers as the search is made as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed in lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that when a decision was made, a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to read the images at three fixed speeds of image scrolling and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will with no time or gaze limits. In both static-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second. The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, the performance of the observers was not affected by the contrast or experience of the observer. However, the naïve observers exhibited a different pattern of scrolling than the radiologists, which included a tendency toward a higher number of direction changes and number of slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The speed information that was measured will be useful in the development of 3D model observers, especially anthropomorphic model observers which try to mimic human behavior. |
Hayward J. Godwin; Simon P. Liversedge; Julie A. Kirkby; Michael Boardman; Katherine Cornes; Nick Donnelly The influence of experience upon information-sampling and decision-making behaviour during risk assessment in military personnel Journal Article In: Visual Cognition, vol. 23, no. 4, pp. 415–431, 2015. @article{Godwin2015, We examined the influence of experience upon information-sampling and decision-making behaviour in a group of military personnel as they conducted risk assessments of scenes photographed from patrol routes during the recent conflict in Afghanistan. Their risk assessment was based on an evaluation of Potential Risk Indicators (PRIs) during examination of each scene. We found that both participant groups were equally likely to fixate PRIs, demonstrating similarity in the selectivity of their information-sampling. However, the inexperienced participants made more revisits to PRIs, had longer response times, and were more likely to decide that the scenes contained a high level of risk. Together, these results suggest that experience primarily modulates decision-making behaviour. We discuss potential routes to train personnel to conduct risk assessments in a more similar manner to experienced participants. |
Seung Kweon Hong Comparison of vertical and horizontal eye movement times in the selection of visual targets by an eye input device Journal Article In: Journal of the Ergonomics Society of Korea, vol. 34, no. 1, pp. 19–27, 2015. @article{Hong2015, Objective: The aim of this study is to investigate how well eye movement times in visual target selection tasks by an eye input device follow the typical Fitts' Law and to compare vertical and horizontal eye movement times. Background: Typically, manual pointing provides an excellent fit to the Fitts' Law model. However, when an eye input device is used for visual target selection tasks, there has been some debate on whether eye movement times can be described by Fitts' Law. More empirical studies are needed to resolve this debate, and this study is one such empirical study. On the other hand, many researchers have reported that the direction of movement in typical manual pointing has some effect on movement times. The other question in this study is whether the direction of eye movement also affects eye movement times. Method: Cursor movement times in visual target selection tasks with both input devices were collected. The layout of visual targets was set up in two ways: cursor starting positions for vertical movements were at the top of the monitor with visual targets located at the bottom, while cursor starting positions for horizontal movements were at the right of the monitor with visual targets located at the left. Results: Although eye movement time was described by the Fitts' Law, the error rate was high and the correlation was relatively low (R2 = 0.80 for horizontal movements and R2 = 0.66 for vertical movements), compared to those of manual movement. According to the movement direction, manual movement times were not significantly different, but eye movement times were significantly different. Conclusion: Eye movement times in the selection of visual targets by an eye-gaze input device could be described and predicted by Fitts' Law. Eye movement times were significantly different according to the direction of eye movement. Application: The results of this study might help to understand eye movement times in visual target selection tasks by eye input devices. |
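For reference, the Fitts' Law fit discussed above regresses movement time on the index of difficulty. The short sketch below shows that calculation with made-up distances, target widths, and movement times; the coefficients and R-squared it prints are illustrative, not the study's values.

```python
# Worked sketch of a Fitts' Law fit (Shannon formulation, MT = a + b * ID);
# the data points are made up for illustration, not taken from the study.
import numpy as np

def index_of_difficulty(distance, width):
    """ID = log2(D / W + 1), in bits."""
    return np.log2(distance / width + 1.0)

# Hypothetical (ID, movement time) pairs for eye-controlled target selection.
ids = index_of_difficulty(np.array([200.0, 400.0, 600.0, 800.0]), 50.0)
mt_ms = np.array([380.0, 430.0, 470.0, 520.0])

# Least-squares fit of MT = a + b * ID, plus the R^2 of the fit.
b, a = np.polyfit(ids, mt_ms, 1)
pred = a + b * ids
r2 = 1.0 - np.sum((mt_ms - pred) ** 2) / np.sum((mt_ms - mt_ms.mean()) ** 2)
print(f"MT = {a:.1f} + {b:.1f} * ID (ms), R^2 = {r2:.2f}")
```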
Oleg V. Komogortsev; Alexey Karpov; Corey D. Holland Attack of mechanical replicas: Liveness detection with eye movements Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 716–725, 2015. @article{Komogortsev2015, This paper investigates liveness detection techniques in the area of eye movement biometrics. We investigate a specific scenario, in which an impostor constructs an artificial replica of the human eye. Two attack scenarios are considered: 1) the impostor does not have access to the biometric templates representing authentic users, and instead utilizes average anatomical values from the relevant literature and 2) the impostor gains access to the complete biometric database, and is able to employ exact anatomical values for each individual. In this paper, liveness detection is performed at the feature and match score levels for several existing forms of eye movement biometric, based on different aspects of the human visual system. The ability of each technique to differentiate between live and artificial recordings is measured by its corresponding false spoof acceptance rate, false live rejection rate, and classification rate. The results suggest that eye movement biometrics are highly resistant to circumvention by artificial recordings when liveness detection is performed at the feature level. Unfortunately, not all techniques provide feature vectors that are suitable for liveness detection at the feature level. At the match score level, the accuracy of liveness detection depends highly on the biometric techniques employed. |
Moritz Köster; Marco Rüth; Kai Christoph Hamborg; Kai Kaspar Effects of personalized banner ads on visual attention and recognition memory Journal Article In: Applied Cognitive Psychology, vol. 29, no. 2, pp. 181–192, 2015. @article{Koester2015, Internet companies collect a vast amount of data about their users in order to personalize banner ads. However, very little is known about the effects of personalized banners on attention and memory. In the present study, 48 subjects performed search tasks on web pages containing personalized or nonpersonalized banners. Overt attention was measured by an eye-tracker, and recognition of banner and task-relevant information was subsequently examined. The entropy of fixations served as a measure for the overall exploration of web pages. Results confirm the hypotheses that personalization enhances recognition for the content of banners while the effect on attention was weaker and partially nonsignificant. In contrast, overall exploration of web pages and recognition of task-relevant information was not influenced. The temporal course of fixations revealed that visual exploration of banners typically proceeds from the picture to the logo and finally to the slogan. We discuss theoretical and practical implications. |
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh Detection of fixations and smooth pursuit movements in high-speed eye-tracking data Journal Article In: Biomedical Signal Processing and Control, vol. 18, pp. 145–152, 2015. @article{Larsson2015, A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in [11]), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in [11] (average Cohen's kappa 0.16), when compared to the experts' annotations. |
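The comparison above is scored with Cohen's kappa between the algorithm's sample-by-sample event labels and expert annotations. A minimal sketch of that measure, with made-up label sequences, follows; it illustrates the metric only, not the authors' evaluation code.

```python
# Minimal sketch of Cohen's kappa over sample-by-sample event labels
# (FIX = fixation, SP = smooth pursuit); the sequences are made up.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two label sequences, corrected for chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)

algo   = ["FIX", "FIX", "SP", "SP", "SP", "FIX", "FIX", "SP"]
expert = ["FIX", "FIX", "SP", "FIX", "SP", "FIX", "SP", "SP"]
print(f"Cohen's kappa = {cohens_kappa(algo, expert):.2f}")  # 0.50 for this toy data
```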
Minyoung Lee; Randolph Blake; Sujin Kim; Chai-Youn Kim Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 27, pp. 8493–8498, 2015. @article{Lee2015b, Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry. |
Yan Luo; Ming Jiang; Yongkang Wong; Qi Zhao Multi-camera saliency Journal Article In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2057–2070, 2015. @article{Luo2015a, A significant body of literature on saliency modeling predicts where humans look in a single image or video. Besides the scientific goal of understanding how information is fused from multiple visual sources to identify regions of interest in a holistic manner, there are tremendous engineering applications of multi-camera saliency due to the widespread use of cameras. This paper proposes a principled framework to smoothly integrate visual information from multiple views into a global scene map, and to employ a saliency algorithm incorporating high-level features to identify the most important regions by fusing visual information. The proposed method has the following key distinguishing features compared with its counterparts: (1) the proposed saliency detection is global (salient regions from one local view may not be important in a global context), (2) it does not require special ways for camera deployment or overlapping fields of view, and (3) the key saliency algorithm is effective in highlighting interesting object regions even though no single detector is used. Experiments on several data sets confirm the effectiveness of the proposed principled framework. |
Andrew K. Mackenzie; Julie M. Harris Eye movements and hazard perception in active and passive driving Journal Article In: Visual Cognition, vol. 23, no. 6, pp. 736–757, 2015. @article{Mackenzie2015, Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attention systems than simply viewing driving movies does. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment. |
Ioannis Rigas; Oleg V. Komogortsev Eye movement-driven defense against iris print-attacks Journal Article In: Pattern Recognition Letters, vol. 68, no. 2, pp. 316–326, 2015. @article{Rigas2015, This paper proposes a methodology for the utilization of eye movement cues for the task of iris print-attack detection. We investigate the fundamental distortions arising in the eye movement signal during an iris print-attack, due to the structural and functional discrepancies between a paper-printed iris and a natural eye iris. The performed experiments involve the execution of practical print-attacks against an eye-tracking device, and the collection of the resulting eye movement signals. The developed methodology for the detection of print-attack signal distortions is evaluated on a large database collected from 200 subjects, which contains both the real (‘live') eye movement signals and the print-attack (‘spoof') eye movement signals. The suggested methodology provides a sufficiently high detection performance, with a maximum average classification rate (ACR) of 96.5% and a minimum equal error rate (EER) of 3.4%. Due to the hardware similarities between eye tracking and iris capturing systems, we hypothesize that the proposed methodology can be adopted into the existing iris recognition systems with minimal cost. To further support this hypothesis we experimentally investigate the robustness of our scheme by simulating conditions of reduced sampling resolution (temporal and spatial), and of limited duration of the eye movement signals. |
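The equal error rate quoted above is the operating point at which the false live rejection rate equals the false spoof acceptance rate. A small, hedged sketch of that computation over hypothetical detection scores (higher meaning "more likely a live signal") follows; the score distributions are assumptions, not the paper's data.

```python
# Illustrative EER computation over hypothetical liveness scores; higher scores
# mean "more likely a live eye-movement signal". Data are synthetic assumptions.
import numpy as np

def equal_error_rate(live_scores, spoof_scores):
    """Return the error rate at the threshold where FRR and FAR are closest."""
    thresholds = np.sort(np.concatenate([live_scores, spoof_scores]))
    best_gap, eer = 1.0, 1.0
    for t in thresholds:
        frr = np.mean(live_scores < t)    # live signals wrongly rejected
        far = np.mean(spoof_scores >= t)  # print-attacks wrongly accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer

rng = np.random.default_rng(2)
live = rng.normal(1.0, 0.4, 200)
spoof = rng.normal(-1.0, 0.4, 200)
print(f"EER = {100 * equal_error_rate(live, spoof):.1f}%")
```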
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 167–183, 2015. @article{Ryu2015, The main purpose of this study was to examine the relative roles of central and peripheral vision when performing a dynamic forced-choice task. We did so by using a gaze-contingent display with different levels of blur in an effort to (a) test the limit of visual resolution necessary for information pick-up in each of these sectors of the visual field and, as a result, to (b) develop a more natural means of gaze-contingent display using a blurred central or peripheral visual field. The expert advantage seen in usual whole field visual presentation persists despite surprisingly high levels of impairment to central or peripheral vision. Consistent with the well-established central/peripheral differences in sensitivity to spatial frequency, high levels of blur did not prevent better-than-chance performance by skilled players when peripheral information was blurred, but they did affect response accuracy when impairing central vision. Blur was found to always alter the pattern of eye movements before it decreased task performance. The evidence accumulated across the 4 experiments provides new insights into several key questions surrounding the role that different sectors of the visual field play in expertise in dynamic, time-constrained tasks. |
Chengyao Shen; Xun Huang; Qi Zhao Predicting eye fixations in webpages with multi-scale features and high-level representations from deep networks Journal Article In: IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2084–2093, 2015. @article{Shen2015, In recent decades, webpages have become an increasingly important visual information source. Compared with natural images, webpages are different in many ways. For example, webpages are usually rich in semantically meaningful visual media (text, pictures, logos, and animations), which make the direct application of some traditional low-level saliency models ineffective. Besides, distinct web-viewing patterns such as top-left bias and banner blindness suggest different ways for predicting attention deployment on a webpage. In this study, we utilize a new scheme of low-level feature extraction pipeline and combine it with high-level representations from deep neural networks. The proposed model is evaluated on a newly published webpage saliency dataset with three popular evaluation metrics. Results show that our model outperforms other existing saliency models by a large margin and that both low- and high-level features play an important role in predicting fixations on webpages. |
Lisa M. Soederberg Miller; Diana L. Cassady; Elizabeth A. Applegate; Laurel A. Beckett; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood Relationships among food label use, motivation, and dietary quality Journal Article In: Nutrients, vol. 7, no. 2, pp. 1068–1080, 2015. @article{SoederbergMiller2015, Nutrition information on packaged foods supplies information that aids consumers in meeting the recommendations put forth in the US Dietary Guidelines for Americans such as reducing intake of solid fats and added sugars. It is important to understand how food label use is related to dietary intake. However, prior work is based only on self-reported use of food labels, making it unclear if subjective assessments are biased toward motivational influences. We assessed food label use using both self-reported and objective measures, the stage of change, and dietary quality in a sample of 392 stratified by income. Self-reported food label use was assessed using a questionnaire. Objective use was assessed using a mock shopping task in which participants viewed food labels and decided which foods to purchase. Eye movements were monitored to assess attention to nutrition information on the food labels. Individuals paid attention to nutrition information when selecting foods to buy. Self-reported and objective measures of label use showed some overlap with each other (r=0.29, p<0.001), and both predicted dietary quality (p<0.001 for both). The stage of change diminished the predictive power of subjective (p<0.09), but not objective (p<0.01), food label use. These data show both self-reported and objective measures of food label use are positively associated with dietary quality. However, self-reported measures appear to capture a greater motivational component of food label use than do more objective measures. |
Lisa M. Soederberg Miller; Diana L. Cassady; Laurel A. Beckett; Elizabeth A. Applegate; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood Misunderstanding of front-of-package nutrition information on us food products Journal Article In: PLoS ONE, vol. 10, no. 4, pp. e0125306, 2015. @article{SoederbergMiller2015a, Front-of-package nutrition symbols (FOPs) are presumably readily noticeable and require minimal prior nutrition knowledge to use. Although there is evidence to support this notion, few studies have focused on Facts Up Front type symbols which are used in the US. Participants with varying levels of prior knowledge were asked to view two products and decide which was more healthful. FOPs on packages were manipulated so that one product was more healthful, allowing us to assess accuracy. Attention to nutrition information was assessed via eye tracking to determine what if any FOP information was used to make their decisions. Results showed that accuracy was below chance on half of the comparisons despite consulting FOPs. Negative correlations between attention to calories, fat, and sodium and accuracy indicated that consumers over-relied on these nutrients. Although relatively little attention was allocated to fiber and sugar, associations between attention and accuracy were positive. Attention to vitamin D showed no association to accuracy, indicating confusion surrounding what constitutes a meaningful change across products. Greater nutrition knowledge was associated with greater accuracy, even when less attention was paid. Individuals, particularly those with less knowledge, are misled by calorie, sodium, and fat information on FOPs. |
Miguel A. Vadillo; Chris N. H. Street; Tom Beesley; David R. Shanks A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 1365–1376, 2015. @article{Vadillo2015, Poor calibration and inaccurate drift correction can pose severe problems for eye-tracking experiments requiring high levels of accuracy and precision. We describe an algorithm for the offline correction of eye-tracking data. The algorithm conducts a linear transformation of the coordinates of fixations that minimizes the distance between each fixation and its closest stimulus. A simple implementation in MATLAB is also presented. We explore the performance of the correction algorithm under several conditions using simulated and real data, and show that it is particularly likely to improve data quality when many fixations are included in the fitting process. |
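The recalibration algorithm above is described compactly: fit a linear transformation of fixation coordinates that minimises the distance between each fixation and its closest stimulus. A short Python sketch in that spirit is given below (the published implementation is in MATLAB); the nearest-neighbour matching loop, iteration count, and example data are assumptions rather than a reproduction of the authors' code.

```python
# Compact sketch of offline recalibration by a best-fitting linear (affine)
# transformation; details such as the iteration scheme are assumptions.
import numpy as np

def recalibrate(fixations, stimuli, n_iter=10):
    """fixations: (N, 2) fixation coordinates; stimuli: (M, 2) stimulus coordinates."""
    fx = fixations.astype(float)
    for _ in range(n_iter):
        # Assign each fixation to its closest stimulus.
        d = np.linalg.norm(fx[:, None, :] - stimuli[None, :, :], axis=2)
        targets = stimuli[np.argmin(d, axis=1)]
        # Least-squares fit of [x, y, 1] -> target coordinates, then apply it.
        design = np.hstack([fx, np.ones((len(fx), 1))])
        coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)
        fx = design @ coeffs
    return fx

# Usage: drifted and scaled fixations around three known stimulus locations.
stimuli = np.array([[100.0, 100.0], [300.0, 100.0], [200.0, 300.0]])
fixations = stimuli * 1.05 + np.array([12.0, -8.0])
print(recalibrate(fixations, stimuli).round(1))
```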
Juan D. Velásquez; Pablo Loyola; Gustavo Martinez; Kristofher Munoz; Pedro Maldanado; Andrés Couve; Pedro E. Maldonado Combining eye tracking and pupillary dilation analysis to identify website key objects Journal Article In: Neurocomputing, vol. 168, pp. 179–189, 2015. @article{Velasquez2015, Identifying the salient zones of Web interfaces, namely the Website Key Objects, is an essential part of the personalization process that current Web systems perform to increase user engagement. While several techniques have been proposed, most of them are focused on the use of Web usage logs. Only recently has the use of data from users' biological responses emerged as an alternative to enrich the analysis. In this work, a model is proposed to identify Website Key Objects that not only takes into account visual gaze activity, such as fixation time, but also the impact of pupil dilation. Our main hypothesis is that there is a strong relationship between the pupil dynamics and the Web user preferences on a Web page. An empirical study was conducted on a real Website, from which the navigational activity of 23 subjects was captured using an eye tracking device. Results showed that the inclusion of pupillary activity, although not conclusively, allows us to extract a more robust Web Object classification, achieving a 14% increment in overall accuracy. |
Jian Wang; Ryoichi Ohtsuka; Kimihiro Yamanaka Relation between mental workload and visual information processing Journal Article In: Procedia Manufacturing, vol. 3, pp. 5308–5312, 2015. @article{Wang2015, The aim of this study is to clarify the relation between mental workload and the function of visual information processing. To examine the mental workload (MWL) relative to the size of the useful field of view (UFOV), an experiment was conducted with 12 participants (ages 21–23). In the primary task, participants responded to visual markers appearing in a computer display. The UFOV and the results of the secondary task for MWL were measured. In the MWL task, participants solved numerical operations designed to increase MWL. The experimental conditions in this task were divided into three categories (Repeat Aloud, Addition, and No Task), where No Task meant no mental task was given. MWL was changed in a stepwise manner. The quantitative assessment confirmed that the UFOV narrows with the increase in the MWL. |
Sheng-Ming Wang Integrating service design and eye tracking insight for designing smart TV user interfaces Journal Article In: International Journal of Advanced Computer Science and Applications, vol. 6, no. 7, pp. 163–171, 2015. @article{Wang2015a, This research proposes a process that integrates the service design method and eye tracking insight for designing a Smart TV user interface. The service design method, which is utilized to lead the combination of quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including effectiveness and efficiency testing data obtained from eye tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results of this research demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces. It can also help to relate the design of Smart TV user interfaces to users' behaviors and needs, and thereby improve the affordance of the design. Future studies may analyse the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface. |
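One ingredient above, the analytic hierarchy process, turns pairwise comparisons of alternatives into priority weights via the principal eigenvector of the comparison matrix. The sketch below shows only that generic step for three hypothetical Smart TV mockups; the comparison values are assumptions, not the paper's data.

```python
# Generic AHP priority-weight step; the pairwise comparison values for three
# hypothetical Smart TV mockups are assumptions, not the study's judgments.
import numpy as np

comparisons = np.array([
    [1.0, 3.0, 5.0],   # mockup A compared with A, B, C
    [1/3, 1.0, 2.0],   # mockup B
    [1/5, 1/2, 1.0],   # mockup C
])

eigvals, eigvecs = np.linalg.eig(comparisons)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print(weights.round(3))  # relative priority of the three design mockups
```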
Ying Yan; Huazhi Yuan; Xiaofei Wang; Ting Xu; Haoxue Liu Study on driver's fixation variation at entrance and inside sections of tunnel on highway Journal Article In: Advances in Mechanical Engineering, vol. 7, no. 1, pp. 1–10, 2015. @article{Yan2015d, How drivers' visual characteristics change as they pass through tunnels was studied. Firstly, nine drivers' eye movement data at tunnel entrance and inside sections were recorded using eye movement tracking devices. Then the transfer function of a BP artificial neural network was employed to simulate and analyze the variation of the drivers' eye movement parameters. Relation models between the eye movement parameters and the distance along the tunnels were established. In the analysis of the fixation point distributions, the coordinates of fixations in the visual field were clustered using dynamic cluster theory to obtain different visual areas of fixation. The results indicated that, at 100 meters before the entrance, the average fixation duration increased, but the number of fixations decreased substantially. After 100 meters into the tunnel, the fixation duration started to decrease first and then increased. The variations of drivers' fixation points demonstrated a pattern of change from scatter, to focus, and back to scatter again. While driving through the tunnels, drivers presented long fixation durations; nearly 61.5% of subjects' average fixation durations increased significantly. In the tunnel, drivers paid attention to seven fixation areas, from the car dashboard area to the road area in front of the car. |
Shu-Fei Yang An eye-tracking study of the Elaboration Likelihood Model in online shopping Journal Article In: Electronic Commerce Research and Applications, vol. 14, no. 4, pp. 233–240, 2015. @article{Yang2015a, This study uses eye-tracking to explore the Elaboration Likelihood Model (ELM) in online shopping. The results show that the peripheral cue did not have moderating effect on purchase intention, but had moderating effect on eye movement. Regarding purchase intention, the high elaboration had higher purchase intention than the low elaboration with a positive peripheral cue, but there was no difference in purchase intention between the high and low elaboration with a negative peripheral cue. Regarding eye movement, with a positive peripheral cue, the high elaboration group was observed to have longer fixation duration than the low elaboration group in two areas of interest (AOIs); however, with a negative peripheral cue, the low elaboration group had longer fixation on the whole page and two AOIs. In addition, the relationship between purchase intention and eye movement of the AOIs is more significant in the high elaboration group when given a negative peripheral cue and in the low elaboration group when given a positive peripheral cue. This study not only examines the postulates of the ELM, but also contributes to a better understanding of the cognitive processes of the ELM. These findings have practical implications for e-sellers to identify characteristics of consumers' elaboration in eye movement and designing customization and persuasive context for different elaboration groups in e-commerce. |