We have recently finished updating our database of EyeLink publications – there were more than 900 papers published in 2019 alone, and the database now contains well over 8000 publications in total. Each publication is checked individually to ensure that it contains data collected using an EyeLink eye tracker (rather than just referring to data collected with an EyeLink, as in a meta-analysis or review article) and that the research is published in a peer-reviewed journal.
Publications by Year
In a previous blog post I plotted the number of publications per year, and an updated version of that plot is included below:
Highly Cited EyeLink Publications
The earlier blog also listed the “top” journals for EyeLink publications – both with respect to the number of EyeLink articles and with respect to the journal’s impact factor. This year I thought it might be interesting to list some of the most highly cited articles in our database. Determining citation counts is a somewhat inexact science. There are three main sources of information on article citation counts – Web of Science, Scopus and Google Scholar. While the advantages and disadvantages of each of these sources are a topic of lively debate (Harzing has written extensively on this – see e.g. this blog), Google Scholar has the twin advantages of very comprehensive coverage and free accessibility.
The list below is a selection of 15 EyeLink articles, all of which have citation counts >500 according to Google Scholar. The list was generated by searching the top 20 journals by volume of EyeLink articles, and the top 10 journals by Impact Factor in our database. It is not intended to be exhaustive, and the articles are listed in no particular order. I think the list provides a fascinating illustration of the sheer breadth (and enormous impact) of the research that EyeLink eye trackers have been involved in.
Huang, Lijin; Wei, Weijie; Liu, Zhi; Zhang, Tianhong; Wang, Jijun; Xu, Lihua; Chen, Weiyu; Le Meur, Olivier
In: Pattern Recognition Letters, 138 , pp. 608–616, 2020.
Eye movement abnormalities are effective biomarkers that provide the possibility of distinguishing patients with schizophrenia from healthy controls. The existing methods for measuring eye movement abnormalities mostly focus on synchronic parameters, such as fixation duration and saccade amplitude, which can be directly obtained from eye movement data, but neglect more comprehensive features. In this paper, to better characterize eye-tracking dysfunction, we create a dataset containing 100 images with eye movement data of 40 patients and 30 healthy controls via a free-viewing task, and propose two types of features for effective schizophrenia recognition, i.e., hand-crafted discriminative eye movement features and model-metric-based features derived from computational models of fixation prediction and the metrics used to evaluate their prediction performance. Using the proposed features, two commonly used classifiers, support vector machine and random forest, have been trained for classification between patients and controls. Experimental results demonstrate the effectiveness of the proposed features for improving classification performance, and the potential of our method to serve as an alternative and promising approach for the computer-aided diagnosis of schizophrenia.
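To give a flavour of the classification approach the abstract describes, here is a deliberately simplified sketch. It is not the authors' pipeline: the feature values are invented, and a plain nearest-centroid rule stands in for the support vector machine and random forest classifiers the paper actually used.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical per-participant features: (mean fixation duration in ms,
# mean saccade amplitude in degrees). All values are invented for
# illustration; patients are simulated with longer fixation durations.
def simulate(n, fix_mu, sacc_mu):
    return [(random.gauss(fix_mu, 20), random.gauss(sacc_mu, 0.5)) for _ in range(n)]

patients = simulate(40, 320, 4.0)   # label 1
controls = simulate(30, 260, 5.0)   # label 0

def centroid(points):
    return (mean(p[0] for p in points), mean(p[1] for p in points))

c_pat, c_con = centroid(patients), centroid(controls)

def classify(p):
    # Assign each participant to the nearer group centroid (squared distance).
    d_pat = (p[0] - c_pat[0]) ** 2 + (p[1] - c_pat[1]) ** 2
    d_con = (p[0] - c_con[0]) ** 2 + (p[1] - c_con[1]) ** 2
    return 1 if d_pat < d_con else 0

preds = [classify(p) for p in patients] + [classify(p) for p in controls]
labels = [1] * 40 + [0] * 30
accuracy = sum(int(a == b) for a, b in zip(preds, labels)) / len(labels)
print(f"nearest-centroid accuracy: {accuracy:.2f}")
```

With simulated groups this well separated, even the toy rule classifies most participants correctly; the paper's contribution lies in the feature engineering, not the classifier.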
Sui, Xiao Yang; Liu, Hong Zhi; Rao, Li Lin
In: Cognition, 195 , pp. 1–11, 2020.
Risky decisions are ubiquitous in daily life and are central to human behavior, but little attention has been devoted to exploring whether risky choice can be influenced by gaze direction. In the current study, we used gaze-contingent manipulation to manipulate an individual's gaze while he/she decided between two risky options, and we examined whether risky decisions could be biased toward a randomly determined target. We found that participants' risky choices were biased toward a randomly determined target when they were manipulated to gaze longer at the target option (Study 1).
Becker, Stefanie I; Martin, Aimee; Hamblin-Frohman, Zachary
In: Visual Cognition, 27 (5-8), pp. 502–517, 2019.
It is well-known that visual attention can be tuned in a top-down controlled manner to various attributes. Amongst other search strategies, previous research has identified a feature search mode in which attention is tuned to the target feature (e.g., colour) vs. a singleton search mode, where all salient items can attract attention. A short review of the literature reveals that singleton search mode is not regularly applied in single-target search, but could play a role in two-target search. Here we critically tested whether results suggesting singleton search could alternatively be due to top-down tuning to different attributes of the targets (e.g., luminance). The results of the first experiment show a mixture of attentional tuning to the target colours (red, green), as well as luminance (darker), and residual singleton capture. A second experiment shows that such mixed results can be obtained in the standard paradigm, with only small changes to the stimuli. These results cannot be coherently described within a single mental representation, and are therefore difficult to reconcile with the notion of a target template. Non-representational theories such as feature map theories seem better equipped to explain mixed search results, which could be a decisive weakness of representational theories.
Blair, Christopher D; Ristic, Jelena
Attention combines similarly in covert and overt conditions
In: Vision, 3 , pp. 16, 2019.
Attention is classically classified according to mode of engagement into voluntary and reflexive, and type of operation into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting engaged by overlearned symbols and combined attention engaged by a combination of several modes of function. However, so far, combined effects were demonstrated in covert conditions only, and, thus, here we examined if attentional modes combined in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert, i.e., when participants responded manually and maintained central fixation, and overt cases, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt data, with the magnitudes of the combined effect larger than the magnitude of each mode alone as well as their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention's dynamic flexibility in facilitating human behavior.
Raffi, Milena; Piras, Alessandro; Persiani, Michela; Perazzolo, Monica; Squatrito, Salvatore
Angle of gaze and optic flow direction modulate body sway
In: Journal of Electromyography and Kinesiology, 35 , pp. 61–68, 2017.
Optic flow is a crucial signal in maintaining postural stability. We sought to investigate whether the activity of postural muscles and body sway was modulated by eye position during the view of radial optic flow stimuli. We manipulated the spatial distribution of dot speed and the fixation point position to simulate specific heading directions combined with different gaze positions. The experiments were performed using stabilometry and surface electromyography (EMG) on 24 right-handed young, healthy volunteers. Center of pressure (COP) signals were analyzed considering antero-posterior and medio-lateral oscillation, COP speed, COP area, and the prevalent direction of oscillation of body sway. We found a significant main effect of body side in all COP parameters, with the right body side showing greater oscillations. The different combinations of optic flow and eye position evoked a non-uniform direction of oscillations in females. The EMG analysis showed a significant main effect for muscle and body side. The results showed that the eye position modulated body sway without changing the activity of principal leg postural muscles, suggesting that the extraretinal input regarding the eye position is a crucial signal that needs to be integrated with perceptual optic flow processing in order to control body sway.
Khan, Aarlenne Zein; Blohm, Gunnar; Pisella, Laure; Munoz, Douglas P
In: European Journal of Neuroscience, 41 (12), pp. 1624–1634, 2015.
As we have limited processing abilities with respect to the plethora of visual information entering our brain, spatial selection mechanisms are crucial. These mechanisms result in both enhancing processing at a location of interest and in suppressing processing at other locations; together, they enable successful further processing of locations of interest. It has been suggested that saccade planning modulates these spatial selection mechanisms; however, the precise influence of saccades on the distribution of spatial resources underlying selection remains unclear. To this end, we compared discrimination performance at different locations (six) within a work space during different saccade tasks. We used visual discrimination performance as a behavioral measure of enhancement and suppression at the different locations. A total of 14 participants performed a dual discrimination/saccade countermanding task, which allowed us to specifically isolate the consequences of saccade execution. When a saccade was executed, discrimination performance at the cued location was never better than when fixation was maintained, suggesting that saccade execution did not enhance processing at a location more than knowing the likelihood of its appearance. However, discrimination was consistently lower at distractor (uncued) locations in all cases where a saccade was executed compared with when fixation was maintained. Based on these results, we suggest that saccade execution specifically suppresses distractor locations, whereas attention shifts (with or without an accompanying saccade) are involved in enhancing perceptual processing at the goal location.
Metzner, Paul; von der Malsburg, Titus; Vasishth, Shravan; Rösler, Frank
In: Journal of Cognitive Neuroscience, 27 (5), pp. 1017–1028, 2015.
Recent research has shown that brain potentials time-locked to fixations in natural reading can be similar to brain potentials recorded during rapid serial visual presentation (RSVP). We attempted two replications of Hagoort, Hald, Bastiaansen, and Petersson [Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. Integration of word meaning and world knowledge in language comprehension. Science, 304, 438-441, 2004] to determine whether this correspondence also holds for oscillatory brain responses. Hagoort et al. reported an N400 effect and synchronization in the theta and gamma range following world knowledge violations. Our first experiment (n = 32) used RSVP and replicated both the N400 effect in the ERPs and the power increase in the theta range in the time-frequency domain. In the second experiment (n = 49), participants read the same materials freely while their eye movements and their EEG were monitored. First fixation durations, gaze durations, and regression rates were increased, and the ERP showed an N400 effect. An analysis of time-frequency representations showed synchronization in the delta range (1-3 Hz) and desynchronization in the upper alpha range (11-13 Hz) but no theta or gamma effects. The results suggest that oscillatory EEG changes elicited by world knowledge violations are different in natural reading and RSVP. This may reflect differences in how representations are constructed and retrieved from memory in the two presentation modes.
Dillon, Brian W; Mishler, Alan; Sloggett, Shayne; Phillips, Colin
Contrasting interference profiles for agreement and anaphora: Experimental and modeling evidence
In: Journal of Memory and Language, 69 (2), pp. 85–103, 2013.
We investigated the relationship between linguistic representation and memory access by comparing the processing of two linguistic dependencies that require comprehenders to check that the subject of the current clause has the correct morphological features: subject–verb agreement and reflexive anaphors in English. In two eye-tracking experiments we examined the impact of structurally illicit noun phrases on the computation of reflexive and subject–verb agreement. Experiment 1 directly compared the two dependencies within participants. Results show a clear difference in the intrusion profile associated with each dependency: agreement resolution displays clear intrusion effects in comprehension (as found by Pearlmutter et al., 1999, Wagers et al., 2009), but reflexives show no such intrusion effect from illicit antecedents (Sturt, 2003, Xiang et al., 2009). Experiment 2 replicated the lack of intrusion for reflexives, confirming the reliability of the pattern and examining a wider range of feature combinations. In addition, we present modeling evidence that suggests that the reflexive results are best captured by a memory retrieval mechanism that uses primarily syntactic information to guide retrievals for the anaphor's antecedent, in contrast to the mixed morphological and syntactic cues used to resolve subject–verb agreement dependencies. Despite the fact that agreement and reflexive dependencies are subject to a similar morphological agreement constraint, in online processing comprehenders appear to implement this constraint in distinct ways for the two dependencies.
Valle, Araceli; Binder, Katherine S; Walsh, Caitlin B; Nemier, Carolyn; Bangs, Kathryn E
Eye movements, prosody, and word frequency among average- and high-skilled second-grade readers
In: School Psychology Review, 42 (October), pp. 171–190, 2012.
The present study explored how average- and high-skilled second-grade readers (as identified by their Woodcock-Johnson III Test of Academic Achievement Broad Reading scores) differed on behavioral measures of reading related to comprehension: eye movements during silent reading and prosody during oral reading. Results from silent reading implicate word processing efficiency: high-skilled readers had fewer fixations and intraword regressions, and shorter first fixation, gaze duration, and total word reading times. Their skipping and regression patterns during silent reading were representative of a more systematic approach to passage reading, suggesting that meta-cognitive or motivational factors may also differentiate the groups. Compared to high-skilled readers, average readers' oral reading was characterized by longer pauses, less differentiation across pause types, and more intrusions. Counter to prior research, aspects of prosody associated with expressivity favored average readers: they had a sharper pitch declination at the end of declarative sentences and used a wider range of pitch within sentences. High- and low-frequency target words yielded frequency effects during both silent and oral reading. Interactions with skill level on the oral reading task are discussed in terms of potential differences in strategic approaches to reading challenges.
Gagl, Benjamin; Hawelka, Stefan; Hutzler, Florian
In: Behavior Research Methods, 43 (4), pp. 1171–1181, 2011.
Cognitive effort is reflected in pupil dilation, but the assessment of pupil size is potentially susceptible to changes in gaze position. This study exemplarily used sentence reading as a stand-in for paradigms that assess pupil size in tasks during which changes in gaze position are unavoidable. The influence of gaze position on pupil size was first investigated by an artificial eye model with a fixed pupil size. Despite its fixed pupil size, the systematic measurements of the artificial eye model revealed substantial gaze-position-dependent changes in the measured pupil size. We evaluated two functions and showed that they can accurately capture and correct the gaze-dependent measurement error of pupil size recorded during a sentence-reading and an effortless z-string-scanning task. Implications for previous studies are discussed, and recommendations for future studies are provided.
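The calibrate-then-correct idea behind this kind of study can be sketched in a few lines. This is an illustration of the general approach only, not the specific correction functions Gagl and colleagues evaluated; the calibration positions, measured sizes, and the linear interpolation scheme are all invented for the example.

```python
import bisect

# Hypothetical artificial-eye calibration: pupil size (arbitrary units) measured
# at known horizontal gaze positions (pixels), with the artificial eye's true
# pupil size held fixed. The foreshortening-like dip away from centre is invented.
TRUE_SIZE = 1000.0
cal_x        = [0,     256,   512,    768,   1024]
cal_measured = [938.0, 972.0, 1000.0, 971.0, 936.0]

# Per-position correction factor: multiply a measurement by this to recover truth.
cal_factor = [TRUE_SIZE / m for m in cal_measured]

def correction_factor(x):
    """Linearly interpolate the calibration factors at gaze position x."""
    if x <= cal_x[0]:
        return cal_factor[0]
    if x >= cal_x[-1]:
        return cal_factor[-1]
    i = bisect.bisect_right(cal_x, x)
    x0, x1 = cal_x[i - 1], cal_x[i]
    f0, f1 = cal_factor[i - 1], cal_factor[i]
    return f0 + (f1 - f0) * (x - x0) / (x1 - x0)

def correct(measured, x):
    """Apply the gaze-position-dependent correction to a raw pupil sample."""
    return measured * correction_factor(x)

# A fixation far from screen centre: the raw value underestimates pupil size,
# and the calibration factor restores it.
print(round(correct(938.0, 0), 1))
```

The same logic extends to two dimensions by calibrating over a grid of gaze positions, which matters for paradigms like sentence reading where gaze sweeps across the screen.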
Filik, Ruth; Moxey, Linda M
The on-line processing of written irony
In: Cognition, 116 (3), pp. 421–436, 2010.
We report an eye-tracking study in which we investigate the on-line processing of written irony. Specifically, participants' eye movements were recorded while they read sentences which were either intended ironically, or non-ironically, and subsequent text which contained pronominal reference to the ironic (or non-ironic) phrase. Results showed longer reading times for ironic comments compared to a non-ironic baseline, suggesting that additional processing was required in ironic compared to non-ironic conditions. Reading times for subsequent pronominal reference indicated that for ironic materials, both the ironic and literal interpretations of the text were equally accessible during on-line language comprehension. This finding is most in line with predictions of the graded salience hypothesis, which, in conjunction with the retention hypothesis, states that readers represent both the literal and ironic interpretation of an ironic utterance.
Jacob, Michal; Hochstein, Shaul
In: Vision Research, 50 (1), pp. 107–117, 2010.
Target recognition stages were studied by exposing observers to varying controlled numbers of target fixations. The target, present in half the displays, consisted of two identical cards (Identity Search Task; Jacob & Hochstein, 2009). Following more fixations, targets are better recognized, indicated by increased Hit-rate and detectability (according to Unequal Variance Signal Detection Theory), decreased Response Time and growing confidence, reflecting current stage in recognition process. Thus, gathering information over a specific scene region results from a growing number of fixations on that particular region. We conclude that several fixations on a scene location are necessary for achieving recognition.
Nuthmann, Antje; Engbert, Ralf; Kliegl, Reinhold
The IOVP effect in mindless reading: Experiment and modeling
In: Vision Research, 47 (7), pp. 990–1002, 2007.
Fixation durations in reading are longer for within-word fixation positions close to word center than for positions near word boundaries. This counterintuitive result was termed the Inverted-Optimal Viewing Position (IOVP) effect. We proposed an explanation of the effect based on error-correction of mislocated fixations [Nuthmann, A., Engbert, R., & Kliegl, R. (2005). Mislocated fixations during reading and the inverted optimal viewing position effect. Vision Research, 45, 2201-2217], that suggests that the IOVP effect is not related to word processing. Here we demonstrate the existence of an IOVP effect in "mindless reading", a z-string scanning task. We compare the results from experimental data with results obtained from computer simulations of a simple model of the IOVP effect and discuss alternative accounts. We conclude that oculomotor errors, which often induce mislocalized fixations, represent the most important source of the IOVP effect.
Saint-Aubin, Jean; Tremblay, Sébastien; Jalbert, Annie
In: Experimental Psychology, 54 (4), pp. 264–272, 2007.
This research investigated the nature of encoding and its contribution to serial recall for visual-spatial information. In order to do so, we examined the relationship between fixation duration and recall performance. Using the dot task – a series of seven dots spatially distributed on a monitor screen is presented sequentially for immediate recall – performance and eye-tracking data were recorded during the presentation of the to-be-remembered items. When participants were free to move their eyes at their will, both fixation durations and probability of correct recall decreased as a function of serial position. Furthermore, imposing constant durations of fixation across all serial positions had a beneficial impact (though relatively small) on item but not order recall. Great care was taken to isolate the effect of fixation duration from that of presentation duration. Although eye movement at encoding contributes to immediate memory, it is not decisive in shaping serial recall performance. Our results also provide further evidence that the distinction between item and order information, well-established in the verbal domain, extends to visual-spatial information.
Tseng, Yuan-Chi; Li, Chiang-Shan Ray
In: Perception and Psychophysics, 66 (8), pp. 1363–1378, 2004.
Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search.
If you would like us to feature your EyeLink research, have ideas for posts, or have any questions about our hardware and software, please contact us. We are always happy to help. You can call us (+1-613-271-8686) or click the button below to email:
- Header Image by Hermann.