
We have recently finished updating our database of EyeLink publications – there were more than 900 papers published in 2019 alone, and the database now contains well over 8000 publications in total. Each publication is checked individually to ensure that it contains data collected using an EyeLink eye tracker (rather than just referring to data collected with an EyeLink, as in a meta-analysis or review article) and that the research is published in a peer-reviewed journal.
Publications by Year
In a previous blog I plotted the number of publications per year, and an updated version of that plot is included below:

Highly Cited EyeLink Publications
The earlier blog also listed the “top” journals for EyeLink publications – both with respect to the number of EyeLink articles and with respect to the journal’s impact factor. This year I thought it might be interesting to list some of the most highly cited articles in our database. Determining citation counts is a somewhat inexact science. There are three main sources of information on article citation counts – Web of Science, Scopus and Google Scholar. While the advantages and disadvantages of each of these sources are a topic of lively debate (Harzing has written extensively on this – see e.g. this blog), Google Scholar has the twin advantages of very comprehensive coverage and free accessibility.
The list below is a selection of 15 EyeLink articles, all of which have citation counts >500 according to Google Scholar. The list was generated by searching the top 20 journals by volume of EyeLink articles, and the top 10 journals by Impact Factor in our database. It is not intended to be exhaustive, and the articles are listed in no particular order. I think the list provides a fascinating illustration of the sheer breadth (and enormous impact) of the research that EyeLink eye trackers have been involved in.
Nakhla, Nardin; Korkian, Yavar; Krause, Matthew R.; Pack, Christopher C. Neural selectivity for visual motion in macaque area V3A Journal Article In: eNeuro, vol. 8, no. 1, pp. 1–14, 2021. The processing of visual motion is conducted by dedicated pathways in the primate brain. These pathways originate with populations of direction-selective neurons in the primary visual cortex, which projects to dorsal structures like the middle temporal (MT) and medial superior temporal (MST) areas. Anatomical and imaging studies have suggested that area V3A might also be specialized for motion processing, but there have been very few studies of single-neuron direction selectivity in this area. We have therefore performed electrophysiological recordings from V3A neurons in two macaque monkeys (one male and one female) and measured responses to a large battery of motion stimuli that includes translation motion, as well as more complex optic flow patterns. For comparison, we simultaneously recorded the responses of MT neurons to the same stimuli. Surprisingly, we find that overall levels of direction selectivity are similar in V3A and MT and moreover that the population of V3A neurons exhibits somewhat greater selectivity for optic flow patterns. These results suggest that V3A should be considered as part of the motion processing machinery of the visual cortex, in both human and non-human primates.
Jiménez, Elizabeth Carolina; Romeo, August; Zapata, Laura Pérez; Puig, Maria Solé; Bustos-Valenzuela, Patricia; Cañete, José; Casal, Paloma Varela; Supèr, Hans Eye vergence responses in children with and without reading difficulties during a word detection task Journal Article In: Vision Research, vol. 169, pp. 6–11, 2020. Vergence eye movements are movements of both eyes in opposite directions. Vergence is known to have a role in binocular vision. However, recent studies also link vergence eye movements to attention and attention disorders. As attention may be involved in dyslexia, it is reasonable to expect that the presence of reading difficulties is associated with specific patterns in vergence responses. Data from school children performing a word-reading task were analysed. In the task, children had to distinguish words from non-words (scrambled words or rows of X's), while their eye positions were recorded. Our findings show that after stimulus presentation the eyes briefly converge. These vergence responses depend on the stimulus type and the age of the child, and are different for children with reading difficulties. Our findings support the idea of a role for attention in word reading and offer an explanation of altered attention in dyslexia.
Kinjo, Hikari; Fooken, Jolande; Spering, Miriam Do eye movements enhance visual memory retrieval? Journal Article In: Vision Research, vol. 176, pp. 80–90, 2020. When remembering an object at a given location, participants tend to return their gaze to that location even after the object has disappeared, a phenomenon known as Looking-at-Nothing (LAN). However, it is unclear whether LAN is associated with better memory performance. Previous studies reporting beneficial effects of LAN have often not systematically manipulated or assessed eye movements. We asked 20 participants to remember the location and identity of eight objects arranged in a circle, shown for 5 s. Participants were prompted to judge whether a location statement (e.g., “Star Right”) was correct or incorrect, or referred to a previously unseen object. During memory retrieval, participants either fixated in the screen center or were free to move their eyes. Results reveal no difference in memory accuracy and response time between free-viewing and fixation, while a LAN effect was found for saccades during free viewing, but not for microsaccades during fixation. Memory performance was better in those free-viewing trials in which participants made a saccade to the critical location, and scaled with saccade accuracy. These results indicate that saccade kinematics might be related to both memory performance and memory retrieval processes, but the strength of their link would differ between individuals and task demands.
Gatarić, Isidora The cognitive processing of derived nouns with ambiguous suffixes: Behavioral and eye-movement study Journal Article In: Primenjena Psihologija, vol. 12, no. 1, pp. 85–104, 2019. The primary aim of this research was to investigate whether suffix ambiguity affects the lexical processing of derived nouns in Serbian. In Experiment 1, derived nouns were presented to participants in isolation in a visual lexical decision task. Bearing in mind that sentence context is important for lexical processing, Experiment 2 was designed as an eye-movement study with sentences (containing the derived nouns from Experiment 1) as stimuli. To the best of our knowledge, no similar experimental study had been performed before in Serbian, and this study therefore represents the first attempt to investigate this phenomenon in the language. The same statistical analysis, Generalized Additive Mixed Models (GAMMs), was used to analyze the data collected in both experiments. The final results of all GAMM analyses suggested that suffix ambiguity did not affect the lexical processing of derived nouns in Serbian, regardless of whether the nouns were displayed in isolation or in sentence context. The observed results support the a-morphous perspective on morpho-lexical processing, as well as distributed morphology insights from theoretical linguistics.
Mikula, Laura; Jacob, Marilyn; Tran, Trang; Pisella, Laure; Khan, Aarlenne Zein Spatial and temporal dynamics of presaccadic attentional facilitation before pro- and antisaccades Journal Article In: Journal of Vision, vol. 18, no. 11, pp. 1–16, 2018. The premotor theory of attention and the visual attention model make different predictions about the temporal and spatial allocation of presaccadic attentional facilitation. The current experiment investigated the spatial and temporal dynamics of presaccadic attentional facilitation during pro- and antisaccade planning; we investigated whether attention shifts only to the saccade goal location or to the target location or elsewhere, and when. Participants performed a dual-task paradigm with blocks of either anti- or prosaccades and also discriminated symbols appearing at different locations before saccade onset (a measure of attentional allocation). In prosaccade blocks, correct prosaccade discrimination was best at the target location, while during errors, discrimination was best at the location opposite to the target location. This pattern was reversed in antisaccade blocks, although discrimination remained high opposite to the target location. In addition, we took advantage of a large range of saccadic landing positions and showed that performance across both types of saccades was best at the actual saccade goal location (where the eye will actually land) rather than at the instructed position. Finally, temporal analyses showed that discrimination remained highest at the saccade goal location, from long before to closer to saccade onset, increasing slightly for antisaccades closer to saccade onset. These findings are in line with the premises of the premotor theory of attention, showing that attentional allocation is primarily linked both temporally and spatially to the saccade goal location.
Loy, Jia E.; Rohde, Hannah; Corley, Martin Effects of disfluency in online interpretation of deception Journal Article In: Cognitive Science, vol. 41, pp. 1434–1456, 2017. A speaker's manner of delivery of an utterance can affect a listener's pragmatic interpretation of the message. Disfluencies (such as filled pauses) influence a listener's off-line assessment of whether the speaker is truthful or deceptive. Do listeners also form this assessment during the moment-by-moment processing of the linguistic message? Here we present two experiments that examined listeners' judgments of whether a speaker was indicating the true location of the prize in a game during fluent and disfluent utterances. Participants' eye and mouse movements were biased toward the location named by the speaker during fluent utterances, whereas the opposite bias was observed during disfluent utterances. This difference emerged rapidly after the onset of the critical noun. Participants were similarly sensitive to disfluencies at the start of the utterance (Experiment 1) and in the middle (Experiment 2). Our findings support recent research showing that listeners integrate pragmatic information alongside semantic content during the earliest moments of language processing. Unlike prior work which has focused on pragmatic effects in the interpretation of the literal message, here we highlight disfluency's role in guiding a listener to an alternative non-literal message.
Choo, Heeyoung; Walther, Dirk B. In: NeuroImage, vol. 135, pp. 32–44, 2016. Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? We here show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA) and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas and contour junctions in high-level scene-selective brain regions.
Lafuente, Victor; Jazayeri, Mehrdad; Shadlen, Michael N. Representation of accumulating evidence for a decision in two parietal areas Journal Article In: Journal of Neuroscience, vol. 35, no. 10, pp. 4306–4318, 2015. Decisions are often made by accumulating evidence for and against the alternatives. The momentary evidence represented by sensory neurons is accumulated by downstream structures to form a decision variable, linking the evolving decision to the formation of a motor plan. When decisions are communicated by eye movements, neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence bearing on the potential targets for saccades. We now show that reach-related neurons from the medial intraparietal area (MIP) exhibit a gradual modulation of their firing rates consistent with the representation of an evolving decision variable. When decisions were communicated by saccades instead of reaches, decision-related activity was attenuated in MIP, whereas LIP neurons were active while monkeys communicated decisions by saccades or reaches. Thus, for decisions communicated by a hand movement, a parallel flow of sensory information is directed to parietal areas MIP and LIP during decision formation.
Cypryańska, Marzena; Krejtz, Izabela; Jaskółowska, Aleksandra; Kulawik, Alicja; Żukowska, Aleksandra; Zavala, Agnieszka Golec De; Niewiarowski, Jakub; Nezlek, John B. An experimental study of the influence of limited time horizon on positivity effects among young adults using eye-tracking Journal Article In: Psychological Reports, vol. 115, no. 3, pp. 813–827, 2014. Compared to younger adults, older adults attend more to positive stimuli, a positivity effect. Older adults have limited time horizons, and they focus on maintaining positive affect, whereas younger adults have unlimited time horizons, and they focus on acquiring knowledge and developing skills. Time horizons were manipulated by asking participants (66 young adults, M age = 20.5 yr.) …
Murphy, Aidan P.; Leopold, David A.; Welchman, Andrew E. Perceptual memory drives learning of retinotopic biases for bistable stimuli Journal Article In: Frontiers in Psychology, vol. 5, pp. 60, 2014. The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased towards one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly “train” a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to forty minutes, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of five minutes, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization.
Foulsham, Tom; Sanderson, Lucy Anne Look who's talking? Sound changes gaze behaviour in a dynamic social scene Journal Article In: Visual Cognition, vol. 21, no. 7, pp. 922–944, 2013. Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks, by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention.
Grubb, Michael A.; Minshew, Nancy J.; Heeger, David J.; Carrasco, Marisa Exogenous spatial attention: Evidence for intact functioning in adults with autism spectrum disorder Journal Article In: Journal of Vision, vol. 13, no. 14, pp. 1–13, 2013. Deficits or atypicalities in attention have been reported in individuals with autism spectrum disorder (ASD), yet no consensus on the nature of these deficits has emerged. We conducted three experiments that paired a peripheral precue with a covert discrimination task, using protocols for which the effects of covert exogenous spatial attention on early vision have been well established in typically developing populations. Experiment 1 assessed changes in contrast sensitivity, using orientation discrimination of a contrast-defined grating; Experiment 2 evaluated the reduction of crowding in the visual periphery, using discrimination of a letter-like figure with flanking stimuli at variable distances; and Experiment 3 assessed improvements in visual search, using discrimination of the same letter-like figure with a variable number of distractor elements. In all three experiments, we found that exogenous attention modulated visual discriminability in a group of high-functioning adults with ASD and that it did so in the same way and to the same extent as in a matched control group. We found no evidence to support the hypothesis that deficits in exogenous spatial attention underlie the emergence of core ASD symptomatology.
Badler, Jeremy B.; Lefevre, Philippe; Missal, Marcus Causality attribution biases oculomotor responses Journal Article In: Journal of Neuroscience, vol. 30, no. 31, pp. 10517–10525, 2010. When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements.
Baumann, Oliver; Mattingley, Jason B. Scaling of neural responses to visual and auditory motion in the human cerebellum Journal Article In: Journal of Neuroscience, vol. 30, no. 12, pp. 4489–4495, 2010. The human cerebellum contains approximately half of all the neurons within the cerebrum, yet most experimental work in human neuroscience over the last century has focused exclusively on the structure and functions of the forebrain. The cerebellum has an undisputed role in a range of motor functions (Thach et al., 1992), but its potential contributions to sensory and cognitive processes are widely debated (Stoodley and Schmahmann, 2009). Here we used functional magnetic resonance imaging to test the hypothesis that the human cerebellum is involved in the acquisition of auditory and visual sensory data. We monitored neural activity within the cerebellum while participants engaged in a task that required them to discriminate the direction of a visual or auditory motion signal in noise. We identified a distinct set of cerebellar regions that were differentially activated for visual stimuli (vermal lobule VI and right-hemispheric lobule X) and auditory stimuli (right-hemispheric lobules VIIIA and VIIIB and hemispheric lobule VI bilaterally). In addition, we identified a region in left crus I in which activity correlated significantly with increases in the perceptual demands of the task (i.e., with decreasing signal strength), for both auditory and visual stimuli. Our results support suggestions of a role for the cerebellum in the processing of auditory and visual motion and suggest that parts of cerebellar cortex are concerned with tracking movements of objects around the animal, rather than with controlling movements of the animal itself (Paulin, 1993).
Leopold, David A.; Plettenberg, Holger K.; Logothetis, Nikos K. Visual processing in the ketamine-anesthetized monkey: Optokinetic and blood oxygenation level-dependent responses Journal Article In: Experimental Brain Research, vol. 143, no. 3, pp. 359–372, 2002. We used optokinetic responses and functional magnetic resonance imaging (fMRI) to examine visual processing in monkeys whose conscious state was modulated by low doses (1-2 mg/kg) of the dissociative anesthetic ketamine. We found that, despite the animal's dissociated state and despite specific influences of ketamine on the oculomotor system, optokinetic nystagmus (OKN) could be reliably elicited with large, moving visual patterns. Responses were horizontally bidirectional for monocular stimulation, indicating that ketamine did not eliminate cortical processing of the motion stimulus. Also, results from fMRI directly demonstrated that the cortical blood oxygenation level-dependent (BOLD) response to visual patterns was preserved at the same ketamine doses used to elicit OKN. Finally, in the ketamine-anesthetized state, perceptually bistable motion stimuli produced patterns of spontaneously alternating OKN that normally would be tightly coupled to perceptual changes. These results, taken together, demonstrate that after ketamine administration cortical circuits continue to process visual patterns in a dose-dependent manner despite the animal's behavioral dissociation. While perceptual experience is difficult to evaluate under these conditions, oculomotor patterns revealed that the brain not only registers but also acts upon its sensory input, employing it to drive a sensorimotor loop and even responding to a sensory conflict by engaging in spontaneous perception-related state changes. The ketamine-anesthetized monkey preparation thereby offers a safe and viable paradigm for the behavioral and electrophysiological investigation of issues related to conscious perception and anesthesia, as well as neural mechanisms of basic sensory processing.
Contact
If you would like us to feature your EyeLink research, have ideas for posts, or have any questions about our hardware and software, please contact us. We are always happy to help. You can call us (+1-613-271-8686) or click the button below to email:
References & Image Credits
- Header Image by Hermann (Pixabay License)