We have recently finished updating our database of EyeLink publications – there were more than 900 papers published in 2019 alone, and the database now contains well over 8000 publications in total. Each publication is checked individually to ensure that it contains data collected using an EyeLink eye tracker (rather than just referring to data collected with an EyeLink, as in a meta-analysis or review article) and that the research is published in a peer-reviewed journal.
Publications by Year
In a previous blog I plotted the number of publications per year, and an updated version of that plot is included below:
Highly Cited EyeLink Publications
The earlier blog also listed the “top” journals for EyeLink publications – both with respect to the number of EyeLink articles and with respect to the journal’s impact factor. This year I thought it might be interesting to list some of the most highly cited articles in our database. Determining citation counts is a somewhat inexact science. There are three main sources of information on article citation counts – Web of Science, Scopus, and Google Scholar. While the advantages and disadvantages of each of these sources are a topic of lively debate (Harzing has written extensively on this – see e.g. this blog), Google Scholar has the twin advantages of very comprehensive coverage and free accessibility.
The list below is a selection of 15 EyeLink articles, all of which have citation counts >500 according to Google Scholar. The list was generated by searching the top 20 journals by volume of EyeLink articles, and the top 10 journals by Impact Factor in our database. It is not intended to be exhaustive, and the articles are listed in no particular order. I think the list provides a fascinating illustration of the sheer breadth (and enormous impact) of the research that EyeLink eye trackers have been involved in.
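As a rough sketch of how such a shortlist could be assembled programmatically (toy records and field names here are hypothetical, not our actual database or process):

```python
from collections import Counter

# Toy publication records; the real database fields differ.
pubs = [
    {"title": "A", "journal": "J Vis",      "citations": 812},
    {"title": "B", "journal": "NeuroImage", "citations": 430},
    {"title": "C", "journal": "J Vis",      "citations": 655},
    {"title": "D", "journal": "PLoS ONE",   "citations": 1200},
]

# Keep the 20 journals with the most articles, then articles cited > 500 times.
counts = Counter(p["journal"] for p in pubs)
top_journals = {j for j, _ in counts.most_common(20)}
shortlist = [p["title"] for p in pubs
             if p["journal"] in top_journals and p["citations"] > 500]
print(shortlist)  # ['A', 'C', 'D']
```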
Boer, Minke J.; Başkent, Deniz; Cornelissen, Frans W.
In: Multisensory Research, vol. 34, no. 1, pp. 17–47, 2021.
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
McGarrigle, Ronan; Knight, Sarah; Rakusen, Lyndon; Geller, Jason; Mattys, Sven
In: Psychology and Aging, vol. 36, no. 4, pp. 504–519, 2021.
Listening to speech in adverse conditions can be challenging and effortful, especially for older adults. This study examined age-related differences in effortful listening by recording changes in the task-evoked pupil response (TEPR; a physiological marker of listening effort) both at the level of sentence processing and over the entire course of a listening task. A total of 65 (32 young adults, 33 older adults) participants performed a speech recognition task in the presence of a competing talker, while moment-to-moment changes in pupil size were continuously monitored. Participants were also administered the Vanderbilt Fatigue Scale, a questionnaire assessing daily life listening-related fatigue within four domains (social, cognitive, emotional, physical). Normalized TEPRs were overall larger and more steeply rising and falling around the peak in the older versus the young adult group during sentence processing. Additionally, mean TEPRs over the course of the listening task were more stable in the older versus the young adult group, consistent with a more sustained recruitment of compensatory attentional resources to maintain task performance. No age-related differences were found in terms of total daily life listening-related fatigue; however, older adults reported higher scores than young adults within the social domain. Overall, this study provides evidence for qualitatively distinct patterns of physiological arousal between young and older adults consistent with age-related upregulation in resource allocation during listening. A more detailed understanding of age-related changes in the subjective and physiological mechanisms that underlie effortful listening will ultimately help to address complex communication needs in aging listeners.
Hintz, Florian; Meyer, Antje S.; Huettig, Falk
In: Quarterly Journal of Experimental Psychology, vol. 73, no. 3, pp. 458–467, 2020.
Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants' eye movements as they listened to sentences in which an object was predictable based on the verb's selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: the target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 s before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 s after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.
Eger, Nikola Anna; Mitterer, Holger; Reinisch, Eva
In: Journal of Phonetics, vol. 77, pp. 1–24, 2019.
The present study investigated Italian learners' production and perception of German /h/ and /ʔ/ – two sounds that lack obvious linguistic counterparts in Italian. Critically, of these sounds only /h/ is explicitly known to learners from instruction and orthography. We therefore asked whether this awareness would lead to better acquisition of /h/ than /ʔ/, and whether any differences would depend on the explicitness of the task. In production, learners of a medium proficiency level performed accurately in about 70% of the cases, with errors including sound deletions and substitutions. In spoken word recognition, two other learner groups of the same proficiency were hindered by sound deletions, but not by substitutions, although they were able to differentiate the sounds in an explicit goodness rating task. Overall, acquisition of /ʔ/ was similar to /h/, despite lack of awareness for this sound. The results suggest that learners have established one combined “glottal category” to which both sounds map in speech processing, while they may be better implemented in production.
Isabella, Silvia L.; Urbain, Charline; Cheyne, J. Allan; Cheyne, Douglas
In: Neuropsychologia, vol. 127, pp. 48–56, 2019.
In previous studies we have provided evidence that performance in speeded response tasks with infrequent target stimuli reflects both automatic and controlled cognitive processes, based on differences in reaction time (RT) and task-related brain responses (Cheyne et al. 2012, Isabella et al. 2015). Here we test the hypothesis that such shifts in cognitive control may be influenced by changes in cognitive load related to stimulus predictability, and that these changes can be indexed by task-evoked pupillary responses (TEPR). We manipulated stimulus predictability using fixed stimulus sequences that were unknown to the participants in a Go/Switch task (requiring a switch response on 25% of trials) while monitoring TEPR as a measure of cognitive load in 12 healthy adults. Results showed significant improvement in performance (reduced RT, increased efficiency) for repeated sequences compared to occasional deviant sequences (10% probability) indicating that incidental learning of the predictable sequences facilitated performance. All behavioral measures varied between Switch and Go trials (RT, efficiency); however, mean TEPR amplitude (mTEPR) and latency to maximum pupil dilation were particularly sensitive to the Go/Switch distinction. Results were consistent with the hypothesis that mTEPR indexes cognitive load, whereas TEPR latency indexes time to response selection, independent from response execution. The present study provides evidence that incidental pattern learning during response inhibition tasks may modulate several cognitive processes including cognitive load, effort, response selection and execution, which can in turn have differential effects on measures of performance. In particular, we demonstrate that reaction time may not be indicative of underlying cognitive load.
Seemiller, Eric S.; Port, Nicholas L.; Candy, T. Rowan
The gaze stability of 4- to 10-week-old human infants
In: Journal of Vision, vol. 18, no. 8, pp. 1–10, 2018.
The relationship between gaze stability, retinal image quality, and visual perception is complex. Gaze instability related to pathology in adults can cause a reduction in visual acuity (e.g., Chung, LaFrance, & Bedell, 2011). Conversely, poor retinal image quality and spatial vision may be a contributing factor to gaze instability (e.g., Ukwade & Bedell, 1993). Though much is known about the immaturities in spatial vision of human infants, little is currently understood about their gaze stability. To characterize the gaze stability of young infants, adult participants and 4- to 10-week-old infants were shown a dynamic random-noise stimulus for 30-s intervals while their eye positions were recorded binocularly. After removing adultlike saccades, we used 5-s epochs of stable intersaccade gaze to estimate bivariate contour ellipse area and standard deviations of vergence. The geometric means (with standard deviations) for infants' bivariate contour ellipse area were left eye = -0.697 ± 0.534 log(°2), right eye = -0.471 ± 0.367 log(°2). For binocular vergence stability, the infant geometric means (with standard deviations) were horizontal = -1.057 ± 0.743 log(°)
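The bivariate contour ellipse area (BCEA) used in the study above is a standard dispersion measure for fixation stability. A minimal sketch of the computation (the exact confidence level and preprocessing the authors used may differ):

```python
import math

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area from horizontal/vertical fixation
    samples x, y (in degrees). p is the proportion of samples the ellipse
    should enclose; k = ln(1 / (1 - p)) follows from the bivariate normal.
    Returns an area in deg^2 (take log10 to compare with log(deg^2) values).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))  # horizontal SD
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))  # vertical SD
    rho = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / ((n - 1) * sx * sy))                            # x-y correlation
    k = math.log(1.0 / (1.0 - p))
    return 2.0 * k * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)
```

Smaller values indicate tighter, more stable fixation; correlated horizontal and vertical drift shrinks the ellipse via the sqrt(1 - rho^2) term.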
Hayes, Taylor R.; Petrov, Alexander A.
In: Behavior Research Methods, vol. 48, no. 2, pp. 510–527, 2016.
Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE): the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5% when the model parameters were pre-set to the physical layout dimensions, and by 97.5% when they were optimized to fit the empirical error surface.
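The cosine correction described in that abstract can be sketched in a few lines. This is a minimal illustration of the geometric idea, not the authors' published model; the function and argument names are hypothetical:

```python
import math

def correct_pfe(measured_area, eye_to_camera, eye_to_stimulus):
    """Undo pupil-foreshortening error for one sample.

    The apparent pupil area shrinks roughly with the cosine of the angle
    between the eye-to-camera axis and the eye-to-stimulus (gaze) axis,
    so dividing the measured area by that cosine recovers the true area.
    Both axis arguments are 3-D vectors originating at the eye.
    """
    dot = sum(a * b for a, b in zip(eye_to_camera, eye_to_stimulus))
    norms = (math.dist(eye_to_camera, (0, 0, 0))
             * math.dist(eye_to_stimulus, (0, 0, 0)))
    cos_theta = dot / norms
    return measured_area / cos_theta

# Looking straight at the camera needs no correction:
print(correct_pfe(100.0, (0, 0, 1), (0, 0, 1)))  # 100.0
```

At a 60-degree separation between the two axes, cos(theta) = 0.5, so the measured area is doubled to recover the true value.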
Farmer, Thomas A.; Yan, Shaorong; Bicknell, Klinton; Tanenhaus, Michael K.
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 4, pp. 958–976, 2015.
Recent electroencephalography/magnetoencephalography (EEG/MEG) studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze durations, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy.
Liu, Yanping; Reichle, Erik D.; Li, Xingshan
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 4, pp. 1229–1236, 2015.
Participants' eye movements were measured while reading Chinese sentences in which target-word frequency and the availability of parafoveal processing were manipulated using a gaze-contingent boundary paradigm. The results of this study indicate that preview availability and its interaction with word frequency modulated the length of the saccades exiting the target words, suggesting important functional roles for parafoveal processing in determining where the eyes move during reading. The theoretical significance of these findings is discussed in relation to 2 current models of eye-movement control during reading, both of which assume that saccades are directed toward default targets (e.g., the center of the next unidentified word). A possible method for addressing these limitations (i.e., dynamic attention allocation) is also discussed.
Perez-Osorio, Jairo; Müller, Hermann J.; Wiese, Eva; Wykowska, Agnieszka
In: PLoS ONE, vol. 10, no. 11, pp. e0143614, 2015.
Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information to infer others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line or not with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent.
Shelton, Annie L.; Cornish, Kim M.; Kraan, Claudine; Georgiou-Karistianis, Nellie; Metcalfe, Sylvia A.; Bradshaw, John L.; Hocking, Darren R.; Archibald, Alison D.; Cohen, Jonathan; Trollor, Julian N.; Fielding, Joanne
In: Brain and Cognition, vol. 85, no. 1, pp. 201–208, 2014.
There is evidence which demonstrates that a subset of males with a premutation CGG repeat expansion (between 55 and 200 repeats) of the fragile X mental retardation 1 gene exhibit subtle deficits of executive function that progressively deteriorate with increasing age and CGG repeat length. However, it remains unclear whether similar deficits, which may indicate the onset of more severe degeneration, are evident in female PM-carriers. In the present study we explore whether female PM-carriers exhibit deficits of executive function which parallel those of male PM-carriers. Fourteen female fragile X premutation carriers without fragile X-associated tremor/ataxia syndrome and fourteen age, sex, and IQ matched controls underwent ocular motor and neuropsychological tests of select executive processes, specifically of response inhibition and working memory. Group comparisons revealed poorer inhibitory control for female premutation carriers on ocular motor tasks, in addition to demonstrating some difficulties in behaviour self-regulation, when compared to controls. A negative correlation between CGG repeat length and antisaccade error rates for premutation carriers was also found. Our preliminary findings indicate that impaired inhibitory control may represent a phenotype characteristic which may be a sensitive risk biomarker within this female fragile X premutation population.
Bate, Sarah; Haslam, Catherine; Hodgson, Timothy L.; Jansari, Ashok; Gregory, Nicola J.; Kay, Janice
In: Neuropsychology, vol. 24, no. 1, pp. 84–89, 2010.
Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences later stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing.
Nummenmaa, Lauri; Hirvonen, Jussi; Parkkola, Riitta; Hietanen, Jari K.
In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008.
Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other people's mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.
Ryan, Jennifer D.; Shen, Jiye; Reingold, Eyal M.
Modulation of distraction in ageing
In: British Journal of Psychology, vol. 97, no. 3, pp. 339–351, 2006.
A cueing paradigm was employed to examine modulation of distraction due to a visual singleton. Subjects were required to make a saccade to a shape-singleton target. A predictive location cue indicated the hemifield where a target would appear. Older adults made more anticipatory saccades than younger adults, and were less accurate when making an eye movement to the vicinity of a target. However, younger and older adults likewise benefited from the cue; distraction was reduced when the distractor singleton appeared in the uncued hemifield. The ability to compensate for problems with distraction in older and younger adults through use of the precue suggests that attention to a general region of space, rather than a specific location, may be enough to modulate distraction.
Bertram, Raymond; Hyönä, Jukka
The length of a complex word modifies the role of morphological structure: Evidence from eye movements when reading short and long Finnish compounds
In: Journal of Memory and Language, vol. 48, pp. 615–634, 2003.
This study explored whether the length of a complex word modifies the role of morphological structure in lexical processing: Does morphological structure play a similar role in short complex words that typically elicit one eye fixation (e.g., eyelid) as it does in long complex words that typically elicit two or more eye fixations (e.g., watercourse)? Two eye movement experiments with short vs. long Finnish compound words in context were conducted to find an answer to this question. In Experiment 1, a first-constituent frequency manipulation revealed solid effects for long compounds in early and late processing measures, but no effects for short compounds. In contrast, in Experiment 2, a whole-word frequency manipulation elicited solid effects for short compounds in early and late processing measures, but mainly late effects for long compounds. A race model, incorporating a headstart for the decomposition route in case whole-word information of complex words cannot be extracted in a single fixation, can explain the pattern of results.
If you would like us to feature your EyeLink research, have ideas for posts, or have any questions about our hardware and software, please contact us. We are always happy to help. You can call us (+1-613-271-8686) or click the button below to email:
- Header Image by Hermann.