All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2023
Eva Puimege; Maribel Montero Perez; Elke Peters The effects of typographic enhancement on L2 collocation processing and learning from reading: An eye-tracking study Journal Article In: Applied Linguistics, pp. 1–24, 2023. This study examined the effects of typographic enhancement (TE) on online processing of collocations during reading and on L2 collocation knowledge. The eye-tracking results indicate that the initial attention-enhancing effect of TE did not carry over to later, unenhanced exposures. Results of post-experiment interviews suggested that learners' primary focus was on meaning comprehension and that TE did not induce conscious attention to the form of the target collocations. One week after the treatment, participants could recognize the correct form of target collocations, but they could not productively recall most of them. We conclude that a single enhanced exposure does not necessarily affect learners' memory of collocations, or their processing of those collocations in later exposures. The development of L2 collocation knowledge may require a large amount of exposure in purely incidental contexts.
Sophia Antonia Press; Stefanie C. Biehl; Gregor Domes; Jennifer Svaldi Increased insula and amygdala activity during selective attention for negatively valenced body parts in binge eating disorder Journal Article In: Journal of Psychopathology and Clinical Science, vol. 132, no. 1, pp. 63–77, 2023. Previous studies indicate that participants with eating disorders show an attentional bias for the negatively valenced body parts of their own body. However, the neural basis underlying these processes has not been investigated. We conducted a preregistered combined functional MRI (fMRI)/eye tracking study and presented 35 women with binge eating disorder (BED) and 24 weight-matched control subjects (CG) with body part images of their own body and a weight-matched unknown body. After the fMRI examination, participants rated the attractiveness of the presented body parts. As expected, women with BED responded with significantly higher insula and amygdala activity when viewing the negatively valenced body parts of their own body (compared to all other combinations). However, individuals with BED did not deviate from the CG in the processing of these stimuli in the ventromedial prefrontal cortex, the extrastriate body area, or the fusiform body area. Our results indicate that negatively valenced body parts carry a particularly strong emotional valence in individuals with BED. These results further emphasize the relevance of a processing bias for negatively valenced body parts in the pathology of BED.
Sabina Poudel; Jianzhong Jin; Hamed Rahimi-Nasrabadi; Stephen Dellostritto; Mitchell W. Dul; Suresh Viswanathan; Jose-Manuel Alonso Contrast sensitivity of ON and OFF human retinal pathways in myopia Journal Article In: The Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2023. The human visual cortex processes light and dark stimuli with ON and OFF pathways that are differently modulated by luminance contrast. We have previously demonstrated that ON cortical pathways have higher contrast sensitivity than OFF cortical pathways and that the difference increases with luminance range (defined as the maximum minus minimum luminance in the scene). Here, we demonstrate that these ON-OFF cortical differences are already present in the human retina and that retinal responses measured with electroretinography are more affected by reductions in luminance range than cortical responses measured with electroencephalography. Moreover, we show that ON-OFF pathway differences measured with electroretinography become more pronounced in myopia, a visual disorder that elongates the eye and blurs vision at far distance. We find that, as the eye's axial length increases across subjects, ON retinal pathways become less responsive, slower in response latency, less sensitive, and less effective and slower at driving pupil constriction. Based on these results, we conclude that myopia is associated with a deficit in ON pathway function that decreases the ability of the retina to process low contrast and regulate retinal illuminance in bright environments. Significance statement: Contrast sensitivity is an important visual function that allows discriminating faint visual targets slightly lighter or darker than the background. We have previously demonstrated that ON and OFF cortical pathways signaling light and dark stimuli have different contrast sensitivity and that the difference increases with luminance range. Here, we demonstrate that these ON-OFF sensitivity differences are inherited from the retina and are affected by myopia (nearsightedness), a visual disorder that blurs vision at far distances and is becoming a world epidemic. We show that myopia is associated with a retinal deficit that makes ON pathways less effective at signaling contrast and regulating retinal illuminance. These results could have clinical implications and may lead to novel approaches for myopia control.
G. V. Portnova; K. M. Liaukovich; L. N. Vasilieva; E. I. Alshanskaia Autonomic and behavioral indicators on increased cognitive loading in healthy volunteers Journal Article In: Neuroscience and Behavioral Physiology, vol. 53, no. 1, pp. 92–102, 2023. Cognitive and emotional loading during increases in task difficulty leads to activation of various parts of the autonomic nervous system; it can be accompanied by an increase in problem-solving efficiency but may also contribute to destabilization of emotional status and decreases in productivity. An increase in cognitive loading under conditions of high subject motivation constitutes a stress factor and is expressed in various reactions of the sympathetic and parasympathetic compartments in response to loading. The aim of the present work was to study the features of various autonomic reactions to gradually increasing task difficulty, which included recording pupil area and the number of blinks, as well as the frequency of respiratory movements, measures of heart rate variability, and galvanic skin responses. Ten healthy volunteers took part in the study. The experimental paradigm included six levels of task difficulty requiring the active participation of working memory and attention. Increases in task difficulty from the first level to the sixth led to a gradual increase in pupil area and the number of blinks, which we suggest corresponds to an increase in sympathetic nervous system activation. Linear changes in the autonomic parameters of the respiratory and cardiovascular systems, as well as the electrical activity of the skin, were observed only up to the third level of difficulty. Further increases in difficulty led to opposite changes in these indicators and were accompanied by decreases in problem-solving efficiency. A more marked change in the galvanic skin response during problem-solving correlated with a decrease in mood after the study, indirectly indicating a higher level of emotional stress.
Brendan L. Portengen; Giorgio L. Porro; Saskia M. Imhof; Marnix Naber The trade-off between luminance and color contrast assessed with pupil responses Journal Article In: Translational Vision Science & Technology, vol. 12, no. 1, pp. 19–25, 2023. Purpose: A scene consisting of a white stimulus on a black background incorporates strong luminance contrast. When both stimulus and background receive different colors, luminance contrast decreases but color contrast increases. Here, we sought to characterize the pattern of stimulus salience across varying trade-offs of color and luminance contrasts by using the pupil light response. Methods: Three experiments were conducted with 17, 16, and 17 healthy adults. For all experiments, a flickering stimulus (2 Hz; alternating color to black) was presented superimposed on a background with a complementary color to the stimulus (i.e., opponency colors in human color perception: blue and yellow for Experiment 1, red and green for Experiment 2, and equiluminant red and green for Experiment 3). Background luminance varied between 0% and 45% to trade off luminance and color contrast with the stimulus. By comparing the locus of the optimal trade-off between color and luminance across different color axes, we explored the generality of the trade-off. Results: The strongest pupil responses were found when a substantial amount of color contrast was present (at the expense of luminance contrast). Pupil response amplitudes increased by 15% to 30% after the addition of color contrast. An optimal pupillary responsiveness was reached at a background luminance setting of 20% to 35% color contrast across several color axes. Conclusions: These findings suggest that a substantial component of pupil light responses incorporates color processing. More sensitive pupil responses and more salient stimulus designs can be achieved by adding subtle levels of color contrast between stimulus and background. Translational relevance: More robust pupil responses will enhance tests of the visual field with pupil perimetry.
Brendan L. Portengen; Marnix Naber; Giorgio L. Porro; Douwe Bergsma; Evert J. Veldman; Saskia M. Imhof In: Eye and Brain, vol. 15, pp. 77–89, 2023. Purpose: We improve pupillary responses and diagnostic performance of flicker pupil perimetry through alterations in global and local color contrast and luminance contrast in adult patients suffering from visual field defects due to cerebral visual impairment (CVI). Methods: Two experiments were conducted on patients with CVI (Experiment 1: 19 subjects, age M and SD 57.9 ± 14.0; Experiment 2: 16 subjects, age M and SD 57.3 ± 14.7) suffering from absolute homonymous visual field (VF) defects. We altered global color contrast (stimuli consisted of white, yellow, cyan and yellow-equiluminant-to-cyan colored wedges) in Experiment 1, and we manipulated luminance and local color contrast with bright and dark yellow and multicolor wedges in a 2-by-2 design in Experiment 2. Stimuli consecutively flickered across 44 stimulus locations within the inner 60 degrees of the VF and were offset to a contrasting (opponency colored) dark background. Pupil perimetry results were compared to standard automated perimetry (SAP) to assess diagnostic accuracy. Results: A bright stimulus with global color contrast using yellow (p= 0.009) or white (p= 0.006) evoked strongest pupillary responses as opposed to stimuli containing local color contrast and lower brightness. Diagnostic accuracy, however, was similar across global color contrast conditions in Experiment 1 (p= 0.27) and decreased when local color contrast and less luminance contrast was introduced in Experiment 2 (p= 0.02). The bright yellow condition resulted in highest performance (AUC M = 0.85 ± 0.10
Dina V. Popovkina; John Palmer; Cathleen M. Moore; Geoffrey M. Boynton Testing hemifield independence for divided attention in visual object tasks Journal Article In: Journal of Vision, vol. 23, no. 13, pp. 1–17, 2023. In this study, we asked to what degree hemifields contribute to divided attention effects observed in tasks with object-based judgments. If object recognition processes in the two hemifields were fully independent, then placing stimuli in separate hemifields would eliminate divided attention effects; in the alternative extreme, if object recognition processes in the two hemifields were fully integrated, then placing stimuli in separate hemifields would not modulate divided attention effects. Using a dual-task paradigm, we compared performance in a semantic categorization task for relevant stimuli arranged in the same hemifield to performance for relevant stimuli arranged in separate left and right hemifields. In two experiments, there was a reliable decrease in divided attention effects when stimuli were shown in separate hemifields compared to the same hemifield. However, the effect of divided attention was not eliminated. These results reject both the independent and integrated hypotheses, and instead support a third alternative: that object recognition processes in the two hemifields are partially dependent. More specifically, the magnitude of modulation by hemifields was closer to the prediction of the integrated hypothesis, suggesting that for dual tasks with objects, dependent processing is mostly shared across the visual field.
Tzvetan Popov; Tobias Staudigl Cortico-ocular coupling in the service of episodic memory formation Journal Article In: Progress in Neurobiology, vol. 227, pp. 1–9, 2023. Encoding of visual information is a necessary requirement for most types of episodic memories. In search for a neural signature of memory formation, amplitude modulation of neural activity has been repeatedly shown to correlate with and suggested to be functionally involved in successful memory encoding. We here report a complementary view on why and how brain activity relates to memory, indicating a functional role of cortico-ocular interactions for episodic memory formation. Recording simultaneous magnetoencephalography and eye tracking in 35 human participants, we demonstrate that gaze variability and amplitude modulations of alpha/beta oscillations (10–20 Hz) in visual cortex covary and predict subsequent memory performance between and within participants. Amplitude variation during pre-stimulus baseline was associated with gaze direction variability, echoing the co-variation observed during scene encoding. We conclude that encoding of visual information engages unison coupling between oculomotor and visual areas in the service of memory formation.
Tzvetan Popov; Bart Gips; Nathan Weisz; Ole Jensen Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention Journal Article In: Cerebral Cortex, vol. 33, no. 7, pp. 3478–3489, 2023. Spatially selective modulation of alpha power (8–14 Hz) is a robust finding in electrophysiological studies of visual attention, and has recently been generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.
Eva R. Pool; Wolfgang M. Pauli; Logan Cross; John P. O'Doherty Neural substrates of parallel devaluation-sensitive and devaluation-insensitive Pavlovian learning in humans Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–17, 2023. We aim to differentiate the brain regions involved in the learning and encoding of Pavlovian associations sensitive to changes in outcome value from those that are not sensitive to such changes by combining a learning task with outcome devaluation, eye-tracking, and functional magnetic resonance imaging in humans. Contrary to theoretical expectation, voxels correlating with reward prediction errors in the ventral striatum and subgenual cingulate appear to be sensitive to devaluation. Moreover, regions encoding state prediction errors appear to be devaluation insensitive. We can also distinguish regions encoding predictions about outcome taste identity from predictions about expected spatial location. Regions encoding predictions about taste identity seem devaluation sensitive while those encoding predictions about an outcome's spatial location seem devaluation insensitive. These findings suggest the existence of multiple and distinct associative mechanisms in the brain and help identify putative neural correlates for the parallel expression of both devaluation sensitive and insensitive conditioned behaviors.
Elie Poncet; Gaelle Nicolas; Nathalie Guyader; Elena Moro; Aurélie Campagne Spatio-temporal attention toward emotional scenes across adulthood Journal Article In: Emotion, vol. 23, no. 6, pp. 1726–1739, 2023. Research on emotion suggests that the attentional preference observed toward negative stimuli in young adults tends to disappear in normal aging and, sometimes, to shift toward a preference for positive stimuli. The current eye-tracking study investigated visual exploration of paired natural scenes of different valence (Negative–Neutral, Positive–Neutral, and Negative–Positive pairs) in three age groups (young, middle-aged, and older adults). Two arousal levels of stimuli (high and low arousal) were also considered, given the role of this factor in age-related effects on emotion. Results showed that automatic attentional orienting toward negative stimuli was relatively preserved in all three age groups, although reduced in the elderly, in both arousal conditions. A similar negativity bias was also observed in initial attention focusing but shifted toward a positivity bias over time in the three age groups. Moreover, it appeared that the spatial exploration of emotional scenes evolved over time differently for older adults compared with the other age groups. No difference between young adults and middle-aged adults in ocular behavior was observed. This study confirms the interest of studying both spatial and temporal characteristics of oculomotor behaviors to better understand age-related effects on emotion.
Antonella Pomè; Sandra Tyralla; Eckart Zimmermann Altered oculomotor flexibility is linked to high autistic traits Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–12, 2023. Autism is a multifaceted disorder comprising sensory abnormalities and a general inflexibility in the motor domain. The sensorimotor system is continuously challenged to answer whether motion-contingent errors result from one's own movements or whether they are due to external motion. Disturbances in this decision could lead to the perception of motion when there is none and to an inflexibility with regard to motor learning. Here, we test the hypothesis that altered processing of gaze-contingent sensations is responsible for both the motor inflexibility and the sensory overload in autism. We measured motor flexibility by testing how strongly participants adapted in a classical saccade adaptation task. We asked healthy participants, scored for autistic traits, to make saccades to a target that was displaced either in inward or in outward direction during saccade execution. The amount of saccade adaptation, which requires shifting the internal target representation, varied with autistic symptom severity. The higher participants scored for autistic traits, the less they adapted. In order to test for visual stability, we asked participants to localize the position of the saccade target after they completed their saccade. We found the often-reported saccade-induced mislocalization in low Autistic Quotient (AQ) participants. However, we also found mislocalization in high AQ participants despite the absence of saccade adaptation. Our data suggest that high autistic traits are associated with an oculomotor inflexibility that might produce altered processing of trans-saccadic vision, which might increase the perceptual overstimulation that is experienced in autism spectrum disorders (ASD).
Stefan Pollmann; Lei Zheng Right-dominant contextual cueing for global configuration cues, but not local position cues Journal Article In: Neuropsychologia, vol. 178, pp. 1–7, 2023. Contextual cueing can depend on global configuration or local item position. We investigated the role of these two kinds of cues in the lateralization of contextual cueing effects. Cueing by item position was tested by recombining two previously learned displays, keeping the individual item locations intact, but destroying the global configuration. In contrast, cueing by configuration was investigated by rotating learned displays, thereby keeping the configuration intact but changing all item positions. We observed faster search for targets in the left display half, both for repeated and new displays, along with more first fixation locations on the left. Both position and configuration cues led to faster search, but the search time reduction compared to new displays due to position cues was comparable in the left and right display half. In contrast, configural cues led to increased search time reduction for right half targets. We conclude that only configural cues enabled memory-guided search for targets across the whole search display, whereas position cueing guided search only to targets in the vicinity of the fixation. The right-biased configural cueing effect is a consequence of the initial leftward search bias and does not indicate hemispheric dominance for configural cueing.
Megan Polden; Trevor J. Crawford Eye movement latency coefficient of variation as a predictor of cognitive impairment: An eye tracking study of cognitive impairment Journal Article In: Vision, vol. 7, no. 2, pp. 1–12, 2023. Studies have demonstrated impairment in the control of saccadic eye movements in Alzheimer's disease (AD) and people with mild cognitive impairment (MCI) when conducting the pro-saccade and antisaccade tasks. Research has shown that changes in the pro- and antisaccade latencies may be particularly sensitive to dementia and general executive functioning. These tasks show potential for diagnostic use, as they provide a rich set of potential eye tracking markers. One such marker, the coefficient of variation (CV), has so far been overlooked. For biological markers to be reliable, they must be able to detect abnormalities in preclinical stages. MCI is often viewed as a predecessor to AD, with certain classifications of MCI more likely than others to progress to AD. The current study examined the potential of CV scores on pro- and antisaccade tasks to distinguish participants with AD, amnestic MCI (aMCI), non-amnestic MCI (naMCI), and older controls. The analyses revealed no significant differences in CV scores across the groups using the pro- or antisaccade task. Antisaccade mean latencies were able to distinguish participants with AD and the MCI subgroups. Future research is needed on CV measures and attentional fluctuations in AD and MCI individuals to fully assess this measure's potential to robustly distinguish clinical groups with high sensitivity and specificity.
Timothy J. Pleskac; Shuli Yu; Sergej Grunevski; Taosheng Liu Attention biases preferential choice by enhancing an option's value Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 4, pp. 993–1010, 2023. Does attending to an option lead to liking it? Though attention-induced valuation is often hypothesized, evidence for this causal link has remained elusive. We test this hypothesis across 2 studies by manipulating attention during a preferential decision and its perceptual analog. In a free-viewing task, attention impacted choice and eye movement pattern in the preferential decision more than the perceptual analog. Similarly, in a controlled-viewing task, attention had a larger effect on choice in the preferential decision than its perceptual analog. Across these experimental manipulations of attention, choice and eye-tracking data provide converging evidence that attention enhances value, and computational modeling further supports this attention-induced valuation hypothesis. A possible explanation for our results is a normalization mechanism where attention induces a gain modulation on an option's representation at both the sensory and value processing levels.
Iván Plaza-Rosales; Enzo Brunetti; Rodrigo Montefusco-Siegmund; Samuel Madariaga; Rodrigo Hafelin; Daniela P. Ponce; María Isabel Behrens; Pedro E. Maldonado; Andrea Paula-Lima Visual-spatial processing impairment in the occipital-frontal connectivity network at early stages of Alzheimer's disease Journal Article In: Frontiers in Aging Neuroscience, vol. 15, pp. 1–14, 2023. Introduction: Alzheimer's disease (AD) is the leading cause of dementia worldwide, but its pathophysiological phenomena are not fully elucidated. Many neurophysiological markers have been suggested to identify early cognitive impairments of AD. However, the diagnosis of this disease remains a challenge for specialists. In the present cross-sectional study, our objective was to evaluate the manifestations and mechanisms underlying visual-spatial deficits at the early stages of AD. Methods: We combined behavioral, electroencephalography (EEG), and eye movement recordings during the performance of a spatial navigation task (a virtual version of the Morris Water Maze adapted to humans). Participants (69–88 years old) with amnesic mild cognitive impairment–Clinical Dementia Rating scale (aMCI–CDR 0.5) were selected as probable early AD (eAD) by a neurologist specialized in dementia. All patients included in this study were evaluated at the CDR 0.5 stage but progressed to probable AD during clinical follow-up. An equal number of matching healthy controls (HCs) were evaluated while performing the navigation task. Data were collected at the Department of Neurology of the Clinical Hospital of the Universidad de Chile and the Department of Neuroscience of the Faculty of Universidad de Chile. Results: Participants with aMCI preceding AD (eAD) showed impaired spatial learning, and their visual exploration differed from the control group. The eAD group did not clearly prefer regions of interest that could guide solving the task, while controls did. The eAD group showed decreased visual occipital evoked potentials associated with eye fixations, recorded at occipital electrodes. They also showed an alteration of the spatial spread of activity to parietal and frontal regions at the end of the task. The control group presented marked occipital activity in the beta band (15–20 Hz) at early visual processing time. The eAD group showed a reduction in beta band functional connectivity in the prefrontal cortices reflecting poor planning of navigation strategies. Discussion: We found that EEG signals combined with visual-spatial navigation analysis yielded early and specific features that may underlie the basis for understanding the loss of functional connectivity in AD. Our results are clinically promising for the early diagnosis required to improve quality of life and decrease healthcare costs.
Belinda Platt; Anca Sfärlea; Johanna Löchner; Elske Salemink; Gerd Schulte-Körne The role of cognitive biases and negative life events in predicting later depressive symptoms in children and adolescents Journal Article In: Journal of Experimental Psychopathology, vol. 14, no. 3, pp. 1–16, 2023. Aims: Cognitive models propose that negative cognitive biases in attention (AB) and interpretation (IB) contribute to the onset of depression. This is the first prospective study to test this hypothesis in a sample of youth with no mental disorder. Methods: Participants were 61 youth aged 9–14 years with no mental disorder. At baseline (T1) we measured AB (passive-viewing task), IB (scrambled sentences task) and self-report depressive symptoms. Thirty months later (T2) we measured onset of mental disorder, depressive symptoms and life events (parent- and child-report). The sample included children of parents with (n = 31) and without (n = 30) depression. Results: Symptoms of depression at T2 were predicted by IB (ß = .35
Rista C. Plate; Tralucia Powell; Rachael Bedford; Tim J. Smith; Ankur Bamezai; Quentin Wedderburn; Alexis Broussard; Natasha Soesanto; Caroline Swetlitz; Rebecca Waller; Nicholas J. Wagner Social threat processing in adults and children: Faster orienting to, but shorter dwell time on, angry faces during visual search Journal Article In: Developmental Science, pp. 1–8, 2023. Attention to emotional signals conveyed by others is critical for gleaning information about potential social partners and the larger social context. Children appear to detect social threats (e.g., angry faces) faster than non-threatening social signals (e.g., neutral faces). However, methods that rely on behavioral responses alone are limited in identifying different attentional processes involved in threat detection or responding. To address this question, we used a visual search paradigm to assess behavioral (i.e., reaction time to select a target image) and attentional (i.e., eye-tracking fixations, saccadic shifts, and dwell time) responses in children (ages 7–10 years old
Sotiris Plainis; Emmanouil Ktistakis; Miltiadis K. Tsilimbaris Presbyopia correction with multifocal contact lenses: Evaluation of silent reading performance using eye movements analysis Journal Article In: Contact Lens and Anterior Eye, vol. 46, no. 4, pp. 1–8, 2023. Purpose: Many activities of daily living rely on reading, so it is not surprising that complaints from presbyopes originate in reading difficulties rather than in visual acuity. Here, the effectiveness of presbyopia correction with multifocal contact lenses (CLs) is evaluated using an eye-fixation based method of silent reading performance. Methods: Visual performance of thirty presbyopic volunteers (age: 50 ± 5 yrs) was assessed monocularly and binocularly following 15 days of wear of monthly disposable CLs (AIR OPTIX™ plus HydraGlyde™, Alcon Laboratories) with: (a) single vision (SV) lenses – uncorrected for near, and (b) aspheric multifocal (MF) CLs. LogMAR acuity was measured with ETDRS charts. Reading performance was evaluated using standard IReST paragraphs displayed on a screen (0.4 logMAR print size at 40 cm distance). Eye movements were monitored with an infrared eye tracker (EyeLink II, SR Research Ltd). Data analysis included computation of reading speed, fixation duration, fixations per word and percentage of regressions. Results: Average reading speed was 250 ± 68 and 235 ± 70 wpm, binocularly and monocularly, with SV CLs, improving statistically significantly to 280 ± 67 (p = 0.002) and 260 ± 59 wpm (p = 0.01), respectively, with MF CLs. Moreover, fixation duration, fixations per word and the ex-Gaussian parameter of fixation duration, μ, showed a statistically significant improvement when reading with MF CLs, with fixation duration exhibiting the stronger correlation (r = 0.79, p < 0.001) with improvement in reading speed. The correlation between improvement in VA and reading speed was moderate (r = 0.46
Barbara L. Pitts; Michelle L. Eisenberg; Heather R. Bailey; Jeffrey M. Zacks Cueing natural event boundaries improves memory in people with post-traumatic stress disorder Journal Article In: Cognitive Research: Principles and Implications, vol. 8, no. 1, pp. 1–10, 2023. @article{Pitts2023, People with post-traumatic stress disorder (PTSD) often report difficulty remembering information in their everyday lives. Recent findings suggest that such difficulties may be due to PTSD-related deficits in parsing ongoing activity into discrete events, a process called event segmentation. Here, we investigated the causal relationship between event segmentation and memory by cueing event boundaries and evaluating the effect on subsequent memory in people with PTSD. People with PTSD (n = 38) and trauma-matched controls (n = 36) watched and remembered videos of everyday activities that were either unedited, contained visual and auditory cues at event boundaries, or contained visual and auditory cues at event middles. PTSD symptom severity varied substantially within both the group with a PTSD diagnosis and the control group. Memory performance did not differ significantly between groups, but people with high symptoms of PTSD remembered fewer details from the videos than those with lower symptoms of PTSD. Both those with PTSD and controls remembered more information from the videos in the event boundary cue condition than the middle cue or unedited conditions. This finding has important implications for translational work focusing on addressing everyday memory complaints in people with PTSD. |
Katharina Pittrich; Sascha Schroeder Reading vertically and horizontally mirrored text: An eye movement investigation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 2, pp. 271–283, 2023. @article{Pittrich2023, This study examined the cognitive processes involved in reading vertically and horizontally mirrored text. We tracked participants' eye movements while they were reading the Potsdam Sentence Corpus which consists of 144 sentences with target words that are manipulated for length and frequency. Sentences were presented in three different conditions: in the normal condition, text was presented with upright letters; in the vertical condition, each letter was flipped around its vertical (left-right) axis; and in the horizontal condition, letters were flipped around their horizontal (up-down) axis. Results show that reading was slowed down in both mirror conditions and that horizontal mirroring was particularly disruptive. In both conditions, we found larger effects of word length than in the normal condition indicating that participants read the sentences more serially and effortfully. Similarly, frequency effects were larger in both mirror conditions in later reading measures (gaze duration, go-past time, and total reading time) and particularly pronounced in the horizontal condition. This indicates that reading mirrored script involves a late checking mechanism that is particularly important for reading a horizontally mirrored script. Together, our findings demonstrate that mirroring affects both early visual identification and later linguistic processes. |
Aurélie Pistono; Robert J. Hartsuiker Can object identification difficulty be predicted based on disfluencies and eye-movements in connected speech? Journal Article In: PLoS ONE, vol. 18, pp. 1–18, 2023. @article{Pistono2023, In the current study, we asked whether delays in the earliest stages of picture naming elicit disfluency. To address this question, we used a network task, where participants describe the route taken by a marker through visually presented networks of objects. Additionally, given that disfluencies are arguably multifactorial, we combined this task with eye tracking, to be able to disentangle disfluency related to word preparation from other factors (e.g., stalling strategy). We used visual blurring, which hinders visual identification of the items and thereby slows down selection of a lexical concept. We tested the effect of this manipulation on disfluency production and visual attention. Blurriness did not lead to more disfluency on average and viewing times decreased with blurred pictures. However, multivariate pattern analyses revealed that a classifier could predict above chance, from the pattern of disfluency, whether each participant was about to name blurred or control pictures. Impeding the conceptual generation of a message therefore affected the pattern of disfluencies of each participant individually, but this pattern was not consistent from one participant to another. Additionally, some of the disfluency and eye-movement variables correlated with individual cognitive differences, in particular with inhibition. |
Alessandro Piras; Matteo Bertucco; Francesco Del Santo; Andrea Meoni; Milena Raffi Postural stability assessment in expert versus amateur basketball players during optic flow stimulation Journal Article In: Journal of Electromyography and Kinesiology, vol. 74, pp. 1–8, 2023. @article{Piras2023, We evaluated the role of visual stimulation on postural muscles and the changes in the center of pressure (CoP) during standing posture in expert and amateur basketball players. Participants were instructed to look at a fixation point presented on a screen during foveal, peripheral, and full field optic flow stimuli. Postural mechanisms and motor strategies were assessed by simultaneous recordings of stabilometric, oculomotor, and electromyographic data during visual stimulation. We found significant differences between experts and amateurs in the orientation of visual attention. Experts oriented attention to the right of their visual field, while amateurs to the bottom-right. The displacement in the CoP mediolateral direction showed that experts had a greater postural sway of the right leg, while amateurs on the left leg. The entropy-based data analysis of the CoP mediolateral direction exhibited a greater value in amateurs than in experts. The root-mean-square and the coactivation index analysis showed that experts activated mainly the right leg while amateurs the left leg. In conclusion, playing sports for years seems to have induced some strong differences in the standing posture between the right and left sides. Even during non-ecological visual stimulation, athletes maintain postural adaptations to counteract the body oscillation. |
Yair Pinto; Maria Chiara Villa; Sabrina Siliquini; Gabriele Polonara; Claudia Passamonti; Simona Lattanzi; Nicoletta Foschi; Mara Fabri; Edward H. F. de Haan Visual integration across fixation: Automatic processes are split but conscious processes remain unified in the split-brain Journal Article In: Frontiers in Human Neuroscience, vol. 17, pp. 1–8, 2023. @article{Pinto2023, The classic view holds that when “split-brain” patients are presented with an object in the right visual field, they will correctly identify it verbally and with the right hand. However, when the object is presented in the left visual field, the patient verbally states that he saw nothing but nevertheless identifies it accurately with the left hand. This dissociation suggests that perception, recognition and responding are separated in the two isolated hemispheres. However, there is now accumulating evidence that this dissociation is not absolute; for instance, split-brain patients are able to detect and localise stimuli anywhere in the visual field verbally and with either hand. In this study we set out to explore this cross-hemifield interaction in more detail with the split-brain patient DDC and carried out two experiments. The aim of these experiments is to unveil the unity of deliberate and automatic processing in the context of visual integration across hemispheres. Experiment 1 suggests that automatic processing is split in this context. In contrast, when the patient is forced to adopt a conscious, deliberate, approach, processing seemed to be unified across visual fields (and thus across hemispheres). First, we looked at the confidence that DDC has in his responses. The experiment involved a simultaneous “same” versus “different” matching task with two shapes presented either within one hemifield or across fixation. 
The results showed that we replicated the observation that split-brain patients cannot match across fixation, but more interestingly, that DDC was very confident in the across-fixation condition while performing at chance level. On the basis of this result, we hypothesised a two-route explanation. In healthy subjects, the visual information from the two hemifields is integrated in an automatic, unconscious fashion via the intact splenium, and this route has been severed in DDC. However, we know from previous experiments that some transfer of information remains possible. We proposed that this second route (perhaps less visual; more symbolic) may become apparent when he is forced to use a deliberate, consciously controlled approach. In an experiment where he is informed, by a second stimulus presented in one hemifield, what to do with the first stimulus that was presented in the same or the opposite hemifield, we showed that there was indeed interhemispheric transfer of information. We suggest that this two-route model may help in clarifying some of the controversial issues in split-brain research. |
Anastasia Pilat; Rebecca J. McLean; Anna Vanina; Robert A. Dineen; Irene Gottlob Clinical features and imaging characteristics in achiasmia Journal Article In: Brain Communications, vol. 5, no. 4, pp. 1–11, 2023. @article{Pilat2023, Achiasmia is a rare visual pathway maldevelopment with reduced decussation of the axons in the optic chiasm. Our aim was to investigate clinical characteristics, macular, optic nerve and brain morphology in achiasmia. A prospective, cross-sectional, observational study of 12 participants with achiasmia [8 males and 4 females; 29.6 ± 18.4 years (mean ± standard deviation)] and 24 gender-, age-, ethnicity- and refraction-matched healthy controls was done. Full ophthalmology assessment, a high-resolution spectral-domain optical coherence tomography of the macula and optic disc, five-channel visual-evoked responses, eye movement recordings and MRI scans of the brain and orbits were acquired. Achiasmia was confirmed in all 12 clinical participants by visual-evoked responses. Visual acuity in this group was 0.63 ± 0.19 and 0.53 ± 0.19 for the right and left eyes, respectively; most participants had mild refractive errors. All participants with achiasmia had see-saw nystagmus and no measurable stereo vision. Strabismus and abnormal head position were noted in 58% of participants. Optical coherence tomography showed optic nerve hypoplasia with associated foveal hypoplasia in four participants. In the remaining achiasmia participants, macular changes with significantly thinner paracentral inner segment (P = 0.002), wider pit (P = 0.04) and visual flattening of the ellipsoid line were found. MRI demonstrated chiasmatic aplasia in 3/12 (25%), chiasmatic hypoplasia in 7/12 (58%) and a subjectively normal chiasm in 2/12 (17%). Septo-optic dysplasia and severe bilateral optic nerve hypoplasia were found in three patients with chiasmatic aplasia/hypoplasia on MRI. 
In this largest series of achiasmia patients to date, we found for the first time that neuronal abnormalities occur already at the retinal level. Foveal changes, optic nerve hypoplasia and the midline brain anomaly suggest that these abnormalities could be part of the same spectrum, with different manifestations of events during foetal development occurring with varying severity. |
Zhongling Pi; Yi Zhang; Fangfang Zhu; Louqi Chen; Xin Guo; Jiumin Yang The mutual influence of an instructor's eye gaze and facial expression in video lectures Journal Article In: Interactive Learning Environments, vol. 31, no. 6, pp. 3664–3681, 2023. @article{Pi2023c, This study tested the mutual effects of the instructor's eye gaze and facial expression on students' eye movements (i.e. first fixation time to the slides, percentage dwell time on the slides, and percentage dwell time on the instructor), parasocial interaction, and learning performance in pre-recorded video lectures. Students (N = 118 undergraduate and graduate students) were assigned to watch one of four videos in a 2 (gaze: direct, guided) × 2 (facial expression: surprised, neutral) between-groups design. Contrary to our hypotheses, eye movement data showed that students who watched the video lecture with the instructor's guided gaze and surprised face showed longer first fixation time to the slides and lower dwell time on the slides; these students also had lower learning scores. Instructor eye gaze and facial expression did not influence students' ratings of parasocial interaction. Our results suggest that in reference to social cues during video lectures with slides, “more” is not necessarily “better.” The findings have practical implications for designing pre-recorded slide-based video lectures: An instructor is cautioned against using multiple social cues simultaneously, especially in video lectures in which the instructor and the visual learning materials compete for students' attention. |
Zhongling Pi; Yi Zhang; Ke Xu; Jiumin Yang Does an outline of contents promote learning from videos? A study on learning performance and engagement Journal Article In: Education and Information Technologies, vol. 28, no. 3, pp. 3493–3511, 2023. @article{Pi2023b, It is well known that outlines can help learners establish a conceptual framework that connects new knowledge with prior knowledge, and thus promote learning. However, it is unclear whether outlines are beneficial before learning from watching an educational video. We tested the effects of two goal setting strategies on learning from a video lecture. Learners (N = 87) were randomly assigned to one of three groups: read an instructor-generated outline before the video (n = 29); read the same outline, and based on it, generate their own outline of the key ideas before the video (n = 29); control group (n = 29). The study was conducted in an eye-tracking laboratory. Learners in the instructor-generated outline group reported higher learning engagement than those in the control group. Learners in the reading and generating outline group paid greater attention to the learning materials, and had higher learning performance scores, than those in the control group. The two strategy groups did not differ from each other on learning engagement or learning performance. The findings suggest that: To improve learning, instructors should ask learners to read an instructor-generated outline, and to generate their own outline based on the instructor's outline, before viewing the video lecture. |
Zhongling Pi; Qiuchen Yu; Yi Zhang; Yan Li; Hui Chen; Jiumin Yang Presenting points or rank: The impacts of leaderboard elements on English vocabulary learning through video lectures Journal Article In: Journal of Computer Assisted Learning, pp. 104–117, 2023. @article{Pi2023, Background: Leaderboards are a highly popular gamification component used in student learning to enhance motivation, attentional engagement, and learning performance. However, few studies have examined the effects of individual leaderboard elements on English vocabulary learning through video lectures. Objectives: The present study aimed to examine how different leaderboard elements (i.e., points and rank) may affect students' English vocabulary learning through video lectures. Methods: A total of 34 students were assigned to groups using different leaderboard elements in a counterbalanced order. Participants' motivation, eye movements, and learning performance were measured and analysed. Results and Conclusions: Students' leaderboard rank was shown to increase their motivation regardless of whether other elements were present. Eye movement tracking revealed that the presence of the leaderboard increased students' saccades between the questions and the options, and lengthened their dwell time on the learning materials while reducing their dwell time on the non-learning-related screen areas. Presenting students' rank alone also improved their learning performance. Implications: Our findings strongly support the use of video lectures for English vocabulary learning, with the following recommendations: (1) Instructors should present students' rank on the leaderboard to enhance students' motivation and engagement; (2) Instructors should present only the students' rank on the leaderboard to also enhance students' learning performance. |
Zhongling Pi; Wei Liu; Hongjuan Ling; Xingyu Zhang; Xiying Li Does an instructor's facial expressions override their body gestures in video lectures? Journal Article In: Computers and Education, vol. 193, pp. 1–16, 2023. @article{Pi2023a, While teaching, instructors will use unplanned, spontaneous facial expressions and body gestures to express their emotions. There is a growing consensus that an instructor's emotional expressions can trigger students' emotional and psychological responses, thus enhancing or inhibiting their learning in both face-to-face and online teaching contexts. However, little systematic research exists on which specific design features of an instructor's movements can induce emotions in video lectures. Three experiments were conducted in this study. Experiment 1 aimed to test the congruency/incongruency effects of an instructor's facial expressions (happy vs. bored) and body gestures (happy vs. bored) on student learning from video lectures in terms of students' emotions, motivation, cognitive load, and learning performance. Results of Experiment 1 showed that the instructor's happy facial expressions induced more positive emotions, enhanced motivation, and improved learning performance in students than the bored facial expressions, regardless of the instructor's body gestures. Experiment 2 sought to build upon the unexpected finding from Experiment 1 by increasing the frequency of body gestures, seeking evidence from both self-reports and eye movements. Results of Experiment 2 showed that the instructor's happy facial expressions enhanced students' learning performance when the instructor did not use body gestures, but not when they used increased body gestures. Experiment 3 was conducted to further expand upon findings from Experiment 1 and Experiment 2. Results of Experiment 3 confirmed the emotion-motivational and cognitive benefits of the instructor's happy facial expressions. 
The results have implications for designing features of instructors in video lectures: if instructors are visible, they should be encouraged to exhibit happy facial expressions, using body gestures less frequently or even avoiding them entirely. |
Christina U. Pfeuffer; Andrea Kiesel; Lynn Huestegge Similar proactive effect monitoring in free and forced choice action modes Journal Article In: Psychological Research, vol. 87, no. 1, pp. 226–241, 2023. @article{Pfeuffer2023, When our actions yield predictable consequences in the environment, our eyes often already saccade towards the locations we expect these consequences to appear at. Such spontaneous anticipatory saccades occur based on bi-directional associations between action and effect formed by prior experience. That is, our eye movements are guided by expectations derived from prior learning history. Anticipatory saccades presumably reflect a proactive effect monitoring process that prepares a later comparison of expected and actual effect. Here, we examined whether anticipatory saccades emerged under forced choice conditions when only actions but not target stimuli were predictive of future effects and whether action mode (forced choice vs. free choice, i.e., stimulus-based vs. stimulus-independent choice) affected proactive effect monitoring. Participants produced predictable visual effects on the left/right side via forced choice and free choice left/right key presses. Action and visual effect were spatially compatible in one half of the experiment and spatially incompatible in the other half. Irrespective of whether effects were predicted by target stimuli in addition to participants' actions, in both action modes, we observed anticipatory saccades towards the location of future effects. Importantly, neither the frequency, nor latency or amplitude of these anticipatory saccades significantly differed between forced choice and free choice action modes. Overall, our findings suggest that proactive effect monitoring of future action consequences, as reflected in anticipatory saccades, is comparable between forced choice and free choice action modes. |
Sonja Perkovic; Martin Schoemann; Carl Johan Lagerkvist; Jacob L. Orquin Covert attention leads to fast and accurate decision-making Journal Article In: Journal of Experimental Psychology: Applied, vol. 29, no. 1, pp. 78–94, 2023. @article{Perkovic2023, Decision-makers are regularly faced with more choice information than they can directly gaze at in a limited amount of time. Many theories assume that because decision-makers attend to information sequentially and overtly, that is, with direct gaze, they must respond to information overload by trading off between speed and decision accuracy. By reanalyzing five published studies, we show that participants, besides using overt attention, also use covert attention. That is, without being instructed to do so, participants attend to information without direct gaze to evaluate choice attributes that lead them to either choose the best or reject the worst option. We show that the use of covert attention is common for most participants and more so when information is easily identifiable in the peripheral visual field due to being large or visually salient. Covert attention is associated with faster decision times suggesting that participants might process multiple pieces of information simultaneously using distributed attention. Our findings highlight the importance of covert attention in decision-making and show how decision-makers may be gaining speed while retaining high levels of decision accuracy. We discuss how harnessing covert attention can benefit consumer decision-making of healthy and sustainable products. |
Alexis Pérez-Bellido; Eelke Spaak; Floris P. de Lange Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection Journal Article In: Communications Biology, vol. 6, no. 12, pp. 1–16, 2023. @article{PerezBellido2023, Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants that performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: Whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, we found that sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases. |
Oswaldo Pérez; Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco; Hugo Merchant Rhythmic tapping to a moving beat: Motion kinematics overrules natural gravity Journal Article In: iScience, vol. 26, no. 9, pp. 1–21, 2023. @article{Perez2023a, Beat induction is the cognitive ability that allows humans to listen to a regular pulse in music and move in synchrony with it. Although auditory rhythmic cues induce more consistent synchronization than flashing visual metronomes, this auditory-visual asymmetry can be canceled by visual moving stimuli. Here, we investigated whether the naturalness of visual motion or its kinematics could provide a synchronization advantage over flashing metronomes. Subjects were asked to tap in sync with visual metronomes defined by vertically accelerating/decelerating motion, either congruent or not with natural gravity; horizontally accelerating/decelerating motion; or flashing stimuli. We found that motion kinematics was the predominant factor determining rhythm synchronization, as accelerating moving metronomes in any cardinal direction produced more precise and predictive tapping than decelerating or flashing conditions. Our results support the notion that accelerating visual metronomes convey a strong sense of beat, as seen in the cueing movements of an orchestra director. |
A. I. Pérez; E. Schmidt; I. M. Tsimpli Inferential evaluation and revision in L1 and L2 text comprehension: An eye movement study Journal Article In: Bilingualism, pp. 1–14, 2023. @article{Perez2023, Text comprehension frequently demands the resolution of no longer plausible interpretations to build an accurate situation model, an ability that might be especially challenging during second language comprehension. Twenty-two native English speakers (L1) and twenty-two highly proficient non-native English speakers (L2) were presented with short narratives in English. Each text required the evaluation and revision of an initial prediction. Eye movements in the text and a comprehension sentence indicated less efficient performance in the L2 than in L1 comprehension, in both inferential evaluation and revision. Interestingly, these effects were determined by individual differences in inhibitory control and linguistic proficiency. Higher inhibitory control reduced the time rereading previous parts of the text (better evaluation) as well as revisiting the text before answering the sentence (better revision) in L2 comprehenders, whereas higher proficiency reduced the time in the sentence when the story was coherent, suggesting better general comprehension in both languages. |
Maud Pélissier; Dag Haugland; Bjørn Handeland; Beatrice Zitong Urland; Allison Wetterlin; Linda Wheeldon; Steven Frisson Competition between form-related words in bilingual sentence reading: Effects of language proficiency Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 2, pp. 384–401, 2023. @article{Pelissier2023, Sentence reading involves constant competition between lexical candidates. Previous research with monolinguals has shown that the neighbours of a read word are inhibited, making their retrieval as a subsequent target more difficult, but the duration of this interference may depend on reading skills. In this study, we examined neighbour priming effects in sentence reading among proficient Norwegian–English bilinguals reading in their L2. We investigated the effects of the distance between prime and target (short vs. long) and the nature of the overlap between the two words (beginning or end), and related these to differences in individual cognitive skills. Our results replicated the inhibition effects found in monolinguals, albeit slightly delayed. Interference between form-related words was affected by the L2 reading skills and, crucially, by the phonological decoding abilities of the bilingual reader. We discuss the results in light of competition models of bilingual reading as well as episodic memory accounts. |
Ana Pelegrino; Anna Luiza Guimaraes; Walter Sena; Nwabunwanne Emele; Linda Scoriels; Rogerio Panizzutti Dysregulated noradrenergic response is associated with symptom severity in individuals with schizophrenia Journal Article In: Frontiers in Psychiatry, vol. 14, pp. 1–9, 2023. @article{Pelegrino2023, Introduction: The locus coeruleus-noradrenaline (LC-NA) system is involved in a wide range of cognitive functions and may be altered in schizophrenia. A non-invasive method to indirectly measure LC activity is task-evoked pupillary response. Individuals with schizophrenia present reduced pupil dilation compared to healthy subjects, particularly when task demand increases. However, the extent to which alteration in LC activity contributes to schizophrenia symptomatology remains largely unexplored. We aimed to investigate the association between symptomatology, cognition, and noradrenergic response in individuals with schizophrenia. Methods: We assessed task-evoked pupil dilation during a pro- and antisaccade task in 23 individuals with schizophrenia and 28 healthy subjects. Results: Both groups showed similar preparatory pupil dilation during prosaccade trials, but individuals with schizophrenia showed significantly lower pupil dilation compared to healthy subjects in antisaccade trials. Importantly, reduced preparatory pupil dilation for antisaccade trials was associated with worse general symptomatology in individuals with schizophrenia. Discussion: Our findings suggest that changes in LC-NA activity – measured by task-evoked pupil dilation – when task demand increases is associated with schizophrenia symptoms. Interventions targeting the modulation of noradrenergic responses may be suitable candidates to reduce schizophrenia symptomatology. |
Marek A. Pedziwiatr; Elisabeth Hagen; Christoph Teufel Knowledge-driven perceptual organization reshapes information sampling via eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 3, pp. 408–427, 2023. @article{Pedziwiatr2023, Humans constantly move their eyes to explore the environment. However, how image-computable features and object representations contribute to eye-movement control is an ongoing debate. Recent developments in object perception indicate a complex relationship between features and object representations, where image-independent object knowledge generates objecthood by reconfiguring how feature space is carved up. Here, we adopt this emerging perspective, asking whether object-oriented eye movements result from gaze being guided by image-computable features, or by the fact that these features are bound into an object representation. We recorded eye movements in response to stimuli that initially appear as meaningless patches but are experienced as coherent objects once relevant object knowledge has been acquired. We demonstrate that fixations on identical images are more object-centered, less dispersed, and more consistent across observers once these images are organized into objects. Gaze guidance also showed a shift from exploratory information sampling to exploitation of object-related image areas. These effects were evident from the first fixations onwards. Importantly, eye movements were not fully determined by knowledge-dependent object representations but were best explained by the integration of these representations with image-computable features. Overall, the results show how information sampling via eye movements is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organization. |
Marco Pedrotti; Anne Françoise Chambrier; Paolo Ruggeri; Jasinta Dewi; Myrto Atzemian; Catherine Thevenot; Catherine Martinet; Philippe Terrier Raw eye tracking data of healthy adults reading aloud words, pseudowords and numerals Journal Article In: Data in Brief, vol. 49, pp. 1–6, 2023. @article{Pedrotti2023, This paper describes data from de Chambrier et al. (2023). The dataset [2] contains raw eye tracking data of 36 healthy adults, collected using an EyeLink 1000 (SR Research Ltd., ON, Canada) during an on-screen reading task. Participants read 96 items including words, pseudowords and numerals. Each item was presented at the center of the screen until the participant produced an oral response and pressed the keyboard's space bar. Part of the data were analyzed to extract key metrics such as fixation number, fixation duration, saccade number, and saccade amplitude identified by the EyeLink 1000 [1]. Reuse potential includes (but is not limited to) pupil diameter data analysis, identification of fixations and saccades using custom algorithms, and secondary analyses using participant demographics (age, gender) as independent variables. |
Salome Pedrett; Alain Chavaillaz; Andrea Frick Age-related changes in how 3.5- to 5.5-year-olds observe and imagine rotational object motion Journal Article In: Spatial Cognition & Computation, vol. 23, no. 2, pp. 83–111, 2023. @article{Pedrett2023, Mental representations of rotation were investigated in 3.5- to 5.5-year-olds (N = 74) using a multi-method approach. In a novel mental-rotation task, children were asked to choose one of two rotated shapes that would fit onto a counterpart. The developmental trajectory of mental rotation was compared to eye-tracking results on how the same children observed and anticipated circular object motion. On the mental-rotation task, children below age 4 performed above chance up to angles of 150°, and performance improved with age. Eye-tracking results indicated that mental representations of circular motion were largely developed by the age of 3.5 years. In contrast, perception of rotational motion and mental rotation of asymmetrical shapes continued to develop between 3.5 and 5.5 years of age. |
Candace E. Peacock; Praveena Singh; Taylor R. Hayes; Gwendolyn Rehrig; John M. Henderson Searching for meaning: Local scene semantics guide attention during natural visual search in scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 3, pp. 632–648, 2023. @article{Peacock2023a, Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes, and salience maps which represented the spatial distribution of conspicuous image features and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention across both studies. Here, meaning explained 58% and 63% of the theoretical ceiling of variance in attention across both studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely to be directed to higher salience regions than slower initial saccades, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search. |
Candace E. Peacock; Elizabeth H. Hall; John M. Henderson Objects are selected for attention based upon meaning during passive scene viewing Journal Article In: Psychonomic Bulletin & Review, vol. 30, no. 5, pp. 1874–1886, 2023. @article{Peacock2023, While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach where we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high meaning objects than low meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are, in part, selected by meaning for attentional selection during passive scene viewing. |
Srividya Pattisapu; Supratim Ray Stimulus-induced narrow-band gamma oscillations in humans can be recorded using open-hardware low-cost EEG amplifier Journal Article In: PLoS ONE, vol. 18, pp. 1–19, 2023. @article{Pattisapu2023, Stimulus-induced narrow-band gamma oscillations (30–70 Hz) in human electroencephalogram (EEG) have been linked to attentional and memory mechanisms and are abnormal in mental health conditions such as autism, schizophrenia and Alzheimer's Disease. However, since the absolute power in EEG decreases rapidly with increasing frequency following a “1/f” power law, and the gamma band includes line noise frequency, these oscillations are highly susceptible to instrument noise. Previous studies that recorded stimulus-induced gamma oscillations used expensive research-grade EEG amplifiers to address this issue. While low-cost EEG amplifiers have become popular in Brain Computer Interface applications that mainly rely on low-frequency oscillations (< 30 Hz) or steady-state-visually-evoked-potentials, whether they can also be used to measure stimulus-induced gamma oscillations is unknown. We recorded EEG signals using a low-cost, open-source amplifier (OpenBCI) and a traditional, research-grade amplifier (Brain Products GmbH), both connected to the OpenBCI cap, in male (N = 6) and female (N = 5) subjects (22–29 years) while they viewed full-screen static gratings that are known to induce two distinct gamma oscillations: slow and fast gamma, in a subset of subjects. While the EEG signals from OpenBCI were considerably noisier, we found that out of the seven subjects who showed a gamma response in Brain Products recordings, six showed a gamma response in OpenBCI as well. In spite of the noise in the OpenBCI setup, the spectral and temporal profiles of these responses in alpha (8–13 Hz) and gamma bands were highly correlated between OpenBCI and Brain Products recordings. These results suggest that low-cost amplifiers can potentially be used in stimulus-induced gamma response detection. |
Jagruti J. Pattadkal; Carrie Barr; Nicholas J. Priebe Ocular following eye movements in marmosets follow complex motion trajectories Journal Article In: eNeuro, vol. 10, no. 6, pp. 1–9, 2023. @article{Pattadkal2023, Ocular following eye movements help stabilize images on the retina and offer a window to study motion interpretation by visual circuits. We use these ocular following eye movements to study motion integration behavior in the marmosets. We characterize ocular following responses in the marmosets using different moving stimuli such as dot patterns, gratings, and plaids. Marmosets track motion along different directions and exhibit spatial frequency and speed sensitivity, which closely matches the sensitivity reported in neurons from their motion-selective area MT. Marmosets are also able to track the integrated motion of plaids, with tracking direction consistent with an intersection of constraints model of motion integration. Marmoset ocular following responses are similar to responses in macaques and humans with certain species-specific differences in peak sensitivities. Such motion-sensitive eye movement behavior in combination with direct access to cortical circuitry makes the marmoset model well suited to study the neural basis of motion integration. |
Aashay M. Patel; Katsuhisa Kawaguchi; Lenka Seillier; Hendrikje Nienborg In: European Journal of Neuroscience, vol. 57, no. 8, pp. 1368–1382, 2023. @article{Patel2023, Sensory processing is influenced by neuromodulators such as serotonin, thought to relay behavioural state. Recent work has shown that the modulatory effect of serotonin itself differs with the animal's behavioural state. In primates, including humans, the serotonin system is anatomically important in the primary visual cortex (V1). We previously reported that in awake fixating macaques, serotonin reduces the spiking activity by decreasing response gain in V1. But the effect of serotonin on the local network is unknown. Here, we simultaneously recorded single-unit activity and local field potentials (LFPs) while iontophoretically applying serotonin in V1 of alert monkeys fixating on a video screen for juice rewards. The reduction in spiking response we observed previously is the opposite of the known increase of spiking activity with spatial attention. Conversely, in the local network (LFP), the application of serotonin resulted in changes mirroring the local network effects of previous reports in macaques directing spatial attention to the receptive field. It reduced the LFP power and the spike–field coherence, and the LFP became less predictive of spiking activity, consistent with reduced functional connectivity. We speculate that together, these effects may reflect the sensory side of a serotonergic contribution to quiet vigilance: The lower gain reduces the salience of stimuli to suppress an orienting reflex to novel stimuli, whereas at the network level, visual processing is in a state comparable to that of spatial attention. |
Olga Parshina; Nina Zdorova; Victor Kuperman Cross-linguistic comparison in reading sentences of uniform length: Visual–perceptual demands override readers' experience Journal Article In: Quarterly Journal of Experimental Psychology, pp. 1–9, 2023. @article{Parshina2023, Accurate saccadic targeting is critical for efficient reading and is driven by the sensory input under the eye-gaze. Yet whether a reader's experience with the distributional properties of their written language also influences saccadic targeting is an open debate. This study of Russian sentence reading follows Cutter et al.'s (2017) study in English and presents readers with sentences consisting of words of the same length. We hypothesised that if the readers' experience matters as per discrete control account, Russian readers would produce longer saccades and farther landing positions than the ones produced by English readers. On the contrary, if the saccadic targeting is primarily driven by the immediate perceptual demands that override readers' experience as per the dynamic adjustment account, the saccades of Russian and English readers would be of the same length, resulting in similar landing positions. The results in both Cutter et al. and the present study provided evidence for the latter account: Russian readers showed rapid and accurate adjustment of saccade lengths and landing positions to the highly constrained input. Crucially, the saccade lengths and landing positions did not differ between English and Russian readers even in the cross-linguistically length-matched stimuli. |
Ashley C. Parr; Heidi C. Riek; Brian C. Coe; Giovanna Pari; Mario Masellis; Connie Marras; Douglas P. Munoz Genetic variation in the dopamine system is associated with mixed-strategy decision-making in patients with Parkinson's disease Journal Article In: European Journal of Neuroscience, vol. 58, no. 12, pp. 4523–4544, 2023. @article{Parr2023, Decision-making during mixed-strategy games requires flexibly adapting choice strategies in response to others' actions and dynamically tracking outcomes. Such decisions involve diverse cognitive processes, including reinforcement learning, which are affected by disruptions to the striatal dopamine system. We therefore investigated how genetic variation in dopamine function affected mixed-strategy decision-making in Parkinson's disease (PD), which involves striatal dopamine pathology. Sixty-six PD patients (ages 49–85, Hoehn and Yahr Stages 1–3) and 22 healthy controls (ages 54–75) competed in a mixed-strategy game where successful performance depended on minimizing choice biases (i.e., flexibly adapting choices trial by trial). Participants also completed a fixed-strategy task that was matched for sensory input, motor outputs and overall reward rate. Factor analyses were used to disentangle cognitive from motor aspects within both tasks. Using a within-subject, multi-centre design, patients were examined on and off dopaminergic therapy, and genetic variation was examined via a multilocus genetic profile score representing the additive effects of three single nucleotide polymorphisms (SNPs) that influence dopamine transmission: rs4680 (COMT Val158Met), rs6277 (C957T) and rs907094 (encoding DARPP-32). PD and control participants displayed comparable mixed-strategy choice behaviour (overall); however, PD patients with genetic profile scores indicating higher dopamine transmission showed improved performance relative to those with low scores. Exploratory follow-up tests across individual SNPs revealed better performance in individuals with the C957T polymorphism, reflecting higher striatal D2/D3 receptor density. Importantly, genetic variation modulated cognitive aspects of performance, above and beyond motor function, suggesting that genetic variation in dopamine signalling may underlie individual differences in cognitive function in PD. |
Samantha Parker; Richard Ramsey Exploring the relationship between oculomotor preparation and gaze-cued covert shifts in attention Journal Article In: Journal of Vision, vol. 23, no. 3, pp. 1–18, 2023. @article{Parker2023a, Eye gaze plays dual perceptual and social roles in everyday life. Gaze allows us to select information, while also indicating to others where we are attending. There are situations, however, where revealing the locus of our attention is not adaptive, such as when playing competitive sports or confronting an aggressor. It is in these circumstances that covert shifts in attention are assumed to play an essential role. Despite this assumption, few studies have explored the relationship between covert shifts in attention and eye movements within social contexts. In the present study, we explore this relationship using the saccadic dual-task in combination with the gaze-cueing paradigm. Across two experiments, participants prepared an eye movement or fixated centrally. At the same time, spatial attention was cued with a social (gaze) or non-social (arrow) cue. We used an evidence accumulation model to quantify the contributions of both spatial attention and eye movement preparation to performance on a Landolt gap detection task. Importantly, this computational approach allowed us to extract a measure of performance that could unambiguously compare covert and overt orienting in social and non-social cueing tasks for the first time. Our results revealed that covert and overt orienting make separable contributions to perception during gaze-cueing, and that the relationship between these two types of orienting was similar for both social and non-social cueing. Therefore, our results suggest that covert and overt shifts in attention may be mediated by independent underlying mechanisms that are invariant to social context. |
Adam J. Parker; Milla Räsänen; Timothy J. Slattery What is the optimal position of low-frequency words across line boundaries? An eye movement investigation Journal Article In: Applied Cognitive Psychology, vol. 37, no. 1, pp. 161–173, 2023. @article{Parker2023, When displaying text on a page or a screen, only a finite number of characters can be presented on a single line. If the text exceeds that finite value, then text wrapping occurs. Often this process results in longer, more difficult to process words being positioned at the start of a line. We conducted an eye movement study to examine how this artefact of text wrapping affects passage reading. This allowed us to answer the question: should word difficulty be used when determining line breaks? Thirty-nine participants read 20 passages where low-frequency target words were either line-initial or line-final. There was no statistically reliable effect of our manipulation on passage reading time or comprehension despite several effects at a local level. Regarding our primary research question, the evidence suggests that word difficulty may not need to be accounted for when determining line breaks and assigning words to new lines. |
Soon Young Park; Kenneth Holmqvist; Diederick C. Niehorster; Ludwig Huber; Zsófia Virányi How to improve data quality in dog eye tracking Journal Article In: Behavior Research Methods, vol. 55, no. 4, pp. 1513–1536, 2023. @article{Park2023a, Pupil–corneal reflection (P–CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated if and how the morphology of dogs might interfere with tracking of P–CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. For this aim, we have conducted an eye-tracking experiment with dogs and humans, and investigated incidences of tracking interference, compared how they blinked, and examined how differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' face and eye can interfere with tracking methods of the systems, and dogs blink less often but their blinks are longer. Importantly, the lower quality of dog data led to larger differences in how two different event detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to choice of algorithm in dog than human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188–204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations help future dog eye-tracking studies acquire quality data to enable robust comparisons of visual cognition between dogs and humans. |
JeongJun Park; Seolmin Kim; Hyung Goo R. Kim; Joonyeol Lee Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex Journal Article In: Science Advances, vol. 9, no. 27, pp. 1–20, 2023. @article{Park2023, Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine the neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with prior expectation of the visual target's motion direction. Prior expectations discriminately reduce the MT neural responses depending on their preferred directions, when the sensory evidence is weak. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes. |
Nadia Paraskevoudi; Iria SanMiguel Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli Journal Article In: Psychophysiology, vol. 60, no. 1, pp. 1–25, 2023. @article{Paraskevoudi2023, Actions modulate sensory processing by attenuating responses to self- compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet, suppression has been also found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing for self-generated stimuli raises the possibility of producing effects also on memory for these stimuli; however, evidence remains mixed as to the direction of the effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant, but unpredictable sounds, using a combination of self-generation and memory recognition tasks concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to respond which one was present before. We measured memory bias and memory performance by having sequences where either both or only one of the test sounds were presented at encoding, respectively. Results showed worse memory performance – but no differences in memory bias –, attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable sounds. |
Ilenia Paparella; Islay Campbell; Roya Sharifpour; Elise Beckers; Alexandre Berger; Jose Fermin Balda Aizpurua; Ekaterina Koshmanova; Nasrin Mortazavi; Puneet Talwar; Christian Degueldre; Laurent Lamalle; Siya Sherif; Christophe Phillips; Pierre Maquet; Gilles Vandewalle Light modulates task-dependent thalamo-cortical connectivity during an auditory attentional task Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–10, 2023. @article{Paparella2023, Exposure to blue wavelength light stimulates alertness and performance by modulating a widespread set of task-dependent cortical and subcortical areas. How light affects the crosstalk between brain areas to trigger this stimulating effect is not established. Here we record the brain activity of 19 healthy young participants (24.05±2.63; 12 women) while they complete an auditory attentional task in darkness or under an active (blue-enriched) or a control (orange) light, in an ultra-high-field 7 Tesla MRI scanner. We test if light modulates the effective connectivity between an area of the posterior associative thalamus, encompassing the pulvinar, and the intraparietal sulcus (IPS), key areas in the regulation of attention. We find that only the blue-enriched light strengthens the connection from the posterior thalamus to the IPS. To the best of our knowledge, our results provide the first empirical data supporting that blue wavelength light affects ongoing non-visual cognitive activity by modulating task-dependent information flow from subcortical to cortical areas. |
Shubham Pandey; Rashmi Gupta Implicit angry faces interfere with response inhibition and response adjustment Journal Article In: Cognition and Emotion, vol. 37, no. 2, pp. 303–319, 2023. @article{Pandey2023a, Cognitive control enables people to adjust their thoughts and actions according to the current task demands. Response inhibition and response adjustment are two key aspects of cognitive control. Here, we examined how the implicit processing of emotional information influences these two functions with the help of the double-step saccade task. Each trial had either a single target or two sequential targets. Upon a single target onset, participants were required to make a quick saccade, but upon two target onsets, participants were instructed to inhibit their initial saccades and redirect their gaze to the second target. In three experiments, we manipulated the emotional information of the first and second targets. We found that irrelevant emotional information of the first target impaired response inhibition compared to non-emotional information (geometric shapes) of the first target. When non-emotional information (geometric shape) came as the first target, irrelevant angry emotional faces as the second target interfered with both response inhibition and response adjustment compared to irrelevant happy and neutral faces. We explain these results with previous findings that processing faces with irrelevant angry facial expressions takes up many attentional resources, leaving fewer resources available for ongoing activities such as response inhibition and response adjustment. |
Ashim Pandey; Sujaya Neupane; Srijana Adhikary; Keepa Vaidya; Christopher C. Pack Cortical visual impairment at birth can be improved rapidly by vision training in adulthood: A case study Journal Article In: Restorative Neurology and Neuroscience, vol. 40, no. 4-6, pp. 261–270, 2023. @article{Pandey2023, Background: Cortical visual impairment (CVI) is a severe loss of visual function caused by damage to the visual cortex or its afferents, often as a consequence of hypoxic insults during birth. It is one of the leading causes of vision loss in children, and it is most often permanent. Objective: Several studies have demonstrated limited vision restoration in adults who trained on well-controlled psychophysical tasks, after acquiring CVI late in life. Other studies have shown improvements in children who underwent vision training. However, little is known about the prospects for the large number of patients who acquired CVI at birth but received no formal therapy as children. Methods: We, therefore, conducted a proof-of-principle study in one CVI patient long after the onset of cortical damage (age 18), to test the training speed, efficacy and generalizability of vision rehabilitation using protocols that had previously proven successful in adults. The patient trained at home and in the laboratory, on a psychophysical task that required discrimination of complex motion stimuli presented in the blind field. Visual function was assessed before and after training, using perimetric measures, as well as a battery of psychophysical tests. Results: The patient showed remarkably rapid improvements on the training task, with performance going from chance to 80% correct over the span of 11 sessions. With further training, improved vision was found for untrained stimuli and for perimetric measures of visual sensitivity. Some, but not all, of these performance gains were retained upon retesting after one year. Conclusions: These results suggest that existing vision rehabilitation programs can be highly effective in adult patients who acquired CVI at a young age. Validation with a large sample size is critical, and future work should also focus on improving the usability and accessibility of these programs for younger patients. |
Yali Pan; Tzvetan Popov; Steven Frisson; Ole Jensen Saccades are locked to the phase of alpha oscillations during natural reading Journal Article In: PLoS Biology, vol. 21, no. 1, pp. 1–19, 2023. @article{Pan2023b, We saccade 3 to 5 times per second when reading. However, little is known about the neuronal mechanisms coordinating the oculomotor and visual system during such rapid processing. Here, we ask if brain oscillations play a role in the temporal coordination of the visuomotor integration. We simultaneously acquired MEG and eye-tracking data while participants read sentences silently. Every sentence was embedded with a target word of either high or low lexical frequency. Our key finding demonstrated that saccade onsets were locked to the phase of alpha oscillations (8 to 13 Hz), and in particular, for saccades towards low frequency words. Source modelling demonstrated that the alpha oscillations to which the saccades were locked were generated in the right visual motor cortex (BA 7). Our findings suggest that the alpha oscillations serve to time the processing between the oculomotor and visual systems during natural reading, and that this coordination becomes more pronounced for demanding words. |
Yafeng Pan; Mikkel C. Vinding; Lei Zhang; Daniel Lundqvist; Andreas Olsson A brain-to-brain mechanism for social transmission of threat learning Journal Article In: Advanced Science, vol. 10, no. 28, pp. 1–18, 2023. @article{Pan2023a, Survival and adaptation in environments require swift and efficacious learning about what is dangerous. Across species, much of such threat learning is acquired socially, e.g., through the observation of others' (“demonstrators'”) defensive behaviors. However, the specific neural mechanisms responsible for the integration of information shared between demonstrators and observers remain largely unknown. This dearth of knowledge is addressed by performing magnetoencephalography (MEG) neuroimaging in demonstrator-observer dyads. A set of stimuli are first shown to a demonstrator whose defensive responses are filmed and later presented to an observer, while neuronal activity is recorded sequentially from both individuals who never interacted directly. These results show that brain-to-brain coupling (BtBC) in the fronto-limbic circuit (including insula, ventromedial, and dorsolateral prefrontal cortex) within demonstrator-observer dyads predict subsequent expressions of learning in the observer. Importantly, the predictive power of BtBC magnifies when a threat is imminent to the demonstrator. Furthermore, BtBC depends on how observers perceive their social status relative to the demonstrator, likely driven by shared attention and emotion, as bolstered by dyadic pupillary coupling. Taken together, this study describes a brain-to-brain mechanism for social threat learning, involving BtBC, which reflects social relationships and predicts adaptive, learned behaviors. |
Jinger Pan; Ming Yan The perceptual span in traditional Chinese Journal Article In: Language and Cognition, pp. 1–14, 2023. @article{Pan2023, The present study aimed at examining the perceptual span, the visual field area for information extraction within a single fixation, during the reading of traditional Chinese sentences. Native traditional Chinese readers' eye-movements were recorded as they read sentences that were presented using a gaze-contingent technique, in which legible text was restricted within a window that moved in synchrony with the eyes, while characters outside the window were masked. Comparisons of the window conditions with a baseline condition in which no viewing constraint was applied showed that when the window revealed one previous character and three upcoming characters around the current fixation, reading speed and oculomotor activities reached peak performance. Compared to previous results with simplified Chinese reading, based on a similar set of materials, traditional Chinese exhibits a reduction of the perceptual span. We suggest that the visual complexity of a writing system likely influences the perceptual span during reading. |
Jinger Pan; Aiping Wang; Catherine McBride; Jeung Ryeul Cho; Ming Yan Online assessment of parafoveal morphological processing/awareness during reading among Chinese and Korean adults Journal Article In: Scientific Studies of Reading, vol. 27, no. 3, pp. 232–252, 2023. @article{Pan2023c, Purpose: The present study tested parafoveal morphological processing during sentence reading with two eye-tracking experiments, making use of an implicit measurement of morphological awareness. In Chinese and Korean, each character form typically corresponds to multiple mental lexicons, leading to morphological ambiguity. Method: Using the gaze-contingent boundary paradigm, we manipulated the relation between the homographic parafoveal preview morphemes and the target words in Chinese and Korean, respectively, in two experiments. We tested 57 Chinese and 45 Korean university students. Together with baseline conditions in which the previews were either identical or unrelated to the target, we had two critical conditions in which the homographs shared/did not share the same morphemic meaning (i.e., same morpheme/different morpheme) with the target morpheme. Results: Across the two experiments, the differences between the same and different morpheme conditions in a number of eye movement indices were significant, consistently showing that appropriate morpho-semantic information facilitates lexical processing. The different-morpheme previews facilitated the target word processing in Chinese but not in Korean reading. Conclusion: These findings suggest that morphemic meanings are activated early on during word recognition in Chinese, a logographic orthography, and Korean Hangul, a phonologically transparent writing system, before the word is fixated upon. |
Helena Palmieri; Antonio Fernández; Marisa Carrasco Microsaccades and temporal attention at different locations of the visual field Journal Article In: Journal of Vision, vol. 23, no. 5, pp. 1–17, 2023. @article{Palmieri2023, Temporal attention, the prioritization of information at specific points in time, improves performance in behavioral tasks but cannot ameliorate the perceptual asymmetries that exist across the visual field. That is, even after attentional deployment, performance is better along the horizontal than vertical meridian and worse at the upper than lower vertical meridian. Here we asked whether and how microsaccades—tiny fixational eye movements—could mirror or alternatively attempt to compensate for these performance asymmetries by assessing temporal profiles and direction of microsaccades as a function of visual field location. Observers were asked to report the orientation of one of two targets presented at different time points, in one of three blocked locations (fovea, right horizontal meridian, upper vertical meridian). We found the following: (1) Microsaccade occurrence did not affect either task performance or the magnitude of the temporal attention effect. (2) Temporal attention modulated the microsaccade temporal profiles, and this modulation varied with polar angle location. At all locations, microsaccade rates were significantly more suppressed in anticipation of the target when temporally cued than in the neutral condition. Moreover, microsaccade rates were more suppressed during target presentation in the fovea than in the right horizontal meridian. (3) Across locations and attention conditions, there was a pronounced bias toward the upper hemifield. 
Overall, these results reveal that temporal attention benefits performance similarly around the visual field, microsaccade suppression is more pronounced for attention than expectation (neutral trials) across locations, and the directional bias toward the upper hemifield could reflect an attempt to compensate for typical poor performance at the upper vertical meridian. |
Arthur Pabst; Zoé Bollen; Nicolas Masson; Pauline Billaux; Philippe Timary; Pierre Maurage An eye-tracking study of biased attentional processing of emotional faces in severe alcohol use disorder Journal Article In: Journal of Affective Disorders, vol. 323, pp. 778–787, 2023. @article{Pabst2023, Background: Social cognition impairments in severe alcohol use disorder (SAUD) are increasingly established. However, fundamental aspects of social cognition, and notably the attentional processing of socio-affective information, remain unexplored, limiting our understanding of underlying mechanisms. Here, we determined whether patients with SAUD show attentional biases to specific socio-affective cues, namely emotional faces. Method: In a modified dot-probe paradigm, 30 patients with SAUD and 30 demographically matched healthy controls (HC) were presented with pairs of neutral-emotional (angry, disgusted, happy, sad) faces while having their eye movements recorded. Indices of early/automatic (first fixations, latency to first fixations) and later/controlled (number of fixations, dwell-time) processes were computed. Results: Patients with SAUD did not differ from HC in their attention to angry/disgusted/sad vs. neutral faces. However, patients with SAUD fixated/dwelled less on happy vs. neutral faces in the first block of stimuli than HC, who presented an attentional bias to happy faces. Limitations: Sample size was determined to detect medium-to-large effects, and subtler ones may have been missed. Further, our cross-sectional design provides no explanation as to whether the evidenced biases precede or are a consequence of SAUD. Conclusions: These results extend the social cognition literature in SAUD to the attentional domain, by evidencing the absence of a controlled attentional bias toward positive social cues in SAUD. This may reflect reduced sensitivity to social reward and could contribute to higher order social cognition difficulties and social dysfunction. |
Melisa Menceloglu; Ken Nakayama; Joo-Hyun Song Radial bias alters high-level motion perception Journal Article In: Vision Research, vol. 209, pp. 1–8, 2023. @article{Menceloglu2023, The visual system involves various orientation and visual field anisotropies, one of which is a preference for radial orientations and motion directions. By radial, we mean those directions coursing symmetrically outward from the fovea into the periphery. This bias stems from anatomical and physiological substrates in the early visual system. We recently reported that this low-level visual anisotropy can alter perceived object orientation. Here, we report that radial bias can also alter another higher-level system, the perceived direction of apparent motion. We presented a bistable apparent motion quartet in the center of the screen while participants fixated on various locations around the quartet. Participants (N = 22) were strongly biased to see the motion direction that was radial with respect to their fixation, controlling for any biases with center fixation. This was observed using a vertical-horizontal quartet as well as an oblique quartet (45° rotated quartet). The latter allowed us to rule out the contribution of the hemisphere effect where motion across the midline is perceived less often. These results extend our earlier findings on perceived object orientation, showing that low-level structural aspects of the visual system alter yet another higher-level visual process, that of apparent motion perception. |
Natalia Melnik; Stefan Pollmann Efficient versus inefficient visual search as training for saccadic re-referencing to an extrafoveal location Journal Article In: Journal of Vision, vol. 23, no. 10, pp. 1–13, 2023. @article{Melnik2023, Central vision loss is one of the leading causes of visual impairment in the elderly and its frequency is increasing. Without formal training, patients adopt an unaffected region of the retina as a new fixation location, a preferred retinal locus (PRL). However, learning to use the PRL as a reference location for saccades, that is, saccadic re-referencing, is protracted and time-consuming. Recent studies showed that training with visual search tasks can expedite this process. However, visual search can be driven by salient external features, leading to efficient search, or by internal goals, usually leading to inefficient, attention-demanding search. We compared saccadic re-referencing training in the presence of a simulated central scotoma with either an efficient or an inefficient visual search task. Participants had to respond by fixating the target with an experimenter-defined retinal location in the lower visual field. We observed that comparable relative training gains were obtained in both tasks for a number of behavioral parameters, with higher training gains for the trained task, compared to the untrained task. The transfer to the untrained task was only observed for some parameters. Our findings thus confirm and extend previous research showing comparable efficiency for exogenously and endogenously driven visual search tasks for saccadic re-referencing training. Our results also show that transfer of training gains to related tasks may be limited and needs to be tested for saccadic re-referencing-training paradigms to assess its suitability as a training tool for patients. |
Lucia Melloni; Liad Mudrik; Michael Pitts; Katarina Bendtz; Oscar Ferrante; Urszula Gorska; Rony Hirschhorn; Aya Khalaf; Csaba Kozma; Alex Lepauvre; Ling Liu; David Mazumder; David Richter; Hao Zhou; Hal Blumenfeld; Melanie Boly; David J. Chalmers; Sasha Devore; Francis Fallon; Floris P. Lange; Ole Jensen; Gabriel Kreiman; Huan Luo; Theofanis I. Panagiotaropoulos; Stanislas Dehaene; Christof Koch; Giulio Tononi An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory Journal Article In: PLoS ONE, vol. 18, pp. 1–28, 2023. @article{Melloni2023, The relationship between conscious experience and brain activity has intrigued scientists and philosophers for centuries. In the last decades, several theories have suggested different accounts for these relationships. These theories have developed in parallel, with little to no cross-talk among them. To advance research on consciousness, we established an adversarial collaboration between proponents of two of the major theories in the field, Global Neuronal Workspace and Integrated Information Theory. Together, we devised and preregistered two experiments that test contrasting predictions of these theories concerning the location and timing of correlates of visual consciousness, which have been endorsed by the theories' proponents. Predicted outcomes should either support, refute, or challenge these theories. Six theory-impartial laboratories will follow the study protocol specified here, using three complementary methods: Functional Magnetic Resonance Imaging (fMRI), Magneto-Electroencephalography (M-EEG), and intracranial electroencephalography (iEEG). The study protocol will include built-in replications, both between labs and within datasets. 
Through this ambitious undertaking, we hope to provide decisive evidence in favor or against the two theories and clarify the footprints of conscious visual perception in the human brain, while also providing an innovative model of large-scale, collaborative, and open science practice. |
Sarah Nadine Meissner; Marc Bächinger; Sanne Kikkert; Jenny Imhof; Silvia Missura; Manuel Carro Dominguez; Nicole Wenderoth Self-regulating arousal via pupil-based biofeedback Journal Article In: Nature Human Behaviour, pp. 1–25, 2023. @article{Meissner2023, The brain's arousal state is controlled by several neuromodulatory nuclei known to substantially influence cognition and mental well-being. Here we investigate whether human participants can gain volitional control of their arousal state using a pupil-based biofeedback approach. Our approach inverts a mechanism suggested by previous literature that links activity of the locus coeruleus, one of the key regulators of central arousal and pupil dynamics. We show that pupil-based biofeedback enables participants to acquire volitional control of pupil size. Applying pupil self-regulation systematically modulates activity of the locus coeruleus and other brainstem structures involved in arousal control. Furthermore, it modulates cardiovascular measures such as heart rate, and behavioural and psychophysiological responses during an oddball task. We provide evidence that pupil-based biofeedback makes the brain's arousal system accessible to volitional control, a finding that has tremendous potential for translation to behavioural and clinical applications across various domains, including stress-related and anxiety disorders. |
Hiu Mei Chow; Miriam Spering Eye movements during optic flow perception Journal Article In: Vision Research, vol. 204, pp. 1–11, 2023. @article{MeiChow2023, Optic flow is an important visual cue for human perception and locomotion and naturally triggers eye movements. Here we investigate whether the perception of optic flow direction is limited or enhanced by eye movements. In Exp. 1, 23 human observers localized the focus of expansion (FOE) of an optic flow pattern; in Exp. 2, 18 observers had to detect brief visual changes at the FOE. Both tasks were completed during free viewing and fixation conditions while eye movements were recorded. Task difficulty was varied by manipulating the coherence of radial motion from the FOE (4 %-90 %). During free viewing, observers tracked the optic flow pattern with a combination of saccades and smooth eye movements. During fixation, observers nevertheless made small-scale eye movements. Despite differences in spatial scale, eye movements during free viewing and fixation were similarly directed toward the FOE (saccades) and away from the FOE (smooth tracking). Whereas FOE localization sensitivity was not affected by eye movement instructions (Exp. 1), observers' sensitivity to detect brief changes at the FOE was 27 % higher (p <.001) during free-viewing compared to fixation (Exp. 2). This performance benefit was linked to reduced saccade endpoint errors, indicating the direct beneficial impact of foveating eye movements on performance in a fine-grain perceptual task, but not during coarse perceptual localization. |
Mishika Mehrotra; Sebastian P. Dys; Tina Malti Children's sympathy moderates the link between their attentional orientation and ethical guilt Journal Article In: British Journal of Developmental Psychology, vol. 41, no. 3, pp. 276–290, 2023. @article{Mehrotra2023, This study examined how children's attentional orientation towards environmental cues, dispositional sympathy and inhibitory control were associated with their ethical guilt. Participants were 4- and 6-year-old children (N = 211; 55% male) from ethnically diverse backgrounds. To assess ethical guilt, children were presented with two vignettes depicting ethical violations and reported how they would feel and why, if they had committed those transgressions. Using eye tracking, we calculated attentional orientation as the percentage of time children attended to other-oriented (i.e., victim) minus self-serving (i.e., object gained by transgressing) cues during these vignettes. Children also reported on their sympathy and completed an observational measure of inhibitory control. Although main effects were not significant, sympathy moderated the link between attentional orientation and ethical guilt: attentional orientation was positively associated with ethical guilt for children with low levels of sympathy but had no effect among those high in sympathy. These findings suggest that practices centred on prompting children to attend to other-oriented cues – and away from self-serving ones – may be effective particularly for children who are generally less sympathetic. |
Hannah Mechtenberg; Cristal Giorio; Emily B. Myers Pupil dilation reflects perceptual priorities during a receptive speech task Journal Article In: Ear & Hearing, pp. 1–16, 2023. @article{Mechtenberg2023, Objectives: The listening demand incurred by speech perception fluctuates in normal conversation. At the acoustic-phonetic level, natural variation in pronunciation acts as speedbumps to accurate lexical selection. Any given utterance may be more or less phonetically ambiguous—a problem that must be resolved by the listener to choose the correct word. This becomes especially apparent when considering two common speech registers—clear and casual—that have characteristically different levels of phonetic ambiguity. Clear speech prioritizes intelligibility through hyperarticulation, which results in less ambiguity at the phonetic level, while casual speech tends to have a more collapsed acoustic space. We hypothesized that listeners would invest greater cognitive resources while listening to casual speech to resolve the increased amount of phonetic ambiguity, as compared with clear speech. To this end, we used pupillometry as an online measure of listening effort during perception of clear and casual continuous speech in two background conditions: quiet and noise. Design: Forty-eight participants performed a probe detection task while listening to spoken, nonsensical sentences (masked and unmasked) while pupil size was recorded. Pupil size was modeled using growth curve analysis to capture the dynamics of the pupil response as the sentence unfolded. Results: Pupil size during listening was sensitive to the presence of noise and speech register (clear/casual). Unsurprisingly, listeners had overall larger pupil dilations during speech perception in noise, replicating earlier work. The pupil dilation pattern for clear and casual sentences was considerably more complex. Pupil dilation during clear speech trials was slightly larger than for casual speech, across quiet and noisy backgrounds. Conclusions: We suggest that listener motivation could explain the larger pupil dilations to clearly spoken speech. We propose that, bounded by the context of this task, listeners devoted more resources to perceiving the speech signal with the greatest acoustic/phonetic fidelity. Further, we unexpectedly found systematic differences in pupil dilation preceding the onset of the spoken sentences. Together, these data demonstrate that the pupillary system is not merely reactive but also adaptive—sensitive to both task structure and listener motivation to maximize accurate perception in a limited resource system. |
Mary E. McNamara; Kean J. Hsu; Bryan A. McSpadden; Semeon Risom; Jason Shumake; Christopher G. Beevers Beyond face value: Assessing the factor structure of an eye-tracking based attention bias task Journal Article In: Cognitive Therapy and Research, vol. 47, no. 5, pp. 772–787, 2023. @article{McNamara2023, Background: Behavioral measurement of attention bias for emotional stimuli has traditionally ignored whether trial-level task data have a strong enough general factor to justify a unidimensional measurement model. This is surprising, as unidimensionality across trials is an important assumption for computing bias scores. Methods: In the present study, we assess the psychometric properties of a free-viewing, eye-tracking task measuring attention for emotional stimuli. Undergraduate students (N = 130) viewed two counterbalanced blocks of 4 × 4 matrices of sad/neutral and happy/neutral facial expressions for 10 seconds each across 60 trials. We applied a bifactor measurement model across ten attention bias metrics (e.g., total dwell time for neutral and emotional stimuli, ratio of emotional to total dwell time, difference in dwell time for emotional and neutral stimuli, a variable indicating whether dwell time on emotional stimuli exceeded dwell time on neutral stimuli) to assess whether trial-level data load on to a single, general factor. Unidimensionality was evaluated using omega hierarchical, explained common variance, and percentage of uncontaminated correlations. Results: Total dwell time had excellent internal consistency for sad (α = .95, ω = .96) and neutral stimuli (α = .95, ω = .95), and met criteria for unidimensionality, suggesting the trial-level data within each task reflect a single underlying construct. However, the remaining bias metrics fell short of the unidimensionality thresholds, suggesting not all metrics are good candidates for creating bias scores. 
Conclusion: Total dwell time by valence had the best psychometrics in terms of internal consistency and unidimensionality. This study demonstrates the importance of assessing whether trial-level data load onto a general factor, as not all metrics are equivalent, even when derived from the same task data. |
Drew J. McLaughlin; Maggie E. Zink; Lauren Gaunt; Jamie Reilly; Mitchell S. Sommers; Kristin J. Van Engen; Jonathan E. Peelle Give me a break! Unavoidable fatigue effects in cognitive pupillometry Journal Article In: Psychophysiology, vol. 60, no. 7, pp. 1–20, 2023. @article{McLaughlin2023a, Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds due to systematic physiological changes over time. In this paper, we investigated the degree to which fatigue effects could be ameliorated by experimenter intervention. In Experiment 1, we assigned participants to one of three groups—no breaks, kinetic breaks (playing with toys, but no social interaction), or chatting with a research assistant—and compared the pupil response across conditions. In Experiment 2, we additionally tested the effect of researcher observation. Only breaks including social interaction significantly reduced the fatigue of the pupil response across trials. However, in all conditions we found robust evidence for fatigue effects: that is, regardless of protocol, the task-evoked pupil response was substantially diminished (at least 60%) over the duration of the experiment. We account for the variance of fatigue effects in our pupillometry data using multiple common statistical modeling approaches (e.g., linear mixed-effects models of peak, mean, and baseline pupil diameters, as well as growth curve models of time-course data). We conclude that pupil attenuation is a predictable phenomenon that should be accommodated in our experimental designs and statistical models. |
Drew J. McLaughlin; Jackson S. Colvett; Julie M. Bugg; Kristin J. Van Engen Sequence effects and speech processing: Cognitive load for speaker-switching within and across accents Journal Article In: Psychonomic Bulletin & Review, pp. 1–11, 2023. @article{McLaughlin2023, Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cognitive load, to examine the demands of processing first (L1) and second (L2) language-accented speech when listening to sentences produced by the same speaker consecutively (no switch), a novel speaker of the same accent (within-accent switch), and a novel speaker with a different accent (across-accent switch). Inspired by research on sequential adjustments in cognitive control, we aimed to identify the cognitive demands of accommodating a novel speaker and accent by examining the trial-to-trial changes in pupil dilation during speech processing. Our results indicate that switching between speakers was more cognitively demanding than listening to the same speaker consecutively. Additionally, switching to a novel speaker with a different accent was more cognitively demanding than switching between speakers of the same accent. However, there was an asymmetry for across-accent switches, such that switching from an L1 to an L2 accent was more demanding than vice versa. Findings from the present study align with work examining multi-talker processing costs, and provide novel evidence that listeners dynamically adjust cognitive processing to accommodate speaker and accent variability. We discuss these novel findings in the context of an active control model and auditory streaming framework of speech processing. |
Jacie R. McHaney; William L. Schuerman; Matthew K. Leonard; Bharath Chandrasekaran Transcutaneous auricular vagus nerve stimulation modulates performance but not pupil size during nonnative speech category learning Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 66, no. 10, pp. 3825–3843, 2023. @article{McHaney2023, Purpose: Subthreshold transcutaneous auricular vagus nerve stimulation (taVNS) synchronized with behavioral training can selectively enhance nonnative speech category learning in adults. Prior work has demonstrated that behavioral performance increases when taVNS is paired with easier-to-learn Mandarin tone categories in native English listeners, relative to when taVNS is paired with harder-to-learn Mandarin tone categories or without taVNS. Mechanistically, this temporally precise plasticity has been attributed to noradrenergic modulation. However, prior work did not specifically utilize methodologies that indexed noradrenergic modulation and, therefore, was unable to explicitly test this hypothesis. Our goal for this study was to use pupillometry to gain mechanistic insights into taVNS behavioral effects. Method: Thirty-eight participants learned to categorize Mandarin tones while pupillometry was recorded. In a double-blinded design, participants were divided into two taVNS groups that, as in the prior study, differed according to whether taVNS was paired with easier-to-learn tones or harder-to-learn tones. Learning performance and pupillary responses were measured using linear mixed-effects models. Results: We found that taVNS did not have any tone-specific or group behavioral or pupillary effects. However, in an exploratory analysis, we observed that taVNS did lead to faster rates of learning on trials paired with stimulation, particularly for those who were stimulated at lower amplitudes. 
Conclusions: Our results suggest that pupillary responses may not be a reliable marker of locus coeruleus–norepinephrine system activity in humans. However, future research should systematically examine the effects of stimulation amplitude on both behavior and pupillary responses. |
Vincent B. McGinty; Shira M. Lupkin Behavioral read-out from population value signals in primate orbitofrontal cortex Journal Article In: Nature Neuroscience, vol. 26, no. 12, pp. 2203–2212, 2023. @article{McGinty2023, The primate orbitofrontal cortex (OFC) has long been recognized for its role in value-based decisions; however, the exact mechanism linking value representations in the OFC to decision outcomes has remained elusive. Here, to address this question, we show, in non-human primates, that trial-wise variability in choices can be explained by variability in value signals decoded from many simultaneously recorded OFC neurons. Mechanistically, this relationship is consistent with the projection of activity within a low-dimensional value-encoding subspace onto a potentially higher-dimensional, behaviorally potent output subspace. Identifying this neural–behavioral link answers longstanding questions about the role of the OFC in economic decision-making and suggests population-level read-out mechanisms for the OFC similar to those recently identified in sensory and motor cortex. |
Siobhan M. McAteer; Anthony McGregor; Daniel T. Smith Oculomotor rehearsal in visuospatial working memory Journal Article In: Attention, Perception, and Psychophysics, vol. 85, pp. 261–275, 2023. @article{McAteer2023, The neural and cognitive mechanisms of spatial working memory are tightly coupled with the systems that control eye movements, but the precise nature of this coupling is not well understood. It has been argued that the oculomotor system is selectively involved in rehearsal of spatial but not visual material in visuospatial working memory. However, few studies have directly compared the effect of saccadic interference on visual and spatial memory, and there is little consensus on how the underlying working memory representation is affected by saccadic interference. In this study we aimed to examine how working memory for visual and spatial features is affected by overt and covert attentional interference across two experiments. Participants were shown a memory array, then asked to either maintain fixation or to overtly or covertly shift attention in a detection task during the delay period. Using the continuous report task we directly examined the precision of visual and spatial working memory representations and fit psychophysical functions to investigate the sources of recall error associated with different types of interference. These data were interpreted in terms of embodied theories of attention and memory and provide new insights into the nature of the interactions between cognitive and motor systems. |
Siobhan M. McAteer; Emma Ablott; Anthony McGregor; Daniel T. Smith Dynamic resource allocation in spatial working memory during full and partial report tasks Journal Article In: Journal of Vision, vol. 23, no. 2, pp. 1–14, 2023. @article{McAteer2023a, Serial position effects are well-documented in working memory literature. Studies of spatial short-term memory that rely on binary response; full report tasks tend to report stronger primacy than recency effects. In contrast, studies that utilize a continuous response, partial report task report stronger recency than primacy effects (Gorgoraptis, Catalao, Bays, & Husain, 2011; Zokaei, Gorgoraptis, Bahrami, Bays, & Husain, 2011). The current study explored the idea that probing spatial working memory using full and partial continuous response tasks would produce different distributions of visuospatial working memory resources across spatial sequences and, therefore, explain the conflicting results in the literature. Experiment 1 demonstrated that primacy effects were observed when memory was probed with a full report task. Experiment 2 confirmed this finding while controlling eye movements. Critically, Experiment 3 demonstrated that switching from a full to a partial report task abolished the primacy effect and produced a recency effect, consistent with the idea that the distribution of resources in visuospatial working memory depends on the type of recall required. It is argued that the primacy effect in the whole report task arose from the accumulation of noise caused by the execution of multiple spatially directed actions during recall, whereas the recency effect in the partial report task reflects the redistribution of preallocated resources when an anticipated item is not presented. 
These data show that it is possible to reconcile apparently contradictory findings within the resource theory of spatial working memory, and they underscore the importance of considering how memory is probed when interpreting behavioral data through the lens of resource theories. |
Audrey Mazancieux; Franck Mauconduit; Alexis Amadon; Jan Willem de Gee; Tobias H. Donner; Florent Meyniel Brainstem fMRI signaling of surprise across different types of deviant stimuli Journal Article In: Cell Reports, vol. 42, no. 11, pp. 1–15, 2023. @article{Mazancieux2023, Detection of deviant stimuli is crucial to orient and adapt our behavior. Previous work shows that deviant stimuli elicit phasic activation of the locus coeruleus (LC), which releases noradrenaline and controls central arousal. However, it is unclear whether the detection of behaviorally relevant deviant stimuli selectively triggers LC responses or other neuromodulatory systems (dopamine, serotonin, and acetylcholine). We combine human functional MRI (fMRI) recordings optimized for brainstem imaging with pupillometry to perform a mapping of deviant-related responses in subcortical structures. Participants have to detect deviant items in a “local-global” paradigm that distinguishes between deviance based on the stimulus probability and the sequence structure. fMRI responses to deviant stimuli are distributed in many cortical areas. Both types of deviance elicit responses in the pupil, LC, and other neuromodulatory systems. Our results reveal that the detection of task-relevant deviant items recruits the same multiple subcortical systems across computationally different types of deviance. |
Sebastiaan Mathôt; Ana Vilotijević Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis Journal Article In: Behavior Research Methods, vol. 55, no. 6, pp. 3055–3077, 2023. @article{Mathot2023a, Cognitive pupillometry is the measurement of pupil size to investigate cognitive processes such as attention, mental effort, working memory, and many others. Currently, there is no commonly agreed-upon methodology for conducting cognitive-pupillometry experiments, and approaches vary widely between research groups and even between different experiments from the same group. This lack of consensus makes it difficult to know which factors to consider when conducting a cognitive-pupillometry experiment. Here we provide a comprehensive, hands-on guide to methods in cognitive pupillometry, with a focus on trial-based experiments in which the measure of interest is the task-evoked pupil response to a stimulus. We cover all methodological aspects of cognitive pupillometry: experimental design, preprocessing of pupil-size data, and statistical techniques to deal with multiple comparisons when testing pupil-size data. In addition, we provide code and toolboxes (in Python) for preprocessing and statistical analysis, and we illustrate all aspects of the proposed workflow through an example experiment and example scripts. |
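The preprocessing workflow this methods paper describes (cleaning blink artifacts, then baseline-correcting the task-evoked pupil response) can be sketched in plain NumPy. This is a minimal illustration under generic assumptions — NaN-coded blinks, a pre-stimulus baseline window, subtractive correction — not the authors' published Python toolbox or its API:

```python
import numpy as np

def preprocess_pupil(trace, baseline_samples=50):
    """Minimal pupil-trace preprocessing sketch (hypothetical helper):
    linearly interpolate over missing (blink) samples, then subtract
    the mean pupil size over the pre-stimulus baseline window.

    `trace` is a 1-D array of pupil sizes with NaN during blinks;
    the first `baseline_samples` samples are taken as the baseline.
    """
    trace = np.asarray(trace, dtype=float).copy()
    idx = np.arange(trace.size)
    missing = np.isnan(trace)
    # Reconstruct blink gaps from the surrounding valid samples
    if missing.any():
        trace[missing] = np.interp(idx[missing], idx[~missing], trace[~missing])
    # Subtractive baseline correction: dilation relative to pre-stimulus level
    return trace - trace[:baseline_samples].mean()
```

In a trial-based design one would apply this per trial, so that the resulting traces express dilation relative to each trial's own pre-stimulus level before any statistical testing across conditions.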
Sebastiaan Mathôt; Hermine Berberyan; Philipp Büchel; Veera Ruuskanen; Ana Vilotijević; Wouter Kruijne Effects of pupil size as manipulated through ipRGC activation on visual processing Journal Article In: NeuroImage, vol. 283, pp. 1–13, 2023. @article{Mathot2023, The size of the eyes' pupils determines how much light enters the eye and also how well this light is focused. Through this route, pupil size shapes the earliest stages of visual processing. Yet causal effects of pupil size on vision are poorly understood and rarely studied. Here we introduce a new way to manipulate pupil size, which relies on activation of intrinsically photosensitive retinal ganglion cells (ipRGCs) to induce sustained pupil constriction. We report the effects of both experimentally induced and spontaneous changes in pupil size on visual processing as measured through EEG. We compare these to the effects of stimulus intensity and covert visual attention, because previous studies have shown that these factors all have comparable effects on some common measures of early visual processing, such as detection performance and steady-state visual evoked potentials; yet it is still unclear whether these are superficial similarities, or rather whether they reflect similar underlying processes. Using a mix of neural-network decoding, ERP analyses, and time-frequency analyses, we find that induced pupil size, spontaneous pupil size, stimulus intensity, and covert visual attention all affect EEG responses, mainly over occipital and parietal electrodes, but—crucially—that they do so in qualitatively different ways. Induced and spontaneous pupil-size changes mainly modulate activity patterns (but not overall power or intertrial coherence) in the high-frequency beta range; this may reflect an effect of pupil size on oculomotor activity and/or visual processing. 
In addition, spontaneous (but not induced) pupil size tends to correlate positively with intertrial coherence in the alpha band; this may reflect a non-causal relationship, mediated by arousal. Taken together, our findings suggest that pupil size has qualitatively different effects on visual processing from stimulus intensity and covert visual attention. This shows that pupil size as manipulated through ipRGC activation strongly affects visual processing, and provides concrete starting points for further study of this important yet understudied earliest stage of visual processing. |
Nicolas Masson; Valérie Dormal; Martine Stephany; Christine Schiltz Eye movements reveal that young school children shift attention when solving additions and subtractions Journal Article In: Developmental Science, pp. 1–12, 2023. @article{Masson2023, Adults shift their attention to the right or to the left along a spatial continuum when solving additions and subtractions, respectively. Studies suggest that these shifts not only support the exact computation of the results but also anticipatively narrow down the range of plausible answers when processing the operands. However, little is known about when and how these attentional shifts arise in childhood during the acquisition of arithmetic. Here, an eye-tracker with high spatio-temporal resolution was used to measure spontaneous eye movements, used as a proxy for attentional shifts, while children of 2nd (8 y-o; N = 50) and 4th (10 y-o; N = 48) Grade solved simple additions (e.g., 4+3) and subtractions (e.g., 3-2). Gaze patterns revealed horizontal and vertical attentional shifts in both groups. Critically, horizontal eye movements were observed in 4th Graders as soon as the first operand and the operator were presented, and thus before the beginning of the exact computation. In 2nd Graders, attentional shifts were only observed after the presentation of the second operand, just before the response was made. This demonstrates that spatial attention is recruited when children solve arithmetic problems, even in the early stages of learning mathematics. The time course of these attentional shifts suggests that with practice in arithmetic children start to use spatial attention to anticipatively guide the search for the answer and facilitate the implementation of solving procedures. Research Highlights: Additions and subtractions are associated with right and left attentional shifts, respectively, in adults, but it is unknown when these mechanisms arise in childhood. Children aged 8–10 years solved single-digit additions and subtractions while looking at a blank screen. Eye movements showed that 8-year-olds already show spatial biases, possibly representing the response once both operands are known. 10-year-olds shift attention before the second operand is known, anticipatively guiding the search for plausible answers. |
Jana Masselink; Alexis Cheviet; Caroline Froment-Tilikete; Denis Pélisson; Markus Lappe A triple distinction of cerebellar function for oculomotor learning and fatigue compensation Journal Article In: PLoS Computational Biology, vol. 19, no. 8, pp. 1–37, 2023. @article{Masselink2023, The cerebellum implements error-based motor learning via synaptic gain adaptation of an inverse model, i.e. the mapping of a spatial movement goal onto a motor command. Recently, we modeled the motor and perceptual changes during learning of saccadic eye movements, showing that learning is actually a threefold process. Besides motor recalibration of (1) the inverse model, learning also comprises perceptual recalibration of (2) the visuospatial target map and (3) of a forward dynamics model that estimates the saccade size from corollary discharge. Yet, the site of perceptual recalibration remains unclear. Here we dissociate cerebellar contributions to the three stages of learning by modeling the learning data of eight cerebellar patients and eight healthy controls. Results showed that cerebellar pathology restrains short-term recalibration of the inverse model while the forward dynamics model is well informed about the reduced saccade change. Adaptation of the visuospatial target map trended in learning direction only in control subjects, yet without reaching significance. Moreover, some patients showed a tendency for uncompensated oculomotor fatigue caused by insufficient upregulation of saccade duration. According to our model, this could induce long-term perceptual compensation, consistent with the overestimation of target eccentricity found in the patients' baseline data. We conclude that the cerebellum mediates short-term adaptation of the inverse model, especially by control of saccade duration, while the forward dynamics model was not affected by cerebellar pathology. |
Jun Maruta; Lisa A. Spielman; Jamshid Ghajar Visuomotor synchronization: Military normative performance Journal Article In: Military Medicine, vol. 188, no. 3-4, pp. E484–E491, 2023. @article{Maruta2023, Introduction: Cognitive processes such as perception and reasoning are preceded by and dependent on attention. Because of the close overlap between the neural circuits of attention and eye movement, attention may be objectively quantified by recording eye movements during an attention-dependent task. Our previous work demonstrated that performance scores on a circular visual tracking task that requires dynamic synchronization of the gaze with the target motion can be impacted by concussion, sleep deprivation, and attention deficit/hyperactivity disorder. The current study examined the characteristics of performance on a standardized predictive visual tracking task in a large sample from a U.S. Military population to provide military normative data. Materials and Methods: The sample consisted of 1,594 active duty military service members of either sex, aged 18 to 29 years, who were stationed at Fort Hood Army Base. The protocol was reviewed and approved by the U.S. Army Medical Research and Materiel Command Institutional Review Board. Demographic, medical, and military history data were collected using questionnaires, and performance-based data were collected using a circular visual tracking test and the Trail Making Test. Differences in visual tracking performance by demographic characteristics were examined with a multivariate analysis of variance, as well as a Kolmogorov-Smirnov test and a rank-sum test. Associations with other measures were examined with a rank-sum test or Spearman correlations. Results: Robust sex differences in visual tracking performance were found across the various statistical models, as well as age differences in several isolated comparisons. 
Accordingly, norms of performance scores, described in terms of percentile standings, were developed adjusting for age and sex. The effects of other measures on visual tracking performance were small or statistically non-significant. An examination of the score distributions of various metrics suggested that strategies preferred by men and women may optimize different aspects of visual tracking performance. Conclusion: This large-scale quantification of attention, using dynamic visuomotor synchronization performance, provides rigorously characterized age- and sex-based military population norms. This study establishes analytics for assessing normal and impaired attention and detecting changes within individuals over time. Practical applications for combat readiness and surveillance of attention impairment from sleep insufficiency, concussion, medication, or attention disorders will be enhanced with portable, easily accessible, fast, and reliable dynamic eye-tracking technologies. |
Beatriz Martín-Luengo; Karlos Luna; Yury Shtyrov Conversational pragmatics: Memory reporting strategies in different social contexts Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–10, 2023. @article{MartinLuengo2023, Previous studies in conversational pragmatics have shown that the information people share with others heavily depends on the confidence they have in the correctness of a candidate answer. At the same time, different social contexts prompt different incentive structures, which set a higher or lower confidence criterion to determine which potential answer to report. In this study, we investigated how the incentive structures of several types of social contexts and different levels of knowledge affect the amount of information we are willing to share. Participants answered easy, intermediate, and difficult general-knowledge questions and decided whether they would report or withhold their selected answer in different social contexts: formal vs. informal, each of which could be either constrained (a context that promotes providing only responses we are certain about) or loose (with an incentive structure that maximizes providing any type of answer). Overall, our results confirmed that social contexts are associated with different incentive structures, which affect memory reporting strategies. We also found that the difficulty of the questions is an important factor in conversational pragmatics. Our results highlight the relevance of studying the different incentive structures of social contexts to understand the underlying processes of conversational pragmatics, and stress the importance of considering metamemory theories of memory reporting. |
Jiayu Mao; Shuang Qiu; Wei Wei; Huiguang He Cross-modal guiding and reweighting network for multi-modal RSVP-based target detection Journal Article In: Neural Networks, vol. 161, pp. 65–82, 2023. @article{Mao2023, Rapid Serial Visual Presentation (RSVP) based Brain–Computer Interfaces (BCIs) facilitate the high-throughput detection of rare target images by detecting evoked event-related potentials (ERPs). At present, the decoding accuracy of the RSVP-based BCI system limits its practical applications. This study introduces eye movements (gaze and pupil information), referred to as the EYE modality, as another useful source of information to combine with EEG-based BCI, forming a novel system to detect target images in RSVP tasks. We performed an RSVP experiment, recorded the EEG signals and eye movements simultaneously during a target detection task, and constructed a multi-modal dataset including 20 subjects. We also proposed a cross-modal guiding and fusion network to fully utilize the EEG and EYE modalities and fuse them for better RSVP decoding performance. In this network, a two-branch backbone was built to extract features from the two modalities. A Cross-Modal Feature Guiding (CMFG) module was proposed to guide EYE-modality features to complement the EEG modality for better feature extraction. A Multi-scale Multi-modal Reweighting (MMR) module was proposed to enhance the multi-modal features by exploring intra- and inter-modal interactions. Finally, a Dual Activation Fusion (DAF) module was proposed to modulate the enhanced multi-modal features for effective fusion. Our proposed network achieved a balanced accuracy of 88.00% (±2.29) on the collected dataset. Ablation studies and visualizations revealed the effectiveness of the proposed modules. This work demonstrates the value of introducing the EYE modality in RSVP tasks, and the proposed network is a promising method for RSVP decoding that further improves the performance of RSVP-based target detection systems. |
Marcello Maniglia; Kristina M. Visscher; Aaron R. Seitz Consistency of preferred retinal locus across tasks and participants trained with a simulated scotoma Journal Article In: Vision Research, vol. 203, pp. 1–9, 2023. @article{Maniglia2023, After loss of central vision following retinal pathologies such as macular degeneration (MD), patients often adopt compensatory strategies including developing a “preferred retinal locus” (PRL) to replace the fovea in tasks involving fixation. A key question is whether patients develop multi-purpose PRLs or whether their oculomotor strategies adapt to the demands of the task. While most MD patients develop a PRL, clinical evidence suggests that patients may develop multiple PRLs and switch between them according to the task at hand. To understand this, we examined a model of central vision loss in normally seeing individuals and tested whether they used the same or different PRLs across tasks after training. Nineteen participants trained for 10 sessions on contrast detection while in conditions of gaze-contingent, simulated central vision loss. Before and after training, peripheral looking strategies were evaluated during tasks measuring visual acuity, reading abilities and visual search. To quantify strategies in these disparate, naturalistic tasks, we measured and compared the amount of task-relevant information at each of 8 equally spaced, peripheral locations, while participants performed the tasks. Results showed that some participants used consistent viewing strategies across tasks whereas other participants' strategies differed depending on task. This novel method allows quantification of peripheral vision use even in relatively ecological tasks. These results represent one of the first examinations of peripheral viewing strategies across tasks in simulated vision loss. 
Results suggest that individual differences in peripheral looking strategies following simulated central vision loss may model those developed in pathological vision loss. |
Giorgio L. Manenti; Aslan S. Dizaji; Caspar M. Schwiedrzik Variability in training unlocks generalization in visual perceptual learning through invariant representations Journal Article In: Current Biology, vol. 33, no. 5, pp. 817–826, 2023. @article{Manenti2023, Stimulus and location specificity are long considered hallmarks of visual perceptual learning. This renders visual perceptual learning distinct from other forms of learning, where generalization can be more easily attained, and therefore unsuitable for practical applications, where generalization is key. Based on the hypotheses derived from the structure of the visual system, we test here whether stimulus variability can unlock generalization in perceptual learning. We train subjects in orientation discrimination, while we vary the amount of variability in a task-irrelevant feature, spatial frequency. We find that, independently of task difficulty, this manipulation enables generalization of learning to new stimuli and locations, while not negatively affecting the overall amount of learning on the task. We then use deep neural networks to investigate how variability unlocks generalization. We find that networks develop invariance to the task-irrelevant feature when trained with variable inputs. The degree of learned invariance strongly predicts generalization. A reliance on invariant representations can explain variability-induced generalization in visual perceptual learning. This suggests new targets for understanding the neural basis of perceptual learning in the higher-order visual cortex and presents an easy-to-implement modification of common training paradigms that may benefit practical applications. |
Silvia Makowski; Annika Bätz; Paul Prasse; Lena A. Jäger; Tobias Scheffer Detection of alcohol inebriation from eye movements Journal Article In: Procedia Computer Science, vol. 225, pp. 2086–2095, 2023. @article{Makowski2023, Today, the most convenient way of estimating an individual's blood-alcohol concentration requires a breathalyzer device and intense user cooperation, which severely limits the scope of potential applications. We develop and study a machine-learning model that detects alcohol inebriation based on a person's eye gaze and eye closure. We investigate the relative contribution of individual features derived from eye gaze and eye closure to the model. In order to train and experimentally evaluate the model, we collect—and share—a new data set with participants in baseline and alcohol-intoxicated states. We find that the model can in fact detect the consumption of a moderate amount of alcohol; the accuracy grows significantly with increasing blood alcohol concentration. The most relevant features turn out to relate to the velocity and acceleration profiles during fixations and saccades. From our proof-of-concept study, we can conclude that contactless inebriation detection based on eye gaze is in fact possible, albeit data need to be collected on an industrial scale to reach practical applicability. Potential applications of contactless inebriation detection include the detection of impaired drivers or operators of other hazardous machinery as well as health-monitoring applications. |
Marloes Mak; Myrthe Faber; Roel M. Willems Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study Journal Article In: Cortex, vol. 162, pp. 115–135, 2023. @article{Mak2023, Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration). |
Oliver Maith; Javier Baladron; Wolfgang Einhäuser; Fred H. Hamker Exploration behavior after reversals is predicted by STN-GPe synaptic plasticity in a basal ganglia model Journal Article In: iScience, vol. 26, no. 5, pp. 1–23, 2023. @article{Maith2023, Humans can quickly adapt their behavior to changes in the environment. Classical reversal learning tasks mainly measure how well participants can disengage from a previously successful behavior but not how alternative responses are explored. Here, we propose a novel 5-choice reversal learning task with alternating position-reward contingencies to study exploration behavior after a reversal. We compare human exploratory saccade behavior with a prediction obtained from a neuro-computational model of the basal ganglia. A new synaptic plasticity rule for learning the connectivity between the subthalamic nucleus (STN) and external globus pallidus (GPe) results in exploration biases to previously rewarded positions. The model simulations and human data both show that during experimental experience exploration becomes limited to only those positions that have been rewarded in the past. Our study demonstrates how quite complex behavior may result from a simple sub-circuit within the basal ganglia pathways. |
Federica Magnabosco; Olaf Hauk An eye on semantics: A study on the influence of concreteness and predictability on early fixation durations Journal Article In: Language, Cognition and Neuroscience, pp. 1–15, 2023. @article{Magnabosco2023, We used eye-tracking during natural reading to study how semantic control and representation mechanisms interact for the successful comprehension of sentences, by manipulating sentence context and single-word meaning. Specifically, we examined whether a word's semantic characteristic (concreteness) affects first fixation and gaze durations (FFDs and GDs) and whether it interacts with the predictability of a word. We used a linear mixed effects model including several possible psycholinguistic covariates. We found a small but reliable main effect of concreteness and replicated a predictability effect on FFDs, but we found no interaction between the two. The results parallel previous findings of additive effects of predictability (context) and frequency (lexical level) in fixation times. Our findings suggest that the semantics of a word and the context created by the preceding words additively influence early stages of word processing in natural sentence reading. |
Kazutaka Maeda; Ken Inoue; Masahiko Takada; Okihide Hikosaka Environmental context-dependent activation of dopamine neurons via putative amygdala-nigra pathway in macaques Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–12, 2023. @article{Maeda2023, Seeking out good and avoiding bad objects is critical for survival. In practice, objects are rarely good every time or everywhere, but only at the right time or place. Whereas the basal ganglia (BG) are known to mediate goal-directed behavior, for example, saccades to rewarding objects, it remains unclear how such simple behaviors are rendered contingent on higher-order factors, including environmental context. Here we show that amygdala neurons are sensitive to environments and may regulate putative dopamine (DA) neurons via an inhibitory projection to the substantia nigra (SN). In male macaques, we combined optogenetics with multi-channel recording to demonstrate that rewarding environments induce tonic firing changes in DA neurons as well as phasic responses to rewarding events. These responses may be mediated by disinhibition via a GABAergic projection onto DA neurons, which in turn is suppressed by an inhibitory projection from the amygdala. Thus, the amygdala may provide an additional source of learning to BG circuits, namely contingencies imposed by the environment. |
Samuel Madariaga; Cecilia Babul; José Ignacio Egaña; Iván Rubio-Venegas; Gamze Güney; Miguel Concha-Miranda; Pedro E. Maldonado; Christ Devia In: MethodsX, vol. 10, pp. 1–10, 2023. @article{Madariaga2023, In this work we present SaFiDe, a deterministic method to detect eye movements (saccades and fixations) from eye-trace data. We developed this method for human and nonhuman primate data from video- and coil-recorded eye traces and further applied the algorithm to eye traces computed from electrooculograms. All the data analyzed were from free-exploration paradigms, where the main challenge was to detect periods of saccades and fixations that were uncued by the task. The method uses velocity and acceleration thresholds, calculated from the eye trace, to detect saccade and fixation periods. We show that our fully deterministic method detects saccades and fixations from eye traces during free visual exploration. The algorithm was implemented in MATLAB, and the code is publicly available on a GitHub repository. • The algorithm presented is entirely deterministic, simplifying the comparison between subjects and tasks. • Thus far, the algorithm presented can operate over video-based eye tracker data, human electrooculogram records, or monkey scleral eye coil data. |
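The velocity-threshold idea underlying this family of detectors can be sketched as follows. Note that SaFiDe derives its velocity and acceleration thresholds from the eye trace itself, whereas this illustrative sketch takes a fixed velocity threshold as input; the function name and threshold value are our assumptions, not the authors' code.

```python
import numpy as np

def detect_saccades(x, y, fs, vel_thresh):
    """Label each sample as saccadic (True) when gaze speed exceeds vel_thresh.

    x, y       : gaze position in degrees of visual angle
    fs         : sampling rate in Hz
    vel_thresh : velocity threshold in deg/s (fixed here for illustration;
                 SaFiDe computes its thresholds from the trace)
    """
    vx = np.gradient(x) * fs           # per-sample velocity, deg/s
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    return speed > vel_thresh

# Toy 1000 Hz trace: fixation, a fast 5-deg rightward jump, fixation
x = np.array([0.0, 0.0, 0.0, 2.5, 5.0, 5.0, 5.0])
y = np.zeros_like(x)
is_saccade = detect_saccades(x, y, fs=1000, vel_thresh=30.0)
```

Samples not flagged as saccadic would then be grouped into fixation periods, with short runs discarded as noise.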
Diane E. MacKenzie; R. Lee Kirby; Cher Smith; Zainab Al Lawati; Eric Lee; Sorayya Askari Novice and expert observer accuracy of the threshold wheelchair skill: A pilot eye-tracking study Journal Article In: The Open Journal of Occupational Therapy, vol. 11, no. 2, pp. 1–10, 2023. @article{MacKenzie2023, Background: Moving a wheelchair over a low threshold is an entry-level mobility skill. Observation is critical to the assessment and training of this skill. The primary objective of this exploratory pilot study was to determine whether a difference between novice and expert visual attention allocation patterns was linked to the accuracy of rating skill performance and decision confidence. Methods: Twelve expert occupational therapists and nine novice occupational therapy students observed 30 first-attempt recordings of able-bodied persons learning the low threshold skill. Randomized recordings included 10 recordings from each rating group of “pass,” “pass with difficulty” (pwd), and “fail.” Skill ratings, confidence ratings, time to decision, and eye movements (monitored with an SR Research EyeLink 1000 Plus) were recorded. Results: No significant group differences were found in the correct identification of skill ratings, though experts reported higher confidence in their decision-making and generally had faster reaction times. While trends toward eye-movement differences were found between groups, only the number of areas of interest viewed in pwd videos was a potential predictor of rating correctness. Conclusion: Improved confidence in decision-making did not mean improved assessment accuracy. The pwd video stimuli created the opportunity for assessing differences in observation patterns. Further study is recommended. |
Kelsey J. MacKay; Filip Germeys; Wim Van Dooren; Lieven Verschaffel; Koen Luwel The structure of the notation system in adults' number line estimation: An eye-tracking study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 3, pp. 538–553, 2023. @article{MacKay2023, Research on rational numbers suggests that adults experience more difficulties in understanding the numerical magnitude of rational than natural numbers. Within rational numbers, the numerical magnitude of fractions has been found to be more difficult to understand than that of decimals. Using a number line estimation (NLE) task, the current study investigated two sources of difficulty in adults' numerical magnitude understanding: number type (natural vs rational) and structure of the notation system (place-value-based vs non-place-value-based). This within-subjects design led to four conditions: natural numbers (natural/place-value-based), decimals (rational/place-value-based), fractions (rational/non-place-value-based), and separated fractions (natural/non-place-value-based). In addition to percentage absolute error (PAE) and response times, we collected eye-tracking data. Results showed that participants estimated natural and place-value-based notations more accurately than rational and non-place-value-based notations, respectively. Participants were also slower to respond to fractions compared with the three other notations. Consistent with the response time data, eye-tracking data showed that participants spent more time encoding fractions and re-visited them more often than the other notations. Moreover, in general, participants spent more time positioning non-place-value-based than place-value-based notations on the number line. 
Overall, the present study contends that when both sources of difficulty are present in a notation (i.e., both rational and non-place-value-based), adults understand its numerical magnitude less well than when there is only one source of difficulty present (i.e., either rational or non-place-value-based). When no sources of difficulty are present in a notation (i.e., both natural and place-value-based), adults have the strongest understanding of its numerical magnitude. |
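The PAE measure used in number line estimation studies is not spelled out in the abstract; the standard formula in this literature expresses the estimation error as a percentage of the number line's range. A minimal sketch, with a function name of our choosing:

```python
def percentage_absolute_error(estimate, target, line_max, line_min=0.0):
    """PAE: absolute estimation error as a percentage of the
    number line's range (line_min to line_max)."""
    return abs(estimate - target) / (line_max - line_min) * 100.0

# Placing 3/4 at position 0.70 on a 0-1 number line gives a PAE of 5%
pae = percentage_absolute_error(0.70, 0.75, line_max=1.0)
```

Lower PAE indicates more accurate magnitude understanding, which is how the accuracy differences between the four notation conditions would be quantified.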
Sylwia Macinska; Shane Lindsay; Tjeerd Jellema Visual attention to dynamic emotional faces in adults on the autism spectrum Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–13, 2023. @article{Macinska2023, Using eye-tracking, we studied the allocation of attention to faces in which the emotional expression and eye-gaze dynamically changed in an ecologically valid manner. We tested typically-developed (TD) adults low or high in autistic-like traits (Experiment 1), and adults with high-functioning autism (HFA; Experiment 2). All groups fixated more on the eyes than on any other facial area, regardless of emotion and gaze direction, though the HFA group fixated less on the eyes and more on the nose than TD controls. The sequence of dynamic facial changes affected the groups similarly, with reduced attention to the eyes and increased attention to the mouth. The results suggest that dynamic emotional face scanning patterns are stereotypical and differ only modestly between TD and HFA adults. |
Ye Ma; Brian Buccola; Zinan Wang; Shannon Cousins; Aline Godfroid; Alan Beretta Expressions with aspectual verbs elicit slower reading times than those with psychological verbs: An eye-tracking study in Mandarin Chinese Journal Article In: Journal of Psycholinguistic Research, vol. 52, no. 1, pp. 179–215, 2023. @article{Ma2023c, Research over the last 20 years has investigated the processing costs for sentences such as John began the book. Much of this work has conflated sentences with aspectual verbs, like start or finish, with psychological verbs, like enjoy or tolerate. However, recent studies have reported greater costs for aspectual verbs compared to psychological verbs (e.g., Katsika et al. in Ment Lex 7:58–76, 2012; Lai et al. in Compositionality and concepts in linguistics and psychology, 2017). The present paper reports an eye-tracking study that examined the costs of processing both verb types in Mandarin Chinese. The results revealed greater costs both for aspectual verbs compared to controls (John read the book) and for aspectual verbs compared to psychological verbs, reinforcing the claims of the Structured Individual Hypothesis (Piñango and Deo in J Semant 33:359–408, 2016). Strikingly, there was an early effect at the verb for aspectual verbs but not for psychological verbs. We argue that this result, together with previous findings and other conceptual issues, necessitates a conservative modification of the SIH: aspectual verbs are semantically more complex than psychological verbs. This modification retains the core analysis underlying the SIH, but reconciles the SIH with experimental findings by bringing it in line with the view that lexical semantic complexity has immediate consequences in processing (e.g., Brennan and Pylkkänen in Lang Cogn Process 25:777–807, 2010). |
Xiaochuan Ma; Yikang Liu; Roy Clariana; Chanyuan Gu; Ping Li From eye movements to scanpath networks: A method for studying individual differences in expository text reading Journal Article In: Behavior Research Methods, vol. 55, no. 2, pp. 730–750, 2023. @article{Ma2023b, Eye movements have been examined as an index of attention and comprehension during reading in the literature for over 30 years. Although eye-movement measurements are acknowledged as reliable indicators of readers' comprehension skill, few studies have analyzed eye-movement patterns using network science. In this study, we offer a new approach to analyze eye-movement data. Specifically, we recorded visual scanpaths when participants were reading expository science text, and used these to construct scanpath networks that reflect readers' processing of the text. Results showed that low-ability and high-ability readers' scanpath networks exhibited distinctive properties, which are reflected in different network metrics including density, centrality, small-worldness, transitivity, and global efficiency. Such patterns provide a new way to show how skilled readers, as compared with less skilled readers, process information more efficiently. Implications of our analyses are discussed in light of current theories of reading comprehension. |
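Scanpath networks of the kind Ma et al. describe treat words (or other areas of interest) as nodes and successive fixations as edges; metrics such as density then summarize a reader's pattern in a single number. A minimal pure-Python sketch of the idea, assuming a simple undirected network (the study's actual construction and full metric set may differ):

```python
def scanpath_network(fixation_sequence):
    """Build a simple undirected scanpath network from a fixation sequence.

    Nodes are the areas of interest (e.g. words) that received fixations;
    an edge links two AOIs whenever they were fixated in direct succession.
    """
    nodes = set(fixation_sequence)
    edges = {frozenset(pair)
             for pair in zip(fixation_sequence, fixation_sequence[1:])
             if pair[0] != pair[1]}  # ignore refixations of the same AOI
    return nodes, edges

def network_density(nodes, edges):
    """Fraction of possible edges that are present: 2E / (N * (N - 1))."""
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1)) if n > 1 else 0.0

# A regressive scanpath over four words revisits earlier material:
nodes, edges = scanpath_network(["w1", "w2", "w3", "w2", "w4", "w1"])
density = network_density(nodes, edges)  # 4 of 6 possible edges -> 2/3
```

A strictly linear, left-to-right reading produces a sparse chain, while frequent regressions and re-reading add cross-links and raise density, which is one way such metrics can separate reading styles.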
Wenbo Ma; Mingsha Zhang Multiple step saccades are generated by internal real-time saccadic error correction Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–9, 2023. @article{Ma2023a, Objectives: Multiple step saccades (MSSs) are an atypical form of saccade that consists of a series of small-amplitude saccades. It has been argued that MSSs are generated by an automatic saccadic plan. This argument was based on the observation that trials with MSS had shorter saccadic latency than trials without MSS in reactive saccade tasks. However, the validity of this argument has never been verified with other saccadic tasks. Alternatively, we and other researchers have speculated that the function of MSS is the same as that of the corrective saccade (CS), i.e., to correct saccadic errors. Thus, we propose that MSSs also serve to rectify saccadic errors and are generated by forward internal models. The objective of the present study is to examine whether the automatic theory is universally applicable for the generation of MSSs in various saccadic tasks and to seek other possible mechanisms, such as error correction by forward internal models. Methods: Fifty young healthy subjects (YHSs) and fifty elderly healthy subjects (EHSs) were recruited in the present study. The task paradigms were prosaccade (PS), anti-saccade (AS) and memory-guided saccade (MGS) tasks. Results: Saccadic latency in trials with MSS was shorter than in trials without MSS in the PS task but similar in the AS and MGS tasks. The intersaccadic intervals (ISI) were similar among the three tasks in both YHSs and EHSs. Conclusion: Our results indicate that the automatic theory is not a universal mechanism. Instead, the forward internal model for saccadic error correction might be an important mechanism. |
Jialin Ma; Rui Zhang; Yongxin Li Age weakens the other-race effect among Han subjects in recognizing own- and other-ethnicity faces Journal Article In: Behavioral Sciences, vol. 13, no. 8, pp. 1–17, 2023. @article{Ma2023, How the other-race effect (ORE) develops and changes across age groups has long been a focus of research. Previous studies have mainly examined the influence of maturation from infancy to early adulthood on the ORE, while few researchers have explored the ORE in older people. Therefore, this study used behavioral and eye movement techniques to explore the influence of age on the ORE and the visual scanning patterns of Han subjects recognizing own- and other-ethnicity faces. All participants were asked to complete a study-recognition task for faces, and the behavioral results showed that the ORE of elderly Han subjects was significantly lower than that of young Han subjects. The eye movement results showed that young subjects' visual scanning patterns differed significantly when recognizing own- versus other-ethnicity faces, mainly in looking at the nose and mouth, whereas these differences were reduced in the elderly subjects. The elderly subjects used similar scanning patterns to recognize own- and other-ethnicity faces. This indicates that the ORE in face recognition weakens as age increases, and that elderly subjects show more similar visual scanning patterns when recognizing own- and other-ethnicity faces. |
Hailong Lyu; David St Clair; Renrong Wu; Philip J. Benson; Wenbin Guo; Guodong Wang; Yi Liu; Shaohua Hu; Jingping Zhao Eye movement abnormalities can distinguish first-episode schizophrenia, chronic schizophrenia, and prodromal patients from healthy controls Journal Article In: Schizophrenia Bulletin Open, vol. 4, no. 1, pp. 1–11, 2023. @article{Lyu2023a, Background: This study attempts to replicate in a Chinese population an earlier UK report that eye movement abnormalities can accurately distinguish schizophrenia (SCZ) cases from healthy controls (HCs). It also seeks to determine whether first-episode SCZ differ from chronic SCZ and whether these eye movement abnormalities are enriched in psychosis risk syndrome (PRS). Methods: The training set included 104 Chinese HC and 60 Chinese patients with SCZ, and the testing set included 20 SCZ patients and 20 HC from a UK cohort. An additional 16 individuals with PRS were also enrolled. Eye movements of all participants were recorded during free-viewing, smooth pursuit, and fixation stability tasks. Group differences in 55 performance measures were compared and a gradient-boosted decision tree model was built for predictive analyses. Results: Extensive eye-movement abnormalities were observed in patients with SCZ on almost all eye-movement tests. On almost all individual variables, first-episode patients showed no statistically significant differences compared with chronic patients. The classification model was able to discriminate patients from controls with an area under the curve of 0.87; the model also classified 88% of PRS individuals as SCZ-like. Conclusions: Our findings replicate and extend the UK results. The overall accuracy of the Chinese study is virtually identical to the UK findings. We conclude that eye-movement abnormalities appear early in the natural history of the disorder and can be considered as potential trait markers for SCZ diathesis. |
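The area under the curve (AUC) of 0.87 reported for Lyu et al.'s classifier has a useful probabilistic reading: it is the probability that a randomly chosen patient receives a higher classifier score than a randomly chosen control, with ties counted as half. A minimal sketch of that rank-based computation (illustrative only; the study's actual model was a gradient-boosted decision tree, not shown here, and the example scores are hypothetical):

```python
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a positive case (label 1) is
    scored higher than a negative case (label 0), counting ties as half."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two patients (1) and two controls (0) with hypothetical classifier scores;
# three of the four patient-control pairs are ranked correctly:
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])  # 0.75
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, so 0.87 indicates substantial, though imperfect, discrimination between patients and controls.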
Anqi Lyu; Larry Abel; Allen M. Y. Cheong Effect of habitual reading direction on saccadic eye movements: A pilot study Journal Article In: PLoS ONE, vol. 18, pp. 1–16, 2023. @article{Lyu2023, Cognitive processes can influence the characteristics of saccadic eye movements. Reading habits, including habitual reading direction, also affect cognitive and visuospatial processes, favouring attention to the side where reading begins. Few studies have investigated the effect of habitual reading direction on saccade directionality with low-cognitive-demand stimuli (such as dots). The current study examined horizontal prosaccades, antisaccades, and self-paced saccades in subjects with two primary habitual reading directions. We hypothesised that saccades responding to stimuli in the subject's habitual reading direction would show a longer prosaccade latency and a lower antisaccade error rate (errors being a reflexive glance to a suddenly appearing target, rather than a saccade away from it). Sixteen young Chinese participants with a primary habitual reading direction from left to right and sixteen young Arabic and Persian participants with a primary habitual reading direction from right to left were recruited. All subjects spoke/read English as their second language. Subjects needed to look towards a 5°/10° target in the prosaccade task, look towards the mirror-image location of the target in the antisaccade task, and look between two 10° targets in the self-paced saccade task. Only Arabic and Persian participants showed shorter prosaccade latencies towards 5° stimuli presented against their habitual reading direction. No significant effect of reading direction on antisaccade latency towards the correct directions was found. Chinese readers were found to generate significantly shorter prosaccade latencies and higher antisaccade directional errors compared with Arabic and Persian readers for stimuli appearing at their habitual reading side.
The present pilot study provides insights into the effect of reading habits on saccadic eye movements of low-cognitive-demand stimuli and offers a platform for future studies to investigate the relationship between reading habits and eye movement behaviours. |
Yingyue Lv; Lei Zhang; Wanying Chen; Fang Xie; Kayleigh L. Warrington The influence of foveal load on parafoveal processing of N + 2 during Chinese reading Journal Article In: Visual Cognition, vol. 31, no. 2, pp. 97–106, 2023. @article{Lv2023, According to the foveal load hypothesis, parafoveal processing is influenced by the difficulty of current foveal processing. It remains unclear whether foveal load may affect the extent of parafoveal processing. This is an important consideration given the evidence that Chinese readers may frequently pre-process word N + 2 when N + 1 is one character. Accordingly, the current study manipulated word frequency to explore the influence of foveal load on parafoveal processing of N + 2 using a 2 (foveal load: high-frequency, low-frequency) × 2 (preview condition: identical preview, pseudo-character preview) within-subject design. Main effects of foveal load were found for the foveal word N, with longer fixations for low- than for high-frequency words and a main effect of preview was also found for N + 2, with longer fixations for pseudo-character preview compared to identical preview. Crucially, there was no interaction between foveal load and preview condition, indicating that parafoveal processing of word N + 2 is not influenced by foveal load during natural Chinese reading. |