All EyeLink Publications
Listed below are all of the 12,000+ peer-reviewed EyeLink research publications up to 2023 (with some early 2024s). You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2023 |
Priyanka Srivastava; Saskia Jaarsveld; Kishan Sangani Verbal-analytical rather than visuo-spatial Raven's puzzle solving favors Raven's-like puzzle generation Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–13, 2023. @article{Srivastava2023, Raven's advanced progressive matrices (APM) comprise two types of representational codes, namely visuo-spatial and verbal-analytical, that are used to solve APM puzzles. Studies using analytical, behavioral, and imaging methods have supported the multidimensional perspectives of APM puzzles. The visuo-spatial code is expected to recruit operations more responsive to visual perception tasks. In contrast, the verbal-analytical code is expected to use operations more responsive to logical reasoning tasks and may entail different cognitive strategies. Acknowledging the different representational codes used in APM puzzle-solving is critical for a better understanding of APM performance and its relationship with other tasks, especially creative reasoning. We used the eye-tracking method to investigate the role of the two representational codes, visuo-spatial and verbal-analytical, in the strategies involved in solving APM puzzles and in generating an APM-like puzzle by using a creative-reasoning task (CRT). Participants took longer to complete the verbal-analytical than the visuo-spatial puzzles. In addition, verbal-analytical puzzles showed higher progressive and regressive saccade counts than visuo-spatial puzzles, suggesting that response elimination, rather than constructive matching, was the more prevalent strategy when solving verbal-analytical puzzles. We observed higher CRT scores when the CRT followed verbal-analytical (Mdn = 84) than visuo-spatial (Mdn = 73) APM puzzles, suggesting that puzzle-solving-specific strategies affect puzzle-creating task performance.
The advantage of verbal-analytical over visuo-spatial puzzle-solving has been discussed in light of shared cognitive processing between APM puzzle-solving and APM-like puzzle-creating task performance. |
Sybren Spit; Andreea Geambașu; Daan Renswoude; Elma Blom; Paula Fikkert; Sabine Hunnius; Caroline Junge; Josje Verhagen; Ingmar Visser; Frank Wijnen; Clara C. Levelt Robustness of the cognitive gains in 7-month-old bilingual infants: A close multi-center replication of Kovács and Mehler (2009) Journal Article In: Developmental Science, vol. 26, no. 6, pp. 1–16, 2023. @article{Spit2023, We present an exact replication of Experiment 2 from Kovács and Mehler's 2009 study, which showed that 7-month-old infants who are raised bilingually exhibit a cognitive advantage. In the experiment, a sound cue, following an AAB or ABB pattern, predicted the appearance of a visual stimulus on the screen. The stimulus appeared on one side of the screen for nine trials and then switched to the other side. In the original experiment, both mono- and bilingual infants anticipated where the visual stimulus would appear during pre-switch trials. However, during post-switch trials, only bilingual children anticipated that the stimulus would appear on the other side of the screen. The authors took this as evidence of a cognitive advantage. Using the exact same materials in combination with novel analysis techniques (Bayesian analyses, mixed effects modeling and cluster based permutation analyses), we assessed the robustness of these findings in four babylabs (N = 98). Our results did not replicate the original findings: although anticipatory looks increased slightly during post-switch trials for both groups, bilingual infants were not better switchers than monolingual infants. After the original experiment, we presented additional trials to examine whether infants associated sound patterns with cued locations, for which we did not find any evidence either. The results highlight the importance of multicenter replications and more fine-grained statistical analyses to better understand child development. 
Highlights: We carried out an exact replication across four baby labs of the high-impact study by Kovács and Mehler (2009). We did not replicate the findings of the original study, calling into question the robustness of the claim that bilingual infants have enhanced cognitive abilities. After the original experiment, we presented additional trials to examine whether infants correctly associated sound patterns with cued locations, for which we did not find any evidence. The use of novel analysis techniques (Bayesian analyses, mixed effects modeling and cluster based permutation analyses) allowed us to draw better-informed conclusions. |
John P. Spencer; Samuel H. Forbes; Sophie Naylor; Vinay P. Singh; Kiara Jackson; Sean Deoni; Madhuri Tiwari; Aarti Kumar Poor air quality is associated with impaired visual cognition in the first two years of life: A longitudinal investigation Journal Article In: eLife, vol. 12, pp. 1–19, 2023. @article{Spencer2023, Background: Poor air quality has been linked to cognitive deficits in children, but this relationship has not been examined in the first year of life when brain growth is at its peak. Methods: We measured in-home air quality focusing on particulate matter with diameter of <2.5 μm (PM2.5) and infants' cognition longitudinally in a sample of families from rural India. Results: Air quality was poorer in homes that used solid cooking materials. Infants from homes with poorer air quality showed lower visual working memory scores at 6 and 9 months of age and slower visual processing speed from 6 to 21 months when controlling for family socio-economic status. Conclusions: Thus, poor air quality is associated with impaired visual cognition in the first two years of life, consistent with animal studies of early brain development. We demonstrate for the first time an association between air quality and cognition in the first year of life using direct measures of in-home air quality and looking-based measures of cognition. Because indoor air quality was linked to cooking materials in the home, our findings suggest that efforts to reduce cooking emissions should be a key target for intervention. |
David Souto; Jennifer Sudkamp; Kyle Nacilla; Mateusz Bocian Tuning in to a hip-hop beat: Pursuit eye movements reveal processing of biological motion Journal Article In: Human Movement Science, vol. 91, pp. 1–12, 2023. @article{Souto2023, Smooth pursuit eye movements are mainly driven by motion signals to achieve their goal of reducing retinal motion blur. However, they can also show anticipation of predictable movement patterns. Oculomotor predictions may rely on an internal model of the target kinematics. Most investigations on the nature of those predictions have concentrated on simple stimuli, such as a decontextualized dot. However, biological motion is one of the most important visual stimuli in regulating human interaction, and its perception involves integration of form and motion across time and space. Therefore, we asked whether there is a specific contribution of an internal model of biological motion in driving pursuit eye movements. Unlike previous contributions, we exploited the cyclical nature of walking to measure the ability of eye movements to track the velocity oscillations of the hip of point-light walkers. We quantified the quality of tracking by cross-correlating pursuit and hip velocity oscillations. We found a robust correlation between signals, even along the horizontal dimension, where changes in velocity during the stepping cycle are very subtle. The inversion of the walker and the presentation of the hip-dot without context incurred the same additional phase lag along the horizontal dimension. These findings support the view that information beyond the hip-dot contributes to the prediction of hip kinematics that controls pursuit. We also found a smaller phase lag in inverted walkers for pursuit along the vertical dimension compared to upright walkers, indicating that inversion does not simply reduce prediction.
We suggest that pursuit eye movements reflect the visual processing of biological motion and as such could provide an implicit measure of higher-level visual function. |
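The cross-correlation analysis described in the abstract above — quantifying tracking quality by correlating pursuit and hip velocity oscillations — can be sketched in a few lines. A minimal illustration, with the sampling rate, toy velocity traces, and the 100 ms delay all invented for the example (not the authors' data or pipeline):

```python
import numpy as np

def peak_lag(pursuit_vel, target_vel, dt):
    """Cross-correlate two velocity traces and return the lag (in seconds)
    at which the correlation peaks; positive = pursuit lags the target."""
    p = pursuit_vel - pursuit_vel.mean()
    t = target_vel - target_vel.mean()
    xcorr = np.correlate(p, t, mode="full")
    lags = np.arange(-(len(t) - 1), len(p)) * dt
    return lags[np.argmax(xcorr)]

# Toy traces at 1 kHz: low-pass-filtered noise stands in for hip velocity,
# and the "pursuit" trace is a copy delayed by 100 samples (100 ms).
dt = 0.001
rng = np.random.default_rng(1)
target = np.convolve(rng.standard_normal(5000), np.ones(50) / 50, mode="same")
pursuit = np.concatenate([np.zeros(100), target[:-100]])

print(round(peak_lag(pursuit, target, dt), 3))  # ≈ 0.1 s phase lag
```

Filtered noise is used here rather than a pure sinusoid because, with strictly periodic signals, the peak lag is only identified up to one gait cycle.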
Wenfang Song; Xinze Xie; Wenyue Huang; Qianqian Yu The design of automotive interior for Chinese young consumers based on Kansei engineering and eye-tracking technology Journal Article In: Applied Sciences, vol. 13, no. 19, pp. 1–20, 2023. @article{Song2023, A reasonable CMF (Color, Material and Finishing) design for automotive interiors can elicit positive psychophysical and affective responses from customers, providing an important guideline for automobile enterprises in making differentiated products. However, current studies mainly focus on one aspect of CMF design or a single style of automotive interior, and examine the design mainly through human visual perception. There is a lack of systematic studies on the design and evaluation of automobile interior CMF, and more scientific evaluation of the design through both visual and tactile perception is required. Therefore, this study systematically designed the automobile interior CMF based on Kansei engineering and eye-tracking technology. The study consists of five steps: (1) Product positioning: young Chinese consumers, new energy vehicles, and the bridge and seat were selected as the target users, the automotive model, and the key interior components, respectively. (2) Kansei physiological measurement: nine groups of Kansei words and thirty-three interior samples were selected, and the interior samples were scored against the Kansei words. (3) Kansei data analysis: three design types were determined, i.e., "hard and stately", "concise and technological" and "comfortable and safe". Meanwhile, the CMF design elements of the automotive interiors under the three styles were obtained through mathematical methods. (4) Design practice: four CMF samples under each design style (12 samples in total) were developed. (5) Kansei evaluation: the designs were evaluated using eye-tracking technology, and the optimal sample that best satisfies the user's Kansei requirements under each style was obtained.
The proposed design process of automotive interior CMF may have great implications for the design of automotive interiors. |
Sangkyu Son; Joonsik Moon; Yee-Joon Kim; Min-Suk Kang; Joonyeol Lee Frontal-to-visual information flow explains predictive motion tracking Journal Article In: NeuroImage, vol. 269, pp. 1–11, 2023. @article{Son2023, Predictive tracking demonstrates our ability to maintain a line of vision on moving objects even when they temporarily disappear. Models of smooth pursuit eye movements posit that our brain achieves this ability by directly streamlining motor programming from continuously updated sensory motion information. To test this hypothesis, we obtained sensory motion representation from multivariate electroencephalogram activity while human participants covertly tracked a temporarily occluded moving stimulus with their eyes remaining stationary at the fixation point. The sensory motion representation of the occluded target evolves to its maximum strength at the expected timing of reappearance, suggesting a timely modulation of the internal model of the visual target. We further characterize the spatiotemporal dynamics of the task-relevant motion information by computing the phase gradients of slow oscillations. We discovered a predominant posterior-to-anterior phase gradient immediately after stimulus occlusion; however, at the expected timing of reappearance, the axis reverses the gradient, becoming anterior-to-posterior. The behavioral bias of smooth pursuit eye movements, which is a signature of the predictive process of the pursuit, was correlated with the posterior division of the gradient. These results suggest that the sensory motion area modulated by the prediction signal is involved in updating motor programming. |
Linda Sommerfeld; Maria Staudte; Nivedita Mani; Jutta Kray Even young children make multiple predictions in the complex visual world Journal Article In: Journal of Experimental Child Psychology, vol. 235, pp. 1–29, 2023. @article{Sommerfeld2023, Children can anticipate upcoming input in sentences with semantically constraining verbs. In the visual world, the sentence context is used to anticipatorily fixate the only object matching potential sentence continuations. Adults can process even multiple visual objects in parallel when predicting language. This study examined whether young children can also maintain multiple prediction options in parallel during language processing. In addition, we aimed at replicating the finding that children's receptive vocabulary size modulates their prediction. German children (5–6 years |
Emma J. Solly; Meaghan Clough; Allison M. McKendrick; Owen B. White; Joanne Fielding Eye movement characteristics are not significantly influenced by psychiatric comorbidities in people with visual snow syndrome Journal Article In: Brain Research, vol. 1804, pp. 1–5, 2023. @article{Solly2023, Visual snow syndrome (VSS) is a neurological disorder primarily affecting the processing of visual information. Using ocular motor (OM) tasks, we previously demonstrated that participants with VSS exhibit altered saccade profiles consistent with visual attention impairments. We subsequently proposed that OM assessments may provide an objective measure of dysfunction in these individuals. However, VSS participants also frequently report significant psychiatric symptoms. Given that these symptoms have been shown previously to influence performance on OM tasks, the objective of this study was to investigate whether psychiatric symptoms (specifically: depression, anxiety, fatigue, sleep difficulties, and depersonalization) influence the OM metrics found to differ in VSS. Sixty-one VSS participants completed a battery of four OM tasks and a series of online questionnaires assessing psychiatric symptomology. We revealed no significant relationship between psychiatric symptoms and OM metrics on any of the tasks, demonstrating that in participants with VSS, differences in OM behaviour are a feature of the disorder. This supports the utility of OM assessment in characterising deficits in VSS, whether supporting a diagnosis or monitoring future treatment efficacy. |
Katrine Falcon Soby; Evelyn Arko Milburn; Line Burholt Kristensen; Valentin Vulchanov; Mila Vulchanova In the native speaker's eye: Online processing of anomalous learner syntax Journal Article In: Applied Psycholinguistics, vol. 44, no. 1, pp. 1–28, 2023. @article{Soby2023, How do native speakers process texts with anomalous learner syntax? Second-language learners of Norwegian, and other verb-second (V2) languages, frequently place the verb in third position (e.g., *Adverbial-Subject-Verb), although it is mandatory for the verb in these languages to appear in second position (Adverbial-Verb-Subject). In an eye-tracking study, native Norwegian speakers read sentences with either grammatical V2 or ungrammatical verb-third (V3) word order. Unlike previous eye-tracking studies of ungrammaticality, which have primarily addressed morphosyntactic anomalies, we exclusively manipulate word order with no morphological or semantic changes. We found that native speakers reacted immediately to ungrammatical V3 word order, indicated by increased fixation durations and more regressions out on the subject, and subsequently on the verb. Participants also recovered quickly, already on the following word. The effects of grammaticality were unaffected by the length of the initial adverbial. The study contributes to future models of sentence processing which should be able to accommodate various types of noisy input, that is, non-standard variation. Together with new studies of processing of other L2 anomalies in Norwegian, the current findings can help language instructors and students prioritize which aspects of grammar to focus on. |
Joshua Snell; Jeremy Yeaton; Jonathan Mirault; Jonathan Grainger Parallel word reading revealed by fixation-related brain potentials Journal Article In: Cortex, vol. 162, pp. 1–11, 2023. @article{Snell2023, During reading, the brain is confronted with many relevant objects at once. But does lexical processing occur for multiple words simultaneously? Cognitive science has yet to answer this prominent question. Recently it has been argued that the issue warrants supplementing the field's traditional toolbox (response times, eye-tracking) with neuroscientific techniques (EEG, fMRI). Indeed, according to the OB1-reader model, upcoming words need not impact oculomotor behavior per se, but parallel processing of these words must nonetheless be reflected in neural activity. Here we combined eye-tracking with EEG, time-locking the neural window of interest to the fixation on target words in sentence reading. During these fixations, we manipulated the identity of the subsequent word so that it posed either a syntactically legal or illegal continuation of the sentence. In line with previous research, oculomotor measures were unaffected. Yet, syntax impacted brain potentials as early as 100 ms after the target fixation onset. Given the EEG literature on syntax processing, the presently observed timings suggest parallel word reading. We reckon that parallel word processing typifies reading, and that OB1-reader offers a good platform for theorizing about the reading brain. |
Maverick E. Smith; Lester C. Loschky; Heather R. Bailey Eye movements and event segmentation: Eye movements reveal age-related differences in event model updating Journal Article In: Psychology and Aging, pp. 1–8, 2023. @article{Smith2023, People spontaneously segment continuous ongoing actions into sequences of events. Prior research found that gaze similarity and pupil dilation increase at event boundaries and that older adults segment more idiosyncratically than do young adults. We used eye tracking to explore age-related differences in gaze similarity (i.e., the extent to which individuals look at the same places at the same time as others) and pupil dilation at event boundaries. Older and young adults watched naturalistic videos of actors performing everyday activities while we tracked their eye movements. Afterward, they segmented the videos into subevents. Replicating prior work, we found that pupil size and gaze similarity increased at event boundaries. Thus, there were fewer individual differences in eye position at boundaries. We also found that young adults had higher gaze similarity than older adults throughout an entire video and at event boundaries. This study is the first to show that age-related differences in how people parse continuous everyday activities into events may be partially explained by individual differences in gaze patterns. Those who segment less normatively may do so because they fixate on less normative regions. Results have implications for future interventions designed to improve encoding in older adults. |
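The gaze-similarity measure discussed above (the extent to which individuals look at the same places at the same time as others) can be operationalized in several ways; the abstract does not give a formula, so the sketch below simply negates the mean pairwise distance between observers' gaze positions at each time point. The metric and toy data are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def gaze_similarity(gaze):
    """gaze: array of shape (n_observers, n_timepoints, 2) with x/y positions.
    Returns, per time point, the negated mean pairwise Euclidean distance
    between observers' gaze positions (higher = more similar)."""
    n = gaze.shape[0]
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(gaze[i] - gaze[j], axis=-1))
    return -np.mean(dists, axis=0)

# Toy data: 3 observers, 4 time points; at the last frame everyone looks
# at the same point, as gaze might converge at an event boundary.
rng = np.random.default_rng(0)
gaze = rng.uniform(0, 100, size=(3, 4, 2))
gaze[:, 3, :] = [50.0, 50.0]
sim = gaze_similarity(gaze)
print(sim[3] == 0.0)  # True: perfect similarity when all observers coincide
```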
Simona Skripkauskaite; Ioana Mihai; Kami Koldewyn Attentional bias towards social interactions during viewing of naturalistic scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 10, pp. 2303–2311, 2023. @article{Skripkauskaite2023, Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs) such as faces and bodies attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during the free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than background AOIs in the interactive pictures. In non-interactive pictures, however, dwell time did not differ between AOI type. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture and suggest that our attention values social interactions beyond the presence of two people. |
Alice E. Skelton; Anna Franklin; Jenny M. Bosten Colour vision is aligned with natural scene statistics at 4 months of age Journal Article In: Developmental Science, vol. 26, no. 6, pp. 1–8, 2023. @article{Skelton2023, Visual perception in adult humans is thought to be tuned to represent the statistical regularities of natural scenes. For example, in adults, visual sensitivity to different hues shows an asymmetry which coincides with the statistical regularities of colour in the natural world. Infants are sensitive to statistical regularities in social and linguistic stimuli, but whether or not infants' visual systems are tuned to natural scene statistics is currently unclear. We measured colour discrimination in infants to investigate whether or not the visual system can represent chromatic scene statistics in very early life. Our results reveal the earliest association between vision and natural scene statistics that has yet been found: even as young as 4 months of age, colour vision is aligned with the distributions of colours in natural scenes. Research Highlights: We find infants' colour sensitivity is aligned with the distribution of colours in the natural world, as it is in adults. At just 4 months, infants' visual systems are tailored to extract and represent the statistical regularities of the natural world. This points to a drive for the human brain to represent statistical regularities even at a young age. |
Oindrila Sinha; Shirin Madarshahian; Ana Gómez-Granados; Morgan L. Paine; Isaac Kurtzer; Tarkeshwar Singh Smooth pursuit eye movements contribute to anticipatory force control during mechanical stopping of moving objects Journal Article In: Journal of Neurophysiology, vol. 129, no. 6, pp. 1293–1309, 2023. @article{Sinha2023, When stopping a closing door or catching an object, humans process the motion of inertial objects and apply reactive limb force over a short period to interact with them. One way in which the visual system processes motion is through extraretinal signals associated with smooth pursuit eye movements (SPEMs). We conducted three experiments to investigate how SPEMs contribute to anticipatory and reactive hand force modulation when interacting with a virtual object moving in the horizontal plane. We hypothesized that SPEM signals are critical for timing motor responses, anticipatory control of hand force, and task performance. Participants held a robotic manipulandum and attempted to stop an approaching simulated object by applying a force impulse (area under the force-time curve) that matched the object's virtual momentum upon contact. We manipulated the object's momentum by varying either its virtual mass or its speed under free gaze or constrained gaze conditions. We examined gaze variables, the timing of hand motor responses, anticipatory force control, and overall task performance. Our results show that when participants were fixated at a designated location instead of following objects with SPEM, anticipatory modulation of hand force before contact decreased. However, constraining gaze by asking participants to fixate did not seem to affect the timing of the motor response or the task performance. Together, these results suggest that SPEMs may be important for anticipatory control of hand force before contact and may also play a critical role in anticipatory stabilization of limb posture when humans interact with moving objects.
NEW & NOTEWORTHY We show for the first time that smooth pursuit eye movements (SPEMs) play a role in the modulation of anticipatory control of hand force to stabilize posture against contact forces. SPEMs are critical for tracking moving objects, facilitate processing motion of moving objects, and are impacted during aging and in many neurological disorders, such as Alzheimer's disease and multiple sclerosis. These results provide a novel basis to probe how changes in SPEMs could contribute to deficient limb motor control in older adults and patients with neurological disorders. |
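The impulse-matching criterion in the task above — the area under the hand force-time curve should equal the object's virtual momentum upon contact — can be illustrated numerically. A minimal sketch in which the mass, speed, and half-sine force profile are invented for the example (participants, of course, produced their own force profiles):

```python
import numpy as np

# Hypothetical object: virtual mass (kg) and approach speed (m/s)
mass, speed = 2.0, 0.6
momentum = mass * speed                  # target impulse, in N*s

# Hypothetical half-sine hand-force pulse of duration T, with its peak
# chosen so that the area under the pulse matches the momentum exactly
T = 0.3                                  # pulse duration (s)
t = np.linspace(0.0, T, 1001)
peak = momentum * np.pi / (2 * T)        # since the half-sine area is 2*T*peak/pi
force = peak * np.sin(np.pi * t / T)

# Impulse = area under the force-time curve (trapezoidal rule)
dt = t[1] - t[0]
impulse = float(np.sum((force[:-1] + force[1:]) / 2) * dt)
print(round(impulse, 4))                 # 1.2, i.e., mass * speed
```

An undershoot or overshoot of this area relative to `mass * speed` would then serve as the trial's performance error.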
Tarkeshwar Singh; John-Ross Rizzo; Cédrick Bonnet; Jennifer A. Semrau; Troy M. Herter Enhanced cognitive interference during visuomotor tasks may cause eye–hand dyscoordination Journal Article In: Experimental Brain Research, vol. 241, no. 2, pp. 547–558, 2023. @article{Singh2023, In complex visuomotor tasks, such as cooking, people make many saccades to continuously search for items before and during reaching movements. These tasks require cognitive resources, such as short-term memory and task-switching. Cognitive load may impact limb motor performance by increasing demands on mental processes, but mechanisms remain unclear. The Trail-Making Tests, in which participants sequentially search for and make reaching movements to 25 targets, consist of a simple numeric variant (Trails-A) and a cognitively challenging variant that requires alphanumeric switching (Trails-B). We have previously shown that stroke survivors and age-matched controls make many more saccades in Trails-B, and those increases in saccades are associated with decreases in speed and smoothness of reaching movements. However, it remains unclear how patients with neurological injuries, e.g., stroke, manage progressive increases in cognitive load during visuomotor tasks, such as the Trail-Making Tests. As a Trails-B trial progresses, switching between numbers and letters leads to progressive increases in cognitive load. Here, we show that stroke survivors with damage to frontoparietal areas and age-matched controls made more saccades and had longer fixations as they progressed through the 25 alphanumeric targets in Trails-B. Furthermore, when stroke survivors made saccades during reaching movements in Trails-B, their movement speed slowed down significantly. Thus, damage to frontoparietal areas serving cognitive motor functions may cause interference between oculomotor, visual, and limb motor functions, which could lead to significant disruptions in activities of daily living.
These findings augment our understanding of the mechanisms that underpin cognitive-motor interference during complex visuomotor tasks. |
Johannes J. D. Singer; Radoslaw M. Cichy; Martin N. Hebart The spatiotemporal neural dynamics of object recognition for natural images and line drawings Journal Article In: Journal of Neuroscience, vol. 43, no. 3, pp. 484–500, 2023. @article{Singer2023, Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. 
Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings. |
Tiana V. Simovic; Craig G. Chambers How do antecedent semantics influence pronoun interpretation? Evidence from eye movements Journal Article In: Cognitive Science, vol. 47, no. 2, pp. 1–15, 2023. @article{Simovic2023, Pronoun interpretation is often described as relying on a comprehender's mental model of discourse. For example, in some psycholinguistic accounts, interpreting pronouns involves a process of retrieval, whereby a pronoun is resolved by accessing information from its linguistic antecedent. However, linguistic antecedents are neither necessary nor sufficient for interpreting a pronoun, and even when an antecedent has been introduced in earlier discourse, there is little evidence for the retrieval of linguistic form. The current study extends our understanding of pronoun interpretation by examining whether the semantics of antecedent expressions are retrieved from representations of past discourse. Participants were instructed to move displayed objects in a Visual World eye-tracking task. In some cases, the semantics of the antecedent were no longer viable after an instruction was completed (e.g., “Move the house on the left to area 12,” where the result was that a different house is now the leftmost one). In this case, retrieving antecedent semantics at the point of hearing a subsequent pronoun (“Now, move it…”) should entail a processing penalty. Instead, the results showed that antecedent semantics have no direct effect on interpretation, raising additional questions about the role that retrieval might play in pronoun interpretation. |
Olympia Simantiraki; Anita E. Wagner; Martin Cooke The impact of speech type on listening effort and intelligibility for native and non-native listeners Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–16, 2023. @article{Simantiraki2023, Listeners are routinely exposed to many different types of speech, including artificially-enhanced and synthetic speech, styles which deviate to a greater or lesser extent from naturally-spoken exemplars. While the impact of differing speech types on intelligibility is well-studied, it is less clear how such types affect cognitive processing demands, and in particular whether those speech forms with the greatest intelligibility in noise have a commensurately lower listening effort. The current study measured intelligibility, self-reported listening effort, and a pupillometry-based measure of cognitive load for four distinct types of speech: (i) plain, i.e., natural unmodified speech; (ii) Lombard speech, a naturally-enhanced form which occurs when speaking in the presence of noise; (iii) artificially-enhanced speech which involves spectral shaping and dynamic range compression; and (iv) speech synthesized from text. In the first experiment a cohort of 26 native listeners responded to the four speech types in three levels of speech-shaped noise. In a second experiment, 31 non-native listeners underwent the same procedure at more favorable signal-to-noise ratios, chosen since second language listening in noise has a more detrimental effect on intelligibility than listening in a first language. For both native and non-native listeners, artificially-enhanced speech was the most intelligible and led to the lowest subjective effort ratings, while the reverse was true for synthetic speech. However, pupil data suggested that Lombard speech elicited the lowest processing demands overall.
These outcomes indicate that the relationship between intelligibility and cognitive processing demands is not a simple inverse, but is mediated by speech type. The findings of the current study motivate the search for speech modification algorithms that are optimized for both intelligibility and listening effort. |
Mustafa Shirzad; James Van Riesen; Nikan Behboodpour; Matthew Heath 10-min exposure to a 2.5% hypercapnic environment increases cerebral blood flow but does not impact executive function Journal Article In: Life Sciences in Space Research, vol. 40, no. 2023, pp. 143–150, 2023. @article{Shirzad2023, Space travel and exploration are associated with increased ambient CO2 (i.e., a hypercapnic environment). Some work reported that the physiological changes (e.g., increased cerebral blood flow [CBF]) associated with a chronic hypercapnic environment contribute to a “space fog” that adversely impacts cognition and psychomotor performance, whereas other work reported no change or a positive change. Here, we employed the antisaccade task to evaluate whether transient exposure to a hypercapnic environment influences top-down executive function (EF). Antisaccades require a goal-directed eye movement mirror-symmetrical to a target and are an ideal tool for identifying subtle EF changes. Healthy young adults (aged 19–25 years) performed blocks of antisaccade trials prior to (i.e., pre-intervention), during (i.e., concurrent) and after (i.e., post-intervention) 10 min of breathing a fractional inspired CO2 (FiCO2) of 2.5% (i.e., hypercapnic condition) and during a normocapnic (i.e., control) condition. In both conditions, CBF, ventilatory and cardiorespiratory responses were measured. Results showed that the hypercapnic condition increased CBF, ventilation and end-tidal CO2 and thus demonstrated an expected physiological adaptation to increased FiCO2. Notably, however, null hypothesis and equivalence tests indicated that concurrent and post-intervention antisaccade reaction times were refractory to the hypercapnic environment; that is, transient exposure to a FiCO2 of 2.5% did not produce a real-time or lingering influence on an oculomotor-based measure of EF. 
Accordingly, results provide a framework that – in part – establishes the FiCO2 percentage and timeline by which high-level EF can be maintained. Future work will explore CBF and EF dynamics during chronic hypercapnic exposure as a more direct proxy for the challenges of space flight and exploration. |
Frederick Shic; Erin C. Barney; Adam J. Naples; Kelsey J. Dommer; Shou An Chang; Beibin Li; Takumi McAllister; Adham Atyabi; Quan Wang; Raphael Bernier; Geraldine Dawson; James Dziura; Susan Faja; Shafali Spurling Jeste; Michael Murias; Scott P. Johnson; Maura Sabatos-DeVito; Gerhard Helleman; Damla Senturk; Catherine A. Sugar; Sara Jane Webb; James C. McPartland; Katarzyna Chawarska In: Autism Research, vol. 16, pp. 2150–2159, 2023. @article{Shic2023, The Selective Social Attention (SSA) task is a brief eye-tracking task involving experimental conditions varying along socio-communicative axes. Traditionally the SSA has been used to probe socially-specific attentional patterns in infants and toddlers who develop autism spectrum disorder (ASD). This current work extends these findings to preschool and school-age children. Children 4- to 12-years-old with ASD (N = 23) and a typically-developing comparison group (TD; N = 25) completed the SSA task as well as standardized clinical assessments. Linear mixed models examined group and condition effects on two outcome variables: percent of time spent looking at the scene relative to scene presentation time (%Valid), and percent of time looking at the face relative to time spent looking at the scene (%Face). Age and IQ were included as covariates. Outcome variables' relationships to clinical data were assessed via correlation analysis. The ASD group, compared to the TD group, looked less at the scene and focused less on the actress' face during the most socially-engaging experimental conditions. Additionally, within the ASD group, %Face negatively correlated with SRS total T-scores with a particularly strong negative correlation with the Autistic Mannerism subscale T-score. 
These results highlight the extensibility of the SSA to older children with ASD, including replication of between-group differences previously seen in infants and toddlers, as well as its ability to capture meaningful clinical variation within the autism spectrum across a wide developmental span inclusive of preschool and school-aged children. These properties suggest that the SSA may have broad potential as a biomarker for ASD. |
Summer Sheremata; George L. Malcolm; Sarah Shomstein Behavioral asymmetries in visual short-term memory occur in retinotopic coordinates Journal Article In: Attention, Perception, and Psychophysics, vol. 85, pp. 113–119, 2023. @article{Sheremata2023, Visual short-term memory (VSTM) is an essential store that creates continuous representations from disjointed visual input. However, severe capacity limits exist, reflecting constraints in supporting brain networks. VSTM performance shows spatial biases predicted by asymmetries in the brain based upon the location of the remembered object. Visual representations are retinotopic, or relative to the location of the representation on the retina. It therefore stands to reason that memory performance may also show retinotopic biases. Here, eye position was manipulated to tease apart retinotopic coordinates from spatiotopic coordinates, or location relative to the external world. Memory performance was measured while participants performed a color change-detection task for items presented across the visual field, fixating either a central or a peripheral position. VSTM biases reflected the location of the stimulus on the retina, regardless of where the stimulus appeared on the screen. Therefore, spatial biases occur in retinotopic coordinates in VSTM and suggest a fundamental link between behavioral VSTM measures and visual representations. |
Yuanping Shen; Qin Wang; Hongli Liu; Jianye Luo; Qunyue Liu; Yuxiang Lan Landscape design intensity and its associated complexity of forest landscapes in relation to preference and eye movements Journal Article In: Forests, vol. 14, no. 4, pp. 1–16, 2023. @article{Shen2023b, Understanding how people perceive landscapes is essential for the design of forest landscapes. This study investigates how design intensity affects landscape complexity, preference, and eye movements for urban forest settings. Eight groups of twenty-four pictures, representing lawn, path, and waterscape settings in urban forests, were selected, with each type of setting having two groups of pictures and each group containing four pictures. The four pictures in each group were classified into slight, low, medium, and high design intensities. A total of 76 students were randomly assigned to observe one group of pictures within each type of landscape with an eye-tracking apparatus and give ratings of complexity and preference. The results indicate that design intensity was positively associated with subjective landscape complexity but was positively or negatively related to objective landscape complexity across the three types of settings. Subjective landscape complexity was found to significantly contribute to visual preference across landscape types, while objective landscape complexity did not contribute to preference. In addition, the marginal effect of medium design intensity on preference was greater than that of low and high design intensity in most cases. Moreover, although some eye movement metrics were significantly related to preference in lawn settings, none were found to be indicative predictors for preference. The findings enrich research in visual preference and assist landscape designers during the design process to effectively arrange landscape design intensity in urban forests. |
Meng Shen; Zibei Niu; Lei Gao; Tianzhi Li; Danhui Wang; Shan Li; Man Zeng; Xuejun Bai; Xiaolei Gao Examining the extraction of parafoveal semantic information in Tibetan Journal Article In: PLoS ONE, vol. 18, no. 4, pp. 1–20, 2023. @article{Shen2023a, This study conducted two experiments to investigate the extraction of semantic preview information from the parafovea in Tibetan reading. In Experiment 1, a single-factor (preview type: identical vs. semantically related vs. unrelated) within-subject experimental design was used to investigate whether there is a parafoveal semantic preview effect (SPE) in Tibetan reading. Experiment 2 used a 2 (contextual constraint: high vs. low) × 3 (preview type: identical vs. semantically related vs. unrelated) within-subject experimental design to investigate the influence of contextual constraint on the parafoveal semantic preview effect in Tibetan reading. Supporting the E-Z reader model, the experimental results showed that in Tibetan reading, readers could not obtain semantic preview information from the parafovea, and contextual constraint did not influence this process. However, comparing high- and low-constrained contexts, the latter might be more conducive to extracting semantic preview information from the parafovea. |
Jing Shen; Elizabeth Heller Murray; Erin R. Kulick The effect of breathy vocal quality on speech intelligibility and listening effort in background noise Journal Article In: Trends in Hearing, vol. 27, pp. 1–14, 2023. @article{Shen2023, Speech perception is challenging under adverse conditions. However, there is limited evidence regarding how multiple adverse conditions affect speech perception. The present study investigated two conditions that are frequently encountered in real-life communication: background noise and breathy vocal quality. The study first examined the effects of background noise and breathiness on speech perception as measured by intelligibility. Secondly, the study tested the hypothesis that both noise and breathiness affect listening effort, as indicated by linear and nonlinear changes in pupil dilation. Low-context sentences were resynthesized to create three levels of breathiness (original, mild-moderate, and severe). The sentences were presented in a fluctuating nonspeech noise at two signal-to-noise ratios (SNRs): −5 dB (favorable) and −9 dB (adverse). Speech intelligibility and pupil dilation data were collected from young listeners with normal hearing thresholds. The results demonstrated that a breathy vocal quality presented in noise negatively affected speech intelligibility, with the degree of breathiness playing a critical role. Listening effort, as measured by the magnitude of pupil dilation, showed significant effects with both severe and mild-moderate breathy voices that were independent of noise level. The findings contributed to the literature by demonstrating the impact of vocal quality on the perception of speech in noise. They also highlighted the complex dynamics between overall task demand and processing resources in understanding the combined impact of multiple adverse conditions. |
Shdeour O.; Tal-Perry N.; Glickman M.; Yuval-Greenberg S. Exposure to temporal randomness promotes subsequent adaptation to new temporal regularities Journal Article In: Cognition, vol. 244, pp. 1–11, 2023. @article{Shdeour2023, Noise is intuitively thought to interfere with perceptual learning; however, human and machine learning studies suggest that, in certain contexts, variability may reduce overfitting and improve generalizability. Whereas previous studies have examined the effects of variability in learned stimuli or tasks, the effects of variability in the temporal environment have hitherto remained unknown. Here, we examined this question in two groups of adult participants (N = 40) presented with visual targets at either random or fixed temporal routines and then tested on the same type of targets at a new, nearly-fixed temporal routine. Findings reveal that participants of the random group performed better and adapted quicker following a change in the timing routine, relative to participants of the fixed group. Corroborated with eye tracking and computational modeling, these findings suggest that prior exposure to temporal randomness promotes the formation of new temporal expectations and enhances generalizability in a dynamic environment. We conclude that noise plays an important role in promoting perceptual learning in the temporal domain: rather than interfering with the formation of temporal expectations, noise enhances them. This counterintuitive effect is hypothesized to be achieved through eliminating overfitting and promoting generalizability. |
Mishaal Sharif; Yougan Saman; Rose Burling; Oliver Rea; Rakesh Patel; Douglas J. K. Barrett; Peter Rea; Amir Kheradmand; Qadeer Arshad Altered visual conscious awareness in patients with vestibular dysfunctions; a cross-sectional observation study Journal Article In: Journal of the Neurological Sciences, vol. 448, pp. 1–6, 2023. @article{Sharif2023, Background: Patients with vestibular dysfunctions often experience visual-induced symptoms. Here we asked whether such visual dependence can be related to alterations in visual conscious awareness in these patients. Methods: To measure visual conscious awareness, we used the effect of motion-induced blindness (MIB), in which the perceptual awareness of the visual stimulus alternates despite its unchanged physical characteristics. In this phenomenon, a salient visual target spontaneously disappears and subsequently reappears from visual perception when presented against a moving visual background. The number of perceptual switches during the experience of the MIB stimulus was measured for 120 s in 15 healthy controls, 15 patients with vestibular migraine (VM), 15 patients with benign paroxysmal positional vertigo (BPPV) and 15 with migraine without vestibular symptoms. Results: Patients with vestibular dysfunctions (i.e., both vestibular migraine and BPPV) exhibited increased perceptual fluctuations during MIB compared to healthy controls and migraine patients without vertigo. In VM patients, those with more severe symptoms exhibited higher fluctuations of visual awareness (i.e., positive correlation), whereas, in BPPV patients, those with more severe symptoms had lower fluctuations of visual awareness (i.e., negative correlation). 
Implications: Taken together, these findings show that fluctuations of visual awareness are linked to the severity of visual-induced symptoms in patients with vestibular dysfunctions, and distinct pathophysiological mechanisms may mediate visual vertigo in peripheral versus central vestibular dysfunctions. |
Yijing Shan; Jay A. Edelman The reduction of saccadic inhibition by distractor repetition Journal Article In: Journal of Neurophysiology, vol. 130, no. 3, pp. 619–627, 2023. @article{Shan2023, When visual distractors are presented far from the goal of an impending voluntary saccadic eye movement, saccade execution will occur less frequently about 90 ms after distractor appearance, a phenomenon known as saccadic inhibition. However, it is also known that neural responses in visual and visuomotor areas of the brain will be attenuated if a visual stimulus appears several times in the same location in rapid succession. In particular, such visual adaptation can affect neurons in the mammalian superior colliculus (SC). As the SC is known to be intimately involved in the production of saccadic eye movements, and thus perhaps in saccadic inhibition, we used a memory-guided saccade task to test whether saccadic inhibition in humans would diminish if a distractor appeared several times in quick succession. We found that distractor repetition reduced saccadic inhibition considerably when distractors appeared opposite in space to the goal of the impending saccade. In addition, when three distractors appeared in quick succession but in different, spatially disparate locations, with only the final distractor appearing opposite the saccade goal, saccadic inhibition was reduced to an intermediate level, suggesting that its reduction due to distractor repetition generalizes spatially. This suggests that distractor suppression can help reduce the impact that suddenly appearing visual stimuli have on purposive eye movement behavior. NEW & NOTEWORTHY This work combines approaches studying saccadic inhibition and visual adaptation to demonstrate that saccadic inhibition is largely eliminated with stimulus repetition. This is likely to be the largest demonstrated effect of visual stimulus context on saccadic inhibition. 
It also provides evidence for the existence of a mechanism that acts to suppress the effect of frequently appearing visual stimuli on purposive eye movement behavior in dynamic visual environments. |
Soroosh Shalileh; Dmitry Ignatov; Anastasiya Lopukhina; Olga Dragoy Identifying dyslexia in school pupils from eye movement and demographic data using artificial intelligence Journal Article In: PLoS ONE, vol. 18, pp. 1–26, 2023. @article{Shalileh2023, This paper presents our research results in the pursuit of the following objectives: (i) to introduce a novel multi-source data set to tackle the shortcomings of the previous data sets, (ii) to propose a robust artificial intelligence-based solution to identify dyslexia in primary school pupils, (iii) to enrich our psycholinguistic knowledge by studying the importance of the features used by our best AI model in identifying dyslexia. In order to achieve the first objective, we collected and annotated a new set of eye-movement-during-reading data. Furthermore, we collected demographic data, including a measure of non-verbal intelligence, to form our three data sources. Our data set is the largest eye-movement data set globally. Unlike the previously introduced binary-class data sets, it contains (A) three class labels and (B) reading speed. Concerning the second objective, we formulated the task of dyslexia prediction as regression and classification problems and scrutinized the performance of 12 classification and eight regression approaches. We exploited the Bayesian optimization method to fine-tune the hyperparameters of the models, and reported the average and the standard deviation of our evaluation metrics in a stratified ten-fold cross-validation. Our studies showed that multi-layer perceptron, random forest, gradient boosting, and k-nearest neighbor form the group having the most acceptable results. Moreover, we showed that although separately using each data source did not lead to accurate results, their combination led to a reliable solution. 
We also determined the importance of the features of our best classifier: our findings showed that IQ, gender, and age are the top three most important features; we also showed that fixation along the y-axis is more important than other fixation data. Keywords: dyslexia detection, eye fixation, eye movement, demographic, classification, regression, artificial intelligence. |
Anaïs Servais; Noémie Préa; Christophe Hurter; Emmanuel J. Barbeau In: Acta Psychologica, vol. 240, pp. 1–13, 2023. @article{Servais2023, It is common to look away while trying to remember specific information, for example during autobiographical memory retrieval, a behavior referred to as gaze aversion. Given the competition between internal and external attention, gaze aversion is assumed to play a role in visual decoupling, i.e., suppressing environmental distractors during internal tasks. This suggests a link between gaze aversion and the attentional switch from the outside world to a temporary internal mental space that takes place during the initial stage of memory retrieval, but this assumption has never been verified so far. We designed a protocol where 33 participants answered 48 autobiographical questions while their eye movements were recorded with an eye-tracker and a camcorder. Results indicated that gaze aversion occurred early (median 1.09 s) and predominantly during the access phase of memory retrieval—i.e., the moment when the attentional switch is assumed to take place. In addition, gaze aversion lasted a relatively long time (on average 6 s), and was notably decoupled from concurrent head movements. These results support a role of gaze aversion in perceptual decoupling. Gaze aversion was also related to higher retrieval effort and was rare during memories which came spontaneously to mind. This suggests that gaze aversion might be required only when cognitive effort is required to switch the attention toward the internal world to help retrieving hard-to-access memories. Compared to eye vergence, another visual decoupling strategy, the association with the attentional switch seemed specific to gaze aversion. Our results provide for the first time several arguments supporting the hypothesis that gaze aversion is related to the attentional switch from the outside world to memory. |
Eser Sendesen; Samet Kılıç; Nurhan Erbil; Özgür Aydın; Didem Turkyilmaz An exploratory study of the effect of tinnitus on listening effort using EEG and pupillometry Journal Article In: Otolaryngology-Head and Neck Surgery, vol. 169, no. 5, pp. 1259–1267, 2023. @article{Sendesen2023, Objective: Previous behavioral studies on listening effort in tinnitus patients did not consider extended high-frequency hearing thresholds and had conflicting results. This inconsistency may be related to the fact that listening effort was not evaluated via the central nervous system (CNS) and autonomic nervous system (ANS), which are directly related to tinnitus pathophysiology. This study matched hearing thresholds at all frequencies, including the extended high frequencies, to reduce the confounding effect of hearing loss and objectively evaluate listening effort via the CNS and ANS simultaneously in tinnitus patients. Study Design: Case-control study. Setting: University hospital. Methods: Sixteen chronic tinnitus patients and 23 matched healthy controls having normal pure-tone averages with symmetrical hearing thresholds were included. Subjects were evaluated with 0.125 to 20 kHz pure-tone audiometry, the Montreal Cognitive Assessment Test (MoCA), the Tinnitus Handicap Inventory (THI), a Visual Analog Scale (VAS), electroencephalography (EEG), and pupillometry. Results: Pupil dilation and EEG alpha-band activity during the “coding” phase of the presented sentences were lower in tinnitus patients than in the control group (p <.05). The VAS score was higher in the tinnitus group (p <.01). Also, there was no statistically significant relationship between EEG and pupillometry components and THI or MoCA (p >.05). Conclusion: This study suggests that tinnitus patients may need to make an extra effort to listen. Also, pupillometry may not be sufficiently reliable to assess listening effort in ANS-related pathologies. 
Considering the possible listening difficulties in tinnitus patients, reducing the listening difficulties, especially in noisy environments, can be added to the goals of tinnitus therapy protocols. |
Werner Seitz; Artyom Zinchenko; Hermann J. Müller; Thomas Geyer Contextual cueing of visual search reflects the acquisition of an optimal, one-for-all oculomotor scanning strategy Journal Article In: Communications Psychology, vol. 1, no. 1, pp. 1–12, 2023. @article{Seitz2023, Visual search improves when a target is encountered repeatedly at a fixed location within a stable distractor arrangement (spatial context), compared to non-repeated contexts. The standard account attributes this contextual-cueing effect to the acquisition of display-specific long-term memories, which, when activated by the current display, cue attention to the target location. Here we present an alternative, procedural-optimization account, according to which contextual facilitation arises from the acquisition of generic oculomotor scanning strategies, optimized with respect to the entire set of displays, with frequently searched displays accruing greater weight in the optimization process. To decide between these alternatives, we examined measures of the similarity, across time-on-task, of the spatio-temporal sequences of fixations through repeated and non-repeated displays. We found scanpath similarity to increase generally with learning, but more for repeated versus non-repeated displays. This pattern contradicts display-specific guidance, but supports one-for-all scanpath optimization. |
Laura Schwalm; Ralph Radach Parafoveal syntactic processing from word N + 2 during reading: The case of gender-specific German articles Journal Article In: Psychological Research, vol. 87, no. 8, pp. 2511–2532, 2023. @article{Schwalm2023, Previous research has suggested that some syntactic information such as word class can be processed parafoveally during reading. However, it is still unclear to what extent early syntactic cueing within noun phrases can facilitate word processing during dynamic reading. Two experiments (total N = 72) were designed to address this question using a gaze-contingent boundary change paradigm to manipulate the syntactic fit within a nominal phrase. Either the article (Experiment 1) or the noun (Experiment 2) was manipulated in the parafovea, resulting in a syntactic mismatch, depending on the condition. Results indicated a substantial elevation of viewing times on both parts of the noun phrase when conflicting syntactic information had been present in the parafovea. In Experiment 1, the article was also fixated more often in the syntactic mismatch condition. These results provide direct evidence of parafoveal syntactic processing. Based on the early time-course of this effect, it can be concluded that grammatical gender is used to generate constraints for the processing of upcoming nouns. To our knowledge, these results also provide the first evidence that syntactic information can be extracted from a parafoveal word N + 2. |
Andreas Schroeer; Martin Rune Andersen; Mike Lind Rank; Ronny Hannemann; Eline Borch Petersen; Filip Marchman Rønne; Daniel J. Strauss; Farah I. Corona-Strauss Assessment of vestigial auriculomotor activity to acoustic stimuli using electrodes in and around the ear Journal Article In: Trends in Hearing, vol. 27, pp. 1–6, 2023. @article{Schroeer2023, Recently, it has been demonstrated that electromyographic (EMG) activity of auricular muscles in humans, especially the postauricular muscle (PAM), depends on the spatial location of auditory stimuli. This observation has only been shown using wet electrodes placed directly on auricular muscles. To move towards a more applied, out-of-the-laboratory setting, this study aims to investigate if similar results can be obtained using electrodes placed in custom-fitted earpieces. Furthermore, with the exception of the ground electrode, only dry-contact electrodes were used to record EMG signals, which require little to no skin preparation and can therefore be applied extremely fast. In two experiments, auditory stimuli were presented to ten participants from different spatial directions. In experiment 1, stimuli were rapid onset naturalistic stimuli presented in silence, and in experiment 2, the corresponding participant's first name, presented in a “cocktail party” environment. In both experiments, ipsilateral responses were significantly larger than contralateral responses. Furthermore, machine learning models objectively decoded the direction of stimuli significantly above chance level on a single-trial basis (PAM: ~80%, in-ear: ~69%). There were no significant differences when participants repeated the experiments after several weeks. This study provides evidence that auricular muscle responses can be recorded reliably using an almost entirely dry-contact in-ear electrode system. 
The location of the PAM, and the fact that in-ear electrodes can record comparable signals, would make hearing aids interesting devices to record these auricular EMG signals and potentially utilize them as control signals in the future. |
Svea C. Y. Schroeder; David Aagten-Murphy; Niko A. Busch Contralateral delay activity, but not alpha lateralization, indexes prioritization of information for working memory storage Journal Article In: Attention, Perception, and Psychophysics, vol. 85, no. 3, pp. 718–733, 2023. @article{Schroeder2023, Working memory is inherently limited, which makes it important to select and maintain only task-relevant information and to protect it from distraction. Previous research has suggested the contralateral delay activity (CDA) and lateralized alpha oscillations as neural candidates for such a prioritization process. While most of this work focused on distraction during encoding, we examined the effect of external distraction presented during memory maintenance. Participants memorized the orientations of three lateralized objects. After an initial distraction-free maintenance interval, distractors appeared in the same location as the targets or in the opposite hemifield. This distraction was followed by another distraction-free interval. Our results show that CDA amplitudes were stronger in the interval before compared with the interval after the distraction (i.e., CDA amplitudes were stronger in response to targets compared with distractors). This amplitude reduction in response to distractors was more pronounced in participants with higher memory accuracy, indicating prioritization and maintenance of relevant over irrelevant information. In contrast, alpha lateralization did not change from the interval before distraction compared with the interval after distraction, and we found no correlation between alpha lateralization and memory accuracy. These results suggest that alpha lateralization plays no direct role in either selective maintenance of task-relevant information or inhibition of distractors. Instead, alpha lateralization reflects the current allocation of spatial attention to the most salient information regardless of task-relevance. 
In contrast, CDA indicates flexible allocation of working memory resources depending on task-relevance. |
Rebekka Schröder; Kristof Keidel; Peter Trautner; Alexander Radbruch; Ulrich Ettinger Neural mechanisms of background and velocity effects in smooth pursuit eye movements Journal Article In: Human Brain Mapping, vol. 44, no. 3, pp. 1–17, 2023. @article{Schroeder2023a, Smooth pursuit eye movements (SPEM) are essential to guide behaviour in complex visual environments. SPEM accuracy is known to be degraded by the presence of a structured visual background and at higher target velocities. The aim of this preregistered study was to investigate the neural mechanisms of these robust behavioural effects. N = 33 participants performed a SPEM task with two background conditions (present and absent) at two target velocities (0.4 and 0.6 Hz). Eye movement and BOLD data were collected simultaneously. Both the presence of a structured background and faster target velocity decreased pursuit gain and increased catch-up saccade rate. Faster targets additionally increased position error. Higher BOLD response with background was found in extensive clusters in visual, parietal, and frontal areas (including the medial frontal eye fields; FEF) partially overlapping with the known SPEM network. Faster targets were associated with higher BOLD response in visual cortex and left lateral FEF. Task-based functional connectivity analyses (psychophysiological interactions; PPI) largely replicated previous results in the basic SPEM network but did not yield additional information regarding the neural underpinnings of the background and velocity effects. The results show that the presentation of visual background stimuli during SPEM induces activity in a widespread visuo-parieto-frontal network including areas contributing to cognitive aspects of oculomotor control such as medial FEF, whereas the response to higher target velocity involves visual and motor areas such as lateral FEF. 
These results allowed us to propose, for the first time, distinct functions of the medial and lateral FEF during SPEM. |
Elizabeth R. Schotter; Sara Milligan; Victoria M. Estevez Event-related potentials show that parafoveal vision is insufficient for semantic integration Journal Article In: Psychophysiology, vol. 60, no. 7, pp. 1–25, 2023. @article{Schotter2023, Readers extract information from a word from parafoveal vision prior to looking at it. It has been argued that parafoveal perception allows readers to initiate linguistic processes, but it is unclear which stages of word processing are engaged: the process of extracting letter information to recognize words, or the process of extracting meaning to comprehend them. This study used the event-related brain potential (ERP) technique to investigate how word recognition (indexed by the N400 effect for unexpected or anomalous compared to expected words) and semantic integration (indexed by the Late-positive component; LPC effect for anomalous compared to expected words) are or are not elicited when the word is perceived only in parafoveal vision. Participants read a target word following a sentence that made it expected, unexpected, or anomalous, and read the sentences presented three words at a time in the Rapid Serial Visual Presentation (RSVP) with flankers paradigm so that words were perceived in parafoveal and foveal vision. We orthogonally manipulated whether the target word was masked in parafoveal and/or foveal vision to dissociate the processing associated with perception of the target word from either location. We found that the N400 effect was generated from parafoveally perceived words, and was reduced for foveally perceived words if they were previously perceived parafoveally. In contrast, the LPC effect was only elicited if the word was perceived foveally, suggesting that readers must attend to a word directly in foveal vision in order to attempt to integrate its meaning into the sentence context. |
Sebastian Schneegans; Jessica M. V. McMaster; Paul M. Bays Role of time in binding features in visual working memory Journal Article In: Psychological Review, vol. 130, no. 1, pp. 137–154, 2023. @article{Schneegans2023, Previous research on feature binding in visual working memory has supported a privileged role for location in binding an object's nonspatial features. However, humans are able to correctly recall feature conjunctions of objects that occupy the same location at different times. In a series of behavioral experiments, we investigated binding errors under these conditions, and specifically tested whether ordinal position can take the role of location in mediating feature binding. We performed two dual report experiments in which participants had to memorize three colored shapes presented sequentially at the screen center. When participants were cued with the ordinal position of one item and had to report its shape and color, report errors for the two features were largely uncorrelated. In contrast, when participants were cued, for example, with an item's shape and reported an incorrect ordinal position, they had a high chance of making a corresponding error in the color report. This pattern of error correlations closely matched the predictions of a model in which color and shape are bound to each other only indirectly via an item's ordinal position. In a third experiment, we directly compared the roles of location and sequential position in feature binding. Participants viewed a sequence of colored disks displayed at different locations and were cued either by a disk's location or its ordinal position to report its remaining properties. The pattern of errors supported a mixed strategy with individual variation, suggesting that binding via either time or space could be used for this task. |
Daniel Schmidtke; Sadaf Rahmanian; Anna L. Moro Tracking reading development in an English language university-level bridging program: evidence from eye-movements during passage reading Journal Article In: Bilingualism: Language and Cognition, vol. 26, pp. 356–370, 2023. @article{Schmidtke2023, Increasing numbers of international students enter university education via English language bridging programs. Much research has overlooked the nature of second language reading development during a bridging program, focusing instead on the development of literacy skills of international students who already meet the language requirement for undergraduate admission. We report a longitudinal eye-movement study assessing English passage reading efficiency and comprehension in 405 Chinese-speaking bridging program students. Incoming IELTS reading scores were used as an index of baseline reading ability. Linear mixed-effects regression models fitted to global eye-movement measures and reading comprehension indicated that despite initial between-subjects differences, within-subject change at each ability level progressed at the same rate, following parallel growth trajectories. Therefore, there was significant overall reading progress during the bridging program, but no evidence that the gap between low and high ability readers either closed or widened over time. |
Tim Schilling; Mojtaba Soltanlou; Hans-Christoph Nuerk; Hamed Bahmani Blue-light stimulation of the blind-spot constricts the pupil and enhances contrast sensitivity Journal Article In: PLoS ONE, vol. 18, pp. 1–11, 2023. @article{Schilling2023, Short- and long-wavelength light can alter pupillary responses differently, allowing inferences to be made about the contribution of different photoreceptors to pupillary constriction. In addition to classical retinal photoreceptors, the pupillary light response is formed by the activity of melanopsin-expressing intrinsically photosensitive retinal ganglion cells (ipRGC). It has been shown in rodents that melanopsin is expressed in the axons of ipRGCs that bundle at the optic nerve head, which forms the perceptual blind-spot. Hence, the first aim of this study was to investigate if blind-spot stimulation induces a pupillary response. The second aim was to investigate the effect of blind-spot stimulation using contrast sensitivity tests. Fifteen individuals participated in the pupil response experiment and thirty-two individuals in the contrast sensitivity experiment. The pupillary change was quantified using the post-illumination pupil response (PIPR) amplitudes after blue-light (experimental condition) and red-light (control condition) pulses in the time window between 2 s and 6 s post-illumination. Contrast sensitivity was assessed using two different tests: the Freiburg Visual Acuity and Contrast Test and the Tuebingen Contrast Sensitivity Test. Contrast sensitivity was measured before and 20 minutes after binocular blue-light stimulation of the blind-spot at spatial frequencies higher than or equal to 3 cycles per degree (cpd) and at spatial frequencies lower than 3 cpd (control condition). Blue-light blind-spot stimulation induced a significantly larger PIPR compared to red-light, confirming a melanopsin-mediated pupil response in the blind-spot. 
Furthermore, contrast sensitivity was increased after blind-spot stimulation, confirmed by both contrast sensitivity tests. Only spatial frequencies of at least 3 cpd were enhanced. This study demonstrates that stimulating the blind-spot with blue-light constricts the pupil and increases the contrast sensitivity at higher spatial frequencies. |
Michael Paul Schallmo; Kimberly B. Weldon; Rohit S. Kamath; Hannah R. Moser; Samantha A. Montoya; Kyle W. Killebrew; Caroline Demro; Andrea N. Grant; Małgorzata Marjańska; Scott R. Sponheim; Cheryl A. Olman The psychosis human connectome project: Design and rationale for studies of visual neurophysiology Journal Article In: NeuroImage, vol. 272, pp. 1–20, 2023. @article{Schallmo2023, Visual perception is abnormal in psychotic disorders such as schizophrenia. In addition to hallucinations, laboratory tests show differences in fundamental visual processes including contrast sensitivity, center-surround interactions, and perceptual organization. A number of hypotheses have been proposed to explain visual dysfunction in psychotic disorders, including an imbalance between excitation and inhibition. However, the precise neural basis of abnormal visual perception in people with psychotic psychopathology (PwPP) remains unknown. Here, we describe the behavioral and 7 tesla MRI methods we used to interrogate visual neurophysiology in PwPP as part of the Psychosis Human Connectome Project (HCP). In addition to PwPP (n = 66) and healthy controls (n = 43), we also recruited first-degree biological relatives (n = 44) in order to examine the role of genetic liability for psychosis in visual perception. Our visual tasks were designed to assess fundamental visual processes in PwPP, whereas MR spectroscopy enabled us to examine neurochemistry, including excitatory and inhibitory markers. We show that it is feasible to collect high-quality data across multiple psychophysical, functional MRI, and MR spectroscopy experiments with a sizable number of participants at a single research site. These data, in addition to those from our previously described 3 tesla experiments, will be made publicly available in order to facilitate further investigations by other research groups. 
By combining visual neuroscience techniques and HCP brain imaging methods, our experiments offer new opportunities to investigate the neural basis of abnormal visual perception in PwPP. |
Jonathan Schaffner; Sherry Dongqi Bao; Philippe N. Tobler; Todd A. Hare; Rafael Polania Sensory perception relies on fitness-maximizing codes Journal Article In: Nature Human Behaviour, vol. 7, no. 7, pp. 1135–1151, 2023. @article{Schaffner2023, Sensory information encoded by humans and other organisms is generally presumed to be as accurate as their biological limitations allow. However, perhaps counterintuitively, accurate sensory representations may not necessarily maximize the organism's chances of survival. To test this hypothesis, we developed a unified normative framework for fitness-maximizing encoding by combining theoretical insights from neuroscience, computer science, and economics. Behavioural experiments in humans revealed that sensory encoding strategies are flexibly adapted to promote fitness maximization, a result confirmed by deep neural networks with information capacity constraints trained to solve the same task as humans. Moreover, human functional MRI data revealed that novel behavioural goals that rely on object perception induce efficient stimulus representations in early sensory structures. These results suggest that fitness-maximizing rules imposed by the environment are applied at early stages of sensory processing in humans and machines. |
Eugenio Scaliti; Kiri Pullar; Giulia Borghini; Andrea Cavallo; Stefano Panzeri; Cristina Becchio Kinematic priming of action predictions Journal Article In: Current Biology, vol. 33, no. 13, pp. 2717–2727, 2023. @article{Scaliti2023, The ability to anticipate what others will do next is crucial for navigating social, interactive environments. Here, we develop an experimental and analytical framework to measure the implicit readout of prospective intention information from movement kinematics. Using a primed action categorization task, we first demonstrate implicit access to intention information by establishing a novel form of priming, which we term kinematic priming: subtle differences in movement kinematics prime action prediction. Next, using data collected from the same participants in a forced-choice intention discrimination task 1 h later, we quantify single-trial intention readout—the amount of intention information read by individual perceivers in individual kinematic primes—and assess whether it can be used to predict the amount of kinematic priming. We demonstrate that the amount of kinematic priming, as indexed by both response times (RTs) and initial fixations to a given probe, is directly proportional to the amount of intention information read by the individual perceiver at the single-trial level. These results demonstrate that human perceivers have rapid, implicit access to intention information encoded in movement kinematics and highlight the potential of our approach to reveal the computations that permit the readout of this information with single-subject, single-trial resolution. |
Ceyda Sayalı; Emma Heling; Roshan Cools Learning progress mediates the link between cognitive effort and task engagement Journal Article In: Cognition, vol. 236, pp. 1–11, 2023. @article{Sayali2023, While a substantial body of work has shown that cognitive effort is aversive and costly, a separate line of research on intrinsic motivation suggests that people spontaneously seek challenging tasks. According to one prominent account of intrinsic motivation, the learning progress motivation hypothesis, the preference for difficult tasks reflects the dynamic range that these tasks yield for changes in task performance (Kaplan & Oudeyer, 2007). Here we test this hypothesis by asking whether greater engagement with intermediately difficult tasks, indexed by subjective ratings and objective pupil measurements, is a function of trial-wise changes in performance. In a novel paradigm, we determined each individual's capacity for task performance and used difficulty levels that were low, intermediately challenging, or high for that individual. We demonstrated that challenging tasks resulted in greater liking and engagement scores compared with easy tasks. Pupil size tracked objective task difficulty, where challenging tasks were associated with greater pupil responses than easy tasks. Most importantly, pupil responses were predicted by trial-to-trial changes in average accuracy as well as learning progress (the derivative of average accuracy), while greater pupil responses also predicted greater subjective engagement scores. Together, these results substantiate the learning progress motivation hypothesis, stating that the link between task engagement and cognitive effort is mediated by the dynamic range for changes in task performance. |
Shreshth Saxena; Lauren K. Fink; Elke B. Lange Deep learning models for webcam eye tracking in online experiments Journal Article In: Behavior Research Methods, pp. 1–17, 2023. @article{Saxena2023a, Eye tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle critical challenges faced in remote eye tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own homes and laptops, 65 participants performed a battery of eye tracking tasks including (i) fixation, (ii) zone classification, (iii) free viewing, (iv) smooth pursuit, and (v) blink detection. Webcam recordings of the participants performing these tasks were processed offline through appearance-based models of gaze and blink detection. The task battery required different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We find the best gaze accuracy to be 2.4° and precision of 0.47°, which outperforms previous online eye tracking studies and reduces the gap between laboratory-based and online eye tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye tracking that has the potential to accelerate research in the fields of psychological science, cognitive neuroscience, user experience design, and human–computer interfaces. |
Pankhuri Saxena; Stefan Treue Effect of attention on human direction-discrimination thresholds at iso-eccentric locations in the visual field: A registered report protocol Journal Article In: PLoS ONE, vol. 18, pp. 1–9, 2023. @article{Saxena2023, Human visual perceptual performance is strongly dependent on a given stimulus' distance from the line of sight, i.e. its eccentricity. In addition, multiple studies have shown a dependence on a stimulus' angular position relative to the fovea. In humans, the resulting spatial profile of perceptual performance (the “performance field”) typically shows better performance near the lower vertical meridian, compared to the upper vertical meridian, and better performance near the horizontal meridian compared to the vertical meridian. Predominantly, these variations have been interpreted as sensory inhomogeneities. But it has also been shown that they are modulated by the allocation of spatial attention, either homogeneously elevating performance or compensating for the sensory inhomogeneities. Here, we propose a study protocol for pre-registration to investigate such interactions between sensory and attentional effects. First, we will determine performance fields for time-dependent, dynamic stimuli, namely the direction discrimination of moving random dot patterns. Then, we will establish whether directing focal attention to a particular stimulus location differentially improves thresholds compared to a distributed attention condition. |
Tetsuya Sato; Samia Islam; Jeremiah D. Still; Mark W. Scerbo; Yusuke Yamani Task priority reduces an adverse effect of task load on automation trust in a dynamic multitasking environment Journal Article In: Cognition, Technology and Work, vol. 25, no. 1, pp. 1–13, 2023. @article{Sato2023, The present study examined how task priority influences operators' scanning patterns and trust ratings toward imperfect automation. Previous research demonstrated that participants display lower trust and fixate less frequently toward a visual display for the secondary task assisted with imperfect automation when the primary task demanded more attention. One account for this phenomenon is that the increased primary task demand induced the participants to prioritize the primary task over the secondary task. The present study asked participants to perform a tracking task, a system monitoring task, and a resource management task simultaneously using the Multi-Attribute Task Battery (MATB) II. Automation assisted the system monitoring task with 70% reliability. Task load was manipulated via the difficulty of the tracking task. Participants were explicitly instructed to either prioritize the tracking task over all other tasks (tracking priority condition) or reduce tracking performance (equal priority condition). The results demonstrate the effects of task load on attention distribution, task performance, and trust ratings. Furthermore, participants under the equal priority condition reported lower performance-based trust when the tracking task required more frequent manual input (tracking condition), while no effect of task load was observed under the tracking priority condition. Task priority can modulate automation trust by eliminating the adverse effect of task load in a dynamic multitasking environment. |
Hannah S. Sarvasy; Adam Milton Morgan; Jenny Yu; Victor S. Ferreira; Shota Momma Cross-clause planning in Nungon (Papua New Guinea): Eye-tracking evidence Journal Article In: Memory & Cognition, vol. 51, no. 3, pp. 666–680, 2023. @article{Sarvasy2023, Hundreds of languages worldwide use a sentence structure known as the “clause chain,” in which 20 or more clauses can be stacked to form a sentence. The Papuan language Nungon is among a subset of clause chaining languages that require “switch-reference” suffixes on nonfinal verbs in chains. These suffixes announce whether the subject of each upcoming clause will differ from the subject of the previous clause. We examine two major issues in psycholinguistics: predictive processing in comprehension, and advance planning in production. Whereas previous work on other languages has demonstrated that sentence planning can be incremental, switch-reference marking would seem to prohibit strictly incremental planning, as it requires speakers to plan the next clause before they can finish producing the current one. This suggests an intriguing possibility: planning strategies may be fundamentally different in Nungon. We used a mobile eye-tracker and solar-powered laptops in a remote village in Papua New Guinea to track Nungon speakers' gaze in two experiments: comprehension and production. Curiously, during comprehension, fixation data failed to find evidence that switch-reference marking is used for predictive processing. However, during production, we found evidence for advance planning of switch-reference markers, and, by extension, the subjects they presage. We propose that this degree of advance syntactic planning pushes the boundaries of what is known about sentence planning, drawing on data from a novel morpheme type in an understudied language. |
Florian Sandhaeger; Nina Omejc; Anna Antonia Pape; Markus Siegel Abstract perceptual choice signals during action-linked decisions in the human brain Journal Article In: PLoS Biology, vol. 21, no. 10, pp. 1–27, 2023. @article{Sandhaeger2023, Humans can make abstract choices independent of motor actions. However, in laboratory tasks, choices are typically reported with an associated action. Consequently, knowledge about the neural representation of abstract choices is sparse, and choices are often thought to evolve as motor intentions. Here, we show that in the human brain, perceptual choices are represented in an abstract, motor-independent manner, even when they are directly linked to an action. We measured MEG signals while participants made choices with known or unknown motor response mapping. Using multivariate decoding, we quantified stimulus, perceptual choice, and motor response information with distinct cortical distributions. Choice representations were invariant to whether the response mapping was known during stimulus presentation, and they occupied a distinct representational space from motor signals. As expected from an internal decision variable, they were informed by the stimuli, and their strength predicted decision confidence and accuracy. Our results demonstrate abstract neural choice signals that generalize to action-linked decisions, suggesting a general role of an abstract choice stage in human decision-making. |
Kathryn Nicole Sam; K. Jayasankara Reddy The effect of music and editing style on subjective perception of time when watching videos Journal Article In: Projections, vol. 17, no. 2, pp. 41–61, 2023. @article{Sam2023, Arousal, editing style, and eye movements have been implicated in time perception when watching videos. However, little multimodal research has explored how manipulating both the auditory and visual properties of videos affects temporal processing. This study investigated how editing density and music-induced arousal affect viewers' time perception. Thirty-nine participants watched six videos varying in editing density and music while their eye movements were recorded. They estimated the videos' duration and reported their subjective experience of time passage and emotional involvement. Fast-paced editing was associated with the feeling of time passing faster, a relationship mediated by fixation durations. High-arousal background music was also associated with the feeling of time passing faster. The consequences of this study in terms of a possible auditory driving effect are explored. |
Atena Sajedin; Sina Salehi; Hossein Esteky Information content and temporal structure of face selective local field potentials frequency bands in IT cortex Journal Article In: Cerebral Cortex, pp. 1–12, 2023. @article{Sajedin2023, Sensory stimulation triggers synchronized bioelectrical activity in the brain across various frequencies. This study delves into network-level activities, specifically focusing on local field potentials as a neural signature of visual category representation. Specifically, we studied the role of different local field potential frequency oscillation bands in visual stimulus category representation by presenting images of faces and objects to three monkeys while recording local field potential from inferior temporal cortex. We found category selective local field potential responses mainly for animate, but not inanimate, objects. Notably, face-selective local field potential responses were evident across all tested frequency bands, manifesting in both enhanced (above mean baseline activity) and suppressed (below mean baseline activity) local field potential powers. We observed four different local field potential response profiles based on frequency bands and face selective excitatory and suppressive responses. Low-frequency local field potential bands (1–30 Hz) were more predominantly suppressed by face stimulation than the high-frequency (30–170 Hz) local field potential bands. Furthermore, the low-frequency local field potentials conveyed less face category information than the high-frequency local field potentials in both enhanced and suppressed conditions. In addition, we observed a negative correlation between face/object d-prime values in all the tested local field potential frequency bands and the anterior–posterior position of the recording sites. 
In addition, the power of low-frequency local field potentials systematically declined across inferior temporal anterior–posterior positions, whereas high-frequency local field potentials did not exhibit such a pattern. In general, for most of the above-mentioned findings, somewhat similar results were observed for the body category, but not for other stimulus categories. The observed findings suggest that a balance of face selective excitation and inhibition across time and cortical space shapes face category selectivity in inferior temporal cortex. |
Muhammet Ikbal Sahan; Roma Siugzdaite; Sebastiaan Mathôt; Wim Fias Attention-based rehearsal: Eye movements reveal how visuospatial information is maintained in working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–12, 2023. @article{Sahan2023, The human eye scans visual information through scan paths, series of fixations. Analogous to these scan paths during the process of actual “seeing,” we investigated whether similar scan paths are also observed while subjects are “rehearsing” stimuli in visuospatial working memory. Participants performed a continuous recall task in which they rehearsed the precise location and color of three serially presented discs during a retention interval, and later reproduced either the precise location or the color of a single probed item. In two experiments, we varied the direction along which the items were presented and investigated whether scan paths during rehearsal followed the pattern of stimulus presentation during encoding (left-to-right in Experiment 1; left-to-right/right-to-left in Experiment 2). In both experiments, we confirmed that the eyes follow similar scan paths during encoding and rehearsal. Specifically, we observed that during rehearsal participants refixated the memorized locations they saw during encoding. Most interestingly, the precision with which these locations were refixated was associated with smaller recall errors. Assuming that eye position reflects the focus of attention, our findings suggest a functional contribution of spatial attention shifts to working memory and are in line with the hypothesis that maintenance of information in visuospatial working memory is supported by attention-based rehearsal. |
Nuria Sagarra; Joseph V. Casillas Practice beats age: Co-activation shapes heritage speakers' lexical access more than age of onset Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–18, 2023. @article{Sagarra2023, Probabilistic associations make language processing efficient and are honed through experience. However, it is unclear what language experience factors explain the non-monolingual processing behaviors typical of L2 learners and heritage speakers (HSs). We investigated whether AoO, language proficiency, and language use affect the recognition of Spanish stress-tense suffix associations involving a stressed syllable that cues a present suffix (SALta “s/he jumps”) and an unstressed syllable that cues a past suffix (SALtó “s/he jumped”). Adult Spanish-English HSs, English-Spanish L2 learners, and Spanish monolinguals saw a paroxytone verb (stressed initial syllable) and an oxytone verb (unstressed initial syllable), listened to a sentence containing one of the verbs, and chose the one they heard. Spanish proficiency measured grammatical and lexical knowledge, and Spanish use assessed percentage of current usage. Both bilingual groups were comparable in Spanish proficiency and use. Eye-tracking data showed that all groups fixated on target verbs above chance before hearing the syllable containing the suffix, except the HSs in the oxytones. Monolinguals fixated on targets more and earlier, although at a slower rate, than HSs and L2 learners; in turn, HSs fixated on targets more and earlier than L2 learners, except in oxytones. Higher proficiency increased target fixations in HSs (oxytones) and L2 learners (paroxytones), but greater use only increased target fixations in HSs (oxytones). Taken together, our data show that HSs' lexical access depends more on number of lexical competitors (co-activation of two L1 lexica) and type (phonotactic) frequency than token (lexical) frequency or AoO. 
We discuss the contribution of these findings to models in phonology, lexical access, language processing, language prediction, and human cognition. |
Elizabeth M. Sachse; Adam C. Snyder Dynamic attention signalling in V4: Relation to fast-spiking/non-fast-spiking cell class and population coupling Journal Article In: European Journal of Neuroscience, vol. 57, no. 6, pp. 918–939, 2023. @article{Sachse2023, The computational role of a neuron during attention depends on its firing properties, neurotransmitter expression and functional connectivity. Neurons in the visual cortical area V4 are reliably engaged by selective attention but exhibit diversity in the effect of attention on firing rates and correlated variability. It remains unclear what specific neuronal properties shape these attention effects. In this study, we quantitatively characterised the distribution of attention modulation of firing rates across populations of V4 neurons. Neurons exhibited a continuum of time-varying attention effects. At one end of the continuum, neurons' spontaneous firing rates were slightly depressed with attention (compared to when unattended), whereas their stimulus responses were enhanced with attention. The other end of the continuum showed the converse pattern: attention depressed stimulus responses but increased spontaneous activity. We tested whether the particular pattern of time-varying attention effects that a neuron exhibited was related to the shape of their action potentials (so-called ‘fast-spiking' [FS] neurons have been linked to inhibition) and the strength of their coupling to the overall population. We found an interdependence among neural attention effects, neuron type and population coupling. In particular, we found neurons for which attention enhanced spontaneous activity but suppressed stimulus responses were less likely to be fast-spiking (more likely to be non-fast-spiking) and tended to have stronger population coupling, compared to neurons with other types of attention effects. These results add important information to our understanding of visual attention circuits at the cellular level. |
Satu Saalasti; Jussi Alho; Juha M. Lahnakoski; Mareike Bacha-Trams; Enrico Glerean; Iiro P. Jääskeläinen; Uri Hasson; Mikko Sams Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading Journal Article In: Brain and Behavior, vol. 13, no. 2, pp. 1–17, 2023. @article{Saalasti2023, Introduction: Few of us are skilled lipreaders while most struggle with the task. Neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. Methods: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6–100% |
Juliette Ryan-Lortie; Gabriel Pelletier; Matthew Pilgrim; Lesley K. Fellows Gaze differences in configural and elemental evaluation during multi-attribute decision-making Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–10, 2023. @article{RyanLortie2023, Introduction: While many everyday choices are between multi-attribute options, how attribute values are integrated to allow such choices remains unclear. Recent findings suggest a distinction between elemental (attribute-by-attribute) and configural (holistic) evaluation of multi-attribute options, with different neural substrates. Here, we asked if there are behavioral or gaze pattern differences between these putatively distinct modes of multi-attribute decision-making. Methods: Thirty-nine healthy men and women learned the monetary values of novel multi-attribute pseudo-objects (fribbles) and then made choices between pairs of these objects while eye movements were tracked. Value was associated with individual attributes in the elemental condition, and with unique combinations of attributes in the configural condition. Choice, reaction time, gaze fixation time on options and individual attributes, and within- and between-option gaze transitions were recorded. Results: There were systematic behavioral differences between elemental and configural conditions. Elemental trials had longer reaction times and more between-option transitions, while configural trials had more within-option transitions. The effect of last fixation on choice was more pronounced in the configural condition. Discussion: We observed differences in gaze patterns and the influence of last fixation location on choice in multi-attribute value-based choices depending on how value is associated with those attributes. This adds support for the claim that multi-attribute option values may emerge either elementally or holistically, reminiscent of similar distinctions in multi-attribute object recognition. 
This may be important to consider in neuroeconomics research that involves visually presented complex objects. |
Brian E. Russ; Kenji W. Koyano; Julian Day-Cooney; Neda Perwez; David A. Leopold Temporal continuity shapes visual responses of macaque face patch neurons Journal Article In: Neuron, vol. 111, no. 6, pp. 903–914, 2023. @article{Russ2023, Macaque inferior temporal cortex neurons respond selectively to complex visual images, with recent work showing that they are also entrained reliably by the evolving content of natural movies. To what extent does visual continuity itself shape the responses of high-level visual neurons? We addressed this question by measuring how cells in face-selective regions of the macaque temporal cortex were affected by the manipulation of a movie's temporal structure. Sampling the movie at 1s intervals, we measured neural responses to randomized, brief stimuli of different lengths, ranging from 800 ms dynamic movie snippets to 100 ms static frames. We found that the disruption of temporal continuity strongly altered neural response profiles, particularly in the early onset response period of the randomized stimulus. The results suggest that models of visual system function based on discrete and randomized visual presentations may not translate well to the brain's natural modes of operation. |
Annie Roy-Charland; Victoria Foglia; Karolyn Cloutier; Emalie Hendel; Marie Pier Mazerolle The effect of instructions and response format on smile judgement Journal Article In: Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, vol. 77, no. 4, pp. 308–318, 2023. @article{RoyCharland2023, Our study examined the role of instructions, response type, and definition on the judgement of enjoyment and nonenjoyment smiles. Participants viewed symmetric Duchenne, non-Duchenne, and asymmetric smiles. They were instructed to judge the happiness, authenticity, and sincerity of the smiles using either Likert scales or a dichotomous response type. Participants were also either given a definition of the instruction words "happy," "authentic," and "sincere" or not. Results showed that the probability of saying "really (happy/sincere/authentic)" was higher for the symmetric Duchenne than the asymmetric smiles and higher for the asymmetric than non-Duchenne smiles. Changing the instructions given to participants did not override the effect of smile type with the use of a Likert scale or a dichotomous response. However, with the use of a Likert scale, we observed subtleties that were not observed with the use of a dichotomous response. When given a definition, in the case of symmetric non-Duchenne smiles, Likert ratings were significantly lower, and participants were more accurate in their judgement on the dichotomous scale. However, no differences were observed for the asymmetric Duchenne and symmetric Duchenne smiles whether a definition was given or not. Symmetric non-Duchenne and asymmetric Duchenne smiles were also viewed longer when a definition was given than when one was not. Nevertheless, given that the methodological variations of our study failed to explain the variations in the pattern of results of previous studies, other avenues should be explored, such as the use of dynamic stimuli and a greater variety of encoders. |
Cristina Rovira-Gay; Clara Mestre; Marc Argiles; Valldeflors Vinuela-Navarro; Jaume Pujol Feasibility of measuring fusional vergence amplitudes objectively Journal Article In: PLoS ONE, vol. 18, pp. 1–14, 2023. @article{RoviraGay2023, Two tests to measure fusional vergence amplitudes objectively were developed and validated against the two conventional clinical tests. Forty-nine adults participated in the study. Participants' negative (BI, base in) and positive (BO, base out) fusional vergence amplitudes at near were measured objectively in a haploscopic set-up by recording eye movements with an EyeLink 1000 Plus (SR Research). Stimulus disparity changed in steps or smoothly, mimicking a prism bar and a Risley prism, respectively. Break and recovery points were determined offline using a custom Matlab algorithm for the analysis of eye movements. Fusional vergence amplitudes were also measured with two clinical tests using a Risley prism and a prism bar. A better agreement between tests was found for the measurement of BI than for BO fusional vergence amplitudes. The means ± SD of the differences between the BI break and recovery points measured with the two objective tests were -1.74 ± 3.35 PD and -1.97 ± 2.60 PD, respectively, which were comparable to those obtained for the subjective tests. For the BO break and recovery points, although the means of the differences between the two objective tests were small, high variability between subjects was found (0.31 ± 6.44 PD and -2.84 ± 7.01 PD, respectively). This study showed the feasibility of measuring fusional vergence amplitudes objectively and of overcoming limitations of the conventional subjective tests. However, these tests cannot be used interchangeably due to their poor agreement. |
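The agreement statistics reported in the Rovira-Gay et al. abstract (mean ± SD of the differences between two tests, in prism diopters) are the core quantities of a Bland-Altman-style comparison. A minimal sketch of that computation, with hypothetical break-point values and function names not taken from the paper:

```python
# Illustrative sketch (not the authors' code): summarize agreement between
# two tests measuring the same fusional vergence amplitude as the mean and
# SD of the paired differences, in prism diopters (PD). All values below
# are hypothetical.
import statistics

def paired_difference_stats(method_a, method_b):
    """Return (mean, sample SD) of the paired differences a - b."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical BI break points (PD) from two objective tests
step_test = [10, 12, 8, 11, 9]
smooth_test = [12, 13, 10, 12, 12]

mean_diff, sd_diff = paired_difference_stats(step_test, smooth_test)
print(f"mean difference = {mean_diff:.2f} PD, SD = {sd_diff:.2f} PD")
```

A small mean difference with a large SD, as the abstract reports for the BO measurements, indicates the two tests agree on average but not for individual subjects, which is why they cannot be used interchangeably.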
Alireza Rouzitalab; Chadwick B. Boulay; Jeongwon Park; Julio C. Martinez-Trujillo; Adam J. Sachs Ensembles code for associative learning in the primate lateral prefrontal cortex Journal Article In: Cell Reports, vol. 42, no. 5, pp. 1–16, 2023. @article{Rouzitalab2023, The lateral prefrontal cortex (LPFC) of primates is thought to play a role in associative learning. However, it remains unclear how LPFC neuronal ensembles dynamically encode and store memories for arbitrary stimulus-response associations. We recorded the activity of neurons in LPFC of two macaques during an associative learning task using multielectrode arrays. During task trials, the color of a symbolic cue indicated the location of one of two possible targets for a saccade. During a trial block, multiple randomly chosen associations were learned by the subjects. A state-space analysis indicated that LPFC neuronal ensembles rapidly learn new stimulus-response associations mirroring the animals' learning. Multiple associations acquired during training are stored in a neuronal subspace and can be retrieved hours after learning. Finally, knowledge of old associations facilitates learning new, similar associations. These results indicate that neuronal ensembles in the primate LPFC provide a flexible and dynamic substrate for associative learning. |
Tevin C. Rouse; Amy M. Ni; Chengcheng Huang; Marlene R. Cohen Topological insights into the neural basis of flexible behavior Journal Article In: Proceedings of the National Academy of Sciences of the United States of America, vol. 120, no. 24, pp. 1–11, 2023. @article{Rouse2023, It is widely accepted that there is an inextricable link between neural computations, biological mechanisms, and behavior, but it is challenging to simultaneously relate all three. Here, we show that topological data analysis (TDA) provides an important bridge between these approaches to studying how brains mediate behavior. We demonstrate that cognitive processes change the topological description of the shared activity of populations of visual neurons. These topological changes constrain and distinguish between competing mechanistic models, are connected to subjects' performance on a visual change detection task, and, via a link with network control theory, reveal a tradeoff between improving sensitivity to subtle visual stimulus changes and increasing the chance that the subject will stray off task. These connections provide a blueprint for using TDA to uncover the biological and computational mechanisms by which cognition affects behavior in health and disease. |
Mylène Ross-Plourde; Mylène Lachance-Grzela; Andréanne Charbonneau; Mylène Dumont; Annie Roy-Charland Parental stereotypes and cognitive processes: Evidence for a double standard in parenting roles when reading texts Journal Article In: Journal of Gender Studies, vol. 32, no. 1, pp. 74–82, 2023. @article{RossPlourde2023, While the characteristics associated with fathers have recently taken on more maternal traits, a similar shift has not been observed for maternal characteristics. The role of mother remains stereotyped, and those who do not adhere to this often face criticism. This study examines the impact of parental stereotypes on the cognitive processes associated with reading. A sample of 32 individuals read 24 experimental passages introducing a parent (mother or father) in a traditional or non-traditional role, and in a neutral or disambiguating context. Results show a significant interaction between the type of role and gender of the parent on reading times. Simple main effect tests revealed that for traditional roles, fixation durations were longer when the protagonist was a father than when the protagonist was a mother. There was no effect of role type for fathers, yet for mothers, fixation durations were longer when they were depicted in non-traditional roles than when they were depicted in traditional roles. This disruption of information processing of schema-incongruent content suggests that mothers' parenting stereotypes remain anchored in society and are more rigid than those of fathers, supporting the idea of a double standard in parenting roles. |
Camilo R. Ronderos; Ernesto Guerra; Pia Knoeferle When sequence matters: The processing of contextually biased German verb–object metaphors Journal Article In: Language and Cognition, vol. 15, no. 1, pp. 1–28, 2023. @article{Ronderos2023, Several studies have investigated the comprehension of decontextualized English nominal metaphors. However, not much is known about how contextualized, non-nominal, non-English metaphors are processed, and how this might inform existing theories of metaphor comprehension. In the current work, we investigate the effects of context and of sequential order for an under-studied type of construction: German verb–object metaphors. In two visual-world, eye-tracking experiments, we manipulated whether a discourse context biased a spoken target utterance toward a metaphoric or a literal interpretation. We also manipulated the order of verb and object in the target utterances (e.g., Stefan interviewt eine Hyäne, ‘Stefan interviews a hyena', verb→object; and Stefan wird eine Hyäne interviewen, ‘Stefan will a hyena interview', object→verb). Experiment 1 shows that contextual cues interacted with sequential order, mediating the processing of verb–object metaphors: When the context biased toward a metaphoric interpretation, participants readily understood the object metaphorically for the verb→object sequence, whereas they likely first understood it literally for the object→verb sequence. Crucially, no such effect of sequential order was found when context biased toward a literal interpretation. Experiment 2 suggests that differences in processing found in Experiment 1 were brought on by the interaction of discourse context and sequential order and not by sequential order alone. We propose ways in which existing theoretical views could be extended to account for these findings. 
Overall, our study shows the importance of context during figurative language comprehension and highlights the need to test the predictions of metaphor theories on non-English and non-nominal metaphors. |
F. Rojas-Thomas; C. Artigas; G. Wainstein; Juan Pablo Morales; M. Arriagada; D. Soto; A. Dagnino-Subiabre; J. Silva; V. Lopez Impact of acute psychosocial stress on attentional control in humans. A study of evoked potentials and pupillary response Journal Article In: Neurobiology of Stress, vol. 25, pp. 1–16, 2023. @article{RojasThomas2023, Psychosocial stress has increased considerably in our modern lifestyle, affecting global mental health. Deficits in attentional control are cardinal features of stress disorders and pathological anxiety. Studies suggest that changes in the locus coeruleus-norepinephrine system could underlie the effects of stress on top-down attentional control. However, the impact of psychosocial stress on attentional processes and its underlying neural mechanisms are poorly understood. This study aims to investigate the effect of psychosocial stress on attentional processing and brain signatures. Evoked potentials and pupillary activity related to the oddball auditory paradigm were recorded before and after applying the Montreal Imaging Stress Task (MIST). Electrocardiogram (ECG), salivary cortisol, and subjective anxiety/stress levels were measured at different experimental periods. The control group experienced the same physical and cognitive effort but without the psychosocial stress component. The results showed that stressed subjects exhibited decreased P3a and P3b amplitude, pupil phasic response, and correct responses. On the other hand, they displayed an increase in Mismatch Negativity (MMN). N1 amplitude after MIST only decreased in the control group. We found that differences in P3b amplitude between the first and second oddball were significantly correlated with pupillary dilation and salivary cortisol levels. Our results suggest that under social-evaluative threat, basal activity of the locus coeruleus-norepinephrine system increases, enhancing alertness and decreasing voluntary attentional resources for the cognitive task. 
These findings contribute to understanding the neurobiological basis of attentional changes in pathologies associated with chronic psychosocial stress. |
Claudia Rodriguez-Sobstel; Shannon Wake; Helen Dodd; Eugene McSorley; Carien M. van Reekum; Jayne Morriss Shifty eyes: The impact of intolerance of uncertainty on gaze behaviour during threat conditioning Journal Article In: Collabra: Psychology, vol. 9, no. 1, pp. 1–15, 2023. @article{RodriguezSobstel2023, Previous research has demonstrated that individuals with high levels of Intolerance of Uncertainty (IU) have difficulty updating threat associations to safety associations. Notably, prior research has focused on measuring IU-related differences in threat and safety learning using arousal-based measures such as skin conductance response. Here we assessed whether IU-related differences in threat and safety learning could be captured using eye-tracking metrics linked with gaze behaviours such as dwelling and scanning. Participants (N = 144) completed self-report questionnaires assessing levels of IU and trait anxiety. Eye movements were then recorded during each conditioning phase: acquisition, extinction learning, and extinction retention. Fixation count and fixation duration served as indices of conditioned responding. Patterns of threat and safety learning typically reported for physiology and self-report were observed for the fixation count and fixation duration metrics during acquisition and to some extent in extinction learning, but not for extinction retention. There was little evidence for specific associations between IU and disrupted safety learning (e.g., greater differential responses to the threat vs. safe cues during extinction learning and retention). While there was tentative evidence that IU was associated with shorter fixation durations (e.g., scanning) to threat vs. safe cues during extinction retention, this effect did not remain after controlling for trait anxiety. 
IU and trait anxiety similarly predicted greater fixation count and shorter fixation durations overall during extinction learning, and greater fixation count overall during extinction retention. IU further predicted shorter fixation durations overall during extinction retention. However, the only IU-based effect that remained significant after controlling for trait anxiety was that of fixation duration overall during threat extinction learning. Our results inform models of anxiety, particularly in relation to how individual differences modulate gaze behaviour during threat conditioning. |
Nadira Yusif Rodriguez; Theresa H. McKim; Debaleena Basu; Aarit Ahuja; Theresa M. Desrochers Monkey dorsolateral prefrontal cortex represents abstract visual sequences during a no-report task Journal Article In: Journal of Neuroscience, vol. 43, no. 15, pp. 2741–2755, 2023. @article{Rodriguez2023, Monitoring sequential information is an essential component of our daily lives. Many of these sequences are abstract, in that they do not depend on the individual stimuli, but do depend on an ordered set of rules (e.g., chop then stir when cooking). Despite the ubiquity and utility of abstract sequential monitoring, little is known about its neural mechanisms. Human rostrolateral prefrontal cortex (RLPFC) exhibits specific increases in neural activity (i.e., “ramping”) during abstract sequences. Monkey dorsolateral prefrontal cortex (DLPFC) has been shown to represent sequential information in motor (not abstract) sequence tasks, and contains a subregion, area 46, with homologous functional connectivity to human RLPFC. To test the prediction that area 46 may represent abstract sequence information, and do so with parallel dynamics to those found in humans, we conducted functional magnetic resonance imaging (fMRI) in three male monkeys. When monkeys performed no-report abstract sequence viewing, we found that left and right area 46 responded to abstract sequential changes. Interestingly, responses to rule and number changes overlapped in right area 46 and left area 46 exhibited responses to abstract sequence rules with changes in ramping activation, similar to that observed in humans. Together, these results indicate that monkey DLPFC monitors abstract visual sequential information, potentially with a preference for different dynamics in the two hemispheres. More generally, these results show that abstract sequences are represented in functionally homologous regions across monkeys and humans. |
Helen Rodger; Nayla Sokhn; Junpeng Lao; Yingdi Liu; Roberto Caldara Developmental eye movement strategies for decoding facial expressions of emotion Journal Article In: Journal of Experimental Child Psychology, vol. 229, pp. 1–23, 2023. @article{Rodger2023, In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six “basic emotions” to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. 
Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations. |
Scott Roberts; Peter R. Kufahl; Rebecca J. Ryznar; Taylor Norris; Sagar Patel; K. Dean Gubler; Dean Paz; Greg Schwimer; Richard Besserman; Anthony J. LaPorta Start-of-day oculomotor screening demonstrates the effects of fatigue and rest during a total immersion training program Journal Article In: Surgery, vol. 174, no. 5, pp. 1193–1200, 2023. @article{Roberts2023, Background: Investigating changes in sleep and fatigue metrics during intensive surgical and trauma skills training, this study explored the dynamic association between oculomotor metrics and fatigue. Specifically, alterations in these relations over extended stress exposure, the influence of time of day, and the impact of fatigue exposure on sleep metrics were examined. Methods: Thirty-nine military medical students participated in 6 days of immersion, hyper-realistic, and high-stress experiential casualty training. Participants completed surveys assessing the state of sleepiness with oculomotor tests performed each morning and evening, analyzing eye movement and pupillary change to characterize fatigue. Participants wore Fitbit™ devices to measure overall time asleep and time in each sleep stage during the training. Results: Fitbit data showed increased average minutes in rapid eye movement, deep sleep, and less time in light sleep from day 1 to day 4. The microsaccade peak velocity-to-displacement ratio exhibited a morning decrease but not in afternoon sessions, indicating repeated but temporary effects of accumulated fatigue. There were no findings regarding pupil reactivity to illumination changes. Conclusion: This study describes characteristics of fatigue measured by rapid and individually calibrated oculomotor tests. It demonstrates oculomotor relationships to fatigue in start-of-day testing, providing a direction for timing for optimal fatigue testing. These data suggest that improved sleep could signal resilience to fatigue during afternoon testing. 
Further investigation with more participants and longer duration is warranted. A deeper understanding of the interrelationships between training, sleep, and fatigue could improve surgical and military fitness. |
Miriam Rivero-Contreras; Paul E. Engelhardt; David Saldaña Do easy-to-read adaptations really facilitate sentence processing for adults with a lower level of education? An experimental eye-tracking study Journal Article In: Learning and Instruction, vol. 84, pp. 1–13, 2023. @article{RiveroContreras2023a, The Easy-to-Read guidelines recommend visual support and lexical simplification to facilitate text processing, but few studies have empirically verified the efficacy of these guidelines. This study examined the influence of these recommendations on sentence processing by examining eye movements at the text- and word-level in adult readers. We tested 30 non-university adults (low education level) and 30 university adults (high education level). The experimental task consisted of 60 sentences. Half were accompanied by an image and half were not, and half contained a low-frequency word and half a high-frequency word. Results showed that visual support and lexical simplification facilitated processing in both groups of adults, and non-university adults were significantly slower than university adults at sentence processing. However, lexical simplification resulted in faster processing in the non-university adults' group. Conclusions focus on the mechanisms in which both adaptations benefit readers, and practical implications for reading comprehension. |
Miriam Rivero-Contreras; Paul E. Engelhardt; Pablo Delgado; David Saldaña Does the timing of visual support affect sentence comprehension? An eye-tracking study Journal Article In: Journal of Experimental Education, pp. 1–20, 2023. @article{RiveroContreras2023, Purpose: Recent research suggests that visual elements improve sentence processing for students, even at the university level. However, few studies have systematically examined the timing of visual support in reading. Method: We examined the impact of visual support and its timing on sentence comprehension in a sample of 40 typically developing university students. Across 60 sentences, half with images and half without, participants either viewed images simultaneously with sentences or before sentences. Word frequency was also manipulated. Results: Results showed that visual support facilitated sentence processing and that participants who viewed images before sentences exhibited a lower probability of regressions. Conclusion: In conclusion, incorporating images with text can benefit language comprehension. Moreover, the results suggest implications regarding the timing of visual support. |
Elizabeth Riley; Hamid Turker; Dongliang Wang; Khena M. Swallow; Adam K. Anderson; Eve De Rosa Nonlinear changes in pupillary attentional orienting responses across the lifespan Journal Article In: GeroScience, pp. 1–17, 2023. @article{Riley2023, The cognitive aging process is not necessarily linear. Central task-evoked pupillary responses, representing a brainstem-pupil relationship, may vary across the lifespan. Thus we examined, in 75 adults ranging in age from 19 to 86, whether task-evoked pupillary responses to an attention task may serve as an index of cognitive aging. This is because the locus coeruleus (LC), located in the brainstem, is not only among the earliest sites of degeneration in pathological aging, but also supports both attentional and pupillary behaviors. We assessed brief, task-evoked phasic attentional orienting to behaviorally relevant and irrelevant auditory tones, stimuli known specifically to recruit the LC in the brainstem and evoke pupillary responses. Due to potential nonlinear changes across the lifespan, we used a novel data-driven analysis on 6 dynamic pupillary behaviors on 10% of the data to reveal cutoff points that best characterized the three age bands: young (19–41 years old), middle aged (42–68 years old), and older adults (69 + years old). Follow-up analyses on independent data, the remaining 90%, revealed age-related changes such as monotonic decreases in tonic pupillary diameter and dynamic range, along with curvilinear phasic pupillary responses to the behaviorally relevant target events, increasing in the middle-aged group and then decreasing in the older group. Additionally, the older group showed decreased differentiation of pupillary responses between target and distractor events. This pattern is consistent with potential compensatory LC activity in midlife that is diminished in old age, resulting in decreased adaptive gain. 
Beyond regulating responses to light, pupillary dynamics reveal a nonlinear capacity for neurally mediated gain across the lifespan, thus providing evidence in support of the LC adaptive gain hypothesis. |
Stephanie Rich; Jesse A. Harris Global expectations mediate local constraint: Evidence from concessive structures Journal Article In: Language, Cognition and Neuroscience, vol. 38, no. 3, pp. 302–327, 2023. @article{Rich2023, Numerous studies have found facilitation for lexical processing in highly constraining contexts. However, less is known about cases in which immediately preceding (local) and broader (global) contextual constraint conflict. In two eye-tracking while reading experiments, local and global context were manipulated independently, creating a critical condition where local context biases towards a word that is incongruent with global context. Global context consisted of a clause introduced by a concessive marker generating broad expectations about upcoming material. Experiment 1 compared high- and low-predictability critical words, whereas Experiment 2 held the critical word constant and manipulated the preceding verb to impose different levels of local constraint. Facilitation from local context was reduced when it was incongruent with global context, supporting models in which information from global and local context is rapidly integrated during early lexical processing over models that would initially prioritise only local or only global context. |
Mario Reutter; Matthias Gamer Individual patterns of visual exploration predict the extent of fear generalization in humans Journal Article In: Emotion, vol. 23, no. 5, pp. 1267–1280, 2023. @article{Reutter2023, Generalization of fear is considered an important mechanism contributing to the etiology and maintenance of anxiety disorders. Although previous studies have identified the importance of stimulus discrimination for fear generalization, it is still unclear to what degree overt attention to relevant stimulus features might mediate its magnitude. To test the prediction that visual preferences for distinguishing stimulus aspects are associated with reduced fear generalization, we developed a set of facial stimuli that was meticulously manipulated such that pairs of faces could either be distinguished by looking into the eyes or into the region around mouth and nose, respectively. These pairs were then employed as CS+ and CS− in a differential fear conditioning paradigm followed by a generalization test with morphs in steps of 20%. Shock expectancy ratings indicated a moderately curved fear generalization gradient that is typical for healthy samples, but its shape was altered depending on individual attentional deployment: Participants who dwelled on the distinguishing facial features faster and for longer periods of time exhibited less fear generalization. Although both pupil and heart rate responses also showed a generalization gradient, with pupil diameter and heart rate deceleration increasing as a function of threat, these responses were not significantly related to visual exploration. In total, the current results indicate that the extent of explicit fear generalization depends on individual patterns of attentional deployment. Future studies evaluating the efficacy of perceptual trainings that aim to augment stimulus discriminability in order to reduce (over)generalization seem desirable. |
Tracy Reuter; Carolyn Mazzei; Casey Lew-Williams; Lauren Emberson Infants' lexical comprehension and lexical anticipation abilities are closely linked in early language development Journal Article In: Infancy, vol. 28, no. 3, pp. 532–549, 2023. @article{Reuter2023, Theories across cognitive domains propose that anticipating upcoming sensory input supports information processing. In line with this view, prior findings indicate that adults and children anticipate upcoming words during real-time language processing, via such processes as prediction and priming. However, it is unclear if anticipatory processes are strictly an outcome of prior language development or are more entwined with language learning and development. We operationalized this theoretical question as whether developmental emergence of comprehension of lexical items occurs before or concurrently with the anticipation of these lexical items. To this end, we tested infants of ages 12, 15, 18, and 24 months (N = 67) on their abilities to comprehend and anticipate familiar nouns. In an eye-tracking task, infants viewed pairs of images and heard sentences with either informative words (e.g., eat) that allowed them to anticipate an upcoming noun (e.g., cookie), or uninformative words (e.g., see). Findings indicated that infants' comprehension and anticipation abilities are closely linked over developmental time and within individuals. Importantly, we do not find evidence for lexical comprehension in the absence of lexical anticipation. Thus, anticipatory processes are present early in infants' second year, suggesting they are a part of language development rather than solely an outcome of it. |
Anja Rettig; Ulrich Schiefele Relations between reading motivation and reading efficiency—evidence from a longitudinal eye-tracking study Journal Article In: Reading Research Quarterly, vol. 58, no. 4, pp. 685–709, 2023. @article{Rettig2023, Studies on the relation between children's reading motivation and early developmental stages of reading competence are rare and have neglected on-line measures of reading skill (e.g., eye movements indicating word decoding). For this reason, we investigated the effects of intrinsic and extrinsic reading motivation on the efficiency of reading processes based on eye-movement data. Moreover, we examined reading efficiency as a mediator of the relation between motivation and comprehension. German elementary school students in Grades 1–3 (N = 131) were tested on three measurement occasions. Specifically, we assessed reading motivation, reading amount, and sentence comprehension at Time 1, reading efficiency at Time 2 (2 months after Time 1), and all of the variables again at Time 3 (10 months after Time 2). Reading efficiency was assessed while children read age-appropriate sentences and comprised measures of first-fixation duration, gaze duration, total reading time, forward-saccade length, and refixation probability. Linear and cross-lagged panel models showed significant favorable relations between intrinsic reading motivation (operationalized as involvement and enjoyment of reading), but not extrinsic reading motivation (operationalized as striving to outperform one's peers), and most measures of reading efficiency, while controlling for gender, grade level, and reading amount. The reverse effects of reading-efficiency indicators on intrinsic reading motivation were all significant. Moreover, the test of the mediation model revealed a significant indirect effect of Time 1 intrinsic reading motivation on Time 3 sentence comprehension mediated by Time 2 reading efficiency. 
We concluded that intrinsic reading motivation, in contrast to extrinsic reading motivation, facilitates reading comprehension through its effect on reading efficiency, independent of variations in reading amount. |
Thomas R. Reppert; Richard P. Heitz; Jeffrey D. Schall Neural mechanisms for executive control of speed-accuracy trade-off Journal Article In: Cell Reports, vol. 42, no. 11, pp. 1–18, 2023. @article{Reppert2023, The medial frontal cortex (MFC) plays an important but disputed role in speed-accuracy trade-off (SAT). In samples of neural spiking recorded in the supplementary eye field (SEF) in the MFC, simultaneously with the visuomotor frontal eye field and superior colliculus, in macaques performing a visual search with instructed SAT, most SEF neurons discharge less during accuracy emphasis, from before stimulus presentation until response generation. Discharge rates adjust immediately and simultaneously across structures upon SAT cue changes. SEF neurons signal choice errors with stronger and earlier activity during accuracy emphasis. Other neurons signal timing errors, covarying with adjusting response time. Spike correlations between neurons in the SEF and visuomotor areas did not appear, disappear, or change sign across SAT conditions or trial outcomes. These results clarify findings with noninvasive measures, complement previous neurophysiological findings, and endorse the role of the MFC as a critic for the actor instantiated in visuomotor structures. |
Eyal M. Reingold; Heather Sheridan Chess expertise reflects domain-specific perceptual processing: Evidence from eye movements Journal Article In: Journal of Expertise, vol. 6, no. 1, pp. 1–18, 2023. @article{Reingold2023, The remarkably efficient performance of chess experts reflects extensive practice with domain-related visual configurations. To study the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during the performance of a novel double-check detection task. Chess players viewed an array of six minimized chessboards (4 x 4 squares), with each board displaying a king and 2 attackers. Players rapidly searched for the target board containing a double-check among distractor boards which either displayed a single check or displayed no check. During each fixation, chess pieces were only visible within the fixated board, while all other boards were replaced by empty boards. On half the trials, chess pieces were represented using the familiar symbol notation, while on the other half of the trials, pieces were represented using an unfamiliar letter notation. The analysis of overall response times and several fine-grained eye movement measures indicated that in trials using the familiar symbol notation, experts were much faster at identifying the double-check board, and this advantage was substantially attenuated in trials using the unfamiliar letter notation. In addition, an ex-Gaussian distributional analysis documented similar expertise by notation interactions. We discuss the implications of the present findings for theories of visual expertise in general, and skilled performance in chess, in particular. |
Gwendolyn Rehrig; Taylor R. Hayes; John M. Henderson; Fernanda Ferreira Visual attention during seeing for speaking in healthy aging Journal Article In: Psychology and Aging, vol. 38, no. 1, pp. 1–18, 2023. @article{Rehrig2023, As we age, we accumulate a wealth of information, but cognitive processing becomes slower and less efficient. There is mixed evidence on whether world knowledge compensates for age-related cognitive decline (Umanath & Marsh, 2014). We investigated whether older adults are more likely to fixate more meaningful scene locations than are young adults. Young (N = 30) and older adults (N = 30, aged 66–82) described scenes while eye movements and descriptions were recorded. We used a logistic mixed-effects model to determine whether fixated scene locations differed in meaning, salience, and center distance from locations that were not fixated, and whether those properties differed for locations young and older adults fixated. Meaning predicted fixated locations well overall, though the locations older adults fixated were less meaningful than those that young adults fixated. These results suggest that older adults' visual attention is less sensitive to meaning than young adults, despite extensive experience with scenes. |
Maimu Alissa Rehbein; Thomas Kroker; Constantin Winker; Lena Ziehfreund; Anna Reschke; Jens Bölte; Miroslaw Wyczesany; Kati Roesmann; Ida Wessing; Markus Junghöfer Non-invasive stimulation reveals ventromedial prefrontal cortex function in reward prediction and reward processing Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–22, 2023. @article{Rehbein2023, Introduction: Studies suggest an involvement of the ventromedial prefrontal cortex (vmPFC) in reward prediction and processing, with reward-based learning relying on neural activity in response to unpredicted rewards or non-rewards (reward prediction error, RPE). Here, we investigated the causal role of the vmPFC in reward prediction, processing, and RPE signaling by transiently modulating vmPFC excitability using transcranial Direct Current Stimulation (tDCS). Methods: Participants received excitatory or inhibitory tDCS of the vmPFC before completing a gambling task, in which cues signaled varying reward probabilities and symbols provided feedback on monetary gain or loss. We collected self-reported and evaluative data on reward prediction and processing. In addition, cue-locked and feedback-locked neural activity via magnetoencephalography (MEG) and pupil diameter using eye-tracking were recorded. Results: Regarding reward prediction (cue-locked analysis), vmPFC excitation (versus inhibition) resulted in increased prefrontal activation preceding loss predictions, increased pupil dilations, and tentatively more optimistic reward predictions. Regarding reward processing (feedback-locked analysis), vmPFC excitation (versus inhibition) resulted in increased pleasantness, increased vmPFC activation, especially for unpredicted gains (i.e., gain RPEs), decreased perseveration in choice behavior after negative feedback, and increased pupil dilations. Discussion: Our results support the pivotal role of the vmPFC in reward prediction and processing. 
Furthermore, they suggest that transient vmPFC excitation via tDCS induces a positive bias into the reward system that leads to enhanced anticipation and appraisal of positive outcomes and improves reward-based learning, as indicated by greater behavioral flexibility after losses and unpredicted outcomes, which can be seen as an improved reaction to the received feedback. |
Begoña Arechabaleta Regulez; Silvina Montrul Production, acceptability, and online comprehension of Spanish differential object marking by heritage speakers and L2 learners Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–19, 2023. @article{Regulez2023, We analyzed the production, acceptability and online comprehension of Spanish differential object marking (DOM) by two groups of bilingual speakers living in the U.S.: heritage speakers and L2 learners. DOM is the overt marking of direct objects that are higher on the animacy and referentiality scales, such as animate and specific objects in Spanish, marked by the preposition a (Juan ve a María ‘Juan sees DOM María'). Previous studies have reported variability and high omission rates of obligatory DOM in bilingual situations where Spanish is in contact with a non-DOM language. Our study combined different methodologies to tap knowledge of DOM in the two groups. The results showed that heritage speakers and L2 learners (1) exhibited variability with DOM in production (in two oral tasks), comprehension (in an acceptability judgement task), and processing (in an eye-tracking reading task); (2) can integrate DOM into their production, judgments and processing, but they do so inconsistently, and (3) type of task and type of sentence each have an effect on speakers' use of DOM. |
Peter Reddingius; Daniel S. Asfaw; Vera M. Mönter; Nicholas D. Smith; Pete R. Jones; David P. Crabb Data on eye movements of glaucoma patients with asymmetrical visual field loss during free viewing Journal Article In: Data in Brief, vol. 48, pp. 1–7, 2023. @article{Reddingius2023, This paper describes data from Asfaw et al. [1], which examined the eye movements of glaucoma patients (n = 15) with pronounced asymmetrical vision loss (visual field loss worse in one eye). This allows for within-subject comparisons between the better and worse eye, thereby controlling for the effects of individual differences between patients. All patients had a clinical diagnosis of open angle glaucoma (OAG). Participants were asked to look at images of nature monocularly (free viewing; fellow eye patched) while gaze was recorded at 1000 Hz using a remote eye tracker (EyeLink 1000). Raw and processed eye tracking data are provided. In addition, clinical (visual acuity, contrast sensitivity and visual field) and demographic information (age, sex) are provided. |
Ralph S. Redden; Matthew D. Hilchey; Sinan Aslam; Jason Ivanoff; Raymond M. Klein Using speed and accuracy and the Simon Effect to explore the output form of inhibition of return Journal Article In: Vision, vol. 7, no. 1, pp. 1–13, 2023. @article{Redden2023, Inhibition of return (IOR) refers to slower responses to targets presented at previously cued locations. Contrasting target discrimination performance over various eye movement conditions has shown the level of activation of the reflexive oculomotor system determines the nature of the effect. Notably, an inhibitory effect of a cue nearer to the input end of the processing continuum is observed when the reflexive oculomotor system is actively suppressed, and an inhibitory effect nearer the output end of the processing continuum is observed when the reflexive oculomotor system is actively engaged. Furthermore, these two forms of IOR interact differently with the Simon effect. Drift diffusion modeling has suggested that two parameters can theoretically account for the speed-accuracy tradeoff rendered by the output-based form of IOR: increased threshold and decreased trial noise. In Experiment 1, we demonstrate that the threshold parameter best accounts for the output-based form of IOR by measuring it with intermixed discrimination and localization targets. Experiment 2 employed the response-signal methodology and showed that the output-based form has no effect on the accrual of information about the target's identity. These results converge with the response bias account for the output form of IOR. |
Lukas Recker; Christian H. Poth Test–retest reliability of eye tracking measures in a computerized Trail Making Test Journal Article In: Journal of Vision, vol. 23, no. 8, pp. 1–17, 2023. @article{Recker2023, The Trail Making Test (TMT) is a frequently applied neuropsychological test that evaluates participants' executive functions based on their time to connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B). Test performance is associated with various cognitive functions ranging from visuomotor speed to working memory capabilities. However, although the test can screen for impaired executive functioning in a variety of neuropsychiatric disorders, it provides only little information about which specific cognitive impairments underlie performance detriments. To resolve this lack of specificity, recent cognitive research combined the TMT with eye tracking so that eye movements could help uncover reasons for performance impairments. However, using eye-tracking-based test scores to examine differences between persons, and ultimately apply the scores for diagnostics, presupposes that the reliability of the scores is established. Therefore, we investigated the test–retest reliabilities of scores in an eye-tracking version of the TMT recently introduced by Recker et al. (2022). We examined two healthy samples performing an initial test and then a retest 3 days (n = 31) or 10 to 30 days (n = 34) later. Results reveal that, although reliabilities of classic completion times were overall good, comparable with earlier versions, reliabilities of eye-tracking-based scores ranged from excellent (e.g., durations of fixations) to poor (e.g., number of fixations guiding manual responses). 
These findings indicate that some eye-tracking measures offer a strong basis for assessing interindividual differences beyond classic behavioral measures when examining processes related to information accumulation but are less suitable to diagnose differences in eye–hand coordination. |
Daniele Re; Golan Karvat; Ayelet N. Landau Attentional sampling between eye channels Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 8, pp. 1350–1360, 2023. @article{Re2023, Our ability to detect targets in the environment fluctuates in time. When individuals focus attention on a single location, the ongoing temporal structure of performance fluctuates at 8 Hz. When task demands require the distribution of attention over two objects defined by their location, color or motion direction, ongoing performance fluctuates at 4 Hz per object. This suggests that distributing attention entails the division of the sampling process found for focused attention. It is unknown, however, at what stage of the processing hierarchy this sampling occurs, and whether attentional sampling depends on awareness. Here, we show that unaware selection between the two eyes leads to rhythmic sampling. We presented a display with a single central object to both eyes and manipulated the presentation of a reset event (i.e., cue) and a detection target to either both eyes (binocular) or separately to the different eyes (monocular). We assume that presenting a cue to one eye biases the selection process to content presented in that eye. Although participants were unaware of this manipulation, target detection fluctuated at 8 Hz under the binocular condition, and at 4 Hz when the right (and dominant) eye was cued. These results are consistent with recent findings reporting that competition between receptive fields leads to attentional sampling and demonstrate that this competition does not rely on aware processes. Furthermore, attentional sampling occurs at an early site of competition among monocular channels, before they are fused in the primary visual cortex. |
Isabel Raposo; Sara M. Szczepanski; Kathleen Haaland; Tor Endestad; Anne Kristin Solbakk; Robert T. Knight; Randolph F. Helfrich Periodic attention deficits after frontoparietal lesions provide causal evidence for rhythmic attentional sampling Journal Article In: Current Biology, vol. 33, no. 22, pp. 4893–4904, 2023. @article{Raposo2023, Contemporary models conceptualize spatial attention as a blinking spotlight that sequentially samples visual space. Hence, behavior fluctuates over time, even in states of presumed “sustained” attention. Recent evidence has suggested that rhythmic neural activity in the frontoparietal network constitutes the functional basis of rhythmic attentional sampling. However, causal evidence to support this notion remains absent. Using a lateralized spatial attention task, we addressed this issue in patients with focal lesions in the frontoparietal attention network. Our results revealed that frontoparietal lesions introduce periodic attention deficits, i.e., temporally specific behavioral deficits that are aligned with the underlying neural oscillations. Attention-guided perceptual sensitivity was on par with that of healthy controls during optimal phases but was attenuated during the less excitable sub-cycles. Theta-dependent sampling (3–8 Hz) was causally dependent on the prefrontal cortex, while high-alpha/low-beta sampling (8–14 Hz) emerged from parietal areas. Collectively, our findings reveal that lesion-induced high-amplitude, low-frequency brain activity is not epiphenomenal but has immediate behavioral consequences. More generally, these results provide causal evidence for the hypothesis that the functional architecture of attention is inherently rhythmic. |
Rajani Raman; Anna Bognár; Ghazaleh Ghamkhari Nejad; Nick Taubert; Martin Giese; Rufin Vogels Bodies in motion: Unraveling the distinct roles of motion and shape in dynamic body responses in the temporal cortex Journal Article In: Cell Reports, vol. 42, no. 12, pp. 1–20, 2023. @article{Raman2023, The temporal cortex represents social stimuli, including bodies. We examine and compare the contributions of dynamic and static features to the single-unit responses to moving monkey bodies in and between a patch in the anterior dorsal bank of the superior temporal sulcus (dorsal patch [DP]) and patches in the anterior inferotemporal cortex (ventral patch [VP]), using fMRI guidance in macaques. The response to dynamics varies within both regions, being higher in DP. The dynamic body selectivity of VP neurons correlates with static features derived from convolutional neural networks and motion. DP neurons' dynamic body selectivity is not predicted by static features but is dominated by motion. Whereas these data support the dominance of motion in the newly proposed “dynamic social perception” stream, they challenge the traditional view that distinguishes DP and VP processing in terms of motion versus static features, underscoring the role of inferotemporal neurons in representing body dynamics. |
Masih Rahmati; Clayton E. Curtis; Kartik K. Sreenivasan Mnemonic representations in human lateral geniculate nucleus Journal Article In: Frontiers in Behavioral Neuroscience, vol. 17, pp. 1–11, 2023. @article{Rahmati2023, There is a growing appreciation for the role of the thalamus in high-level cognition. Motivated by findings that internal cognitive state drives activity in feedback layers of primary visual cortex (V1) that target the lateral geniculate nucleus (LGN), we investigated the role of LGN in working memory (WM). Specifically, we leveraged model-based neuroimaging approaches to test the hypothesis that human LGN encodes information about spatial locations temporarily encoded in WM. First, we localized and derived a detailed topographic organization in LGN that accords well with previous findings in humans and non-human primates. Next, we used models constructed on the spatial preferences of LGN populations in order to reconstruct spatial locations stored in WM as subjects performed modified memory-guided saccade tasks. We found that population LGN activity faithfully encoded the spatial locations held in memory in all subjects. Importantly, our tasks and models allowed us to dissociate the locations of retinal stimulation and the motor metrics of memory-guided saccades from the maintained spatial locations, thus confirming that human LGN represents true WM information. These findings add LGN to the growing list of subcortical regions involved in WM, and suggest a key pathway by which memories may influence incoming processing at the earliest levels of the visual hierarchy. |
Aida Rahavi; Manuela Malaspina; Andrea Albonico; Jason J. S. Barton “Looking at nothing”: An implicit ocular motor index of face recognition in developmental prosopagnosia Journal Article In: Cognitive Neuropsychology, vol. 40, no. 2, pp. 59–70, 2023. @article{Rahavi2023, Subjects often look towards the previous location of a stimulus related to a task even when that stimulus is no longer visible. In this study we asked whether this effect would be preserved or reduced in subjects with developmental prosopagnosia. Participants learned faces presented in video-clips and then saw a brief montage of four faces, which was replaced by a screen with empty boxes, at which time they indicated whether the learned face had been present in the montage. Control subjects were more likely to look at the blank location where the learned face had appeared, on both hit and miss trials, though the effect was larger on hit trials. Prosopagnosic subjects showed a reduced effect, though still better on hit than on miss trials. We conclude that explicit accuracy and our implicit looking at nothing effect are parallel effects reflecting the strength of the neural activity underlying face recognition. |
Jan Ole Radecke; Andreas Sprenger; Hannah Stöckler; Lisa Espeter; Mandy Josephine Reichhardt; Lara S. Thomann; Tim Erdbrügger; Yvonne Buschermöhle; Stefan Borgwardt; Till R. Schneider; Joachim Gross; Carsten H. Wolters; Rebekka Lencer Normative tDCS over V5 and FEF reveals practice-induced modulation of extraretinal smooth pursuit mechanisms, but no specific stimulation effect Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–15, 2023. @article{Radecke2023, The neural networks subserving smooth pursuit eye movements (SPEM) provide an ideal model for investigating the interaction of sensory processing and motor control during ongoing movements. To better understand core plasticity aspects of sensorimotor processing for SPEM, normative sham, anodal or cathodal transcranial direct current stimulation (tDCS) was applied over visual area V5 and frontal eye fields (FEF) in sixty healthy participants. The identical within-subject paradigm was used to assess SPEM modulations by practice. While no specific tDCS effects were revealed, within- and between-session practice effects indicate plasticity of top-down extraretinal mechanisms that mainly affect SPEM in the absence of visual input and during SPEM initiation. To explore the potential of tDCS effects, individual electric field simulations were computed based on calibrated finite element head models and individual functional localization of V5 and FEF location (using functional MRI) and orientation (using combined EEG/MEG) was conducted. Simulations revealed only limited electric field target intensities induced by the applied normative tDCS montages but indicate the potential efficacy of personalized tDCS for the modulation of SPEM. In sum, results indicate the potential susceptibility of extraretinal SPEM control to targeted external neuromodulation (e.g., personalized tDCS) and intrinsic learning protocols. |
Adam W. Qureshi; Rebecca L. Monk; Shelby Quinn; Bethan Gannon; Kayleigh McNally; Derek Heim Catching a smile from individuals and crowds: Evidence for distinct emotional contagion processes Journal Article In: Journal of Personality and Social Psychology, pp. 1–21, 2023. @article{Qureshi2023, Research examining how crowd emotions impact observers usually requires participants to engage in an atypical mental process whereby (static) arrays of individuals are cognitively integrated to represent a crowd. The present work sought to extend our understanding of how crowd emotions may spread to individuals by assessing self-reported emotions, attention and muscle movement in response to emotions of dynamic, virtually modeled crowd stimuli. Self-reported emotions and attention from thirty-six participants were assessed when foreground and background crowd characters exhibited homogeneous (Study 1) or heterogeneous (Study 2) positive, neutral, or negative emotions. Results suggested that affective responses in observers are shaped by crowd emotions even in the absence of direct attention. Thirty-four participants supplied self-report and facial electromyography responses to the same homogeneous (Study 3) or heterogeneous (Study 4) crowd stimuli. Results indicated that positive crowd emotions appeared to exert greater attentional pull and objective responses, while negative crowd emotions also elicited affective responses. Study 5 (n = 67) introduced a control condition (stimuli containing an individual person) to examine if responses are unique to crowds and found that emotional contagion from crowds was more intense than from individuals. These studies present methodological advances in the study of crowd emotional contagion and have implications for our broader understanding of how people process, attend, and affectively respond to crowds. 
Advancing theory by suggesting that emotional contagion from crowds is distinct from that elicited by individuals, findings may have applications for refining crowd management approaches. |
Ying Que; Yueyuan Zheng; Janet H. Hsiao; Xiao Hu Studying the effect of self-selected background music on reading task with eye movements Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–18, 2023. @article{Que2023, Using background music (BGM) during learning is a common behavior, yet whether BGM can facilitate or hinder learning remains inconclusive and the underlying mechanism is largely an open question. This study aims to elucidate the effect of self-selected BGM on reading task for learners with different characteristics. Particularly, learners' reading task performance, metacognition, and eye movements were examined, in relation to their personal traits including language proficiency, working memory capacity, music experience and personality. Data were collected from a between-subject experiment with 100 non-native English speakers who were randomly assigned into two groups. Those in the experimental group read English passages with music of their own choice played in the background, while those in the control group performed the same task in silence. Results showed no salient differences on passage comprehension accuracy or metacognition between the two groups. Comparisons on fine-grained eye movement measures reveal that BGM imposed heavier cognitive load on post-lexical processes but not on lexical processes. It was also revealed that students with higher English proficiency level or more frequent BGM usage in daily self-learning/reading experienced less cognitive load when reading with their BGM, whereas students with higher working memory capacity (WMC) invested more mental effort than those with lower WMC in the BGM condition. These findings further scientific understanding of how BGM interacts with cognitive tasks in the foreground, and provide practical guidance for learners and learning environment designers on making the most of BGM for instruction and learning. |
Zeguo Qiu; Dihua Wu; Benjamin J. Muehlebach Differential modulation on neural activity related to flankers during face processing: A visual crowding study Journal Article In: Neuroscience Letters, vol. 815, no. September, pp. 137496, 2023. @article{Qiu2023a, In this visual crowding study, we manipulated the perceivability of a central crowded face (a fearful or a neutral face) by varying the similarity between the central face and the surrounding flanker stimuli. We presented participants with pairs of visual clutters and recorded their electroencephalography during an emotion judgement task. In an upright flanker condition where both the central target face and flanker faces were upright faces, participants were less likely to report seeing the target face, and their P300 was weakened, compared to a scrambled flanker condition where scrambled face images were used as flankers. Additionally, at ∼ 120 ms post-stimulus, a posterior negativity was found for the upright compared to scrambled flanker condition, however only for fearful face targets. We concluded that early neural responses seem to be affected by the perceptual characteristics of both target and flanker stimuli whereas later-stage neural activity is associated with post-perceptual evaluation of the stimuli in this visual crowding paradigm. |
Zeguo Qiu; Stefanie I. Becker; Hongfeng Xia; Zachary Hamblin-Frohman; Alan J. Pegna Fixation-related electrical potentials during a free visual search task reveal the timing of visual awareness Journal Article In: iScience, vol. 26, no. 7, pp. 1–17, 2023. @article{Qiu2023, It has been repeatedly claimed that emotional faces readily capture attention, and that they may be processed without awareness. Yet some observations cast doubt on these assertions. Part of the problem may lie in the experimental paradigms employed. Here, we used a free viewing visual search task during electroencephalographic recordings, where participants searched for either fearful or neutral facial expressions among distractor expressions. Fixation-related potentials were computed for fearful and neutral targets and the response compared for stimuli consciously reported or not. We showed that awareness was associated with an electrophysiological negativity starting at around 110 ms, while emotional expressions were distinguished on the N170 and early posterior negativity only when stimuli were consciously reported. These results suggest that during unconstrained visual search, the earliest electrical correlate of awareness may emerge as early as 110 ms, and fixating at an emotional face without reporting it may not produce any unconscious processing. |
Nan Qin; Francesca Crespi; Alice Mado Proverbio; Gilles Pourtois A systematic exploration of attentional load effects on the C1 ERP component Journal Article In: Psychophysiology, pp. 1–30, 2023. @article{Qin2023, The C1 ERP component reflects the earliest visual processing in V1. However, it remains debated whether attentional load can influence it or not. We conducted two EEG experiments to investigate the effect of attentional load on the C1. Task difficulty was manipulated at fixation using an oddball detection task that was either easy (low load) or difficult (high load), while the distractor was presented in the upper visual field (UVF) to score the C1. In Experiment 1, we used a block design and the stimulus onset asynchrony (SOA) between the central stimulus and the peripheral distractor was either short or long. In Experiment 2, task difficulty was manipulated on a trial-by-trial basis using a visual cue, and the peripheral distractor was presented either before or after the central stimulus. The results showed that the C1 was larger in the high compared to the low load condition irrespective of SOA in Experiment 1. In Experiment 2, no significant load modulation of the C1 was observed. However, we found that the contingent negative variation (CNV) was larger in the low compared to the high load condition. Moreover, the C1 was larger when the peripheral distractor was presented after than before the central stimulus. Combined together, these results suggest that different top-down control processes can influence the initial feedforward stage of visual processing in V1 captured by the C1 ERP component. |
Minglang Qiao; Yufan Liu; Mai Xu; Xin Deng; Bing Li; Weiming Hu; Ali Borji Joint learning of audio–visual saliency prediction and sound source localization on multi-face videos Journal Article In: International Journal of Computer Vision, pp. 1–23, 2023. @article{Qiao2023, Visual and audio events simultaneously occur and both attract attention. However, most existing saliency prediction works ignore the influence of audio and only consider vision modality. In this paper, we propose a multi-task learning method for audio–visual saliency prediction and sound source localization on multi-face video by leveraging visual, audio and face information. Specifically, we first introduce a large-scale database of multi-face video in visual-audio condition, containing eye-tracking data and sound source annotations. Using this database, we find that sound influences human attention, and conversely attention offers a cue to determine sound source on multi-face video. Guided by these findings, an audio–visual multi-task network (AVM-Net) is introduced to predict saliency and locate sound source. AVM-Net consists of three branches corresponding to visual, audio and face modalities. The visual branch has a two-stream architecture to capture spatial and temporal information. Face and audio branches encode audio signals and faces, respectively. Finally, a spatio-temporal multimodal graph is constructed to model the interaction among multiple faces. With joint optimization of these branches, the intrinsic correlation of the tasks of saliency prediction and sound source localization is utilized and their performance is boosted by each other. Experiments show that the proposed method outperforms 12 state-of-the-art saliency prediction methods, and achieves competitive results in sound source localization. |
Linze Qian; Xianliang Ge; Zhao Feng; Sujie Wang; Jingjia Yuan; Yunxian Pan; Hongqi Shi; Jie Xu; Yu Sun Brain network reorganization during visual search task revealed by a network analysis of fixation-related potential Journal Article In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 1219–1229, 2023. @article{Qian2023, Visual search is ubiquitous in daily life and has attracted substantial research interest over the past decades. Although accumulating evidence has suggested complex neurocognitive processes underlying visual search, the neural communication across the brain regions remains poorly understood. The present work aimed to fill this gap by investigating functional networks of fixation-related potential (FRP) during the visual search task. Multi-frequency electroencephalogram (EEG) networks were constructed from 70 university students (male/female = 35/35) using FRPs time-locked to target and non-target fixation onsets, which were determined by concurrent eye-tracking data. Then graph theoretical analysis (GTA) and a data-driven classification framework were employed to quantitatively reveal the divergent reorganization between target and non-target FRPs. We found distinct network architectures between target and non-target mainly in the delta and theta bands. More importantly, we achieved a classification accuracy of 92.74% for target and non-target discrimination using both global and nodal network features. In line with the results of GTA, we found that the integration corresponding to target and non-target FRPs significantly differed, while the nodal features contributing most to classification performance primarily resided in the occipital and parietal-temporal areas. Interestingly, we revealed that females exhibited significantly higher local efficiency in the delta band when focusing on the search task. In summary, these results provide some of the first quantitative insights into the underlying brain interaction patterns during the visual search process. |
Philip T. Putnam; Cheng Chi J. Chu; Nicholas A. Fagan; Olga Dal Monte; Steve W. C. Chang Dissociation of vicarious and experienced rewards by coupling frequency within the same neural pathway Journal Article In: Neuron, vol. 111, no. 16, pp. 2513–2522, 2023. @article{Putnam2023, Vicarious reward, essential to social learning and decision making, is theorized to engage select brain regions similarly to experienced reward to generate a shared experience. However, it is just as important for neural systems to also differentiate vicarious from experienced rewards for social interaction. Here, we investigated the neuronal interaction between the primate anterior cingulate cortex gyrus (ACCg) and the basolateral amygdala (BLA) when social choices made by monkeys led to either vicarious or experienced reward. Coherence between ACCg spikes and BLA local field potential (LFP) selectively increased in gamma frequencies for vicarious reward, whereas it selectively increased in alpha/beta frequencies for experienced reward. These respectively enhanced couplings for vicarious and experienced rewards were uniquely observed following voluntary choices. Moreover, reward outcomes had consistently strong directional influences from ACCg to BLA. Our findings support a mechanism of vicarious reward where social agency is tagged by interareal coordination frequency within the same shared pathway. |
Vesa Putkinen; Sanaz Nazari-Farsani; Tomi Karjalainen; Severi Santavirta; Matthew Hudson; Kerttu Seppälä; Lihua Sun; Henry K. Karlsson; Jussi Hirvonen; Lauri Nummenmaa Pattern recognition reveals sex-dependent neural substrates of sexual perception Journal Article In: Human Brain Mapping, vol. 44, no. 6, pp. 2543–2556, 2023. @article{Putkinen2023, Sex differences in brain activity evoked by sexual stimuli remain elusive despite robust evidence for stronger enjoyment of and interest toward sexual stimuli in men than in women. To test whether visual sexual stimuli evoke different brain activity patterns in men and women, we measured hemodynamic brain activity induced by visual sexual stimuli in two experiments with 91 subjects (46 males). In one experiment, the subjects viewed sexual and nonsexual film clips, and dynamic annotations for nudity in the clips were used to predict hemodynamic activity. In the second experiment, the subjects viewed sexual and nonsexual pictures in an event-related design. Men showed stronger activation than women in the visual and prefrontal cortices and dorsal attention network in both experiments. Furthermore, using multivariate pattern classification we could accurately predict the sex of the subject on the basis of the brain activity elicited by the sexual stimuli. The classification generalized across the experiments indicating that the sex differences were task-independent. Eye tracking data obtained from an independent sample of subjects (N = 110) showed that men looked longer than women at the chest area of the nude female actors in the film clips. These results indicate that visual sexual stimuli evoke discernible brain activity patterns in men and women which may reflect stronger attentional engagement with sexual stimuli in men. |
Zoe A. Purcell; Colin A. Wastell; Naomi Sweller Eye movements reveal that low confidence precedes deliberation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 7, pp. 1539–1546, 2023. @article{Purcell2023a, Contemporary dual-process models of reasoning maintain that there are two types of thinking – intuitive and deliberative – and that low confidence often leads to deliberation. Previous studies examining the confidence–deliberation relationship have been limited by (1) issues of endogeneity and between-subject comparisons, which we address in this study through debias training and (2) measures of confidence that are taken relatively late in the reasoning process, which we address by measuring confidence via real-time eye-tracking. Self-reported and eye-tracked confidence were both negatively related to deliberative thinking. This finding provides new evidence of the timecourse of the confidence–deliberation relationship and reveals that lowered confidence precedes deliberation. |
Zoe A. Purcell; Andrew J. Roberts; Simon J. Handley; Stephanie Howarth Eye movements, pupil dilation, and conflict detection in reasoning: Exploring the evidence for intuitive logic Journal Article In: Cognitive Science, vol. 47, no. 6, pp. 1–18, 2023. @article{Purcell2023, A controversial claim in recent dual process accounts of reasoning is that intuitive processes not only lead to bias but are also sensitive to the logical status of an argument. The intuitive logic hypothesis draws upon evidence that reasoners take longer and are less confident on belief–logic conflict problems, irrespective of whether they give the correct logical response. In this paper, we examine conflict detection under conditions in which participants are asked to either judge the logical validity or believability of a presented conclusion, accompanied by measures of eye movement and pupil dilation. The findings show an effect of conflict, under both types of instruction, on accuracy, latency, gaze shifts, and pupil dilation. Importantly, these effects extend to conflict trials in which participants give a belief-based response (incorrectly under logic instructions or correctly under belief instructions) demonstrating both behavioral and physiological evidence in support of the logical intuition hypothesis. |
Eva Puimège; Maribel Montero Perez; Elke Peters Promoting L2 acquisition of multiword units through textually enhanced audiovisual input: An eye-tracking study Journal Article In: Second Language Research, vol. 39, no. 2, pp. 471–492, 2023. @article{Puimege2023a, This study examines the effect of textual enhancement on learners' attention to and learning of multiword units from captioned audiovisual input. We adopted a within-participants design in which 28 learners of English as a foreign language (EFL) watched a captioned video containing enhanced (underlined) and unenhanced multiword units. Using eye-tracking, we measured learners' online processing of the multiword units as they appeared in the captions. Form recall pre- and posttests measured learners' acquisition of the target items. The results of mixed effects models indicate that enhanced items received greater visual attention, with longer reading times, less single word skipping and more rereading. Further, a positive relationship was found between amount of visual attention and learning odds: items fixated longer, particularly during the first pass, were more likely to be recalled in an immediate posttest. Our findings provide empirical support for the positive effect of visual attention on form recall of multiword units encountered in captioned television. The results also suggest that item difficulty and amount of attention were more important than textual enhancement in predicting learning gains. |