EyeLink Clinical and Oculomotor Eye-Tracking Publications
EyeLink clinical and oculomotor research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Saccadic Adaptation, Schizophrenia, Nystagmus, etc. You can also search for individual author names, and limit searches by year (choose the year then click the search button). If we missed any EyeLink clinical or oculomotor articles, please email us!
2024
Yordanka Zafirova; Anna Bognár; Rufin Vogels Configuration-sensitive face-body interactions in primate visual cortex Journal Article In: Progress in Neurobiology, vol. 232, pp. 1–16, 2024. Traditionally, the neural processing of faces and bodies is studied separately, although they are encountered together, as parts of an agent. Despite its social importance, it is poorly understood how faces and bodies interact, particularly at the single-neuron level. Here, we examined the interaction between faces and bodies in the macaque inferior temporal (IT) cortex, targeting an fMRI-defined patch. We recorded responses of neurons to monkey images in which the face was in its natural location (natural face-body configuration), or in which the face was mislocated with respect to the upper body (unnatural face-body configuration). On average, the neurons did not respond more strongly to the natural face-body configurations than predicted by the summed responses to their faces and bodies presented in isolation. However, the neurons responded more strongly to the natural than to the unnatural face-body configurations. This configuration effect was present for face- and monkey-centered images, did not depend on local feature differences between configurations, and was present when the face was replaced by a small object. The face-body interaction rules differed between natural and unnatural configurations. In sum, we show for the first time that single IT neurons process faces and bodies in a configuration-specific manner, preferring natural face-body configurations.
Lei Wang; Xufeng Zhou; Jie Yang; Fu Zeng; Shuzhen Zuo; Makoto Kusunoki; Huimin Wang; Yong-di Zhou; Aihua Chen; Sze Chai Kwok Mixed coding of content-temporal detail by dorsomedial posterior parietal neurons Journal Article In: Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2024. The dorsomedial posterior parietal cortex (dmPPC) is part of a higher-cognition network implicated in elaborate processes underpinning memory formation, recollection, episode reconstruction, and temporal information processing. Neural coding for complex episodic processing is, however, under-documented. Here, we recorded extracellular neural activities from three male rhesus macaques (Macaca mulatta) and revealed a set of neural codes of “neuroethogram” in the primate parietal cortex. Analyzing neural responses in macaque dmPPC to naturalistic videos, we discovered several groups of neurons that are sensitive to different categories of ethogram items, low-level sensory features, and saccadic eye movement. We also discovered that the processing of category and feature information by these neurons is sustained by the accumulation of temporal information over a long timescale of up to 30 s, corroborating its reported long temporal receptive windows. We performed an additional behavioral experiment with two additional male rhesus macaques and found that saccade-related activities could not account for the mixed neuronal responses elicited by the video stimuli. We further observed that monkeys' scan paths and gaze consistency are modulated by video content. Taken altogether, these neural findings explain how dmPPC weaves fabrics of ongoing experiences together in real time. The high dimensionality of neural representations should motivate us to shift the focus of attention from pure selectivity neurons to mixed selectivity neurons, especially in increasingly complex naturalistic task designs.
Reiji Tanaka; Kei Watanabe; Takafumi Suzuki; Kae Nakamura; Masaharu Yasuda; Hiroshi Ban; Ken Okada; Shigeru Kitazawa An easy-to-implement, non-invasive head restraint method for monkey fMRI Journal Article In: NeuroImage, vol. 285, pp. 1–12, 2024. Functional magnetic resonance imaging (fMRI) in behaving monkeys has a strong potential to bridge the gap between human neuroimaging and primate neurophysiology. In monkey fMRI, to restrain head movements, researchers usually surgically implant a plastic head-post on the skull. Although time-proven to be effective, this technique can create burdens for animals, including a risk of infection and discomfort. Furthermore, the presence of extraneous objects on the skull, such as bone screws and dental cement, adversely affects signals near the cortical surface. These side effects are undesirable in terms of both the practical aspect of efficient data collection and the spirit of “refinement” from the 3Rs. Here, we demonstrate that a completely non-invasive fMRI scan in awake monkeys is possible by using a plastic head mask made to fit the skull of each individual animal. In all three monkeys tested, longitudinal, quantitative assessment of head movements showed that the plastic mask effectively suppressed head movements, and we were able to obtain reliable retinotopic BOLD signals in a standard retinotopic mapping task. The present, easy-to-make plastic mask has a strong potential to simplify fMRI experiments in awake monkeys while yielding data of as good or even better quality than that obtained with the conventional head-post method.
Janina Hüer; Pankhuri Saxena; Stefan Treue Pathway-selective optogenetics reveals the functional anatomy of top–down attentional modulation in the macaque visual cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 121, no. 3, pp. 1–9, 2024. Spatial attention represents a powerful top–down influence on sensory responses in primate visual cortical areas. The frontal eye field (FEF) has emerged as a key candidate area for the source of this modulation. However, it is unclear whether the FEF exerts its effects via its direct axonal projections to visual areas or indirectly through other brain areas, and whether the FEF affects both the enhancement of attended and the suppression of unattended sensory responses. We used pathway-selective optogenetics in rhesus macaques performing a spatial attention task to inhibit the direct input from the FEF to area MT, an area along the dorsal visual pathway specialized for the processing of visual motion information. Our results show that the optogenetic inhibition of the FEF input specifically reduces attentional modulation in MT by about a third without affecting the neurons' sensory response component. We find that the direct FEF-to-MT pathway contributes to both the enhanced processing of target stimuli and the suppression of distractors. The FEF thus selectively modulates firing rates in visual area MT, and it does so via its direct axonal projections.
Lei Yuan; Miriam Novack; David Uttal; Steven Franconeri Language systematizes attention: How relational language enhances relational representation by guiding attention Journal Article In: Cognition, vol. 243, pp. 1–14, 2024. Language can affect cognition, but through what mechanism? Substantial past research has focused on how labeling can elicit categorical representation during online processing. We focus here on a particularly powerful type of language, relational language, and show that relational language can enhance relational representation in children through an embodied attention mechanism. Four-year-old children were given a color-location conjunction task, in which they were asked to encode a two-color square, split either vertically or horizontally (e.g., red on the left, blue on the right), and later recall the same configuration from its mirror reflection. During the encoding phase, children in the experimental condition heard relational language (e.g., "Red is on the left of blue"), while those in the control condition heard generic non-relational language (e.g., "Look at this one, look at it closely"). At recall, children in the experimental condition were more successful at choosing the correct relational representation between the two colors compared to the control group. Moreover, they exhibited different attention patterns as predicted by the attention shift account of relational representation (Franconeri et al., 2012). To test the sustained effect of language and the role of attention, during the second half of the study, the experimental condition was given generic non-relational language. There was a sustained advantage in the experimental condition for both behavioral accuracies and signature attention patterns. Overall, our findings suggest that relational language enhances relational representation by guiding learners' attention, and this facilitative effect persists over time even in the absence of language. Implications for the mechanism of how relational language can enhance the learning of relational systems (e.g., mathematics, spatial cognition) by guiding attention will be discussed.
Reza Azadi; Emily Lopez; Jessica Taubert; Amanda Patterson; Arash Afraz Inactivation of face-selective neurons alters eye movements when free viewing faces Journal Article In: Proceedings of the National Academy of Sciences, vol. 121, no. 3, pp. 1–10, 2024. During free viewing, faces attract gaze and induce specific fixation patterns corresponding to the facial features. This suggests that neurons encoding the facial features are in the causal chain that steers the eyes. However, there is no physiological evidence to support a mechanistic link between face-encoding neurons in high-level visual areas and the oculomotor system. In this study, we targeted the middle face patches of the inferior temporal (IT) cortex in two macaque monkeys using a functional magnetic resonance imaging (fMRI) localizer. We then utilized muscimol microinjection to unilaterally suppress IT neural activity inside and outside the face patches and recorded eye movements while the animals freely viewed natural scenes. Inactivation of the face-selective neurons altered the pattern of eye movements on faces: The monkeys found faces in the scene but neglected the eye contralateral to the inactivated hemisphere. These findings reveal the causal contribution of the high-level visual cortex in eye movements.
Monica Vanoncini; Stefanie Hoehl; Birgit Elsner; Sebastian Wallot; Natalie Boll-Avetisyan; Ezgi Kayhan Mother-infant social gaze dynamics relate to infant brain activity and word segmentation Journal Article In: Developmental Cognitive Neuroscience, vol. 65, pp. 1–8, 2024. The ‘social brain', consisting of areas sensitive to social information, supposedly gates the mechanisms involved in human language learning. Early preverbal interactions are guided by ostensive signals, such as gaze patterns, which are coordinated across body, brain, and environment. However, little is known about how the infant brain processes social gaze in naturalistic interactions and how this relates to infant language development. During free-play of 9-month-olds with their mothers, we recorded hemodynamic cortical activity of ‘social brain' areas (prefrontal cortex, temporo-parietal junctions) via fNIRS, and micro-coded mother's and infant's social gaze. Infants' speech processing was assessed with a word segmentation task. Using joint recurrence quantification analysis, we examined the connection between infants' ‘social brain' activity and the temporal dynamics of social gaze at intrapersonal (i.e., infant's coordination, maternal coordination) and interpersonal (i.e., dyadic coupling) levels. Regression modeling revealed that intrapersonal dynamics in maternal social gaze (but not infant's coordination or dyadic coupling) coordinated significantly with infant's cortical activity. Moreover, recurrence quantification analysis revealed that intrapersonal maternal social gaze dynamics (in terms of entropy) were the best predictor of infants' word segmentation. The findings support the importance of social interaction in language development, particularly highlighting maternal social gaze dynamics.
H. Ershaid; M. Lizarazu; D. J. McLaughlin; M. Cooke; O. Simantiraki; M. Koutsogiannaki; M. Lallier Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions Journal Article In: Cortex, vol. 172, pp. 54–71, 2024. Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and distinct adverse listening conditions. Forty-nine normal hearing adults listened to sentences produced casually, presented in quiet and two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were registered with electroencephalogram, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Ethan S. Bromberg-Martin; Yang-Yang Feng; Takaya Ogasawara; J. Kael White; Kaining Zhang; Ilya E. Monosov A neural mechanism for conserved value computations integrating information and rewards Journal Article In: Nature Neuroscience, vol. 27, pp. 1–17, 2024. Behavioral and economic theory dictate that we decide between options based on their values. However, humans and animals eagerly seek information about uncertain future rewards, even when this does not provide any objective value. This implies that decisions are made by endowing information with subjective value and integrating it with the value of extrinsic rewards, but the mechanism is unknown. Here, we show that human and monkey value judgements obey strikingly conserved computational principles during multi-attribute decisions trading off information and extrinsic reward. We then identify a neural substrate in a highly conserved ancient structure, the lateral habenula (LHb). LHb neurons signal subjective value, integrating information's value with extrinsic rewards, and the LHb predicts and causally influences ongoing decisions. Neurons in key input areas to the LHb largely signal components of these computations, not integrated value signals. Thus, our data uncover neural mechanisms of conserved computations underlying decisions to seek information about the future.
Maya Campbell; Nicole Oppenheimer; Alex L. White Severe processing capacity limits for sub-lexical features of letter strings Journal Article In: Attention, Perception, & Psychophysics, pp. 1–10, 2024. When reading, the visual system is confronted with many words simultaneously. How much of that information can a reader process at once? Previous studies demonstrated that low-level visual features of multiple words are processed in parallel, but lexical attributes are processed serially, for one word at a time. This implies that an internal bottleneck lies somewhere between early visual and lexical analysis. We used a dual-task behavioral paradigm to investigate whether this bottleneck lies at the stage of letter recognition or phonological decoding. On each trial, two letter strings were flashed briefly, one above and one below fixation, and then masked. In the letter identification experiment, participants indicated whether a vowel was present in a particular letter string. In the phonological decoding experiment, participants indicated whether the letter string was pronounceable. We compared accuracy in a focused attention condition, in which participants judged only one of the two strings, with accuracy in a divided attention condition, in which participants judged both strings independently. In both experiments, the cost of dividing attention was so large that it supported a serial model: participants were able to process only one letter string per trial. Furthermore, we found a stimulus processing trade-off that is characteristic of serial processing: When participants judged one string correctly, they were less likely to judge the other string correctly. Therefore, the bottleneck that constrains word recognition under these conditions arises at a sub-lexical level, perhaps due to a limit on the efficiency of letter recognition.
Sara Milligan; Elizabeth R. Schotter Do readers here what they sea?: Effects of lexicality, predictability, and individual differences on the phonological preview benefit Journal Article In: Journal of Memory and Language, vol. 135, pp. 1–14, 2024. For decades, researchers have debated whether readers benefit from translating visual word forms into phonological codes. A focus of this debate has been on the earliest moments of processing when a word is perceived in parafoveal vision (i.e., phonological preview benefit). A recent meta-analysis (Vasilev et al., 2019) concluded that the phonological preview benefit may be small and unreliable, but did not take into account potentially important stimulus-level or participant-level factors that varied across the included studies. Therefore, we conducted two well-powered experiments that systematically investigated the effects of sentence constraint, preview lexicality, and participant language skills on the phonological preview benefit effect. We found phonological preview benefits that were larger in high versus low constraint sentences, larger for words than pseudowords, and larger for better spellers. We conclude that phonological codes do facilitate early word recognition during reading, but that the phonological preview benefit magnitude depends on subject- and stimulus-level factors.
Sarah C. Creel; Conor I. Frye Minimal gains for minimal pairs: Difficulty in learning similar-sounding words continues into preschool Journal Article In: Journal of Experimental Child Psychology, vol. 240, pp. 1–27, 2024. A critical indicator of spoken language knowledge is the ability to discern the finest possible distinctions that exist between words in a language—minimal pairs, for example, the distinction between the novel words beesh and peesh. Infants differentiate similar-sounding novel labels like “bih” and “dih” by 17 months of age or earlier in the context of word learning. Adult word learners readily distinguish similar-sounding words. What is unclear is the shape of learning between infancy and adulthood: Is there a nonlinear increase early in development, or is there protracted improvement as experience with spoken language amasses? Three experiments tested monolingual English-speaking children aged 3 to 6 years and young adults. Children underperformed when learning minimal-pair words compared with adults (Experiment 1), compared with learning dissimilar words even when speech materials were optimized for young children (Experiment 2), and when the number of word instances during learning was quadrupled (Experiment 3). Nonetheless, the youngest group readily recognized familiar minimal pairs (Experiment 3). Results are consistent with a lengthy trajectory for detailed sound pattern learning in one's native language(s), although other interpretations are possible. Suggestions for research on developmental trajectories across various age ranges are made.
Scott P. Ardoin; Katherine S. Binder; Paulina A. Kulesz; Eloise Nimocks; Joshua A. Mellott Examining the influence of passage and student characteristics on test-taking strategies: An eye-tracking study Journal Article In: Learning and Individual Differences, vol. 109, pp. 1–12, 2024. Understanding test-taking strategies (TTSs) and the variables that influence TTSs is crucial to understanding what reading comprehension tests measure. We examined how passage and student characteristics were associated with TTSs and their impact on response accuracy. Third (n = 78), fifth (n = 86), and eighth (n = 86) graders read and answered questions associated with six passages. Eye-movement records were used to code TTSs. Results indicated that TTS choice was related to passage and student characteristics. Passage characteristics that make comprehension more difficult resulted in more students choosing a TTS that did not involve reading passages in their entirety before answering questions. TTSs encompassing reading passages in their entirety before answering questions resulted in higher accuracy for 5th and 8th graders. Understanding TTS choices can aid our understanding of the processes measured by reading comprehension tests, which TTSs should be encouraged, and what contributes to tests producing different outcomes. Educational relevance statement: Schools spend considerable time and money collecting and interpreting the outcomes of reading comprehension tests. To truly understand what these test results mean, we must understand what students are doing when taking reading comprehension tests. Furthermore, we need to know to what extent certain tests and student characteristics might be associated with test-taking strategies that avoid reading passages for comprehension. Finally, teachers need to know whether certain test-taking strategies might positively or negatively impact response accuracy to know which strategies to teach and not to teach. The current study was designed to provide answers relevant to these important educational matters.
Amanda H. Seidl; Michelle Indarjit; Arielle Borovsky Touch to learn: Multisensory input supports word learning and processing Journal Article In: Developmental Science, vol. 27, no. 1, pp. 1–20, 2024. Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.
Adi Shechter; Sivan Medina; David L. Share; Amit Yashar In: Cortex, vol. 171, pp. 319–329, 2024. Peripheral letter recognition is fundamentally limited not by the visibility of letters but by the spacing between them, i.e., ‘crowding'. Crowding imposes a significant constraint on reading; however, the interplay between crowding and reading is not fully understood. Using a letter recognition task in varying display conditions, we investigated the effects of lexicality (words versus pseudowords), visual hemifield, and transitional letter probability (bigram/trigram frequency) among skilled readers (N = 14 and N = 13) in Hebrew – a script read from right to left. We observed two language-universal effects: a lexicality effect and a right hemifield (left hemisphere) advantage, as well as a strong language-specific effect – a left bigram advantage stemming from the right-to-left reading direction of Hebrew. The latter finding suggests that transitional probabilities are essential for parafoveal letter recognition. The results reveal that script-specific contextual information such as letter combination probabilities is used to accurately identify crowded letters.
Victor Kuperman; Sascha Schroeder; Daniil Gnetov Word length and frequency effects on text reading are highly similar in 12 alphabetic languages Journal Article In: Journal of Memory and Language, vol. 135, pp. 1–15, 2024. Reading research robustly finds that shorter and more frequent words are recognized faster and skipped more often than longer and less frequent words. An empirical question that has not been tested yet is whether languages within the same writing system would produce similarly strong length and frequency effects or whether typological differences between written languages would cause those effects to vary systematically in their magnitude. We analyzed text reading eye-movement data in 12 alphabetic languages from the Multilingual Eye-Movement Corpus (MECO). The languages varied substantially in their word length and frequency distributions as a function of their orthographic depth and morpho-syntactic type. Yet, the effects of word length and frequency on fixation durations and skipping rate were highly similar in size between the languages. This finding suggests a high degree of cross-linguistic universality in the readers' behavioral response to linguistic complexity (indexed by word length) and the amount of experience with the word (indexed by word frequency). These findings run counter to influential theories of single word recognition, which predict orthographic depth of a language to modulate the size of these benchmark effects. They also facilitate development of cross-linguistically generalizable computational models of eye-movement control in reading.
Marianna Kyriacou Not batting an eye: Figurative meanings of L2 idioms do not interfere with literal uses Journal Article In: Languages, vol. 9, no. 32, pp. 1–15, 2024. Encountering idioms (hit the sack = “go to bed”) in a second language (L2) often results in a literal-first understanding (“literally hit a sack”). The figurative meaning is retrieved later, subject to idiom familiarity and L2 proficiency, and typically at a processing cost. Intriguingly, recent findings report the overextension of idiom use in inappropriate contexts by advanced L2 users, with greater L2 proficiency somewhat mitigating this effect. In this study, we tested the tenability of this finding by comparing eye-movement patterns for idioms used literally vs. literal control phrases (hit the dirt) in an eye-tracking-while-reading paradigm. We hypothesised that if idiom overextension holds, processing delays should be observed for idioms, as the (over)activated but contextually irrelevant figurative meanings would cause interference. In contrast, unambiguous control phrases should be faster to process. The results demonstrated undifferentiated processing for idioms used literally and control phrases across measures, with L2 proficiency affecting both similarly. Therefore, the findings do not support the hypothesis that advanced L2 users overextend idiom use in inappropriate contexts, nor that L2 proficiency modulates this tendency. The results are also discussed in light of potential pitfalls pertaining to idiom priming under typical experimental settings.
Michela Redolfi; Chiara Melloni Processing adjectives in development: Evidence from eye-tracking Journal Article In: Journal of Child Language, pp. 1–24, 2024. Combining adjective meaning with the modified noun is particularly challenging for children under three years. Previous research suggests that in processing noun-adjective phrases children may over-rely on noun information, delaying or omitting adjective interpretation. However, the question of whether this difficulty is modulated by semantic differences among (subsective) adjectives is underinvestigated. A visual-world experiment explores how Italian-learning children (N=38, 2;4–5;3) process noun-adjective phrases and whether their processing strategies adapt based on the adjective class. Our investigation substantiates the proficient integration of noun and adjective semantics by children. Nevertheless, aligning with previous research, a notable asymmetry is evident in the interpretation of nouns and adjectives, the latter being integrated more slowly. Remarkably, by testing toddlers across a wide age range, we observe a developmental trajectory in processing, supporting a continuity approach to children's development. Moreover, we reveal that children exhibit sensitivity to the distinct interpretations associated with each subsective adjective.
Jukka Hyönä; Lei Cui; Timo T. Heikkilä; Birgitta Paranko; Yun Gao; Xingzhi Su Reading compound words in Finnish and Chinese: An eye-tracking study Journal Article In: Journal of Memory and Language, vol. 134, pp. 1–16, 2024. Two eye-tracking experiments in alphabetic Finnish and two in logographic Chinese examined the recognition of two-constituent compound words in reading. In Finnish, two-constituent compound words vary greatly in length, whereas in Chinese they are identical in length. According to the visual acuity principle (Bertram & Hyönä, 2003), short Finnish compound words and all two-character Chinese compound words that fit in foveal vision are recognized holistically, whereas long Finnish compound words are recognized via components. Experiment 1 in Finnish provided evidence consistent with the account, whereas the results for long compound words presented in condensed font in Experiment 2 were inconsistent with it. In Chinese, the first-character frequency effect was non-significant even when the compound words were presented in large font. The Finnish results suggest that componential processing is necessary when the compound word entails more than 10 letters. The Chinese results are compatible with the Chinese Reading Model (Li & Pollatsek, 2020) that assumes whole-word representations to overrule the activation of components during compound word recognition.
Aine Ito; Huong Thi Thu Nguyen; Pia Knoeferle German-dominant Vietnamese heritage speakers use semantic constraints of German for anticipation during comprehension in Vietnamese Journal Article In: Bilingualism: Language and Cognition, vol. 27, pp. 57–74, 2024. To test effects of German on anticipation in Vietnamese, we recorded eye-movements during comprehension and manipulated i) verb constraints (different vs. similar in German and Vietnamese) and ii) classifier constraints (absent in German). In each of two experiments, participants listened to Vietnamese sentences like "Mai mặc một chiếc áo." ('Mai wears a [classifier] shirt.'), while viewing four objects. Between experiments, we contrasted bilingual background: L1 Vietnamese-L2 German late bilinguals (Experiment 1) and heritage speakers of Vietnamese in Germany (Experiment 2). Both groups anticipated verb-compatible and classifier-compatible objects upon hearing the verb/classifier. However, when the (verb) constraints differed (e.g., Vietnamese: mặc 'wear (a shirt/#earrings)' - German: tragen 'wear (a shirt/earrings)'), the heritage speakers were distracted by the object (earrings) compatible with the German (but not the Vietnamese) verb constraints. These results demonstrate that competing information in the two languages can interfere with anticipation in heritage speakers.
Simon P. Liversedge; Henri Olkoniemi; Chuanli Zang; Xin Li; Guoli Yan; Xuejun Bai; Jukka Hyönä Universality in eye movements and reading: A replication with increased power Journal Article In: Cognition, vol. 242, pp. 1–19, 2024. @article{Liversedge2024, Liversedge, Drieghe, Li, Yan, Bai and Hyönä (2016) reported an eye movement study that investigated reading in Chinese, Finnish and English (languages with markedly different orthographic characteristics). Analyses of the eye movement records showed robust differences in fine grained characteristics of eye movements between languages, however, overall sentence reading times did not differ. Liversedge et al. interpreted the entire set of results across languages as reflecting universal aspects of processing in reading. However, the study has been criticized as being statistically underpowered (Brysbaert, 2019) given that only 19–21 subjects were tested in each language. Also, given current best practice, the original statistical analyses can be considered to be somewhat weak (e.g., no inclusion of random slopes and no formal comparison of performance between the three languages). Finally, the original study did not include any formal statistical model to assess effects across all three languages simultaneously. To address these (and some other) concerns, we tested at least 80 new subjects in each language and conducted formal statistical modeling of our data across all three languages. To do this, we included an index that captured variability in visual complexity in each language. Unlike the original findings, the new analyses showed shorter total sentence reading times for Chinese relative to Finnish and English readers. The other main findings reported in the original study were consistent. We suggest that the faster reading times for Chinese subjects occurred due to cultural changes that have taken place in the decade or so that lapsed between when the original and current subjects were tested. 
We maintain our view that the results can be taken to reflect universality in aspects of reading and we evaluate the claims regarding a lack of statistical power that were levelled against the original article. |
Siqi Lyu; Jung-Yueh Tu; Chien-Jer Charles Lin Structural position affects topic transition: An eye tracking study Journal Article In: Language and Linguistics, vol. 25, no. 1, pp. 56–79, 2024. @article{Lyu2024, In an eye-tracking study, we used Chinese double-subject construction [NPa NPb PREDICATE] (e.g., [nage jiezhi]NPa [sheji]NPb [hen tebie]PREDICATE ‘that ring design very special') in a concessive construction like suiran…dan… ‘although…but…' to investigate how the syntactic position of the topic NP (i.e., that ring) affects the comprehension of topic transition in the subsequent clause. We contrasted topics located at a higher pre-connective topic position (e.g., that ring although) and those located at a post-connective subject position (e.g., although that ring). Topic transition was manipulated as either using a subtopic (e.g., workmanship of that ring) or a new topic (e.g., the wedding dress) in the second clause of concession. We found a main effect of topic transition in a batch of eye-movement measures showing that subtopic transition was preferred over new-topic transition. More importantly, we found interactions on total reading time and total fixations at the topic-suiran region and on total fixations at the post-critical region, with post hoc tests revealing a larger cost of topic transition in the high-topic condition than in the low-topic condition. The results suggest that when a topic NP is located at a higher topic position (i.e., above the connective), it binds the topics of both clauses and induces greater cost when the topics do not form a consistent chain. When the topic NP is located at a local (i.e., post-connective) position, the processing of topic shift or resolution of topic conflict in the second clause is less costly because the second topic is not syntactically bound by the higher topic. Together, the results support a prominent status of the before-connective position in Chinese discourse. 
Furthermore, they indicate that syntactically induced topicality constrains the processing of topic transition in the subsequent discourse. |
2023 |
Chuanli Zang; Zhichao Zhang; Manman Zhang; Federica Degno; Simon P. Liversedge Examining semantic parafoveal-on-foveal effects using a Stroop boundary paradigm Journal Article In: Journal of Memory and Language, vol. 128, pp. 1–14, 2023. @article{Zang2023, The issue of whether lexical processing occurs serially or in parallel has been a central and contentious issue in respect of models of eye movement control in reading for well over a decade. A critical question in this regard concerns whether lexical parafoveal-on-foveal effects exist in reading. Because Chinese is an unspaced and densely packed language, readers may process parafoveal words to a greater extent than they do in spaced alphabetic languages. In two experiments using a novel Stroop boundary paradigm (Rayner, 1975), participants read sentences containing a single-character color-word whose preview was manipulated (identity or pseudocharacter, printed in black [no-color], or in a color congruent or incongruent with the character meaning). Two boundaries were used, one positioned two characters before the target and one immediately to the left of the target. The previews changed from black to color and then back to black as the eyes crossed the first and then the second boundary respectively. In Experiment 1 four color-words (red, green, yellow and blue) were used and in Experiment 2 only red and green color-words were used as targets. Both experiments showed very similar patterns such that reading times were increased for colored compared to no-color previews indicating a parafoveal visual interference effect. Most importantly, however, there were no robust interactive effects. Preview effects were comparable for congruent and incongruent color previews at the pretarget region when the data were combined from both experiments. 
These results favour serial processing accounts and indicate that even under very favourable experimental conditions, lexical semantic parafoveal-on-foveal effects are minimal. |
Yushu Wu; Chunyu Kit Hong Kong Corpus of Chinese Sentence and Passage Reading Journal Article In: Scientific Data, vol. 10, no. 1, pp. 1–13, 2023. @article{Wu2023e, Recent years have witnessed a mushrooming of reading corpora that have been built by means of eye tracking. This article showcases the Hong Kong Corpus of Chinese Sentence and Passage Reading (HKC for brevity), featuring natural reading of logographic scripts and unspaced words. It releases 28 eye-movement measures of 98 native speakers reading simplified Chinese in two scenarios: 300 one-line single sentences and 7 multiline passages of 5,250 and 4,967 word tokens, respectively. To verify its validity and reusability, we carried out (generalised) linear mixed-effects modelling on the capacity of visual complexity, word frequency, and reading scenario to predict eye-movement measures. The outcomes manifest significant impacts of these typical (sub)lexical factors on eye movements, replicating previous findings and yielding novel ones. The HKC provides a valuable resource for exploring eye movement control; the study contrasts the different scenarios of single-sentence and passage reading in hopes of shedding new light on both the universal nature of reading and the unique characteristics of Chinese reading. |
Xinyi Xia; Yanping Liu; Lili Yu; Erik D. Reichle Are there preferred viewing locations in Chinese reading? Evidence from eye-tracking and computer simulations Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 4, pp. 607–625, 2023. @article{Xia2023b, The Chinese writing system is different from English in that individual words both comprise one to four characters and are not separated by clear word boundaries (e.g., interword spaces). These differences raise the question of how readers of Chinese know where to move their eyes to support efficient lexical processing? The widely accepted default-targeting hypothesis suggests that Chinese readers direct their eyes to a small number of preferred-viewing locations (PVLs), such as the beginning or middle of upcoming words. In this article, we report two eye-movement experiments testing this hypothesis. In both experiments, participants read sentences comprising entirely two-character words, but either without (Experiment 1) or with (Experiment 2) explicit knowledge of this structure prior to their participation. The results of both experiments indicate the absence of PVLs. Simulations using implemented versions of a simple oculomotor-based hypothesis, two variants of the default-targeting hypothesis, and the hypothesis that saccade lengths are modulated as a function of estimated parafoveal-processing difficulty (i.e., dynamic-adjustment hypothesis) suggest that the latter provides the best account of saccadic targeting during Chinese reading. These results are discussed in relation to broader issues of eye-movement control during reading and how models of such must be modified to provide more accurate accounts of the reading of Chinese and other languages. |
Xue-Zhen Xiao; Gaoding Jia; Aiping Wang Semantic preview benefit of Tibetan-Chinese bilinguals during Chinese reading Journal Article In: Language Learning and Development, vol. 19, no. 1, pp. 1–15, 2023. @article{Xiao2023a, When reading Chinese, skilled native readers regularly gain a preview benefit (PB) when the parafoveal word is orthographically or semantically related to the target word. Evidence shows that non-native, beginning Chinese readers can obtain an orthographic PB during Chinese reading, which indicates the parafoveal processing of low-level visual information. However, whether non-native Chinese readers who are more proficient in Chinese can make use of high-level parafoveal information remains unknown. Therefore, this study examined parafoveal processing during Chinese reading among Tibetan-Chinese bilinguals with high Chinese proficiency and compared their PB effects with those from native Chinese readers. Tibetan-Chinese bilinguals demonstrated both orthographic and semantic PB but did not show phonological PB, and only differed from native Chinese readers in the identical PB, when preview characters were identical to the targets. These findings demonstrate that non-native Chinese readers can extract semantic information from parafoveal preview during Chinese reading and highlight the modulation of parafoveal processing efficiency by reading proficiency. The results are in line with the direct route to access the mental lexicon of visual Chinese characters among non-native Chinese speakers. |
Jianping Xiong; Lili Yu; Aaron Veldre; Erik D. Reichle; Sally Andrews A multitask comparison of word- and character-frequency effects in Chinese reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 5, pp. 793–811, 2023. @article{Xiong2023, In this study, we examined the effects of word and character frequency across three commonly used word-identification tasks (lexical decision, naming, and sentence reading) using the same set of two-character target words (N = 60) and participants (N = 82). Facilitatory effects of word frequency were observed across all three tasks. The character-frequency effects, however, were facilitatory for naming but inhibitory for both lexical decision and reading. Further correlational analyses indicated that participants' performance (as measured using overall response latencies and the sizes of the frequency effects) was not consistent across tasks but was relatively reliable within the lexical-decision and reading tasks. These findings are discussed in relation to what is known about the reading of Chinese versus alphabetic scripts, word-identification tasks, and models of word identification. |
Kunyu Xu; Yu-Min Ku; Chenlu Ma; Chien-Hui Lin; Wan-Chen Chang Development of comprehension monitoring skill in Chinese children: Evidence from eye movement and probe interviews Journal Article In: Metacognition and Learning, pp. 1–19, 2023. @article{Xu2023, As an important construct in the cognitive process, comprehension monitoring has received much scholarly attention. Researchers have recognized comprehension monitoring as an ability closely linked with children's reading comprehension ability and working memory capacity. Evidence is also abundant to prove that comprehension monitoring skill develops with age. It remains unclear, however, how these factors interact during reading, particularly in low-grade children. Many previous empirical studies have only employed online or offline measurements to examine children's monitoring performance, which might lead to unsound conclusions. In this study, we utilized both online eye-tracking measures and offline probe interviews to quantify the developmental features (i.e., evaluation and regulation) of comprehension monitoring skills among Chinese beginning readers. The results indicated that children's comprehension monitoring performance, as quantified by eye-tracking measures, was positively related to their reading comprehension ability and working memory capacity. Moreover, the first-graders lacked online regulation skills during the error-detecting tasks, while second-graders had relatively developed online monitoring performance. Additionally, the eye-tracking measures were found to predict children's performances in probe interviews, as the readers with high comprehension ability and working memory capacity successfully reported more errors embedded in the self-designed reading materials. 
Therefore, the findings support the claim that children's comprehension monitoring is a developing skill associated with reading comprehension and working memory capacity and further question the existence of comprehension monitoring skills in beginning readers, especially first-graders. |
Licheng Xue; Ying Xiao; Tianying Qing; Urs Maurer; Wei Wang; Huidong Xue; Xuchu Weng; Jing Zhao Attention to the fine-grained aspect of words in the environment emerges in preschool children with high reading ability Journal Article In: Visual Cognition, vol. 31, no. 1, pp. 85–96, 2023. @article{Xue2023, Attention to words is closely related to the process of learning to read. However, it remains unclear how attention to words in environmental print (such as words on product labels) changes with the growth of preschool children's reading ability. We thus used an eye-tracking technique to compare attention to words in environmental print in children at low (32, 15 males, 5.12 years) and high (32, 17 males, 5.16 years) reading levels during a free viewing task. To characterize which aspects of visual word form children attend to, we constructed three types of stimuli embedded in the same context: words in environmental print, symbol strings (similar shape to words but without strokes), and character strings (comparable with words in the number of strokes and the structures). We observed that children at both reading levels showed lower percentages of fixations and fixation time in words relative to symbol strings, suggesting they start to attend to the coarse aspect of visual word form. Interestingly, only children at the higher reading level showed lower percentages of fixations and fixation time for words relative to character strings, suggesting that attention to the fine-grained aspect of visual word form emerged and was closely related to reading ability. |
Shuwei Xue; Jana Lüdtke; Arthur M. Jacobs Once known, twice hedonic: Enjoying Shakespeare's sonnets through rereading-a deep learning perspective Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–14, 2023. @article{Xue2023a, Reading poetry is a popular hobby, but what does it involve in terms of the mind? Through quantitative narrative analysis, we computed seven surface features and two affective–semantic features of Shakespeare's sonnets and then added them to predict readers' eye movements during reading. Using the neural nets model, we found that the gaze duration, the regression time, the total reading time, and the fixation probability all depended mainly on surface features, no matter how often a poem was read. We also found that word-based valence and arousal were important as well and became more important in the course of repeated readings. In the last reading, valence became as important as the main surface features. Findings imply that the first impression of a poem is due mainly to surface features but then becomes enriched by meaning and mood. |
Ming Yan; Yingyi Luo; Jinger Pan Monolingual and bilingual phonological activation in Cantonese Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 4, pp. 751–761, 2023. @article{Yan2023, Previous research has provided evidence for cross-language phonological activation during visual word recognition. However, such findings mainly came from alphabetic languages, and readers' familiarity with the two scripts might differ. The present study aimed to test whether such cross-language phonological activation can be observed in Chinese, a logographic script, without the confounding factor of script familiarity as readers read the same script in different languages. Cantonese–Mandarin bilinguals were tested in an eye-tracking experiment in which they were instructed to read sentences silently. A target word in the sentence was replaced by either a homophone in both Cantonese and Mandarin, a homophone in Cantonese or in Mandarin only, or an unrelated character. The results showed that native Cantonese readers could activate phonological representations of L1 and L2 while reading Chinese sentences silently. However, the degree to which they relied on phonological decoding in L1 and L2 varied in the two languages. |
Ming Yan; Jinger Pan Joint effects of individual reading skills and word properties on Chinese children's eye movements during sentence reading Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–10, 2023. @article{Yan2023a, Word recognition during the reading of continuous text has received much attention. While a large body of research has investigated how linguistic properties of words affect eye movements during reading, it remains to be established how individual differences in reading skills affect momentary cognitive processes during sentence reading among typically developing Chinese readers. The present study set out to test the joint influences of word properties and individual reading skills on eye movements during reading among Chinese children. We recorded eye movements of 30 grade 3 (G3) children and 27 grade 5 (G5) children when they read sentences silently for comprehension. Predictors of linear mixed models included word frequency, visual complexity, and launch site distance, in addition to the participants' offline psychometric performances in rapid naming, morphological awareness, word segmenting, and character recognition. The results showed that word properties affected word recognition during sentence reading in both G3 and G5 children. Moreover, word segmenting predicted the G3 children's fixation durations and the G5 children's fixation location, whereas rapid naming predicted the G5 children's fixation duration. Implications are discussed based on the current findings, in light of how different literacy skills contribute to reading development. |
Panpan Yao; David Hall; Hagit Borer; Linnaea Stockall Dutch–Mandarin learners' online use of syntactic cues to anticipate mass vs. count interpretations Journal Article In: Second Language Research, pp. 1–38, 2023. @article{Yao2023b, It remains unclear whether late second language learners (L2ers) can acquire sufficient knowledge about unique-to-L2 constructions through implicit learning to build anticipations during real-time processing. To tackle this question, we conducted a visual world paradigm experiment to investigate high-proficiency late first-language Dutch second-language Mandarin Chinese learners' online processing of syntactic cues to count vs. mass interpretations in Chinese which are unique-to-L2 and never explicitly taught. The results showed that late Dutch–Mandarin learners were sensitive to a mass-biased syntactic cue in real-time processing, and exhibited some native-like anticipatory behaviour. These findings indicate that late L2ers can acquire unique-to-L2 constructions through implicit learning, and can automatically use this knowledge to make predictions. |
Panpan Yao; Xin Jiang; Xinwei Chen; Xingshan Li Explore the processing unit of L2 Chinese learners in on-line Chinese reading Journal Article In: Second Language Research, pp. 1–17, 2023. @article{Yao2023c, The present study explored the processing units of high-proficiency second language (L2) Chinese learners in on-line reading in an eye-tracking experiment. The critical aim was to investigate how learners segment continuous characters into words without the aid of word boundary demarcations. Based on previous studies, the embedded words of 2- and 3-character incremental words were manipulated to be either plausible or implausible with the preceding verbs, while the incremental words themselves were always plausible. The results revealed an effect of the plausibility manipulation, which suggested that L2 Chinese learners activated embedded words first and integrated embedded words with previous sentence context as soon as they read them. |
Yao Yao; Katrina Connell; Stephen Politzer-Ahles Hearing emotion in two languages: A pupillometry study of Cantonese–Mandarin bilinguals' perception of affective cognates in L1 and L2 Journal Article In: Bilingualism: Language and Cognition, pp. 1–14, 2023. @article{Yao2023, Differential affective processing has been widely documented for bilinguals: L1 affective words elicit higher levels of arousal and stronger emotionality ratings than L2 affective words (Pavlenko, 2012). In this study, we focus on two closely related Chinese languages, Mandarin and Cantonese, whose affective lexicons are highly overlapping, with shared lexical items that only differ in pronunciation across languages. We recorded L1 Cantonese–L2 Mandarin bilinguals' pupil responses to auditory tokens of Cantonese and Mandarin affective words. Our results showed that Cantonese–Mandarin bilinguals had stronger pupil responses when the affective words were pronounced in Cantonese (L1) than when the same words were pronounced in Mandarin (L2). The effect was most evident in taboo words and among bilinguals with lower L2 proficiency. We discuss the theoretical implications of the findings in the frameworks of exemplar theory and models of the bilingual lexicon. |
Michael C. W. Yip Tracking the time-course of spoken word recognition of Cantonese Chinese in sentence context: Evidence from eye movements Journal Article In: Psychonomic Bulletin & Review, pp. 1–11, 2023. @article{Yip2023, In this study, we conducted an eye-tracking experiment to investigate the effects of sentence context and tonal information on spoken word recognition processes in Cantonese Chinese. We recruited 60 native Cantonese listeners to participate in the eye-tracking experiment. The target words (phonologically similar words) were embedded in either (1) a congruent context or (2) an incongruent context in the experiment. The resulting eye-movement patterns in the incongruent context condition clearly revealed that (1) sentence context produced a garden-path effect in the initial stage of the spoken word recognition processes and then (2) the lexical tone of the word (bottom-up information) overrode the contextual effects to help listeners to discriminate between different similar-sounding words during lexical access. In conclusion, the patterns of eye-tracking data show that lexical tone (an acoustic cue within a Cantonese word) and sentence context interact during different phases of spoken word recognition in Cantonese Chinese. |
Haojue Yu; Miyoung Kwon Central and peripheral visual field examination Journal Article In: Investigative Ophthalmology & Visual Science, vol. 64, no. 13, pp. 1–14, 2023. @article{Yu2023, PURPOSE. Although foveal vision provides fine spatial information, parafoveal and peripheral vision are also known to be important for efficient reading behaviors. Here we systematically investigate how different types and sizes of visual field defects affect the way visual information is acquired via eye movements during reading. METHODS. Using gaze-contingent displays, simulated scotomas were induced in 24 adults with normal or corrected-to-normal vision during a reading task. The study design included peripheral and central scotomas of varying sizes (aperture or scotoma size of 2°, 4°, 6°, 8°, and 10°) and no-scotoma conditions. Eye movements (e.g., forward/backward saccades, fixations, microsaccades) were plotted as a function of either the aperture or scotoma size, and their relationships were characterized by the best fitting model. RESULTS. When the aperture size of the peripheral scotoma decreased below 6° (11 visible letters), there were significant decreases in saccade amplitude and velocity, as well as substantial increases in fixation duration and the number of fixations. Its dependency on the aperture size is best characterized by an exponential decay or growth function in log-linear coordinates. However, saccade amplitude and velocity, fixation duration, and forward/regressive saccades increased more or less linearly with increasing central scotoma size in log-linear coordinates. CONCLUSIONS. Our results showed differential impacts of central and peripheral vision loss on reading behaviors while lending further support for the importance of foveal and parafoveal vision in reading. These apparently deviated oculomotor behaviors may in part reflect optimal reading strategies to compensate for the loss of visual information. |
Tania S. Zamuner; Theresa Rabideau; Margarethe McDonald; H. Henny Yeung Developmental change in children's speech processing of auditory and visual cues: An eyetracking study Journal Article In: Journal of Child Language, vol. 50, pp. 27–51, 2023. @article{Zamuner2023, This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between apparent successes of visual speech processing in young children in visual-looking tasks, with apparent difficulties of speech processing in older children from explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood. |
Chuanli Zang; Ying Fu; Hong Du; Xuejun Bai; Guoli Yan; Simon P. Liversedge Processing multiconstituent units: Preview effects during reading of Chinese words, idioms, and phrases Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–21, 2023. @article{Zang2023a, Arguably, the most contentious debate in the field of eye movement control in reading has centered on whether words are lexically processed serially or in parallel during reading. Chinese is character-based and unspaced, meaning the issue of how lexical processing is operationalized across potentially ambiguous, multicharacter strings is not straightforward. We investigated Chinese readers' processing of frequently occurring multiconstituent units (MCUs), that is, linguistic units composed of more than a single word, that might be represented lexically as a single representation. In Experiment 1, we manipulated the linguistic category of a two-constituent Chinese string (word, MCU, or phrase) and the preview of its second constituent (identical or pseudocharacter) using the boundary paradigm with the boundary located before the two-constituent string. A robust preview effect was obtained when the second constituent, alongside the first, formed a word or MCU, but not a phrase, suggesting that frequently occurring MCUs are lexicalized and processed parafoveally as single units during reading. In Experiment 2, we further manipulated the phrase type of a two-constituent but three-character Chinese string (idiom with a one-character modifier and a two-character noun, or matched phrase) and the preview of the second constituent noun (identity or pseudocharacter). A greater preview effect was obtained for idioms than phrases, indicating that idioms are processed to a greater extent in the parafovea than matched phrases. 
Together, the results of these two experiments suggest that lexical identification processes in Chinese can be operationalized over linguistic units that are larger than an individual word. |
Andrea M. Zawoyski; Scott P. Ardoin; Katherine S. Binder The impact of test-taking strategies on eye movements of elementary students during reading comprehension assessment Journal Article In: School Psychology, vol. 38, no. 1, pp. 59–66, 2023. @article{Zawoyski2023, Teachers often encourage students to use test-taking strategies during reading comprehension assessments, but these strategies are not always evidence-based. One common strategy involves teaching students to read the questions before reading an associated passage. Research findings comparing the passage-first (PF) and questions-first (QF) strategies are mixed. The present study employed eye-tracking technology to record 84 third- and fourth-grade participants' eye movements (EMs) as they read a passage and responded to multiple-choice (MC) questions using PF and QF strategies in a within-subject design. Although there were no significant differences between groups in accuracy on MC questions, EM measures revealed that the PF condition was superior to the QF condition for elementary readers in terms of efficiency in reading and responding to questions. These findings suggest that the PF strategy supports a more comprehensive understanding of the text. Ultimately, within the PF condition, students required less time to obtain the same accuracy outcomes they attained when reading in the QF condition. School psychologists can improve reading comprehension instruction by emphasizing the importance of teaching children to gain meaning from the text rather than search the passage for answers to MC questions. |
Nina Zdorova; Svetlana Malyutina; Anna Laurinavichyute; Anastasiia Kaprielova; Anastasia Ziubanova; Anastasiya Lopukhina Do we rely on good-enough processing in reading under auditory and visual noise? Journal Article In: PLoS ONE, vol. 18, pp. 1–19, 2023. @article{Zdorova2023, Noise, as part of real-life communication flow, degrades the quality of linguistic input and affects language processing. According to predictions of the noisy-channel and good-enough processing models, noise should make comprehenders rely more on word-level semantics instead of actual syntactic relations. However, empirical evidence supporting this prediction is still lacking. For the first time, we investigated whether auditory (three-talker babble) and visual (short idioms appearing next to a target sentence on the screen) noise would trigger greater reliance on semantics and make readers of Russian sentences process the sentences superficially. Our findings suggest that, although Russian speakers generally relied on semantics in sentence comprehension, neither auditory nor visual noise increased this reliance. The only effect of noise on semantic processing was found in reading speed under auditory noise measured by first fixation duration: only without noise, the semantically implausible sentences were read slower than semantically plausible ones. These results do not support the predictions of the study based on the noisy-channel and good-enough processing models, which is discussed in light of the methodological differences among the studies of noise and their possible limitations. |
Nina Zdorova; Olga Parshina; Bela Ogly; Irina Bagirokova; Ekaterina Krasikova; Anastasiia Ziubanova; Shamset Unarokova; Susanna Makerova; Olga Dragoy Eye movement corpora in Adyghe and Russian: An eye-tracking study of sentence reading in bilinguals Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–12, 2023. @article{Zdorova2023a, The present study expands the eye-tracking-while-reading research toward less studied languages of different typological classes (polysynthetic Adyghe vs. synthetic Russian) that use a Cyrillic script. In the corpus reading data from the two languages, we confirmed the widely studied effects of word frequency and word length on eye movements in Adyghe-Russian bilingual individuals for both languages. We also confirmed morphological effects in Adyghe reading (part-of-speech class and the number of lexical affixes) that were previously shown in some morphologically-rich languages. Importantly, we demonstrated that bilinguals' reading in Adyghe does differ quantitatively (the effect of language on reading times) and qualitatively (different effects of landing and previous/upcoming words on the eye movements within a current word) from their reading in Russian. |
Biao Zeng; Guoxing Yu; Nabil Hasshim; Shanhu Hong Primacy of mouth over eyes to perceive audiovisual Mandarin lexical tones Journal Article In: Journal of Eye Movement Research, vol. 16, no. 4, pp. 1–12, 2023. @article{Zeng2023, The visual cues of lexical tones are more implicit and much less investigated than consonants and vowels, and it is still unclear what facial areas contribute to facial tone identification. This study investigated Chinese and English speakers' eye movements when they were asked to identify audiovisual Mandarin lexical tones. The Chinese and English speakers were presented with an audiovisual clip of Mandarin monosyllables (for instance, /ă/, /à/, /ĭ/, /ì/) and were asked to identify whether the syllables were a dipping tone (/ă/, /ĭ/) or a falling tone (/à/, /ì/). These audiovisual syllables were presented in clear, noisy and silent (absence of audio signal) conditions. An eye-tracker recorded the participants' eye movements. Results showed that the participants gazed more at the mouth than the eyes. In addition, when acoustic conditions became adverse, both the Chinese and English speakers increased their gaze duration at the mouth rather than at the eyes. The findings suggested that the mouth is the primary area that listeners utilise in their perception of audiovisual lexical tones. The similar eye movements between the Chinese and English speakers imply that the mouth acts as a perceptual cue that provides articulatory information, as opposed to social and pragmatic information. |
Martin Zettersten; Daniel Yurovsky; Tian Linger Xu; Sarp Uner; Angeline Sin Mei Tsui; Rose M. Schneider; Annissa N. Saleh; Stephan C. Meylan; Virginia A. Marchman; Jessica Mankewitz; Kyle MacDonald; Bria Long; Molly Lewis; George Kachergis; Kunal Handa; Benjamin DeMayo; Alexandra Carstensen; Mika Braginsky; Veronica Boyce; Naiti S. Bhatt; Claire Augusta Bergey; Michael C. Frank Peekbank: An open, large-scale repository for developmental eye-tracking data of children's word recognition Journal Article In: Behavior Research Methods, vol. 55, no. 5, pp. 2485–2500, 2023. @article{Zettersten2023, The ability to rapidly recognize words and link them to referents is central to children's early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants' fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We first present how we created the database, its features and structure, and associated tools for processing and accessing infant eye-tracking datasets. Using these tools, we then work through two illustrative examples to show how researchers can use Peekbank to interrogate theoretical and methodological questions about children's developing word recognition ability. |
Likan Zhan; Peng Zhou The online processing of hypothetical events Journal Article In: Experimental Psychology, vol. 70, no. 2, pp. 108–117, 2023. @article{Zhan2023, Abstract. A conditional statement If P then Q is formed by combining the two propositions P and Q together with the conditional connective If ··· then ···. When embedded under the conditional connective, the two propositions P and Q describe hypothetical events that are not actualized. It remains unclear when such hypothetical thinking is activated in the real-time comprehension of conditional statements. To tackle this problem, we conducted an eye-tracking experiment using the visual world paradigm. Participants' eye movements on the concurrent image were recorded when they were listening to the auditorily presented conditional statements. Depending on when and what critical information is added into the auditory input, there are four possible temporal slots to observe in the online processing of the conditional statement: the sentential connective If, the antecedent P, the consequent Q, and the processing of the sentence following the conditional. We mainly focused on the first three slots. First, the occurrence of the conditional connective should trigger participants to search in the visual world for the event that could not assign a truth-value to the embedded proposition. Second, if the embedded proposition P can be determined as true by an event, the hypothetical property implied by the connective would prevent the participants from excluding the consideration of other events. The consideration of other events would yield more fixations on the events where the proposition is false. |
Justin M. Fine; David J. N. Maisson; Seng Bum Michael Yoo; Tyler V. Cash-Padgett; Maya Zhe Wang; Jan Zimmermann; Benjamin Y. Hayden Abstract value encoding in neural populations but not single neurons Journal Article In: Journal of Neuroscience, vol. 43, no. 25, pp. 4650–4663, 2023. @article{Fine2023, An important open question in neuroeconomics is how the brain represents the value of offers in a way that is both abstract (allowing for comparison) and concrete (preserving the details of the factors that influence value). Here, we examine neuronal responses to risky and safe options in five brain regions that putatively encode value in male macaques. Surprisingly, we find no detectable overlap in the neural codes used for risky and safe options, even when the options have identical subjective values (as revealed by preference) in any of the regions. Indeed, responses are weakly correlated and occupy distinct (semi-orthogonal) encoding subspaces. Notably, however, these subspaces are linked through a linear transform of their constituent encodings, a property that allows for comparison of dissimilar option types. This encoding scheme allows these regions to multiplex decision related processes: they can encode the detailed factors that influence offer value (here, risk and safety) but also directly compare dissimilar offer types. Together these results suggest a neuronal basis for the qualitatively different psychological properties of risky and safe options and highlight the power of population geometry to resolve outstanding problems in neural coding. |
Alessio Fracasso; Antimo Buonocore; Ziad M. Hafed Peri-saccadic orientation identification performance and visual neural sensitivity are higher in the upper visual field Journal Article In: Journal of Neuroscience, vol. 43, no. 41, pp. 6884–6897, 2023. @article{Fracasso2023, Visual neural processing is distributed among a multitude of sensory and sensory-motor brain areas exhibiting varying degrees of functional specializations and spatial representational anisotropies. Such diversity raises the question of how perceptual performance is determined, at any one moment in time, during natural active visual behavior. Here, exploiting a known dichotomy between the primary visual cortex and superior colliculus in representing either the upper or lower visual fields, we asked whether peri-saccadic orientation identification performance is dominated by one or the other spatial anisotropy. Humans (48 participants, 29 females) reported the orientation of peri-saccadic upper visual field stimuli significantly better than lower visual field stimuli, unlike their performance during steady-state gaze fixation, and contrary to expected perceptual superiority in the lower visual field in the absence of saccades. Consistent with this, peri-saccadic superior colliculus visual neural responses in two male rhesus macaque monkeys were also significantly stronger in the upper visual field than in the lower visual field. Thus, peri-saccadic orientation identification performance is more in line with oculomotor, rather than visual, map spatial anisotropies. |
Whitney S. Griggs; Sumner L. Norman; Thomas Deffieux; Florian Segura; Bruno Félix Osmanski; Geeling Chau; Vasileios Christopoulos; Charles Liu; Mickael Tanter; Mikhail G. Shapiro; Richard A. Andersen Decoding motor plans using a closed-loop ultrasonic brain–machine interface Journal Article In: Nature Neuroscience, vol. 27, pp. 1–23, 2023. @article{Griggs2023, Brain–machine interfaces (BMIs) enable people living with chronic paralysis to control computers, robots and more with nothing but thought. Existing BMIs have trade-offs across invasiveness, performance, spatial coverage and spatiotemporal resolution. Functional ultrasound (fUS) neuroimaging is an emerging technology that balances these attributes and may complement existing BMI recording technologies. In this study, we use fUS to demonstrate a successful implementation of a closed-loop ultrasonic BMI. We streamed fUS data from the posterior parietal cortex of two rhesus macaque monkeys while they performed eye and hand movements. After training, the monkeys controlled up to eight movement directions using the BMI. We also developed a method for pretraining the BMI using data from previous sessions. This enabled immediate control on subsequent days, even those that occurred months apart, without requiring extensive recalibration. These findings establish the feasibility of ultrasonic BMIs, paving the way for a new class of less-invasive (epidural) interfaces that generalize across extended time periods and promise to restore function to people with neurological impairments. |
Beatriz Herrera; Amirsaman Sajad; Steven P. Errington; Jeffrey D. Schall; Jorge J. Riera Cortical origin of theta error signals Journal Article In: Cerebral Cortex, vol. 33, no. 23, pp. 11300–11319, 2023. @article{Herrera2023, A multi-scale approach elucidated the origin of the error-related-negativity (ERN), with its associated theta-rhythm, and the post-error-positivity (Pe) in macaque supplementary eye field (SEF). Using biophysical modeling, synaptic inputs to a subpopulation of layer-3 (L3) and layer-5 (L5) pyramidal cells (PCs) were optimized to reproduce error-related spiking modulation and inter-spike intervals. The intrinsic dynamics of dendrites in L5 but not L3 error PCs generate theta rhythmicity with random phases. Saccades synchronized the phases of the theta-rhythm, which was magnified on errors. Contributions from error PCs to the laminar current source density (CSD) observed in SEF were negligible and could not explain the observed association between error-related spiking modulation in L3 PCs and scalp-EEG. CSD from recorded laminar field potentials in SEF was comprised of multipolar components, with monopoles indicating strong electro-diffusion, dendritic/axonal electrotonic current leakage outside SEF, or violations of the model assumptions. Our results also demonstrate the involvement of secondary cortical regions, in addition to SEF, particularly for the later Pe component. The dipolar component from the observed CSD paralleled the ERN dynamics, while the quadrupolar component paralleled the Pe. These results provide the most advanced explanation to date of the cellular mechanisms generating the ERN. |
Patrick Jendritza; Frederike J. Klein; Pascal Fries Multi-area recordings and optogenetics in the awake, behaving marmoset Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–16, 2023. @article{Jendritza2023, The common marmoset has emerged as a key model in neuroscience. Marmosets are small in size, show great potential for genetic modification and exhibit complex behaviors. Thus, it is necessary to develop technology that enables monitoring and manipulation of the underlying neural circuits. Here, we describe a novel approach to record and optogenetically manipulate neural activity in awake, behaving marmosets. Our design utilizes a light-weight, 3D printed titanium chamber that can house several high-density silicon probes for semi-chronic recordings, while enabling simultaneous optogenetic stimulation. We demonstrate the application of our method in male marmosets by recording multi- and single-unit data from areas V1 and V6 with 192 channels simultaneously, and show that optogenetic activation of excitatory neurons in area V6 can influence behavior in a detection task. This method may enable future studies to investigate the neural basis of perception and behavior in the marmoset. |
Leor N. Katz; Gongchen Yu; James P. Herman; Richard J. Krauzlis Correlated variability in primate superior colliculus depends on functional class Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–13, 2023. @article{Katz2023, Correlated variability in neuronal activity (spike count correlations, rSC) can constrain how information is read out from populations of neurons. Traditionally, rSC is reported as a single value summarizing a brain area. However, single values, like summary statistics, stand to obscure underlying features of the constituent elements. We predict that in brain areas containing distinct neuronal subpopulations, different subpopulations will exhibit distinct levels of rSC that are not captured by the population rSC. We tested this idea in macaque superior colliculus (SC), a structure containing several functional classes (i.e., subpopulations) of neurons. We found that during saccade tasks, different functional classes exhibited differing degrees of rSC. “Delay class” neurons displayed the highest rSC, especially during saccades that relied on working memory. Such dependence of rSC on functional class and cognitive demand underscores the importance of taking functional subpopulations into account when attempting to model or infer population coding principles. |
Kenji W. Koyano; Elena M. Esch; Julie J. Hong; Elena N. Waidmann; Haitao Wu; David A. Leopold Progressive neuronal plasticity in primate visual cortex during stimulus familiarization Journal Article In: Science Advances, vol. 9, no. 12, pp. 1–12, 2023. @article{Koyano2023, The primate brain is equipped to learn and remember newly encountered visual stimuli such as faces and objects. In the macaque inferior temporal (IT) cortex, neurons mark the familiarity of a visual stimulus through response modification, often involving a decrease in spiking rate. Here, we investigate the emergence of this neural plasticity by longitudinally tracking IT neurons during several weeks of familiarization with face images. We found that most neurons in the anterior medial (AM) face patch exhibited a gradual decline in their late-phase visual responses to multiple stimuli. Individual neurons varied from days to weeks in their rates of plasticity, with time constants determined by the number of days of exposure rather than the cumulative number of presentations. We postulate that the sequential recruitment of neurons with experience-modified responses may provide an internal and graded measure of familiarity strength, which is a key mnemonic component of visual recognition. |
Nadira Yusif Rodriguez; Theresa H. McKim; Debaleena Basu; Aarit Ahuja; Theresa M. Desrochers Monkey dorsolateral prefrontal cortex represents abstract visual sequences during a no-report task Journal Article In: Journal of Neuroscience, vol. 43, no. 15, pp. 2741–2755, 2023. @article{Rodriguez2023, Monitoring sequential information is an essential component of our daily lives. Many of these sequences are abstract, in that they do not depend on the individual stimuli, but do depend on an ordered set of rules (e.g., chop then stir when cooking). Despite the ubiquity and utility of abstract sequential monitoring, little is known about its neural mechanisms. Human rostrolateral prefrontal cortex (RLPFC) exhibits specific increases in neural activity (i.e., “ramping”) during abstract sequences. Monkey dorsolateral prefrontal cortex (DLPFC) has been shown to represent sequential information in motor (not abstract) sequence tasks, and contains a subregion, area 46, with homologous functional connectivity to human RLPFC. To test the prediction that area 46 may represent abstract sequence information, and do so with parallel dynamics to those found in humans, we conducted functional magnetic resonance imaging (fMRI) in three male monkeys. When monkeys performed no-report abstract sequence viewing, we found that left and right area 46 responded to abstract sequential changes. Interestingly, responses to rule and number changes overlapped in right area 46 and left area 46 exhibited responses to abstract sequence rules with changes in ramping activation, similar to that observed in humans. Together, these results indicate that monkey DLPFC monitors abstract visual sequential information, potentially with a preference for different dynamics in the two hemispheres. More generally, these results show that abstract sequences are represented in functionally homologous regions across monkeys and humans. |
Tevin C. Rouse; Amy M. Ni; Chengcheng Huang; Marlene R. Cohen Topological insights into the neural basis of flexible behavior Journal Article In: Proceedings of the National Academy of Sciences of the United States of America, vol. 120, no. 24, pp. 1–11, 2023. @article{Rouse2023, It is widely accepted that there is an inextricable link between neural computations, biological mechanisms, and behavior, but it is challenging to simultaneously relate all three. Here, we show that topological data analysis (TDA) provides an important bridge between these approaches to studying how brains mediate behavior. We demonstrate that cognitive processes change the topological description of the shared activity of populations of visual neurons. These topological changes constrain and distinguish between competing mechanistic models, are connected to subjects' performance on a visual change detection task, and, via a link with network control theory, reveal a tradeoff between improving sensitivity to subtle visual stimulus changes and increasing the chance that the subject will stray off task. These connections provide a blueprint for using TDA to uncover the biological and computational mechanisms by which cognition affects behavior in health and disease. |
Alireza Rouzitalab; Chadwick B. Boulay; Jeongwon Park; Julio C. Martinez-Trujillo; Adam J. Sachs Ensembles code for associative learning in the primate lateral prefrontal cortex Journal Article In: Cell Reports, vol. 42, no. 5, pp. 1–16, 2023. @article{Rouzitalab2023, The lateral prefrontal cortex (LPFC) of primates is thought to play a role in associative learning. However, it remains unclear how LPFC neuronal ensembles dynamically encode and store memories for arbitrary stimulus-response associations. We recorded the activity of neurons in LPFC of two macaques during an associative learning task using multielectrode arrays. During task trials, the color of a symbolic cue indicated the location of one of two possible targets for a saccade. During a trial block, multiple randomly chosen associations were learned by the subjects. A state-space analysis indicated that LPFC neuronal ensembles rapidly learn new stimulus-response associations mirroring the animals' learning. Multiple associations acquired during training are stored in a neuronal subspace and can be retrieved hours after learning. Finally, knowledge of old associations facilitates learning new, similar associations. These results indicate that neuronal ensembles in the primate LPFC provide a flexible and dynamic substrate for associative learning. |
Brian E. Russ; Kenji W. Koyano; Julian Day-Cooney; Neda Perwez; David A. Leopold Temporal continuity shapes visual responses of macaque face patch neurons Journal Article In: Neuron, vol. 111, no. 6, pp. 903–914, 2023. @article{Russ2023, Macaque inferior temporal cortex neurons respond selectively to complex visual images, with recent work showing that they are also entrained reliably by the evolving content of natural movies. To what extent does visual continuity itself shape the responses of high-level visual neurons? We addressed this question by measuring how cells in face-selective regions of the macaque temporal cortex were affected by the manipulation of a movie's temporal structure. Sampling the movie at 1s intervals, we measured neural responses to randomized, brief stimuli of different lengths, ranging from 800 ms dynamic movie snippets to 100 ms static frames. We found that the disruption of temporal continuity strongly altered neural response profiles, particularly in the early onset response period of the randomized stimulus. The results suggest that models of visual system function based on discrete and randomized visual presentations may not translate well to the brain's natural modes of operation. |
Elizabeth M. Sachse; Adam C. Snyder Dynamic attention signalling in V4: Relation to fast-spiking/non-fast-spiking cell class and population coupling Journal Article In: European Journal of Neuroscience, vol. 57, no. 6, pp. 918–939, 2023. @article{Sachse2023, The computational role of a neuron during attention depends on its firing properties, neurotransmitter expression and functional connectivity. Neurons in the visual cortical area V4 are reliably engaged by selective attention but exhibit diversity in the effect of attention on firing rates and correlated variability. It remains unclear what specific neuronal properties shape these attention effects. In this study, we quantitatively characterised the distribution of attention modulation of firing rates across populations of V4 neurons. Neurons exhibited a continuum of time-varying attention effects. At one end of the continuum, neurons' spontaneous firing rates were slightly depressed with attention (compared to when unattended), whereas their stimulus responses were enhanced with attention. The other end of the continuum showed the converse pattern: attention depressed stimulus responses but increased spontaneous activity. We tested whether the particular pattern of time-varying attention effects that a neuron exhibited was related to the shape of their action potentials (so-called ‘fast-spiking' [FS] neurons have been linked to inhibition) and the strength of their coupling to the overall population. We found an interdependence among neural attention effects, neuron type and population coupling. In particular, we found neurons for which attention enhanced spontaneous activity but suppressed stimulus responses were less likely to be fast-spiking (more likely to be non-fast-spiking) and tended to have stronger population coupling, compared to neurons with other types of attention effects. These results add important information to our understanding of visual attention circuits at the cellular level. |
Atena Sajedin; Sina Salehi; Hossein Esteky Information content and temporal structure of face selective local field potentials frequency bands in IT cortex Journal Article In: Cerebral Cortex, pp. 1–12, 2023. @article{Sajedin2023, Sensory stimulation triggers synchronized bioelectrical activity in the brain across various frequencies. This study delves into network-level activities, specifically focusing on local field potentials as a neural signature of visual category representation. Specifically, we studied the role of different local field potential frequency oscillation bands in visual stimulus category representation by presenting images of faces and objects to three monkeys while recording local field potential from inferior temporal cortex. We found category selective local field potential responses mainly for animate, but not inanimate, objects. Notably, face-selective local field potential responses were evident across all tested frequency bands, manifesting in both enhanced (above mean baseline activity) and suppressed (below mean baseline activity) local field potential powers. We observed four different local field potential response profiles based on frequency bands and face selective excitatory and suppressive responses. Low-frequency local field potential bands (1–30 Hz) were more predominantly suppressed by face stimulation than the high-frequency (30–170 Hz) local field potential bands. Furthermore, the low-frequency local field potentials conveyed less face category information than the high-frequency local field potential in both enhanced and suppressed conditions. Furthermore, we observed a negative correlation between face/object d-prime values in all the tested local field potential frequency bands and the anterior–posterior position of the recording sites. In addition, the power of low-frequency local field potential systematically declined across inferior temporal anterior–posterior positions, whereas high-frequency local field potential did not exhibit such a pattern. In general, for most of the above-mentioned findings somewhat similar results were observed for body, but not other, stimulus categories. The observed findings suggest that a balance of face selective excitation and inhibition across time and cortical space shapes face category selectivity in inferior temporal cortex. |
Gabriel M. Stine; Eric M. Trautmann; Danique Jeurissen; Michael N. Shadlen A neural mechanism for terminating decisions Journal Article In: Neuron, vol. 111, no. 16, pp. 2601–2613, 2023. @article{Stine2023, The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are established in association cortex, but the site and mechanism of termination are unknown. Here, we show that the superior colliculus (SC) plays a causal role in terminating decisions, and we provide evidence for a mechanism by which this occurs. We recorded simultaneously from neurons in the lateral intraparietal area (LIP) and SC while monkeys made perceptual decisions. Despite similar trial-averaged activity, we found distinct single-trial dynamics in the two areas: LIP displayed drift-diffusion dynamics and SC displayed bursting dynamics. We hypothesized that the bursts manifest a threshold mechanism applied to signals represented in LIP to terminate the decision. Consistent with this hypothesis, SC inactivation produced behavioral effects diagnostic of an impaired threshold sensor and prolonged the buildup of activity in LIP. The results reveal the transformation from deliberation to commitment. |
Norihiro Takakuwa; Kaoru Isa; Reona Yamaguchi; Hirotaka Onoe; Jun Takahashi; Masatoshi Yoshida; Tadashi Isa Protocol for making an animal model of “blindsight” in macaque monkeys Journal Article In: STAR Protocols, vol. 4, no. 1, pp. 1–22, 2023. @article{Takakuwa2023, Patients with damage to the primary visual cortex (V1) can respond correctly to visual stimuli in their lesion-affected visual field above the chance level, an ability named blindsight. Here, we present a protocol for making an animal model of blindsight in macaque monkeys. We describe the steps to perform pre-lesion training of monkeys on a visual task, followed by lesion surgery, post-lesion training, and evaluation of blindsight. This animal model can be used to investigate the source of visual awareness. For complete details on the use and execution of this protocol, please refer to Yoshida et al. (2008) and Takakuwa et al. (2021). |
Bharath Chandra Talluri; Incheol Kang; Adam Lazere; Katrina R. Quinn; Nicholas Kaliss; Jacob L. Yates; Daniel A. Butts; Hendrikje Nienborg Activity in primate visual cortex is minimally driven by spontaneous movements Journal Article In: Nature Neuroscience, vol. 26, no. 11, pp. 1953–1959, 2023. @article{Talluri2023, Organisms process sensory information in the context of their own moving bodies, an idea referred to as embodiment. This idea is important for developmental neuroscience, robotics and systems neuroscience. The mechanisms supporting embodiment are unknown, but a manifestation could be the observation in mice of brain-wide neuromodulation, including in the primary visual cortex, driven by task-irrelevant spontaneous body movements. We tested this hypothesis in macaque monkeys (Macaca mulatta), a primate model for human vision, by simultaneously recording visual cortex activity and facial and body movements. We also sought a direct comparison using an analogous approach to those used in mouse studies. Here we found that activity in the primate visual cortex (V1, V2 and V3/V3A) was associated with the animals' own movements, but this modulation was largely explained by the impact of the movements on the retinal image, that is, by changes in visual input. These results indicate that visual cortex in primates is minimally driven by spontaneous movements and may reflect species-specific sensorimotor strategies. |
Pin Kwang Tan; Cheng Tang; Roger Herikstad; Arunika Pillay; Camilo Libedinsky Distinct lateral prefrontal regions are organized in an anterior-posterior functional gradient Journal Article In: Journal of Neuroscience, vol. 43, no. 38, pp. 6564–6572, 2023. @article{Tan2023a, The dorsolateral prefrontal cortex (dlPFC) is composed of multiple anatomically-defined regions involved in higher-order cognitive processes, including working memory and selective attention. It is organized in an anterior-posterior global gradient where posterior regions track changes in the environment while anterior regions support abstract neural representations. However, it remains unknown if such a global gradient results from a smooth gradient that spans regions, or an emergent property arising from functionally distinct regions, i.e., an areal gradient. Here, we recorded single neurons in the dlPFC of non-human primates trained to perform a memory-guided saccade task with an interfering distractor, and analyzed their physiological properties along the anterior-posterior axis. We found that these physiological properties were best described by an areal gradient. Further, population analyses revealed that there is a distributed representation of spatial information across the dlPFC. Our results validate the functional boundaries between anatomically-defined dlPFC regions and highlight the distributed nature of computations underlying working memory across the dlPFC. Significance Statement Activity of frontal lobe regions is known to possess an anterior-posterior functional gradient. However, it is not known whether this gradient is the result of individual brain regions organized in a gradient (like a staircase), or a smooth gradient that spans regions (like a slide). Analysis of physiological properties of individual neurons in the primate frontal regions suggests that individual regions are organized as a gradient, rather than a smooth gradient. At the population level, working memory was more prominent in posterior regions, even though it was also present in anterior regions. This is consistent with the functional segregation of brain regions that is also observed in other systems (i.e., the visual system). |
John M. Tauber; Scott L. Brincat; Emily P. Stephen; Jacob A. Donoghue; Leo Kozachkov; Emery N. Brown; Earl K. Miller Propofol-mediated unconsciousness disrupts progression of sensory signals through the cortical hierarchy Journal Article In: Journal of Cognitive Neuroscience, vol. 36, no. 2, pp. 394–413, 2023. @article{Tauber2023, A critical component of anesthesia is the loss of sensory perception. Propofol is the most widely used drug for general anesthesia, but the neural mechanisms of how and when it disrupts sensory processing are not fully understood. We analyzed local field potential and spiking recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of nonhuman primates before and during propofol-mediated unconsciousness. Sensory stimuli elicited robust and decodable stimulus responses and triggered periods of stimulus-related synchronization between brain areas in the local field potential of Awake animals. By contrast, propofol-mediated unconsciousness eliminated stimulus-related synchrony and drastically weakened stimulus responses and information in all brain areas except for auditory cortex, where responses and information persisted. However, we found stimuli occurring during spiking Up states triggered weaker spiking responses than in Awake animals in auditory cortex, and little or no spiking responses in higher order areas. These results suggest that propofol's effect on sensory processing is not just because of asynchronous Down states. Rather, both Down states and Up states reflect disrupted dynamics. |
Lowell W. Thompson; Byounghoon Kim; Bas Rokers; Ari Rosenberg Hierarchical computation of 3D motion across macaque areas MT and FST Journal Article In: Cell Reports, vol. 42, no. 12, pp. 1–18, 2023. @article{Thompson2023, Computing behaviorally relevant representations of three-dimensional (3D) motion from two-dimensional (2D) retinal signals is critical for survival. To ascertain where and how the primate visual system performs this computation, we recorded from the macaque middle temporal (MT) area and its downstream target, the fundus of the superior temporal sulcus (area FST). Area MT is a key site of 2D motion processing, but its role in 3D motion processing is controversial. The functions of FST remain highly underexplored. To distinguish representations of 3D motion from those of 2D retinal motion, we contrast responses to multiple motion cues during a motion discrimination task. The results reveal a hierarchical transformation whereby many FST but not MT neurons are selective for 3D motion. Modeling results further show how generalized, cue-invariant representations of 3D motion in FST may be created by selectively integrating the output of 2D motion selective MT neurons. |
Yixin Tian; Jiapeng Yin; Chengyao Wang; Zhenliang He; Jingyi Xie; Xiaoshan Feng; Yang Zhou; Tianyu Ma; Yang Xie; Xue Li; Tianming Yang; Chi Ren; Chengyu Li; Zhengtuo Zhao An ultraflexible electrode array for large-scale chronic recording in the nonhuman primate brain Journal Article In: Advanced Science, vol. 10, no. 33, pp. 1–15, 2023. @article{Tian2023, Single-unit (SU) recording in nonhuman primates (NHPs) is indispensable in the quest of how the brain works, yet electrodes currently used for the NHP brain are limited in signal longevity, stability, and spatial coverage. Using new structural materials, microfabrication, and penetration techniques, we develop a mechanically robust, ultraflexible, 1-µm-thick electrode array (MERF) that enables pial penetration and high-density, large-scale, and chronic recording of neurons along both vertical and horizontal cortical axes in the nonhuman primate brain. Recording from three monkeys yields 2,913 SUs from 1,065 functional recording channels (up to 240 days), with some SUs tracked for up to 2 months. Recording from the primary visual cortex (V1) reveals that neurons with similar orientation preferences for visual stimuli exhibited higher spike correlation. Furthermore, simultaneously recorded neurons in different cortical layers of the primary motor cortex (M1) show preferential firing for hand movements of different directions. Finally, it is shown that a linear decoder trained with neuronal spiking activity across M1 layers during monkey's hand movements can be used to achieve on-line control of cursor movement. Thus, the MERF electrode array offers a new tool for basic neuroscience studies and brain–machine interface (BMI) applications in the primate brain. |
Sébastien Tremblay; Camille Testard; Ron W. Ditullio; Jeanne Inchauspé; Michael Petrides Neural cognitive signals during spontaneous movements in the macaque Journal Article In: Nature Neuroscience, vol. 26, no. 2, pp. 295–305, 2023. @article{Tremblay2023, The single-neuron basis of cognitive processing in primates has mostly been studied in laboratory settings where movements are severely restricted. It is unclear, therefore, how natural movements might affect neural signatures of cognition in the brain. Moreover, studies in mice indicate that body movements, when measured, account for most of the neural dynamics in the cortex. To examine these issues, we recorded from single-neuron ensembles in the prefrontal cortex in moving monkeys performing a cognitive task and characterized eye, head and body movements using video tracking. Despite considerable trial-to-trial movement variability, single-neuron tuning could be precisely measured and decision signals accurately decoded on a single-trial basis. Creating or abolishing spontaneous movements through head restraint and task manipulations had no measurable impact on neural responses. However, encoding models showed that uninstructed movements explained as much neural variance as task variables, with most movements aligned to task events. These results demonstrate that cognitive signals in the cortex are robust to natural movements, but also that unmeasured movements are potential confounds in cognitive neurophysiology experiments. |
Jeremy Steffman; Megha Sundara Disentangling the role of biphone probability from neighborhood density in the perception of nonwords Journal Article In: Language and Speech, pp. 1–37, 2023. @article{Steffman2023, In six experiments we explored how biphone probability and lexical neighborhood density influence listeners' categorization of vowels embedded in nonword sequences. We found independent effects of each. Listeners shifted categorization of a phonetic continuum to create a higher probability sequence, even when neighborhood density was controlled. Similarly, listeners shifted categorization to create a nonword from a denser neighborhood, even when biphone probability was controlled. Next, using a visual world eye-tracking task, we determined that biphone probability information is used rapidly by listeners in perception. In contrast, task complexity and irrelevant variability in the stimuli interfere with neighborhood density effects. These results support a model in which both biphone probability and neighborhood density independently affect word recognition, but only biphone probability effects are observed early in processing. |
Patrick Sturt; Nayoung Kwon Agreement attraction in comprehension: Do active dependencies and distractor position play a role? Journal Article In: Language, Cognition and Neuroscience, pp. 1–23, 2023. @article{Sturt2023, Across four eye-tracking studies and one self-paced reading study, we test whether attraction in subject-verb agreement is affected by (a) the relative linear positions of target and distractor, and (b) the active dependency status of the distractor. We find an effect of relative position, with greater attraction in retro-active interference configurations, where the distractor is linearly closer to the critical verb (Subject…Distractor…V) than in pro-active interference where it is more distant (Distractor…Subject…V). However, within pro-active interference configurations, attraction was not affected by the active dependency status of the distractor: attraction effects were similarly small whether or not the distractor was waiting to complete an upcoming dependency at the critical verb, with Bayes Factor analyses showing evidence in favour of a null effect of active dependency status. We discuss these findings in terms of the decay of activation, and whether such decay is affected by maintenance of features in memory. |
Yankui Su; Meiling He; Rongbao Li The effects of background music on English reading comprehension for English foreign language learners: Evidence from an eye movement study Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–10, 2023. @article{Su2023, Based on previous literature, the present study examines the effects of background music on English reading comprehension using eye tracking techniques. All the participants, whose first language was Chinese, were selected from a foreign language college and all of them were sophomores who majored in English. The experiment in this study was a 2 (music tempo: fast and slow) × 2 (text difficulty: difficult and easy) × 2 (background music preference: high and low) mixed design. Both musical tempo and English reading passage were within-subjects factors, and the level of music listening preference was a between-subjects factor. The results showed that the main effect of the music tempo was statistically significant, which indicated that participants read texts more quickly in the fast-tempo music condition than in the slow-tempo music condition. Furthermore, the main effect of the text difficulty was statistically significant. Additionally, the interaction between the text difficulty and music tempo was statistically significant. The music tempo had a greater effect on easy texts than on difficult texts. The results of this study reveal that it is beneficial for people who have a stronger preference for music listening to conduct English reading tasks with fast-tempo music. It is detrimental for people who have little preference for background music listening to complete difficult English reading tasks with slow-tempo music. |
Yongqiang Su; Yixun Li; Hong Li Development and validation of the simplified Chinese Author Recognition Test: Evidence from eye movements of Chinese adults in Mainland China Journal Article In: Journal of Research in Reading, pp. 1–25, 2023. @article{Su2023a, Background: It is well evident that individuals' levels of print exposure are significantly correlated with their reading ability across languages, and an author recognition test is commonly used to measure print exposure objectively. For the first time, the current work developed and validated a Simplified Chinese Author Recognition Test (SCART) and examined its role in explaining Chinese online reading. Methods: In Study 1, we constructed the SCART for readers of simplified Chinese and validated the test using data collected from 203 young adults in Mainland China. Participants were measured on the SCART and three self-report tasks about their reading experiences and habits. Study 2 recruited additional 68 young adults in Mainland and measured their print exposure (with the same tasks used in Study 1), reading-related cognitive ability (working memory, rapid automatic naming, Chinese character reading, and vocabulary knowledge), and Chinese online reading via an eye-tracking passage reading task. Results: Results of Study 1 support the high reliability and validity of the SCART. Results of Study 2 indicate that SCART scores significantly predicted participants' online reading processing while controlling for subjective reading experiences and habits, and reading-related cognitive abilities. Across two studies, we found converging evidence that the in-depth recognition of the authors (i.e., participants have read the books written by these authors) appears to be a better indicator of print exposure than the superficial recognition of the author names. 
Conclusions: Taken together, this work filled in the gap in the literature by providing an evidence-based, objective print exposure measure for simplified Chinese and contributes to a broader understanding of print exposure and online reading processing across different writing systems. |
Longjiao Sui; Nicolas Dirix; Evy Woumans; Wouter Duyck GECO-CN: Ghent Eye-tracking COrpus of sentence reading for Chinese-English bilinguals Journal Article In: Behavior Research Methods, vol. 55, no. 6, pp. 2743–2763, 2023. @article{Sui2023, The current work presents the very first eye-tracking corpus of natural reading by Chinese-English bilinguals, whose two languages entail different writing systems and orthographies. Participants read an entire novel in these two languages, presented in paragraphs on screen. Half of the participants first read half of the novel in their native language (Simplified Chinese) and then the rest of the novel in their second language (English), while the other half read in the reverse language order. This article presents some important basic descriptive statistics of reading times and compares the difference between reading in the two languages. However, this unique eye-tracking corpus also allows the exploration of theories of language processing and bilingualism. Importantly, it provides a solid and reliable ground for studying the difference between Eastern and Western languages, understanding the impact and consequences of having a completely different first language on bilingual processing. The materials are freely available for use by researchers interested in (bilingual) reading. |
Eunkyung Sung; Sehoon Jung; Sunhee Lee Word recognition in English place assimilation by L1 and L2 listeners: An eye tracking study Journal Article In: Korean Journal of English Language and Linguistics, vol. 23, pp. 175–191, 2023. @article{Sung2023, This study explores the dynamics of lexical activation by comparing the time course of word recognition between assimilated forms (e.g., ca[t p] in cat box) and noncoronal forms (e.g., ca[p] in cap box). Using the Visual World Paradigm, an eye-tracking method, the main goal was to investigate how gradient modification in place assimilation context influences L1 and L2 listeners' real time word recognition in English. Twenty native Korean learners of English, as well as fourteen native English listeners took part in the listening task integrated into the eye-tracking experiment. The participants were given aural input in the form of instructions (e.g., look at the cat/cap box) and asked to pick the word they had just heard between two options (e.g., cat or cap) on the screen while or after they listened to the input. Their eye movements over the visual screen while listening, along with their keyboard-press responses were recorded for the main analysis. The results showed both English and Korean listeners displayed higher proportions of fixations on the target (e.g., cat) than on the competitor words (e.g., cap) in assimilation contexts (e.g., ca[t p] box), as well as higher proportions of fixations on targets (e.g., cap) than on competitors (e.g., cat) in non-assimilation contexts (e.g., ca[p] box). However, the discrepancy of fixation proportions between targets and competitors was more obvious for the English listeners than for the Korean listeners. In other words, although the L2 listeners, in addition to the L1 listeners, were able to use acoustic variations when identifying the target phonemes, the L1 listeners revealed a higher certainty level than their L2 counterparts. 
Furthermore, the divergence points between targets and competitors were shown to appear earlier for the L1 listeners than for the L2 listeners. |
Enze Tang; Hongwei Ding Emotion effects in second language processing: Evidence from eye movements in natural sentence reading Journal Article In: Bilingualism: Language and Cognition, pp. 1–20, 2023. @article{Tang2023, There exists insufficient eye-tracking evidence on the differences in emotional word processing between the first language (L1) and second language (L2) readers. This study conducted an eye-tracking experiment to investigate the emotional effects in L2 sentence reading, and to explore the modulation of L2 proficiency and individual emotional states. Adapted from Knickerbocker et al. (2015), the current study recorded eye movements at both early and late processing stages when late Chinese-English bilinguals read emotion-label and neutral target words in natural L2 sentences. Results indicated that L2 readers did not show the facilitation effects of lexical affective connotations during sentence reading, and they even demonstrated processing disadvantages for L2 emotional words. Additionally, the interaction effect between L2 proficiency and emotion was consistently significant for the measure of total reading time in positive words. Measurements of participants' depressive and anxious states were not robustly correlated with eye movement measures. Our findings supplemented new evidence to existing sparse eye-tracking experiments on L2 emotion processing, and lent support to several theoretical frameworks in the bilingual research field, including the Emotional Contexts of Learning Theory, Lexical Quality Hypothesis and Revised Hierarchical Model. |
Jamie Taylor; Yoichi Mukai Bidirectional cross-linguistic influence with different-script languages: Evidence from eye tracking Journal Article In: Applied Psycholinguistics, vol. 44, no. 5, pp. 635–667, 2023. @article{Taylor2023, This study compared patterns of nonselective cross-language activation in L1 and L2 visual word recognition with different-script bilinguals. The aim was to determine (1) whether lexical processing is nonselective in the L1 (as in L2), and (2) if the same cross-linguistic factors affected processing similarly in each language. To examine the time course of activation, eye movements were tracked during lexical decision. Thirty-two Japanese-English bilinguals responded to 250 target words in Japanese and in English. The same participants and items (i.e., cognate translation equivalents) were used to directly compare L1 and L2 processing. Response latencies as well as eye movements representing early and late processing were analyzed using mixed-effects regression modeling. Similar cross-linguistic effects, namely cognate word frequency, phonological similarity, and semantic similarity, were found in both languages. These factors affected processing to different degrees in each language, however. While cognate frequency was significant as early as the first fixation, effects of cross-linguistic phonological and semantic similarity arose later in time. Increased phonological similarity slowed responses in L2 but speeded them in L1, while greater semantic overlap was facilitatory in both languages. Results are discussed from the perspective of the BIA+ model of visual word recognition. |
Yasuo Terao; Shin-ichi Tokushige; Satomi Inomata-Terada; Tai Miyazaki; Naoki Kotsuki; Francesco Fisicaro; Yoshikazu Ugawa How do patients with Parkinson's disease and cerebellar ataxia read aloud? -Eye–voice coordination in text reading Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–25, 2023. @article{Terao2023a, Background: The coordination between gaze and voice is closely linked when reading text aloud, with the gaze leading the reading position by a certain eye–voice lead (EVL). How this coordination is affected is unknown in patients with cerebellar ataxia and parkinsonism, who show oculomotor deficits possibly impacting coordination between different effectors. Objective: To elucidate the role of the cerebellum and basal ganglia in eye–voice coordination during reading aloud, by studying patients with Parkinson's disease (PD) and spinocerebellar degeneration (SCD). Methods: Participants were sixteen SCD patients, 18 PD patients, and 30 age-matched normal subjects, all native Japanese speakers without cognitive impairment. Subjects read aloud Japanese texts of varying readability displayed on a monitor in front of their eyes, consisting of Chinese characters and hiragana (Japanese phonograms). The gaze and voice reading the text was simultaneously recorded by video-oculography and a microphone. A custom program synchronized and aligned the gaze and audio data in time. Results: Reading speed was significantly reduced in SCD patients (3.53 ± 1.81 letters/s), requiring frequent regressions to compensate for the slow reading speed. In contrast, PD patients read at a comparable speed to normal subjects (4.79 ± 3.13 letters/s vs. 4.71 ± 2.38 letters/s). The gaze scanning speed, excluding regressive saccades, was slower in PD patients (9.64 ± 4.26 letters/s) compared to both normal subjects (12.55 ± 5.42 letters/s) and SCD patients (10.81 ± 4.52 letters/s). 
The gaze speed of PD patients could not far exceed their reading speed, with smaller allowance for the gaze to proceed ahead of the reading position. Spatial EVL was similar across the three groups for all texts (normal: 2.95 ± 1.17 letters, PD: 2.95 ± 1.51 letters, SCD: 3.21 ± 1.35 letters). The ratio of gaze duration to temporal EVL was lowest for SCD patients (normal: 0.73 ± 0.50, PD: 0.70 ± 0.37, SCD: 0.40 ± 0.15). Conclusion: Although coordination between voice and eye movements and normal eye-voice span was observed in both PD and SCD, SCD patients made frequent regressions to manage the slowed vocal output, restricting the ability for advance processing of text ahead of the gaze. In contrast, PD patients experience restricted reading speed primarily due to slowed scanning, limiting their maximum reading speed but effectively utilizing advance processing of upcoming text. |
Malathi Thothathiri; Jeremy Kirkwood; Abhijeet Patra; Anna Krason; Erica L. Middleton Multimodal measures of sentence comprehension in agrammatism Journal Article In: Cortex, vol. 169, pp. 309–325, 2023. @article{Thothathiri2023, Agrammatic or asyntactic comprehension is a common language impairment in aphasia. We considered three possible hypotheses about the underlying cause of this deficit, namely problems in syntactic processing, over-reliance on semantics, and a deficit in cognitive control. We tested four individuals showing asyntactic comprehension on their comprehension of syntax-semantics conflict sentences (e.g., The robber handcuffed the cop), where semantic cues pushed towards a different interpretation from syntax. Two of the four participants performed above chance on such sentences indicating that not all agrammatic individuals are impaired in structure-based interpretation. We collected additional eyetracking measures from the other two participants, who performed at chance on the conflict sentences. These measures suggested distinct underlying processing profiles in the two individuals. Cognitive assessments further suggested that one participant might have performed poorly due to a linguistic cognitive control impairment while the other had difficulty due to over-reliance on semantics. Together, the results highlight the importance of multimodal measures for teasing apart aphasic individuals' underlying deficits. They corroborate findings from neurotypical adults by showing that semantics can strongly influence comprehension and that cognitive control could be relevant for choosing between competing sentence interpretations. They extend previous findings by demonstrating variability between individuals with aphasia—cognitive control might be especially relevant for patients who are not overly reliant on semantics. 
Clinically, the identification of distinct underlying problems in different individuals suggests that different treatment paths might be warranted for cases who might look similar on behavioral assessments. |
Elizabeth L. Tighe; Gal Kaldes; Amani Talwar; Scott A. Crossley; Daphne Greenberg; Stephen Skalicky In: Journal of Learning Disabilities, vol. 56, no. 1, pp. 25–42, 2023. @article{Tighe2023, Comprehension monitoring is a meta-cognitive skill that is defined as the ability to self-evaluate one's comprehension of text. Although it is known that struggling adult readers are poor at monitoring their comprehension, additional research is needed to understand the mechanisms underlying comprehension monitoring and their role in reading comprehension in this population. This study used a comprehension monitoring task with struggling adult readers, which included online eye movements (reread and regression path durations) and an offline verbal protocol (oral explanations of key information). We examined whether eye movements predicted accuracy on the passages' reading comprehension questions, a norm-referenced reading assessment, and an offline verbal protocol after controlling for age and traditional component skills (i.e., decoding, oral language, working memory). Regression path duration uniquely predicted accuracy on the questions; however, decoding and oral vocabulary were the most salient predictors of the norm-referenced reading comprehension measure. Regression path duration also predicted the offline verbal protocol, such that those who exhibited longer regression path duration were also better at explaining key information. These results contribute to the literature regarding struggling adults' reading component skills, eye movement behaviors involved in processing connected text, and future considerations in assessing comprehension monitoring. |
Alessandra Valentini; Rachel E. Pye; Carmel Houston-Price; Jessie Ricketts; Julie A. Kirkby Online processing shows advantages of bimodal listening-while-reading for vocabulary learning: An eye-tracking study Journal Article In: Reading Research Quarterly, vol. 59, no. 1, pp. 79–101, 2023. @article{Valentini2023, Children can learn words incidentally from stories. This kind of learning is enhanced when stories are presented both aurally and in written format, compared to just a written presentation. However, we do not know why this bimodal presentation is beneficial. This study explores two possible explanations: whether the bimodal advantage manifests online during story exposure, or later, at word retrieval. We collected eye-movement data from 34 8-to 9-year-old children exposed to two stories, one presented in written format (reading condition), and the second presented aurally and written at the same time (bimodal condition). Each story included six unfamiliar words (non-words) that were repeated three times, as well as definitions and clues to their meaning. Following exposure, the learning of the new words' meanings was assessed. Results showed that, during story presentation, children spent less time fixating the new words in the bimodal condition, compared to the reading condition, indicating that the bimodal advantage occurs online. Learning was greater in the bimodal condition than the reading condition, which may reflect either an online bimodal advantage during story presentation or an advantage at retrieval. The results also suggest that the bimodal condition was more conducive to learning than the reading condition when children looked at the new words for a shorter amount of time. This is in line with an online advantage of the bimodal condition, as it suggests that less effort is required to learn words in this condition. These results support educational strategies that routinely present new vocabulary in two modalities simultaneously. |
Ine Van der Cruyssen; Gershon Ben-Shakhar; Yoni Pertzov; Nitzan Guy; Quinn Cabooter; Lukas J. Gunschera; Bruno Verschuere The validation of online webcam-based eye-tracking: The replication of the cascade effect, the novelty preference, and the visual world paradigm Journal Article In: Behavior Research Methods, pp. 1–14, 2023. @article{VanderCruyssen2023, The many benefits of online research and the recent emergence of open-source eye-tracking libraries have sparked an interest in transferring time-consuming and expensive eye-tracking studies from the lab to the web. In the current study, we validate online webcam-based eye-tracking by conceptually replicating three robust eye-tracking studies (the cascade effect, the novelty preference, and the visual world paradigm). |
Florence Van Meenen; Nicolas Masson; Leen Catrysse; Liesje Coertjens Taking a closer look at how higher education students process and use (discrepant) peer feedback Journal Article In: Learning and Instruction, vol. 84, pp. 1–11, 2023. @article{VanMeenen2023, Little is known on how students process peer feedback (PF) and use it to improve their work. We asked 59 participants to read the feedback of two peers on a fictional essay and to revise it, while we recorded their gaze behaviour. Regarding the PF processing subphase, discrepant PF led to more transitions, but only for participants who reported the discrepancy afterwards. Counterintuitively, participants who did not report the discrepancy, showed longer first-pass reading times. Concerning the PF use subphase, dwell time on essay correlated positively with the quality of the revised essays assessed by professors. Participants with a high-quality revision spent more time addressing higher order comments, corrected one or two lower order aspects at a time and proofread in the end, in which they went beyond the suggestions provided in the PF. These insights can be used when designing training to foster students' uptake of (discrepant) PF. |
Yang Yiling; Katharine Shapcott; Alina Peter; Johanna Klon-Lipok; Huang Xuhui; Andreea Lazar; Wolf Singer Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–18, 2023. @article{Yiling2023a, Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes. |
Taylor D. Webb; Matthew G. Wilson; Henrik Odéen; Jan Kubanek Sustained modulation of primate deep brain circuits with focused ultrasonic waves Journal Article In: Brain Stimulation, vol. 16, no. 3, pp. 798–805, 2023. @article{Webb2023, Background: Transcranial focused ultrasound has the potential to noninvasively modulate deep brain circuits and impart sustained, neuroplastic effects. Objective: Bring the approach closer to translations by demonstrating sustained modulation of deep brain circuits and choice behavior in task-performing non-human primates. Methods: Low-intensity transcranial ultrasound of 30 s in duration was delivered in a controlled manner into deep brain targets (left or right lateral geniculate nucleus; LGN) of non-human primates while the subjects decided whether a left or a right visual target appeared first. While the animals performed the task, we recorded intracranial EEG from occipital screws. The ultrasound was delivered into the deep brain targets daily for a period of more than 6 months. Results: The brief stimulation induced effects on choice behavior that persisted up to 15 minutes and were specific to the sonicated target. Stimulation of the left/right LGN increased the proportion of rightward/leftward choices. These effects were accompanied by an increase in gamma activity over visual cortex. The contralateral effect on choice behavior and the increase in gamma, compared to sham stimulation, suggest that the stimulation excited the target neural circuits. There were no detrimental effects on the animals' discrimination performance over the months-long course of the stimulation. Conclusion: This study demonstrates that brief, 30-s ultrasonic stimulation induces neuroplastic effects specifically in the target deep brain circuits, and that the stimulation can be applied daily without detrimental effects. 
These findings encourage repeated applications of transcranial ultrasound to malfunctioning deep brain circuits in humans with the goal of providing a durable therapeutic reset. |
Jacob A. Westerberg; Jeffrey D. Schall; Geoffrey F. Woodman; Alexander Maier Feedforward attentional selection in sensory cortex Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–17, 2023. @article{Westerberg2023, Salient objects grab attention because they stand out from their surroundings. Whether this phenomenon is accomplished by bottom-up sensory processing or requires top-down guidance is debated. We tested these alternative hypotheses by measuring how early and in which cortical layer(s) neural spiking distinguished a target from a distractor. We measured synaptic and spiking activity across cortical columns in mid-level area V4 of male macaque monkeys performing visual search for a color singleton. A neural signature of attentional capture was observed in the earliest response in the input layer 4. The magnitude of this response predicted response time and accuracy. Errant behavior followed errant selection. Because this response preceded top-down influences and arose in the cortical layer not targeted by top-down connections, these findings demonstrate that feedforward activation of sensory cortex can underlie attentional priority. |
Alessandro Zanini; Audrey Dureux; Janahan Selvanayagam; Stefan Everling Ultra-high field fMRI identifies an action-observation network in the common marmoset Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–11, 2023. @article{Zanini2023, The observation of others' actions activates a network of temporal, parietal and premotor/prefrontal areas in macaque monkeys and humans. This action-observation network (AON) has been shown to play important roles in social action monitoring, learning by imitation, and social cognition in both species. It is unclear whether a similar network exists in New-World primates, which separated from Old-World primates ~35 million years ago. Here we used ultra-high field fMRI at 9.4 T in awake common marmosets (Callithrix jacchus) while they watched videos depicting goal-directed (grasping food) or non-goal-directed actions. The observation of goal-directed actions activated a temporo-parieto-frontal network, including areas 6 and 45 in premotor/prefrontal cortices, areas PGa-IPa, FST and TE in the occipito-temporal region and areas V6A, MIP, LIP and PG in the occipito-parietal cortex. These results show overlap with the human and macaque AON, demonstrating the existence of an evolutionarily conserved network that likely predates the separation of Old- and New-World primates. |
Mengxi Yun; Masafumi Nejime; Takashi Kawai; Jun Kunimatsu; Hiroshi Yamada; Hyung Goo R. Kim; Masayuki Matsumoto Distinct roles of the orbitofrontal cortex, ventral striatum, and dopamine neurons in counterfactual thinking of decision outcomes Journal Article In: Science Advances, vol. 9, no. 32, pp. 1–14, 2023. @article{Yun2023, Individuals often assess past decisions by comparing what was gained with what would have been gained had they acted differently. Thoughts of past alternatives that counter what actually happened are called “counterfactuals.” Recent theories emphasize the role of the prefrontal cortex in processing counterfactual outcomes in decision-making, although how subcortical regions contribute to this process remains to be elucidated. Here we report a clear distinction among the roles of the orbitofrontal cortex, ventral striatum and midbrain dopamine neurons in processing counterfactual outcomes in monkeys. Our findings suggest that actually gained and counterfactual outcome signals are both processed in the cortico-subcortical network constituted by these regions but in distinct manners and integrated only in the orbitofrontal cortex in a way to compare these outcomes. This study extends the prefrontal theory of counterfactual thinking and provides key insights regarding how the prefrontal cortex cooperates with subcortical regions to make decisions using counterfactual information. |
Mengna Yao; Bincheng Wen; Mingpo Yang; Jiebin Guo; Haozhou Jiang; Chao Feng; Yilei Cao; Huiguang He; Le Chang High-dimensional topographic organization of visual features in the primate temporal lobe Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–23, 2023. @article{Yao2023a, The inferotemporal cortex supports our supreme object recognition ability. Numerous studies have been conducted to elucidate the functional organization of this brain area, but there are still important questions that remain unanswered, including how this organization differs between humans and non-human primates. Here, we use deep neural networks trained on object categorization to construct a 25-dimensional space of visual features, and systematically measure the spatial organization of feature preference in both male monkey brains and human brains using fMRI. These feature maps allow us to predict the selectivity of a previously unknown region in monkey brains, which is corroborated by additional fMRI and electrophysiology experiments. These maps also enable quantitative analyses of the topographic organization of the temporal lobe, demonstrating the existence of a pair of orthogonal gradients that differ in spatial scale and revealing significant differences in the functional organization of high-level visual areas between monkey and human brains. |
Ruyi Yang; Peng Zhao; Liyang Wang; Chenli Feng; Chen Peng; Zhexuan Wang; Yingying Zhang; Minqian Shen; Kaiwen Shi; Shijun Weng; Chunqiong Dong; Fu Zeng; Tianyun Zhang; Xingdong Chen; Shuiyuan Wang; Yiheng Wang; Yuanyuan Luo; Qingyuan Chen; Yuqing Chen; Chengyong Jiang; Shanshan Jia; Zhaofei Yu; Jian Liu; Fei Wang; Su Jiang; Wendong Xu; Liang Li; Gang Wang; Xiaofen Mo; Gengfeng Zheng; Aihua Chen; Xingtao Zhou; Chunhui Jiang; Yuanzhi Yuan; Biao Yan; Jiayi Zhang Assessment of visual function in blind mice and monkeys with subretinally implanted nanowire arrays as artificial photoreceptors Journal Article In: Nature Biomedical Engineering, pp. 1–37, 2023. @article{Yang2023a, Retinal prostheses could restore image-forming vision in conditions of photoreceptor degeneration. However, contrast sensitivity and visual acuity are often insufficient. Here we report the performance, in mice and monkeys with induced photoreceptor degeneration, of subretinally implanted gold-nanoparticle-coated titania nanowire arrays providing a spatial resolution of 77.5 μm and a temporal resolution of 3.92 Hz in ex vivo retinas (as determined by patch-clamp recording of retinal ganglion cells). In blind mice, the arrays allowed for the detection of drifting gratings and flashing objects at light-intensity thresholds of 15.70–18.09 μW mm⁻², and offered visual acuities of 0.3–0.4 cycles per degree, as determined by recordings of visually evoked potentials and optomotor-response tests. In monkeys, the arrays were stable for 54 weeks, allowed for the detection of a 10-μW mm⁻² beam of light (0.5° in beam angle) in visually guided saccade experiments, and induced plastic changes in the primary visual cortex, as indicated by long-term in vivo calcium imaging. Nanomaterials as artificial photoreceptors may ameliorate visual deficits in patients with photoreceptor degeneration. |
Jacob L. Yates; Shanna H. Coop; Gabriel H. Sarch; Ruei Jr Wu; Daniel A. Butts; Michele Rucci; Jude F. Mitchell Detailed characterization of neural selectivity in free viewing primates Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–11, 2023. @article{Yates2023, Fixation constraints in visual tasks are ubiquitous in visual and cognitive neuroscience. Despite its widespread use, fixation requires trained subjects, is limited by the accuracy of fixational eye movements, and ignores the role of eye movements in shaping visual input. To overcome these limitations, we developed a suite of hardware and software tools to study vision during natural behavior in untrained subjects. We measured visual receptive fields and tuning properties from multiple cortical areas of marmoset monkeys who freely viewed full-field noise stimuli. The resulting receptive fields and tuning curves from primary visual cortex (V1) and area MT match reported selectivity from the literature, which was measured using conventional approaches. We then combined free viewing with high-resolution eye tracking to make the first detailed 2D spatiotemporal measurements of foveal receptive fields in V1. These findings demonstrate the power of free viewing to characterize neural responses in untrained animals while simultaneously studying the dynamics of natural behavior. |
Yang Yiling; Johanna Klon-Lipok; Wolf Singer Joint encoding of stimulus and decision in monkey primary visual cortex Journal Article In: Cerebral Cortex, pp. 1–6, 2023. @article{Yiling2023, We investigated whether neurons in monkey primary visual cortex (V1) exhibit mixed selectivity for sensory input and behavioral choice. Parallel multisite spiking activity was recorded from area V1 of awake monkeys performing a delayed match-to-sample task. The monkeys had to make a forced choice decision of whether the test stimulus matched the preceding sample stimulus. The population responses evoked by the test stimulus contained information about both the identity of the stimulus and, with some delay but before the onset of the motor response, the forthcoming choice. The results of subspace identification analysis indicate that stimulus-specific and decision-related information coexists in separate subspaces of the high-dimensional population activity, and latency considerations suggest that the decision-related information is conveyed by top-down projections. |
Yang Zhou; Ou Zhu; David J. Freedman Posterior parietal cortex plays a causal role in abstract memory-based visual categorical decisions Journal Article In: Journal of Neuroscience, vol. 43, no. 23, pp. 4315–4328, 2023. @article{Zhou2023c, Neural activity in the lateral intraparietal cortex (LIP) correlates with both sensory evaluation and motor planning underlying visuomotor decisions. We previously showed that LIP plays a causal role in visually-based perceptual and categorical decisions, and preferentially contributes to evaluating sensory stimuli over motor planning. In that study, however, monkeys reported their decisions with a saccade to a colored target associated with the correct motion category or direction. Since LIP is known to play a role in saccade planning, it remains unclear whether LIP's causal role in such decisions extends to decision-making tasks which do not involve saccades. Here, we employed reversible pharmacological inactivation of LIP neural activity while two male monkeys performed delayed match to category (DMC) and delayed match to sample (DMS) tasks. In both tasks, monkeys needed to maintain gaze fixation throughout the trial and report whether a test stimulus was a categorical match or nonmatch to the previous sample stimulus by releasing a touch bar. LIP inactivation impaired monkeys' behavioral performance in both tasks, with deficits in both accuracy and reaction time (RT). Furthermore, we recorded LIP neural activity in the DMC task targeting the same cortical locations as in the inactivation experiments. We found significant neural encoding of the sample category, which was correlated with monkeys' categorical decisions in the DMC task. Taken together, our results demonstrate that LIP plays a generalized role in visual categorical decisions independent of the task structure and motor response modality. |
Anthony Bigelow; Taekjun Kim; Tomoyuki Namima; Wyeth Bair; Anitha Pasupathy Dissociation in neuronal encoding of object versus surface motion in the primate brain Journal Article In: Current Biology, vol. 33, no. 4, pp. 711–719, 2023. @article{Bigelow2023, A paradox exists in our understanding of motion processing in the primate visual system: neurons in the dorsal motion processing stream often strikingly fail to encode long-range and perceptually salient jumps of a moving stimulus. Psychophysical studies suggest that such long-range motion, which requires integration over more distant parts of the visual field, may be based on higher-order motion processing mechanisms that rely on feature or object tracking. Here, we demonstrate that ventral visual area V4, long recognized as critical for processing static scenes, includes neurons that maintain direction selectivity for long-range motion, even when conflicting local motion is present. These V4 neurons exhibit specific selectivity for the motion of objects, i.e., targets with defined boundaries, rather than the motion of surfaces behind apertures, and are selective for direction of motion over a broad range of spatial displacements and defined by a variety of features. Motion direction at a range of speeds can be accurately decoded on single trials from the activity of just a few V4 neurons. Thus, our results identify a novel motion computation in the ventral stream that is strikingly different from, and complementary to, the well-established system in the dorsal stream, and they support the hypothesis that the ventral stream system interacts with the dorsal stream to achieve the higher level of abstraction critical for tracking dynamic objects. |
Magdalena Boch; Isabella C. Wagner; Sabrina Karl; Ludwig Huber; Claus Lamm Functionally analogous body- and animacy-responsive areas are present in the dog (Canis familiaris) and human occipito-temporal lobe Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–15, 2023. @article{Boch2023, Comparing the neural correlates of socio-cognitive skills across species provides insights into the evolution of the social brain and has revealed face- and body-sensitive regions in the primate temporal lobe. Although from a different lineage, dogs share convergent visuo-cognitive skills with humans and a temporal lobe which evolved independently in carnivorans. We investigated the neural correlates of face and body perception in dogs (N = 15) and humans (N = 40) using functional MRI. Combining univariate and multivariate analysis approaches, we found functionally analogous occipito-temporal regions involved in the perception of animate entities and bodies in both species and face-sensitive regions in humans. Though unpredicted, we also observed neural representations of faces compared to inanimate objects, and dog compared to human bodies in dog olfactory regions. These findings shed light on the evolutionary foundations of human and dog social cognition and the predominant role of the temporal lobe. |
Clara Bourrelly; Corentin Massot; Neeraj J. Gandhi Rapid input-output transformation between local field potential and spiking activity during sensation but not action in the superior colliculus Journal Article In: Journal of Neuroscience, vol. 43, no. 22, pp. 4047–4061, 2023. @article{Bourrelly2023, Sensorimotor transformation is the sequential process of registering a sensory signal in the environment and then responding with the relevant movement at an appropriate time. For visually guided eye movements, neural signatures in the form of spiking activity of neurons have been extensively studied along the dorsoventral axis of the superior colliculus (SC). In contrast, the local field potential (LFP), which represents the putative input to a region, remains largely unexplored in the SC. We therefore compared amplitude levels and onset times of both spike bursts and LFP modulations recorded simultaneously with a laminar probe along the dorsoventral axis of SC in 3 male monkeys performing the visually guided delayed saccade task. Both signals displayed a gradual transition from sensory activity in the superficial layers to a predominantly motor response in the deeper layers, although the transition from principally sensory to mostly motor response occurred ~500 μm deeper for the LFP. For the sensory response, LFP modulation preceded spike burst onset by <5 ms in the superficial and intermediate layers and only when data were analyzed on a trial-by-trial basis. The motor burst in the spiking activity led LFP modulation by >25 ms in the deeper layers. The results reveal a fast and efficient input-output transformation between LFP modulation and spike burst in the visually responsive layers during sensation but not during action. The spiking pattern observed during the movement phase is likely dominated by intracollicular processing that is not captured in the LFP. |
Emiliano Brunamonti; Martin Paré Neuronal activity in posterior parietal cortex area LIP is not sufficient for saccadic eye movement production Journal Article In: Frontiers in Integrative Neuroscience, pp. 1–14, 2023. @article{Brunamonti2023, It is widely recognized that the posterior parietal cortex (PPC) plays a role in active exploration with eye movements, arm reaching, and hand grasping. Whether this role is causal in nature is largely unresolved. One region of the PPC appears dedicated to the control of saccadic eye movement—lateral intraparietal (LIP) area. This area LIP possesses direct projections to well-established oculomotor centers and contains neurons with movement-related activity. In this study, we tested whether these neurons are implicated in saccade initiation and production. The movement-related activity of LIP neurons was tested by recording these neurons while monkeys performed a countermanding task. We found that LIP neuronal activity is not different before the execution or the cancelation of commanded saccades and thereby is not sufficient for the initiation and production of saccades. Consistent with the evolutionarily late emergence of the PPC, this finding relegates the role of this PPC area to processes that can regulate but not trigger eye movements. |
Brock M. Carlson; Blake A. Mitchell; Kacie Dougherty; Jacob A. Westerberg; Michele A. Cox; Alexander Maier Does V1 response suppression initiate binocular rivalry? Journal Article In: iScience, vol. 26, no. 8, pp. 1–23, 2023. @article{Carlson2023, During binocular rivalry (BR) only one eye's view is perceived. Neural underpinnings of BR are debated. Recent studies suggest that primary visual cortex (V1) initiates BR. One trigger might be response suppression across most V1 neurons at the onset of BR. Here, we utilize a variant of BR called binocular rivalry flash suppression (BRFS) to test this hypothesis. BRFS is identical to BR, except stimuli are shown with a ∼1s delay. If V1 response suppression was required to initiate BR, it should occur during BRFS as well. To test this, we compared V1 spiking in two macaques observing BRFS. We found that BRFS resulted in response facilitation rather than response suppression across V1 neurons. However, BRFS still reduces responses in a subset of V1 neurons due to the adaptive effects of asynchronous stimulus presentation. We argue that this selective response suppression could serve as an alternate initiator of BR. |
Sourish Chakravarty; Jacob Donoghue; Ayan S. Waite; Meredith Mahnke; Indie C. Garwood; Sebastian Gallo; Earl K. Miller; Emery N. Brown Closed-loop control of anesthetic state in nonhuman primates Journal Article In: PNAS Nexus, vol. 2, no. 10, pp. 1–14, 2023. @article{Chakravarty2023, Research in human volunteers and surgical patients has shown that unconsciousness under general anesthesia can be reliably tracked using real-time electroencephalogram processing. Hence, a closed-loop anesthesia delivery (CLAD) system that maintains precisely specified levels of unconsciousness is feasible and would greatly aid intraoperative patient management. The US Food and Drug Administration has approved no CLAD system for human use, due partly to a lack of testing in appropriate animal models. To address this key roadblock, we implement a nonhuman primate (NHP) CLAD system that controls the level of unconsciousness using the anesthetic propofol. The key system components are a local field potential (LFP) recording system; propofol pharmacokinetic and pharmacodynamic models; the control variable (LFP power between 20 and 30 Hz); a programmable infusion system; and a linear quadratic integral controller. Our CLAD system accurately controlled the level of unconsciousness along two different 125-min dynamic target trajectories for 18 h and 45 min in nine experiments in two NHPs. System performance measures were comparable or superior to those in previous CLAD reports. We demonstrate that an NHP CLAD system can reliably and accurately control in real time unconsciousness maintained by anesthesia. Our findings establish critical steps for CLAD systems' design and testing prior to human testing. |
He Chen; Jun Kunimatsu; Tomomichi Oya; Yuri Imaizumi; Yukiko Hori; Masayuki Matsumoto; Takafumi Minamimoto; Yuji Naya; Hiroshi Yamada Stable neural population dynamics in the regression subspace for continuous and categorical task parameters in monkeys Journal Article In: eNeuro, vol. 10, no. 7, pp. 1–20, 2023. @article{Chen2023c, Neural population dynamics provide a key computational framework for understanding information processing in the sensory, cognitive, and motor functions of the brain. They systematically depict complex neural population activity, dominated by strong temporal dynamics as trajectory geometry in a low-dimensional neural space. However, neural population dynamics are poorly related to the conventional analytical framework of single-neuron activity, the rate-coding regime that analyzes firing rate modulations using task parameters. To link the rate-coding and dynamic models, we developed a variant of state-space analysis in the regression subspace, which describes the temporal structures of neural modulations using continuous and categorical task parameters. In macaque monkeys, using two neural population datasets containing either of two standard task parameters, continuous and categorical, we revealed that neural modulation structures are reliably captured by these task parameters in the regression subspace as trajectory geometry in a lower dimension. Furthermore, we combined the classical optimal-stimulus response analysis (usually used in rate-coding analysis) with the dynamic model and found that the most prominent modulation dynamics in the lower dimension were derived from these optimal responses. Using those analyses, we successfully extracted geometries for both task parameters that formed a straight geometry, suggesting that their functional relevance is characterized as a unidimensional feature in their neural modulation dynamics. 
Collectively, our approach bridges neural modulation in the rate-coding model and the dynamic system, and provides researchers with a significant advantage in exploring the temporal structure of neural modulations for pre-existing datasets. |
Julien Claron; Matthieu Provansal; Quentin Salardaine; Pierre Tissier; Alexandre Dizeux; Thomas Deffieux; Serge Picaud; Mickael Tanter; Fabrice Arcizet; Pierre Pouget Co-variations of cerebral blood volume and single neurons discharge during resting state and visual cognitive tasks in non-human primates Journal Article In: Cell Reports, vol. 42, no. 4, pp. 1–16, 2023. @article{Claron2023, To better understand how the brain allows primates to perform various sets of tasks, the ability to simultaneously record neural activity at multiple spatiotemporal scales is challenging but necessary. However, the contribution of single-unit activities (SUAs) to neurovascular activity remains to be fully understood. Here, we combine functional ultrasound imaging of cerebral blood volume (CBV) and SUA recordings in visual and fronto-medial cortices of behaving macaques. We show that SUA provides a significant estimate of the neurovascular response below the typical fMRI spatial resolution of 2 mm³. Furthermore, our results show that SUAs and CBV activities are statistically uncorrelated during the resting state but correlate during tasks. These results have important implications for interpreting functional imaging findings when one constructs inferences about SUA during resting state or tasks. |
Christopher Conroy; Rakesh Nanjappa; Robert M. McPeek Inhibitory tagging in the superior colliculus during visual search Journal Article In: Journal of Neurophysiology, vol. 130, no. 4, pp. 824–837, 2023. @article{Conroy2023, Inhibitory tagging is an important feature of many models of saccade target selection, in particular those that are based on the notion of a neural priority map. The superior colliculus (SC) has been suggested as a potential site of such a map, yet it is unknown whether inhibitory tagging is represented in the SC during visual search. In this study, we tested the hypothesis that SC neurons represent inhibitory tagging during search, as might be expected if they contribute to a priority map. To do so, we recorded the activity of SC neurons in a multisaccade visual-search task. On each trial, a single reward-bearing target was embedded in an array of physically identical, potentially reward-bearing targets and physically distinct, non-reward-bearing distractors. The task was to fixate the reward-bearing target. We found that, in the context of this task, the activity of many SC neurons was greater when their response field stimulus was a target than when it was a distractor and was reduced when it had been previously fixated relative to when it had not. Moreover, we found that the previous-fixation-related reduction of activity was larger for targets than for distractors and decreased with increasing time (or number of saccades) since fixation. Taken together, the results suggest that fixated stimuli are transiently inhibited in the SC during search, consistent with the notion that inhibitory tagging plays an important role in visual search and that SC neurons represent this inhibition as part of a priority map used for saccade target selection. NEW & NOTEWORTHY Searching a cluttered scene for an object of interest is a ubiquitous task in everyday life, which we often perform relatively quickly and efficiently. 
It has been suggested that to achieve such speed and efficiency an inhibitory-tagging mechanism inhibits saccades to objects in the scene once they have been searched and rejected. Here, we demonstrate that the superior colliculus represents this type of inhibition during search, consistent with its role in saccade target selection. |
Benjamin W. Corrigan; Roberto A. Gulli; Guillaume Doucet; Borna Mahmoudian; Mohamad Abbass; Megan Roussy; Rogelio Luna; Adam J. Sachs; Julio C. Martinez-Trujillo View cells in the hippocampus and prefrontal cortex of macaques during virtual navigation Journal Article In: Hippocampus, vol. 33, no. 5, pp. 573–585, 2023. @article{Corrigan2023, Cells selectively activated by a particular view of an environment have been found in the primate hippocampus (HPC). Whether view cells are present in other brain areas, and how view selectivity interacts with other variables such as object features and place remain unclear. Here, we explore these issues by recording the responses of neurons in the HPC and the lateral prefrontal cortex (LPFC) of rhesus macaques performing a task in which they learn new context-object associations while navigating a virtual environment using a joystick. We measured neuronal responses at different locations in a virtual maze where animals freely directed gaze to different regions of the visual scenes. We show that specific views containing task relevant objects selectively activated a proportion of HPC units, and an even higher proportion of LPFC units. Place selectivity was scarce and generally dependent on view. Many view cells were not affected by changing the object color or the context cue, two task relevant features. However, a small proportion of view cells showed selectivity for these two features. Our results show that during navigation in a virtual environment with complex and dynamic visual stimuli, view cells are found in both the HPC and the LPFC. View cells may have developed as a multiarea specialization in diurnal primates to encode the complexities and layouts of the environment through gaze exploration which ultimately enables building cognitive maps of space that guide navigation. |