EyeLink Clinical and Oculomotor Eye-Tracking Publications
All EyeLink clinical and oculomotor research publications up to 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Saccadic Adaptation, Schizophrenia, Nystagmus, etc. You can also search for individual author names, and limit searches by year (choose a year, then click the Search button). If we have missed any EyeLink clinical or oculomotor article, please email us!
2023 |
Keith S. Apfelbaum; Claire Goodwin; Christina Blomquist; Bob McMurray The development of lexical competition in written- and spoken-word recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 1, pp. 196–219, 2023. @article{Apfelbaum2023, Efficient word recognition depends on the ability to overcome competition from overlapping words. The nature of the overlap depends on the input modality: spoken words have temporal overlap from other words that share phonemes in the same positions, whereas written words have spatial overlap from other words with letters in the same places. It is unclear how these differences in input format affect the ability to recognise a word and the types of competitors that become active while doing so. This study investigates word recognition in both modalities in children between 7 and 15. Children complete a visual-world paradigm eye-tracking task that measures competition from words with several types of overlap, using identical word lists between modalities. Results showed correlated developmental changes in the speed of target recognition in both modalities. In addition, developmental changes were seen in the efficiency of competitor suppression for some competitor types in the spoken modality. These data reveal some developmental continuity in the process of word recognition independent of modality but also some instances of independence in how competitors are activated. Stimuli, data, and analyses from this project are available at: https://osf.io/eav72. |
Eléonore Arbona; Kilian G. Seeber; Marianne Gullberg Semantically related gestures facilitate language comprehension during simultaneous interpreting Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 2, pp. 425–439, 2023. @article{Arbona2023, Manual co-speech gestures can facilitate language comprehension, but do they influence language comprehension in simultaneous interpreters, and if so, is this influence modulated by simultaneous interpreting (SI) and/or by interpreting experience? In a picture-matching task, 24 professional interpreters and 24 professional translators were exposed to utterances accompanied by semantically matching representational gestures, semantically unrelated pragmatic gestures, or no gestures while viewing passively (interpreters and translators) or during SI (interpreters only). During passive viewing, both groups were faster with semantically related than with semantically unrelated gestures. During SI, interpreters showed the same result. The results suggest that language comprehension is sensitive to the semantic relationship between speech and gesture, and facilitated when speech and gestures are semantically linked. This sensitivity is not modulated by SI or interpreting experience. Thus, despite simultaneous interpreters' extreme language use, multimodal language processing facilitates comprehension in SI the same way as in all other language processing. |
Katharine Aveni; Juweiriya Ahmed; Arielle Borovsky; Ken McRae; Mary E. Jenkins; Katherine Sprengel; J. Alexander Fraser; Joseph B. Orange; Thea Knowles; Angela C. Roberts Predictive language comprehension in Parkinson's disease Journal Article In: PLoS ONE, vol. 18, pp. 1–32, 2023. @article{Aveni2023, Verb and action knowledge deficits are reported in persons with Parkinson's disease (PD), even in the absence of dementia or mild cognitive impairment. However, the impact of these deficits on combinatorial semantic processing is less well understood. Following on previous verb and action knowledge findings, we tested the hypothesis that PD impairs the ability to integrate event-based thematic fit information during online sentence processing. Specifically, we anticipated persons with PD with age-typical cognitive abilities would perform more poorly than healthy controls during a visual world paradigm task requiring participants to predict a target object constrained by the thematic fit of the agent-verb combination. Twenty-four PD and 24 healthy age-matched participants completed comprehensive neuropsychological assessments. We recorded participants' eye movements as they heard predictive sentences (The fisherman rocks the boat) alongside target, agent-related, verb-related, and unrelated images. We tested effects of group (PD/control) on gaze using growth curve models. There were no significant differences between PD and control participants, suggesting that PD participants successfully and rapidly use combinatory thematic fit information to predict upcoming language. Baseline sentences with no predictive information (e.g., Look at the drum) confirmed that groups showed equivalent sentence processing and eye movement patterns. Additionally, we conducted an exploratory analysis contrasting PD and controls' performance on low-motion-content versus high-motion-content verbs. 
This analysis revealed fewer predictive fixations in high-motion sentences only for healthy older adults. PD participants may adapt to their disease by relying on spared, non-action-simulation-based language processing mechanisms, although this conclusion is speculative, as the analyses of high- vs. low-motion items were highly limited by the study design. These findings provide novel evidence that individuals with PD match healthy adults in their ability to use verb meaning to predict upcoming nouns despite previous findings of verb semantic impairment in PD across a variety of tasks. |
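Several studies in this list (e.g., Aveni et al., Berghoff & Bylund) test group effects on gaze with growth curve models, which fit fixation proportions over time using orthogonal polynomial time terms. A minimal pure-Python sketch of building such terms is below; the `orthogonal_poly` helper, the bin count, and the 50 ms bin size are our own illustration, not taken from either paper:

```python
# Build orthogonal polynomial time terms (constant, linear, quadratic)
# for a growth-curve analysis of fixation proportions over time bins.
# All values here are illustrative, not from the studies above.

def orthogonal_poly(time_bins, degree):
    """Gram-Schmidt orthogonalisation of [t^0, t^1, ..., t^degree]."""
    basis = [[t ** d for t in time_bins] for d in range(degree + 1)]
    ortho = []
    for vec in basis:
        v = list(vec)
        for u in ortho:
            # Subtract the projection of v onto each earlier term
            coef = sum(a * b for a, b in zip(v, u)) / sum(b * b for b in u)
            v = [a - coef * b for a, b in zip(v, u)]
        ortho.append(v)
    # Scale each term to unit length, as R's poly() does
    return [[x / (sum(y * y for y in v) ** 0.5) for x in v] for v in ortho]

bins = list(range(10))            # e.g., 10 x 50 ms bins after word onset
ot0, ot1, ot2 = orthogonal_poly(bins, 2)

# Orthogonality: linear and quadratic terms are uncorrelated by construction,
# so their fixed-effect estimates can be interpreted independently.
dot = sum(a * b for a, b in zip(ot1, ot2))
print(abs(dot) < 1e-9)  # -> True
```

In a full analysis these terms would enter a mixed-effects regression as predictors of (transformed) fixation proportions, with group condition allowed to interact with each time term.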
Hyunah Baek; Wonil Choi; Peter C. Gordon Reading spaced and unspaced Korean text: Evidence from eye-tracking during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 5, pp. 1072–1085, 2023. @article{Baek2023, In written Korean, spaces appear between phrasal units (“eojeols”). In Experiment 1, participants read sentences in which space information had been manipulated. Results indicated that removing spaces or replacing them with a symbol hindered reading, but this effect was not as disruptive as previously found in English. Experiment 2 presented sentences varying in the proportion of eojeols that ended with postpositional particles as well as the presence/absence of spaces. Results showed that space removal interfered with reading, but its effects were weaker when the sentence contained more postpositional particles. This suggests that postpositional particles provide an extra cue to word segmentation in Korean texts. These findings are discussed in relation to the unique characteristics of the Korean writing system and to the models of eye-movement control during reading in different languages. |
Andreas Bär; Hannah E. Bär; Max Schneider; Fritz Renner The pupil as a window to the mind's eye: Greater emotionality of episodic imagery than verbal visualisation of rewarding activities Journal Article In: Journal of Experimental Psychopathology, vol. 14, no. 2, pp. 1–11, 2023. @article{Baer2023, Episodic imagery has been shown to amplify emotion more than abstract verbal representations. This may prove useful for clinical interventions aiming to motivate adaptive behaviours. However, most findings rely on self-report measures and verbal control conditions not designed to actively prevent automatic engagement in episodic imagery. We thus investigated the difference in emotionality between Episodic Imagery (EI) and Verbal Visualisation (VV) using pupil dilation as a physiological measure of emotional arousal. A sample of 75 participants listened to audio recordings describing activities in a positive manner. Subjects were randomly assigned to the EI or VV condition. Participants in the EI condition imagined performing the described activity, while participants in the VV condition visualised the words constituting the descriptions. As predicted, EI led to greater pupil dilation than VV, independent of mental effort. Self-reported anticipatory reward assessed throughout the task was also greater for EI than VV, yet no difference was found for arousal, anticipated reward or motivation. Our findings extend previous work demonstrating the property of episodic imagery to amplify emotion to a physiological level using pupillometry. However, we did not find a transfer to motivation, which is in line with previous studies using verbal control conditions for episodic imagery. |
Monica Barbir; Mireille J. Babineau; Anne-Caroline Fiévet; Anne Christophe Rapid infant learning of syntactic–semantic links Journal Article In: Proceedings of the National Academy of Sciences, vol. 120, no. 1, pp. 1–6, 2023. @article{Barbira2023, In the second year of life, infants begin to rapidly acquire the lexicon of their native language. A key learning mechanism underlying this acceleration is syntactic bootstrapping: the use of hidden cues in grammar to facilitate vocabulary learning. How infants forge the syntactic–semantic links that underlie this mechanism, however, remains speculative. A hurdle for theories is identifying computationally light strategies that have high precision within the complexity of the linguistic signal. Here, we presented 20-mo-old infants with novel grammatical elements in a complex natural language environment and measured their resultant vocabulary expansion. We found that infants can learn and exploit a natural language syntactic–semantic link in less than 30 min. The rapid speed of acquisition of a new syntactic bootstrap indicates that even emergent syntactic–semantic links can accelerate language learning. The results suggest that infants employ a cognitive network of efficient learning strategies to self-supervise language development. |
Alisa Baron; Vanessa Harwood; Daniel Kleinman; Luca Campanelli; Joseph Molski; Nicole Landi; Julia Irwin Where on the face do we look during phonemic restoration: An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–12, 2023. @article{Baron2023, Face-to-face communication typically involves audio and visual components to the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Further, task demands were manipulated by having listeners respond in a passive (no response) or an active (button press response) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic environmental situations which require one to use visual information to disambiguate the speaker's message, simulating different listening conditions in real-world settings. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formant initial consonant was reduced creating an /a/-like consonant. Consistent with our hypothesis, results revealed that the greatest fixations to the mouth were present in the audiovisual active experiment and visual articulatory information led to a phonemic restoration effect for the /a/ speech token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token within the active experiment was significantly greater than the audiovisual condition. These results suggest that when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when it is available. |
Anthony Beh; Paul V. McGraw; Denis Schluppeck The effects of simulated hemianopia on eye movements during text reading Journal Article In: Vision Research, vol. 204, pp. 1–14, 2023. @article{Beh2023, Vision loss is a common, devastating complication of cerebral strokes. In some cases the complete contra-lesional visual field is affected, leading to problems with routine tasks and, notably, the ability to read. Although visual information crucial for reading is imaged on the foveal region, readers often extract useful parafoveal information from the next word or two in the text. In hemianopic field loss, parafoveal processing is compromised, shrinking the visual span and resulting in slower reading speeds. Recent approaches to rehabilitation using perceptual training have been able to demonstrate some recovery of useful visual capacity. As gains in visual sensitivity were most pronounced at the border of the scotoma, it may be possible to use training to restore some of the lost visual span for reading. As restitutive approaches often involve prolonged training sessions, it would be beneficial to know how much recovery is required to restore reading ability. To address this issue, we employed a gaze-contingent paradigm using a low-pass filter to blur one side of the text, functionally simulating a visual field defect. The degree of blurring acts as a proxy for visual function recovery that could arise from restitutive strategies, and allows us to evaluate and quantify the degree of visual recovery required to support normal reading fluency in patients. Because reading ability changes with age, we recruited a group of younger participants, and another with older participants who are closer in age to risk groups for ischaemic strokes. Our results show that changes in patterns of eye movement observed in hemianopic loss can be captured using this simulated reading environment. 
This opens up the possibility of using participants with normal visual function to help identify the most promising strategies for ameliorating hemianopic loss, before translation to patient groups. |
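Beh et al.'s gaze-contingent paradigm low-pass filters the text on one side of fixation to simulate hemianopic field loss. A toy one-dimensional sketch of that manipulation is below; the function names, the moving-average stand-in for a proper low-pass filter, and all pixel values are our own illustration, not the study's implementation:

```python
# Toy sketch of a gaze-contingent hemianopia simulation: pixels to the
# right of the simulated fixation point are low-pass filtered (here a
# simple moving average), mimicking a right-sided blur. The kernel width
# stands in for the degree of "recovered" visual function; all names
# and values are illustrative.

def moving_average(signal, width):
    """Crude low-pass filter: average each sample with its neighbours."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

def simulate_hemianopic_blur(row, fixation_x, kernel_width):
    """Blur everything right of fixation_x; leave the left side intact."""
    left = row[:fixation_x]
    right = moving_average(row[fixation_x:], kernel_width)
    return left + right

row = [0, 255] * 8                 # a high-contrast "text-like" pixel row
blurred = simulate_hemianopic_blur(row, fixation_x=8, kernel_width=5)
print(blurred[:8] == row[:8])      # left of fixation untouched -> True
```

In the real paradigm the blur boundary would be updated on every sample from the eye tracker so that it stays yoked to the current fixation position, and the filtering would be applied in 2-D to the rendered text image.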
Ali Behzadnia; Signy Wegener; Audrey Burki; Elisabeth Beyersmann The role of oral vocabulary when L2 speakers read novel words: A complex word training study Journal Article In: Bilingualism: Language and Cognition, pp. 1–12, 2023. @article{Behzadnia2023, The present study asked whether oral vocabulary training can facilitate reading in a second language (L2). Fifty L2 speakers of English received oral training over three days on complex novel words, with predictable and unpredictable spellings, composed of novel stems and existing suffixes (i.e., vishing, vishes, vished). After training, participants read the novel word stems for the first time (i.e., trained and untrained), embedded in sentences, and their eye movements were monitored. The eye-tracking data revealed shorter looking times for trained than untrained stems, and for stems with predictable than unpredictable spellings. In contrast to monolingual speakers of English, the interaction between training and spelling predictability was not significant, suggesting that L2 speakers did not generate orthographic skeletons that were robust enough to affect their eye-movement behaviour when seeing the trained novel words for the first time in print. |
Robyn Berghoff; Emanuel Bylund L2 activation during L1 processing is increased by exposure but decreased by proficiency Journal Article In: International Journal of Bilingualism, pp. 1–15, 2023. @article{Berghoff2023, Aims: The study investigates the effects of L2 proficiency and L2 exposure on L2-to-L1 cross-language activation (CLA) in L1-dominant bilinguals. In so doing, it tests the predictions made by prominent models of the bilingual lexicon regarding how language experience modulates CLA. Design: The participants (27 L1-dominant L1 English–L2 Afrikaans speakers) completed a visual world eye-tracking task, conducted entirely in English, in which they saw four objects on a screen: a target object, which they were instructed to click on; a competitor object, whose Afrikaans label overlapped phonetically at onset with the English target object label; and two unrelated distractors. Language background data were collected using the Language History Questionnaire 3.0. Analysis: A growth curve analysis was performed to investigate the extent to which the background variables modulated looks to the Afrikaans competitor item versus to the two unrelated distractor items. Findings: Increased L2 exposure was associated with greater CLA, which is consistent with models suggesting that exposure modulates the likelihood and speed with which a linguistic item becomes activated. Moreover, CLA was reduced at higher levels of L2 proficiency, which aligns with accounts of the bilingual lexicon positing that parasitism of the L2 on the L1 is reduced at higher proficiency levels, leading to reduced CLA. Originality: L2 activation during L1 processing and the variables that modulate it are not well documented, particularly among L1 speakers with limited proficiency in and exposure to the L2. Significance: The findings contribute to the evaluation of competing accounts of bilingual lexical organization. |
Elisabeth Beyersmann; Signy Wegener; Nenagh Kemp That's good news: Semantic congruency effects in emoji processing Journal Article In: Journal of Media Psychology, vol. 35, no. 1, pp. 17–27, 2023. @article{Beyersmann2023, The use of emojis in digital communication has become increasingly popular, but how emojis are processed and integrated in reading processes remains underexplored. This study used eye-tracking to monitor university students' (n = 47) eye movements while reading single-line text messages with a face emoji embedded medially. Messages contained a semantically congruent emoji (e.g., That's good news tell me more), a semantically incongruent emoji (e.g., That's good news tell me more), or a dash (e.g., That's good news - tell me more). Results revealed that emoji congruency did not influence early fixation measures (first fixation duration and gaze duration), nor the probability of regressions. However, there was a significant congruency effect in total reading time and trial dwell time, showing that incongruence incurred a processing cost. The present results extend previously reported semantic congruency effects in sentence reading to the processing of emojis. This result suggests that the semantic content conveyed by face emojis is integrated with sentence context late in processing. We further found that the use of congruent emojis improved the relationship between sender and receiver: Ratings collected separately suggested that message senders were liked better if they included congruent than incongruent emojis. Overall, emojis attracted attention: Participants were twice as likely to fixate on emojis than on dashes, and to fixate on emojis for longer. |
Elisabeth Beyersmann; Signy Wegener; Valentina N. Pescuma; Kate Nation; Danielle Colenbrander; Anne Castles The effect of oral vocabulary training on reading novel complex words Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 6, pp. 1321 –1332, 2023. @article{Beyersmann2023a, Do readers benefit from their knowledge of the phonological form and meaning of stems when seeing them embedded in morphologically complex words for the first time in print? This question was addressed using a word learning paradigm. Participants were trained on novel spoken word stems and their meanings (“tump”). Following training, participants then saw the novel stems for the first time in print, either in combination with a real affix (tumpist, tumpor) or with a non-affix (tumpel, tumpain). Untrained items were also included to test whether the affix effect was modulated by the prior training of the spoken word stems. First, the complex words were embedded in meaningful sentences which participants read as their eye movements were recorded (first orthographic exposure). Second, participants were asked to read aloud and spell each individual complex novel word (second orthographic exposure). Participants spent less time fixating on words that included trained stems compared with untrained stems. However, the training effect did not change depending on whether stems were accompanied by a real affix or a non-affix. In the reading aloud and spelling tasks, there was no effect of training, suggesting that the effect of oral vocabulary training did not extend beyond the initial print exposure. The results indicate that familiarity with spoken stems influences how complex words containing those stems are processed when being read for the first time. Our findings highlight the flexibility and adaptability of the morphological processing system to novel complex words during the first print exposure. |
Elisabeth Beyersmann; Signy Wegener; Jasmine Spencer; Anne Castles Acquisition of orthographic forms via spoken complex word training Journal Article In: Psychonomic Bulletin & Review, vol. 30, no. 2, pp. 739–750, 2023. @article{Beyersmann2023b, This study used a novel word-training paradigm to examine the integration of spoken word knowledge when learning to read morphologically complex novel words. Australian primary school children including Grades 3–5 were taught the oral form of a set of novel morphologically complex words (e.g., (/vɪbɪŋ/, /vɪbd/, /vɪbz/), with a second set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., vib), embedded in sentences, while their eye movements were monitored. Half of the stems were spelled predictably and half were spelled unpredictably. Reading times were shorter for orally trained stems with predictable than unpredictable spellings and this difference was greater for trained than untrained items. These findings suggest that children were able to form robust orthographic expectations of the embedded morphemic stems during spoken word learning, which may have occurred automatically without any explicit control of the applied mappings, despite still being in the early stages of reading development. Following the sentence reading task, children completed a reading-aloud task where they were exposed to the novel orthographic forms for a second time. The findings are discussed in the context of theories of reading acquisition. |
Bruno Bianchi; Rodrigo Loredo; María Fonseca; Julia Carden; Virginia Jaichenco; Titus von der Malsburg; Diego E. Shalom; Juan Kamienkowski Neural bases of predictions during natural reading of known statements: An electroencephalography and eye movements co-registration study Journal Article In: Neuroscience, vol. 519, pp. 131–146, 2023. @article{Bianchi2023, Predictions of incoming words performed during reading have an impact on how the reader moves their eyes and on the electrical brain potentials. Eye tracking (ET) experiments show that less predictable words are fixated for longer periods of time. Electroencephalography (EEG) experiments show that these words elicit a more negative potential around 400 ms (N400) after the word onset when reading one word at a time (foveated reading). Nevertheless, there was no N400 potential during the foveated reading of previously known sentences (memory-encoded), which suggests that the prediction of words from memory-encoded sentences is based on different mechanisms than predictions performed on common sentences. Here, we performed an ET-EEG co-registration experiment where participants read common and memory-encoded sentences. Our results show that the N400 potential disappears when the reader recognises the sentence. Furthermore, time–frequency analyses show a larger alpha lateralisation and a beta power increase for memory-encoded sentences. This suggests a more distributed attention and an active maintenance of the cognitive set, in concordance with the predictive coding framework. |
Christina M. Blomquist; Rochelle S. Newman; Jan Edwards The development of spoken word recognition in informative and uninformative sentence contexts Journal Article In: Journal of Experimental Child Psychology, vol. 227, pp. 1–10, 2023. @article{Blomquist2023, Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical–semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb (e.g., draws) to predict an upcoming word (picture) and suppress competition from similar-sounding words (pickle) develops throughout the school-age years. Eye movements of children from two age groups (5–6 years and 9–10 years) were recorded while the children heard a sentence with an informative or neutral verb (The brother draws/gets the small picture) in which the final word matched one of a set of four pictures, one of which was a cohort competitor (pickle). Both groups demonstrated use of the informative verb to more quickly access the target word and suppress cohort competition. Although the age groups showed similar ability to use semantic context to facilitate processing, the older children demonstrated faster lexical access and more robust cohort suppression in both informative and uninformative contexts. This suggests that development of word-level processing facilitates access of top-down linguistic cues that support more efficient spoken language processing. Whereas developmental differences in the use of semantic context to facilitate lexical access were not explained by vocabulary knowledge, differences in the ability to suppress cohort competition were explained by vocabulary. 
This suggests a potential role for vocabulary knowledge in the resolution of lexical competition and perhaps the influence of lexical competition dynamics on vocabulary development. |
Christina Blomquist; Bob McMurray The development of lexical inhibition in spoken word recognition Journal Article In: Developmental Psychology, vol. 59, no. 1, pp. 186–206, 2023. @article{Blomquist2023a, As a spoken word unfolds over time, similar sounding words (cap and cat) compete until one word “wins”. Lexical competition becomes more efficient from infancy through adolescence. We examined one potential mechanism underlying this development: lexical inhibition, by which activated candidates suppress competitors. In Experiment 1, younger (7–8 years) and older (12–13 years) children heard words (cap) in which the onset was manipulated to briefly boost competition from a cohort competitor (cat). This was compared to a condition with a nonword (cack) onset that would not inhibit the target. Words were presented in a visual world task during which eye movements were recorded. Both groups showed less looking to the target when perceiving the competitor-splice relative to the nonword-splice, showing engagement of lexical inhibition. Exploratory analyses of linguistic adaptation across the experiment revealed that older children demonstrated consistent lexical inhibition across the experiment and younger children did not, initially showing no effect in the first half of trials and then a robust effect in the latter half. In Experiment 2, adults also displayed consistent lexical inhibition in the same task. These findings suggest that younger children do not consistently engage lexical inhibition in typical listening but can quickly bring it online in response to certain linguistic experiences. Computational modeling showed that age-related differences are best explained by increased engagement of inhibition rather than growth in activation. These findings suggest that continued development of lexical inhibition in later childhood may underlie increases in efficiency of spoken word recognition. |
Rolando Bonandrini; Eraldo Paulesu; Daniela Traficante; Elena Capelli; Marco Marelli; Claudio Luzzatti Lateralized reading in the healthy brain: A behavioral and computational study on the nature of the visual field effect Journal Article In: Neuropsychologia, vol. 180, pp. 1–16, 2023. @article{Bonandrini2023, Despite its widespread use to measure functional lateralization of language in healthy subjects, the neurocognitive bases of the visual field effect in lateralized reading are still debated. Crucially, the lack of knowledge on the nature of the visual field effect is accompanied by a lack of knowledge on the relative impact of psycholinguistic factors on its measurement, thus potentially casting doubts on its validity as a functional laterality measure. In this study, an eye-tracking-controlled tachistoscopic lateralized lexical decision task (Experiment 1) was administered to 60 right-handed and 60 left-handed volunteers and word length, orthographic neighborhood, word frequency, and imageability were manipulated. The magnitude of the visual field effect was bigger in right-handed than in left-handed participants. Across the whole sample, a visual field-by-frequency interaction was observed, whereby a comparatively smaller effect of word frequency was detected in the left visual field/right hemisphere (LVF/RH) than in the right visual field/left hemisphere (RVF/LH). In a subsequent computational study (Experiment 2), efficient (LH) and inefficient (RH) activation of lexical orthographic nodes was modelled by means of the Naïve Discriminative Learning approach. Computational data simulated the effect of visual field and its interaction with frequency observed in Experiment 1. Data suggest that the visual field effect can be biased by word frequency. Less distinctive connections between orthographic cues and lexical/semantic output units in the RH than in the LH can account for the emergence of the visual field effect and its interaction with word frequency. |
Schea Fissel Brannick; Emily Sebranek; Emily Anderson; Ileana Ratiu; Arianna N. LaCroix Empathy interacts with second language proficiency to modify executive control of attention to social information. Journal Article In: Translational Issues in Psychological Science, vol. 9, no. 4, pp. 444–459, 2023. @article{Brannick2023, A large body of research suggests that bilinguals show a unique, lifelong relationship between language experience and executive control, but the nature of this relationship is unclear as much of this work has not addressed the role of social functioning. The purpose of this study was to clarify the relationship between language experience and social functioning by first exploring relationships between second language experience and two indices of social functioning: empathy and social cognition. We then explored whether these variables impact executive control of attention to incongruent trials over time. Thirty-eight adults with a range of second language experience completed surveys of language experience, empathy, and social cognition, as well as a traditional Flanker task and a socially modified version of the Flanker task. Reaction times, fixation counts, and blinking counts were measured by trial on each Flanker task. Second language experience did not predict empathy or social cognition, which was contrary to what was predicted. On the Flanker tasks, greater empathy interacted with bilingualism to improve executive control and behavioral responses to incongruent social stimuli, whereas empathy combined with worse social cognition broadened attention to incongruent social stimuli at the cost of greater inhibitory demands and impaired performance. Highly empathetic bilinguals showed unique temporal patterns of broadening attention to incongruent stimuli early in trials, which then enhanced executive control and behavioral performance in subsequent trials. 
These findings point to the importance of measuring social variables in bilingual and clinical populations. |
Jon Burnsky; Franziska Kretzschmar; Erika Mayer; Adrian Staub In: Language, Cognition and Neuroscience, vol. 38, no. 6, pp. 821–842, 2023. @article{Burnsky2023, Two eye movement/EEG co-registration experiments investigated effects of predictability, visual contrast, and parafoveal preview in normal reading. Replicating previous studies, in Experiment 1 contrast and predictability additively influenced fixation durations, and in Experiment 2 invalid preview eliminated the predictability effect on early eye movement measures. In both experiments, predictability influenced the amplitude of the N400 component of the fixation-related potential. In Experiment 1, visual contrast did not influence the N400, and in Experiment 2, the effect of predictability on the N400 was larger with invalid preview, in opposition to the eye movement pattern. The N400 may reflect a late process of accessing conceptual representations while the duration of the eyes' fixation on a word is sensitive to the difficulty of perceptual encoding and early stages of word recognition. The effects of predictability on both fixation duration and the N400 suggest an influence of this variable at two distinct processing stages. |
Juan Escalante; Grant Eckstein; Troy L. Cox; Steven Luke Multiple-choice reading behaviors of ESL students: An eye-tracking study Journal Article In: TESOL Communications, vol. 2, no. 1, pp. 1973–1974, 2023. @article{Escalante2023, Only recently has eye-tracking been used to investigate test-taker reading behavior, and results have been primarily used to confirm a range of cognitive tasks elicited by test items. This study explores test-taker reading behavior for its own sake by describing how ESL readers of different proficiency levels behaviorally view multiple-choice passages and test items at different difficulty levels. Data were gathered from 51 students at three proficiency levels attending a university-sponsored intensive English program (IEP). Participants read eight validated reading comprehension items at varying difficulty levels while their eye movements were recorded on the passage, multiple-choice stem, correct answer, and distractors. Reading behavior demonstrated that language proficiency had a limited effect while passage difficulty had a stronger effect on reading behavior: participants gave less visual attention to the reading passage and correct answers within easier items and when they had higher language proficiency. The interaction of proficiency and item difficulty on reading behavior is important in understanding how learners experience tests. |
Michael A. Eskenazi Best practices for cleaning eye movement data in reading research Journal Article In: Behavior Research Methods, pp. 1–11, 2023. @article{Eskenazi2023, One challenge that comes with studying eye movement behavior is deciding how to clean the eye movement data (e.g., fixation durations) before conducting analyses. Reading researchers must decide which data cleaning methods they will use and which thresholds they will set to remove eye movements that are not reflective of lexical processing. The purpose of this project was to determine what data cleaning methods are typically used and if there are any consequences of using different data cleaning methods. In the first study, an analysis of 192 recently published articles indicated that there is inconsistency in the reporting and application of data cleaning methods. In the second study, three different data cleaning methods were applied based on the literature analysis in the first study. Analyses were conducted to determine the impact of different data cleaning methods on three commonly studied effects in reading research (frequency, predictability, and length). Overall, standardized estimates decreased for each effect when more data were removed; however, removing more data also resulted in decreased variance. As a result, effects remained significant with each data cleaning method, and simulated power remained high for both a moderate and small sample size. Effect sizes remained consistent for most effects but decreased for the length effect as more data were removed. Seven suggestions are provided that are based on open science practices with the intention of helping researchers, reviewers, and the field as a whole. |
Nikki G. Fackler; Peter C. Gordon Mask-related costs in measuring preview benefit: Evidence from a distributional analysis based on target word reading times Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 7, pp. 2475–2487, 2023. @article{Fackler2023, Skilled reading involves processing the upcoming word in parafoveal vision before it is fixated, leading to shorter fixations on that word. This phenomenon, parafoveal preview benefit, is a key component of theoretical models of reading; it is measured using the invisible boundary paradigm, in which reading times on a target word are compared for instances when preview is accurate and when the target word is masked while in the parafovea. However, parafoveal masks have been shown to induce unintentional processing costs, thereby inflating measures of preview benefit. The degraded mask has been explored as a potential solution to this problem, leading to mixed results. While previous work has analyzed the preview effect by comparing mean reading times on the target word, the present study provides a more comprehensive analysis by examining the distribution of the preview effect across target word fixation times for unrelated and degraded masks. Participants read sentences containing target words whose preview was either identical, unrelated, or degraded, and their eye movements were recorded. Analyses revealed that although there were no mean differences between reading times for the unrelated and degraded conditions, the pattern of the effects varied as a function of target word fixation times. Unrelated masks resulted in positively sloped generally linear delta plots, while degraded masks resulted in relatively flat delta plots for fixations longer than 200 ms. These differences suggest that different cognitive mechanisms are involved in the processing of the two mask types. Implications for understanding and measuring preview benefit are discussed. |
Mojgan Farahani; Vijay Parsa; Philip C. Doyle Auditory-perceptual and pupillometric evaluation of vocal roughness and listening effort in tracheoesophageal speech Journal Article In: Journal of Voice, pp. 1–16, 2023. @article{Farahani2023, Objectives: This study evaluated auditory-perceptual judgments of perceived vocal roughness (VR) and listening effort (LE) along with pupillometric responses in response to speech samples produced by tracheoesophageal (TE) talkers. Methods: Twenty normal-hearing, naive young adults (eight men and twelve women) served as listeners. Listeners were divided into two groups: (1) a with-anchor (WA) group (four men and six women) and (2) a no-anchor (NA) group (four men and six women). All were presented with speech samples produced by twenty TE talkers; listeners evaluated two auditory-perceptual dimensions—VR and LE—using visual analog scales. Anchors were provided to the WA group as an external referent for their ratings. In addition, during the auditory-perceptual task, each listener's pupil reactions also were recorded with peak pupil dilation (PPD) measures extracted as a physiologic indicator associated with the listening task. Results: High interrater reliability was obtained for both the WA and NA groups. High correlations also were observed between auditory-perceptual ratings of roughness and LE, and between PPD values and ratings of both dimensions for the WA group. The inclusion of an anchor during the auditory-perceptual task improved interrater reliability ratings, but it also imposed an increased demand on listeners. Conclusions: Data obtained offer insights into the relationship between subjective indices of voice quality (ie, auditory-perceptual evaluation) and physiologic responses (PPD) to the abnormal voice quality that characterizes TE talkers. Furthermore, these data provide information on the inclusion/exclusion of audio anchors and potential increases in listener demand in response to abnormal voice quality. |
Argyro Fella; Maria Loizou; Christoforos Christoforou; Timothy C. Papadopoulos Eye movement evidence for simultaneous cognitive processing in reading Journal Article In: Children, vol. 10, no. 12, pp. 1–17, 2023. @article{Fella2023, Measuring simultaneous processing, a reliable predictor of reading development and reading difficulties (RDs), has traditionally involved cognitive tasks that test reaction or response time, which only capture the efficiency at the output processing stage and neglect the internal stages of information processing. However, with eye-tracking methodology, we can reveal the underlying temporal and spatial processes involved in simultaneous processing and investigate whether these processes are equivalent across chronological or reading age groups. This study used eye-tracking to investigate the simultaneous processing abilities of 15 Grade 6 and 15 Grade 3 children with RDs and their chronological-age controls (15 in each Grade). The Grade 3 typical readers were used as reading-level (RL) controls for the Grade 6 RD group. Participants were required to listen to a question and then point to a picture among four competing illustrations demonstrating the spatial relationship raised in the question. Two eye movements (fixations and saccades) were recorded using the EyeLink 1000 Plus eye-tracking system. The results showed that the Grade 3 RD group produced more and longer fixations than their CA controls, indicating that the pattern of eye movements of young children with RD is typically deficient compared to that of their typically developing counterparts when processing verbal and spatial stimuli simultaneously. However, no differences were observed between the Grade 6 groups in eye movement measures. Notably, the Grade 6 RD group outperformed the RL-matched Grade 3 group, yielding significantly fewer and shorter fixations. 
The discussion centers on the role of the eye-tracking method as a reliable means of deciphering the simultaneous cognitive processing involved in learning. |
Leigh B. Fernandez; Ricarda Bothe; Shanley E. M. Allen The role of L1 reading direction on L2 perceptual span: An eye-tracking study investigating Hindi and Urdu speakers Journal Article In: Second Language Research, vol. 39, no. 2, pp. 1–23, 2023. @article{Fernandez2023b, In the current study we used the gaze-contingent moving window paradigm to directly compare the second language (L2) English perceptual span of two groups that speak languages with essentially the same lexicon and grammar but crucially with different writing directions (and scripts): Hindi (read left to right) and Urdu (read right to left). This is the first study to directly compare first language (L1) speakers of languages that differ primarily in reading direction in a common L2, English. While Urdu speakers had a slightly faster reading rate, we found no additional differences between Hindi and Urdu speakers when reading L2 English; both groups showed a perceptual span between 9 and 11 characters to the right of the fixation based on saccade length. This suggests little to no influence of L1 reading direction on L2 perceptual span, but rather that L2 perceptual span is influenced by allocation of attention during reading. Our data are in line with research by Leung et al. (2014) finding that L2 speakers have a smaller perceptual span than native speakers (L1 perceptual span is approximately 15 characters to the right of the fixation). This most likely stems from the increased demands associated with reading in a second language, which led to a reduction in the amount of attention that can be allocated outside of the current fixation. |
Leigh B. Fernandez; Christoph Scheepers; Shanley E. M. Allen Cross-language semantic and orthographic parafoveal processing by bilingual L1 German-L2 English readers Journal Article In: Bilingualism: Language and Cognition, pp. 1–15, 2023. @article{Fernandez2023, In a recent study, Fernandez et al. (2021) investigated parafoveal processing in L1 English and L1 German-L2 English readers using the gaze contingent boundary paradigm (Rayner, 1975). Unexpectedly, L2 readers incurred interference from a non-cognate translation parafoveal mask (arrow vs. pfeil), but derived a benefit from a German orthographic parafoveal mask (arrow vs. pfexk) when reading in English. The authors argued that bilingual readers incurred a switching cost from the complete German word, and derived a benefit by keeping both lexicons active from the partial German word. In this registered report, we further test this finding with L1 German-L2 English participants using improved items, but with the sentences presented in German. We were able to replicate the non-cognate translation interference but not the orthographic facilitation. Follow-up comparisons showed that all parafoveal masks evoked similar inhibition, suggesting that bilingual readers do not process non-cognate semantic or orthographic information parafoveally. |
Leigh B. Fernandez; Agnesa Xheladini; Shanley E. M. Allen Proficient L2 readers do not have a risky reading strategy Journal Article In: Linguistic Approaches to Bilingualism, vol. 13, no. 6, pp. 854–872, 2023. @article{Fernandez2023a, Proficient first-language (L1) readers of alphabetic languages that are read left-to-right typically have a perceptual span of 3–4 characters to the left and 14–15 characters to the right of the foveal fixation. Given that second-language (L2) processing requires more cognitive resources, we hypothesize that L2ers will have a smaller perceptual span than L1ers, and may rely on a compensatory risky reading strategy with a more symmetrical perceptual span similar to that seen in older L1 adults. Here, we test the size and symmetry of the perceptual span in German L1/English L2ers reading in English. We manipulate the amount of information available (3, 6, or 9 characters to the left; 3, 9, or 15 characters to the right) during reading, and also account for the influence of English skills. Results show that L2ers benefit from an increase of window size from 3 to 6 characters to the left, and from 3 to 9 characters to the right, with higher-skilled L2ers further benefiting from an increase to 15 characters to the right. Contrary to our hypothesis, proficient L2ers exhibit an asymmetric perceptual span similar to college-aged L1ers and do not employ a compensatory risky reading strategy. This suggests that L1 and L2 language processing are not qualitatively different, but are rather modulated by individual differences. |
Laura Fernández-Arroyo; Nuria Sagarra; Kaylee Fernández Differential effects of language proficiency and use on L2 lexical prediction Journal Article In: The Mental Lexicon, pp. 1–26, 2023. @article{FernandezArroyo2023, Language experience is essential for SLA. Yet, studies comparing the role of L2 proficiency and L2 use on L2 processing are scant, and there are no studies examining how these variables modulate learners' ability to generalize grammatical associations to new instances. This study investigates whether L2 proficiency and L2 use affect L2 stress-tense suffix associations (a stressed syllable cuing a present suffix, and an unstressed syllable cuing a preterit suffix) using eye-tracking. Spanish monolinguals and English learners of Spanish varying in L2 proficiency and L2 use saw two verbs (e.g., firma-firmó ‘(s)he signs/signed'), heard a sentence containing one of the verbs, and chose the verb they had heard. Both groups looked at target verbs above chance before hearing the suffix, but the monolinguals did so more accurately and earlier than the learners. The learners recognized past verbs faster than present verbs, were faster with higher than lower L2 proficiency, and later with higher than lower L2 use. Finally, higher L2 proficiency yielded earlier morphological activation but higher L2 use produced later morphological activation, indicating that L2 proficiency and L2 use affect L2 word processing differently. We discuss the contribution of these findings to language acquisition and processing models, as well as models of general cognition. |
Francesca Foppolo; Greta Mazzaggio; Ludovico Franco; Maria Rita Manzini A group of researchers are testing pseudopartitives in Italian: Notional number is not the key to the facts Journal Article In: Glossa Psycholinguistics, vol. 2, no. 1, pp. 1–34, 2023. @article{Foppolo2023, The present paper focuses on pseudopartitive constructions headed by quantifier, collective, or container nouns (like a lot of senators, a group of students, a bottle of pills) followed by a singular or a plural verb. We compared these structures with superficially similar adnominal structures of the form NP1[−PL] prep NP2[PL] (e.g., the level of the lakes is/are) in Italian in an acceptability judgment study (Experiment 1), a forced-choice task (Experiment 2), and an eye-tracking reading study (Experiment 3). Two major findings were consistent across all studies. First, verb agreement in pseudopartitives always patterned differently from controls. Second, although an overall preference for singular verbs was observed, a gradient difference emerged between adnominal controls and pseudopartitives, and among pseudopartitives headed by different nouns. We explain such variability in terms of the availability of a measure interpretation (e.g., pills in the measure of a bottle vs. a bottle containing pills) which is linked to the type of the pseudopartitive's head noun. While in non-pseudopartitive adnominal structures only one parse is allowed by the grammar, in pseudopartitives a given head noun may admit or block a structural configuration in which the plural feature of the embedded constituent (e.g., of students, modifying a group) can determine the plurality of the subsequent verb. We conclude that verb agreement in pseudopartitives is a grammatical phenomenon and, as such, it refers to speakers' grammatical competence and cannot be reduced to agreement attraction of the plural intervener. |
Stefan L. Frank; Anna Aumeistere An eye-tracking-with-EEG coregistration corpus of narrative sentences Journal Article In: Language Resources and Evaluation, pp. 1–17, 2023. @article{Frank2023, We present the Radboud Coregistration Corpus of Narrative Sentences (RaCCooNS), the first freely available corpus of eye-tracking-with-EEG data collected while participants read narrative sentences in Dutch. The corpus is intended for studying human sentence comprehension and for evaluating the cognitive validity of computational language models. RaCCooNS contains data from 37 participants (3 of whom contributed eye-tracking data only) reading 200 Dutch sentences each. Less predictable words resulted in significantly longer reading times and larger N400 sizes, replicating well-known surprisal effects in eye tracking and EEG simultaneously. We release the raw eye-tracking data; the preprocessed eye-tracking data at the fixation, word, and trial levels; the raw EEG after merger with eye-tracking data; and the preprocessed EEG data both before and after ICA-based ocular artifact correction. |
Carina Frondén; Johanna K. Kaakinen Reading Easy Language texts written by public authorities: Evidence from eye tracking Journal Article In: Finnish Journal of Linguistics, vol. 36, no. 2023, pp. 7–36, 2023. @article{Fronden2023, Previous research has shown that word length, frequency and word repetition influence word reading times (Rayner 1998; 2009). Guidelines for Easy Language advise writers to use frequent and short words, and to repeat words instead of using synonyms. However, some of these guidelines are based on research that has been misinterpreted, simplified, or is outdated (Wengelin 2015), and studies focusing on effects of word length, frequency and word repetition among adult readers in the Easy Swedish target group are lacking. This eye-tracking study investigated the reading of Easy Language texts written by public authorities, as well as the effects of word length, frequency, and word repetition on readers in a day centre for people with intellectual disabilities. The results showed significant effects for word length and frequency in all readers. In addition, the effects were significantly greater in the target group than in the control group. The effects for word repetition were not as clear, affecting only one of the reading measures. Furthermore, the study revealed poor comprehension rates in the target group, i.e., when asked, they were not able to reproduce the main contents of the texts. The significantly greater effects of word length and frequency suggest that the related Easy Language guidelines are valid for this group of readers. The poor comprehension rates indicate that the texts were too difficult for these readers. |
Ian Cunnings; Hiroki Fujita Similarity-based interference and relative clauses in second language processing Journal Article In: Second Language Research, vol. 39, no. 2, pp. 539–563, 2023. @article{Cunnings2023a, Relative clauses have long been examined in research on first (L1) and second (L2) language acquisition and processing, and a large body of research has shown that object relative clauses (e.g. ‘The boy that the girl saw') are more difficult to process than subject relative clauses (e.g. ‘The boy that saw the girl'). Although there are different accounts of this finding, memory-based factors have been argued to play a role in explaining the object relative disadvantage. Evidence of memory-based factors in relative clause processing comes from studies indicating that representational similarity influences the difficulty associated with object relatives as a result of a phenomenon known as similarity-based interference. Although similarity-based interference has been well studied in L1 processing, less is known about how it influences L2 processing. We report two studies – an eye-tracking experiment and a comprehension task – investigating interference in the comprehension of relative clauses in L1 and L2 readers. Our results indicated similarity-based interference in the processing of object relative clauses in both L1 and L2 readers, with no significant differences in the size of interference effects between the two groups. These results highlight the importance of considering memory-based factors when examining L2 processing. |
Ian Cunnings; Patrick Sturt Illusions of plausibility in adjuncts and co-ordination Journal Article In: Language, Cognition and Neuroscience, vol. 38, no. 9, pp. 1318–1337, 2023. @article{Cunnings2023, Illusions of grammaticality, where ungrammatical sentences are misperceived as grammatical (e.g. The key to the cabinets were rusty), have been widely studied during language comprehension. Such grammatical illusions have been influential in debate surrounding so-called representational and retrieval-based accounts of linguistic dependency resolution. Whether analogous illusions of plausibility occur at the level of semantic interpretation has only recently begun to be examined, and thus far, these illusions have been restricted to a narrow range of linguistic phenomena. In two eye-tracking during reading experiments (n = 48 in each) and two self-paced reading experiments (n = 192 in each) we examined the possibility of semantic illusions during the processing of adjuncts and co-ordination. Across experiments, our results suggest illusions of plausibility during dependency resolution, though interference effects were clearer in adjuncts than co-ordination. We argue that our findings are more compatible with retrieval-based rather than representational accounts of linguistic dependency resolution. |
Megan M. Dailey; Camille Straboni; Sharon Peperkamp Using allophonic variation in L2 word recognition: French listeners' processing of English vowel nasalization Journal Article In: Second Language Research, pp. 1–22, 2023. @article{Dailey2023, During spoken word processing, native (L1) listeners use allophonic variation to predictively rule out word competitors and speed up word recognition. There is some evidence that second language (L2) learners develop an awareness of allophonic distributions in their L2, but whether they use their knowledge to facilitate word recognition online, like native listeners do, is largely unknown. In an offline gating experiment and an online eye-tracking experiment in the visual world paradigm, we compare advanced French learners of English and a control group of L1 English listeners on their processing of English vowel nasalization during spoken word recognition. In the gating task, the French listeners' performance did not differ from that of the English ones. The eye-tracking results show that French listeners used the allophonic distribution in the same way as English listeners, although they were not as fast. Together, these results reveal that L2 learners can develop novel processing strategies using sounds in allophonic distribution to facilitate spoken word recognition. |
Anne Françoise Chambrier; Marco Pedrotti; Paolo Ruggeri; Jasinta Dewi; Myrto Atzemian; Catherine Thevenot; Catherine Martinet; Philippe Terrier Reading numbers is harder than reading words: An eye-tracking study Journal Article In: Acta Psychologica, vol. 237, pp. 1–11, 2023. @article{Chambrier2023, We recorded the eye movements of adults reading aloud short (four digit) and long (eight to 11 digit) Arabic numerals compared to length-matched words and pseudowords. We presented each item in isolation, at the center of the screen. Participants read each item aloud at their pace, and then pressed the spacebar to display the next item. Reading accuracy was 99 %. Results showed that adults make 2.5 times more fixations when reading short numerals compared to short words, and up to 7 times more fixations when reading long numerals with respect to long words. Similarly, adults make 3 times more saccades when reading short numerals compared to short words, and up to 9 times more saccades when reading long numerals with respect to long words. Fixation duration and saccade amplitude stay almost the same when reading short numerals with respect to short words. However, fixation duration increases by ∼50 ms when reading long numerals (∼300 ms) with respect to long words (∼250 ms), and saccade amplitude decreases by up to 0.83 characters when reading long numerals with respect to long words. The pattern of findings for long numerals—more and shorter saccades as well as more and longer fixations—shows the extent to which reading long Arabic numerals is a cognitively costly task. Within a phonographic writing system, this pattern of eye movements reflects the use of sublexical print-to-sound correspondence rules. The data highlight that reading long numerals is an unautomatized activity and that Arabic numerals must be converted into their oral form by a step-by-step process even by expert readers. |
Elisabetta De Simone; Kristina Moll; Lisa Feldmann; Xenia Schmalz; Elisabeth Beyersmann The role of syllables and morphemes in silent reading: An eye-tracking study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 11, pp. 2493–2513, 2023. @article{DeSimone2023, German skilled readers have been found to engage in morphological and syllable-based processing in visual word recognition. However, the relative reliance on syllables and morphemes in reading multi-syllabic complex words is still unresolved. This study aimed to unveil which of these sublexical units are the preferred units of reading by employing eye-tracking technology. Participants silently read sentences while their eye-movements were recorded. Words were visually marked using colour alternation (Experiment 1) or hyphenation (Experiment 2)—at syllable boundary (e.g., Kir-schen), at morpheme boundary (e.g., Kirsch-en), or within the units themselves (e.g., Ki-rschen). A control condition without disruptions was used as a baseline (e.g., Kirschen). The results of Experiment 1 showed that eye-movements were not modulated by colour alternations. The results of Experiment 2 indicated that hyphens disrupting syllables had a larger inhibitory effect on reading times than hyphens disrupting morphemes, suggesting that eye-movements in German skilled readers are more influenced by syllabic than morphological structure. |
Jack Dempsey; Anna Tsiola; Kiel Christianson Eye-tracking evidence from attachment structures favors a serial model of discourse–sentence interactivity Journal Article In: Discourse Processes, vol. 60, no. 9, pp. 613–633, 2023. @article{Dempsey2023, Many psycholinguistic studies examine how people parse sentences in isolation; however, years of work in discourse processing have shown that sentence-level interpretations are influenced at some stage by discourse-level information. Evidence over the past 20 years remains mixed as to the temporal dynamics of such top-down interactions. In particular, dynamic accounts where readers use the discourse model to generate expectations for certain grammatical structures before and during parsing differ from serial accounts where an algorithmic first-pass processing mechanism precedes integration of sentence material into the discourse model. To test between these two theories, the current study investigates eye-movement behaviors when reading temporarily ambiguous attachment structures following discourses with biases either matching, mismatching, or neutral with respect to the attachment resolution. No evidence was found suggesting readers systematically use discourse information to generate structural expectations, in line with serial accounts of processing at the sentence–discourse interface. Scanpath analyses further highlight the confirmatory nature of rereading when participants encounter discourse continuations that do not fit with prior contexts. |
Shuwen Deng; David R. Reich; Paul Prasse; Patrick Haller; Tobias Scheffer; Lena A. Jäger Eyettention: An attention-based dual-sequence model for predicting human scanpaths during reading Journal Article In: Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. ETRA, pp. 1–24, 2023. @article{Deng2023, Eye movements during reading offer insights into both the reader's cognitive processes and the characteristics of the text that is being read. Hence, the analysis of scanpaths in reading has attracted increasing attention across fields, ranging from cognitive science over linguistics to computer science. In particular, eye-tracking-while-reading data has been argued to bear the potential to make machine-learning-based language models exhibit more human-like linguistic behavior. However, one of the main challenges in modeling human scanpaths in reading is their dual-sequence nature: the words are ordered following the grammatical rules of the language, whereas the fixations are chronologically ordered. As humans do not strictly read from left-to-right, but rather skip or refixate words and regress to previous words, the alignment of the linguistic and the temporal sequence is non-trivial. In this paper, we develop Eyettention, the first dual-sequence model that simultaneously processes the sequence of words and the chronological sequence of fixations. The alignment of the two sequences is achieved by a cross-sequence attention mechanism. We show that Eyettention outperforms state-of-the-art models in predicting scanpaths. We provide an extensive within- and across-dataset evaluation on different languages. An ablation study and qualitative analysis support an in-depth understanding of the model's behavior. |
Félix Desmeules-Trudel; Tania S. Zamuner Spoken word recognition in a second language: The importance of phonetic details Journal Article In: Second Language Research, vol. 39, no. 2, pp. 333–362, 2023. @article{DesmeulesTrudel2023, Spoken word recognition depends on variations in fine-grained phonetics as listeners decode speech. However, many models of second language (L2) speech perception focus on units such as isolated syllables, and not on words. In two eye-tracking experiments, we investigated how fine-grained phonetic details (i.e. duration of nasalization on contrastive and coarticulatory nasalized vowels in Canadian French) influenced spoken word recognition in an L2, as compared to a group of native (L1) listeners. Results from L2 listeners (English-native speakers) indicated that fine-grained phonetics impacted the recognition of words, i.e. they were able to use nasalization duration variability in a way similar to L1-French listeners, providing evidence that lexical representations can be highly specified in an L2. Specifically, L2 listeners were able to distinguish minimal word pairs (differentiated by the presence of phonological vowel nasalization in French) and were able to use variability in a way approximating L1-French listeners. Furthermore, the robustness of the French “nasal vowel” category in L2 listeners depended on age of exposure. Early bilinguals displayed greater sensitivity to some ambiguity in the stimuli than late bilinguals, suggesting that early bilinguals had greater sensitivity to small variations in the signal and thus better knowledge of the phonetic cue associated with phonological vowel nasalization in French, similarly to L1 listeners. |
Lauren M. DiNicola; Wendy Sun; Randy L. Buckner In: Journal of Neurophysiology, vol. 130, no. 6, pp. 1602–1615, 2023. @article{DiNicola2023, A recurring debate concerns whether regions of primate prefrontal cortex (PFC) support domain-flexible or domain-specific processes. Here we tested the hypothesis with functional MRI (fMRI) that side-by-side PFC regions, within distinct parallel association networks, differentially support domain-flexible and domain-specialized processing. Individuals (N = 9) were intensively sampled, and all effects were estimated within their own idiosyncratic anatomy. Within each individual, we identified PFC regions linked to distinct networks, including a dorsolateral PFC (DLPFC) region coupled to the medial temporal lobe (MTL) and an extended region associated with the canonical multiple-demand network. We further identified an inferior PFC region coupled to the language network. Exploration in separate task data, collected within the same individuals, revealed a robust functional triple dissociation. The DLPFC region linked to the MTL was recruited during remembering and imagining the future, distinct from juxtaposed regions that were modulated in a domain-flexible manner during working memory. The inferior PFC region linked to the language network was recruited during sentence processing. Detailed analysis of the trial-level responses further revealed that the DLPFC region linked to the MTL specifically tracked processes associated with scene construction. These results suggest that the DLPFC possesses a domain-specialized region that is small and easily confused with nearby (larger) regions associated with cognitive control. The newly described region is domain specialized for functions traditionally associated with the MTL. 
We discuss the implications of these findings in relation to convergent anatomical analysis in the monkey. NEW & NOTEWORTHY Competing hypotheses link regions of prefrontal cortex (PFC) to domain-flexible or domain-specific processes. Here, using a precision neuroimaging approach, we identify a domain-specialized region in dorsolateral PFC, coupled to the medial temporal lobe and recruited for scene construction. This region is juxtaposed to, but distinct from, broader PFC regions recruited flexibly for cognitive control. Region distinctions align with broader network differences, suggesting that PFC regions gain dissociable processing properties via segregated anatomical projections. |
Kacie Dunham-Carr; Jacob I. Feldman; David M. Simon; Sarah R. Edmunds; Alexander Tu; Wayne Kuang; Julie G. Conrad; Pooja Santapuram; Mark T. Wallace; Tiffany G. Woynaroski The processing of audiovisual speech is linked with vocabulary in autistic and nonautistic children: An ERP study Journal Article In: Brain Sciences, vol. 13, no. 7, pp. 1–15, 2023. @article{DunhamCarr2023, Explaining individual differences in vocabulary in autism is critical, as understanding and using words to communicate are key predictors of long-term outcomes for autistic individuals. Differences in audiovisual speech processing may explain variability in vocabulary in autism. The efficiency of audiovisual speech processing can be indexed via amplitude suppression, wherein the amplitude of the event-related potential (ERP) is reduced at the P2 component in response to audiovisual speech compared to auditory-only speech. This study used electroencephalography (EEG) to measure P2 amplitudes in response to auditory-only and audiovisual speech and norm-referenced, standardized assessments to measure vocabulary in 25 autistic and 25 nonautistic children to determine whether amplitude suppression (a) differs or (b) explains variability in vocabulary in autistic and nonautistic children. A series of regression analyses evaluated associations between amplitude suppression and vocabulary scores. Both groups demonstrated P2 amplitude suppression, on average, in response to audiovisual speech relative to auditory-only speech. Between-group differences in mean amplitude suppression were nonsignificant. Individual differences in amplitude suppression were positively associated with expressive vocabulary through receptive vocabulary, as evidenced by a significant indirect effect observed across groups. The results suggest that efficiency of audiovisual speech processing may explain variance in vocabulary in autism. |
Ciara Egan; Joshua S. Payne; Manon W. Jones In: Neuropsychologia, vol. 184, pp. 1–8, 2023. @article{Egan2023, Readers with developmental dyslexia are known to be impaired in representing and accessing phonology, but their ability to process meaning is generally considered to be intact. However, neurocognitive studies show evidence of a subtle semantic processing deficit in dyslexic readers, relative to their typically-developing peers. Here, we compared dyslexic and typical adult readers on their ability to judge semantic congruency (congruent vs. incongruent) in short, two-word phrases, which were further manipulated for phonological relatedness (alliterating vs. non-alliterating): “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. At the level of behavioural judgement, all readers were less accurate when evaluating incongruent alliterating items compared with incongruent non-alliterating items, suggesting that phonological patterning creates the illusion of semantic congruency (as per Egan et al., 2020). Dyslexic readers showed a similar propensity for this form-meaning relationship despite a phonological processing impairment, as evidenced in the cognitive and literacy assessments. Dyslexic readers also showed an overall reduction in the ability to accurately judge semantic congruency, suggestive of a subtle semantic impairment. Whilst no group differences emerged in the electrophysiological measures, our pupil dilation measurements revealed a global tendency for dyslexic readers to manifest a reduced attentional response to these word stimuli, compared with typical readers. Our results show a broad manifestation of neurocognitive differences in adult dyslexic and typical readers' processing of print, at the level of autonomic arousal as well as in higher-level semantic judgements. |
Ciara Egan; Anna Siyanova-Chanturia; Paul Warren; Manon W. Jones As clear as glass: How figurativeness and familiarity impact simile processing in readers with and without dyslexia Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 2, pp. 231–247, 2023. @article{Egan2023a, For skilled readers, idiomatic language confers faster access to overall meaning compared with non-idiomatic language, with a processing advantage for figurative over literal interpretation. However, currently very little research exists to elucidate whether atypical readers—such as those with developmental dyslexia—show such a processing advantage for figurative interpretations of idioms, or whether their reading impairment implicates subtle differences in semantic access. We wanted to know whether an initial figurative interpretation of similes, for both typical and dyslexic readers, is dependent on familiarity. Here, we tracked typical and dyslexic readers' eye movements as they read sentences containing similes (e.g., as cold as ice), orthogonally manipulated for novelty (e.g., familiar: as cold as ice, novel: as cold as snow) and figurativeness (e.g., literal: as cold as ice [low temperature], figurative: as cold as ice [emotionally distant]), with figurativeness being defined by the sentence context. Both participant groups exhibited a processing advantage for familiar and figurative similes over novel and literal similes. However, compared with typical readers, participants with dyslexia had greater difficulty processing similes both when they were unfamiliar and when the context biased the simile meaning towards a literal rather than a figurative interpretation. Our findings suggest a semantic processing anomaly in dyslexic readers, which we discuss in light of recent literature on sentence-level semantic processing. |
Dina Abdel Salam El-Dakhs; Suhad Sonbul; Jeanette Altarriba How do foreign language learners process L2 emotion words in silent reading? An eye-tracking study Journal Article In: Languages, vol. 8, no. 2, pp. 1–28, 2023. @article{ElDakhs2023, The current study aimed to examine the processing of emotion words in L2 silent reading. We conducted two experiments in which Arab learners of English as a foreign language (EFL) read short English sentences in which target words were embedded. The participants' eye movements were recorded and analyzed. The results of Experiment 1, which compared the processing of emotionally positive versus neutral words by 44 participants, did not reveal any significant effect for word type. The results only showed a few instances of significant interactions between word type and word frequency (i.e., positive words were read faster than neutral words only in the case of high-frequency words) and arousal (i.e., positive words were recognized faster than neutral words only when the target words were low in arousal). The results of Experiment 2, which compared the processing of emotionally negative versus neutral words by 43 participants, only established one effect of word type on the skipping rate which was also modulated by length (i.e., negative words were less likely to be skipped, particularly shorter ones). Moreover, arousal interacted with word type (i.e., only the negative words with low arousal were read faster than neutral words in two eye-movement measures). |
Irina Elgort; Aaron Veldre Word processing before explicit attention: Using the gaze-contingent boundary paradigm in L2 reading research Journal Article In: Research Methods in Applied Linguistics, vol. 2, no. 3, pp. 1–18, 2023. @article{Elgort2023, Eye-movement studies investigating second language (L2) word processing during reading are growing exponentially. However, what information L2 readers are able to process parafoveally is a less researched topic. The gaze-contingent boundary paradigm (Rayner, 1975) allows researchers to manipulate visual information in an upcoming word during reading, tapping into real-time word processing without awareness. This article provides an overview of experimental studies of parafoveal word processing in reading, followed by a methodological review of the use of the boundary paradigm in L2 and bilingual research. We synthesize key methodological details (including preview type, eye-movement measures) and findings of 15 experiments that met our search criteria, concluding that the parafoveal preview effect observed when reading in the first language is also present in L2 reading. We propose how the gaze-contingent boundary paradigm can be used to study L2 lexical knowledge and factors that affect its development. Finally, we provide advice and instructions for designing and conducting boundary paradigm experiments. |
Irina Elgort; Ross Wetering; Tara Arrow; Elisabeth Beyersmann Previewing novel words before reading affects their processing during reading: An eye-movement study with first and second language readers Journal Article In: Language Learning, pp. 1–33, 2023. @article{Elgort2023a, In this study, we examined the effect of previewing unfamiliar vocabulary on the real-time reading behavior of first language (L1) and second language (L2) readers. University students with English as their L1 or L2 read passages with embedded pseudowords. In a within-participant manipulation, definitions of the pseudowords were either previewed before reading or reviewed after reading. Previewing significantly affected reading behavior on early and late eye-movement measures, and the patterns of change on the first three contextual encounters with the pseudowords differed for L1 and L2 readers. On the multiple-choice cloze posttest, encountering novel words in reading followed by definitions resulted in somewhat more accurate responses for L1 but not L2 participants. The learning condition did not affect the results of the meaning recall posttest. These findings contribute to a more nuanced understanding of the relationship between vocabulary support approaches and the reading behavior of L1 and L2 readers when they encounter unfamiliar words in texts. |
Gareth Carrol; Katrien Segaert As easy as cake or a piece of pie? Processing idiom variation and the contribution of individual cognitive differences Journal Article In: Memory & Cognition, pp. 1–18, 2023. @article{Carrol2023, Language users routinely use canonical, familiar idioms in everyday communication without difficulty. However, creativity in idiom use is more widespread than sometimes assumed, and little is known about how we process creative uses of idioms, and how individual differences in cognitive skills contribute to this. We used eye-tracking while reading and cross-modal priming to investigate the processing of idioms (e.g., play with fire) compared with creative variants (play with acid) and literal controls (play with toys), amongst a group of 47 university-level native speakers of English. We also conducted a series of tests to measure cognitive abilities (working memory capacity, inhibitory control, and processing speed). Eye-tracking results showed that in early reading behaviour, variants were read no differently to literal phrases or idioms but showed significantly longer overall reading times, with more rereading required compared with other conditions. Idiom variables (familiarity, decomposability, literal plausibility) and individual cognitive variables had limited effects throughout, although more decomposable phrases of all kinds required less overall reading time. Cross-modal priming—which has often shown a robust idiom advantage in past studies—demonstrated no difference between conditions, but decomposability again led to faster processing. Overall, results suggest that variants were treated more like literal phrases than novel metaphors, with subsequent effort required to make sense of these in the way that was consistent with the context provided. |
Min Chang; Kuo Zhang; Yue Sun; Sha Li; Jingxin Wang The graded predictive pre-activation in Chinese sentence reading: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–8, 2023. @article{Chang2023a, Previous research has revealed that graded pre-activation, rather than specific lexical prediction, is more likely to be the mechanism for the word predictability effect in English. However, whether graded pre-activation underlies the predictability effect in Chinese reading is unknown. Accordingly, the present study tested the generality of the graded pre-activation account in Chinese reading. We manipulated the contextual constraint of sentences and the predictability of target words as independent variables. Readers' eye movement behaviors were recorded via an eye tracker. We examined whether processing an unpredictable word in a strongly constraining context incurs a prediction error cost when this unpredictable word has a predictable alternative. The results showed no evidence of a prediction error cost on the early eye movement measures, a conclusion supported by Bayes factor analyses. The current research indicates that graded predictive pre-activation underlies the predictability effect in Chinese reading. |
Lijuan Chen; Xiaodong Xu; Hongling Lv How literary text reading is influenced by narrative voice and focalization: Evidence from eye movements Journal Article In: Discourse Processes, vol. 60, no. 10, pp. 675–694, 2023. @article{Chen2023e, A fictional story is always narrated from a certain narrative voice and mode of focalization. These core narrative techniques have a major impact on how readers interpret the narrative plot and connect with the characters. This study used eye-tracking to investigate how classic narrative reading is affected by narrative voice and focalization. The results showed that the third-person narrative voice was read more slowly than the first-person narrative voice, especially when the narrative was presented with internal focalization. Importantly, the transition from a first-person to a third-person narrative voice generally resulted in longer reading times, whereas a switch from a third-person to a first-person narrative voice only yielded limited benefits in terms of reduced reading time. These findings provide direct evidence to support the assumption that there is a distinction between the first-person narration and the third-person narration and demonstrate the important role of narrative voice and focalization in understanding narrative texts. |
Mingjing Chen; Jiamei Lu The role of format familiarity and word frequency in Chinese reading Journal Article In: Journal of Eye Movement Research, vol. 16, no. 4, pp. 1–22, 2023. @article{Chen2023g, For Chinese readers, reading from left to right is the norm, while reading from right to left is unfamiliar. This study comprises two experiments investigating how format familiarity and word frequency affect Chinese reading. Experiment 1 examines the roles of format familiarity (reading from left to right is the familiar Chinese format, and reading from right to left is the unfamiliar format) and word frequency in vocabulary recognition. Forty students read the same Chinese sentences from left to right and from right to left. Target words were divided into high- and low-frequency words. In Experiment 2, participants engaged in right-to-left reading training for 10 days to test whether their right-to-left reading performance could be improved. The study yields several main findings. First, format familiarity affects vocabulary recognition: participants reading from left to right had shorter fixation times, higher skipping rates, and viewing positions closer to the word center. Second, word frequency affects vocabulary recognition in Chinese reading. Third, right-to-left reading training could improve reading performance. In the early measures, the interaction effect of format familiarity and word frequency was significant. There was also a significant word-frequency effect from left to right but not from right to left. Therefore, word segmentation and vocabulary recognition may be sequential in Chinese reading. |
Shuyuan Chen; Jinzuan Chen; Yanping Liu Are there binocular advantages in Chinese reading? Evidence from eye movements Journal Article In: Scientific Studies of Reading, pp. 1–14, 2023. @article{Chen2023h, Purpose: This study aims to examine whether binocular vision plays a facilitating or impeding role in lexical processing during sentence reading in Chinese. Method: Adopting the revised boundary paradigm, we orthogonally manipulated the parafoveal and foveal viewing conditions (monocular vs. binocular) of target words (high- vs. low-frequency) within sentences. Forty participants (30 females, mean age = 19.9 years) were recruited to read these sentences and their eye movements were monitored. Results: Through directly comparing the eye movement measures in different viewing conditions, the results indicated that compared with monocular viewing, binocular viewing resulted in shorter fixation durations, thereby facilitating lexical processing. Critically, in addition to the higher information encoding speed toward the currently fixated word in the fovea, the more efficient preprocessing of the upcoming text to the right of fixation in the parafovea may also contribute to the superiority of binocular vision over monocular. Conclusion: Our findings provide the first evidence to support the binocular advantages in Chinese reading, which reveals that high-quality visual input from binocular vision plays a vital role in fluent and efficient written text reading. |
Xuemei Chen; Robert J. Hartsuiker Structure prediction occurs when it is needed: Evidence from visual-world structural priming in Dutch comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 6, pp. 941–958, 2023. @article{Chen2023l, Arai et al. (2007) showed that structural priming in the comprehension of English dative sentences only occurred when the verb was repeated between prime and target, suggesting a lexically-dependent mechanism of structure prediction. However, a recent study in Mandarin comprehension found abstract (verb-independent) structural priming and such priming was stronger when the structure was unexpected (e.g., when a verb biased toward the double object [DO] structure is followed by an unexpected prepositional object [PO] structure; Chen et al., 2022). The latter finding of inverse preference priming is consistent with error-based implicit learning accounts, which suggest structural priming is based on learning from prediction errors (Chang et al., 2006). Here we tested the mechanism of structure prediction (lexically-dependent vs. abstract) in four visual-world comprehension experiments in Dutch. Dutch is a Germanic language like English; it is biased toward the PO structure like Mandarin. We not only found structural priming when the verb was repeated, but also when the verb was different: During target sentence processing, comprehenders looked more often at the recipient (predicting a DO structure) than at the theme (predicting a PO structure) after a DO prime and vice versa after a PO prime. Importantly, abstract structural priming only occurred when the target structure was relatively unpredictable. We interpret the inconsistent findings across languages in terms of an effortful process of structure prediction in comprehension (Pickering & Gambi, 2018): it occurs when it is needed to disambiguate the postverbal arguments, but not when it is optional. |
Xuemei Chen; Suiping Wang; Robert J. Hartsuiker Do structure predictions persevere to multilinguals' other languages? Evidence from cross-linguistic structural priming in comprehension Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 4, pp. 653–669, 2023. @article{Chen2023i, Many cross-language sentence processing studies showed structural priming, which suggests a shared representation across languages or separate but interacting representations for each language. To investigate whether multilinguals can rely on such representations to predict structure in comprehension, we conducted two visual-world eye-tracking priming experiments with Cantonese–Mandarin-English multilinguals. Participants were instructed to read aloud prime sentences in either Cantonese, Mandarin, or English; then they heard a target sentence in Mandarin while looking at the corresponding target picture. When prime and target had different verbs, there was within-language structural priming only (Mandarin-to-Mandarin, Experiment 1). But when prime and target had translation-equivalent verbs, there was not only within-language but also between-language priming (only Cantonese-to-Mandarin, Experiment 2). These results indicate that structure prediction between languages in comprehension is partly lexically-based, so that cross-linguistic structural priming only occurs with cognate verbs. |
Tzu Yao Chiu; Denis Drieghe The role of visual crowding in eye movements during reading: Effects of text spacing Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 8, pp. 2834–2858, 2023. @article{Chiu2023, Visual crowding, generally defined as the deleterious influence of clutter on visual discrimination, is a form of inhibitory interaction between nearby objects. While the role of crowding in reading has been established in psychophysics research using rapid serial visual presentation (RSVP) paradigms, how crowding affects additional processes involved in natural reading, including parafoveal processing and saccade targeting, remains unclear. The current study investigated crowding effects on reading via two eye-tracking experiments. Experiment 1 was a sentence-reading experiment incorporating an eye-contingent boundary change in which readers' parafoveal processing was quantified through comparing reading times after valid or invalid information was presented in the parafovea. Letter spacing was jointly manipulated to compare how crowding affects parafoveal processing. Experiment 2 was a passage-reading experiment with a line spacing manipulation. In addition to replicating previously observed letter spacing effects on global reading parameters (i.e., more but shorter fixations with wider spacing), Experiment 1 found an interaction between preview validity and letter spacing indicating that the efficiency of parafoveal processing was constrained by crowding and visual acuity. Experiment 2 found reliable but subtle influences of line spacing. Participants had shorter fixation durations, higher skipping probabilities, and less accurate return sweeps when line spacing was increased.
In addition to extending the literature on the role of crowding to reading in ecologically valid scenarios, the current results inform future research on characterizing the influence of crowding in natural reading and comparing effects of crowding across reader populations. |
Kiel Christianson; Jack Dempsey; Sarah Elizabeth M. Deshaies; Anna Tsiola; Laura P. Valderrama Do readers misassign thematic roles? Evidence from a trailing boundary-change paradigm Journal Article In: Language, Cognition and Neuroscience, vol. 38, no. 6, pp. 872–892, 2023. @article{Christianson2023, We report an eye-tracking experiment with a trailing boundary-change paradigm as people read subject- and object-relative clauses that were either plausible or implausible. We sought to determine whether readers sometimes misassign thematic roles to arguments in implausible, noncanonical sentences. In some sentences, argument nouns were reversed after participants had read them. Thus, implausible noncanonical sentences like “The bird that the worm ate yesterday was small” changed to plausible “The worm that the bird ate was small.” If initial processing generates veridical representations, all changes should disrupt rereading, irrespective of plausibility or syntactic structure. Misinterpretation effects should only arise in offline comprehension. If misassignment of thematic roles occurs during initial processing, differences should be apparent in first-pass reading times, and rereading should be differentially affected by the direction of the text change. Results provide evidence that readers sometimes misassign roles during initial processing and sometimes fail to revise misassignments during rereading. |
Christoforos Christoforou; Maria Theodorou; Argyro Fella; Timothy C. Papadopoulos RAN-related neural-congruency: A machine learning approach toward the study of the neural underpinnings of naming speed Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–15, 2023. @article{Christoforou2023, Objective: Naming speed, behaviorally measured via the serial rapid automatized naming (RAN) test, is one of the most examined underlying cognitive factors of reading development and reading difficulties (RD). However, the unconstrained-reading format of serial RAN has made it challenging for traditional EEG analysis methods to extract neural components for studying the neural underpinnings of naming speed. The present study aims to explore a novel approach to isolate neural components during the serial RAN task that (a) are informative of group differences between children with dyslexia (DYS) and chronological age controls (CAC), (b) improve the power of analysis, and (c) are suitable for deciphering the neural underpinnings of naming speed. Methods: We propose a novel machine-learning-based algorithm that extracts spatiotemporal neural components during serial RAN, termed RAN-related neural-congruency components. We demonstrate our approach on EEG and eye-tracking recordings from 60 children (30 DYS and 30 CAC), under phonologically or visually similar, and dissimilar control tasks. Results: Results reveal significant differences in the RAN-related neural-congruency components between the DYS and CAC groups in all four conditions. Conclusion: RAN-related neural-congruency components capture the neural activity of cognitive processes associated with naming speed and are informative of group differences between children with dyslexia and typically developing children.
Significance: We propose the resulting RAN-related neural-components as a methodological framework to facilitate studying the neural underpinnings of naming speed and their association with reading performance and related difficulties. |
Sarah E. Colby; Bob McMurray Efficiency of spoken word recognition slows across the adult lifespan Journal Article In: Cognition, vol. 240, pp. 1–11, 2023. @article{Colby2023, Spoken word recognition is a critical hub during language processing, linking hearing and perception to meaning and syntax. Words must be recognized quickly and efficiently as speech unfolds to be successfully integrated into conversation. This makes word recognition a computationally challenging process even for young, normal hearing adults. Older adults often experience declines in hearing and cognition, which could be linked by age-related declines in the cognitive processes specific to word recognition. However, it is unclear whether changes in word recognition across the lifespan can be accounted for by hearing or domain-general cognition. Participants (N = 107) responded to spoken words in a Visual World Paradigm task while their eyes were tracked to assess the real-time dynamics of word recognition. We examined several indices of word recognition from early adolescence through older adulthood (ages 11–78). The timing and proportion of eye fixations to target and competitor images reveals that spoken word recognition became more efficient through age 25 and began to slow in middle age, accompanied by declines in the ability to resolve competition (e.g., suppressing sandwich to recognize sandal). There was a unique effect of age even after accounting for differences in inhibitory control, processing speed, and hearing thresholds. This suggests a limited age range where listeners are peak performers. |
Carmen Julia Coloma; Ernesto Guerra; Zulema De Barbieri; Andrea Helo Article comprehension in monolingual Spanish-speaking children with developmental language disorder: A longitudinal eye tracking study Journal Article In: International Journal of Speech-Language Pathology, pp. 1–13, 2023. @article{Coloma2023, Purpose: Article-noun disagreement in spoken language is a marker of children with developmental language disorder (DLD). However, the evidence is less clear regarding article comprehension. This study investigates article comprehension in monolingual Spanish-speaking children with and without DLD. Method: Eye-tracking methodology used in a longitudinal experimental design enabled the examination of real-time article comprehension. At time 1, the children were 40 monolingual Spanish-speaking preschoolers (20 with DLD and 20 with typical language development [TLD]). A year later (time 2), 27 of these children (15 with DLD and 12 with TLD) were evaluated. Children listened to simple phrases while inspecting a four-object visual context. The article in each phrase agreed in number and gender with only one of the objects. Result: At time 1, children with DLD did not use articles to identify the correct image, while children with TLD anticipated the correct picture. At time 2, both groups used the articles' morphological markers, but children with DLD showed a slower and weaker preference for the correct referent compared to their age-matched peers. Conclusion: These findings suggest a later emergence, but a similar developmental trajectory, of article comprehension in children with DLD compared to their peers with TLD. |
Leonardo Concetti; Vincenzo Moscati The unexpected lightness of the main verb: An eye-tracking study on relative clauses and trace reactivation Journal Article In: Qulso, vol. 7220, pp. 45–58, 2023. @article{Concetti2023, A few studies on relative-clause processing report an unexpected facilitatory effect on the matrix verb that follows an Object Relative (ORC) clause (e.g. Staub, Dillon and Clifton Jr. 2017). In this study we present the results of a novel eye-tracking experiment that replicated this effect in Italian. The advantage of ORCs is discussed under the hypothesis that subject-verb agreement in the matrix clause benefits from a general trace-reactivation mechanism, subsumed under activation-based retrieval models (Lewis and Vasishth 2005). |
Claudia Contadini-Wright; Kaho Magami; Nishchay Mehta; Maria Chait In: Journal of Neuroscience, vol. 43, no. 26, pp. 4856–4866, 2023. @article{ContadiniWright2023, Listening in noisy environments requires effort - the active engagement of attention and other cognitive abilities - as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (WABBLE) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions. |
Erin Conwell; Gregor Horvath; Allyson Kuznia; Stephen J. Agauas Developmental consistency in the use of subphonemic information during real-time sentence processing Journal Article In: Language, Cognition and Neuroscience, vol. 38, no. 6, pp. 860–871, 2023. @article{Conwell2023, Apparently homophonous sequences contain acoustic information that differentiates their meanings [Gahl. (2008). Time and thyme are not homophones: The effect of lemma frequency on word durations in spontaneous speech. Language, 84(3), 474–496; Quené. (1992). Durational cues for word segmentation in Dutch. Journal of Phonetics, 20(3), 331–350]. Adults use this information to segment embedded homophones [e.g. ham vs. hamster; Salverda et al. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51–89] in fluent speech. Whether children also do this is unknown, as is whether listeners of any age use such information to disambiguate lexical homophones. In two experiments, 48 English-speaking adults and 48 English-speaking 7 to 10-year-old children viewed sets of four images and heard sentences containing phonemically identical sequences while their eye movements were continuously tracked. As in previous research, adults showed greater fixation of target meanings when the acoustic properties of an embedded homophone were consistent with the target than when they were consistent with the alternate interpretation. They did not show this difference for lexical homophones. Children's behaviour was similar to that of adults, indicating that the use of subphonemic information in homophone processing is consistent over development. |
Frances G Cooley; David Quinto-pozos Supplemental material for examining speech-based phonological recoding during reading for adolescent deaf signers Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 7, pp. 1995–2007, 2023. @article{Cooley2023, Much of the debate regarding literacy development in deaf and hard-of-hearing readers surrounds whether there is dependence on phonological decoding of print to speech for such readers, and the literature is mixed. While some reports of deaf children and adults demonstrate the influence of speech-based processing during reading, others find little to no evidence of speech-sound activation. In order to examine the role of speech-based phonological codes when reading, we utilized eye-tracking to examine eye-gaze behaviors employed by deaf children and a control group of hearing primary-school children when encountering target words in sentences. The target words were of three types: correct, homophonic errors, and nonhomophonic errors. We examined eye-gaze fixations when first encountering target words and, if applicable, when rereading those words. The results revealed that deaf and hearing readers differed in their eye-movement behaviors when rereading the words, but they did not demonstrate differences for first encounters with the words. Hearing readers treated homophonic and nonhomophonic error words differently during their second encounter with the target while deaf readers did not, suggesting that deaf signers did not engage in phonological decoding to the same degree as hearing readers did. Further, deaf signers performed fewer overall regressions to target words than hearing readers, suggesting that they depended less on regressions to resolve errors in the text. |
Jason C. Coronel; Jared M. Ott; Austin Hubner; Matthew D. Sweitzer; Samuel Lerner In: Communication Research, vol. 50, no. 1, pp. 3–29, 2023. @article{Coronel2023, Person-to-person communication is ubiquitous in everyday life, yet the literature on framing has not examined how the content and number of frames change when transmitted across individuals. In Study 1, we use the serial reproduction paradigm to examine how person-to-person communication and message length influence the number of frames in the information environment. In Study 2, we use eye movement monitoring to examine whether individuals direct greater attention to pro- or counter-attitudinal frames in a competitive framing environment. We find that the process of retelling frames from person to person can transform an environment containing multiple competing frames into an environment with a single frame. This is important given work showing that framing effects in competitive environments are more likely to cancel out. Furthermore, message length and prior attitudes play important roles in determining whether individuals direct attention to, remember, and transmit frames. |
Ruth E. Corps; Meijian Liao; Martin J. Pickering Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study Journal Article In: Bilingualism: Language and Cognition, vol. 26, pp. 231–243, 2023. @article{Corps2023a, Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice... ) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages. |
Ruth E. Corps; Fang Yang; Martin J. Pickering Evidence against egocentric prediction during language comprehension Journal Article In: Royal Society Open Science, vol. 10, no. 12, pp. 1–12, 2023. @article{Corps2023, Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear… ) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to. |
M. Eric Cui; Björn Herrmann Eye movements decrease during effortful speech listening Journal Article In: Journal of Neuroscience, vol. 43, no. 32, pp. 5856–5869, 2023. @article{Cui2023, Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry—the most used approach to assess listening effort—has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants from both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting pupillometric measures may not be as effective for the assessments of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as frontal eye field and superior colliculus, are modulated when listening is effortful. |
Xin Cui; Xiaoming Jiang; Hongwei Ding Affective prosody guides facial emotion processing Journal Article In: Current Psychology, vol. 42, no. 27, pp. 23891–23902, 2023. @article{Cui2023b, Previous studies have reported the "emotional congruency effect (ECE)" in cross-modal emotion processing, claiming that multimodal congruent emotional signals will enhance emotion processing, yet few studies have shown how this effect unfolds dynamically over time and whether it is achieved in the same way across language and cultural backgrounds. We adopted the eye-tracking technique to investigate whether and how the auditory emotional signal influences the visual processing of emotional faces according to ECE. We explored this issue by asking thirty-two native Mandarin speakers to scan a visual array of four types of emotional faces while listening to affective prosody matching one of the four emotions. To eliminate the potential confounding from lexico-semantic information, the affective prosody was pronounced in meaningless di-syllable clusters. Results of the experiment indicate that (1) participants paid more attention to happy faces at first glance and their attention shifted to angry and sad faces over time. (2) Consistent with findings in English-speaking settings, ECE appeared in Mandarin-speaking settings, but took effect earlier for happy faces and persisted across all emotions as the signal unfolded. Based on the results, we conclude that the processing time differs across emotion types and therefore ECE takes effect at different temporal points according to the emotion type. Finally, we suggest that language and cultural experience may shape the processing time of different emotions. |
Yaqiong Cui Eye movements of second language learners when reading spaced and unspaced Chinese texts Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–13, 2023. @article{Cui2023a, Unlike English, Chinese does not have interword spacing in written texts, which poses difficulties for Chinese-as-a-second-language (CSL) learners' identification of word boundaries and affects their reading comprehension and vocabulary acquisition. The eye-movement literature has suggested that interword spacing is important in alphabetic languages; examining languages that lack interword spaces, such as Chinese, may thus help to inform theoretical accounts of eye-movement control and word identification during reading. Research investigating the interword spacing effect in reading Chinese showed that adding spacing facilitated CSL learners' reading comprehension and speed as well as vocabulary learning. However, the bulk of this research mainly looked at learning outcomes (off-line measures), with few studies focusing on L2 learners' reading processes. Building on this background, this study seeks to provide a descriptive perspective on the eye movements of CSL learners. In this study, 24 CSL learners with intermediate Chinese proficiency were recruited as the experimental group, and 20 Chinese native speakers were recruited as the control group. The EyeLink 1000 eye tracker was used to record their reading of four segmentation conditions of Chinese texts, namely, the no-space condition, word-spaced condition, non-word-spaced condition, and pinyin-spaced condition. 
Results show that: (1) CSL learners with intermediate Chinese proficiency generally spent less time reading Chinese texts with spaces between words, and they showed more gazes and regressions when reading texts without spaces; (2) Non-word-spaced texts and Pinyin-spaced texts interfere with CSL learners' reading process; and (3) Intermediate CSL learners show consistent eye movement patterns in the normal no-space condition and word-spaced condition. I conclude that word boundary information can effectively guide CSL learners' eye movement behaviors and eye saccade planning, thus improving reading efficiency. |
Daniela Mertzen; Dario Paape; Brian Dillon; Ralf Engbert; Shravan Vasishth Syntactic and semantic interference in sentence comprehension: Support from English and German eye-tracking data Journal Article In: Glossa Psycholinguistics, vol. 2, no. 1, pp. 1–48, 2023. @article{Mertzen2023, A long-standing debate in the sentence processing literature concerns the time course of syntactic and semantic information processing in online sentence comprehension. The default assumption in cue-based models of parsing is that syntactic and semantic retrieval cues simultaneously guide dependency resolution. When retrieval cues match multiple items in memory, this leads to similarity-based interference. Both semantic and syntactic interference have been shown to occur in English. However, the relative timing of syntactic vs. semantic interference remains unclear. In this cross-linguistic investigation of the time course of syntactic vs. semantic interference, the data from two eye-tracking during reading experiments (English and German) suggest that the two types of interference can in principle arise simultaneously during retrieval. However, the data also indicate that semantic cues are evaluated with a small timing lag in German compared to English. This cross-linguistic difference between English and German may be due to German having richer morphosyntactic marking than English, resulting in syntactic cues dominating over semantic cues during dependency resolution. More broadly, our cross-linguistic results pose a challenge for the cue-based retrieval model's default assumption that syntactic and semantic cues are used simultaneously during long-distance dependency formation. Our work also highlights the importance of collecting cross-linguistic data on psycholinguistic phenomena which can potentially advance theory development. |
Diane C. Mézière; Lili Yu; Genevieve McArthur; Erik D. Reichle; Titus Malsburg Scanpath regularity as an index of reading comprehension Journal Article In: Scientific Studies of Reading, vol. 28, no. 1, pp. 79–100, 2023. @article{Meziere2023, Purpose: Recent research on the potential of using eye-tracking to measure reading comprehension ability suggests that the relationship between standard eye-tracking measures and reading comprehension is influenced by differences in task demands between comprehension assessments. We compared standard eye-tracking measures and scanpath regularity as predictors of reading comprehension scores. Method: We used a dataset in which 79 participants (mean age: 22 years, 82% females, 76% monolingual English speakers) were administered three widely-used reading comprehension assessments with varying task demands while their eye movements were monitored: the York Assessment of Reading for Comprehension (YARC), the Gray Oral Reading Test (GORT-5), and the sentence comprehension subtest of the Wide Range Achievement Test (WRAT-4). Results: Results showed that scanpath regularity measures, similarly to standard eye-tracking measures, were influenced by differences in task demands between the three tests. Nevertheless, both types of eye-tracking measures made unique contributions as predictors of comprehension, and the best set of predictors included both standard eye-tracking measures and at least one scanpath measure across tests. Conclusion: The results provide evidence that scanpaths capture differences in eye-movement patterns missed by standard eye-tracking measures. Overall, the results highlight the effect of task demands on eye-movement behavior and suggest that reading goals and task demands need to be considered when interpreting eye-tracking data. |
Diane C. Mézière; Lili Yu; Erik D. Reichle; Titus Malsburg; Genevieve McArthur Using eye-tracking measures to predict reading comprehension Journal Article In: Reading Research Quarterly, vol. 58, no. 3, pp. 425–449, 2023. @article{Meziere2023a, This study examined the potential of eye-tracking as a tool for assessing reading comprehension. We administered three widely used reading comprehension tests with varying task demands to 79 typical adult readers while monitoring their eye movements. In the York Assessment of Reading for Comprehension (YARC), participants were given passages of text to read silently, followed by comprehension questions. In the Gray Oral Reading Test (GORT-5), participants were given passages of text to read aloud, followed by comprehension questions. In the sentence comprehension subtest of the Wide Range Achievement Test (WRAT-4), participants were asked to provide a missing word in sentences that they read silently (i.e., a cloze task). Linear models predicting comprehension scores from eye-tracking measures yielded different results for the three tests. Eye-tracking measures explained significantly more variance than reading-speed data for the YARC (four times better), GORT (three times better), and the WRAT (1.3 times better). Importantly, there was no common strong predictor for all three tests. These results support growing recognition that reading comprehension tests do not measure the same cognitive processes, and that participants adapt their reading strategies to the tests' varying task demands. This study also suggests that eye-tracking may provide a useful alternative for measuring reading comprehension. |
Evelyn Milburn; Michael Walsh Dickey; Tessa Warren; Rebecca Hayes Increased reliance on world knowledge during language comprehension in healthy aging: evidence from verb-argument prediction Journal Article In: Aging, Neuropsychology, and Cognition, vol. 30, no. 1, pp. 1–33, 2023. @article{Milburn2023, Cognitive aging negatively impacts language comprehension performance. However, there is evidence that older adults skillfully use linguistic context and their crystallized world knowledge to offset age-related changes that negatively impact comprehension. Two visual-world paradigm experiments examined how aging changes verb-argument prediction, a comprehension process that relies on world knowledge but has rarely been examined in the cognitive-aging literature. Older adults did not differ from younger adults in their activation of an upcoming likely verb argument, particularly when cued by a semantically-rich agent+verb combination (Experiment 1). However, older adults showed elevated activation of previously-mentioned agents (Experiment 1) and of unlikely but verb-congruent referents (Experiment 2). This is novel evidence that older adults exploit semantic context and world knowledge during comprehension to successfully activate upcoming referents. However, older adults also show elevated activation of irrelevant information, consistent with previous findings demonstrating that older adults may experience greater proactive interference and competition from task-irrelevant information. |
Sara Milligan; Brian Nestor; Martín Antúnez; Elizabeth R. Schotter Out of sight, out of mind: Foveal processing is necessary for semantic integration of words into sentence context Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 5, pp. 687–708, 2023. @article{Milligan2023, Word recognition begins before a reader looks directly at a word, as demonstrated by the parafoveal preview benefit and word skipping. Both low-level form and high-level semantic features can be accessed in parafoveal vision and used to promote reading efficiency. However, words are not recognized in isolation during reading; once a semantic representation is retrieved, it must be integrated with the broader sentence context. One open question about parafoveal processing is whether it is limited to shallow stages of lexico-semantic activation or extends to semantic integration. In the present two-experiment study, we recorded event-related brain potentials in response to a sentence-final word that was presented in foveal or parafoveal vision and was either expected, unexpected, or anomalous in the sentence context. We found that word recognition, indexed by the N400, ensued regardless of perception location, whereas identification of the semantic fit of a word in its sentence context, indexed by the late positive component, was only observed for foveally perceived but not parafoveally perceived words. This pattern was not sensitive to task differences that promote different levels of orthographic scrutiny, as manipulated between the two experiments. These findings demonstrate separate roles for parafoveal and foveal processing in reading. |
Serge Minor; Natalia Mitrofanova; Gustavo Guajardo; Myrte Vos; Gillian Ramchand Aspect processing across languages: A visual world eye-tracking study Journal Article In: Frontiers in Language Sciences, vol. 1, pp. 1–14, 2023. @article{Minor2023a, The study employed a combination of a picture selection task and Visual World eye-tracking to investigate the processing of grammatical aspect (perfective vs. imperfective) in three languages: Russian, Spanish and English. In order to probe into the cognitive representations triggered by the aspectual forms we contrasted visual representations of different temporal portions of telic events—a snapshot of the process stage (ongoing event) and a snapshot of the immediate aftermath of the event/the result state (completed event). In all three languages, the gaze patterns and offline responses revealed a strong preference for representations of ongoing events in the imperfective condition. This confirms that the imperfective forms in all the three languages draw attention to the in-progress portion of a telic event. In the perfective condition, however, we found robust differences. Russian uses verbal prefixes to mark perfective aspect, and our results suggest that perfective telic verbs in Russian strongly highlight the result state of an event. In Spanish, the perfective past tense form (Preterite) also highlights event completion, but to a lesser extent than in Russian—in line with its less restrictive semantics in not requiring an inherent boundary. In contrast to Russian and Spanish, English speakers did not show a preference for representations of completed events in the perfective (Simple Past) condition. This suggests that the English Simple Past form does not encode a preferential cognitive salience for either the activity portion of an event or its result state, and lends support to the analysis of the English Simple Past as a non-aspectual tense form. |
Padraic Monaghan; Seamus Donnelly; Katie Alcock; Amy Bidgood; Kate Cain; Samantha Durrant; Rebecca L. A. Frost; Lana S. Jago; Michelle S. Peter; Julian M. Pine; Heather Turnbull; Caroline F. Rowland Learning to generalise but not segment an artificial language at 17 months predicts children's language skills 3 years later Journal Article In: Cognitive Psychology, vol. 147, pp. 1–13, 2023. @article{Monaghan2023, We investigated whether learning an artificial language at 17 months was predictive of children's natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children's early language development. |
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder In: Reading and Writing, pp. 1–27, 2023. @article{Moss2023, The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously) and sequential (texts were presented first, then questions were shown without text access). Their eye movements were recorded during testing. Working memory capacity (WMC) and centrality of textual information were measured. Participants in the sequential condition had longer first-pass reading times compared to participants in the concurrent condition, while participants in the concurrent condition had longer total processing times per word. In addition, participants with higher WMC had longer total reading times per word. Finally, participants in the sequential condition with higher WMC had longer processing times in central regions. Even among skilled college readers, participants with lower WMC had difficulty adjusting their reading behaviors to meet the task demands such as distinguishing central and peripheral ideas. However, participants with higher WMC increased attention to important text areas. One potential explanation is that participants with higher WMC are better able to construct a coherent mental model of the text, and attending to central text areas is an essential component of mental model formation. Therefore, these results help clarify the relationship between the purpose for reading and mental model development. |
Yoichi Mukai; Juhani Järvikivi; Benjamin V. Tucker The role of phonology-to-orthography consistency in predicting the degree of pupil dilation induced in processing reduced and unreduced speech Journal Article In: Applied Psycholinguistics, vol. 44, no. 5, pp. 784–815, 2023. @article{Mukai2023, The relationship between the ways in which words are pronounced and spelled has been shown to affect spoken word processing, and a consistent relationship between pronunciation and spelling has been reported as a possible cause of unreduced pronunciations being easier to process than reduced counterparts although reduced pronunciations occur more frequently. In the present study, we investigate the effect of pronunciation-to-spelling consistency for reduced and unreduced pronunciations in L1 and L2 listeners of a logographic language. More precisely, we compare L1 and L2 Japanese listeners to probe whether they use orthographic information differently when processing reduced and unreduced speech. Using pupillometry, the current study provides evidence that extends the hypothesis about the role of orthography in the processing of reduced speech. Orthographic realization matters in processing for L1 and L2 advanced listeners. More specifically, how consistent the orthographic realization is with its phonological form (phonology-to-orthography consistency) modulates the extent to which reduced pronunciation induces additional processing costs. The results are further discussed in terms of their implications for how listeners process reduced speech and the role of the orthographic form in speech processing. |
Shingo Nahatame Predicting processing effort during L1 and L2 reading: The relationship between text linguistic features and eye movements Journal Article In: Bilingualism: Language and Cognition, pp. 1–14, 2023. @article{Nahatame2023, Researchers have taken great interest in the assessment of text readability. This study expands on this research by developing readability models that predict the processing effort involved during first language (L1) and second language (L2) text reading. Employing natural language processing tools, the study focused on assessing complex linguistic features of texts, and these features were used to explain the variance in processing effort, as evidenced by eye movement data for L1 or L2 readers of English that were extracted from an open eye-tracking corpus. Results indicated that regression models using the indices of complex linguistic features provided better performance in predicting processing effort for both L1 and L2 reading than the models using simple linguistic features (word and sentence length). Furthermore, many of the predictive variables were lexical features for both L1 and L2 reading, emphasizing the importance of decoding for fluent reading regardless of the language used. |
Mihaela Beatrice Neagu; Abigail A. Kressner; Helia Relaño-Iborra; Per Bækgaard; Torsten Dau; Dorothea Wendt Investigating the reliability of pupillometry as a measure of individualized listening effort Journal Article In: Trends in Hearing, vol. 27, pp. 1–20, 2023. @article{Neagu2023, Recordings of the pupillary response have been used in numerous studies to assess listening effort during a speech-in-noise task. Most studies focused on averaged responses across listeners, whereas less is known about pupil dilation as an indicator of the individuals' listening effort. The present study investigated the reliability of several pupil features as potential indicators of individual listening effort and the impact of different normalization procedures on the reliability. The pupil diameters of 31 normal-hearing listeners were recorded during multiple visits while performing a speech-in-noise task. The signal-to-noise ratios (SNRs) of the stimuli ranged from −12 dB to −4 dB. All listeners were measured twice at separate visits, and 11 were re-tested at a third visit. To examine the reliability of the pupil responses across visits, the intraclass correlation coefficient was applied to the peak and mean pupil dilation and to the temporal features of the pupil response, extracted using growth curve analysis. The reliability of the pupillary response was assessed in relation to SNR and different normalization procedures over multiple visits. The most reliable pupil features were the traditional mean and peak pupil dilation. The highest reliability results were obtained when the data were baseline-corrected and normalized to the individual pupil response range across all visits. Moreover, the present study results showed only a minor impact of the SNR and the number of visits on the reliability of the pupil response. Overall, the results may provide an important basis for developing a standardized test for pupillometry in the clinic. |
Ulrike Nederstigt; Béryl Hilberink-Schulpen Attention to multilingual job ads: An eye-tracking study on the use of English in German job ads Journal Article In: Folia Linguistica, vol. 57, no. 2, pp. 313–343, 2023. @article{Nederstigt2023, In many non-English-speaking countries, English loanwords in job ads seem to be very common. The question is whether this linguistic choice is advantageous, especially when the job advertised does not involve working in an international environment. Previous research of English loanwords in job ads has revealed that their effect in terms of the evaluation of the company, the job and the ad is limited if effects can be shown at all. Suggestions that English loanwords draw readers' attention because this language choice deviates from what readers expect and, in addition, take more processing time (because they are foreign) lack empirical evidence. The eye-tracking and behavioural data of our experiment did not provide any empirical evidence for the attention-drawing function of English loanwords nor an influence on their effectiveness in job ads geared to graduate students in Germany. We suggest that loanwords need a certain amount of processing to be identified as foreign. This means they are different from other salient cues that were shown to draw readers' attention because they are not subject to automatic processes. In addition, our participants were sufficiently proficient in English so that differences in processing time were not reflected in their eye-movement data. |
M. J. Nelson; S. Moeller; M. Seckin; E. J. Rogalski; M. M. Mesulam; R. S. Hurley The eyes speak when the mouth cannot: Using eye movements to interpret omissions in primary progressive aphasia Journal Article In: Neuropsychologia, vol. 184, pp. 1–9, 2023. @article{Nelson2023, Though it may seem simple, object naming is a complex multistage process that can be impaired by lesions at various sites of the language network. Individuals with neurodegenerative disorders of language, known as primary progressive aphasias (PPA), have difficulty with naming objects, and instead frequently say “I don't know” or fail to give a vocal response at all, known as an omission. Whereas other types of naming errors (paraphasias) give clues as to which aspects of the language network have been compromised, the mechanisms underlying omissions remain largely unknown. In this study, we used a novel eye tracking approach to probe the cognitive mechanisms of omissions in the logopenic and semantic variants of PPA (PPA-L and PPA-S). For each participant, we identified pictures of common objects (e.g., animals, tools) that they could name aloud correctly, as well as pictures that elicited an omission. In a separate word-to-picture matching task, those pictures appeared as targets embedded among an array with 15 foils. Participants were given a verbal cue and tasked with pointing to the target, while eye movements were monitored. On trials with correctly-named targets, controls and both PPA groups ceased visual search soon after foveating the target. On omission trials, however, the PPA-S group failed to stop searching, and went on to view many foils “post-target”. As further indication of impaired word knowledge, gaze of the PPA-S group was subject to excessive “taxonomic capture”, such that they spent less time viewing the target and more time viewing related foils on omission trials. 
In contrast, viewing behavior of the PPA-L group was similar to controls on both correctly-named and omission trials. These results indicate that the mechanisms of omission in PPA differ by variant. In PPA-S, anterior temporal lobe degeneration causes taxonomic blurring, such that words from the same category can no longer be reliably distinguished. In PPA-L, word knowledge remains relatively intact, and omissions instead appear to be caused by downstream factors (e.g., lexical access, phonological encoding). These findings demonstrate that when words fail, eye movements can be particularly informative. |
Eva M. Nunnemann; Helene Kreysa; Pia Knoeferle The effects of referential gaze in spoken language comprehension: Human speaker vs. virtual agent listener gaze Journal Article In: Frontiers in Communication, vol. 8, pp. 1–16, 2023. @article{Nunnemann2023, Introduction: Four studies addressed effects of human speaker gaze vs. virtual agent listener gaze on eye movements during spoken sentence comprehension. Method: Participants saw videos in which a static scene depicting three characters was presented on a screen. Eye movements were recorded as participants listened to German subject-verb-object (SVO) sentences describing an interaction between two of these characters. Participants' task was to verify whether the sentence matched a schematic depiction of the event. Two critical factors were manipulated across all four experiments: (1) whether the human speaker—uttering the sentence—was visible, and (2) whether the agent listener was present. Moreover, in Experiments 2 and 4, the target second noun phrase (NP2) was made inaudible, and in Experiments 3 and 4, the gaze time course of the agent listener was altered: it looked at the NP2 referent about 400 ms before the speaker did. These manipulations served to increase the value of the speaker's and listener's gaze cues for correctly anticipating the NP2 referent. Results: Human speaker gaze led to increased fixations of the NP2 referent in all experiments, but primarily after the onset of its mention. Only in Experiment 3 did participants reliably anticipate the NP2 referent, in this case making use of both the human speaker's and the virtual agent listener's gaze. In all other cases, virtual agent listener gaze had no effect on visual anticipation of the NP2 referent, even when it was the exclusive cue. 
Discussion: Such information on the use of gaze cues can refine theoretical models of situated language processing and help to develop virtual agents that act as competent communication partners in conversations with human interlocutors. |
Ryan M. O'Leary; Nicole M. Amichetti; Zoe Brown; Alexander J. Kinney; Arthur Wingfield Congruent prosody reduces cognitive effort in memory for spoken sentences: A pupillometric study with young and older adults Journal Article In: Experimental Aging Research, pp. 1–24, 2023. @article{OLeary2023, Background: In spite of declines in working memory and other processes, older adults generally maintain good ability to understand and remember spoken sentences. In part this is due to preserved knowledge of linguistic rules and their implementation. Largely overlooked, however, is the support older adults may gain from the presence of sentence prosody (pitch contour, lexical stress, intra- and inter-word timing) as an aid to detecting the structure of a heard sentence. Methods: Twenty-four young and 24 older adults recalled recorded sentences in which the sentence prosody corresponded to the clausal structure of the sentence, when the prosody was in conflict with this structure, or when there was reduced prosody uninformative with regard to the clausal structure. Pupil size was concurrently recorded as a measure of processing effort. Results: Both young and older adults' recall accuracy was superior for sentences heard with supportive prosody than for sentences with uninformative prosody or for sentences in which the prosodic marking and clausal structure were in conflict. The measurement of pupil dilation suggested that the task was generally more effortful for the older adults, but with both groups showing a similar pattern of effort-reducing effects of supportive prosody. Conclusions: Results demonstrate the influence of prosody on young and older adults' ability to recall accurately multi-clause sentences, and the significant role effective prosody may play in reducing processing effort. |
Ryan M. O'Leary; Jonathan Neukam; Thomas A. Hansen; Alexander J. Kinney; Nicole Capach; Mario A. Svirsky; Arthur Wingfield In: Trends in Hearing, vol. 27, pp. 1–22, 2023. @article{OLeary2023a, Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative (“time-restoration”) returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants. |
Henri Olkoniemi; Sohvi Halonen; Penny M. Pexman; Tuomo Häikiö Children's processing of written irony: An eye-tracking study Journal Article In: Cognition, vol. 238, pp. 1–18, 2023. @article{Olkoniemi2023, Ironic language is challenging for many people to understand, and particularly for children. Comprehending irony is considered a major milestone in children's development, as it requires inferring the intentions of the person who is being ironic. However, the theories of irony comprehension generally do not address developmental changes, and there are limited data on children's processing of verbal irony. In the present pre-registered study, we examined, for the first time, how children process and comprehend written irony in comparison to adults. Seventy participants took part in the study (35 10-year-old children and 35 adults). In the experiment, participants read ironic and literal sentences embedded in story contexts while their eye movements were recorded. They also responded to a text memory question and an inference question after each story, and children's levels of reading skills were measured. Results showed that, for both children and adults, comprehending written irony was more difficult than comprehending literal text (the "irony effect"), and irony was more challenging for children than for adults. Moreover, although children showed longer overall reading times than adults, processing of ironic stories was largely similar between children and adults. One group difference was that for children, more accurate irony comprehension was qualified by faster reading times whereas for adults more accurate irony comprehension involved slower reading times. Interestingly, both age groups were able to adapt to task context and improve their irony processing across trials. These results provide new insights about the costs of irony and development of the ability to overcome them. |
Henri Olkoniemi; Diane Mézière; Johanna K. Kaakinen Comprehending irony in text: Evidence from scanpaths Journal Article In: Discourse Processes, pp. 1–15, 2023. @article{Olkoniemi2023b, Eyetracking studies have shown that readers reread ironic phrases when resolving their meaning. Moreover, it has been shown that the timecourse of processing ironic meaning is affected by reader's working memory capacity (WMC). Irony is a context-dependent phenomenon but using traditional eye-movement measures it is difficult to analyze processing beyond sentence-level. A promising method to study individual differences in irony processing at the paragraph-level is scanpath analysis. In the present experiment, we analyzed whether individual differences in WMC are reflected in scanpaths during reading ironic stories by combining data from two previous eye-tracking studies (N = 120). The results revealed three different reading patterns: fast-and-linear reading, selective reading, and nonselective rereading. The readers predominantly used the fast-and-linear reading pattern for ironic and literal stories. However, readers were less likely to use the nonselective rereading pattern with ironic than literal texts. The reading patterns for ironic stories were modulated by WMC. Results showed that scanpaths captured differences missed by standard measures, showing it to be a valuable tool to study individual differences in irony processing. |
Tao Gong; Lan Shuai Segmented relations between online reading behaviors, text properties, and reader–text interactions: An eye-movement experiment Journal Article In: Frontiers in Psychology, vol. 13, pp. 1–20, 2023. @article{Gong2023, Purpose: To investigate relations between abilities of readers and properties of words during online sentence reading, we conducted a sentence-reading eye-movement study on young adult English monolinguals from the US who exhibited a wide range of individual differences in standard measures of language and literacy skills. Method: We adopted mixed-effects regression models of gaze measures of early and late print processing stages from sentence onset to investigate possible associations between gaze measures, text properties, and skill measures. We also applied segmented linear regressions to detect the dynamics of identified associations. Results: Our study reported significant associations between (a) gaze measures (first-pass reading time, total reading times, and first-pass regression probability) and (b) interactions of lexical properties (word length or position) and skill measures (vocabulary, oral reading fluency, decoding, and verbal working memory), and confirmed a segmented linear dynamics between gaze measures and lexical properties, which was influenced by skill measures. Conclusion: This study extends the previous work on predictive effects of individual language and literacy skills on online reading behavior, enriches the existing methodology exploring the dynamics of associations between lexical properties and eye-movement measures, and stimulates future work investigating factors that shape such dynamics. |
Alexa S. Gonzalez; Kathryn A. Tremblay; Katherine S. Binder Context facilitates the decoding of lexically ambiguous words for adult literacy learners Journal Article In: Reading and Writing, vol. 36, no. 3, pp. 699–722, 2023. @article{Gonzalez2023, An estimated one-fifth of adults in the United States possess low literacy skills, which includes minimal proficiency in reading and difficulty processing contextual information. One way to study reading behavior of adults with low literacy is through eye movement studies; however, these investigations have been generally limited. Thus, the present study collected eye movement data (e.g., gaze duration, total time, regressions) from adult literacy learners while they read sentences to investigate online reading behavior. We manipulated the lexical ambiguity of the target words, context strength, and context location in the sentences. The role of vocabulary depth, which refers to the deeper understanding of a word in one's vocabulary, was also examined. Results show that adult literacy learners spent more total time reading ambiguous words compared to control words and vocabulary depth was significantly correlated with processing of lexically ambiguous words. Participants with higher depth scores were more sensitive to the complexity of ambiguous words and more effective at utilizing context compared to those with lower depth scores, which is reflected by more total time reading ambiguous words when more informative context was available and more regressions made to the target word by participants with higher depth scores. Overall, there is evidence to demonstrate the benefits of context use in lexical processing, as well as adult learners' sensitivity to changes in lexical ambiguity. |
Julie Gregg; Albrecht W. Inhoff; Xingshan Li Lexical competition influences correct and incorrect visual word recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 5, pp. 1011–1025, 2023. @article{Gregg2023, A growing body of research suggests that visual word recognition is error-prone, and that errors may contribute to inhibitory neighbour frequency effects in word identification and reading. The present study used the neighbourhood frequency effect to examine the relationship between lexical competition and error making during visual word recognition. A novel adaptation of the visual world paradigm (VWP) was used, in which participants selected a briefly presented printed target word from an array containing the target, its higher- or lower-frequency neighbour, an orthographic onset competitor, and an orthographically unrelated distractor word. Analyses of the visual inspection of the arrays suggested that lexical competition occurred when words were correctly identified, as competitors were preferentially viewed as a function of their orthographic similarity with the target, and higher-frequency neighbours were preferentially viewed over lower-frequency neighbours. Orthographic similarity and neighbour frequency also influenced error making. Targets were often mistaken for their neighbours, and these errors were more common for targets with higher-frequency neighbours. The time course of target and neighbour viewing for error trials also provided preliminary evidence for two kinds of errors: early-occurring, perceptual errors and later-occurring selection errors that resulted from unsuccessfully resolved lexical competition. Together, these findings suggest that neighbour frequency effects reflect the contribution of both general lexical competition and occasional errors. |
Junjuan Gu; Junyi Zhou; Yaqian Bao; Jiayu Liu; Manuel Perea; Xingshan Li The effect of transposed-character distance in Chinese reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 3, pp. 464–476, 2023. @article{Gu2023a, Previous research in alphabetic languages has shown that both position (external, internal) and distance (adjacent, nonadjacent) modulate letter position encoding during reading. To examine the generality of this pattern for a comprehensive model of word recognition and reading, we examined these effects during Chinese reading (i.e., an unspaced logographic language). Participants in two experiments read intact sentences and sentences containing transposed-character nonwords while their eye movements were monitored. Experiment 1 manipulated the distance between the transposed characters (adjacent vs. nonadjacent) within three-character words. Reading times were longer when nonadjacent characters were transposed compared with adjacent characters. Also, for adjacent character transpositions, a word-beginning character transposition led to longer reading times than a word-ending character transposition. Experiment 2 orthogonally manipulated character transposition distance (adjacent vs. nonadjacent) and position (beginning vs. last character) within four-character words. Reading times were longer when the transposition involved the first character than when it involved the ending character. Fixation durations on the target regions in the nonadjacent character transposition condition were longer than those in the adjacent character transposition condition. Taken together, these results reveal robust effects of both the initial character position and transposed-character distance in Chinese reading. Thus, the privileged status of the initial character is intrinsically related to how we access lexical information. |
Thomas Günther; Annika Kirschenkern; Axel Mayer; Frederike Steinke; Jürgen Cholewa In: Journal of Speech, Language, and Hearing Research, vol. 66, no. 10, pp. 3907–3924, 2023. @article{Guenther2023, Purpose: Many models of language comprehension assume that listeners predict the continuation of an incoming linguistic stimulus immediately after its onset, based on only partial linguistic and contextual information. Their related developmental models try to determine which cues (e.g., semantic or morpho-syntactic) trigger such prediction, and to which extent, during different periods of language acquisition. One morphosyntactic cue utilized predictively in many languages, inter alia German, is grammatical gender. However, studies of the developmental trajectories of the acquisition of predictive gender processing in German remain few. Method: This study attempts to shed light on the processing strategies used in noun phrase decoding among children acquiring German as their first language by examining their eye movements during a language–picture matching task (N = 78, 5–10 years old). Its aim was to confirm whether the eye movements indicated the presence of age-specific differences in the processing of the gender cue, provided either in isolation or in combination with a semantic cue. Results: The results revealed that German children made use of predictive gender processing strategies from the age of 5 years onward; however, the pace of online gender processing, as well as confidence in the predicted continuation, increased up to the age of 10 years. Conclusion: Predictive processing of gender cues plays a role in German language comprehension even in children younger than 8 years. |
Michael Hahn; Frank Keller Modeling task effects in human reading with neural network-based attention Journal Article In: Cognition, vol. 230, pp. 1–25, 2023. @article{Hahn2023, Research on human reading has long documented that reading behavior shows task-specific effects, but it has been challenging to build general models predicting what reading behavior humans will show in a given task. We introduce NEAT, a computational model of the allocation of attention in human reading, based on the hypothesis that human reading optimizes a tradeoff between economy of attention and success at a task. Our model is implemented using contemporary neural network modeling techniques, and makes explicit and testable predictions about how the allocation of attention varies across different tasks. We test this in an eyetracking study comparing two versions of a reading comprehension task, finding that our model successfully accounts for reading behavior across the tasks. Our work thus provides evidence that task effects can be modeled as optimal adaptation to task demands. |
Yuqi Hao; Yingyi Luo; Kenneth Han-yang Lin-Hong; Ming Yan Shared translation in second language activates unrelated words in first language Journal Article In: Psychonomic Bulletin & Review, pp. 1–11, 2023. @article{Hao2023, The present study explored bilingual coactivation during natural monolingual sentence-reading comprehension. Native Chinese readers who had learned Japanese as a second language and those who had not learned it at all were tested. The results showed that unrelated Chinese word pairs that shared a common Japanese translation could parafoveally prime each other. Critically, this translation-related preview effect was modulated by the readers' language-learning experiences. It was found only among the late Chinese–Japanese bilinguals, but not among the monolingual Chinese readers. By taking the novel step of testing bilingual coactivation of semantic knowledge in a natural reading scenario without an explicit presentation of L2 words, our results suggest that bilingual word processing can be automatic, unconscious and nonselective. The study reveals an L2-to-L1 influence on readers' lexical activation during natural sentence reading in an exclusively native context. |
Tami Harel-Arbeli; Yuval Palgi; Boaz M. Ben-David Sow in tears and reap in joy: Eye tracking reveals age-related differences in the cognitive cost of spoken context processing Journal Article In: Psychology and Aging, vol. 38, no. 6, pp. 534–547, 2023. @article{HarelArbeli2023, Older adults have been found to use context to facilitate word recognition at least as efficiently as young adults. This may pose a conundrum, as context use is based on cognitive resources that are considered to decrease with aging. The goal of this study was to shed light on this question by testing age-related differences in context use and the cognitive demands associated with it. The eye movements of 30 young (21–27 years old) and 30 older adults (61–79 years old) were examined as they listened to spoken instructions to touch an image on a monitor. The predictability of the target word was manipulated between trials: nonpredictive (baseline), predictive (context), or predictive of two images (competition). In tandem, listeners were asked to retain one or four spoken digits (low or high cognitive load) for later recall. Separate analyses were conducted for the preceding sentence and the (final) target word. Sentence processing: Older adults were slower than young adults to accumulate evidence for target-word prediction (context condition), and they were more negatively affected by the increase in cognitive load (context and competition). Target-word recognition: No age-related differences were found in word recognition rate or the effect of cognitive load following predictive context (context and competition). Although older adults have greater difficulty processing context, they can use context to facilitate word recognition as efficiently as young adults. These results provide a better understanding of how cognitive processing changes with aging. They may help develop interventions aimed at improving communication in older adults. |
Juan Haro; Natalia López-Cortés; Pilar Ferré Pupillometric and behavioural evidence shows no differences between polyseme and homonym processing Journal Article In: Acta Psychologica, vol. 238, pp. 1–13, 2023. @article{Haro2023, Ambiguous words can have related meanings (polysemes, e.g., newspaper) or unrelated meanings (homonyms, e.g., bat). Here we examined the processing of both types of ambiguous words (as well as unambiguous words) in tasks of increasing level of semantic engagement. Four experiments were conducted in which the degree of semantic engagement of the task was manipulated: lexical decision task (Experiments 1 and 2), semantic categorization task (Experiment 3) and number-of-meanings task (Experiment 4). RTs and pupillary response were recorded. To our knowledge, pupillary response had never been used before to study ambiguous words processing in isolation. Results showed faster RTs for ambiguous words with respect to unambiguous words in LDT, and larger pupil dilation was observed for ambiguous words in comparison to unambiguous ones in number-of-meanings task. However, differences between polysemes and homonyms were not observed in any task. These results provide no evidence that polysemes and homonyms are processed differently. |
J. Hartman; J. Saffran; R. Litovsky Word learning in deaf adults who use cochlear implants: The role of talker variability and attention to the mouth Journal Article In: Ear & Hearing, pp. 1–14, 2023. @article{Hartman2023, OBJECTIVES: Although cochlear implants (CIs) facilitate spoken language acquisition, many CI listeners experience difficulty learning new words. Studies have shown that highly variable stimulus input and audiovisual cues improve speech perception in CI listeners. However, less is known about whether these two factors improve perception in a word learning context. Furthermore, few studies have examined how CI listeners direct their gaze to efficiently capture visual information available on a talker's face. The purpose of this study was two-fold: (1) to examine whether talker variability could improve word learning in CI listeners and (2) to examine how CI listeners direct their gaze while viewing a talker speak. DESIGN: Eighteen adults with CIs and 10 adults with normal hearing (NH) learned eight novel word-object pairs spoken by a single talker or six different talkers (multiple talkers). The word learning task consisted of nonsense words following the phonotactic rules of English. Learning was probed using a novel talker in a two-alternative forced-choice eye gaze task. Learners' eye movements to the mouth and the target object (accuracy) were tracked over time. RESULTS: Both groups performed near ceiling during the test phase, regardless of whether they learned from the same talker or different talkers. However, compared to listeners with NH, CI listeners directed their gaze significantly more to the talker's mouth while learning the words. CONCLUSIONS: Unlike NH listeners who can successfully learn words without focusing on the talker's mouth, CI listeners tended to direct their gaze to the talker's mouth, which may facilitate learning. 
This finding is consistent with the hypothesis that CI listeners use a visual processing strategy that efficiently captures redundant audiovisual speech cues available at the mouth. Due to ceiling effects, however, it is unclear whether talker variability facilitated word learning for adult CI listeners, an issue that should be addressed in future work using more difficult listening conditions. |
Kara Hawthorne; Susan J. Loveall In: Journal of Speech, Language, and Hearing Research, vol. 66, no. 9, pp. 3606–3621, 2023. @article{Hawthorne2023, Purpose: Pronouns are referentially ambiguous: For example, "she" could refer to any female. Nonetheless, errors in pronoun interpretation rarely occur for adults with typical development (TD) due to several strategies implicitly shared between the talker and listener. The purpose of this study was to test the impacts of syntactic, semantic, and prosodic prominence on pronoun interpretation for adults with intellectual and developmental disabilities (IDD) and TD. Method: Adults with IDD (n = 28) and TD (n = 27) listened to ministories involving a pronoun with two potential antecedents that varied in syntactic, semantic, and pragmatic prominence. Subject/first-mentioned antecedents are more syntactically prominent than object antecedents. Semantic prominence was manipulated via verb transitivity: Subjects are more semantically prominent when the verb is highly transitive (e.g., "hit" vs. "see," a low-transitivity verb for which the subject is merely experiencing the action). Pragmatic prominence was manipulated by placing prosodic focus on one of the two potential antecedents. Eye gaze to images representing the potential antecedents was tracked as a measure of online processing. Responses to a follow-up pronoun interpretation question were also recorded. Results: Adults with TD used syntactic, semantic, and—in early processing—pragmatic prominence when interpreting ambiguous pronouns. Adults with IDD were sensitive to syntactic prominence but to a significantly lesser extent than their peers with TD. Conclusions: Pronouns are an integral part of everyday conversation, and when the conversational partners do not share common strategies to link ambiguous pronouns with their antecedents, misunderstandings will occur. 
Results show that adults with IDD only weakly share pronoun interpretation strategies with adults with TD, suggesting that pronouns may be an important focus for inter-vention for this population. |
Lena Henke; Ashley G. Lewis; Lars Meyer Fast and slow rhythms of naturalistic reading revealed by combined eye-tracking and electroencephalography Journal Article In: Journal of Neuroscience, vol. 43, no. 24, pp. 4461–4469, 2023. @article{Henke2023, Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coherence with the EEG, in the absence of any stimulation rhythm. Periodicity was observed in two distinct frequency bands: First, word-locked saccades at 4-5 Hz display coherence with whole-head theta-band activity. Second, fixation durations fluctuate rhythmically at ~1 Hz, in coherence with occipital delta-band activity. This latter effect was additionally phase-locked to sentence endings, suggesting a relationship with the formation of multi-word chunks. Together, eye movements during reading contain rhythmic patterns that occur in synchrony with oscillatory brain activity. This suggests that linguistic processing imposes preferred processing time scales onto reading, largely independent of actual physical rhythms in the stimulus. |
2022 |
Haiyan Wang; Matthew Walenski; Kaitlyn Litcofsky; Jennifer E. Mack; M. Marsel Mesulam; Cynthia K. Thompson Verb production and comprehension in primary progressive aphasia Journal Article In: Journal of Neurolinguistics, vol. 64, pp. 1–18, 2022. @article{Wang2022c, Studies of word class processing have found verb retrieval impairments in individuals with primary progressive aphasia (Bak et al., 2001; Cappa et al., 1998; Cotelli et al., 2006; Hillis, Heidler-Gary, et al., 2006; Hillis, Oh, & Ken, 2004; Marcotte et al., 2014; Rhee, Antiquena, & Grossman, 2001; Silveri & Ciccarelli, 2007; Thompson, Lukic, et al., 2012) associated primarily with the agrammatic variant. However, fewer studies have focused on verb comprehension, with inconsistent results. Because verbs are critical to both production and comprehension of clauses and sentences, we investigated verb processing across domains in agrammatic, logopenic, and semantic PPA and a group of age-matched healthy controls. Participants completed a confrontation naming task for verb production and an eye-tracking word-picture matching task for online verb comprehension. All PPA groups showed impaired verb production and comprehension relative to healthy controls. Most notably, the PPA-S group performed more poorly than the other two PPA variants in both domains. Overall, the results indicate that semantic deficits in the PPA-S extend beyond object knowledge to verbs as well, adding to our knowledge concerning the nature of the language deficits in the three variants of primary progressive aphasia. |
Xuling Li; Man Zeng; Lei Gao; Shan Li; Zibei Niu; Danhui Wang; Tianzhi Li; Xuejun Bai; Xiaolei Gao The mechanism of word satiation in Tibetan reading: Evidence from eye movements Journal Article In: Journal of Eye Movement Research, vol. 15, no. 5, 2022. @article{Li2022k, Two eye-tracking experiments were used to investigate the mechanism of word satiation in Tibetan reading. The results revealed that, at a low repetition level, gaze duration and total fixation duration in the semantically unrelated condition were significantly longer than in the semantically related condition; at a medium repetition level, reaction time in the semantically related condition was significantly longer than in the semantically unrelated condition; at a high repetition level, the total fixation duration and reaction time in the semantically related condition were significantly longer than in the semantically unrelated condition. However, fixation duration and reaction time showed no significant difference between the similar and dissimilar orthography at any repetition level. These findings imply that there are semantic priming effects in Tibetan reading at a low repetition level, but semantic satiation effects at greater repetition levels, which occur in the late stage of lexical processing. |