EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2021 (with some early 2022s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
Dato Abashidze; Pia Knoeferle
In: Frontiers in Psychology, vol. 12, pp. 701742, 2021.
In interpreting spoken sentences in event contexts, comprehenders both integrate their current interpretation of language with the recent past (e.g., events they have witnessed) and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in their interpretation may lead comprehenders to also rely on further, situation-specific cues (e.g., an actor's gaze as a cue to his future actions). How comprehenders reconcile these different cues in real time is an open issue that must be addressed to fully account for comprehension. It has been suggested that relating a referential expression (e.g., a verb) to a referent (e.g., a recent event) is preferred over relying on other cues that refer to the future and are not yet referentially grounded ("recent-event preference"). Two visual-world eye-tracking experiments compared this recent-event preference with effects of an actor's gaze and of tense/temporal adverbs as cues to a future action event. The results revealed that people overall preferred to focus on the recent (vs. future) event target in their interpretation, suggesting that while congruent and incongruent actor gaze, jointly with future-oriented linguistic cues, can neutralize the recent-event preference late in the sentence, the latter still plays a key role in shaping participants' initial verb-based event interpretation. Additional post-experimental memory tests provided insight into the longevity of the gaze effects.
Nawras Abbas; Tamar Degani; Anat Prior
In: Frontiers in Psychology, vol. 12, pp. 673535, 2021.
We investigated cross-language influences from the first (L1) and second (L2) languages in third (L3) language processing, to examine how order of acquisition and proficiency modulate the degree of cross-language influences, and whether these cross-language influences manifest differently in online and offline measures of L3 processing. The study focused on morpho-syntactic processing of English as an L3 among Arabic-Hebrew-English university student trilinguals (n = 44). Importantly, both the L1 (Arabic) and L2 (Hebrew) of participants are typologically distant from the L3 (English), which allows overcoming confounds of previous research. Performance of trilinguals was compared to that of native English monolingual controls (n = 37). To investigate the source of cross-language influences, critical stimuli were ungrammatical sentences in English, which when translated could be grammatical in L1, in L2, or in both. Thus, the L3 morpho-syntactic structures included in the study were a mismatch with L1, a mismatch with L2, a double mismatch with both L1 and L2, or a no-mismatch condition. Participants read the English sentences while their eye movements were recorded (online measure), and they also performed grammaticality judgments following each sentence (offline measure). Across both measures, cross-language influences were assessed by comparing the performance of the trilinguals in each of the critical interference conditions to the no-interference condition, and by comparing their performance to that of the monolingual controls. L1 interference was evident in first-pass sentence reading, and marginally in offline grammaticality judgment, whereas L2 interference was robust across second-pass reading and grammaticality judgments. These results suggest that either the L1 or the L2 can be the source of cross-language influences in L3 processing, but with different time-courses.
The findings highlight the difference between online and offline measures of performance: processing language in real-time reflects mainly automatic activation of morpho-syntactic structures, whereas offline judgments might also involve strategic and meta-linguistic decision making. Together, the findings show that during L3 processing, trilinguals have access to all previously acquired linguistic knowledge, and that the multilingual language system is fully interactive.
Victoria I. Adedeji; Martin R. Vasilev; Julie A. Kirkby; Timothy J. Slattery
Return-sweep saccades in oral reading
In: Psychological Research, pp. 1–12, 2021.
Recent research on return-sweep saccades has improved our understanding of eye movements when reading paragraphs. However, these saccades, which take our gaze from the end of one line to the start of the next line, have been studied only within the context of silent reading. Articulatory demands and the coordination of the eye–voice span (EVS) at line boundaries suggest that the execution of this saccade may be different in oral reading. We compared launch and landing positions of return-sweeps, corrective saccade probability, and fixations adjacent to return-sweeps in skilled adult readers while reading paragraphs aloud and silently. Compared to silent reading, return-sweeps were launched from closer to the end of the line and landed closer to the start of the next line when reading aloud. The probability of making a corrective saccade was higher for oral reading than silent reading. These findings indicate that oral reading may compel readers to rely more on foveal processing at the expense of parafoveal processing. We found an interaction between reading modality and fixation type on fixation durations. The reading modality effect (i.e., increased fixation durations in oral compared to silent reading) was greater for accurate line-initial fixations and marginally greater for line-final fixations compared to intra-line fixations. This suggests that readers may use the fixations adjacent to return-sweeps as natural pause locations to modulate the EVS.
Miriam Aguilar; Pilar Ferré; José M. Gavilán; José A. Hinojosa; Josep Demestre
In: Cognition, vol. 211, pp. 104624, 2021.
The relationship between syntactic ambiguity and locality has been a reliable cornerstone in theories of language comprehension with one exception: non-local preferences in object-modifying relative clauses preceded by two potential hosts (DP1 of DP2 RC). We test the offline and online effects of the availability of an alternative structure, the pseudo-relative, on the parsing of relative clauses. It has been claimed that pseudo-relatives are preferred to relative clauses because of their simplicity at the structural, interpretive and pragmatic levels, and act as a confound in the attachment literature (Grillo, 2012; Grillo & Costa, 2014). Our results show that attachment preferences are modulated by the availability of pseudo-relatives in offline and online tests. However, when this factor is controlled, parsing of relative clauses in Spanish is initially ruled by principles of locality, which can eventually be overridden by other factors.
Farah Akthar; Hannah Harvey; Ahalya Subramanian; Simon Liversedge; Robin Walker
In: Journal of Vision, vol. 21, no. 12, pp. 1–24, 2021.
Reading with central vision loss (CVL), as caused by macular disease, may be enhanced by presenting text using dynamic formats such as horizontally scrolling text or rapid serial visual presentation (RSVP). The rationale for these dynamic text formats is that they can be read while holding gaze away from the text, potentially supporting reading while using the eccentric viewing strategy. This study was designed to evaluate the practice of reading with CVL, with passages of text presented as static sentences, with horizontal scrolling sentences, or as single-word RSVP. In separate studies, normally sighted participants with a simulated (artificial) central scotoma, controlled by an eye-tracker, or participants with CVL resulting from macular degeneration read passages of text using the eccentric viewing technique. Comprehension was better overall with scrolling text when reading with a simulated CVL, whereas RSVP produced lower overall comprehension and high error rates. Analysis of eye movement behavior showed that participants consistently adopted a strategy of making multiple horizontal saccades on the text itself. Adherence to using eccentric viewing was better with RSVP, but this did not translate into better reading performance. Participants with macular degeneration and an actual CVL also showed the highest comprehension and lowest error rates with scrolling text and the lowest comprehension and highest errors with RSVP. We conclude that scrolling text can support effective reading in people with CVL and has potential as a reading aid.
Maryam A. AlJassmi; Kayleigh L. Warrington; Victoria A. McGowan; Sarah J. White; Kevin B. Paterson
In: Attention, Perception, and Psychophysics, pp. 1–15, 2021.
Contextual predictability influences both the probability and duration of eye fixations on words when reading Latinate alphabetic scripts like English and German. However, it is unknown whether word predictability influences eye movements in reading similarly for Semitic languages like Arabic, which are alphabetic languages with very different visual and linguistic characteristics. Such knowledge is nevertheless important for establishing the generality of mechanisms of eye-movement control across different alphabetic writing systems. Accordingly, we investigated word predictability effects in Arabic in two eye-movement experiments. Both produced shorter fixation times for words with high compared to low predictability, consistent with previous findings. Predictability did not influence skipping probabilities for (four- to eight-letter) words of varying length and morphological complexity (Experiment 1). However, it did for short (three- to four-letter) words with simpler structures (Experiment 2). We suggest that word-skipping is reduced, and affected less by contextual predictability, in Arabic compared to Latinate alphabetic reading, because of specific orthographic and morphological characteristics of the Arabic script.
Jorge González Alonso; Ian Cunnings; Hiroki Fujita; David Miller; Jason Rothman
Gender attraction in sentence comprehension
In: Glossa: a journal of general linguistics, vol. 6, no. 20, pp. 1–26, 2021.
Agreement attraction, where ungrammatical sentences are perceived as grammatical (e.g., The key to the cabinets were rusty), has been influential in motivating models of memory access during language comprehension. It is contested, however, whether such effects arise due to a faulty representation of relevant morphosyntactic features, or as a result of memory retrieval. Existing studies of agreement attraction in comprehension have largely been limited to subject-verb number agreement, primarily in English, and while attraction in other agreement phenomena such as gender has been investigated in production, very few studies have focused on gender attraction in comprehension. We conducted five experiments investigating noun-adjective gender agreement during comprehension in Spanish. Our results indicate attraction effects during online sentence processing that are consistent with approaches ascribing attraction to interference during memory retrieval, rather than to a faulty representation of agreement features. We interpret our findings as consistent with the predictions of cue-based parsing.
Nicole M. Amichetti; Jonathan Neukam; Alexander J. Kinney; Nicole Capach; Samantha U. March; Mario A. Svirsky; Arthur Wingfield
In: The Journal of the Acoustical Society of America, vol. 150, no. 6, pp. 4315–4328, 2021.
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, or sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and a reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and of processing effort. © 2021 Acoustical Society of America.
Sally Andrews; Aaron Veldre
In: Scientific Studies of Reading, vol. 25, no. 2, pp. 123–140, 2021.
This study used wrap-up effects on eye movements to assess the relationship between online reading behavior and comprehension. Participants, assessed on measures of reading, vocabulary, and spelling, read short passages that manipulated whether a syntactic boundary was unmarked by punctuation, weakly marked by a comma, or strongly marked by a period. Comprehension demands were manipulated by presenting questions after either 25% or 100% of passages. Wrap-up effects at punctuation boundaries manifested principally in rereading of earlier text and were more marked in lower proficiency readers. High comprehension load was associated with longer total reading time but had little impact on wrap-up effects. The relationship between eye movements and comprehension accuracy suggested that poor comprehension was associated with a shallower reading strategy under low comprehension demands. The implications of these findings for understanding how the processes involved in self-regulating comprehension are modulated by reading proficiency and comprehension goals are discussed.
Caitlyn Antal; Roberto G. Almeida
In: Frontiers in Psychology, vol. 12, pp. 741685, 2021.
A sentence such as We finished the paper is indeterminate with regard to what we finished doing with the paper. Indeterminate sentences constitute a test case for two major issues regarding language comprehension: (1) how we compose sentence meaning; and (2) what is retained in memory about what we read in context over time. In an eye-tracking experiment, participants read short stories that were unexpectedly followed by one of three recognition probes: (a) an indeterminate sentence (Lisa began the book) that is identical to the one in the story; (b) an enriched but false probe (Lisa began reading the book); and (c) a contextually unrelated probe (Lisa began writing the book). The probes were presented either at the offset of the original indeterminate sentence in context or following additional neutral discourse. We measured accuracy, probe recognition time, and reading times of the probe sentences. Results showed that, at the immediate time point, participants correctly accepted the identical probes with high accuracy and short recognition times, but that this effect reversed to chance-level accuracy and significantly longer recognition times at the delayed time point. We also found that participants falsely accepted the enriched probe at both time points 50% of the time. There were no reading-time differences between identical and enriched probes, suggesting that enrichment might not be an early, mandatory process for indeterminate sentences. Overall, results suggest that while context produces an enriched proposition, an unenriched proposition true to the indeterminate sentence also lingers in memory.
M. Antúnez; S. Mancini; J. A. Hernández-Cabrera; L. J. Hoversten; H. A. Barber; M. Carreiras
In: Brain and Language, vol. 214, pp. 104905, 2021.
During reading, we can process and integrate information from words located in the parafoveal region. However, whether we extract and process the meaning of parafoveal words is still under debate. Here, we obtained Fixation-Related Potentials in a Basque-Spanish bilingual sample during a Spanish reading task. By using the boundary paradigm, we presented different parafoveal previews that could be either Basque non-cognate translations or unrelated Basque words. We demonstrate for the first time cross-linguistic semantic preview benefit effects in alphabetic languages, providing novel evidence of modulations in the N400 component. Our findings suggest that the meaning of parafoveal words is processed and integrated during reading and that such meaning is activated and shared across languages in bilingual readers.
Martín Antúnez; Sara Milligan; Juan Andrés Hernández‐Cabrera; Horacio A. Barber; Elizabeth R. Schotter
In: Psychophysiology, pp. e13986, 2021.
Prior research suggests that we may access the meaning of parafoveal words during reading. We explored how semantic-plausibility parafoveal processing takes place in natural reading through the co-registration of eye movements (EM) and fixation-related potentials (FRPs), using the boundary paradigm. We replicated previous evidence of semantic parafoveal processing from highly controlled reading situations, extending their findings to more ecologically valid reading scenarios. Additionally, exploring the time-course of plausibility preview effects, we found distinct but complementary evidence from EM and FRP measures. FRP measures, showing a different trend than the EM evidence, revealed that plausibility preview effects may be long-lasting. We highlight the importance of a co-registration set-up in ecologically valid scenarios to disentangle the mechanisms related to semantic-plausibility parafoveal processing.
Keith S. Apfelbaum; Jamie Klein-Packard; Bob McMurray
In: Journal of Memory and Language, vol. 121, pp. 1–57, 2021.
A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is meaningfully altered by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins. However, this could bias their interpretations of the later speech or even lead to abnormal processing strategies (e.g., comparing the input to only preactivated working memory representations). Prior work has focused only on whether preview duration changes fixation patterns. However, preview could affect a number of processes, such as visual search, that would not challenge the interpretation of the VWP. The present study uses a series of targeted manipulations of the preview period to ask if preview alters looking behavior during a trial, and why. Results show that evidence of incremental processing and phonological competition seen in the VWP are not dependent on preview, and are not enhanced by manipulations that directly encourage phonological prenaming. Moreover, some forms of preview can eliminate nuisance variance deriving from object recognition and visual search demands in order to produce a more sensitive measure of linguistic processing. These results deepen our understanding of how the visual scene interacts with language processing to drive fixation patterns in the VWP, and reinforce the value of the VWP as a tool for measuring real-time language processing. Stimuli, data, and analysis scripts are available at https://osf.io/b7q65/.
Susana Araújo; Falk Huettig; Antje S. Meyer
In: Scientific Studies of Reading, vol. 25, no. 6, pp. 534–549, 2021.
This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low- but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
Begoña Arechabaleta Regulez; Silvina Montrul
In: Languages, vol. 6, no. 3, pp. 131, 2021.
Spanish marks animate and specific direct objects overtly with the preposition a, an instance of Differential Object Marking (DOM). However, in some varieties of Spanish, DOM is advancing to inanimate objects. Language change starts at the individual level, but how does it start? What manifestation of linguistic knowledge does it affect? This study traced this innovative use of DOM in oral production, grammaticality judgments and on-line comprehension (reading task with eye-tracking) in the Spanish of Mexico. Thirty-four native speakers (ages 18–22) from the southeast of Mexico participated in the study. Results showed that the incidence of the innovative use of DOM with inanimate objects varied by task: DOM innovations were detected in on-line processing more than in grammaticality judgments and oral production. Our results support the hypothesis that language variation and change may start with on-line comprehension.
Nicolai D. Ayasse; Alana J. Hodson; Arthur Wingfield
In: Frontiers in Psychology, vol. 12, pp. 629464, 2021.
There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (Mage = 75.5 years) and 20 younger adults (Mage = 20.7) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.
Ibtehal Baazeem; Hend Al-Khalifa; Abdulmalik Al-Salman
In: Applied Sciences, vol. 11, no. 18, pp. 8607, 2021.
Using physiological data helps to identify the cognitive processing in the human brain. One method of obtaining these behavioral signals is by using eye-tracking technology. Previous cognitive psychology literature shows that readable and difficult-to-read texts are associated with certain eye movement patterns, which has recently encouraged researchers to use these patterns for readability assessment tasks. However, although it seems promising, this research direction has not been explored adequately, particularly for Arabic. The Arabic language is defined by its own rules and has its own characteristics and challenges. There is still a clear gap in determining the potential of using eye-tracking measures to improve Arabic text readability assessment. Motivated by this, we present a pilot study to explore the extent to which eye-tracking measures enhance Arabic text readability. We collected the eye movements of 41 participants while reading Arabic texts to provide real-time processing of the text; these data were further analyzed and used to build several readability prediction models using different regression algorithms. The findings show an improvement in the readability prediction task, which requires further investigation. To the best of our knowledge, this work is the first study to explore the relationship between Arabic readability and eye movement patterns.
Mireille Babineau; Alex Carvalho; John Trueswell; Anne Christophe
In: Developmental Science, vol. 24, pp. e13010, 2021.
Young children can exploit the syntactic context of a novel word to narrow down its probable meaning. But how do they learn which contexts are linked to which semantic features in the first place? We investigate if 3- to 4-year-old children (n = 60) can learn about a syntactic context from tracking its use with only a few familiar words. After watching a 5-min training video in which a novel function word (i.e., ‘ko') replaced either personal pronouns or articles, children were able to infer semantic properties for novel words co-occurring with the newly learned function word (i.e., objects vs. actions). These findings implicate a mechanism by which a distributional analysis, associated with a small vocabulary of known words, could be sufficient to identify some properties associated with specific syntactic contexts.
Briony Banks; Emma Gowen; Kevin J. Munro; Patti Adank
In: Journal of Speech, Language, and Hearing Research, vol. 64, no. 9, pp. 3432–3445, 2021.
Purpose: Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation. Method: A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group. Results: Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time. Conclusions: The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.
Eliza Barach; Laurie Beth Feldman; Heather Sheridan
In: Psychonomic Bulletin & Review, vol. 28, no. 3, pp. 978–991, 2021.
Emojis have many functions that support reading. Most obviously, they convey semantic information and support reading comprehension (Lo, CyberPsychology & Behavior, 11, 595–597, 2008; Riordan, Computers in Human Behavior, 76, 75–86, 2017b). However, it is undetermined whether emojis recruit the same perceptual and cognitive processes for identification and integration during reading as do words. To investigate whether emojis are processed like words, we used eye tracking to examine the time course of semantic processing of emojis during reading. Materials consisted of sentences containing a target word (e.g., coffee in the sentence "My tall coffee is just the right temperature") when there was no emoji present and when there was a semantically congruent (i.e., synonymous) emoji (e.g., the cup of coffee emoji) or an incongruent emoji (e.g., the beer mug emoji) present at the end of the sentence. Similar to congruency effects with words, congruent emojis were fixated for shorter periods and were less likely to be refixated than were incongruent emojis. In addition, congruent emojis were more frequently skipped than incongruent emojis, which suggests that semantic aspects of emoji processing begin in the parafovea. Finally, the presence of an emoji, relative to its absence, increased target-word skipping rates and reduced total time on target words. We discuss the implications of our findings for models of eye-movement control during reading.
Eliza Barach; Leah Gloskey; Heather Sheridan
In: Visual Cognition, vol. 29, no. 8, pp. 510–518, 2021.
In multiple-target visual searches, subsequent search misses (SSMs; [Cain, M. S., Adamo, S. H., & Mitroff, S. R. (2013). A taxonomy of errors in multiple-target visual search. Visual Cognition, 21(7), 899–921. https://doi.org/10.1080/13506285.2013.843627]) occur when the discovery of one target hinders detection of another target (formerly referred to as Satisfaction of Search [Tuddenham, W. J. (1962). Visual search, image organization, and reader error in roentgen diagnosis studies of the psychophysiology of Roentgen image perception memorial fund lecture. Radiology, 78(5), 694–704. https://doi.org/10.1148/78.5.694]). Here, we examined whether SSMs would occur during a proofreading task, and we contrasted the cognitive resource depletion theory and the Satisfaction of Search (SOS) accounts of SSMs. We monitored participants' eye movements while they proofread for typographical errors (i.e., typos) in lists of words that were presented either in a straight horizontal line or in random locations. There was a significant SSM effect: detection of a first typo hindered the detection of a second typo. Furthermore, in support of the SOS account, the detection of a first typo expedited search, as shown by faster reaction times and reduced fixation durations and refixations on the second typo.
Anne L. Beatty-Martínez; Rosa E. Guzzardo Tamargo; Paola E. Dussias
In: Scientific Reports, vol. 11, pp. 23474, 2021.
Language processing is cognitively demanding, requiring attentional resources to efficiently select and extract linguistic information as utterances unfold. Previous research has associated changes in pupil size with increased attentional effort. However, it is unknown whether the behavioral ecology of speakers may differentially affect engagement of attentional resources involved in conversation. For bilinguals, such an act potentially involves competing signals in more than one language and how this competition arises may differ across communicative contexts. We examined changes in pupil size during the comprehension of unilingual and codeswitched speech in a richly-characterized bilingual sample. In a visual-world task, participants saw pairs of objects as they heard instructions to select a target image. Instructions were either unilingual or codeswitched from one language to the other. We found that only bilinguals who use each of their languages in separate communicative contexts and who have high attention ability, show differential attention to unilingual and codeswitched speech. Bilinguals for whom codeswitching is common practice process unilingual and codeswitched speech similarly, regardless of attentional skill. Taken together, these results suggest that bilinguals recruit different language control strategies for distinct communicative purposes. The interactional context of language use critically determines attentional control engagement during language processing.
Robyn Berghoff; Jayde McLoughlin; Emanuel Bylund
In: Journal of Neurolinguistics, vol. 58, pp. 100979, 2021.
It is well established that access to the bilingual lexicon is non-selective: even in an entirely monolingual context, elements of the non-target language are active. Research has also shown that activation of the non-target language is greater at higher proficiency levels, suggesting that it may be proficiency that drives cross-language lexical activation. At the same time, the potential role of age of acquisition (AoA) in cross-language activation has gone largely unexplored, as most studies have either focused on adult L2 learners or have conflated AoA with L2 proficiency. The present study examines the roles of AoA and L2 proficiency in L2 lexical processing using the visual world paradigm. Participants were a group of early L1 Afrikaans–L2 English bilinguals (AoA 1–9 years) and a control group of L1 English speakers. Importantly, in the bilingual group, AoA and proficiency were not correlated. In the task, participants viewed a screen with four objects on it: a target object, a competitor object whose Afrikaans translation overlapped phonetically with the target object, and two unrelated distractor objects. The results show that the L2 English group was significantly more likely to look at the cross-language competitor than the L1 English group, thus providing evidence of cross-language activation. Importantly, the extent to which this activation occurred was modulated by both L2 proficiency and AoA. These findings suggest that while these two variables may have been confounded in previous research, they actually both exert effects on cross-language activation. The locus of this parallel activation effect is discussed in terms of connectionist models of bilingualism.
Christina Blomquist; Rochelle S. Newman; Yi Ting Huang; Jan Edwards
In: Journal of Speech, Language, and Hearing Research, vol. 64, no. 5, pp. 1636–1649, 2021.
Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and vocabulary-matched group of children with NH. Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs.
Max Garagnani; Evgeniya Kirilina; Friedemann Pulvermüller
In: Frontiers in Human Neuroscience, vol. 15, pp. 1–16, 2021.
Embodied theories of grounded semantics postulate that, when word meaning is first acquired, a link is established between symbol (word form) and corresponding semantic information present in modality-specific—including primary—sensorimotor cortices of the brain. Direct experimental evidence documenting the emergence of such a link (i.e., showing that presentation of a previously unknown, meaningless word sound induces, after learning, category-specific reactivation of relevant primary sensory or motor brain areas), however, is still missing. Here, we present new neuroimaging results that provide such evidence. We taught participants aspects of the referential meaning of previously unknown, senseless novel spoken words (such as “Shruba” or “Flipe”) by associating them with either a familiar action or a familiar object. After training, we used functional magnetic resonance imaging to analyze the participants' brain responses to the new speech items. We found that hearing the newly learnt object-related word sounds selectively triggered activity in the primary visual cortex, as well as secondary and higher visual areas. These results for the first time directly document the formation of a link between the novel, previously meaningless spoken items and corresponding semantic information in primary sensory areas in a category-specific manner, providing experimental support for perceptual accounts of word-meaning acquisition in the brain.
Bethany Gardner; Sadie Dix; Rebecca Lawrence; Cameron Morgan; Anaclare Sullivan; Chigusa Kurumada
In: PLoS ONE, vol. 16, no. 2, pp. e0245130, 2021.
Linguistic communication requires understanding of words in relation to their context. Among various aspects of context, one that has received relatively little attention until recently is the speakers themselves. We asked whether comprehenders' online language comprehension is affected by the perceived reliability with which a speaker formulates pragmatically well-formed utterances. In two eye-tracking experiments, we conceptually replicated and extended a seminal work by Grodner and Sedivy (2011). A between-participant manipulation was used to control the reliability with which a speaker follows implicit pragmatic conventions (e.g., using a scalar adjective in accordance with contextual contrast). Experiment 1 replicated Grodner and Sedivy's finding that contrastive inference in response to scalar adjectives was suspended when both the spoken input and the instructions provided evidence of the speaker's (un)reliability: For speech from the reliable speaker, comprehenders exhibited the early fixations attributable to a contextually-situated, contrastive interpretation of a scalar adjective. In contrast, for speech from the unreliable speaker, comprehenders did not exhibit such early fixations. Experiment 2 provided novel evidence of the reliability effect in the absence of explicit instructions. In both experiments, the effects emerged in the earliest expected time window given the stimulus sentence structure. The results suggest that real-time interpretations of spoken language are optimized in the context of a speaker identity, characteristics of which are extrapolated across utterances.
Carolina A. Gattei; Luis A. París; Diego E. Shalom
In: Frontiers in Psychology, vol. 12, pp. 629724, 2021.
Word order alternation has been described as one of the most productive information structure markers and discourse organizers across languages. Psycholinguistic evidence has shown that word order is a crucial cue for argument interpretation. Previous studies of Spanish sentence comprehension have shown greater difficulty in parsing sentences whose word order does not respect the order of participants in the verb's lexico-semantic structure, irrespective of whether the sentences follow the canonical word order of the language or not. This difficulty has been attributed to the cognitive cost of miscomputing the prominence status of the argument that precedes the verb. Nonetheless, the authors only analyzed the use of alternative word orders in isolated sentences, leaving aside the pragmatic motivation of word order alternation. By means of an eye-tracking task, the current study provides further evidence about the role of information structure in the comprehension of sentences with alternative word order and verb type, and sheds light on the interaction between syntax, semantics and pragmatics. We analyzed both “early” and “late” eye-movement measures as well as accuracy and response times to comprehension questions. Results showed an overall influence of information structure reflected in a modulation of late eye-movement measures as well as offline measures like total reading time and question response time. However, effects related to the miscomputation of prominence status did not fade away when sentences were preceded by a context that licensed non-canonical word order of constituents, showing that prominence computation is a core mechanism for argument interpretation, even in sentences preceded by context.
Haoyan Ge; Iris Mulders; Xin Kang; Aoju Chen; Virginia Yip
This visual-world eye-tracking study investigated the processing of focus in English sentences with preverbal only by L2 learners whose L1 was either Cantonese or Dutch, compared to native speakers of English. Participants heard only-sentences with prosodic prominence either on the object or on the verb and viewed pictures containing an object-focus alternative and a verb-focus alternative. We found that both L2 groups showed delayed eye movements to the alternative of focus, which was different from the native speakers of English. Moreover, Dutch learners of English were even slower than Cantonese learners of English in directing fixations to the alternative of focus. We interpreted the delayed fixation patterns in both L2 groups as evidence of difficulties in integrating multiple interfaces in real time. Furthermore, the similarity between English and Dutch in the use of prosody to mark focus hindered Dutch learners' L2 processing of focus, whereas the difference between English and Cantonese in the realization of focus facilitated Cantonese learners' processing of focus in English.
Emilie Ginestet; Jalyssa Shadbolt; Rebecca Tucker; Marie Line Bosse; S. Hélène Deacon
In: Journal of Research in Reading, vol. 44, no. 1, pp. 51–69, 2021.
Background: Efficient word identification is directly tied to strong mental representations of words, which include spellings, meanings and pronunciations. Orthographic learning is the process by which spellings for individual words are acquired. Methods: In the present study, we combined the classic self-teaching paradigm with eye tracking to detail the process by which complex pseudowords are learned. With this methodology, we explored the visual processing and learning of complex pseudowords, as well as the transfer of that learning. We explore visual processing across exposures during the initial reading task and then measure learning and transfer in orthographic choice and spelling tasks. Results: Online eye movement monitoring during the repeated reading of complex pseudowords revealed that visual processing varied across exposures, with key differences based on word type. Further, data from both dictation and eye movements recorded during the orthographic choice task suggested stronger learning of morphologically than orthographically complex pseudowords after four encounters. Finally, results suggested that learning transfer occurred, with more accurate recognition of new pseudowords that were morphologically or orthographically related to pseudowords learned during the reading phase than of new pseudowords never read. Conclusions: The present study provides new insights into theory and methodological discussions of orthographic learning.
Dominik Glandorf; Sascha Schroeder
Slice: An algorithm to assign fixations in multi-line texts
In: Procedia Computer Science, vol. 192, pp. 2971–2979, 2021.
When analyzing eye movement data from the reading of multi-line texts, it is important to ensure that fixations are assigned correctly to the lines of the text. This is a non-trivial problem, as eye movement data are noisy and often show complex non-linear distortions. Here, we introduce Slice, a new algorithm to assign fixations in multi-line text. We describe how Slice operates and evaluate it using natural reading data. Results show that Slice performs better than the default method used by many software packages, is robust to many forms of distortion, and approximates manual coding decisions.
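The line-assignment problem Slice addresses can be illustrated with a naive baseline: map each fixation to the text line whose vertical center is nearest its y-coordinate. This is essentially the "default method" such algorithms improve upon — Slice itself additionally handles drift and non-linear distortion. The sketch below is illustrative only; the function and parameter names are assumptions, not taken from the Slice implementation.

```python
def assign_fixations_to_lines(fixations, line_centers_y):
    """Naive fixation-to-line assignment.

    fixations: list of (x, y) gaze positions in pixels.
    line_centers_y: vertical pixel center of each text line, top to bottom.
    Returns a list giving the assigned line index for each fixation.
    """
    assignments = []
    for _, y in fixations:
        # Pick the line whose center is vertically closest to the fixation.
        nearest = min(range(len(line_centers_y)),
                      key=lambda i: abs(line_centers_y[i] - y))
        assignments.append(nearest)
    return assignments
```

A fixation recorded slightly above or below a line due to calibration drift is silently assigned to the wrong line by this nearest-center rule, which is why more robust algorithms that model systematic distortion are needed.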
Henry Griffith; Dillon Lohr; Evgeny Abdulin; Oleg Komogortsev
In: Scientific Data, vol. 8, no. 1, pp. 1–9, 2021.
This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications applying machine learning to eye movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.
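A basic sanity check one might run on any nominally 1,000 Hz recording like those in GazeBase is to infer the effective sampling rate from successive sample timestamps. The helper below is a generic sketch under the assumption that timestamps are available in milliseconds; it is not part of the GazeBase distribution or its schema.

```python
def estimated_sampling_rate_hz(timestamps_ms):
    """Estimate sampling rate (Hz) from monotonically increasing
    timestamps given in milliseconds.

    A 1,000 Hz recording should yield ~1 ms between samples,
    i.e., an estimate close to 1000.0.
    """
    if len(timestamps_ms) < 2:
        raise ValueError("need at least two samples")
    # Mean inter-sample interval in milliseconds.
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    mean_interval_ms = sum(intervals) / len(intervals)
    return 1000.0 / mean_interval_ms
```

Large deviations from the nominal rate, or highly variable inter-sample intervals, would flag dropped samples or tracking loss in a recording.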
Ernesto Guerra; Jasmin Bernotat; Héctor Carvacho; Gerd Bohner
In: Frontiers in Psychology, vol. 12, pp. 589429, 2021.
Immediate contextual information and world knowledge allow comprehenders to anticipate incoming language in real time. The cognitive mechanisms that underlie such behavior are, however, still only partially understood. We examined the novel idea that gender attitudes may influence how people make predictions during sentence processing. To this end, we conducted an eye-tracking experiment where participants listened to passive-voice sentences expressing gender-stereotypical actions (e.g., “The wood is being painted by the florist”) while observing displays containing both female and male characters representing gender-stereotypical professions (e.g., florists, soldiers). In addition, we assessed participants' explicit gender-related attitudes to explore whether they might predict potential effects of gender-stereotypical information on anticipatory eye movements. The observed gaze pattern reflected that participants used gendered information to predict who was agent of the action. These effects were larger for female- vs. male-stereotypical contextual information but were not related to participants' gender-related attitudes. Our results showed that predictive language processing can be moderated by gender stereotypes, and that anticipation is stronger for female (vs. male) depicted characters. Further research should test the direct relation between gender-stereotypical sentence processing and implicit gender attitudes. These findings contribute to both social psychology and psycholinguistics research, as they extend our understanding of stereotype processing in multimodal contexts and regarding the role of attitudes (on top of world knowledge) in language prediction.
Katja I. Haeuser; Shari Baum; Debra Titone
In: Applied Psycholinguistics, vol. 42, no. 1, pp. 101–127, 2021.
Comprehending idioms (e.g., bite the bullet) requires that people appreciate their figurative meanings while suppressing literal interpretations of the phrase. While much is known about idioms, an open question is how healthy aging and noncanonical form presentation affect idiom comprehension when the task is to read sentences silently for comprehension. Here, younger and older adults read sentences containing idioms or literal phrases, while we monitored their eye movements. Idioms were presented in a canonical or a noncanonical form (e.g., bite the iron bullet). To assess whether people integrate figurative or literal interpretations of idioms, a disambiguating region that was figuratively or literally biased followed the idiom in each sentence. During early stages of reading, older adults showed facilitation for canonical idioms, suggesting a greater sensitivity to stored idiomatic forms. During later stages of reading, older adults showed slower reading times when canonical idioms were biased toward their literal interpretation, suggesting they were more likely to interpret idioms figuratively on the first pass. In contrast, noncanonical form presentation slowed comprehension of figurative meanings comparably in younger and older participants. We conclude that idioms may be more strongly entrenched in older adults, and that noncanonical form presentation slows comprehension of figurative meanings.
Tuomo Häikiö; Tinja Luotojärvi
In: Scientific Studies of Reading, pp. 1–9, 2021.
In early Finnish reading instruction, hyphens are used to denote syllable boundaries. However, this practice slows down reading as early as the 1st grade. It has been hypothesized that hyphenation forces readers to rely more on phonology than orthography. Since hyphenation highlights the phonology of the word, it may facilitate reading during the very first encounters with the word. To assess whether this is the case, Finnish 1st and 2nd graders read stories about fictional animals while their eye movements were registered. Each story included four occurrences of a novel target (pseudo)word, hyphenated at the syllable level in half of the stories. Target words were read faster with repeated exposure, but there were no effects of grade or hyphenation. The use of hyphenation does not give rise to enhanced processing of phonology in novel words and is likely to hinder the processes connected to the use of orthography.
Kathleen Hall; Masaya Yoshida
Coreference and parallelism
In: Language, Cognition and Neuroscience, vol. 36, no. 3, pp. 296–319, 2021.
Previous studies have demonstrated a reliable effect of parallelism in a variety of domains. These studies have suggested that parallelism is preferred during both production and comprehension, and that parallelism can result in facilitation during sentence processing. There is, however, some debate about whether such effects are truly limited to coordination. In both coordinate and subordinate environments we examined whether parallelism affects pronoun resolution, and furthermore investigated whether distance between a pronoun and an antecedent (locality) affects the retrieval process. Two experiments, each consisting of an offline forced choice task as well as an eye tracking task, indicated that both locality and parallelism influence the pronoun resolution process in both coordinate and subordinate contexts. These findings are discussed in light of popular retrieval models.
Andreas Hallberg; Diederick C. Niehorster
In: Reading and Writing, vol. 34, no. 1, pp. 27–48, 2021.
In Arabic, morphologically marked case is a feature exclusive to the variety of Standard Arabic, with no parallel in the spoken varieties, and it is orthographically marked only on some word classes in specific grammatical situations. In this study we test the hypothesis that readers of Arabic do not parse sentences for case and that orthographically marked case can therefore be removed with no effect on reading. Twenty-nine participants read sentences in which one of the two most frequent types of orthographically marked case was either retained or omitted, while their eye movements were monitored. The removal of case marking from subjects in the sound masculine plural declension (changing the suffix ‑ūn ـون to ‑īn ـين) had no negative effect on gaze duration, regressions out, or go-past time. The removal of case marking from direct objects in the triptote declension (omitting the suffix -an ـاً) did, however, result in an increase in these measures. These results indicate that only some forms of case marking are required in the grammar used by readers for parsing written text.
Chung Hye Han; Keir Moulton; Trevor Block; Holly Gendron; Sander Nederveen
In: Frontiers in Psychology, vol. 12, pp. 611466, 2021.
A number of studies in the extant literature report findings that suggest asymmetry in the way reflexive and pronoun anaphors are interpreted in the early stages of processing: that pronouns are less sensitive to structural constraints, as formulated by Binding Theory, than reflexives, in the initial antecedent retrieval process. However, in previous visual world paradigm eye-tracking studies, these conclusions were based on sentences that placed the critical anaphors within picture noun phrases or prepositional phrases, which have independently been shown not to neatly conform to the Binding Theory principles. We present results from a visual world paradigm eye-tracking experiment that show that when critical anaphors are placed in the indirect object position immediately following a verb as a recipient argument, pronoun and reflexive processing are equally sensitive to structural constraints.
The model of failed foregrounding
In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–16, 2021.
This paper points to a blind spot in the field of empirical study of literature: the neglect of failures in reading processes. It investigates several kinds of failure based on foregrounding theory, the most systematic attempt hitherto to empirically examine a model of literature reading. While some of the classical experiments usually considered supportive of foregrounding theory have actually reported mixed findings, these were not seriously considered as indications of failure, or as theoretically interesting. Informed by the standard model of foregrounding and its shortcomings, I propose a new model that examines the possibility that the process may fail and, more importantly, that this failure is integral to actual reading of literature by real-life readers. One type of failure is “shallow processing”, where the reader does not even initiate the foregrounding process; the other is “failed foregrounding”, where failure occurs after an interpretive move has already begun. To examine failures in foregrounding, I conducted a reading experiment: 42 subjects read a short story while their eye movements were tracked. They were then presented with heat maps of their own eye movements and were asked to explain why they focused on particular text segments, a process known as Retrospective Think-Aloud. Analysis of the interviews shows that in 36% of the cases, readers did not even initiate the foregrounding process, and they completed it successfully in only 21% of the cases. Foregrounding failure was not randomly distributed, but varied according to the participants' experience as literature readers and their global aesthetic appraisal.
Tami Harel-Arbeli; Arthur Wingfield; Yuval Palgi; Boaz M. Ben-David
In: Journal of Speech, Language, and Hearing Research, vol. 64, no. 2, pp. 315–327, 2021.
Purpose: The study examined age-related differences in the use of semantic context and in the effect of semantic competition in spoken sentence processing. We used offline (response latency) and online (eye gaze) measures, using the “visual world” eye-tracking paradigm. Method: Thirty younger and 30 older adults heard sentences related to one of four images presented on a computer monitor. They were asked to touch the image corresponding to the final word of the sentence (target word). Three conditions were used: a nonpredictive sentence, a predictive sentence suggesting one of the four images on the screen (semantic context), and a predictive sentence suggesting two possible images (semantic competition). Results: Online eye gaze data showed no age-related differences with nonpredictive sentences, but revealed slowed processing for older adults when context was presented. With the addition of semantic competition to context, older adults were slower to look at the target word after it had been heard. In contrast, offline latency analysis did not show age-related differences in the effects of context and competition. As expected, older adults were generally slower to touch the image than younger adults. Conclusions: Traditional offline measures were not able to reveal the complex effect of aging on spoken semantic context processing. Online eye gaze measures suggest that older adults were slower than younger adults to predict an indicated object based on semantic context. Semantic competition affected online processing for older adults more than for younger adults, with no accompanying age-related differences in latency. This supports an early age-related inhibition deficit, interfering with processing, and not necessarily with response execution.
Jarkko Hautala; Stefan Hawelka; Otto Loberg; Paavo H. T. Leppänen
In: Journal of Cognitive Psychology, pp. 1–19, 2021.
Contemporary models of eye movement control in reading assume a discrete target word selection process preceding saccade length computation, while the selection itself is assumed to be driven by word identification processes. However, a potentially more parsimonious, dynamic adjustment view allows both next word length and its content (e.g. orthographic) to modulate saccade length in a continuous manner. Based on a recently proposed center-based saccade length account, a new regression model of forward saccade length is introduced and validated in a simulation study. Further, additional simulations and gaze-contingent invisible boundary experiments were used to study the cognitive mechanisms underlying skipping. Overall, the results support the plausibility of dynamic adjustment of saccade length in word-spaced orthographies. In the future, the present regression formula-based computational model will allow a straightforward implementation of influences of current and next word content (visual, orthographic, or contextual) on saccade length computation.
K. Hawthorne; S. J. Loveall
In: Journal of Intellectual Disability Research, vol. 65, no. 2, pp. 125–132, 2021.
Background: Pronouns are referentially ambiguous (e.g. she could refer to any female), yet they are common in everyday conversations. Individuals with typical development (TD) employ several strategies to avoid pronoun interpretation errors, including the subject bias – an assumption that a pronoun typically refers to the subject (or, with the closely related order-of-mention bias, the first-mentioned character) of the previous sentence. However, it is unknown if adults with intellectual disability (ID) share this strategy or the extent to which the subject bias is associated with non-verbal abilities or receptive vocabulary. Methods: We tested 22 adults with mixed-aetiology ID on their interpretation of ambiguous pronouns using the visual world eye-tracking paradigm and by asking a follow-up pronoun interpretation question. A group of TD adults was also tested to establish the strength of the subject bias with our materials and task. Results: Adults with ID did demonstrate the subject bias, but it was significantly less robust than that seen in TD. For participants with ID, the subject bias was influenced by non-verbal IQ and receptive vocabulary at different stages of processing. Conclusions: Given the frequency of pronouns in conversation, strengthening the subject bias may help alleviate discourse and reading comprehension challenges for individuals with ID, particularly those with lower non-verbal and/or vocabulary skills.
Liyuan He; Weidong Ma; Fengdan Shen; Yongsheng Wang; Jie Wu; Kayleigh L. Warrington; Simon P. Liversedge; Kevin B. Paterson
In: Psychology and Aging, vol. 36, no. 7, pp. 822–833, 2021.
We investigated parafoveal processing by 44 young (18–30 years) and 44 older (65+ years) Chinese readers using eye movement measures. Participants read sentences which included an invisible boundary after a two-character word (N) and before two one-character words (N + 1, N + 2). Before a reader's gaze crossed the boundary, N + 1 and N + 2 were shown normally or masked (i.e., as valid/invalid previews), after which they reverted to normal. Young adults obtained preview benefits (a processing advantage for valid over invalid previews) for both words. However, older adults obtained N + 2 preview benefits only when N + 1 was valid, suggesting their parafoveal processing is more limited.
Liyuan He; Ziming Song; Min Chang; Chuanli Zang; Guoli Yan; Simon P. Liversedge
In: British Journal of Psychology, vol. 112, no. 3, pp. 662–689, 2021.
In two experiments, we investigated the correspondences between off-line word segmentation and on-line segmentation processing during Chinese reading. In Experiment 1, participants were asked to read sentences which contained critical four-character strings, and then, they were required to segment the same sentences into words in a later off-line word segmentation task. For each item, participants were split into 1-word segmenters (who segmented four-character strings as a single word) and 2-word segmenters (who segmented four-character strings as 2 two-character words). Thus, we split participants into two groups (1-word segmenters and 2-word segmenters) according to their off-line segmentation bias. The data analysis showed no reliable group effect on all the measures. In order to avoid the heterogeneity of participants and stimuli in Experiment 1, two groups of participants (1-word segmenters and 2-word segmenters) and three types of critical four-character string (1-word strings, ambiguous strings, and 2-word strings) were identified in a norming study in Experiment 2. Participants were required to read sentences containing these critical strings. There was no reliable group effect in Experiment 2, as was the case in Experiment 1. However, in Experiment 2, participants spent less time and made fewer fixations on 1-word strings compared to ambiguous and 2-word strings. These results indicate that the off-line word segmentation preferences do not necessarily reflect on-line word segmentation processing during Chinese reading and that Chinese readers exhibit flexibility such that word, or multiple constituent, segmentation commitments are made on-line.
Katherine P. Hebert; Stephen D. Goldinger; Stephen C. Walenchok
In: Cognition, vol. 210, pp. 104587, 2021.
The label-feedback hypothesis (Lupyan, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter response times (RTs) and higher accuracy. In the present investigation, we conceptually replicated and extended their study, using additional control conditions and recording eye movements during search. Our goal was to evaluate whether self-directed speech influences target locating (i.e., attentional guidance) or object perception (i.e., distractor rejection and target appreciation). In three experiments, during object search, people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names (all within-participants). Experiments 1 and 2 examined search RTs and accuracy: Speaking target names improved performance, without differences among the remaining conditions. Experiment 3 incorporated eye-tracking: Gaze fixation patterns suggested that language does not affect attentional guidance, but instead affects both distractor rejection and target appreciation. When search trials were conditionalized according to distractor fixations, language effects became more orderly: Search was fastest while people spoke target names, followed in linear order by the nonword, distractor-absent, and distractor-present conditions. We suggest that language affects template maintenance during search, allowing fluent differentiation of targets and distractors. Materials, data, and analyses can be retrieved here: https://osf.io/z9ex2/
Kristi Hendrickson; Keith Apfelbaum; Claire Goodwin; Christina Blomquist; Kelsey Klein; Bob McMurray
In: Quarterly Journal of Experimental Psychology, pp. 1–21, 2021.
Word recognition occurs across two sensory modalities: auditory (spoken words) and visual (written words). While each faces different challenges, they are often described in similar terms as a competition process by which multiple lexical candidates are activated and compete for recognition. While there is a general consensus regarding the types of words that compete during spoken word recognition, there is less consensus for written word recognition. The present study develops a novel version of the Visual World Paradigm (VWP) to examine written word recognition and uses this to assess the nature of the competitor set during word recognition in both modalities using the same experimental design. For both spoken and written words, we found evidence for activation of onset competitors (cohorts, e.g., cat, cap) and words that contain the same phonemes or letters in reverse order (anadromes, e.g., cat, tack). We found no evidence of activation for rhymes (e.g., cat, hat). The results across modalities were quite similar, with the exception that for spoken words, cohorts were more active than anadromes, whereas for written words activation was similar. These results suggest a common characterisation of lexical similarity across spoken and written words: temporal or spatial order is coarsely coded, and onsets may receive more weight in both systems. However, for spoken words, temporary ambiguity during the moment of processing gives cohorts an additional boost during real-time recognition.
Kristi Hendrickson; Jacob Oleson; Elizabeth Walker
In: Child Development, vol. 92, no. 2, pp. 638–649, 2021.
Although the ability to understand speech in adverse listening conditions is paramount for effective communication across the life span, little is understood about how this critical processing skill develops. This study asks how the dynamics of spoken word recognition (i.e., lexical access and competition) change during soft speech in 8- to 11-year-olds (n = 26). Lexical competition and access for speech at lower intensity levels was measured using eye-tracking and the visual world paradigm. Overall the results suggest that soft speech influences the magnitude and timing of lexical access and competition. These results suggest that lexical competition is a cognitive process that can be adapted in the school-age years to help cope with increased uncertainty due to alterations in the speech signal.
Inga Hennecke; Harald Baayen
Romance N Prep N constructions in visual word recognition
In: The Mental Lexicon, vol. 16, no. 1, pp. 98–132, 2021.
N Prep N constructions such as Sp. bicicleta de montaña ‘mountain bike' are very productive and frequent in Romance languages. They commonly have been classified as syntagmatic compounds that show no orthographic union and exhibit an internal structure that resembles free syntactic structures, such as Sp. libro para niños ‘book for children'. There is no consensus on how to best distinguish lexical from syntactic N Prep N constructions. The present paper presents an explorative eye-tracking study on N Prep N constructions, varying both lexical type (lexical vs. syntactic) and preposition across three languages, French, Spanish and Portuguese. The task of the eye-tracking study was a reading aloud paradigm of the constructions in sentence context. Constructions were fixated on less when more frequent, independent of lexical status. There was also modest evidence that a higher construction frequency afforded shorter total fixation durations, but only for lower deciles of the response distribution. The (construction-initial) head noun also received fewer fixations as construction frequency increased, and also when the head noun was more frequent. The second fixation durations on the head noun also revealed an effect of lexical status, with syntactic constructions receiving shorter fixations at the 5th and 7th deciles. The probability of a fixation on the preposition decreased with preposition frequency, but first fixations on the preposition increased with preposition frequency. The prepositions of Portuguese, the language with the richest inventory of prepositions, received more fixations than the prepositions of French and Spanish. The observed pattern of results is consistent with models of lexical processing in which reading is guided by knowledge of both higher-level constructions and knowledge of key constituents such as the head noun and the preposition.
Ehab W. Hermena
In: Journal of Eye Movement Research, vol. 14, no. 1, pp. 6, 2021.
Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.
Ehab W. Hermena; Sana Bouamama; Simon P. Liversedge; Denis Drieghe
In: PLoS ONE, vol. 16, no. 11, pp. e0259987, 2021.
In Arabic, a predominantly consonantal script that features a high incidence of lexical ambiguity (heterophonic homographs), glyph-like marks called diacritics supply vowel information that clarifies how each consonant should be pronounced, and thereby disambiguate the pronunciation of consonantal strings. Diacritics are typically omitted from print except in situations where a particular homograph is not sufficiently disambiguated by the surrounding context. In three experiments we investigated whether the presence of disambiguating diacritics on target homographs modulates word frequency, length, and predictability effects during reading. In all experiments, the subordinate representation of the target homographs was instantiated by the diacritics (in the diacritized conditions), and by the context subsequent to the target homographs. The results replicated the effects of word frequency (Experiment 1), word length (Experiment 2), and predictability (Experiment 3). However, there was no evidence that diacritics-based disambiguation modulated these effects in the current study. Rather, diacritized targets in all experiments attracted longer first pass and later (go past and/or total fixation count) processing. These costs are suggested to be a manifestation of the subordinate bias effect. Furthermore, in all experiments, the diacritics-based disambiguation facilitated later sentence processing, relative to when the diacritics were absent. The reported findings expand existing knowledge about processing of diacritics, their contribution towards lexical ambiguity resolution, and sentence processing.
Ehab W. Hermena; Eida J. Juma; Maryam AlJassmi
In: PLoS ONE, vol. 16, no. 8, pp. e0254745, 2021.
Evidence shows that skilled readers extract information about upcoming words in the parafovea. Using the boundary paradigm, we investigated native Arabic readers' processing of orthographic, morphological, and semantic information available parafoveally. Target words were embedded in frame sentences, and prior to readers fixating them, one of the following previews was made available: (a) Identity preview; (b) Preview that shared the pattern morpheme with the target; (c) Preview that shared the root morpheme with the target; (d) Preview that was a synonym of the target word; (e) Preview in which two of the root letters were transposed, thus creating a new root, while preserving all letter identities of the target; (f) Preview in which two of the root letters were transposed, thus creating a pronounceable pseudo-root, while also preserving all letter identities of the target; and (g) Preview that was unrelated to the target word and shared no information with it. The results showed that identity, root-preserving, and synonymous preview conditions yielded preview benefit. On the other hand, no benefit was obtained from the pattern-preserving previews, and significant disruption to processing was obtained from the previews that contained transposed root letters, particularly when this letter transposition created a new real root. The results thus reflect Arabic readers' dependence on morphological and semantic information, and suggest that these levels of representation are accessed as early as orthographic information. Implications for theory- and model-building, and the need to accommodate early morphological and semantic processing activities in more comprehensive models, are further discussed.
Annina K. Hessel; Kate Nation; Victoria A. Murphy
In: Scientific Studies of Reading, vol. 25, no. 2, pp. 159–178, 2021.
This experiment investigated comprehension monitoring in children learning English as an additional language (EAL) compared to monolinguals. Sixty-three 9–10-year-old children read texts containing an internal inconsistency (e.g., a barking kitten vs. barking puppy) while their eye movements were monitored. Standardized tests measured word reading fluency and vocabulary size, and the children completed a questionnaire tapping rereading behavior. There was no overall difference between EAL and monolingual children. Regardless of EAL status, children with larger vocabularies were more efficient in their re-analysis of inconsistent information, as revealed by regressive eye movements. As efficient re-analysis of inconsistent information is essential for comprehension and is ubiquitous in proficient readers, the presence of this pattern in the children is indicative of successful online monitoring. However, rereading of inconsistent vs. consistent words in the eye movement record was not related to children's self-reported rereading, providing no support for deliberate rereading. Our findings indicate that successful online monitoring relies on strong word knowledge leading to efficient processing of texts, both for bilingual and monolingual children, and beyond deliberate rereading.
Gaisha Oralova; Victor Kuperman
Effects of spacing on sentence reading in Chinese
In: Frontiers in Psychology, vol. 12, pp. 765335, 2021.
Given that Chinese writing conventions lack inter-word spacing, understanding whether and how readers of Chinese segment regular unspaced Chinese writing into words is an important question for theories of reading. This study examined the processing outcomes of introducing spaces to written Chinese sentences in varying positions based on native speaker consensus. The measure of consensus for every character transition in our stimuli sentences was the percent of raters who placed a word boundary in that position. The eye movements of native readers of Chinese were recorded while they silently read original unspaced sentences and their experimentally manipulated counterparts for comprehension. We introduced two types of spaced sentences: one with spaces inserted at every probable word boundary (heavily spaced), and another with spaces placed only at highly probable word boundaries (lightly spaced). Linear mixed-effects regression models showed that heavily spaced sentences took the same time to read as unspaced ones despite the shortened fixation times on individual words (Experiment 1). On the other hand, reading times for lightly spaced sentences and words were shorter than those for unspaced ones (Experiment 2). Thus, spaces proved to be advantageous, but only when introduced at highly probable word boundaries. We discuss methodological and theoretical implications of these findings.
In: Journal of Psycholinguistic Research, vol. 50, no. 6, pp. 1417–1436, 2021.
Many studies have shown double processing of negation, suggesting that the integration of negation into sentence meaning is delayed. This contrasts with other studies that have found such integration to be rather immediate. The present study contributes to this debate. Affirmative and negative compound sentences (e.g., "because he was not hungry, he did not order a salad") were presented orally in a visual world paradigm while four printed words were on the screen: salad, no salad, soup, and no soup. The eye-tracking data showed two different fixation patterns for negative causal assertions, which are linked to differences in the representation and inferential demands. One indicates that negation is integrated immediately, as people look at the explicit negation (e.g., no salad) very early. The other, in which people look at the alternate (e.g., soup) much later, indicates that what is delayed in time is the representation of the alternate. These results support theories that combine iconic and symbolic representations, such as the model theory.
Isabel Orenes; Orlando Espino; Ruth M. J. Byrne
In: Quarterly Journal of Experimental Psychology, pp. 1–19, 2021.
Two eye-tracking experiments compared affirmative and negative counterfactuals, “if she had (not) arrived early, she would (not) have bought roses” and affirmative and negative causal assertions, “Because she arrived (did not arrive) early, she bought (did not buy) roses.” When participants heard a counterfactual, they looked on screen at words corresponding to its conjecture (“roses”), and its presupposed facts (“no roses”), whereas for a causal assertion, they looked only at words corresponding to the facts. For counterfactuals, they looked at the conjecture first, and later the presupposed facts, and at the latter more than the former. The effect was more pronounced for negative counterfactuals than affirmative ones because the negative counterfactual's presupposed facts identify a specific item (“she bought roses”), whereas the affirmative counterfactual's presupposed facts do not (“she did not buy roses”). Hence, when participants were given a binary context, “she did not know whether to buy roses or carnations,” they looked primarily at the presupposed facts for both sorts of counterfactuals. We discuss the implications for theories of negation, the dual meaning of counterfactuals, and their relation to causal assertions.
Cognitive constraints on advance planning of sentence intonation
In: PLoS ONE, vol. 16, no. 11, pp. e0259343, 2021.
Pitch peaks tend to be higher at the beginning of longer than shorter sentences (e.g., 'A farmer is pulling donkeys' vs 'A farmer is pulling a donkey and goat'), whereas pitch valleys at the ends of sentences are rather constant for a given speaker. These data seem to imply that speakers avoid dropping their voice pitch too low by planning the height of sentence-initial pitch peaks prior to speaking. However, the length effect on sentence-initial pitch peaks appears to vary across different types of sentences, speakers, and languages. Therefore, the notion that speakers plan sentence intonation in advance due to limitations in low voice pitch leaves part of the data unexplained. Consequently, this study suggests a complementary cognitive account of length-dependent pitch scaling. In particular, it proposes that the sentence-initial pitch raise in long sentences is related to high demands on mental resources during the early stages of sentence planning. To tap into the cognitive underpinnings of planning sentence intonation, this study adopts the methodology of recording eye movements during a picture description task, as eye movements are an established approximation of real-time planning processes. Measures of voice pitch (Fundamental Frequency) and incrementality (eye movements) are used to examine the relationship between (verbal) working memory (WM), incrementality of sentence planning, and the height of sentence-initial pitch peaks.
Ayşegül Özkan; Figen Beken Fikri; Bilal Kırkıcı; Reinhold Kliegl; Cengiz Acartürk
Eye movement control in Turkish sentence reading
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 2, pp. 377–397, 2021.
Reading requires the assembly of cognitive processes across a wide spectrum from low-level visual perception to high-level discourse comprehension. One approach of unravelling the dynamics associated with these processes is to determine how eye movements are influenced by the characteristics of the text, in particular which features of the words within the perceptual span maximise the information intake due to foveal, spillover, parafoveal, and predictive processing. One way to test the generalisability of current proposals of such distributed processing is to examine them across different languages. For Turkish, an agglutinative language with a shallow orthography–phonology mapping, we replicate the well-known canonical main effects of frequency and predictability of the fixated word as well as effects of incoming saccade amplitude and fixation location within the word on single-fixation durations with data from 35 adults reading 120 nine-word sentences. Evidence for previously reported effects of the characteristics of neighbouring words and interactions was mixed. There was no evidence for the expected Turkish-specific morphological effect of the number of inflectional suffixes on single-fixation durations. To control for word-selection bias associated with single-fixation durations, we also tested effects on word skipping, single-fixation, and multiple-fixation cases with a base-line category logit model, assuming an increase of difficulty for an increase in the number of fixations. With this model, significant effects of word characteristics and number of inflectional suffixes of foveal word on probabilities of the number of fixations were observed, while the effects of the characteristics of neighbouring words and interactions were mixed.
Dario Paape; Shravan Vasishth; Ralf Engbert
In: Open Mind, vol. 5, pp. 42–58, 2021.
Local coherence effects arise when the human sentence processor is temporarily misled by a locally grammatical but globally ungrammatical analysis (The coach smiled at the player tossed a frisbee by the opposing team). It has been suggested that such effects occur either because sentence processing occurs in a bottom-up, self-organized manner rather than under constant grammatical supervision, or because local coherence can disrupt processing due to readers maintaining uncertainty about previous input. We report the results of an eye-tracking study in which subjects read German grammatical and ungrammatical sentences that either contained a locally coherent substring or not and gave binary grammaticality judgments. In our data, local coherence affected on-line processing immediately at the point of the manipulation. There was, however, no indication that local coherence led to illusions of grammaticality (a prediction of self-organization), and only weak, inconclusive support for local coherence leading to targeted regressions to critical context words (a prediction of the uncertain-input approach). We discuss implications for self-organized and noisy-channel models of local coherence.
Ascensión Pagán; Hazel I. Blythe; Simon P. Liversedge
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 47, no. 7, pp. 1186–1203, 2021.
Previous studies exploring the cost of reading sentences with words that have two transposed letters in adults showed that initial letter transpositions caused the most disruption to reading, indicating the important role that initial letters play in lexical identification (e.g., Rayner et al., 2006). Regarding children, it is not clear whether differences in reading ability would affect how they encode letter position information as they attempt to identify misspelled words in a reading-like task. The aim of this experiment was to explore how initial-letter position information is encoded by children compared to adults when reading misspelled words, containing transpositions, during a reading-like task. Four different conditions were used: control (words were correctly spelled), TL12 (letters in first and second positions were transposed), TL13 (letters in first and third positions were transposed), and TL23 (letters in second and third positions were transposed). Results showed that the TL13 condition caused the most disruption, whereas TL23 caused the least disruption to reading of misspelled words. Although disruption for the TL13 condition was quite rapid in adults, the immediacy of disruption was less so for the TL23 and TL12 conditions. For children, effects of transposition also occurred quite rapidly but were longer lasting. The time course was particularly extended for the less skilled relative to the more skilled child readers. This pattern of effects suggests that both adults and children with higher, relative to lower, reading ability encode internal letter position information more flexibly to identify misspelled words, with transposed letters, during a reading-like task.
Jinger Pan; Miaomiao Liu; Hong Li; Ming Yan
In: Reading and Writing, vol. 34, no. 2, pp. 355–369, 2021.
Word boundary information is not marked explicitly in Chinese sentences, and word ambiguity is common in Chinese texts. This makes it difficult to parse characters into words when reading Chinese sentences, especially for beginning readers. In an eye-tracking study, we tested whether explicit word boundary information as provided by alternating text colors affects the reading performance of Chinese children, and how such an effect is influenced by individual differences in word segmentation ability. Results showed that across a number of eye-movement measures, grade three children overall benefited from explicit marking of word boundaries. Additionally, children with the highest word segmentation ability showed the largest benefits in reading speed. We discuss possible implications for education.
Jinger Pan; Ming Yan; Eike M. Richter; Hua Shu; Reinhold Kliegl
In: Behavior Research Methods, pp. 1–12, 2021.
This report introduces the Beijing Sentence Corpus (BSC). This is a Chinese sentence corpus of eye-tracking data with relatively clear word boundaries. In addition, we report predictability norms for each word in the corpus. Eye movement corpora are available in alphabetic scripts such as English, German, and French. However, there is no publicly available corpus for Chinese. Thus, to study predictive processes during reading in Chinese, it is necessary to establish such a corpus. Also, given the clear word boundaries in the sentences, BSC is especially useful to provide evidence relevant to the theoretical debate of saccade target selection in Chinese. With the large-scale predictability norms, we conducted new analyses based on 60 BSC readers, testing the influences of launch word and target word properties while controlling for visual and oculomotor constraints, as well as sentence and subject-level individual differences. We discuss implications for guidance of eye movements in Chinese reading.
Jinger Pan; Caicai Zhang; Xunan Huang; Ming Yan
In: Reading and Writing, vol. 34, no. 4, pp. 841–857, 2021.
The current study examined whether or not lexical access is influenced by detailed phonological features during the silent reading of Chinese sentences. We used two types of two-character target words (Mandarin sandhi-tone and base-tone). The first characters of the words in the sandhi-tone condition had a tonal alternation, but no tonal alternation was involved in the base-tone condition. Recordings of eye movements revealed that native Mandarin Chinese readers viewed the base-tone target words more briefly than the sandhi-tone target words when they were infrequent. Such articulation-specific effects on visual word processing, however, diminished for frequent words. We suggest that a conflict in tonal representation at a character/morpheme level and at a word level induces prolongation in fixation duration on infrequent sandhi-tone words, and conclude that these tonal effects appear to reflect articulation simulation of words during the silent reading of Chinese sentences.
Yali Pan; Steven Frisson; Ole Jensen
Neural evidence for lexical parafoveal processing
In: Nature Communications, vol. 12, pp. 5234, 2021.
In spite of the reduced visual acuity, parafoveal information plays an important role in natural reading. However, competing models on reading disagree on whether words are previewed parafoveally at the lexical level. We find neural evidence for lexical parafoveal processing by combining a rapid invisible frequency tagging (RIFT) approach with magnetoencephalography (MEG) and eye-tracking. In a silent reading task, target words are tagged (flickered) subliminally at 60 Hz. The tagging responses measured when fixating on the pre-target word reflect parafoveal processing of the target word. We observe stronger tagging responses during pre-target fixations when followed by low compared with high lexical frequency targets. Moreover, this lexical parafoveal processing is associated with individual reading speed. Our findings suggest that reading unfolds in the fovea and parafovea simultaneously to support fluent reading.
Caterina Laura Paolazzi; Nino Grillo; Claudia Cera; Fani Karageorgou; Emily Bullman; Wing Yee Chow; Andrea Santi
In: Language, Cognition and Neuroscience, pp. 1–19, 2021.
Among existing accounts of passivisation difficulty, some argue it depends on the predicate semantics (i.e. passives are more difficult with subject-experiencer than agent-patient verbs). Inconsistent with the accounts that predict passive difficulty, Paolazzi et al. (2019) found that passives were read faster than actives at the verb and object by-phrase in a series of self-paced reading experiments, with no modulation of verb type. However, self-paced reading provides limited direct measurement of late revision/interpretive processing. We used modified stimuli from Paolazzi et al. (2019) to re-examine this issue in two eye-tracking while reading experiments. We found that in late measures, passives with subject-experiencer verbs had longer fixation durations than actives at the verb and two subsequent regions but no difference was observed across agent-patient verbs. Subject-experiencer verbs provide a state, but the passive structure requires an event. Thus, the required eventive interpretation is coerced with subject-experiencers (if possible) and induces difficulty.
Nadia Paraskevoudi; John S. Pezaris
In: Scientific Reports, vol. 11, pp. 11121, 2021.
The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses incorporating eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation, and head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with below 15% accuracy under head-steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those not incorporating eye movements.
Fabio Parente; Kathy Conklin; Josephine M. Guy; Rebekah Scott
In: Language and Literature, vol. 30, no. 1, pp. 21–36, 2021.
The popularity of literary biographies and the importance publishers place on author publicity materials suggest the concept of an author's creative intentions is important to readers' appreciation of literary works. However, the question of how this kind of contextual information informs literary interpretation is contentious. One area of dispute concerns the extent to which readers' constructions of an author's creative intentions are text-centred and therefore can adequately be understood by linguistic evidence alone. The current study shows how the relationship between linguistic and contextual factors in readers' constructions of an author's creative intentions may be investigated empirically. We use eye-tracking to determine whether readers' responses to textual features (changes to lexis and punctuation) are affected by prior, extra-textual prompts concerning information about an author's creative intentions. We showed participants pairs of sentences from Oscar Wilde and Henry James while monitoring their eye movements. The first sentence was followed by a prompt denoting a different attribution (Authorial, Editorial/Publisher and Typographic) for the change that, if present, would appear in the second sentence. After reading the second sentence, participants were asked whether they had detected a change and, if so, to describe it. If the concept of an author's creative intentions is implicated in literary reading this should influence participants' reading behaviour and ability to accurately report a change based on the prompt. The findings showed that readers' noticing of textual variants was sensitive to the prior prompt about its authorship, in the sense of producing an effect on attention and re-reading times. But they also showed that these effects did not follow the pattern predicted of them, based on prior assumptions about readers' cultures. 
This last finding points to the importance, as well as the challenges, of further investigating the role of contextual information in readers' constructions of an author's creative intentions.
In: Linguistic Research, vol. 38, no. 3, pp. 567–592, 2021.
Adult language learners' difficulties with second language (L2) articles are well attested in the literature. Ionin et al. (2004, 2009) argued that L2 learners whose native language lacks articles have access to both possible semantic universals of the article system, namely, definiteness and specificity. The fluctuation between the two options may result in learners' misuse of the articles. This study investigates their claim using both online and offline measures of learners' linguistic knowledge. Twenty-two Korean learners and 22 native speakers of English read pairs of sentences that included (un)grammatical articles twice, first with a focus on meaning while their eye movements were recorded, and then to make grammaticality judgments. The learners' performance is discussed in terms of the grammaticality of the article use and the semantic contexts in which the target articles were used, in comparison to native English speakers' performance on the same tasks. The online task produced mixed results for the L2 learners, while in the offline task they relied on the correct option for English.
Adam J Parker; Timothy J Slattery
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 1, pp. 135–149, 2021.
In recent years, there has been an increase in research concerning individual differences in readers' eye movements. However, this body of work is almost exclusively concerned with the reading of single-line texts. While spelling and reading ability have been reported to influence saccade targeting and fixation times during intra-line reading, where upcoming words are available for parafoveal processing, it is unclear how these variables affect fixations adjacent to return-sweeps. We, therefore, examined the influence of spelling and reading ability on return-sweep and corrective saccade parameters for 120 participants engaged in multiline text reading. Less-skilled readers and spellers tended to launch their return-sweeps closer to the end of the line, prefer a viewing location closer to the start of the next, and made more return-sweep undershoot errors. We additionally report several skill-related differences in readers' fixation durations across multiline texts. Reading ability influenced all fixations except those resulting from return-sweep error. In contrast, spelling ability influenced only those fixations following accurate return-sweeps, where parafoveal processing was not possible prior to fixation. This stands in contrast to an established body of work where fixation durations are related to reading but not spelling ability. These results indicate that lexical quality shapes the rate at which readers access meaning from the text by enhancing early letter encoding, and influences saccade targeting even in the absence of parafoveal target information.
Olga Parshina; Anna K. Laurinavichyute; Irina A. Sekerina
Eye-movement benchmarks in Heritage Language reading
In: Bilingualism: Language and Cognition, vol. 24, no. 1, pp. 69–82, 2021.
This eye-tracking study establishes basic benchmarks of eye movements during reading in heritage language (HL) by Russian-speaking adults and adolescents of high (n = 21) and low proficiency (n = 27). Heritage speakers (HSs) read sentences in Cyrillic, and their eye movements were compared to those of Russian monolingual skilled adult readers, 8-year-old children and L2 learners. Reading patterns of HSs revealed longer mean fixation durations, lower skipping probabilities, and higher regressive saccade rates than in monolingual adults. High-proficient HSs were more similar to monolingual children, while low-proficient HSs performed on par with L2 learners. Low-proficient HSs differed from high-proficient HSs in exhibiting lower skipping probabilities, higher fixation counts, and larger frequency effects. Taken together, our findings are consistent with the weaker links account of bilingual language processing as well as the divergent attainment theory of HL.
Olga Parshina; Irina A. Sekerina; Anastasiya Lopukhina; Titus von der Malsburg
In: Reading Research Quarterly, pp. 1–24, 2021.
In the present study, we used a scanpath approach to investigate reading processes and factors that can shape them in monolingual Russian-speaking adults, 8-year-old children, and bilingual Russian-speaking readers. We found that monolingual adults' eye movement patterns exhibited a fluent scanpath reading process, representing effortless processing of the written material: They read straight from left to right at a fast pace, skipped words, and regressed rarely. Both high-proficiency heritage-language speakers' and second graders' eye movement patterns exhibited an intermediate scanpath reading process, characterized by a slower pace, longer fixations, an absence of word skipping, and short regressive saccades. Second-language learners and low-proficiency heritage-language speakers exhibited a beginner reading process that involved the slowest pace, even longer fixations, no word skipping, and frequent rereading of the whole sentence and of particular words. We suggest that unlike intermediate readers who use the respective process to resolve local processing difficulties (e.g., word recognition failure), beginner readers, in addition, experience global-level challenges in semantic and morphosyntactic information integration. Proficiency in Russian for heritage-language speakers and comprehension scores for second-language learners were the only individual difference factors predictive of the scanpath reading process adopted by bilingual speakers. Overall, the scanpath analysis revealed qualitative differences in scanpath reading processes among various groups of readers and thus adds a qualitative dimension to the conventional quantitative evaluation of word-level eye-tracking measures.
Ana Pellicer-Sánchez; Kathy Conklin; Michael P. H. Rodgers; Fabio Parente
In: The Modern Language Journal, pp. 1–21, 2021.
Comprehension of many types of texts involves constructing meaning from text and pictures. However, research examining how second language (L2) learners process text and pictures and the relationship with comprehension is scarce. Thus, while verbal input is often presented in written and auditory modes simultaneously (i.e., audio of text with simultaneous reading of it), we do not know how the auditory input affects L2 adult learners' processing of text and pictures and its relation to comprehension. In the current study, L2 adult learners and native (L1) adults read and read while listening to an illustrated story while their eye movements were recorded. Immediately after reading, they completed a comprehension test. Results showed that the presence of auditory input allowed learners to spend more time looking at pictures and supported a better integration of text and pictures. No differences were observed between L2 and L1 readers' allocation of attention to text and pictures. Both reading conditions led to similar levels of comprehension. Processing time on the text was positively related to comprehension for L2 readers, while it was associated with lower comprehension for L1 readers. Processing time on images was positively related to comprehension only for L1 readers.
Ryan E. Peters; Justin B. Kueser; Arielle Borovsky
In: Brain Sciences, vol. 11, no. 2, pp. 163, 2021.
While recent research suggests that toddlers tend to learn word meanings with many “perceptual” features that are accessible to the toddler's sensory perception, it is not clear whether and how building a lexicon with perceptual connectivity supports attention to and recognition of word meanings. We explore this question in 24–30-month-olds (N = 60) in relation to other individual differences, including age, vocabulary size, and tendencies to maintain focused attention. Participants' looking to item pairs with high vs. low perceptual connectivity—defined as the number of words in a child's lexicon sharing perceptual features with the item—was measured before and after target item labeling. Results revealed pre-labeling attention to known items is biased to both high- and low-connectivity items: first to high, and second, but more robustly, to low-connectivity items. Subsequent object–label processing was also facilitated for high-connectivity items, particularly for children with temperamental tendencies to maintain focused attention. This work provides the first empirical evidence that patterns of shared perceptual features within children's known vocabularies influence both visual and lexical processing, highlighting the potential for a newfound set of developmental dependencies based on the perceptual/sensory structure of early vocabularies.
Aurélie Pistono; Robert J. Hartsuiker
In: Language, Cognition and Neuroscience, vol. 36, no. 8, pp. 1038–1055, 2021.
To reveal the underlying cause of disfluency, several authors related the pattern of disfluencies to difficulties at specific levels of production, using a Network Task. Given that disfluencies are multifactorial, we combined this paradigm with eye-tracking to disentangle disfluency related to word preparation difficulties from others (e.g. stalling strategies). We manipulated lexical and grammatical selection difficulty. In Experiment 1, lines connecting the pictures varied in length, which led participants to use a strategy and inspect other areas than the upcoming picture when pictures were preceded by long lines. Experiment 2 only used short lines. In both experiments, lexical selection difficulty promoted self-corrections, pauses and longer fixation latency prior to naming. Multivariate Pattern Analyses demonstrated that disfluency and eye-movement data patterns can predict lexical selection difficulty. Eye-tracking could provide complementary information about network tasks, by disentangling disfluencies related to picture naming from disfluencies related to self-monitoring or stalling strategies.
In: Memory and Cognition, pp. 1–18, 2021.
Two classes of cognitive mechanisms have been proposed to explain segmentation of continuous sensory input into discrete recurrent constituents: clustering and boundary-finding mechanisms. Clustering mechanisms are based on identifying frequently co-occurring elements and merging them together as parts that form a single constituent. Bracketing (or boundary-finding) mechanisms work by identifying rarely co-occurring elements that correspond to the boundaries between discrete constituents. In a series of behavioral experiments, I tested which mechanisms are at play in the visual modality both during segmentation of a continuous syllabic sequence into discrete word-like constituents and during recognition of segmented constituents. Additionally, I explored conscious awareness of the products of statistical learning—whole constituents versus merged clusters of smaller subunits. My results suggest that both online segmentation and offline recognition of extracted constituents rely on detecting frequently co-occurring elements, a process likely based on associative memory. However, people are more aware of having learnt whole tokens than of recurrent composite clusters.
Seema Prasad; Ramesh Kumar Mishra
In: Bilingualism: Language and Cognition, vol. 24, no. 2, pp. 241–270, 2021.
Does a concurrent verbal working memory (WM) load constrain cross-linguistic activation? In a visual world study, participants listened to Hindi (L1) or English (L2) spoken words and viewed a display containing the phonological cohort of the translation equivalent (TE cohort) of the spoken word and 3 distractors. Experiment 1 was administered without a load. Participants then maintained two or four letters (Experiment 2) or two, six or eight letters (Experiment 3) in WM and were tested on backward sequence recognition after the visual world display. Greater looks towards TE cohorts were observed in both the language directions in Experiment 1. With a load, TE cohort activation was inhibited in the L2–L1 direction and observed only in the early stages after word onset in the L1–L2 direction, suggesting a critical role of language direction. These results indicate that cross-linguistic activation as seen through eye movements depends on cognitive resources such as WM.
Eva Puimège; Maribel Montero Perez; Elke Peters
In: Second Language Research, pp. 1–22, 2021.
This study examines the effect of textual enhancement on learners' attention to and learning of multiword units from captioned audiovisual input. We adopted a within-participants design in which 28 learners of English as a foreign language (EFL) watched a captioned video containing enhanced (underlined) and unenhanced multiword units. Using eye-tracking, we measured learners' online processing of the multiword units as they appeared in the captions. Form recall pre- and posttests measured learners' acquisition of the target items. The results of mixed effects models indicate that enhanced items received greater visual attention, with longer reading times, less single word skipping and more rereading. Further, a positive relationship was found between amount of visual attention and learning odds: items fixated longer, particularly during the first pass, were more likely to be recalled in an immediate posttest. Our findings provide empirical support for the positive effect of visual attention on form recall of multiword units encountered in captioned television. The results also suggest that item difficulty and amount of attention were more important than textual enhancement in predicting learning gains.
Manuel F. Pulido
In: Frontiers in Psychology, vol. 11, pp. 607621, 2021.
Behavioral studies on language processing rely on the eye-mind assumption, which states that the time spent looking at text is an index of the time spent processing it. In most cases, relatively shorter reading times are interpreted as evidence of greater processing efficiency. However, previous evidence from L2 research indicates that non-native participants who present fast reading times are not always more efficient readers, but rather shallow parsers. Because earlier studies did not identify a reliable predictor of variability in L2 processing, such uncertainty around the interpretation of reading times introduces a potential confound that undermines the credibility and the conclusions of online measures of processing. The present study proposes that a recently developed modulator of online processing efficiency, namely, chunking ability, may account for the observed variability in L2 online reading performance. L1 English – L2 Spanish learners' eye movements were analyzed during natural reading. Chunking ability was predictive of overall reading speed. Target relative clauses contained L2 Verb-Noun multiword units, which were manipulated with regard to their L1-L2 congruency. The results indicated that processing of the L1-L2 incongruent units was modulated by an interaction of L2 chunking ability and level of knowledge of multiword units. Critically, the data revealed an inverse U-shaped pattern, with faster reading times in both learners with the highest and the lowest chunking ability scores, suggesting fast integration in the former, and lack of integration in the latter. Additionally, the presence of significant differences between conditions was correlated with individual chunking ability. The findings point to chunking ability as a significant modulator of general L2 processing efficiency, and of cross-language differences in particular, and add clarity to the interpretation of variability in the online reading performance of non-native speakers.
Liam P. Blything; Maialen Iraola Azpiroz; Shanley Allen; Regina Hert; Juhani Järvikivi
In: Journal of Child Language, pp. 1–29, 2021.
In two visual world experiments we disentangled the influence of order of mention (first vs. second mention), grammatical role (subject vs object), and semantic role (proto-agent vs proto-patient) on 7- to 10-year-olds' real-time interpretation of German pronouns. Children listened to SVO or OVS sentences containing active accusative verbs (küssen "to kiss") in Experiment 1 (N = 72), or dative object-experiencer verbs (gefallen "to like") in Experiment 2 (N = 64). This was followed by the personal pronoun er or the demonstrative pronoun der. Interpretive preferences for er were most robust when high prominence cues (first mention, subject, proto-agent) were aligned onto the same entity; and the same applied to der for low prominence cues (second mention, object, proto-patient). These preferences were reduced in conditions where cues were misaligned, and there was evidence that each cue independently influenced performance. Crucially, individual variation in age predicted adult-like weighting preferences for semantic cues (Schumacher, Roberts & Järvikivi, 2017).
Liam P. Blything; Juhani Järvikivi; Abigail G. Toth; Anja Arnhold
In: Frontiers in Psychology, vol. 12, pp. 684639, 2021.
Using visual world eye-tracking, we examined whether adults (N = 58) and children (N = 37; 3;1–6;3) use linguistic focussing devices to help resolve ambiguous pronouns. Participants listened to English dialogues about potential referents of an ambiguous pronoun he. Four conditions provided prosodic focus marking to the grammatical subject or to the object, which were either additionally it-clefted or not. A reference condition focussed neither the subject nor object. Adult online data revealed that linguistic focussing via prosodic marking enhanced subject preference, and overrode it in the case of object focus, regardless of the presence of clefts. Children's processing was also influenced by prosodic marking; however, their performance across conditions showed some differences from adults, as well as a complex interaction with both their memory and language skills. Offline interpretations showed no effects of focus in either group, suggesting that while multiple cues are processed, subjecthood and first mention dominate the final interpretation in cases of conflict.
Hans Rutger Bosker; Esperanza Badaya; Martin Corley
Discourse markers activate their, like, cohort competitors
In: Discourse Processes, vol. 58, no. 9, pp. 837–851, 2021.
Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even despite hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.
Brittany Bowman; Nicole C. Ross; Peter J. Bex; Tiffany Arango
In: Ophthalmic and Physiological Optics, vol. 41, no. 6, pp. 1183–1197, 2021.
Purpose: Dynamic text presentation methods may improve reading ability in patients with central vision loss (CVL) by eliminating the need for accurate eye movements. We compared rapid serial visual presentation (RSVP) and horizontal scrolling text presentation (scrolling) on reading rate and reading acuity in CVL observers and normally-sighted controls with simulated CVL (simCVL). Methods: CVL observers' (n = 11) central scotomas and preferred retinal loci (PRL) for each eye were determined with MAIA microperimetry and fixation analysis. SimCVL controls (n = 16) used 4° inferior eccentric viewing, enforced with an EyeLink eye-tracker. Observers read aloud 4-word phrases randomly drawn from the MNREAD sentences. Six font sizes (0.50–1.30 logMAR) were tested with the better near acuity eye and both eyes of CVL observers. Three font sizes (0.50–1.00 logMAR) were tested binocularly in simCVL controls. Text presentation duration of each word for RSVP or drift speed for scrolling was varied to determine reading rate, defined as 50% of words read correctly. In a subset of CVL observers (n = 7), relationships between PRL eccentricity, reading threshold and rate were explored. Results: SimCVL controls demonstrated significantly faster reading rates for RSVP than scrolling text (p < 0.0001), and there was a significant main effect of font size (p < 0.0001). CVL patients demonstrated no significant differences in binocular reading rate between font sizes (p = 0.12) and text presentation (p = 0.25). Similar results were seen under monocular conditions. Reading acuity for RSVP and scrolling worsened with increasing PRL eccentricity (μ = 4.5°
Laurel Brehm; Carrie N. Jackson; Karen L. Miller
Probabilistic online processing of sentence anomalies
In: Language, Cognition and Neuroscience, vol. 36, no. 8, pp. 959–983, 2021.
Listeners can successfully interpret the intended meaning of an utterance even when it contains errors or other unexpected anomalies. The present work combines an online measure of attention to sentence referents (visual world eye-tracking) with offline judgments of sentence meaning to disclose how the interpretation of anomalous sentences unfolds over time in order to explore mechanisms of non-literal processing. We use a metalinguistic judgment in Experiment 1 and an elicited imitation task in Experiment 2. In both experiments, we focus on one morphosyntactic anomaly (Subject-verb agreement; The key to the cabinets literally *were …) and one semantic anomaly (Without; Lulu went to the gym without her hat ?off) and show that non-literal referents to each are considered upon hearing the anomalous region of the sentence. This shows that listeners understand anomalies by overwriting or adding to an initial interpretation and that this occurs incrementally and adaptively as the sentence unfolds.
Laurel Brehm; Antje S. Meyer
In: Journal of Experimental Psychology: General, vol. 150, no. 9, pp. 1772–1799, 2021.
In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner's turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, “four”) immediately after a recording of a confederate's utterance (zeven, “seven”). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt's length, and whether within a block of trials, the confederate prompt's length was predictable. We measured how these factors affected the gap between turns and the participants' allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate's stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner's speech than corepresentation of their utterance content. (PsycInfo Database Record (c) 2021 APA, all rights reserved)
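The model-selection technique named in the abstract above can be illustrated with a small sketch. This is a hypothetical toy example, not the authors' analysis: the variable names, effect sizes, and data are invented. Each candidate predictor of the turn gap is scored by its mean held-out squared error under k-fold cross-validation, and the predictor with the lowest held-out error is selected.

```python
# Toy model selection by k-fold cross-validation (illustrative only).
import random

def kfold_mse(xs, ys, k=5):
    """Mean held-out squared error of a one-predictor linear model."""
    idx = list(range(len(ys)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errors = []
    for held_out in folds:
        train = [i for i in idx if i not in held_out]
        # Least-squares fit y = a*x + b on the training folds only.
        mx = sum(xs[i] for i in train) / len(train)
        my = sum(ys[i] for i in train) / len(train)
        sxx = sum((xs[i] - mx) ** 2 for i in train)
        a = sum((xs[i] - mx) * (ys[i] - my) for i in train) / sxx
        b = my - a * mx
        errors += [(ys[i] - (a * xs[i] + b)) ** 2 for i in held_out]
    return sum(errors) / len(errors)

# Fabricated data: gaps driven mostly by prompt length, weakly by
# whether the partner's stimulus was visible (content cue).
rng = random.Random(1)
length = [rng.uniform(0.5, 2.0) for _ in range(100)]
visible = [rng.random() for _ in range(100)]
gap = [200 - 80 * l + 5 * v + rng.gauss(0, 10) for l, v in zip(length, visible)]

scores = {"speech-cue model": kfold_mse(length, gap),
          "content model": kfold_mse(visible, gap)}
best = min(scores, key=scores.get)
print(best)  # the length-based predictor should win on held-out error
```

Because each fold is scored on data the model never saw, a predictor that only fits noise cannot win, which is why cross-validated model selection is a reasonable way to ask which cue genuinely predicts the gaps.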
Rotem Broday-Dvir; Rafael Malach
In: Cerebral Cortex, vol. 31, no. 1, pp. 213–232, 2021.
Resting-state fluctuations are ubiquitous and widely studied phenomena of the human brain, yet we are largely in the dark regarding their function in human cognition. Here we examined the hypothesis that resting-state fluctuations underlie the generation of free and creative human behaviors. In our experiment, participants were asked to perform three voluntary verbal tasks: a verbal fluency task, a verbal creativity task, and a divergent thinking task, during functional magnetic resonance imaging scanning. Blood oxygenation level dependent (BOLD)-activity during these tasks was contrasted with a control (deterministic) verbal task, in which the behavior was fully determined by external stimuli. Our results reveal that all voluntary verbal-generation responses displayed a gradual anticipatory buildup that preceded the deterministic control-related responses. Critically, the time-frequency dynamics of these anticipatory buildups were significantly correlated with resting-state fluctuations' dynamics. These correlations were not a general BOLD-related or verbal-response related result, as they were not found during the externally determined verbal control condition. Furthermore, they were located in brain regions known to be involved in language production, specifically the left inferior frontal gyrus. These results suggest a common function of resting-state fluctuations as the neural mechanism underlying the generation of free and creative behaviors in the human cortex.
Trevor Brothers; Liv J. Hoversten; Matthew J. Traxler
In: Bilingualism: Language and Cognition, vol. 24, no. 4, pp. 612–627, 2021.
Syntactic parsing plays a central role in the interpretation of sentences, but it is unclear to what extent non-native speakers can deploy native-like grammatical knowledge during online comprehension. The current eye-tracking study investigated how Chinese-English bilinguals and native English speakers respond to syntactic category and subcategorization information while reading sentences with object-subject ambiguities. We also obtained measures of English language experience, working memory capacity, and executive function to determine how these cognitive variables influence online parsing. During reading, monolinguals and bilinguals showed similar garden-path effects related to syntactic reanalysis, but native English speakers responded more robustly to verb subcategorization cues. Readers with greater language experience and executive function showed increased sensitivity to verb subcategorization cues, but parsing was not influenced by working memory capacity. These results are consistent with exposure-based accounts of bilingual sentence processing, and they support a link between syntactic processing and domain-general cognitive control.
Sarah Brown-Schmidt; Sun Joo Cho; Nazbanou Nozari; Nathaniel Klooster; Melissa Duff
In: Neuropsychologia, vol. 152, pp. 107730, 2021.
Recent findings point to a role for hippocampus in the moment-by-moment processing of language, including the use and generation of semantic features in certain contexts. What role the hippocampus might play in the processing of semantic relations in spoken language comprehension, however, is unknown. Here we test patients with bilateral hippocampal damage and dense amnesia in order to examine the necessity of hippocampus for lexico-semantic mapping processes in spoken language understanding. In two visual-world eye-tracking experiments, we monitor eye movements to images that are semantically related to spoken words and sentences. We find no impairment in amnesia, relative to matched healthy comparison participants. These findings suggest, at least for close semantic links and simple language comprehension tasks, a lack of necessity for hippocampus in lexico-semantic mapping between spoken words and simple pictures.
Mareike Brych; Barbara F. Händel; Eva Riechelmann; Aleksandra Pieczykolan; Lynn Huestegge
Effects of vocal demands on pupil dilation
In: Psychophysiology, vol. 58, no. 2, pp. e13729, 2021.
Pupil dilation is known to be affected by a variety of factors, including physical (e.g., light) and cognitive sources of influence (e.g., mental load due to working memory demands, stimulus/response competition etc.). In the present experiment, we tested the extent to which vocal demands (speaking) can affect pupil dilation. Based on corresponding preliminary evidence found in a reanalysis of an existing data set from our lab, we set up a new experiment that systematically investigated vocal response-related effects compared to mere jaw/lip movement and button press responses. Conditions changed on a trial-by-trial basis while participants were instructed to keep fixating a central cross on a screen throughout. In line with our prediction (and previous observation), speaking caused the pupils to dilate most strongly, followed by nonvocal movements and finally a baseline condition without any vocal or muscular demands. An additional analysis of blink rates showed no difference in blink frequency between vocal and baseline conditions, but different blink dynamics. Finally, simultaneously recorded electromyographic activity showed that muscle activity may contribute to some (but not all) aspects of the observed effects on pupil size. The results are discussed in the context of other recent research indicating effects of perceived (instead of executed) vocal action on pupil dynamics.
Annika Bürsgens; Jürgen Cholewa; Axel Mayer; Thomas Günther
Gender dissimilarity between subject and object facilitates online-comprehension of agent–patient–relations in German: An eye-tracking study with 6- to 10-year-old monolingual children
In: Lingua, vol. 259, pp. 103110, 2021.
In this eye-tracking study, we examined whether gender dissimilarity between the case-marked subject and object noun phrases in a subject-verb-object (SVO) or object-verb-subject (OVS) sentence was used to predict thematic roles (agent and patient) and facilitate the grammatical analyses needed for thematic role assignment. Forty-two German-speaking 6- to 10-year-old children looked at two drawings depicting an action between two figures while listening to a sentence. One drawing matched the sentence, but the other showed the figures' thematic roles reversed. For half of the sentences, each figure's thematic role could be predicted shortly after stimulus onset from the gender of the first word (an article) due to gender dissimilarity between the subject and the object. For the other sentences, children had to wait at least until they heard the first noun of the sentence. Our results confirmed that the children shifted their looks earlier and more confidently to the target in SVO sentences with gender-dissimilar noun phrases. Results for OVS sentences were less clear-cut: a sentence-final facilitation effect of gender dissimilarity was visible only when agent and patient were analyzed separately. Our findings are discussed with respect to different approaches to the role of gender processing in sentence comprehension.
Jon W. Carr; Valentina N. Pescuma; Michele Furlan; Maria Ktori; Davide Crepaldi
In: Behavior Research Methods, pp. 1–24, 2021.
A common problem in eye-tracking research is vertical drift—the progressive displacement of fixation registrations on the vertical axis that results from a gradual loss of eye-tracker calibration over time. This is particularly problematic in experiments that involve the reading of multiline passages, where it is critical that fixations on one line are not erroneously recorded on an adjacent line. Correction is often performed manually by the researcher, but this process is tedious, time-consuming, and prone to error and inconsistency. Various methods have previously been proposed for the automated, post hoc correction of vertical drift in reading data, but these methods vary greatly, not just in terms of the algorithmic principles on which they are based, but also in terms of their availability, documentation, implementation languages, and so forth. Furthermore, these methods have largely been developed in isolation with little attempt to systematically evaluate them, meaning that drift correction techniques are moving forward blindly. We document ten major algorithms, including two that are novel to this paper, and evaluate them using both simulated and natural eye-tracking data. Our results suggest that a method based on dynamic time warping offers great promise, but we also find that some algorithms are better suited than others to particular types of drift phenomena and reading behavior, allowing us to offer evidence-based advice on algorithm selection.
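The dynamic-time-warping idea that the abstract above singles out can be sketched compactly. The following is an illustrative toy version, not the algorithm evaluated in the paper: it aligns the sequence of fixation y-coordinates to the ordered top-to-bottom sequence of line y-positions with DTW, then snaps each fixation to its aligned line, so gradually drifting fixations still map to the correct lines.

```python
# Toy DTW-based vertical drift correction (illustrative sketch only).
def dtw_assign_lines(fixation_ys, line_ys):
    """Snap each fixation to a text line via dynamic time warping between
    the fixation-y sequence and the ordered line-y sequence."""
    n, m = len(fixation_ys), len(line_ys)
    INF = float("inf")
    # cost[i][j]: best cumulative cost aligning first i fixations with first j lines
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(fixation_ys[i - 1] - line_ys[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # next fixation, same line
                                 cost[i - 1][j - 1],  # next fixation, next line
                                 cost[i][j - 1])      # advance line, no fixation
    # Backtrack the optimal warping path, recording the line assigned at each
    # step that consumes a fixation.
    i, j, assignment = n, m, [0] * n
    while i > 0:
        prev = min((cost[i - 1][j - 1], i - 1, j - 1),
                   (cost[i - 1][j], i - 1, j),
                   (cost[i][j - 1], i, j - 1))
        if prev[1] == i - 1:
            assignment[i - 1] = j - 1
        i, j = prev[1], prev[2]
    return assignment

# Fixations drifting slightly downward across three lines at y = 100, 140, 180:
print(dtw_assign_lines([102, 105, 109, 143, 148, 151, 183, 188],
                       [100, 140, 180]))  # → [0, 0, 0, 1, 1, 1, 2, 2]
```

The monotonicity of the DTW path encodes the key assumption that readers proceed through lines in order, which is what lets the method tolerate a steadily growing vertical offset that simple nearest-line snapping would misclassify.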
Spyridoula Cheimariou; Thomas A. Farmer; Jean K. Gordon
In: Psychology and Aging, vol. 36, no. 4, pp. 531–542, 2021.
Increased predictability effects in older compared to younger adults have been mostly observed in late eye-movement measures during reading. However, it is unclear whether and how these effects may be related to verbal ability, which typically improves with age. Past studies have shown that verbal abilities modulate the predictability effect. Here, we aimed to replicate predictability effects in younger and older adults in a sentence reading paradigm and to investigate how verbal ability modulates the predictability effect. We monitored 26 younger and 27 older adults' eye movements as they read sentences with target words varying in predictability and examined the impact of age and verbal ability, as reflected in vocabulary and print exposure measures. Replicating previous studies, we found that older adults relied more heavily on contextual information in their anticipation of upcoming input in one late measure. In one early measure (first-fixation duration), participants with higher scores in verbal ability showed greater predictability effects, whereas the predictability effect was virtually absent in those with low scores. In one late measure (regression-path duration), age interacted with predictability. However, verbal ability, when included as a covariate in this model, could not account for the age-related increases in predictability effects. Collectively, our findings indicate that verbal ability influences predictability effects in early processing stages, suggesting facilitation of initial word processing and that some aspect of aging other than verbal ability influences predictability effects in late measures. The latter finding most likely reflects a shift toward integrative controlled processes with age. (PsycInfo Database Record (c) 2021 APA, all rights reserved)
Mingjing Chen; Yongsheng Wang; Bingjie Zhao; Xin Li; Xuejun Bai
In: Frontiers in Psychology, vol. 12, pp. 602931, 2021.
In alphabetic writing systems (such as English), spaces between words mark word boundaries, and the basic unit of reading is distinguished during visual-level processing. This visual-level word-boundary information facilitates reading. Chinese is an ideographic language whose text contains no intrinsic inter-word spaces to mark word boundaries. Previous studies have shown that the basic processing unit of Chinese reading is also the word. However, findings remain inconsistent as to whether inserting spaces between words in Chinese text improves reading performance. Researchers have proposed that there may be a trade-off between format familiarity and the facilitation effect of inter-word spaces. To test this, the present study manipulated format familiarity by reversing the Chinese reading direction from right to left in Experiments 1 and 2. Experiment 1 examined whether inter-word spaces facilitated Chinese reading in an unfamiliar format: 40 native Chinese undergraduates read Chinese sentences from right to left in four format conditions. The results showed faster reading speed and shorter total reading time for the inter-word spaced format. Building on this finding, Experiment 2 examined whether the facilitation effect of inter-word spaces would shrink or disappear once format familiarity improved: 40 native Chinese undergraduates who had not participated in Experiment 1 read Chinese sentences from right to left in four format conditions after ten days of reading training. There was no significant difference in total reading time or reading speed between the inter-word spaced and unspaced formats, suggesting that the facilitation effect of inter-word spaces in Chinese reading had diminished.
The combined results of the two experiments suggest that there is indeed a trade-off between format familiarity and the facilitation of word segmentation, which supports the assumption of previous studies.
Shuang Chen; Yuqing Tang; Xiuna Lv; Kevin B. Paterson; Lijing Chen
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 1, pp. 45–53, 2021.
Contrastive focus implies a contrast between two elements. However, it is unclear whether and how any interplay between such a contrast and similarity between potentially contrasting elements might affect focus processing. Accordingly, we report an eye movement experiment investigating this issue. The experiment used a background story to introduce eight characters whose social identities were manipulated to be similar or dissimilar. Participants first read this background story, then a series of two-sentence discourses while their eye movements were recorded. Each discourse referred to two characters from the passage who had either similar or dissimilar identities, with one (the target character) either focused using the Chinese particle zhiyou (meaning only) or unfocused. The results showed a typical focus facilitation effect, such that target character names were processed more quickly when focused than unfocused. We also observed a main effect of the similarity/dissimilarity of characters and, crucially, an interaction between this variable and focus. This interaction was due to slower processing of a post-target region when the target character was focused and the two characters had similar rather than dissimilar identities, but no such effect when the target character was unfocused. The findings suggest that establishing a contrast between referents is effortful during reading when these have similar rather than dissimilar social identities and so are more difficult to differentiate. The distinctiveness of referents in a discourse context may therefore constrain the establishment of contrastive focus during reading. We discuss these findings in relation to current theories of focus interpretation.
In: Translation, Cognition and Behavior, vol. 4, no. 1, pp. 47–74, 2021.
This study investigated the impact of professional experience on the process and product of sight interpreting/translation (SiT). Seventeen experienced interpreters, with at least 150 days' professional experience, and 18 interpreting students were recruited to conduct three tasks: silent reading, reading aloud, and SiT. All participants had similar interpreter training backgrounds. The data of the SiT task are reported here, with two experienced interpreters (both AIIC members) assessing the participants' interpretations on accuracy and style, which includes fluency and other paralinguistic performance. The findings show that professional experience contributed to higher accuracy, although there was no between-group difference in the mean score on style, overall task time, length of the SiT output, and mean fixation duration of each stage of reading. The experienced practitioners exhibited more varied approaches at the beginning of the SiT task, with some biding their time longer than the others before oral production started, but quality was not affected. Moving along, the practitioners showed better language flexibility in that their renditions were faster, steadier, and less disrupted by pauses and the need to read further to maintain the flow of interpretation.
Yesi Cheng; Jason Rothman; Ian Cunnings
In: Applied Psycholinguistics, vol. 42, no. 1, pp. 129–151, 2021.
Using both offline and online measures, the present study investigates relative clause attachment resolution in native (L1) and nonnative (L2) speakers of English. We test how relative clause resolution interacts with linguistic factors and participant-level individual differences. Previous L1 English studies have demonstrated a low attachment preference and also an ambiguity advantage, suggesting that L1ers may not have as strong a low attachment preference as is sometimes claimed. We employ a similar design to examine this effect in L1 and L2 comprehension. Offline results indicate that both groups exhibit a low attachment preference, positively correlated with reading span scores and, in the L2 group, with proficiency. Online results also suggest a low attachment preference in both groups. However, our data show that individual differences influence online attachment resolution for both natives and nonnatives; higher lexical processing efficiency correlates with quicker resolution of linguistic conflicts. We argue that the current findings suggest that attachment resolution during L1 and L2 processing shares the same processing mechanisms and is modulated by similar individual differences.
Meghan Clayards; M. Gareth Gaskell; Sarah Hawkins
In: Journal of Phonetics, vol. 87, pp. 101055, 2021.
An eye-tracking experiment tested the hypothesis that listeners use within-word fine phonetic detail that systematically reflects morphological structure, when the phonemes are identical (dis in discolour (true prefix) vs. discover (pseudo prefix)) and when they differ (re-cover vs. recover). Spoken sentence pairs, identical up to at least the critical word (e.g. I'd be surprised if the boys discolour/discover it), were cross-spliced at the prefix-stem boundary to produce stimuli in which the critical syllable's acoustics either matched or mismatched the sentence continuation. On each trial listeners heard one sentence, and selected one of two photographs depicting the pair. Matched and mismatched stimuli were heard in separate sessions, at least a week apart. Matched stimuli led to more looks to the target photograph overall and time-course analysis suggested this was true at the earliest moments. We also observed stronger effects for earlier trials and the effects tended to weaken over the course of the experiment. These results suggest that normal speech perception involves continuously monitoring phonetic detail, and, when it is systematically associated with meaning, using it to facilitate rapid understanding.
Sarah Colby; Bob McMurray
In: Journal of Speech, Language, and Hearing Research, vol. 64, no. 9, pp. 3627–3652, 2021.
Purpose: Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method: In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of subjects on both tasks in Experiment 3 (n = 30). Results: In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks and neither task predicted speech accuracy. Conclusions: Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. 
Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort.
Fengjiao Cong; Baoguo Chen
In: Quarterly Journal of Experimental Psychology, pp. 1–16, 2021.
We conducted three eye movement experiments to investigate the mechanism for coding letter positions in a person's second language during sentence reading; we also examined the role of morphology in this process with a more rigorous manipulation. Given that readers obtain information not only from currently fixated words (i.e., the foveal area) but also from upcoming words (i.e., the parafoveal area) to guide their reading, we examined both when the targets were fixated (Exp. 1) and when the targets were seen parafoveally (Exps. 2 and 3). First, we found the classic transposed letter (TL) effect in Exp. 1, but not in Exp. 2 or 3. This implies that flexible letter position coding exists during sentence reading. However, this was limited to words located in the foveal area, suggesting that L2 readers whose L2 proficiency is not as high as skilled native readers are not able to extract and utilise the parafoveal letter identity and position information of a word, whether the word length is long (Exp. 2) or short (Exp. 3). Second, we found morphological information to influence the magnitude of the TL effect in Exp. 1. These results provide new eye movement evidence for the flexibility of L2 letter position coding during sentence reading, as well as the interactions between the different internal representations of words in this process. Future L2 reading frameworks should integrate word recognition and eye movement control models.
Kathy Conklin; Gareth Carrol
In: Applied Linguistics, vol. 42, no. 3, pp. 492–513, 2021.
While it is possible to express the same meaning in different ways ('bread and butter' versus 'butter and bread'), we tend to say things in the same way. As much as half of spoken discourse is made up of formulaic language or linguistic patterns. Despite its prevalence, little is known about how the processing system treats novel patterns and how rapidly a sensitivity to them arises in natural contexts. To address this, we monitored native English speakers' eye movements when reading short stories containing existing (conventional) patterns ('time and money'), seen once, and novel patterns ('wires and pipes'), seen one to five times. Subsequently, readers saw both existing and novel phrases in the reversed order ('money and time'; 'pipes and wires'). In four to five exposures, much like existing lexical patterns, novel ones demonstrate a processing advantage. Sensitivity to lexical patterns - including the co-occurrence of lexical items and the order in which they occur - arises rapidly and automatically during natural reading. This has implications for language learning and is in line with usage-based models of language processing.
Jason C. Coronel; Olivia M. Bullock; Hillary C. Shulman; Matthew D. Sweitzer; Robert M. Bond; Shannon Poulsen
Eye movements predict large-scale voting decisions
In: Psychological Science, vol. 32, no. 6, pp. 836–848, 2021.
More than 100 countries allow people to vote directly on policies in direct democracy elections (e.g., 2016 Brexit referendum). Politicians are often responsible for writing ballot language, and voters frequently encounter ballot measures that are difficult to understand. We examined whether eye movements from a small group of individuals can predict the consequences of ballot language on large-scale voting decisions. Across two preregistered studies (Study 1: N = 120 registered voters, Study 2: N = 120 registered voters), we monitored laboratory participants' eye movements as they read real ballot measures. We found that eye-movement responses associated with difficulties in language comprehension predicted aggregate voting decisions to abstain from voting and vote against ballot measures in U.S. elections (total number of votes cast = 137,661,232). Eye movements predicted voting decisions beyond what was accounted for by widely used measures of language difficulty. This finding demonstrates a new way of linking eye movements to out-of-sample aggregate-level behaviors.
Lei Cui; Jue Wang; Yingliang Zhang; Fengjiao Cong; Wenxin Zhang; Jukka Hyönä
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 4, pp. 610–633, 2021.
In two eye-tracking studies, the reading of two-character Chinese compound words was examined. First- and second-character frequency were orthogonally manipulated to examine the extent to which Chinese compound words are processed via their component characters. In Experiment 1, first- and second-character frequency were manipulated for frequent compound words, whereas in Experiment 2 this was done for infrequent compound words. Fixation time and skipping probability for the first and second character were unaffected by that character's frequency in both experiments, as well as in the pooled analysis. However, in Experiment 2, fixations on the second character were longer when a high-frequency character, rather than a low-frequency character, was presented as the first character. This reversed character frequency effect reflects a morphological family size effect and is explained by the constraint hypothesis, according to which fixation time on the second component of a two-component compound word is shorter when its identity is constrained by the first component. It is concluded that frequent Chinese compound words are processed holistically, whereas with infrequent compound words there is some room for the characters to play a role in the identification process.
Ian Cunnings; Hiroki Fujita
In: Second Language Research, pp. 1–25, 2021.
Relative clauses have long been examined in research on first (L1) and second (L2) language acquisition and processing, and a large body of research has shown that object relative clauses (e.g. 'The boy that the girl saw') are more difficult to process than subject relative clauses (e.g. 'The boy that saw the girl'). Although there are different accounts of this finding, memory-based factors have been argued to play a role in explaining the object relative disadvantage. Evidence of memory-based factors in relative clause processing comes from studies indicating that representational similarity influences the difficulty associated with object relatives as a result of a phenomenon known as similarity-based interference. Although similarity-based interference has been well studied in L1 processing, less is known about how it influences L2 processing. We report two studies – an eye-tracking experiment and a comprehension task – investigating interference in the comprehension of relative clauses in L1 and L2 readers. Our results indicated similarity-based interference in the processing of object relative clauses in both L1 and L2 readers, with no significant differences in the size of interference effects between the two groups. These results highlight the importance of considering memory-based factors when examining L2 processing.