EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2020 (with some early 2021s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language article, please email us!
Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li
In: Brain and Language, 213, pp. 1–10, 2021.
Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.
Sainan Zhao; Lin Li; Min Chang; Jingxin Wang; Kevin B Paterson
In: Quarterly Journal of Experimental Psychology, 74 (1), pp. 68–78, 2021.
Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems.
Mira L Nencheva; Elise A Piazza; Casey Lew‐Williams
In: Developmental Science, 24, pp. 1–15, 2021.
Young children have an overall preference for child-directed speech (CDS) over adult-directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about processing of CDS at short, sub-second timescales. How do the moment-to-moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word-level contour shapes: ‘fall', ‘rise', ‘hill', and ‘valley'. In Study 2, we adapted a measure from adult attention research—pupil size synchrony—to quantify real-time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers' attention when listening to the four word-level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers' learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers' attention but guide their learning of new words. By revealing a physiological response to the real-time dynamics of CDS, this investigation yields a new sub-second framework for understanding young children's engagement with one of the most important signals in their environment.
Adam J Parker; Timothy J Slattery
In: Quarterly Journal of Experimental Psychology, 74 (1), pp. 135–149, 2021.
In recent years, there has been an increase in research concerning individual differences in readers' eye movements. However, this body of work is almost exclusively concerned with the reading of single-line texts. While spelling and reading ability have been reported to influence saccade targeting and fixation times during intra-line reading, where upcoming words are available for parafoveal processing, it is unclear how these variables affect fixations adjacent to return-sweeps. We, therefore, examined the influence of spelling and reading ability on return-sweep and corrective saccade parameters for 120 participants engaged in multiline text reading. Less-skilled readers and spellers tended to launch their return-sweeps closer to the end of the line, preferred a viewing location closer to the start of the next, and made more return-sweep undershoot errors. We additionally report several skill-related differences in readers' fixation durations across multiline texts. Reading ability influenced all fixations except those resulting from return-sweep error. In contrast, spelling ability influenced only those fixations following accurate return-sweeps—where parafoveal processing was not possible prior to fixation. This stands in contrast to an established body of work where fixation durations are related to reading but not spelling ability. These results indicate that lexical quality shapes the rate at which readers access meaning from the text by enhancing early letter encoding, and influences saccade targeting even in the absence of parafoveal target information.
Minke J de Boer; Deniz Başkent; Frans W Cornelissen
In: Multisensory Research, 34, pp. 17–47, 2021.
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
Alex de Carvalho; Isabelle Dautriche; Anne Caroline Fiévet; Anne Christophe
In: Journal of Experimental Child Psychology, 203, pp. 1–25, 2021.
Because linguistic communication is often noisy and uncertain, adults flexibly rely on different information sources during sentence processing. We tested whether toddlers engage in a similar process and how that process interacts with verb learning. Across two experiments, we presented French 28-month-olds with right-dislocated sentences featuring a novel verb (“He_i is VERBing, the boy_i”), where a clear prosodic boundary after the verb indicates that the sentence is intransitive (such that the NP “the boy” is coreferential with the pronoun “he” and the sentence means “The boy is VERBing”). By default, toddlers incorrectly interpreted the sentence based on the number of NPs (assuming, e.g., that someone is VERBing the boy). Yet, when children were provided with additional information about the syntactic contexts (Experiment 1
Gayle DeDe; Denis Kelleher
In: Journal of Neurolinguistics, 57, pp. 1–19, 2021.
The present study examined how healthy aging and aphasia influence the capacity for readers to generate structural predictions during online reading, and how animacy cues influence this process. Non-brain-damaged younger (n = 24) and older (n = 12) adults (Experiment 1) and individuals with aphasia (IWA; n = 11; Experiment 2) read subject relative and object relative sentences in an eye-tracking experiment. Half of the sentences included animate sentential subjects, and the other half included inanimate sentential subjects. All three groups used animacy information to mitigate effects of syntactic complexity. These effects were greater in older than younger adults. IWA were sensitive to structural frequency, with longer reading times for object relative than subject relative sentences. As in previous work, effects of structural complexity did not emerge on IWA's first pass through the sentence, but were observed when IWA reread critical segments of the sentences. Thus, IWA may adopt atypical reading strategies when they encounter low frequency or complex sentence structures, but they are able to use animacy information to reduce the processing disruptions associated with these structures.
Avital Deutsch; Hadas Velan; Yiska Merzbach; Tamar Michaly
In: Journal of Memory and Language, 116, pp. 104182, 2021.
In Hebrew, as in other Semitic languages, most words are formed in a non-concatenated way, with a root morpheme embedded in a word-pattern morpheme consisting of only vowels or vowels plus consonants. Previous research on visual word recognition in Hebrew has revealed a robust morphological root-priming effect, with word recognition facilitated by the prior sub-perceptual presentation of the root morpheme, along with a less stable and more fragile word-pattern priming effect, particularly in the nominal system. These findings support the theory that morphological principles govern lexical access, with the root morpheme as a main organizational unit of the mental lexicon. However, less research has been done to delineate the algorithm underlying decomposition. The current study explores the importance of the natural lexical orthographic context of a complex root + pattern word structure for root extraction, using on-line measures based on tracking eye-movements in sentence reading. A series of 4 experiments using a fast-priming paradigm demonstrated that detaching the root morpheme from its lexical orthographic structure hinders the root-priming effect. Presenting the root in a non-word or a pseudo-word, that is, a non-existent combination of a real root + a real pattern did not make any difference. These results suggest that mapping the orthographic root onto its morphological mental representation depends on the orthographic context in which its letters appear. This finding constrains the role of the root in visual word-recognition, highlighting the crucial conditions for extracting it in the natural setting of reading.
Lynn Eekhof; Moniek M Kuijpers; Myrthe Faber; Xin Gao; Marloes Mak; Emiel Van den Hoven; Roel M Willems
Lost in a story, detached from the words.
In: Discourse Processes, pp. 1–20, 2021.
This article explores the relationship between low- and high-level aspects of reading by studying the interplay between word processing, as measured with eye tracking, and narrative absorption and liking, as measured with questionnaires. Specifically, we focused on how individual differences in sensitivity to lexical word characteristics (measured as the effect of these characteristics on gaze duration) were related to narrative absorption and liking. By reanalyzing a large data set consisting of three previous eye-tracking experiments in which subjects (N = 171) read literary short stories, we replicated the well-established finding that word length, lemma frequency, position in sentence, age of acquisition, and orthographic neighborhood size of words influenced gaze duration. More importantly, we found that individual differences in the degree of sensitivity to three of these word characteristics, i.e., word length, lemma frequency, and age of acquisition, were negatively related to print exposure and to a lesser degree to narrative absorption and liking. Even though the underlying mechanisms of this relationship are still unclear, we believe the current findings underline the need to map out the interplay between, on the one hand, the technical and, on the other hand, the subjective processes of reading by studying reading behavior in more natural settings.
Kunyu Lian; Jie Ma; Feifei Liang; Ling Wei; Shuwei Zhang; Yingying Wu; Xuejun Bai; Rong Lian
In: Social Behavior and Personality, 49 (1), pp. 1–13, 2021.
How frequently a character appears in a word (positional character frequency) is used as a cue in word segmentation when reading aloud in the Chinese language. In this study we created 176 sentences with a target word in the center of each. Participants were 76 college students (mature readers) and 76 third-grade students (beginner readers). Results show an interaction effect of age and positional frequency of the initial character in the word on gaze duration. Further analysis shows that the third-grade students' gaze duration was significantly longer in high, relative to low, positional character frequency of the target words. This trend was consistent with refixation duration, and there was a marginally significant interaction between age and total fixation time. Overall, positional character frequency was an important cue for word segmentation in oral reading in the Chinese language, and third-grade students relied more heavily on this cue than did college students.
Feifei Liang; Jie Ma; Xuejun Bai; Simon P Liversedge
In: Journal of Memory and Language, 116, pp. 104183, 2021.
We adopted a word learning paradigm to examine whether children and adults differ in their saccade targeting strategies when learning novel words in Chinese reading. Adopting a developmental perspective, we extrapolated hypotheses pertaining to saccadic targeting and its development from the Chinese Reading Model (Li & Pollatsek, 2020). In our experiment, we embedded novel words into eight sentences, each of which provided a context for readers to form a new lexical representation. A group of children and a group of adults were required to read these sentences as their eye movements were recorded. At a basic level, we showed that decisions of initial saccadic targeting, and mechanisms responsible for computation of initial landing sites relative to launch sites, are in place early in children; however, such targeting was less optimal in children than adults. Furthermore, for adults, saccadic targeting behavior became more optimized as lexical familiarity increased; no such effects occurred in children. Mechanisms controlling initial saccadic targeting in relation to launch sites and in respect of lexical familiarity appear to operate with functional efficacy that is developmentally delayed. At a broad theoretical level, we consider our results in relation to issues associated with visually and linguistically mediated saccadic control. More specifically, our novel findings fit neatly with our theoretical extrapolations from the CRM and suggest that its framework may be valuable for future investigations of the development of eye movement control in Chinese reading.
Anne Wienholz; Derya Nuhbalaoglu; Markus Steinbach; Annika Herrmann; Nivedita Mani
In: Sign Language & Linguistics, 24 (1), pp. 1–32, 2021.
A number of studies provide evidence for a phonological priming effect in the recognition of single signs, and show that the specific phonological parameters manipulated can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.
Danila Rusich; Lisa S Arduino; Marika Mauti; Marialuisa Martelli; Silvia Primativo
In: Brain Sciences, 11 (28), pp. 1–10, 2021.
This study explores whether semantic processing in parafoveal reading in the Italian language is modulated by the perceptual and lexical features of stimuli by analyzing the results of the rapid parallel visual presentation (RPVP) paradigm experiment, which simultaneously presented two words, with one in the fovea and one in the parafovea. The words were randomly sampled from a set of semantically related and semantically unrelated pairs. The accuracy and reaction times in reading the words were measured as a function of the stimulus length and written word frequency. Fewer errors were observed in reading parafoveal words when they were semantically related to the foveal ones, and a larger semantic facilitatory effect was observed when the foveal word was highly frequent and the parafoveal word was short. Analysis of the reaction times suggests that the semantic relation between the two words sped up the naming of the foveal word when both words were short and highly frequent. Altogether, these results add further evidence in favor of the semantic processing of words in the parafovea during reading, modulated by the orthographic and lexical features of the stimuli. The results are discussed within the context of the most prominent models of word processing and eye movement controls in reading.
Gaston Saux; Nicolas Vibert; Julien Dampuré; Debora I Burin; Anne M Britt; Jean François Rouet
In: Acta Psychologica, 212, pp. 1–16, 2021.
The study examined how readers integrate information from and about multiple information sources into a memory representation. In two experiments, college students read brief news reports containing two critical statements, each attributed to a source character. In half of the texts, the statements were consistent with each other; in the other half they were discrepant. Each story also featured a non-source character (who made no statement). The hypothesis was that discrepant statements, as compared to consistent statements, would promote distinct attention and memory only for the source characters. Experiment 1 used short interviews to assess participants' ability to recognize the source of one of the statements after reading. Experiment 2 used eye-tracking to collect data during reading and during a source-content recognition task after reading. As predicted, discrepancies enhanced memory of, and attention to, only the source-related segments of the texts. Discrepancies also enhanced the link in memory between the two source characters, as opposed to the non-source character, as indicated by the participants' justifications (Experiment 1) and their visual inspection of the recognition items (Experiment 2). The results are interpreted within current theories of text comprehension and document literacy.
Sarah Schuster; Nicole Alexandra; Florian Hutzler; Fabio Richlan; Martin Kronbichler; Stefan Hawelka
In: NeuroImage, 228, pp. 1–12, 2021.
Evidence accrues that readers form multiple hypotheses about upcoming words. The present study investigated the hemodynamic effects of predictive processing during natural reading by means of combining fMRI and eye movement recordings. In particular, we investigated the neural and behavioral correlates of precision-weighted prediction errors, which are thought to be indicative of subsequent belief updating. Participants silently read sentences in which we manipulated the cloze probability and the semantic congruency of the final word, which served as indices of precision and prediction error, respectively. With respect to the neural correlates, our findings indicate an enhanced activation within the left inferior frontal and middle temporal gyrus, suggesting an effect of precision on prediction update at higher (lexico-)semantic levels. Despite being evident at the neural level, we did not observe any corresponding disproportionate reading times in participants' eye movements. The results speak against discrete predictions, but favor the notion that multiple words are activated in parallel during reading.
Rotem Broday-Dvir; Rafael Malach
In: Cerebral Cortex, 31 (1), pp. 213–232, 2021.
Resting-state fluctuations are ubiquitous and widely studied phenomena of the human brain, yet we are largely in the dark regarding their function in human cognition. Here we examined the hypothesis that resting-state fluctuations underlie the generation of free and creative human behaviors. In our experiment, participants were asked to perform three voluntary verbal tasks: a verbal fluency task, a verbal creativity task, and a divergent thinking task, during functional magnetic resonance imaging scanning. Blood oxygenation level dependent (BOLD) activity during these tasks was contrasted with a deterministic control verbal task, in which the behavior was fully determined by external stimuli. Our results reveal that all voluntary verbal-generation responses displayed a gradual anticipatory buildup that preceded the deterministic control-related responses. Critically, the time-frequency dynamics of these anticipatory buildups were significantly correlated with resting-state fluctuations' dynamics. These correlations were not a general BOLD-related or verbal-response related result, as they were not found during the externally determined verbal control condition. Furthermore, they were located in brain regions known to be involved in language production, specifically the left inferior frontal gyrus. These results suggest a common function of resting-state fluctuations as the neural mechanism underlying the generation of free and creative behaviors in the human cortex.
Tamás Káldi; Anna Babarczy
In: Journal of Memory and Language, 116, pp. 104187, 2021.
Focus is a linguistic device that marks a piece of information within an utterance as most relevant, as when emphasis is placed by the speaker on a word using phonological stress, special intonation, or prosodic prominence. The question addressed in the present study is whether the use of linguistic focus is best seen as a means of directing the listener's attention. We investigated attention allocation on the part of the listener to linguistically focused elements in working memory in a series of eye-tracking experiments. We concentrated on two processes: the encoding of the focused element and its retention. Attentional load during encoding was measured by pupil dilation, and attention allocation during retention was estimated from fixations to locations of previously present visual stimuli on a blank screen. It was found that i) more attention was allocated during the processing of sentences with linguistic focus and ii) linguistically focused elements received more attention during memory retention. However, when the task demanded the sharing of attention, the advantage of the focused element during retention disappeared. Further experiments showed that when verbal stimuli whose prominence was not linguistically marked were presented, the patterns of attention allocation associated with linguistic focus during retention replicated. These results lend further support to the claim that linguistic focus is a grammaticalized means of expressing prominence, and as such, functions as an attention capturing device.
Jukka Hyönä; Timo T Heikkilä; Seppo Vainio; Reinhold Kliegl
In: Cognition, 208, pp. 1–13, 2021.
Previous studies (Hyönä, Yan, & Vainio, 2018; Yan et al., 2014) have demonstrated that in morphologically rich languages a word's morphological status is processed parafoveally to be used in modulating saccadic programming in reading. In the present parafoveal preview study conducted in Finnish, we examined the exact nature of this effect by comparing reading of morphologically complex words (a stem + two suffixes) to that of monomorphemic words. In the preview-change condition, the final 3–4 letters were replaced with other letters making the target word a pseudoword; for suffixed words, the word stem remained intact but the suffix information was unavailable; for monomorphemic words, only part of the stem was parafoveally available. Three alternative predictions were put forth. According to the first alternative, the morphological effect in initial fixation location is due to parafoveally perceiving the suffix as a highly frequent letter cluster and then adjusting the saccade program to land closer to the word beginning for suffixed than monomorphemic words. The second alternative, the processing difficulty hypothesis, assumes a morphological complexity effect: suffixed words are more complex than monomorphemic words. Therefore, the attentional window is narrower and the saccade is shorter. The third alternative posits that the effect reflects parafoveal access to the word's stem. The results for the initial fixation location and fixation durations were consistent with the parafoveal stem-access view.
Lili Yu; Yanping Liu; Erik D Reichle
In: Journal of Experimental Psychology: General, pp. 1–30, 2020.
Chinese words consist of a variable number of characters that are normally written in continuous lines, without the blank spaces that are used to separate words in most alphabetic writing systems. These conventions raise questions about the relative roles of character versus whole-word processing in word identification, and how words are segmented from strings of characters for the purpose of their identification and saccade targeting. The present article attempts to address these questions by reporting an eye-movement experiment in which 60 participants read a corpus of sentences containing two-character target words that varied in terms of their overall frequency and the frequency of their initial characters. We examine participants' eye movements using both corpus-based statistical models and more standard analyses of our target words. In addition to documenting how key lexical variables influence eye movements and highlighting a few discrepancies between the results obtained using our two statistical approaches, our experiment shows that high-frequency initial characters can actually slow word identification. We discuss the theoretical significance of this finding and others for current models of Chinese reading, and then describe a new computational model of eye-movement control during the reading of Chinese. Finally, we report simulations showing that this model can account for our findings.
Aaron Veldre; Roslyn Wong; Sally Andrews
In: Attention, Perception, and Psychophysics, pp. 1–9, 2020.
The gaze-contingent moving-window paradigm was used to assess the size and symmetry of the perceptual span in older readers. The eye movements of 49 cognitively intact older adults (60–88 years of age) were recorded as they read sentences varying in difficulty, and the availability of letter information to the right and left of fixation was manipulated. To reconcile discrepancies in previous estimates of the perceptual span in older readers, individual differences in written language proficiency were assessed with tests of vocabulary, reading comprehension, reading speed, spelling ability, and print exposure. The results revealed that higher proficiency older adults extracted information up to 15 letter spaces to the right of fixation, while lower proficiency readers showed no additional benefit beyond 9 letters to the right. However, all readers showed improvements to reading with the availability of up to 9 letters to the left—confirming previous evidence of reduced perceptual span asymmetry in older readers. The findings raise questions about whether the source of age-related changes in parafoveal processing lies in the adoption of a risky reading strategy involving an increased propensity to both guess upcoming words and make corrective regressions.
Han Zhang; Chuyan Qu; Kevin F Miller; Kai S Cortina
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (4), pp. 638–648, 2020.
Mind-wandering (i.e., thoughts irrelevant to the current task) occurs frequently during reading. The current study examined whether mind-wandering was associated with reduced rereading when the reader read so-called garden-path jokes. In a garden-path joke, the reader's initial interpretation is violated by the final punchline, and the violation creates a semantic incongruity that needs to be resolved (e.g., "My girlfriend has read so many negative things about smoking. Therefore, she decided to quit reading."). Rereading text prior to the punchline can help resolve the incongruity. In a main study and a preregistered replication, participants read jokes and nonfunny controls embedded in filler texts and responded to thought probes that assessed intentional and unintentional mind-wandering. Results were consistent across the two studies: When the reader was not mind-wandering, jokes elicited more rereading (from the punchline) than the nonfunny controls did, and had a recall advantage over the nonfunny controls. During mind-wandering, however, the additional eye movement processing and the recall advantage of jokes were generally reduced. These results show that mind-wandering is associated with reduced rereading, which is important for resolving higher level comprehension difficulties.
Hui Zhang; Ping Wang; Tinghu Kang
In: Art and Design Review, 8, pp. 215–227, 2020.
This study compares the characteristics of the aesthetic experience of different cognitive styles in calligraphy style. The study used a cursive script and running script as experimental materials and the EyeLink 1000 Plus eye tracker to record eye movements while viewing calligraphy. The results showed that, in the overall analysis, there were differences in the field cognitive style in total fixation counts, saccade amplitude, and saccade counts, and differences in the calligraphic style in total fixation counts and saccade counts. Further local analysis found significant differences in the field cognitive style in mean pupil diameter, fixation counts, and regression-in count, and that there were differences in fixation counts and regression-in count in the calligraphic style, as well as interactions with the area of interest. The results indicate that the field cognitive style is characterized by different aesthetic experiences in calligraphy appreciation and that there are aesthetic preferences in calligraphy style.
Manman Zhang; Simon P Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang
In: Acta Psychologica Sinica, 52 (8), pp. 1–11, 2020.
Parafoveal pre-processing contributes to highly efficient reading for skilled readers. Research has demonstrated that high-skilled or fast readers extract more parafoveal information from a wider parafoveal region more efficiently compared to less-skilled or slow readers. It is argued that individual differences in parafoveal preview are due to high-skilled or fast readers focusing less of their attention on foveal word processing than less-skilled or slow readers. In other words, foveal processing difficulty might modulate an individual's amount of parafoveal preview (i.e., Foveal Load Hypothesis). However, few studies have provided evidence in support of this claim. Therefore, the present study aimed to explore whether and how foveal lexical processing load modulates parafoveal preview of readers with different reading speeds (a commonly used measurement of reading skill or reading proficiency). Using a three-minute reading comprehension task, two groups of 28 fast and slow readers were selected from 300 participants (234 of whom were valid) according to their reading speed. Participants were then asked to read sentences while their eye movements were recorded using an EyeLink 1000 eye tracker. Each experimental sentence contained a pre-target word that varied in lexical frequency to manipulate foveal processing load (low load: high frequency; high load: low frequency), and a target word manipulated for preview (identical or pseudocharacter) within the boundary paradigm. Global analyses showed that, although fast readers had similar accuracy of reading comprehension to slow readers, they had shorter reading times, longer forward saccades, made fewer fixations and regressions, and had higher reading speeds compared to slow readers, indicating that our selection of fast and slow readers was highly effective.
The pre-target word analyses showed that there was a main effect of word frequency on first-pass reading times, indicating an effective manipulation of foveal load. Additionally, there were significant interactions of Reading Group × Word Frequency, and Reading Group × Word Frequency × Parafoveal Preview for first fixation and single fixation durations, showing that the frequency effects were reliable for fast readers but not for slow readers with pseudocharacter previews, while the frequency effects were similar for the two groups with identical previews. However, the target word analyses did not show any three-way or two-way interactions for the first-pass reading times or for skipping probability. Specifically, the first-pass reading times were shorter at the target word with identical previews relative to pseudocharacter previews (i.e., preview benefit effects); importantly, similar-sized effects occurred for both fast readers and slow readers. The findings in the present study suggest that lexical information from the currently fixated word can be extracted and used quickly by fast readers, while such information is used later by slow readers. This, however, does not result in more (or less) preview benefit for fast readers relative to slow readers. In conclusion, foveal lexical processing does not modulate preview benefit for fast and slow readers, and the present results provide no support for the Foveal Load Hypothesis. Our findings of foveal load effects on parafoveal preview for fast and slow readers cannot be readily explained by current computational models (e.g., the E-Z Reader model and the SWIFT model).
Vladislav I Zubov; Tatiana E Petrova
In: Procedia Computer Science, 176 , pp. 2117–2124, 2020.
This article presents the results of an eye-tracking experiment on Russian-language material exploring the reading process in secondary school children with general speech underdevelopment. The objective of the study is to reveal which type of text makes reading and comprehension easier: a lexically adapted text or a grammatically adapted text. The data from Russian-speaking participants from the compulsory school (experimental group) and 28 secondary school children with normal speech development (control group) indicate that both types of adaptation proved efficient for recalling information from the text. However, we found that in teenagers with a history of language disorders, lower-level perceptual processes (eye movement parameters) are partially compensated, while higher-level comprehension processes remain affected.
Mengyan Zhu; Xiangling Zhuang; Guojie Ma
In: Reading and Writing, pp. 1–18, 2020.
In Chinese reading, the possibility and mechanism of semantic parafoveal processing has been debated for a long time. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading and the implications on modeling eye movement control are discussed.
Bin Zhao; Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Minbo Chen; Yingjian Fu; Longbiao Wang
In: Acoustical Science and Technology, 41 (1), pp. 349–350, 2020.
To fully understand the brain mechanisms associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics across the whole range of speech processing. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions while lacking information about time courses. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG), despite their high temporal resolution, are inferior in source localization and are easily buried in electromagnetic artifacts from muscular actions during articulation, which interferes with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavioral data (eye movements and speech) were used for segmenting cognitive stages. EEG data went through independent component analysis (ICA), component clustering, and time-varying (adaptive) multivariate autoregressive modeling for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and a literature review followed to interpret the brain-dynamics results for a better understanding of speech functions.
Peng Zhou; Weiyi Ma; Likan Zhan
In: First Language, 40 (1), pp. 41–63, 2020.
The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD.
Wei Zheng; Yizhen Wang; Xiaolu Wang
In: Frontiers in Psychology, 11 , pp. 1–12, 2020.
The present study adopted the printed-word visual world paradigm to investigate the salience effect on Chinese pun comprehension. In such an experiment, participants listen to a spoken sentence while looking at a visual display of four printed words (including a semantic competitor, a phonological competitor, and two unrelated distractors). Previous studies based on alphabetic languages have found robust phonological effects (participants fixated more on phonological competitors than distractors during the unfolding of the spoken target words), while controversy remains regarding the existence of a similar semantic effect. A recent Chinese study reported reliable semantic effects in two experiments using this paradigm, suggesting that Chinese participants could actively map the semantic input from the auditory modality onto the semantic information retrieved from printed words. In light of their study, we designed an experiment with two conditions: a replication condition to test the validity of using the printed-word visual world paradigm in Chinese semantic research, and a pun condition to assess the role played by salience during pun comprehension. Indeed, global analyses revealed robust semantic effects in both experimental conditions, where participants were found to be more attracted to the semantic competitors than to the distractors upon the emergence of target words. More importantly, the local analyses from the pun condition showed that the participants were more attracted to the semantic competitors related to the salient meaning of the ambiguous word in a pun than to those related to the less salient meanings within 200 ms after target word offset. This finding suggests that the salient meaning of the ambiguous word in a pun is activated and accessed faster than its less salient counterpart. The initial advantage observed in the present study is consistent with the prediction of the graded salience hypothesis rather than the direct access model.
K Hawthorne; S J Loveall
In: Journal of Intellectual Disability Research, pp. 1–8, 2020.
Background: Pronouns are referentially ambiguous (e.g. she could refer to any female), yet they are common in everyday conversations. Individuals with typical development (TD) employ several strategies to avoid pronoun interpretation errors, including the subject bias – an assumption that a pronoun typically refers to the subject (or, with the closely related order-of-mention bias, the first-mentioned character) of the previous sentence. However, it is unknown if adults with intellectual disability (ID) share this strategy or the extent to which the subject bias is associated with non-verbal abilities or receptive vocabulary. Methods: We tested 22 adults with mixed-aetiology ID on their interpretation of ambiguous pronouns using the visual world eye-tracking paradigm and by asking a follow-up pronoun interpretation question. A group of TD adults was also tested to establish the strength of the subject bias with our materials and task. Results: Adults with ID did demonstrate the subject bias, but it was significantly less robust than that seen in TD. For participants with ID, the subject bias was influenced by non-verbal IQ and receptive vocabulary at different stages of processing. Conclusions: Given the frequency of pronouns in conversation, strengthening the subject bias may help alleviate discourse and reading comprehension challenges for individuals with ID, particularly those with lower non-verbal and/or vocabulary skills.
Kristi Hendrickson; Jessica Spinelli; Elizabeth Walker
In: Cognition, 198 , pp. 1–15, 2020.
In two eye-tracking experiments using the Visual World Paradigm, we examined how listeners recognize words when faced with speech at lower intensities (40, 50, and 65 dBA). After hearing the target word, participants (n = 32) clicked the corresponding picture from a display of four images – a target (e.g., money), a cohort competitor (e.g., mother), a rhyme competitor (e.g., honey) and an unrelated item (e.g., whistle) – while their eye-movements were tracked. For slightly soft speech (50 dBA), listeners demonstrated an increase in cohort activation, whereas for rhyme competitors, activation started later and was sustained longer in processing. For very soft speech (40 dBA), listeners waited until later in processing to activate potential words, as illustrated by a decrease in activation for cohorts, and an increase in activation for rhymes. Further, the extent to which words were considered depended on word length (mono- vs. bi-syllabic words), and speech-extrinsic factors such as the surrounding listening environment. These results advance current theories of spoken word recognition by considering a range of speech levels more typical of everyday listening environments. From an applied perspective, these results motivate models of how individuals who are hard of hearing approach the task of recognizing spoken words.
Annina K Hessel; Kate Nation; Victoria A Murphy
In: Scientific Studies of Reading, pp. 1–21, 2020.
This experiment investigated comprehension monitoring in children learning English as an additional language (EAL) compared to monolinguals. Sixty-three 9–10-year-old children read texts containing an internal inconsistency (e.g. a barking kitten vs. barking puppy) while their eye movements were monitored. Standardized tests measured word reading fluency and vocabulary size and the children completed a questionnaire tapping rereading behavior. There was no overall difference between EAL and monolingual children. Regardless of EAL status, children with larger vocabularies were more efficient in their re-analysis of inconsistent information, as revealed by regressive eye movements. As efficient re-analysis of inconsistent information is essential for comprehension and is ubiquitous in proficient readers, the presence of this pattern in the children is indicative of successful online monitoring. However, rereading of inconsistent vs consistent words in the eye movement record was not related to children's self-reported rereading, not providing any support for deliberate rereading. Our findings indicate that successful online monitoring relies on strong word knowledge leading to efficient processing of texts, both for bilingual and monolingual children, and beyond deliberate rereading.
Annina K Hessel; Sascha Schroeder
In: Discourse Processes, 57 (10), pp. 940–964, 2020.
This experiment investigated interactions between lower- and higher-level processing when reading in a second language (L2). We conducted an eye-tracking experiment with a within-subject manipulation of inconsistency (to tap higher-level coherence-building) crossed with a within-subject manipulation of word-processing difficulty (to alter the ease of lower-level processing), both manipulated at the text level. Sixty-three L2 learners read 48 short expository texts containing inconsistencies created through mismatches between pretargets such as soya and targets such as corn, or consistent controls. Word-processing difficulty was manipulated by inserting either shorter, higher-frequency words such as often or longer, lower-frequency words such as increasingly. We found evidence of interactions between lower-level word-processing difficulty and higher-level coherence building, as revealed by a reduced inconsistency effect in go-past durations and rereading in the difficult condition. This effect did not, however, extend to targeted regressions into inconsistent information. Our findings provide the first experimental evidence for online interactions between lower-level word processing and higher-level coherence building.
Florian Hintz; Antje S Meyer; Falk Huettig
In: Quarterly Journal of Experimental Psychology, 73 (3), pp. 458–467, 2020.
Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants' eye movements as they listened to sentences in which an object was predictable based on the verb's selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: the target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 s before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 s after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.
Jens Hjortkjær; Jonatan Märcher-Rørsted; Søren A Fuglsang; Torsten Dau
In: European Journal of Neuroscience, 51 (5), pp. 1279–1289, 2020.
Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n-back level. The observed alpha–theta power changes are consistent with visual n-back paradigms suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity (<13 Hz). We found that increases in both types of WM load (background noise and n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment.
Chen-En Ho; Tze-Wei Chen; Jie-Li Tsai
In: Translation, Cognition & Behavior, 3 (1), pp. 1–24, 2020.
This study investigated cognitive aspects of sight translation by analysing the reading behaviour in the process and the output. In our empirical study, two groups of participants—interpreting trainees and untrained bilinguals—carried out three tasks: (a) silent reading, (b) reading aloud, and (c) sight translation. The results show that the two groups were almost identical in the first two tasks, further substantiating the similarity of their language command, but were drastically different in how they tackled sight translation. Interpreting trainees provided much more accurate, fluent, and adequate renditions with much less time and fewer fixations. However, their efficiency at information retrieval was statistically similar to that of the untrained bilinguals. Thus, interpreting trainees were more efficient by being more “economical” during reading, rather than by reading ahead faster, as some would intuitively expect. Chunking skills seem to have also been at play behind their remarkable performance.
Liv J Hoversten; Matthew J Traxler
In: Cognition, 195 , pp. 1–17, 2020.
Prominent models of bilingual visual word recognition posit a bottom-up nonselective view of lexical processing with parallel access to lexical candidates of both languages. However, these accounts do not accommodate recent findings of top-down effects on the relative global activation level of each language during bilingual reading. We conducted two eye-tracking experiments to systematically assess the degree of accessibility of each language in different global language contexts. When critical words were presented overtly in Experiment 1, code switches disrupted reading early during lexical processing, but not as much as pseudowords did. Participants zoomed out of the target language with increasing exposure to language switches. In Experiment 2, a monolingual language context was created by presenting critical words covertly as parafoveal previews. Here, code-switched words were treated like pseudowords, and participants remained zoomed in to the target language throughout the experiment. Switch direction analyses confirmed and extended these interpretations to provide further support for the role of global language control on lexical access, above and beyond effects due to proficiency differences across languages. Together, these data provide strong evidence for dynamic top-down adjustment of the degree of language selectivity during bilingual reading.
Nina S Hsu; Stefanie E Kuchinsky; Jared M Novick
In: Language, Cognition and Neuroscience, pp. 1–29, 2020.
Incremental language processing means that listeners confront temporary ambiguity about how to structure the input, which can generate misinterpretations. In four “visual-world” experiments, we tested whether engaging cognitive control – which detects and resolves conflict – assists revision during comprehension. We recorded listeners' eye-movements and actions while following instructions that were ripe for misanalysis. In Experiments 1 and 3, sentences followed trials from a nonverbal conflict task that manipulated cognitive-control engagement, to test its impact on the ability to revise. To isolate conflict-driven effects of cognitive-control on comprehension, we manipulated attention in a non-conflict task in Experiments 2 and 4. We observed fewer comprehension errors, and earlier revision, when cognitive control (more than attention) was elicited on an immediately preceding trial. These results extend previous correlations between cognitive control and language processing by revealing the influence of domain-general cognitive-control engagement on the temporal unfolding of error-revision processes during language comprehension.
Linjieqiong Huang; Xingshan Li
In: Quarterly Journal of Experimental Psychology, 73 (9), pp. 1382–1395, 2020.
The current study investigated how the prior context influences word segmentation of overlapping ambiguous strings when reading Chinese. Chinese readers' eye movements were recorded as they read sentences containing a three-character overlapping ambiguous string (ABC), where both AB and BC were two-character words. In the informative condition, prior contexts provided syntactic information that supported either the first word segmentation (AB-C) or the second word segmentation (A-BC). The neutral condition did not provide syntactic constraint for word-segmentation. The post-target contexts were syntactically consistent with either the first word (AB-C) or the second word (A-BC) segmentation. The results showed that there were higher skipping rates and shorter first-fixation durations on the overlapping ambiguous string region in the informative AB-C condition than those in the informative A-BC condition, whereas no difference between the AB-C and A-BC segmentation types was found in the neutral condition. Readers still made regressions into the overlapping ambiguous string region in the informative condition. These results imply that readers use sentence context information immediately to segment the overlapping ambiguous words, but they do not use the context information fully. The first word (AB) has processing advantages over the second word (BC), suggesting a left-side word advantage.
Ferdy Hubers; Theresa Redl; Hugo de Vos; Lukas Reinarz; Helen de Hoop
In: Frontiers in Psychology, 11 , pp. 1–12, 2020.
Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan “lit. better as John”). In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether this grammatical norm violation is processed as grammatical, as ungrammatical, or whether it falls in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that the grammatical norm violation is not processed completely on a par with ungrammatical sentences, the results of all three experiments clearly show that the grammatical norm violation cannot be considered grammatical, either.
Falk Huettig; Ernesto Guerra; Andrea Helo
In: Journal of Cognition, 3 (1), pp. 1–19, 2020.
A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., '…spinach…') while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a 'blank screen' after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.
Jukka Hyönä; Alexander Pollatsek; Minna Koski; Henri Olkoniemi
In: Journal of Eye Movement Research, 13 (4), pp. 1–17, 2020.
An eye-tracking experiment examined the recognition of novel and lexicalized compound words during sentence reading. The frequency of the head noun in modifier-head compound words was manipulated to tap into the degree of compositional processing. This was done separately for long (12–16 letters) and short (7–9 letters) compound words. Based on the dual-route race model (Pollatsek et al., 2000) and the visual acuity principle (Bertram & Hyönä, 2003), long lexicalized and novel compound words were predicted to be processed via the decomposition route and short lexicalized compound words via the holistic route. Gaze duration and selective regression-path duration demonstrated a constituent frequency effect of similar size for long lexicalized and novel compound words. For short compound words the constituent frequency effect was negligible for lexicalized words but robust for novel words. The results are consistent with the visual acuity principle that assumes long novel compound words to be recognized via the decomposition route and short lexicalized compound words via the holistic route.
Joanne Ingram; Christopher J Hand
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (11), pp. 2179–2192, 2020.
The influence of domain knowledge on reading behavior has received limited investigation compared to the influence of, for example, context and/or word frequency. The current study tested participants with and without domain knowledge of the Harry Potter (HP) universe. Fans and non-fans read sentences containing HP, high-frequency (HF), or low-frequency (LF) target-words. Targets were presented in contexts that were supportive or unsupportive within a 2 (group: fans, non-fans) × 3 (context: HP, HF, LF) × 3 (word type: HP, HF, LF) mixed design. Thirty-two fans and 22 non-fans read 72 two-sentence experimental items while eye-movement behavior was recorded: Initial sentences established context; second sentences contained target-words. Fans processed HP words faster than non-fans. No group difference was observed on HF or LF processing durations, suggesting equivalent reading capabilities. In HP contexts, HP and LF targets were processed equivalently. Processing of HF and LF words was facilitated by their supportive context as expected. Non-fans made more regressions into the target region in HP contexts and regressed more into HP targets than other targets; fans regressed into target word regions equivalently across all context and word types. Results suggest that domain knowledge influences early but not immediate lexical access, while the processing effect of novelty was seen in regressive eye movements. These results are more supportive of modular accounts of linguistic processing and serial models of eye movement control. Words without grounding in reality, or true embodiment, were integrated into fans' mental lexicons.
Lena A Jäger; Daniela Mertzen; Julie A Van Dyke; Shravan Vasishth
In: Journal of Memory and Language, 111, 104063, 2020.
Cue-based retrieval theories in sentence processing predict two classes of interference effect: (i) Inhibitory interference is predicted when multiple items match a retrieval cue: cue-overloading leads to an overall slowdown in reading time; and (ii) Facilitatory interference arises when a retrieval target as well as a distractor only partially match the retrieval cues; this partial matching leads to an overall speedup in retrieval time. Inhibitory interference effects are widely observed, but facilitatory interference apparently has an exception: reflexives have been claimed to show no facilitatory interference effects. Because the claim is based on underpowered studies, we conducted a large-sample experiment that investigated both facilitatory and inhibitory interference. In contrast to previous studies, we find facilitatory interference effects in reflexives. We also present a quantitative evaluation of the cue-based retrieval model of Engelmann, Jäger, and Vasishth (2019).
Jill Jegerski; Irina A Sekerina
In: Bilingualism, 23 (2), pp. 274–282, 2020.
Heritage Spanish speakers and adult immigrant bilinguals listened to wh-questions with the differential object marker a (quién/a quién 'who/who-ACC') while their eye movements across four referent pictures were tracked. The heritage speakers were less accurate than the adult immigrants in their verbal responses to the questions, leaving objects unmarked for case at a rate of 18%, but eye movement data suggested that the two groups were similar in their comprehension, with both starting to look at the target picture at the same point in the question and identifying the target sooner with a quién 'who-ACC' than with quién 'who' questions.
Shang Jiang; Xin Jiang; Anna Siyanova-Chanturia
In: Applied Psycholinguistics, 41 (4), pp. 901–931, 2020.
The processing advantage for multiword expressions over novel language has long been attested in the literature. However, the evidence pertains almost exclusively to multiword expression processing in adults. Whether or not other populations are sensitive to phrase frequency effects is largely unknown. Here, we sought to address this gap by recording the eye movements of third and fourth graders, as well as adults (first-language Mandarin) as they read phrases varying in frequency embedded in sentence context. We were interested in how phrase frequency, operationalized as phrase type (collocation vs. control) or (continuous) phrase frequency, and age might influence participants' reading. Adults read collocations and higher frequency phrases consistently faster than control and lower frequency phrases, respectively. Critically, fourth, but not third, graders read collocations and higher frequency phrases faster than control and lower frequency sequences, respectively, although this effect was largely confined to a late measure. Our results reaffirm phrase frequency effects in adults and point to emerging phrase frequency effects in primary school children. The use of eye tracking has further allowed us to tap into early versus late stages of phrasal processing, to explore different areas of interest, and to probe possible differences between phrase frequency conceptualized as a dichotomy versus a continuum.
Yu Cin Jian
In: Reading and Writing, pp. 1–26, 2020.
Reading strategy instruction has been an important area in educational psychology for decades; however, research has primarily focused on learning outcomes rather than learning processes; on reading pure texts rather than illustrated texts; and on immediate effects rather than retention effects. This study used an eye-tracker to investigate the immediate and delayed effects of text–diagram reading instruction on reading comprehension and learning processes in illustrated text reading. Fourth-grade students with high (N = 66) and low reading ability (N = 66) were randomly assigned to one of three groups: a text–diagram group who received text–diagram instruction which emphasized diagram decoding and integration of relevant textual and pictorial information, a placebo group who received instruction which emphasized comprehension monitoring, and a control group which received no reading instruction. All participants read four illustrated science texts for a baseline check, instructional example, immediate testing, and delayed testing. The results showed that the effect of text–diagram instruction was more evident in the immediate test than the delayed test. The eye-movement pattern showed that the students who received text–diagram reading instruction spent significantly more reading time on illustrations, made more integrative transitions between text and illustrations, and spent a higher proportion of total reading time on illustrations in immediate and delayed reading situations than the other groups. Overall, this study developed an effective text–diagram instruction method to promote reading comprehension, identified the reading processes underlying the effect of text–diagram strategy instruction, and traced how the effects of the reading instruction intervention changed over time.
Elizabeth Carolina Jiménez; August Romeo; Laura Pérez Zapata; Maria Solé Puig; Patricia Bustos-Valenzuela; José Cañete; Paloma Varela Casal; Hans Supèr
In: Vision Research, 169 , pp. 6–11, 2020.
Vergence eye movements are movements of both eyes in opposite directions. Vergence is known to play a role in binocular vision. However, recent studies also link vergence eye movements to attention and attention disorders. As attention may be involved in dyslexia, it is reasonable to expect that the presence of reading difficulties is associated with specific patterns in vergence responses. Data from school children performing a word-reading task were analysed. In the task, children had to distinguish words from non-words (scrambled words or rows of X's) while their eye positions were recorded. Our findings show that after stimulus presentation the eyes briefly converge. These vergence responses depend on the stimulus type and age of the child, and are different for children with reading difficulties. Our findings support the idea of a role of attention in word reading and offer an explanation of altered attention in dyslexia.
Gary Jones; Francesco Cabiddu; Daniela S Avila-Varela
In: Journal of Experimental Child Psychology, 199 , pp. 1–19, 2020.
We know that 8-month-old infants track the statistical properties of a series of syllables and that 2- and 3-year-old children process familiar phrases more efficiently than unfamiliar phrases, but less is known about the intermediary level of two-word sequences. In Study 1, 2-year-olds (N = 45, mean age = 651 days) heard two-word sequences consisting of a prime word followed by a noun, with two pictures appearing on the screen (depicting the noun and a distractor). Eye tracking showed that children looked more quickly at the noun picture for two-word sequences occurring an average of 19 times per million and 206 times per million in child-directed speech than for novel sequences. In Study 2, corpus analyses showed that 2-year-olds' noun learning increased in line with the frequency of the two-word sequence that preceded it in caregiver speech utterances. This effect holds even after controlling nouns for frequency in caregiver speech, phonemic length, neighborhood density, phonotactic probability, and concreteness and after removing nouns produced in isolation by caregivers and nouns produced by children before being produced by caregivers. These studies show that young children's language processing is facilitated by known two-word sequences, allowing children to focus on more novel aspects of the utterance. Such efficiencies are far-reaching because nearly two thirds of child-directed utterances contain two-word sequences with frequencies of 19 or more per million.
Lei Cui; Jue Wang; Yingliang Zhang; Fengjiao Cong; Wenxin Zhang; Jukka Hyönä
In: Quarterly Journal of Experimental Psychology, pp. 1–24, 2020.
In two eye-tracking studies, reading of two-character Chinese compound words was examined. First and second character frequency were orthogonally manipulated to examine the extent to which Chinese compound words are processed via the component characters. In Experiment 1, first and second character frequency were manipulated for frequent compound words, whereas in Experiment 2 this was done for infrequent compound words. Fixation time and skipping probability for the first and second character were not affected by character frequency in either experiment, nor in the pooled analysis. Yet, in Experiment 2 fixations on the second character were longer when a high-frequency character was presented as the first character compared with when a low-frequency character was presented as the first character. This reversed character frequency effect reflects a morphological family size effect and is explained by the constraint hypothesis, according to which fixation time on the second component of two-component compound words is shorter when its identity is constrained by the first component. It is concluded that frequent Chinese compound words are processed holistically, whereas with infrequent compound words there is some room for the characters to play a role in the identification process.
Michael G Cutter; Andrea E Martin; Patrick Sturt
In: Quarterly Journal of Experimental Psychology, 73 (9), pp. 1423–1430, 2020.
We present an eye-tracking study testing a hypothesis emerging from several theories of prediction during language processing, whereby predictable words should be skipped more than unpredictable words even in syntactically illegal positions. Participants read sentences in which a target word became predictable by a certain point (e.g., "bone" is 92% predictable given "The dog buried his…"), with the next word actually being an intensifier (e.g., "really"), which a noun cannot follow. The target noun remained predictable to appear later in the sentence. We used the boundary paradigm to present the predictable noun or an alternative unpredictable noun (e.g., "food") directly after the intensifier, until participants moved beyond the intensifier, at which point the noun changed to a syntactically legal word. Participants also read sentences in which predictable or unpredictable nouns appeared in syntactically legal positions. A Bayesian linear mixed model suggested a 5.7% predictability effect on skipping of nouns in syntactically legal positions, and a 3.1% predictability effect on skipping of nouns in illegal positions. We discuss our findings in relation to theories of lexical prediction during reading.
Michael G Cutter; Andrea E Martin; Patrick Sturt
In: Cognition, 204 , pp. 1–13, 2020.
In two eye-tracking studies we investigated whether readers can detect a violation of the phonological-grammatical convention for the indefinite article an to be followed by a word beginning with a vowel when these two words appear in the parafovea. Across two experiments participants read sentences in which the word an was followed by a parafoveal preview that was either correct (e.g., Icelandic), incorrect and represented a phonological violation (e.g., Mongolian), or incorrect without representing a phonological violation (e.g., Ethiopian), with this parafoveal preview changing to the target word as participants made a saccade into the space preceding an. Our data suggest that participants detected the phonological violation while the target word was still two words to the right of fixation, with participants making more regressions from the previewed word and having longer go-past times on this word when they received a violation preview as opposed to a non-violation preview. We argue that participants were attempting to perform aspects of sentence integration on the basis of low-level orthographic information from the previewed word.
Rutvik H Desai; Wonil Choi; John M Henderson
Word frequency effects in naturalistic reading
In: Language, Cognition and Neuroscience, 35 (5), pp. 1–12, 2020.
Word frequency is a central psycholinguistic variable that accounts for substantial variance in language processing. A number of neuroimaging studies have examined frequency at a single word level, typically demonstrating a strong negative, and sometimes positive correlation between frequency and hemodynamic response. Here, 40 subjects read passages of text in an MRI scanner while their eye movements were recorded. We used fixation-related analysis to identify neural activity tied to the frequency of each fixated word. We found that negative correlations with frequency were reduced, while strong positive correlations were found in the temporal and parietal areas associated with semantics. We propose that the processing cost of low frequency words is reduced due to contextual cues. Meanings of high frequency words are more readily accessed and integrated with context resulting in enhanced processing in the semantic system. The results demonstrate similarities and differences between single word and naturalistic text processing.
Guadalupe de los Santos; Julie E Boland; Richard L Lewis
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (5), pp. 907–925, 2020.
Although bilingual individuals know 2 languages, research suggests that the languages are not separate in the mind. This is especially evident when a bilingual individual switches languages midsentence, indicating that mental representations are, to some degree, overlapping or integrated across the 2 languages. In 2 eye-tracking experiments, we investigated the nature of this integration during reading to examine whether incremental grammatical predictions generated by Spanish-English bilinguals (Experiment 1
Perceptual span in individuals with aphasia
In: Aphasiology, 34 (2), pp. 235–253, 2020.
Background: Perceptual span refers to the field of effective vision during reading comprehension. It is determined by many factors, including reading proficiency. No studies have investigated the perceptual span in people with reading comprehension impairments due to aphasia. Aims: The present study examined whether perceptual span is smaller in individuals with aphasia than controls. Methods and Procedures: The task was a gaze-contingent moving windows paradigm during silent reading using an eye tracker. Data from 11 individuals with aphasia and 15 neurotypical controls were analyzed. Outcomes and Results: Perceptual span in individuals with aphasia was the fixated word plus one word to the right of fixation, whereas perceptual span in controls was the fixated word plus two words to the right of fixation. Conclusion: Individuals with aphasia have a smaller perceptual span than controls, which likely reflects increased effort during reading comprehension.
Ambre Denis-Noël; Chotiga Pattamadilok; Éric Castet; Pascale Colé
In: Annals of Dyslexia, 70 (3), pp. 313–338, 2020.
In skilled adult readers, reading words is generally assumed to rapidly and automatically activate the phonological code. In adults with dyslexia, despite the main consensus on their phonological processing deficits, little is known about the activation time course of this code. The present study investigated this issue in both populations. Participants' accuracy and eye movements were recorded while they performed a visual lexical decision task in which phonological consistency of written words was manipulated. Readers with dyslexia were affected by phonological consistency during second fixation duration of visual word recognition suggesting a late activation of the phonological code. Regarding skilled readers, no influence of phonological consistency was found when the participants were considered a homogeneous population. However, a different pattern emerged when they were divided into two subgroups according to their phonological and semantic abilities: Those who showed better decoding than semantic skills were affected by phonological consistency at the earliest stage of visual word recognition while those who showed better semantic than decoding skills were not affected by this factor at any processing stage. Overall, the findings suggest that the presence of phonological deficits in readers with dyslexia is associated with a delayed activation of phonological representations during reading. In skilled readers, the contribution of phonology varies with their reading profile, i.e., being phonologically or semantically oriented.
Dagmar Divjak; Petar Milin; Srdan Medimorec
The theoretical notion of 'construal' captures the idea that the way in which we describe a scene reflects our conceptualization of it. Relying on the concept of ception - which conjoins conception and perception - we operationalized construal and employed a Visual World Paradigm to establish which aspects of linguistic scene description modulate visual scene perception, thereby affecting event conception. By analysing viewing behaviour after alternating ways of describing location (prepositions), agentivity (active/passive voice) and transfer (NP/PP datives), we found that the linguistic construal of a scene affects its spontaneous visual perception in two ways: either by determining the order in which the components of a scene are accessed or by modulating the distribution of attention over the components, making them more or less salient than they naturally are. We also found evidence for the existence of a cline in the construal effect with stronger expressive differences, such as the prepositional manipulation, inducing more prominent changes in visual perception than the dative manipulation. We discuss the claims language can lay to affecting visual information uptake and hence conceptualization of a static scene in the light of these results.
Linda Drijvers; Ole Jensen; Eelke Spaak
In: Human Brain Mapping, pp. 1–15, 2020.
During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
Kaitlyn Easson; Noor Z Al Dahhan; Donald C Brien; John R Kirby; Douglas P Munoz
In: Frontiers in Human Neuroscience, 14 , pp. 1–14, 2020.
Studying the typical development of reading is key to understanding the precise deficits that underlie reading disabilities. An important correlate of efficient reading is the speed of naming arrays of simple stimuli such as letters and pictures. In this cross-sectional study, we examined developmental changes in visual processing that occurs during letter and object naming from childhood to early adulthood in terms of behavioral task efficiency, associated articulation and eye movement parameters, and the coordination between them, as measured by eye-voice span in both the spatial and temporal domains. We used naming speed (NS) tasks, in which participants were required to name sets of letters or simple objects as quickly and as accurately as possible. Single stimulus manipulations were made to these tasks to make the stimuli either more visually and/or phonologically similar to one another in order to examine how these manipulations affected task performance and the coordination between speech and eye movements. Across development there was an increased efficiency in speech and eye movement performance and their coordination in both the spatial and temporal domains. Furthermore, manipulations to the phonological and visual similarity of specific letter and object stimuli revealed that orthographic processing played a greater role than phonological processing in performance, with the contribution of phonological processing diminishing across development. This comprehensive typical developmental trajectory provides a benchmark for clinical populations to elucidate the nature of the cognitive dysfunction underlying reading difficulties.
Grant Eckstein; Sarah Miner; Katie Watkins; Judy James; Mornie Sims; Allison Wallace Baker; Larissa Grahl
In: The Reading Matrix: An International Online Journal, 20 (1), pp. 1–19, 2020.
Citations provide truncated yet socially complex information about sources in academic texts which students are obliged to read, comprehend, and then ultimately produce as part of an academic discourse community. While researchers have observed a developmental process whereby students produce citations during source-based writing, little work has investigated the reading stage when students visually encounter citations. In this study, we explored academic reading behaviors by examining eye movements of 27 graduate students and 18 professors as they read 6 authentic research texts for various purposes (summary, analysis, synthesis). Results of factorial ANOVAs showed no differences between students and professors but did reveal that both groups spent far less time looking at citations than surrounding text and that reading purposes affected citation reading behavior. These results indicate that students and professors read academic citations in similar ways. Further, the findings suggest that synthesizing sources, not just summarizing or analyzing them, results in greater attention to citations; thus, students developing their academic writing and citation skills may benefit from synthesizing multiple sources.
Ciara Egan; Filipe Cristino; Joshua S Payne; Guillaume Thierry; Manon W Jones
In: Cortex, 124 , pp. 111–118, 2020.
In linguistics, the relationship between phonological word form and meaning is mostly considered arbitrary. Why, then, do literary authors traditionally craft sound relationships between words? We set out to characterise how dynamic interactions between word form and meaning may account for this literary practice. Here, we show that alliteration influences both meaning integration and attentional engagement during reading. We presented participants with adjective-noun phrases, having manipulated semantic relatedness (congruent, incongruent) and form repetition (alliterating, non-alliterating) orthogonally, as in “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. Using simultaneous recording of event-related brain potentials and pupil dilation (PD), we establish that, whilst semantic incongruency increased N400 amplitude as expected, it reduced PD, an index of attentional engagement. Second, alliteration affected semantic evaluation of word pairs, since it reduced N400 amplitude even in the case of unrelated items (e.g., “dangerous-diamond”). Third, alliteration specifically boosted attentional engagement for related words (e.g., “dazzling-diamond”), as shown by a sustained negative correlation between N400 amplitudes and PD change after the window of lexical integration. Thus, alliteration strategically arouses attention during reading and when comprehension is challenged, phonological information helps readers link concepts beyond the level of literal semantics. Overall, our findings provide a tentative mechanism for the empowering effect of sound repetition in literary constructs.
Julia Egger; Caroline F Rowland; Christina Bergmann
In: Behavior Research Methods, 52 (5), pp. 2188–2201, 2020.
Visual reaction times to target pictures after naming events are an informative measurement in language acquisition research, because gaze shifts measured in looking-while-listening paradigms are an indicator of infants' lexical speed of processing. This measure is very useful, as it can be applied from a young age onwards and has been linked to later language development. However, to obtain valid reaction times, the infant is required to switch the fixation of their eyes from a distractor to a target object. This means that usually at least half the trials have to be discarded—those where the participant is already fixating the target at the onset of the target word—so that no reaction time can be measured. With few trials, reliability suffers, which is especially problematic when studying individual differences. In order to solve this problem, we developed a gaze-triggered looking-while-listening paradigm. The trials do not differ from the original paradigm apart from the fact that the target object is chosen depending on the infant's eye fixation before naming. The object the infant is looking at becomes the distractor and the other object is used as the target, requiring a fixation switch, and thus providing a reaction time. We tested our paradigm with forty-three 18-month-old infants, comparing the results to those from the original paradigm. The gaze-triggered paradigm yielded more valid reaction time trials, as anticipated. The results of a rank correlation between the conditions confirmed that the manipulated paradigm measures the same concept as the original paradigm.
Yulia Esaulova; Martina Penke; Sarah Dolscheid
In: Frontiers in Psychology, 11 , pp. 1–13, 2020.
Speakers' readiness to describe event scenes using active or passive constructions has previously been attributed—among other factors—to the accessibility of referents. While most research has highlighted the accessibility of agents, the present study examines whether patients' accessibility can be modulated by means of visual preview of the patient character (derived accessibility), as well as by manipulating the animacy status of patients (inherent accessibility). Crucially, we also examined whether effects of accessibility were amenable to the visuospatial position of the patient by presenting the patient character either to the left or to the right of the agent. German native speakers were asked to describe drawings depicting event scenes while their gaze and speech were recorded. Our results show that making patients more accessible using derived and inherent accessibility factors led to more produced passives, shorter speech onsets, and a reduction of fixations on patients. Complementing previous research on agent accessibility, our findings demonstrate that the accessibility of patients affected both sentence production and looking behavior. While effects were observed for both inherent and derived accessibility, they appeared to be more pronounced for the latter. Regarding character position, we observed a significant effect of position on participants' gaze patterns and structural choices, suggesting that position itself can be considered an accessibility-related factor. Importantly, the position of a patient also interacted with our manipulation of its accessibility via visual preview. Participants produced more passives after preview than no preview for left-positioned but not for right-positioned patients, demonstrating that effects of patient accessibility (i.e., visual preview) were susceptible to character position. A similar interaction was observed for participants' viewing patterns. These findings provide the first evidence that the position of a referent is a factor that interacts with other accessibility-related factors (i.e., cueing), emphasizing the need to control for position effects when testing referent accessibility.
Michael A Eskenazi; Paige Kemp; Jocelyn R Folk
Word skipping during the lexical acquisition process
In: Quarterly Journal of Experimental Psychology, pp. 1–11, 2020.
During reading, most words are identified in the fovea through a direct fixation; however, readers also identify some words in the parafovea without directly fixating them. This word skipping process is influenced by many lexical and visual factors including word length, launch position, frequency, and predictability. Although these factors are well understood, there is some disagreement about the process that leads to word skipping and the degree to which skipped words are processed. The purpose of this study was to investigate the word skipping process when readers are exposed to novel words in an incidental lexical acquisition paradigm. Participants read 18 three-letter novel words (e.g., pru, cho) in three different informative contexts each while their eye movements were monitored. They then completed a surprise test of their orthographic and semantic acquisition and a spelling skill assessment. Mixed-effects models indicated that participants learned spellings and meanings of words at the same rate regardless of the number of times that they were skipped. However, word skipping rates increased across the three exposures and reading times decreased. Results indicate that readers appear to process skipped words to the same degree as fixated words. However, this may be due to a more cautious skipping process used during lexical acquisition of unfamiliar words compared to processing of already known words.
Núria Esteve-Gibert; Amy J Schafer; Barbara Hemforth; Cristel Portes; Céline Pozniak; Mariapaola D'Imperio
In: Memory and Cognition, 48 (4), pp. 566–580, 2020.
This study examines how individual pragmatic skills, and more specifically, empathy, influences language processing when a temporary lexical ambiguity can be resolved via intonation. We designed a visual-world eye-tracking experiment in which participants could anticipate a referent before disambiguating lexical information became available, by inferring either a contrast meaning or a confirmatory meaning from the intonation contour alone. Our results show that individual empathy skills determine how listeners deal with the meaning alternatives of an ambiguous referent, and the way they use intonational meaning to disambiguate the referent. Listeners with better pragmatic skills (higher empathy) were sensitive to intonation cues when forming sound–meaning associations during the unfolding of an ambiguous referent, and showed higher sensitivity to all the alternative interpretations of that ambiguous referent. Less pragmatically skilled listeners showed weaker processing of intonational meaning because they needed subsequent disambiguating material to select a referent and showed less sensitivity to the set of alternative interpretations. Overall, our results call for taking into account individual pragmatic differences in the study of intonational meaning processing and sentence comprehension in general.
Myrthe Faber; Marloes Mak; Roel M Willems
In: Journal of Eye Movement Research, 13 (3), pp. 1–9, 2020.
Decades of research have established that the content of language (e.g., lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in 'stable' eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze-duration time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤ .11) when missing data points are ignored. However, when including these data points as zero durations (i.e., skipped words), we found significant correlations between stories (r > .51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern.
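The analysis described in this abstract — converting per-participant correlations of gaze-duration time courses into a Euclidean distance matrix and comparing matrices with a permutation-based Mantel test — can be sketched as follows. This is a minimal illustration only: the function names, the specific distance formula, and the permutation count are assumptions, not details taken from the paper.

```python
import numpy as np

def corr_to_dist(time_courses):
    """Participants x words matrix of gaze durations ->
    participants x participants Euclidean-style distance matrix,
    using the common embedding d = sqrt(2 * (1 - r))."""
    r = np.corrcoef(time_courses)
    return np.sqrt(np.clip(2.0 * (1.0 - r), 0.0, None))

def mantel(d1, d2, n_perm=5000, seed=0):
    """One-sided Mantel test: correlate the upper triangles of two
    symmetric distance matrices; significance by jointly permuting
    rows and columns of the first matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)      # off-diagonal upper triangle
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        r = np.corrcoef(d1[np.ix_(p, p)][iu], d2[iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

In use, one distance matrix per narrative would be built with `corr_to_dist`, and `mantel` would then quantify whether participants who read similarly in one narrative also read similarly in another — the between-story correlations reported in the abstract.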
Benjamin J Falandays; Sarah Brown-Schmidt; Joseph C Toscano
In: Journal of Memory and Language, 112 , pp. 1–14, 2020.
During speech processing, listeners must map a fundamentally continuous acoustic signal onto discrete symbols, such as words. A current debate concerns the time-course over which sub-phonemic (i.e., gradient) acoustic information continues to influence symbolic (i.e., linguistic) interpretation, which can provide evidence regarding the level of representation at which gradient information is maintained. In a visual-world paradigm experiment, participants indicated whether a spoken sentence matched a display while eye-gaze was monitored. Participants heard an acoustically ambiguous stimulus (a pronoun referring to either a male or female referent in the display), which was not disambiguated until later in the discourse. The acoustic properties of the pronouns and length of the ambiguous period were varied while responses and eye-movements to the discourse-relevant items were recorded, providing a measure of whether gradient referential uncertainty is maintained over time. Fixation patterns during the ambiguous period and latencies to fixate the target at the end of the trial varied linearly with the acoustics of the earlier pronoun, indicating that gradient information can be maintained over intervening periods of 3–5 syllables. These results provide strong evidence that gradient uncertainty is maintained at the level of referent representations.
Xi Fan; Ronan Reilly
In: Journal of Eye Movement Research, 13 (6), pp. 1–16, 2020.
This paper describes the use of semantic similarity measures based on distributed representations of words, sentences, and paragraphs (so-called "embeddings") to assess the impact of supra-lexical factors on eye-movement data from early readers of Chinese. In addition, we used a corpus-based measure of surprisal to assess the impact of local word predictability. Eye movement data from 56 Chinese students were collected (a) in the students' 4th grade and (b) one year later while they were in 5th grade. Results indicated that surprisal and some text similarity measures have a significant impact on the moment-to-moment processing of words in reading. The paper presents an easy-to-use set of tools for linking the low-level aspects of fixation durations to a hierarchy of sentence-level and paragraph-level features that can be computed automatically. The study is the first attempt, as far as we are aware, to track the developmental trajectory of these influences in developing readers across a range of reading abilities. The similarity-based measures described here can be used (a) to provide a measure of reader sensitivity to sentence and paragraph cohesion and (b) to assess specific texts for their suitability for readers of different reading ability levels.
Mojgan Farahani; Vijay Parsa; Björn Herrmann; Mason Kadem; Ingrid Johnsrude; Philip C Doyle
In: Applied Sciences, 10 , pp. 1–14, 2020.
This study evaluated ratings of vocal strain and perceived listening effort by normal hearing participants while listening to speech samples produced by talkers with adductor spasmodic dysphonia (AdSD). In addition, objective listening effort was measured through concurrent pupillometry to determine whether listening to disordered voices changed arousal as a result of emotional state or cognitive load. Recordings of the second sentence of the "Rainbow Passage" produced by talkers with varying degrees of AdSD served as speech stimuli. Twenty naïve young adult listeners perceptually evaluated these stimuli on the dimensions of vocal strain and listening effort using two separate visual analogue scales. While making the auditory-perceptual judgments, listeners' pupil characteristics were objectively measured in synchrony with the presentation of each voice stimulus. Data analyses revealed moderate-to-high inter- and intra-rater reliability. A significant positive correlation was found between the ratings of vocal strain and listening effort. In addition, listeners displayed greater peak pupil dilation (PPD) when listening to more strained and effortful voice samples. Findings from this study suggest that when combined with an auditory-perceptual task, non-volitional physiologic changes in pupil response may serve as an indicator of listening and cognitive effort or arousal.
Marion Fechino; Arthur M Jacobs; Jana Lüdtke
In: Journal of Eye Movement Research, 13 (3), pp. 1–19, 2020.
Following Jakobson and Levi-Strauss's famous analysis of Baudelaire's poem 'Les Chats' ('The Cats'), in the present study we investigated the reading of French poetry from a Neurocognitive Poetics perspective. Our study is exploratory and a first attempt in French, most previous work having been done in either German or English (e.g., Jacobs, 2015a, 2018a, b; Muller et al., 2017; Xue et al., 2019). We varied the presentation mode of the poem Les Chats (verse vs. prose form) and measured the eye movements of our readers to test the hypothesis of an interaction between presentation mode and reading behavior. We specifically focussed on rhyme scheme effects on standard eye movement parameters. Our results replicate those from previous English poetry studies in that there is a specific pattern in poetry reading with longer gaze durations and more rereading in the verse than in the prose format. Moreover, presentation mode also matters for making salient the rhyme scheme. This first study generates interesting hypotheses for further research applying quantitative narrative analysis to French poetry and developing the Neurocognitive Poetics Model of literary reading (NCPM; Jacobs, 2015a) into a cross-linguistic model of poetry reading.
Danbi Ahn; Matthew J Abbott; Keith Rayner; Victor S Ferreira; Tamar H Gollan
In: Journal of Neurolinguistics, 54 , pp. 1–22, 2020.
Bilinguals are remarkable at language control—switching between languages only when they want. However, language control in production can involve switch costs. That is, switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish–English bilinguals saying “pero” instead of “but”). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences with no language switches. Given that both production and comprehension involve switch costs, some language–control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye–movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed–language paragraphs (production). Bilinguals who made more intrusion errors during the read–aloud task did not show different switch cost patterns in most measures in the silent–reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality–specific processes in production and comprehension, although some points of overlap might indicate the role of domain general control and how it can influence individual differences in bilingual language control.
Noor Z Al Dahhan; John R Kirby; Ying Chen; Donald C Brien; Douglas P Munoz
In: European Journal of Neuroscience, 51 (11), pp. 2277–2298, 2020.
We combined fMRI with eye tracking and speech recording to examine the neural and cognitive mechanisms that underlie reading. To simplify the study of the complex processes involved during reading, we used naming speed (NS) tasks (also known as rapid automatized naming or RAN) as a focus for this study, in which average reading right-handed adults named sets of stimuli (letters or objects) as quickly and accurately as possible. Due to the possibility of spoken output during fMRI studies creating motion artifacts, we employed both an overt session and a covert session. When comparing the two sessions, there were no significant differences in behavioral performance, sensorimotor activation (except for regions involved in the motor aspects of speech production) or activation in regions within the left-hemisphere-dominant neural reading network. This established that differences found between the tasks within the reading network were not attributed to speech production motion artifacts or sensorimotor processes. Both behavioral and neuroimaging measures showed that letter naming was a more automatic and efficient task than object naming. Furthermore, specific manipulations to the NS tasks to make the stimuli more visually and/or phonologically similar differentially activated the reading network in the left hemisphere associated with phonological, orthographic and orthographic-to-phonological processing, but not articulatory/motor processing related to speech production. These findings further our understanding of the underlying neural processes that support reading by examining how activation within the reading network differs with both task performance and task characteristics.
Sally Andrews; Aaron Veldre
In: Scientific Studies of Reading, pp. 1–18, 2020.
This study used wrap-up effects on eye movements to assess the relationship between online reading behavior and comprehension. Participants, assessed on measures of reading, vocabulary, and spelling, read short passages that manipulated whether a syntactic boundary was unmarked by punctuation, weakly marked by a comma, or strongly marked by a period. Comprehension demands were manipulated by presenting questions after either 25% or 100% of passages. Wrap-up effects at punctuation boundaries manifested principally in rereading of earlier text and were more marked in lower proficiency readers. High comprehension load was associated with longer total reading time but had little impact on wrap-up effects. The relationship between eye movements and comprehension accuracy suggested that poor comprehension was associated with a shallower reading strategy under low comprehension demands. The implications of these findings for understanding how the processes involved in self-regulating comprehension are modulated by reading proficiency and comprehension goals are discussed.
Susana Araújo; Falk Huettig; Antje Meyer
In: Scientific Studies of Reading, pp. 1–16, 2020.
This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low- but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
Anja Arnhold; Vincent Porretta; Aoju Chen; Saskia A J M Verstegen; Ivy Mok; Juhani Järvikivi
In: Psychonomic Bulletin & Review, 27 (4), pp. 801–808, 2020.
Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners' processing resemble that of second-language listeners. In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do. In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings. These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation.
Nicolai D Ayasse; Arthur Wingfield
In: Frontiers in Human Neuroscience, 14 , pp. 1–11, 2020.
Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects; whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.
Mireille Babineau; Alex de Carvalho; John Trueswell; Anne Christophe
In: Developmental Science, 24 , pp. 1–12, 2020.
Young children can exploit the syntactic context of a novel word to narrow down its probable meaning. But how do they learn which contexts are linked to which semantic features in the first place? We investigate if 3- to 4-year-old children (n = 60) can learn about a syntactic context from tracking its use with only a few familiar words. After watching a 5-min training video in which a novel function word (i.e., ‘ko') replaced either personal pronouns or articles, children were able to infer semantic properties for novel words co-occurring with the newly learned function word (i.e., objects vs. actions). These findings implicate a mechanism by which a distributional analysis, associated with a small vocabulary of known words, could be sufficient to identify some properties associated with specific syntactic contexts.
James Bartolotti; Scott R Schroeder; Sayuri Hayakawa; Sirada Rochanavibhata; Peiyao Chen; Viorica Marian
In: Quarterly Journal of Experimental Psychology, 73 (8), pp. 1135–1149, 2020.
How does the mind process linguistic and non-linguistic sounds? The current study assessed the different ways that spoken words (e.g., “dog”) and characteristic sounds (e.g., <barking>) provide access to phonological information (e.g., word-form of “dog”) and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation.
Mahsa Barzy; Ruth Filik; David Williams; Heather J Ferguson
In: Autism Research, 13 (4), pp. 563–578, 2020.
Typically developing adults are able to keep track of story characters' emotional states online while reading. Filik et al. showed that initially, participants expected the victim to be more hurt by ironic comments than literal ones, but later considered them less hurtful; ironic comments were regarded as more amusing. We examined these processes in autistic adults, since previous research has demonstrated socio-emotional difficulties among autistic people, which may lead to problems processing irony and its related emotional processes despite an intact ability to integrate language in context. We recorded eye movements from autistic and nonautistic adults while they read narratives in which a character (the victim) was either criticized in an ironic or a literal manner by another character (the protagonist). A target sentence then either described the victim as feeling hurt/amused by the comment, or the protagonist as having intended to hurt/amuse the victim by making the comment. Results from the nonautistic adults broadly replicated the key findings from Filik et al., supporting the two-stage account. Importantly, the autistic adults did not show comparable two-stage processing of ironic language; they did not differentiate between the emotional responses for victims or protagonists following ironic versus literal criticism. These findings suggest that autistic people experience a specific difficulty taking into account other peoples' communicative intentions (i.e., infer their mental state) to appropriately anticipate emotional responses to an ironic comment. We discuss how these difficulties might link to atypical socio-emotional processing in autism, and the ability to maintain successful real-life social interactions.
Marion Beretti; Naomi Havron; Anne Christophe
In: Journal of Experimental Child Psychology, 200 , pp. 1–21, 2020.
A central challenge in language acquisition is the integration of multiple sources of information, potentially in conflict, to acquire new knowledge and adjust current linguistic representations. One way to accomplish this is to assign more weight to more reliable sources of information in context. We tested the hypothesis that children adjust the weight of different sources of information during learning, considering two specific sources of information: their knowledge of the meaning of familiar words (semantics) and their familiarity with syntax. We varied the reliability of these sources of information through an induction phase (reliable syntax or reliable semantics). At test, French 4- and 5-year-old children and adults listened to sentences where information provided by these two cues conflicted and were asked to choose between two videos that illustrate the sentence. One video presented the reasonable choice if the sentence is assumed to be syntactically correct, but familiar words refer to novel things (e.g., une mange–“an eats” describes a novel object). The other video was the reasonable choice if the sentence is assumed to be syntactically incorrect and familiar words' meaning is preserved (e.g., “an eats” describes a girl eating and actually should have been “she eats”). As predicted, the proportion of syntactic choices (e.g., interpreting mange–“eats” as a novel noun) was found to be higher in the reliable syntax condition than in the reliable semantics condition, showing that children and adults can adapt their expectations to the reliability of sources of information.
Raymond Bertram; Victor Kuperman
In: Bilingualism, 23 (3), pp. 579–590, 2020.
Most English compounds are spaced compounds, whereas spelling regulations prescribe Finnish compounds to be written in a concatenated format. However, as in English, Finnish compounds are commonly spaced nowadays (e.g., piha juhla 'garden party'), a phenomenon that we labeled the 'English disease'. In this eye movement study with Finnish-English bilinguals we investigate whether the reading of a concatenated or illegally spaced Finnish compound is affected by the spelling of an English translation equivalent (ETE). We found that spaced Finnish compounds were read slower than their concatenated counterparts, but this effect was attenuated when ETEs were thought to be spaced. Similarly, concatenated Finnish compounds were read faster when their ETEs were also concatenated. These backward transfer effects are in line with studies that show that processing behavior in L1 is affected by a strong concurrent L2, even when the L1 is the native language as well as the dominant community language.
Elisabeth Beyersmann; Signy Wegener; Kate Nation; Ayako Prokupzcuk; Hua-chen Wang; Anne Castles
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–13, 2020.
It is well known that information from spoken language is integrated into reading processes, but the nature of these links and how they are acquired is less well understood. Recent evidence has suggested that predictions about the written form of newly learned spoken words are already generated prior to print exposure. We extend this work to morphologically complex words and ask whether the information that is available in spoken words goes beyond the mappings between phonology and orthography. Adults were taught the oral form of a set of novel morphologically complex words (e.g., “neshing”, “neshed”, “neshes”), with a 2nd set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., nesh), embedded in sentences, and their eye movements were monitored. Half of the stems were allocated a predictable and half an unpredictable spelling. Reading times were shorter for orally trained than untrained stems and for stems with predictable rather than unpredictable spellings. Crucially, there was an interaction between spelling predictability and training. This suggests that orthographic expectations of embedded stems are formed during spoken word learning. Reading aloud and spelling tests complemented the eye movement data, and findings are discussed in the context of theories of reading acquisition.
Katherine S Binder; Kathryn A Tremblay; Alison Joseph
In: Journal of Research in Reading, 43 (4), pp. 395–416, 2020.
Background: The purpose of the current study was to examine how the morphological structure of a real word or novel word affected the incidental vocabulary learning of participants and to examine how these target items are processed as they are read. In addition, we examined the roles of vocabulary depth and breadth in the process of incidental vocabulary learning. Methods: We had participants read short passages that contained real words or novel words that differed on their morphological accessibility as we collected eye movement data. Participants also completed several vocabulary depth and breadth measures. Results: Accessible real words and novel words were learned better than inaccessible and less accessible items, but there was a processing cost associated with accessible real words compared with inaccessible real words. In contrast, participants spent more time on the less accessible novel words compared with accessible novel words, but that extra processing time did not translate into better acquisition scores. Finally, both vocabulary breadth and depth explained variance in incidental vocabulary acquisition, while breadth explained variance in gaze duration and depth explained variance in regressive eye movements. Conclusions: Accessibility of the targets affected both acquisition and reading time, and depth and breadth are both individual differences that explain variance in incidental acquisition and the processing of those words.
Hazel I Blythe; Jonathan H Dickins; Colin R Kennedy; Simon P Liversedge
In: PLoS ONE, 15 (3), pp. e0229934, 2020.
We examined phonological recoding during silent sentence reading in teenagers with a history of dyslexia and their typically developing peers. Two experiments are reported in which participants' eye movements were recorded as they read sentences containing correctly spelled words (e.g., church), pseudohomophones (e.g., cherch), and spelling controls (e.g., charch). In Experiment 1 we examined foveal processing of the target word/nonword stimuli, and in Experiment 2 we examined parafoveal pre-processing. There were four participant groups: older teenagers with a history of dyslexia, older typically developing teenagers who were matched for age, younger typically developing teenagers who were matched for reading level, and younger teenagers with a history of dyslexia. All four participant groups showed a pseudohomophone advantage, both from foveal processing and parafoveal preprocessing, indicating that teenagers with a history of dyslexia engage in phonological recoding for lexical identification during silent sentence reading in a comparable manner to their typically developing peers.
Giulia Borghini; Valerie Hazan
In: The Journal of the Acoustical Society of America, 147 (6), pp. 3783–3794, 2020.
Relative to native listeners, non-native listeners who are immersed in a second language environment experience increased listening effort and reduced ability to successfully perform an additional task while listening. Previous research demonstrated that listeners can exploit a variety of intelligibility-enhancing cues to cope with adverse listening conditions. However, little is known about the implications of those speech perception strategies for listening effort. The current research aims to investigate by means of pupillometry how listening effort is modulated in native and non-native listeners by the availability of semantic context and acoustic enhancements during the comprehension of spoken sentences. For this purpose, semantic plausibility and speaking style were manipulated both separately and in combination during a speech perception task in noise. The signal to noise ratio was individually adjusted for each participant in order to target 50% intelligibility level. Behavioural results indicated that native and non-native listeners were equally able to fruitfully exploit both semantic and acoustic cues to aid their comprehension. Pupil data indicated that listening effort was reduced for both groups of listeners when acoustic enhancements were available, while the presence of a plausible semantic context did not lead to a reduction in listening effort.
In: Developmental Science, 23 , pp. 1–15, 2020.
This project explores how children disambiguate and retain novel object-label mappings in the face of semantic similarity. Burgeoning evidence suggests that semantic structure in the developing lexicon promotes word learning in ostensive contexts, whereas other findings indicate that semantic similarity interferes with and temporarily slows familiar word recognition. This project explores how these distinct processes interact when mapping and retaining labels for novel objects (i.e., low-frequency objects that are unfamiliar to toddlers) via disambiguation from a semantically similar familiar referent in 24-month-olds (N = 65). Toddlers' log-adjusted looking to labeled target objects (relative to distractor objects) was measured in three conditions: Familiar trials (familiar label spoken while viewing semantically related familiar and novel objects), Disambiguation trials (unfamiliar label spoken while viewing semantically similar familiar and unfamiliar object), and Retention trials (unfamiliar label spoken while viewing novel object pairs). Toddlers' individual vocabulary structure was then compared to performance on each condition. Vocabulary structure was measured at two levels: category-level structure (semantic density) for experimental items, and lexicon-level structure (global clustering coefficient). The findings suggest, consistent with prior results, that semantic density interfered with known word recognition, and facilitated unfamiliar word retention. Children did not show a significant novel word preference during disambiguation, and disambiguation behavior was not impacted by semantic structure. These findings connect seemingly disparate mechanisms of semantic interference in processing and semantic leveraging in word learning. Semantic interference momentarily slows word recognition and resolution of referential uncertainty for novel label-object mappings. Nevertheless, this slowing might support retention by enabling comparison between related objects.
Hans Rutger Bosker; David Peeters; Judith Holler
How visual cues to speech rate influence speech perception
In: Quarterly Journal of Experimental Psychology, 73 (10), pp. 1523–1536, 2020.
Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two “Go Fish”–like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants' target categorisation responses. These findings contribute to a better understanding of how what we see influences what we hear.
Evelyn Bosma; Naomi Nota
In: Journal of Experimental Child Psychology, 189 , pp. 1–18, 2020.
Bilingual adults are faster in reading cognates than in reading non-cognates in both their first language (L1) and second language (L2). This cognate effect has been shown to be gradual: recognition was facilitated when words had higher degrees of cross-linguistic similarity. The aim of the current study was to investigate whether cognate facilitation can also be observed in bilingual children's sentence reading. To answer this question, a group of Frisian–Dutch bilingual children (N = 37) aged 9–12 years completed a reading task in both their languages. All children had Dutch as their dominant reading language, but most of them spoke mainly Frisian at home. Identical cognates (e.g., Dutch–Frisian boek–boek ‘book'), non-identical cognates (e.g., beam–boom ‘tree'), and non-cognates (e.g., beppe–oma ‘grandmother') were presented in sentence context, and eye movements were recorded. The results showed a non-gradual cognate facilitation effect in Frisian: identical cognates were read faster than non-identical cognates and non-cognates. In Dutch, no cognate facilitation effect could be observed. This suggests that bilingual children use their dominant reading language while reading in their non-dominant one, but not vice versa.
Mathieu Bourguignon; Martijn Baart; Efthymia C Kapnoula; Nicola Molinaro
In: Journal of Neuroscience, 40 (5), pp. 1053–1065, 2020.
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere, compared with ~20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing.
Clare Patterson; Petra B Schumacher
In: Dialogue and Discourse, 11 (1), pp. 1–39, 2020.
German personal and demonstrative pronouns have distinct preferences in their interpretation; personal pronouns are more flexible in their interpretation but tend to resolve to a prominent antecedent, while demonstratives have a strong preference for a non-prominent antecedent. However, less is known about how prominence information is used during the process of resolution, particularly in the light of two-stage processing models which assume that reference will normally be to the most accessible candidate. We conducted three experiments investigating how prominence information is used during the resolution of gender-disambiguated personal and demonstrative pronouns in German. While the demonstrative pronoun required additional processing compared to the personal pronoun, prominence information did not affect resolution in shallow conditions. It did, however, affect resolution under deep processing conditions. We conclude that prominence information is not ruled out by the presence of stronger resolution cues such as gender. However, the deployment of prominence information in the evaluation of candidate antecedents is under strategic control.
Brennan R Payne; Kara D Federmeier; Elizabeth A L Stine-Morrow
In: Quarterly Journal of Experimental Psychology, 73 (11), pp. 1841–1861, 2020.
To understand the effects of literacy on fundamental processes involved in reading, we report a secondary data analysis examining individual differences in global eye-movement measures and first-pass eye-movement distributions in a diverse sample of community-dwelling adults aged 16 to 64. Participants (n = 80) completed an assessment battery probing verbal and non-verbal cognitive abilities and read simple two-sentence passages while their eye movements were recorded. Analyses were focused on characterising the effects of literacy skill on both global indices of eye-fixation distributions and distributional differences in the sensitivity to lexical features. Global reading measures showed that lower literate adults read more slowly on average. However, distributional analyses of fixation durations revealed that the first-pass fixation durations of adults with lower literacy skill were not slower in general (i.e., there was no shift in the fixation duration distribution among lower literate adults). Instead, lower literacy was associated with greater intra-individual variability in first-pass fixation durations, including an increased proportion of extremely long fixations, differentially skewing the distribution of both first-fixation and gaze durations. Exploratory repeated-measures quantile regression analyses of gaze duration revealed differentially greater influences of word length among lower literate readers and greater activation of phonological and orthographic neighbours among higher literate readers, particularly in the tail of the distribution. Collectively, these findings suggest that literacy skill in adulthood is associated with systematic differences in both global and lexically driven eye-movement control during reading.
Ana Pellicer-Sánchez; Kathy Conklin; Laura Vilkaitė-Lozdienė
In: Language Learning, pp. 1–42, 2020.
This study examined the effect of pre-reading vocabulary instruction on learners' attention and vocabulary learning. We randomly assigned participants (L1 = 92; L2 = 88) to one of four conditions: pre-reading instruction, where participants received explicit instruction on six novel items and read a text with the items repeated eight times; reading-only, where participants simply read the same text with the novel items repeated eight times; reading-baseline, where participants read the same text with the repeated items replaced by known (control) words; and instruction-only, where participants received explicit instruction on the novel items and read an unrelated text. Eye-tracking was used to measure amount of attention to the vocabulary during reading. We assessed knowledge of the target vocabulary in three immediate posttests (form recognition, meaning recall, and meaning recognition). Results showed that pre-reading instruction (plus reading the text) led to both more vocabulary learning and a processing advantage. Cumulative reading times were a significant predictor of meaning recognition scores.
Ellen Z Peng; Alan Kan; Ruth Y Litovsky
In: Frontiers in Systems Neuroscience, 14 , pp. 1–13, 2020.
Children localize sounds using binaural cues when navigating everyday auditory environments. While sensitivity to binaural cues reaches maturity by 8–10 years of age, large individual variability has been observed in the just-noticeable-difference (JND) thresholds for interaural time difference (ITD) among children in this age range. To understand the development of binaural sensitivity beyond JND thresholds, the “looking-while-listening” paradigm was adapted in this study to reveal the real-time decision-making behavior during ITD processing. Children ages 8–14 years with normal hearing (NH) and a group of young NH adults were tested. This novel paradigm combined eye gaze tracking with behavioral psychoacoustics to estimate ITD JNDs in a two-alternative forced-choice discrimination task. Results from simultaneous eye gaze recordings during ITD processing suggested that children had adult-like ITD JNDs, but they demonstrated immature decision-making strategies. While the time course of arriving at the initial fixation and final decision in providing a judgment of the ITD direction was similar, children exhibited more uncertainty than adults during decision-making. Specifically, children made more fixation changes, particularly when tested using small ITD magnitudes, between the target and non-target response options prior to finalizing a judgment. These findings suggest that, while children may exhibit adult-like sensitivity to ITDs, their eye gaze behavior reveals that the processing of this binaural cue is still developing through late childhood.
Tatiana E Petrova; Elena I Riekhakaynen; Valentina S Bratash
In: Frontiers in Psychology, 11 , pp. 1–7, 2020.
This study investigates the online process of reading and analyzing sketchnotes (visual notes containing handwritten text and drawings) using Russian-language material. Using the eye-tracking method, we compared the processing of different types of sketchnotes [“path” (trajectory), linear, and radial] with the processing of a verbal text. Biographies of Russian writers were used as the material. In a preliminary experiment, we asked 89 college students to read the biographies and to evaluate each text or sketch on five scales (from −2 to +2). The best example of each of the three sketchnote formats and of a verbal text was chosen. In the main experiment, 21 secondary school students examined four different biographies in four different formats (three sketchnotes and a verbal text), answered factual and analytical questions about these texts, and rated the difficulty of each text. We measured the total dwell time, the total fixation count, and the average fixation duration for each stimulus as well as for separate zones within the sketches containing verbal and non-verbal information. Our results show that readers process the information better and faster when reading sketchnotes than a verbal text. In the trajectory sketchnotes, readers followed the order of elements intended by the author of the sketchnotes more closely than in the radial and linear sketchnotes. The analysis of participants' eye movements while processing the stimuli made it possible to propose several recommendations for creating effective sketchnotes.
Christian Pfeiffer; Nora Hollenstein; Ce Zhang; Nicolas Langer
In: NeuroImage, 218 , pp. 1–15, 2020.
When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224–304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. Furthermore, decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. 
Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evokes a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step to identify the neural processes of lexico-semantic processing in ecologically valid conditions and can serve to improve computer algorithms for natural language processing.
Ulrich Pomper; Rebecca Schmid; Ulrich Ansorge
In: Frontiers in Psychology, 11 , pp. 1–12, 2020.
Sounds in our environment can easily capture human visual attention. Previous studies have investigated the impact of spatially localized, brief sounds on concurrent visuospatial attention. However, little is known on how the presence of a continuous, lateralized auditory stimulus (e.g., a person talking next to you while driving a car) impacts visual spatial attention (e.g., detection of critical events in traffic). In two experiments, we investigated whether a continuous auditory stream presented from one side biases visual spatial attention toward that side. Participants had to either passively or actively listen to sounds of various semantic complexities (tone pips, spoken digits, and a spoken story) while performing a visual target discrimination task. During both passive and active listening, we observed faster response times to visual targets presented spatially close to the relevant auditory stream. Additionally, we found that higher levels of semantic complexity of the presented sounds led to reduced visual discrimination sensitivity, but only during active listening to the sounds. We provide important novel results by showing that the presence of a continuous, ongoing auditory stimulus can impact visual processing, even when the sounds are not endogenously attended to. Together, our findings demonstrate the implications of ongoing sounds on visual processing in everyday scenarios such as moving about in traffic.
Vincent Porretta; Lori Buchanan; Juhani Järvikivi
In: Attention, Perception, and Psychophysics, pp. 1–8, 2020.
Listeners use linguistic information and real-world knowledge to predict upcoming spoken words. However, studies of predictive processing have focused on prediction under optimal listening conditions. We examined the effect of foreign-accented speech on predictive processing. Furthermore, we investigated whether accent-specific experience facilitates predictive processing. Using the visual world paradigm, we demonstrated that although the presence of an accent impedes predictive processing, it does not preclude it. We further showed that as listener experience increases, predictive processing for accented speech increases and begins to approximate the pattern seen for native speech. These results speak to the limitation of the processing resources that must be allocated, leading to a trade-off when listeners are faced with increased uncertainty and more effortful recognition due to a foreign accent.
Heather D Porter; Koomi Kim; Judith K Franzak; Katherine MacDonald
In: Journal of Adolescent and Adult Literacy, 63 (5), pp. 519–528, 2020.
As one of multiple ways to explore the reading process, eye movement miscue analysis is a tool that provides a continuous record of eye fixations and movements over an entire text, and a record of the oral reading of that text and the miscues (observed responses) that readers produce. The authors present profiles of two successful college readers who doubted their reading efficacy. Using data from eye tracking, miscue analysis, and the retelling, the authors invited the readers to examine their assumptions about reading and how they positioned themselves as readers. Data presented were drawn from the readers' eye movements during the reading of two texts—one an informational text and the other a constructed text with embedded errors—and are discussed in relation to the readers' perceptions of reader identity and processes. Implications for teachers include strategies for helping readers address common misconceptions about reading and reclaim their role as meaning makers.
Krishnamachari S Prahalad; Daniel R Coates
In: Vision Research, 171 , pp. 1–10, 2020.
Patients with central vision loss are forced to use an eccentric retinal location as a substitute for the fovea, called a preferred retinal locus, or PRL. Clinical studies have shown that patients habitually choose a PRL located to the left of and/or below the scotoma in the visual field. The position to the right of the scotoma is almost never chosen, even though this would be theoretically more suitable for reading, since the scotoma would no longer block the upcoming text. In the current study, we tested whether this asymmetry may have an oculomotor basis. Six normally sighted subjects viewed page-like text with a simulated scotoma, identifying embedded numbers in “words” comprising random letters. Subjects trained and tested with three different artificial PRL (“pseudo-PRL,” or pPRL) locations: inferior, to the right, or to the left of the scotoma. After several training blocks for each pPRL position, subjects were found to produce reliable oculomotor control. Both reading speed and eye movement characteristics reproduced observations from traditional paradigms, such as page-mode reading and RSVP, showing an advantage for an inferior pPRL. While left and right positions resulted in similar reading speeds, we observed that a right pPRL caused excessively large saccades and more direction switches, exhibiting a zig-zag pattern that developed spontaneously. Thus, we propose that patients' typical avoidance of PRL positions to the right of their scotoma could have an oculomotor component: the erratic eye motion might potentially negate the perceptual benefit that this PRL position would offer.
Mengsheng Qian; Ningjun Xu
In: International Journal of Applied Linguistics and Translation, 6 (4), pp. 116–123, 2020.
Sight interpreting, one of the typical forms of conference interpreting, is widely known to require great effort from interpreters in transforming one language into another. Because of differences between Chinese and English, some sentence structures, such as relative clauses, prove especially difficult to render, yet some experienced interpreters manage this strenuous task with ease. Uncovering what goes on during information processing is enlightening in that it sheds light on the mechanisms the human brain uses to process information, which is also relevant to artificial intelligence. An eye-tracking experiment was designed in which 31 subjects with an average age of 22 and comparable linguistic competence participated in a 40–50 min session; each subject sight-interpreted self-designed, expert-validated sentences that differed only in the role of the relative pronoun in the relative clause. Data analysis clearly indicates that the cognitive effort of processing complex sentences, as evidenced by two types of relative clauses (OR clauses, in which the relative pronoun functions as the object of the relative clause, and SB clauses, in which it functions as the subject), differs, with the former requiring more cognitive effort than the latter, as shown in several key eye-movement measures: regressions-in, regressions-out, first fixation duration, gaze duration, regression duration, and total reading duration. These differences are statistically significant within AOIs such as the antecedent and the relative clause. The findings further substantiate the hypothesis that sight interpreting is more strenuous, and thus requires more cognitive effort, than ordinary reading. In addition, the structure of the relative clause also plays a role in the cognitive effort expended by interpreters.

However, it remains unclear whether the length of the relative clause plays a decisive role in the processing of the whole sentence during sight interpreting. Likewise, whether these results apply to other types of complex structure remains unanswered. More data should be collected, incorporating more complex structures, to further uncover the cognitive effort involved in sight interpreting.
Rishi Rajalingham; Kohitij Kar; Sachi Sanghavi; Stanislas Dehaene; James J DiCarlo
In: Nature Communications, 11 (1), pp. 1–13, 2020.
The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.