EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
2019
Michelle S. Peter; Samantha Durrant; Andrew Jessop; Amy Bidgood; Julian M. Pine; Caroline F. Rowland Does speed of processing or vocabulary size predict later language growth in toddlers? Journal Article In: Cognitive Psychology, vol. 115, pp. 1–25, 2019. It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UK-CDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size, though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.
Mikhail Y. Pokhoday; Yury Y. Shtyrov; Andriy Myachykov Effects of visual priming and event orientation on word order choice in Russian sentence production Journal Article In: Frontiers in Psychology, vol. 10, pp. 1661, 2019. Existing research shows that the distribution of the speaker's attention among an event's protagonists affects syntactic choice during sentence production. One of the debated issues concerns the extent of the attentional contribution to syntactic choice in languages that put stronger emphasis on word order arrangement rather than the choice of the overall syntactic frame. To address this, the current study used a sentence production task, in which Russian native speakers were asked to verbally describe visually perceived transitive events. Prior to describing the target event, a visual cue directed the participants' attention to the location of either the agent or the patient of the subsequently presented visual event. In addition, we also manipulated event orientation (agent-left vs agent-right) as another potential contributor to syntactic choice. The number of patient-initial sentences was the dependent variable compared between conditions. First, the obtained results replicated the effect of visual cueing on word order in Russian: more patient-initial sentences in the patient-cued condition. Second, we registered a novel effect of event orientation: Russian native speakers produced more patient-initial sentences after seeing events developing from right to left as opposed to left-to-right events. Our study provides new evidence about the role of the speaker's attention and event orientation in syntactic choice in a language with flexible word order.
Vincent Porretta; Aki-Juhani Kyröläinen Influencing the time and space of lexical competition: The effect of gradient foreign accentedness Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 10, pp. 1832–1851, 2019. This article examines the influence of gradient foreign accentedness on lexical competition during spoken word recognition. Using native and Mandarin-accented English words ranging in degree of foreign accentedness, we investigate the effect of increased accentedness on (a) the size of the competitor space and (b) the strength and duration of competitor activation. Here, we analyze the number of misperceptions in a transcription task, as well as the time course of competitor activation in a Visual World Paradigm eye-tracking task. The transcription data show that as accentedness increases, the number of unique misperceptions increases. This indicates that greater accent strength induces the activation of many additional competitors within the competition space relative to native speech. The eye-tracking data further show that, as accentedness increases, looks to competitors (not produced in the transcription task) increase both in likelihood and duration. This indicates that greater accentedness boosts the strength of competitor activation as well as the duration of the competition process, even when comprehension is ultimately successful, suggesting strong and diffuse competition within the lexicon. The results provide evidence of changes in the underlying dynamics, which lead to the pervasive processing costs associated with foreign-accented speech that are commonly observed in behavioral data.
Céline Pozniak; Barbara Hemforth; Yair Haendler; Andrea Santi; Nino Grillo Seeing events vs. entities: The processing advantage of Pseudo Relatives over Relative Clauses Journal Article In: Journal of Memory and Language, vol. 107, pp. 128–151, 2019. We present the results of three offline questionnaires (one attachment preference study and two acceptability judgments) and two eye-tracking studies in French and English, investigating the resolution of the ambiguity between pseudo relative and relative clause interpretations. This structural and interpretive ambiguity has recently been shown to play a central role in the explanation of apparent cross-linguistic asymmetries in relative clause attachment (Grillo and Costa, 2014; Grillo et al., 2015). This literature has argued that pseudo relatives are preferred to relative clauses because of their structural and interpretive simplicity. This paper adds to this growing body of literature in two ways. First we show that, in contrast to previous findings, French speakers prefer to attach relative clauses to the most local antecedent once pseudo relative availability is controlled for. We then provide direct support for the pseudo relative preference: grammatically forced disambiguation to a relative clause interpretation leads to degraded acceptability and greater processing cost in a pseudo relative environment than maintaining compatibility with a pseudo relative.
Esha Prakash; Rebecca J. McLean; Sarah J. White; Kevin B. Paterson; Irene Gottlob; Frank A. Proudlock Reading individual words within sentences in infantile nystagmus Journal Article In: Investigative Ophthalmology & Visual Science, vol. 60, no. 6, pp. 2226–2236, 2019. PURPOSE. Normal readers make immediate and precise adjustments in eye movements during sentence reading in response to individual word features, such as lexical difficulty (e.g., common or uncommon words) or word length. Our purpose was to assess the effect of infantile nystagmus (IN) on these adaptive mechanisms. METHODS. Eye movements were recorded from 29 participants with IN (14 albinism, 12 idiopathic, and 3 congenital stationary night blindness) and 15 controls when reading sentences containing either common/uncommon words or long/short target words. Parameters assessed included: duration of first foveation/fixation, number of first-pass and percentage second-pass foveations/fixations, percentage words skipped, gaze duration, acquisition time (gaze + nongaze duration), landing site locations, clinical and experimental reading speeds. RESULTS. Participants with IN could not modify first foveation durations in contrast to controls who made longer first fixations on uncommon words (P < 0.001). Participants with IN made more first-pass foveations on uncommon and long words (P < 0.001) to increase gaze durations. However, this also increased nongaze durations (P < 0.001) delaying acquisition times. Participants with IN reread shorter words more often (P < 0.005). Similar to controls, participants with IN landed more first foveations between the start and center of long words. Reading speeds during experiments were lower in IN participants compared to controls (P < 0.01). CONCLUSIONS. People with IN make more first-pass foveations on uncommon and long words influencing reading speeds. This demonstrates that the "slow to see" phenomenon occurs during word reading in IN. These deficits are not captured by clinical reading charts.
Stéphanie Bellocchi; Delphine Massendari; Jonathan Grainger; Stéphanie Ducrot Effects of inter-character spacing on saccade programming in beginning readers and dyslexics Journal Article In: Child Neuropsychology, vol. 25, no. 4, pp. 482–506, 2019. The present study investigated the impact of inter-character spacing on saccade programming in beginning readers and dyslexic children. In two experiments, eye movements were recorded while dyslexic children, reading-age, and chronological-age controls, performed an oculomotor lateralized bisection task on words and strings of hashes presented either with default inter-character spacing or with extra spacing between the characters. The results of Experiment 1 showed that (1) only proficient readers had already developed highly automatized procedures for programming both left- and rightward saccades, depending on the discreteness of the stimuli and (2) children of all groups were disrupted (i.e., had trouble landing close to the beginning of the stimuli) by extra spacing between the characters of the stimuli, particularly for stimuli presented in the left visual field. Experiment 2 was designed to disentangle the role of inter-character spacing and spatial width. Stimuli were made the same physical length in the default and extra-spacing conditions by having more characters in the default spacing condition. Our results showed that inter-letter spacing still influenced saccade programming when controlling for spatial width, thus confirming the detrimental effect of extra spacing for saccade programming. We conclude that the beneficial effect of increased inter-letter spacing on reading can be better explained in terms of decreased visual crowding than improved saccade targeting.
Jean-Baptiste Bernard; Eric Castet The optimal use of non-optimal letter information in foveal and parafoveal word recognition Journal Article In: Vision Research, vol. 155, pp. 44–61, 2019. Letters and words across the visual field can be difficult to identify due to limiting visual factors such as acuity, crowding and position uncertainty. Here, we show that when human readers identify words presented at foveal and parafoveal locations, they act like theoretical observers making optimal use of letter identity and letter position information independently extracted from each letter after an unavoidable and non-optimal letter recognition guess. The novelty of our approach is that we carefully considered foveal and parafoveal letter identity and position uncertainties by measuring crowded letter recognition performance in five subjects without any word context influence. Based on these behavioral measures, lexical access was simulated for each subject by an observer making optimal use of each subject's uncertainties. This free-parameter model was able to predict individual behavioral recognition rates of words presented at different positions across the visual field. Importantly, the model was also able to predict individual mislocation and identity letter errors made during behavioral word recognition. These results reinforce the view that human readers recognize foveal and parafoveal words by parts (the word letters) in a first stage, independently of word context. They also suggest a second step where letter identity and position uncertainties are generated based on letter first guesses and positions. During the third lexical access stage, identity and position uncertainties from each letter look remarkably combined together through an optimal word recognition decision process.
Nicoletta Biondo; Francesco Vespignani; Brian W. Dillon Attachment and concord of temporal adverbs: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 10, pp. 983, 2019. The present study examined the processing of temporal adverbial phrases such as "last week," which must agree in temporal features with the verb they modify. We investigated readers' sensitivity to this feature match or mismatch in two eye-tracking studies. The main aim of this study was to expand the range of concord phenomena which have been investigated in real-time processing in order to understand how linguistic dependencies are formed during sentence comprehension (Felser et al., 2017). Under a cue-based perspective, linguistic dependency formation relies on an associative cue-based retrieval mechanism (Lewis et al., 2006; McElree, 2006), but how such a mechanism is deployed over diverse linguistic dependencies remains a matter of debate. Are all linguistic features candidate cues that guide retrieval? Are all cues given similar weight? Are different cues differently weighted based on the dependency being processed? To address these questions, we implemented a mismatch paradigm (Sturt, 2003) adapted for temporal concord dependencies. This paradigm tested whether readers were sensitive to a temporal agreement between a temporal adverb like last week and a linearly distant, but structurally accessible verb, as well as a linearly proximate but structurally inaccessible verb. We found clear evidence that readers were sensitive to feature match between the adverb and the linearly distant, structurally accessible verb. We found no clear evidence on whether feature match with the inaccessible verb impacted the processing of a temporal adverb. Our results suggest syntactic positional information plays an important role during the processing of the temporal concord relation.
Jo Black; Mahsa Barzy; David Williams; Heather Ferguson Intact counterfactual emotion processing in autism spectrum disorder: Evidence from eye-tracking Journal Article In: Autism Research, vol. 12, no. 3, pp. 422–444, 2019. Counterfactual emotions, such as regret and relief, require an awareness of how things could have been different. We report a preregistered experiment that examines how adults with and without ASD process counterfactual emotions in real-time, based on research showing that the developmental trajectory of counterfactual thinking may be disrupted in people with ASD. Participants were eye-tracked as they read narratives in which a character made an explicit decision then subsequently experienced either a mildly negative or positive outcome. The final sentence in each story included an explicit remark about the character's mood that was either consistent or inconsistent with the character's expected feelings of regret or relief (e.g., "… she feels happy/annoyed about her decision."). Results showed that adults with ASD are unimpaired in processing emotions based on counterfactual reasoning, and in fact showed earlier sensitivity to inconsistencies within relief contexts compared to TD participants. This finding highlights a previously unknown strength in empathy and emotion processing in adults with ASD, which may have been masked in previous research that has typically relied on explicit, response-based measures to record emotional inferences, which are likely to be susceptible to demand characteristics and response biases. Therefore, this study highlights the value of employing implicit measures that provide insights on peoples' immediate responses to emotional content without disrupting ongoing processing. Lay Summary: Despite known difficulties with empathy and perspective-taking, we found that adults with autism are unimpaired at inferring complex emotions (regret and relief) in others. This finding extends existing evidence showing dysfunctional counterfactual thinking in children with autism. We highlight the value of using implicit measures to identify strengths and abilities in ASD that may be masked by explicit tasks that require participants to interact socially or report their own thoughts.
Frances Blanchette; Cynthia Lukyanenko Unacceptable grammars? An eye-tracking study of English negative concord Journal Article In: Language and Cognition, vol. 11, no. 1, pp. 1–40, 2019. This paper uses eye-tracking while reading to examine Standard English speakers' processing of sentences with two syntactic negations: a negative auxiliary and either a negative subject (e.g., Nothing didn't fall from the shelf) or a negative object (e.g., She didn't answer nothing in that interview). Sentences were read in Double Negation (DN; the 'she answered something' reading of she didn't answer nothing) and Negative Concord (NC; the 'she answered nothing' reading of she didn't answer nothing) biasing contexts. Despite the social stigma associated with NC, and linguistic assumptions that Standard English has a DN grammar, in which each syntactic negation necessarily contributes a semantic negation, our results show that Standard English speakers generate both NC and DN interpretations, and that their interpretation is affected by the syntactic structure of the negative sentence. Participants spent more time reading the critical sentence and rereading the context sentence when negative object sentences were paired with DN-biasing contexts and when negative subject sentences were paired with NC-biasing contexts. This suggests that, despite not producing NC, they find NC interpretations of negative object sentences easier to generate than DN interpretations. The results illustrate the utility of online measures when investigating socially stigmatized construction types.
Hazel I. Blythe; Barbara J. Juhasz; Lee W. Tbaily; Keith Rayner; Simon P. Liversedge Reading sentences of words with rotated letters: An eye movement study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 7, pp. 1790–1804, 2019. Participants' eye movements were measured as they read sentences in which individual letters within words were rotated. Both the consistency of direction and the magnitude of rotation were manipulated (letters rotated all in the same direction, or alternately clockwise and anti-clockwise, by 30° or 60°). Each sentence included a target word that was manipulated for frequency of occurrence. Our objectives were threefold: To quantify how change in the visual presentation of individual letters disrupted word identification, and whether disruption was consistent with systematic change in visual presentation; to determine whether inconsistent letter transformation caused more disruption than consistent letter transformation; and to determine whether such effects were comparable for words that were high and low frequency to explore the extent to which they were visually or linguistically mediated. We found that disruption to reading was greater as the magnitude of letter rotation increased, although even small rotations affected processing. The data also showed that alternating letter rotations were significantly more disruptive than consistent rotations; this result is consistent with models of lexical identification in which encoding occurs over units of more than one adjacent letter. These rotation manipulations also showed significant interactions with word frequency on the target word: Gaze durations and total fixation duration times increased disproportionately for low-frequency words when they were presented at more extreme rotations. These data provide a first step towards quantifying the relative contribution of the spatial relationships between individual letters to word recognition and eye movement control in reading.
Hans Rutger Bosker; Marjolein Os; Rik Does; Geertje Bergen Counting 'uhm's: How tracking the distribution of native and non-native disfluencies influences online language comprehension Journal Article In: Journal of Memory and Language, vol. 106, pp. 189–202, 2019. Disfluencies, like uh, have been shown to help listeners anticipate reference to low-frequency words. The associative account of this 'disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.
Bettina Braun; Yuki Asano; Nicole Dehé When (not) to look for contrastive alternatives: The role of pitch accent type and additive particles Journal Article In: Language and Speech, vol. 62, no. 4, pp. 751–778, 2019. This study investigates how pitch accent type and additive particles affect the activation of contrastive alternatives. In Experiment 1, German listeners heard declarative utterances (e.g., The swimmer wanted to put on flippers) and saw four printed words displayed on screen: one that was a contrastive alternative to the subject noun (e.g., diver), one that was non-contrastively related (e.g., pool), the object (e.g., flippers), and an unrelated distractor. Experiment 1 manipulated pitch accent type, comparing a broad focus control condition to two narrow focus conditions: with a contrastive or non-contrastive accent on the subject noun (nuclear L+H* vs. H+L*, respectively, followed by deaccentuation). In Experiment 2, the utterances in the narrow focus conditions were preceded by the unstressed additive particle auch ("also"), which may trigger alternatives itself. It associated with the accented subject. Results showed that, compared to the control condition, participants directed more fixations to the contrastive alternative when the subject was realized with a contrastive accent (nuclear L+H*) than when it was realized with non-contrastive H+L*, while additive particles had no effect. Hence, accent type is the primary trigger for signaling the presence of alternatives (i.e., contrast). Implications for theories of information structure and the processing of additive particles are discussed.
Laurel Brehm; Linda Taschenberger; Antje S. Meyer Mental representations of partner task cause interference in picture naming Journal Article In: Acta Psychologica, vol. 199, pp. 1–13, 2019. Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
Emma Bridgwater; Aki-Juhani Kyröläinen; Victor Kuperman The influence of syntactic expectations on reading comprehension is malleable and strategic: An eye-tracking study of English dative alternation Journal Article In: Canadian Journal of Experimental Psychology, vol. 73, no. 3, pp. 179–192, 2019. Language processing is incremental and inherently predictive. Against this theoretical backdrop, we investigated the role of upcoming structural information in the comprehension of the English dative alternation. The use of eye-tracking enabled us to examine both the time course and locus of the effect associated with (a) structural expectations based on a lifetime of experience with language, and (b) rapid adaptation of the reader to the local statistics of the experiment. We quantified (a) as a verb subcategorization bias toward dative alternatives, and (b) as distributional biases in the syntactic input during the experiment. A reliable facilitatory effect of the verb bias was only observed in the double-object datives and only in the disambiguation region of the second object. Furthermore, structural priming led to an earlier locus of the verb bias effect, suggesting an interaction between (a) and (b). Our results offer a new outlook on the utilization of syntactic expectations during reading, in conjunction with rapid adaptation to the immediate linguistic environment. We demonstrate that this utilization is both malleable and strategic.
Julie Gregg; Albrecht W. Inhoff; Cynthia M. Connine Re-reconsidering the role of temporal order in spoken word recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 11, pp. 2574–2583, 2019. Spoken word recognition models incorporate the temporal unfolding of word information by assuming that positional match constrains lexical activation. Recent findings challenge the linearity constraint. In the visual world paradigm, Toscano, Anderson, and McMurray observed that listeners preferentially viewed a picture of a target word's anadrome competitor (e.g., competitor bus for target sub) compared with phonologically unrelated distractors (e.g., well) or competitors sharing an overlapping vowel (e.g., sun). Toscano et al. concluded that spoken word recognition relies on coarse grain spectral similarity for mapping spoken input to a lexical representation. Our experiments aimed to replicate the anadrome effect and to test the coarse grain similarity account using competitors without vowel position overlap (e.g., competitor leaf for target flea). The results confirmed the original effect: anadrome competitor fixation curves diverged from unrelated distractors approximately 275 ms after the onset of the target word. In contrast, the no vowel position overlap competitor did not show an increase in fixations compared with the unrelated distractors. The contrasting results for the anadrome and no vowel position overlap items are discussed in terms of theoretical implications of sequential match versus coarse grain similarity accounts of spoken word recognition. We also discuss design issues (repetition of stimulus materials and display parameters) concerning the use of the visual world paradigm in making inferences about online spoken word recognition.
Jason W. Gullifer; Debra Titone The impact of a momentary language switch on bilingual reading: Intense at the switch but merciful downstream for L2 but not L1 readers Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 11, pp. 2036–2050, 2019. We investigated whether cross-language activation is sensitive to shifting language demands and language experience during first and second language (i.e., L1, L2) reading. Experiment 1 consisted of L1 French-L2 English bilinguals reading in the L2, and Experiment 2 consisted of L1 English-L2 French bilinguals reading in the L1. Both groups read English sentences with target words serving as indices of cross-language activation: cross-language homographs, cognates, and matched language-unique control words. Critically, we manipulated whether English sentences contained a momentary language switch into French before downstream target words. This allowed us to assess the consequences of shifting language demands, both in the moment, and residually following a switch as a function of language experience. Switches into French were associated with a reading cost at the switch site for both L2 and L1 readers. However, downstream cross-language activation was larger following a switch only for L1 readers. These results suggest that cross-language activation is jointly sensitive to momentary shifts in language demands and language experience, likely reflecting different control demands faced by L2 versus L1 readers, consistent with models of bilingual processing that ascribe a primary role for language control.
Arella E. Gussow; Efthymia C. Kapnoula; Nicola Molinaro Any leftovers from a discarded prediction? Evidence from eye-movements during sentence comprehension Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 8, pp. 1041–1058, 2019. We investigated how listeners use gender-marked adjectives to adjust lexical predictions during sentence comprehension. Participants listened to sentence fragments in Spanish (e.g. "The witch flew to the village on her … ") that created expectation for a specific noun (broomstick, feminine), and were completed by an adjective and a noun. The adjective either agreed (new, feminine), disagreed (new, masculine), or was neutral (big, feminine/masculine) with respect to the expected noun's gender. Using the visual-world paradigm, we monitored looks toward images of the expected noun versus an alternative of the opposite gender (helicopter, masculine). While listening to the initial fragment, participants looked more towards the expected noun. Once the adjective was heard, looks shifted toward the noun that matched the adjective's gender. Finally, upon hearing the noun, looks were affected by both previous context and adjective gender. We conclude that predictions are updated online based on gender cues, but sentence context still affects integration of the expected noun.
Julia Habicht; Oliver Behler; Birger Kollmeier; Tobias Neher In: Frontiers in Neuroscience, vol. 13, pp. 420, 2019. Recently, evidence has been accumulating that untreated hearing loss can lead to neurophysiological changes that affect speech processing abilities in noise. To shed more light on how aiding may impact these effects, this study explored the influence of hearing aid (HA) experience on the cognitive processes underlying speech comprehension. Eye-tracking and functional magnetic resonance imaging (fMRI) measurements were carried out with acoustic sentence-in-noise (SiN) stimuli complemented by pairs of pictures that either correctly (target picture) or incorrectly (competitor picture) depicted the sentence meanings. For the eye-tracking measurements, the time taken by the participants to start fixating the target picture (the 'processing time') was measured. For the fMRI measurements, brain activation inferred from blood-oxygen-level dependent responses following sentence comprehension was measured. A noise-only condition was also included. Groups of older hearing-impaired individuals matched in terms of age, hearing loss, and working memory capacity with (eHA; N = 13) or without (iHA; N = 14) HA experience participated. All acoustic stimuli were presented via earphones with individual linear amplification to ensure audibility. Consistent with previous findings, the iHA group had significantly longer (poorer) processing times than the eHA group, despite no differences in speech recognition performance. Concerning the fMRI measurements, there were indications of less brain activation in some right frontal areas for SiN relative to noise-only stimuli in the eHA group compared to the iHA group. Together, these results suggest that HA experience leads to faster speech-in-noise processing, possibly related to less recruitment of brain regions outside the core sentence-comprehension network. Follow-up research is needed to substantiate the findings related to changes in cortical speech processing with HA use.
Lei Han; Rui Sun; Fengqiang Gao; Yuci Zhou; Min Jou The effect of negative energy news on social trust and helping behavior Journal Article In: Computers in Human Behavior, vol. 92, pp. 128–138, 2019. @article{Han2019, Due to the fast development of science and technology, people increasingly prefer to get information from the Internet rather than from newspapers and magazines. Recently, however, this accessible online information has been filled with a great deal of negative news, or neutral news with negative headlines. The current study therefore conducted three experiments to examine how negative online news affects social trust and helping behavior. The results of Experiment 1 showed that individuals demonstrated an attentional bias toward, and a preference for, negative news during the eye-movement task, indicating that individuals were more easily affected by negative than by positive online news. Building on Experiment 1, Experiment 2 used the guiding effect of online news and found that readers who were presented with negative news showed less helping behavior than those who read positive news, and that this relationship was completely mediated by social trust. In Experiment 3, we changed the headline of every neutral news story into two versions, one with a neutral headline and another with a negative headline, and found more negative cognition, lower social trust, and less helping behavior when individuals read negative headlines. The results support the general learning model and social cognitive theory, showing that negative news affects individuals' cognition, such as social trust, and in turn influences their helping behavior. In particular, negative headlines had a severely negative effect on social trust and helping behavior. |
Hannah Harvey; Stephen J. Anderson; Robin Walker Increased word spacing improves performance for reading scrolling text with central vision loss Journal Article In: Optometry and Vision Science, vol. 96, no. 8, pp. 609–616, 2019. @article{Harvey2019, SIGNIFICANCE: Scrolling text can be an effective reading aid for those with central vision loss. Our results suggest that increased interword spacing with scrolling text may further improve the reading experience of this population. This conclusion may be of particular interest to low-vision aid developers and visual rehabilitation practitioners. PURPOSE: The dynamic, horizontally scrolling text format has been shown to improve reading performance in individuals with central visual loss. Here, we sought to determine whether reading performance with scrolling text can be further improved by modulating interword spacing to reduce the effects of visual crowding, a factor known to impact negatively on reading with peripheral vision. METHODS: The effects of interword spacing on reading performance (accuracy, memory recall, and speed) were assessed for eccentrically viewed single sentences of scrolling text. Separate experiments were used to determine whether performance measures were affected by any confound between interword spacing and text presentation rate in words per minute. Normally sighted participants were included, with a central vision loss implemented using a gaze-contingent scotoma of 8° diameter. In both experiments, participants read sentences that were presented with an interword spacing of one, two, or three characters. RESULTS: Reading accuracy and memory recall were significantly enhanced with triple-character interword spacing (both measures, P ≤ .01). These basic findings were independent of the text presentation rate (in words per minute). CONCLUSIONS: We attribute the improvements in reading performance with increased interword spacing to a reduction in the deleterious effects of visual crowding. 
We conclude that increased interword spacing may enhance reading experience and ability when using horizontally scrolling text with a central vision loss. |
Hannah Harvey; Simon P. Liversedge; Robin Walker Evidence for a reduction of the rightward extent of the perceptual span when reading dynamic horizontally scrolling text Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 7, pp. 951–965, 2019. @article{Harvey2019a, The dynamic horizontally scrolling text format produces a directional conflict in the allocation of attention for reading, with a necessity to track each word leftward (in the direction of movement) concurrently with normal rightward shifts made to progress through the text (in left-to-right orthographies such as English). The gaze-contingent window paradigm was used to compare the extent of the perceptual span in reading of scrolling and static sentences. Across two experiments, this investigation confirmed that the allocation of attentional resources to the right of fixation was compressed with scrolling text. There was no evidence for a reversal of the direction of asymmetry or a confounding shift of landing position. |
Naomi Havron; Alex Carvalho; Anne-Caroline Fiévét; Anne Christophe Three- to four-year-old children rapidly adapt their predictions and use them to learn novel word meanings Journal Article In: Child Development, vol. 90, no. 1, pp. 82–90, 2019. @article{Havron2019, Adults create and update predictions about what speakers will say next. This study asks whether prediction can drive language acquisition, by testing whether 3- to 4-year-old children (n = 45) adapt to recent information when learning novel words. The study used a syntactic context which can precede both nouns and verbs to manipulate children's predictions about what syntactic category will follow. Children for whom the syntactic context predicted verbs were more likely to infer that a novel word appearing in this context referred to an action, than children for whom it predicted nouns. This suggests that children make rapid changes to their predictions, and use this information to learn novel information, supporting the role of prediction in language acquisition. |
Michael G. Cutter; Andrea E. Martin; Patrick Sturt Capitalization interacts with syntactic complexity Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–19, 2019. @article{Cutter2019, We investigated whether readers use the low-level cue of proper noun capitalization in the parafovea to infer syntactic category, and whether this results in an early update of the representation of a sentence's syntactic structure. Participants read sentences containing either a subject relative or object relative clause, in which the relative clause's overt argument was a proper noun (e.g., The tall lanky guard who alerted Charlie/Charlie alerted to the danger was young) across three experiments. In Experiment 1 these sentences were presented in normal sentence casing or entirely in upper case. In Experiment 2 participants received either valid or invalid parafoveal previews of the relative clause. In Experiment 3 participants viewed relative clauses in only normal conditions. We hypothesized that we would observe relative clause effects (i.e., inflated fixation times for object relative clauses) while readers were still fixated on the word who, if readers use capitalization to infer a parafoveal word's syntactic class. This would constitute a syntactic parafoveal-on-foveal effect. Furthermore, we hypothesized that this effect should be influenced by sentence casing in Experiment 1 (with no cue for syntactic category being available in upper case sentences) but not by parafoveal preview validity of the target words. We observed syntactic parafoveal-on-foveal effects in Experiments 1 and 3, and in a Bayesian analysis of the combined data from all three experiments. These effects seemed to be influenced more by noun capitalization than by lexical processing. We discuss our findings in relation to models of eye movement control and sentence processing theories. |
Julien Dampuré; Pedro Javier López-Pérez; Horacio A. Barber Meaning-based attentional guidance as a function of foveal and task-related cognitive loads Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 1, pp. 1–12, 2019. @article{Dampure2019, The depth of parafoveal word processing depends on the amount of cognitive resources available. Whether this principle applies to the parafoveal semantic processing of multiple words remains, however, controversial. This study therefore aimed at testing the impact of the amount of cognitive resources available on the parafoveal semantic processing of words, by manipulating the foveal and task-related cognitive loads. Participants searched for words in displays of three semantically related or unrelated words, one of which was presented in the centre of the screen and two within the parafovea. The nature of the task and the characteristics of the centred word were manipulated to vary the task-related load and the foveal load, respectively. Analyses revealed more first saccades toward the parafoveal semantic distractors when both loads were low. These results indicate that fast parafoveal semantic word processing is constrained by the availability of cognitive resources. |
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P. Liversedge A co-registration investigation of inter-word spacing and parafoveal preview: Eye movements and fixation-related potentials Journal Article In: PLoS ONE, vol. 14, no. 12, pp. e0225819, 2019. @article{Degno2019, Participants' eye movements (EMs) and EEG signal were simultaneously recorded to examine foveal and parafoveal processing during sentence reading. All the words in the sentence were manipulated for inter-word spacing (intact spaces vs. spaces replaced by a random letter) and parafoveal preview (identical preview vs. random letter string preview). We observed disruption for unspaced text and invalid preview conditions in both EMs and fixation- related potentials (FRPs). Unspaced and invalid preview conditions received longer reading times than spaced and valid preview conditions. In addition, the FRP data showed that unspaced previews disrupted reading in earlier time windows of analysis, compared to string preview conditions. Moreover, the effect of parafoveal preview was greater for spaced relative to unspaced conditions, in both EMs and FRPs. These findings replicate well-established preview effects, provide novel insight into the neural correlates of reading with and without inter-word spacing and suggest that spatial selection precedes lexical processing. |
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P. Liversedge Parafoveal previews and lexical frequency in natural reading: Evidence from eye movements and fixation-related potentials. Journal Article In: Journal of Experimental Psychology: General, vol. 148, no. 3, pp. 453–474, 2019. @article{Degno2019a, Participants' eye movements and electroencephalogram (EEG) signal were recorded as they read sentences displayed according to the gaze-contingent boundary paradigm. Two target words in each sentence were manipulated for lexical frequency (high vs. low frequency) and parafoveal preview of each target word (identical vs. string of random letters vs. string of Xs). Eye movement data revealed visual parafoveal-on-foveal (PoF) effects, as well as foveal visual and orthographic preview effects and word frequency effects. Fixation-related potentials (FRPs) showed visual and orthographic PoF effects as well as foveal visual and orthographic preview effects. Our results replicated the early preview positivity effect (Dimigen, Kliegl, & Sommer, 2012) in the X-string preview condition, and revealed different neural correlates associated with a preview comprised of a string of random letters relative to a string of Xs. The former effects seem likely to reflect difficulty associated with the integration of parafoveal and foveal information, as well as feature overlap, while the latter reflect inhibition, and potentially disruption, to processing underlying reading. Interestingly, and consistent with Kretzschmar, Schlesewsky, and Staub (2015), no frequency effect was reflected in the FRP measures. The findings provide insight into the neural correlates of parafoveal processing and written word recognition in reading and demonstrate the value of utilizing ecologically valid paradigms to study well established phenomena that occur as text is read naturally. |
Félix Desmeules-Trudel; Tania S. Zamuner Gradient and categorical patterns of spoken-word recognition and processing of phonetic details Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 1673–1674, 2019. @article{DesmeulesTrudel2019, The speech signal is inherently rich, and this reflects complexities of speech articulation. During spoken-word recognition, listeners must process time-dependent perceptual cues, and the role that these cues play varies depending on the phonological status of the sounds across languages. For example, Canadian French has both phonologically nasal vowels (i.e., contrastive) and coarticulatorily nasalized vowels, as opposed to English, which only has coarticulatorily nasalized vowels. We investigated how vowel nasalization duration, a time-dependent phonetic cue to the French nasal contrast, affects spoken-word recognition. Using eye tracking in two visual world paradigm experiments, the results show that fine-grained phonetic information is important for lexical recognition, and that lexical access is dependent on small variations in the signal. The results also show gradient interpretation of ambiguous vowel nasalization despite the phonemic distinction between phonological nasal vowels and coarticulatorily nasalized vowels in Canadian French. Gradience was found when words were ambiguous, and interpretation was more categorical when words were unambiguous. These results support the hypothesis of gradient interpretation of phonetic cues for ambiguously produced stimuli and the storage of coarticulatory information in phono-lexical representations for a language that has a phonological contrast for nasality (i.e., French). |
Heather R. Dial; Bob McMurray; Randi C. Martin Lexical processing depends on sublexical processing: Evidence from the visual world paradigm and aphasia Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 4, pp. 1047–1064, 2019. @article{Dial2019, Some early studies of people with aphasia reported strikingly better performance on lexical than on sublexical speech perception tasks. These findings challenged the claim that lexical processing depends on sublexical processing and suggested that acoustic information could be mapped directly to lexical representations. However, Dial and Martin (Neuropsychologia 96: 192-212, 2017) argued that these studies failed to match the discriminability of targets and distractors for the sublexical and lexical stimuli and showed that when using closely matched tasks with natural speech tokens, no patient performed substantially better at the lexical than at the sublexical processing task. In the current study, we sought to provide converging evidence for the dependence of lexical on sublexical processing by examining the perception of synthetic speech stimuli varied on a voice-onset time continuum using eye-tracking methodology, which is sensitive to online speech perception processes. Eight individuals with aphasia and ten age-matched controls completed two visual world paradigm tasks: phoneme (sublexical) and word (lexical) identification. For both identification and eye-movement data, strong correlations were observed between the sublexical and lexical tasks. Critically, no patient within the control range on the lexical task was impaired on the sublexical task. Overall, the current study supports the claim that lexical processing depends on sublexical processing. Implications for inferring deficits in people with aphasia and the use of sublexical tasks to assess sublexical processing are also discussed. |
Monica L. Do; Elsi Kaiser Subjecthood and linear order in linguistic encoding: Evidence from the real-time production of wh-questions in English and Mandarin Chinese Journal Article In: Journal of Memory and Language, vol. 105, pp. 60–75, 2019. @article{Do2019, We use visual world eye-tracking to provide a first look into the real-time production of an under-researched but communicatively crucial construction – wh-questions. We investigate whether the transition from abstract message to highly-structured utterances (linguistic encoding) is driven by linear order (positional processing) or subjecthood assignment (functional processing). Experiment 1 decouples positional and functional processes by comparing production of English declaratives versus object wh-questions (‘Which nurses did the maids tickle?'). Experiment 2 compares the production of declaratives versus object wh-questions in Mandarin Chinese to investigate potential information-focus effects on linguistic encoding and tests whether Experiment 1's findings could be due to focus. Experiment 1 found that even though the articulation of a sentence is necessarily linear, speakers do not necessarily encode sentences in accordance with the linear order in which the words are uttered. Experiment 2 suggests that information-focus does not guide speakers' eye-movements during linguistic encoding. |
Denis Drieghe; Aaron Veldre; Gemma Fitzsimmons; Jane Ashby; Sally Andrews The influence of number of syllables on word skipping during reading revisited Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 2, pp. 616–621, 2019. @article{Drieghe2019, Fitzsimmons and Drieghe (Psychonomic Bulletin & Review, 18, 736–741, 2011) showed that a monosyllabic word was skipped more often than a disyllabic word during reading. This finding was interpreted as evidence that syllabic information was extracted from the parafovea early enough to influence word skipping. In the present, large-scale replication of this study, in which we additionally measured the reading, vocabulary, and spelling abilities of the participants, the effect of number of syllables on word skipping was not significant. Moreover, a Bayesian analysis indicated strong evidence for the absence of the effect. The individual differences analyses replicate previous observations showing that spelling ability uniquely predicts word skipping (but not fixation times) because better spellers skip more often. The results indicate that high-quality lexical representations allow the system to reach an advanced stage in the word-recognition process of the parafoveal word early enough to influence the decision of whether or not to skip the word, but this decision is not influenced by number of syllables. |
Linda Drijvers; Julija Vaitonytė; Asli Özyürek Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension Journal Article In: Cognitive Science, vol. 43, pp. 1–25, 2019. @article{Drijvers2019, Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gesture more as it might be more challenging for non-native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non-native compared to native listeners. 
Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners. |
Linda Drijvers; Mircea Plas; Asli Özyürek; Ole Jensen Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise Journal Article In: NeuroImage, vol. 194, pp. 55–67, 2019. @article{Drijvers2019a, Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. 
Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech. |
Grant Eckstein; Wesley Schramm; Madeline Noxon; Jenna Snyder Reading L1 and L2 writing: An eye-tracking study of TESOL rater behavior Journal Article In: The Electronic Journal for English as a Second Language, vol. 23, no. 1, pp. 1–24, 2019. @article{Eckstein2019, Researchers have found numerous differences in the approaches raters take to the complex task of essay rating including differences when rating native (L1) and non-native (L2) English writing. Yet less is known about raters' reading practices while scoring those essays. This small-scale study uses eye-tracking technology and reflective protocols to examine the reading behavior of TESOL teachers who evaluated university-level L1 and L2 writing. Results from the eye-tracking component indicate that the teachers read the rhetorical, organizational, and grammatical features of an L1 text more deliberately while skimming through and then returning to rhetorical features of an L2 text and initially skipping over many L2 grammatical structures. In reflective interviews, the teachers also reported more consensus on their approach to evaluating grammar and organization than word choice and rhetoric. While these findings corroborate prior research comparing the rating of L1 and L2 writing, they promise to expand our understanding of rating processes by reflecting the teachers' reading practices and attentional focus while rating. Moreover, the study demonstrates the potential for using eye-tracking research to unobtrusively investigate the reading behaviors involved in assessing L1 and L2 writing. |
C. Egan; Gary M. Oppenheim; Christopher Saville; Kristina Moll; Manon Wyn Jones Bilinguals apply language-specific grain sizes during sentence reading Journal Article In: Cognition, vol. 193, pp. 104018, 2019. @article{Egan2019, Languages differ in the consistency with which they map orthography to phonology, and a large body of work now shows that orthographic consistency determines the style of word decoding in monolinguals. Here, we characterise word decoding in bilinguals whose two languages differ in orthographic consistency, assessing whether they maintain two distinct reading styles or settle on a single ‘compromise' reading style. In Experiment 1, Welsh-English bilinguals read cognates and pseudowords embedded in Welsh and English sentences. Eye-movements revealed that bilinguals dynamically alter their decoding strategy according to the language context, including more fixations during lexical access for cognates in the more consistent orthography (Welsh) than in the less consistent orthography (English), and these effects were specific to word (as opposed to pseudoword) processing. In Experiment 2, we compared the same bilinguals' eye movements in the English sentence reading context to those of monolinguals. Bilinguals' eye-movement behaviour was very similar to monolinguals' when reading English, suggesting that their knowledge of the more consistent orthography (Welsh) did not alter their decoding style when reading in English. This study presents the first characterisation of bilingual decoding style in sentence reading. We discuss our findings in relation to connectionist reading models and models of bilingual visual word recognition. |
Nikola Anna Eger; Holger Mitterer; Eva Reinisch Learning a new sound pair in a second language: Italian learners and German glottal consonants Journal Article In: Journal of Phonetics, vol. 77, pp. 1–24, 2019. @article{Eger2019, The present study investigated Italian learners' production and perception of German /h/ and /ʔ/ – two sounds that lack obvious linguistic counterparts in Italian. Critically, of these sounds only /h/ is explicitly known to learners from instruction and orthography. We therefore asked whether this awareness would lead to better acquisition of /h/ than /ʔ/, and whether any differences would depend on the explicitness of the task. In production, learners of a medium proficiency level performed accurately in about 70% of the cases, with errors including sound deletions and substitutions. In spoken word recognition, two other learner groups of the same proficiency were hindered by sound deletions, but not by substitutions, although they were able to differentiate the sounds in an explicit goodness rating task. Overall, acquisition of /ʔ/ was similar to /h/, despite lack of awareness for this sound. The results suggest that learners have established one combined “glottal category” to which both sounds map in speech processing, while they may be better implemented in production. |
Sarah Eilers; Simon P. Tiffin-Richards; Sascha Schroeder The repeated name penalty effect in children's natural reading: Evidence from eye tracking Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 3, pp. 403–412, 2019. @article{Eilers2019a, We report data from an eye tracking experiment on the repeated name penalty effect in 9-year-old children and young adults. The repeated name penalty effect is informative for the study of children's reading because it allows conclusions about children's ability to direct attention to discourse-level processing cues during reading. We presented children and adults with simple three-sentence stories with a single referent, which was referred to by an anaphor—either a pronoun or a repeated name—downstream in the text. The anaphor was either near or far from the antecedent. We found a repeated name penalty effect in early processing for children as well as adults, suggesting that beginning readers are already susceptible to discourse-level expectations of anaphora during reading. Furthermore, children's reading was more influenced by the distance of anaphor and antecedent than adults', which we attribute to differences in reading fluency and the resulting cognitive load during reading. |
Sarah Eilers; Simon P. Tiffin-Richards; Sascha Schroeder Gender cue effects in children's pronoun processing: A longitudinal eye tracking study Journal Article In: Scientific Studies of Reading, vol. 23, no. 6, pp. 509–522, 2019. @article{Eilers2019, Children struggle with the resolution of pronouns during reading, but little is known about the sources of their difficulties. We conducted a longitudinal eye tracking experiment with 70 children in the final years of primary school. The children read sentences with a contextual resolution preference in which gender was either an informative resolution cue for the pronoun or not. We were interested in children's processing of the pronoun and their resolution preferences, as well as the effects of individual differences of Grade level and reading skill. Children's resolution ability improved with age, and good readers were more accurate than poor readers. In the eye-tracking measures, we found strong individual differences related to reading skill: Children with good reading skill took more time to read the pronoun region when pronoun gender was informative, suggesting that good readers make better use of the available information at the pronoun than poor readers. |
Susanne Eisenhauer; Christian J. Fiebach; Benjamin Gagl Context-based facilitation in visual word recognition: Evidence for visual and lexical but not pre-lexical contributions Journal Article In: eNeuro, vol. 6, no. 2, pp. 1–25, 2019. @article{Eisenhauer2019, Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels, were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword (PW) familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N = 38 human participants), while experiment 2 assessed behavioral facilitation effects (N = 24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms, in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in PWs familiarized through training) did not significantly enhance or reduce context effects. We conclude that context-based facilitation is achieved within visual and lexical processing levels. 
Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented using a predictive coding mechanism. |
Isabel Orenes; Juan A. García-Madruga; Isabel Gómez-Veiga; Orlando Espino; Ruth M. J. Byrne The comprehension of counterfactual conditionals: Evidence from eye-tracking in the visual world paradigm Journal Article In: Frontiers in Psychology, vol. 10, pp. 1172, 2019. @article{Orenes2019, Three experiments tracked participants' eye-movements to examine the time course of comprehension of the dual meaning of counterfactuals, such as "if there had been oranges then there would have been pears." Participants listened to conditionals while looking at images in the visual world paradigm, including an image of oranges and pears that corresponds to the counterfactual's conjecture, and one of no oranges and no pears that corresponds to its presumed facts, to establish at what point in time they consider each one. The results revealed striking individual differences: some participants looked at the negative image and the affirmative one, and some only at the affirmative image. The first experiment showed that participants who looked at the negative image increased their fixation on it within half a second. The second experiment showed they do so even without explicit instructions, and the third showed they do so even for printed words. |
Aisling E. O'Sullivan; Chantelle Y. Lim; Edmund C. Lalor In: European Journal of Neuroscience, vol. 50, no. 8, pp. 3282–3295, 2019. @article{OSullivan2019, Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV—facing the attended speaker while ignoring another speaker represented by the audio-only stream and (b) Incongruent AV (eavesdropping)—attending the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto-occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that with only six near-ear electrodes we can successfully determine the attended speech. This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy. |
Irene Ablinger; Anne Friede; Ralph Radach A combined lexical and segmental therapy approach in a participant with pure alexia Journal Article In: Aphasiology, vol. 33, no. 5, pp. 579–605, 2019. @article{Ablinger2019, Background: Pure alexia is characterized by effortful left-to-right word processing, leading to a pathological length effect during reading aloud. Results of previous therapy outcome research suggest that patients with pure alexia tend to develop and maintain an adaptive sequential reading strategy in an effort to cope with their severe deficit and at least master a slow and laborious reading mode. Aim: We applied a theory-based, strategy-driven and eye-movement-supported therapy approach to HC, a participant with pure alexia. Our intention was to help optimize his very persistent sequential reading strategy, while concurrently facilitating fast parallel word processing. Methods & Procedures: Therapy included a systematic combination of segmental and holistic reading as well as text reading components. Exposure duration and font size were gradually reduced. Following a single case experimental reading design with follow-up testing, we assessed reading performance at four testing points focusing on analyses of linguistic errors and word viewing patterns. Outcomes & Results: With respect to reading accuracy and oculomotor measures, the combined therapy approach resulted in sustained training effects evident in significant improvements for trained and untrained word materials. Text reading intervention only led to therapy-specific improvements. Spatio-temporal analyses of eye fixation positions revealed an increasingly efficient adaptive strategy to compensate for reading difficulties. However, spatial changes in fixation position were less pronounced at T4, suggesting some diminishing of success at follow-up.
Conclusions: Our results underscore the need for a continuous systematic training of underlying reading strategies in pure alexia to develop and sustain more economic reading procedures. |
Luis Aguado; Karisa B. Parkington; Teresa Dieguez-Risco; José A. Hinojosa; Roxane J. Itier Joint modulation of facial expression processing by contextual congruency and task demands Journal Article In: Brain Sciences, vol. 9, pp. 1–20, 2019. @article{Aguado2019, Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions. |
Scott P. Ardoin; Katherine S. Binder; Andrea M. Zawoyski; Eloise Nimocks; Tori E. Foster Measuring the behavior of reading comprehension test takers: What do they do, and should they do it? Journal Article In: Reading Research Quarterly, vol. 54, no. 4, pp. 507–529, 2019. @article{Ardoin2019, The authors sought to further the understanding of reading processes and their links to comprehension using two reading tasks for elementary-grade students. One hundred sixty-six students in grades 2–5 were randomly assigned to one of two conditions: reading with questions presented concurrently with text or reading with questions presented after reading the text (with the text unavailable when answering questions). Eye movement data suggested different processes for each task: Rereading occurred and more time was spent on higher level processing measures in the with-text condition, and in particular, those who did not reread had more accurate answers than those who engaged in rereading. Measurement of students' precision in returning directly to the portion of the passage with information corresponding to a question also predicted students' response accuracy. |
Vahid Aryadoust; Bee Hoon Ang Exploring the frontiers of eye tracking research in language studies: A novel co-citation scientometric review Journal Article In: Computer Assisted Language Learning, 2019. @article{Aryadoust2019, Eye tracking technology has become an increasingly popular methodology in language studies. Using data from 27 journals in language sciences indexed in the Social Science Citation Index and/or Scopus, we conducted an in-depth scientometric analysis of 341 research publications together with their 14,866 references between 1994 and 2018. We identified a number of countries, researchers, universities, and institutes with large numbers of publications in eye tracking research in language studies. We further discovered a mixed multitude of connected research trends that have shaped the nature and development of eye tracking research. Specifically, a document co-citation analysis revealed a number of major research clusters, their key topics, connections, and bursts (sudden citation surges). For example, the foci of clusters #0 through #5 were found to be perceptual learning, regressive eye movement(s), attributive adjective(s), stereotypical gender, discourse processing, and bilingual adult(s). The content of all the major clusters was closely examined and synthesized in the form of an in-depth review. Finally, we grounded the findings within a data-driven theory of scientific revolution and discussed how the observed patterns have contributed to the emergence of new trends. As the first scientometric investigation of eye tracking research in language studies, the present study offers several implications for future research that are discussed. |
Mahsa Barzy; Jo Black; David Williams; Heather J. Ferguson Autistic adults anticipate and integrate meaning based on the speaker's voice: Evidence from eye-tracking and event-related potentials Journal Article In: Journal of Experimental Psychology: General, vol. 149, no. 6, pp. 1097–1115, 2019. @article{Barzy2019, Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults and tested their time course in 2 preregistered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker's voice and message were either consistent or inconsistent (e.g., "When we go shopping, I usually look for my favorite wine," spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g., wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias toward the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker's voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group compared to the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. 
Overall, contrary to research that has characterized autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information and were comparably sensitive to speaker-meaning inconsistency effects. |
Benjamin T. Carter; Brent Foster; Nathan M. Muncy; Steven G. Luke Linguistic networks associated with lexical, semantic and syntactic predictability in reading: A fixation-related fMRI study Journal Article In: NeuroImage, vol. 189, pp. 224–240, 2019. @article{Carter2019, The ability to make predictions is thought to facilitate language processing. During language comprehension such predictions appear to occur at multiple levels of linguistic representations (i.e. semantic, syntactic and lexical). The neural mechanisms that define the network sensitive to linguistic predictability have yet to be adequately defined. The purpose of the present study was to explore the neural network underlying predictability during the normal reading of connected text. Predictability values for different linguistic information were obtained from a pre-existing text corpus. Forty-one subjects underwent simultaneous eye-tracking and fMRI scans while reading these select paragraphs. Lexical, semantic, and syntactic predictability measures were then correlated with functional activation associated with fixation onset on the individual words. Activation patterns showed both positive and negative correlations to lexical, semantic, and syntactic predictabilities. Conjunction analysis revealed regions specific to or shared between each type of predictability. The regions associated with the different predictability measures were largely separate. Results suggest that most linguistic predictions are graded in nature, activating components of the existing language system. A number of regions were also found to be uniquely associated with full lexical predictability, most notably the anterior temporal lobe and the inferior posterior temporal cortex. |
Xianglan Chen; Fang Li The length of preceding context influences metonymy processing Journal Article In: Review of Cognitive Linguistics, vol. 17, no. 1, pp. 243–256, 2019. @article{Chen2019f, Earlier studies have shown that conceptually supportive context is an important factor in the comprehension of metaphors (Inhoff, Lima, & Carroll, 1984; Ortony, Schallert, Reynolds, & Antos, 1978). However, little empirical evidence has been found so far regarding contextual effects on metonymy processing (Lowder & Gordon, 2013). Implementing an eye-tracking experiment with Chinese materials, the present paper investigated whether and how preceding contextual information affects the processing of metonymy. The results show that, given a shorter context, readers take longer to interpret unfamiliar metonymies than to interpret them literally. However, the processing disparity between metonymic comprehension and literal comprehension disappears when longer supportive information is available in the preceding context. These results are analogous to those found for metaphors and familiar metonymies, supporting the parallel model of language processing. In addition, our results suggest that the presence of supportive preceding context facilitates the processing of unfamiliar metonymies more than it does the literal controls. |
Cristiano Chesi; Paolo Canal Person features and lexical restrictions in Italian clefts Journal Article In: Frontiers in Psychology, vol. 10, pp. 2105, 2019. @article{Chesi2019, In this paper, we discuss the results of two experiments, one off-line (acceptability judgment) and the other on-line (eye-tracking), targeting Object Cleft (OC) constructions. In both experiments, we used the same materials presenting a manipulation on person features: second person plural pronouns and plural definite determiners alternate in introducing a full NP (“it was [DP1 the/you [NP bankers]]i that [DP2 the/you [NP lawyers]] have avoided _i at the party”) in a language, Italian, with overt person (and number) subject-verb agreement. In the results, we first observed that the advantage of the bare pronominal forms reported in previous experiments (Gordon et al., 2001; Warren and Gibson, 2005, a.o.) is lost when the full NP (the “lexical restriction” in Belletti and Rizzi, 2013) is present. Second, an advantage for the mismatch condition, Art1-Pro2, in which the focalized subject is introduced by the determiner and the OC subject by the pronoun, as opposed to the matching Pro1-Pro2 condition, is observed, both off-line (higher acceptability and accuracy in answering comprehension questions after eye-tracking) and on-line (e.g., smaller number of regressions from the subject region); third, we found a relevant difference between acceptability and accuracy in comprehension questions: despite similar numerical patterns in both off-line measures, the difference across conditions in accuracy is mostly not significant, while it is significant in acceptability. Moreover, while the matching condition Pro1-Pro2 is perceived as nearly ungrammatical (far below the mean acceptability across conditions), the accuracy in comprehension is still high (close to 80%). 
To account for these facts, we compare different formal competence and processing models that predict difficulties in OC constructions: similarity-based (Gordon et al., 2001, a.o.), memory load (Gibson, 1998), and intervention-based (Friedmann et al., 2009) accounts are compared to processing-oriented ACT-R-based predictions (Lewis and Vasishth, 2005) and to top-down Minimalist derivations (Chesi, 2015). We conclude that most of these approaches fail to make predictions that reconcile the competence and performance perspectives in a coherent way, with the exception of the top-down model, which correctly predicts both the on-line and the off-line main effects obtained. |
Agnieszka Chmiel; Agnieszka Lijewska Syntactic processing in sight translation by professional and trainee interpreters Journal Article In: Target, vol. 31, no. 3, pp. 378–397, 2019. @article{Chmiel2019, The study examines how professional and trainee interpreters process syntax in sight translation. We asked 24 professionals and 15 trainees to sight translate sentences with subject-relative clauses and more difficult object-relative clauses while measuring translation accuracy, eye movements and translation durations. We found that trainees took longer to achieve similar translation accuracy as professionals and viewed the source text less than professionals to avoid interference, especially when reading more difficult object-relative sentences. Syntactic manipulation modulated translation and viewing times: participants took longer to translate object-relative sentences but viewed them less in order to avoid interference in target language reformulations. To the best of our knowledge, this is the first study to show that reading measures in sight translation should be analysed together with translation times to explain complex reading patterns. It also proposes a new measure, percentage of dwell time, as an index of interference avoidance. |
Jürgen Cholewa; Isabel Neitzel; Annika Bürsgens; Thomas Günther Online-processing of grammatical gender in noun-phrase decoding: An eye-tracking study with monolingual German 3rd and 4th graders Journal Article In: Frontiers in Psychology, vol. 10, pp. 2586, 2019. @article{Cholewa2019, Like many other languages, German employs a linguistic category called “grammatical gender.” In gender-marking languages each noun is assigned to a particular gender-class (in German: masculine, feminine or neuter) and other words in a sentence which are grammatically controlled by the noun are marked by particular morphemes according to the noun's gender feature – so-called gender agreement. Within psycholinguistic theories of language comprehension, it is often assumed that gender agreement might help to predict the continuation of a sentence on grammatical grounds and to reduce the lexical search space for the next words emerging within the speech signal. Thus, gender agreement relations may provide a means to make the comprehension process more effective and targeted. The aim of the current study was to assess whether monolingual German 3rd and 4th grade primary school children make use of gender agreement in online auditory comprehension and whether different gender cues interact with each other and with semantic information. A language-picture matching task was conducted in which 32 children looked at two pictures while listening to a noun phrase. Due to features of the German gender system, the target picture corresponding with the noun phrase could be predicted shortly after stimulus onset on account of gender agreement relations. The predictive impact of grammatical gender agreement on noun-phrase decoding was investigated by measuring the time course of eye-movements onto the target and distractor pictures. 
The results confirm and extend previous findings that gender plays a role in predictive online comprehension of gender-marking languages like German, and that even primary school children are able to make use of this grammatical device. |
Wing Yee Chow; Yangzi Zhou Eye-tracking evidence for active gap-filling regardless of dependency length Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 6, pp. 1297–1307, 2019. @article{Chow2019, Previous work on real-time sentence processing has established that comprehenders build and interpret filler-gap dependencies without waiting for unambiguous evidence about the actual location of the gap ("active gap-filling") as long as such dependencies are grammatically licensed. However, this generalisation was called into question by recent findings in a self-paced reading experiment by Wagers and Phillips, which may be taken to show that comprehenders do not interpret the filler at the posited gap when the dependency spans a longer distance. In the present study, we aimed to replicate these findings in an eye-tracking experiment with better controlled materials and increased statistical power. Crucially, we found clear evidence for active gap-filling across all levels of dependency length. This diverges from Wagers and Phillips's findings but is in line with the long-standing generalisation that comprehenders build and interpret filler-gap dependencies predictively as long as they are grammatically licensed. We found that the effect became smaller in the long dependency conditions in the post-critical region, which suggests the weaker effect in the long dependency conditions may have been undetected in Wagers and Phillips's study due to insufficient statistical power and/or the use of a self-paced reading paradigm. |
Eunjin Chun; Edith Kaan L2 prediction during complex sentence processing Journal Article In: Journal of Cultural Cognitive Science, vol. 3, no. 2, pp. 203–216, 2019. @article{Chun2019, Recent studies have found that proficient second language (L2) listeners are able to predict upcoming linguistic information to the same extent as first language (L1) listeners during simple sentence processing, particularly when semantic cues are given and/or few cognitive resources are required for language processing. These findings may suggest that L2 listeners use the same mechanisms as L1 listeners for prediction. Yet, it has not been fully specified under which conditions L2 listeners can use predictive mechanisms. To address this issue, we investigated whether advanced L2 listeners make predictions while processing more complex constructions that are cognitively more taxing. Specifically, we investigated prediction in sentences containing a relative clause that can modify either of two noun phrases. In an eye-tracking study using a visual world paradigm, L2 learners listened to sentences containing a semantically biasing verb or a neutral one (e.g., ‘‘I know the friend of the dancer that will open/get the present''). We measured L2 listeners' prediction by comparing the fixations to target objects (e.g., present among non-openable objects) between the two experimental conditions. Results showed that L2 listeners, similar to L1 listeners, made significantly more anticipatory looks to the targets in the semantically biasing condition than in the neutral condition, though their prediction started a bit (180 ms) later than L1 listeners' prediction. These findings suggest that L2 speakers can use prediction mechanisms even during complex sentence processing and provide further evidence for the claim that there is no fundamental difference between L1 and L2 speakers, but that cognitive resources matter for prediction. |
Daniel R. Coates; Jean-Baptiste Bernard; Susana T. L. Chung Feature contingencies when reading letter strings Journal Article In: Vision Research, vol. 156, pp. 84–95, 2019. @article{Coates2019, Many models posit the use of distinctive spatial features to recognize letters of the alphabet, a fundamental component of reading. It has also been hypothesized that when letters are in close proximity, visual crowding may cause features to mislocalize between nearby letters, causing identification errors. Here, we took a data-driven approach to investigate these aspects of textual processing. Using data collected from subjects identifying each letter in thousands of lower-case letter trigrams presented in the peripheral visual field, we found characteristic error patterns in the results suggestive of the use of particular spatial features. Distinctive features were seldom entirely missed, and we found evidence for errors due to doubling, masking, and migration of features. Dependencies both amongst neighboring letters and in the responses revealed the contingent nature of processing letter strings, challenging the most basic models of reading that ignore either crowding or featural decomposition. |
Carla Contemori; Paola E. Dussias Prediction at the discourse level in Spanish-English bilinguals: An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 10, pp. 956, 2019. @article{Contemori2019, In two experiments, we examine English monolinguals' and Spanish-English bilinguals' ability to predict an upcoming pronoun referent based on the Implicit Causality (IC) bias of the verb. In an eye-tracking experiment, the monolingual data show anticipation of the upcoming referent for NP1-bias verbs. For bilinguals, the same effect is found, showing that bilinguals are not slower than monolinguals at processing the information associated with the IC of the verb. In an off-line experiment, both groups showed knowledge of IC bias information for the verbs used in the eye-tracking experiment. Based on the findings of the two experiments, we show that highly proficient bilinguals make online and off-line predictions based on IC verb information similar to those of monolingual speakers. |
Ashley Farris-Trimble; Anne-Michelle Tessier The effect of allophonic processes on word recognition: Eye-tracking evidence from Canadian raising Journal Article In: Language, vol. 95, no. 1, pp. 136–160, 2019. @article{FarrisTrimble2019, Whether lexical representations are stored as abstract forms or exemplar tokens is the focus of much debate in both the phonological and word-recognition literature. This research report examines the recognition of words that have undergone Canadian raising and/or intervocalic flapping. Two eye-tracking experiments suggest that listeners are slower to fixate words that have undergone one or more phonological processes within their own raising dialect, supporting the idea that they must calculate a mapping from surface word forms to more abstract representations. Implications for representational and phonological theories are discussed. |
Heather J. Ferguson; Jo Black; David Williams Distinguishing reality from fantasy in adults with autism spectrum disorder: Evidence from eye movements and reading Journal Article In: Journal of Memory and Language, vol. 106, pp. 95–107, 2019. @article{Ferguson2019, Understanding fictional events requires one to distinguish reality from fantasy, and thus engages high-level processes including executive functions and imagination, both of which are impaired in autism spectrum disorder (ASD). We examined how adults with and without ASD make sense of reality-violating fantasy narratives by testing real-time understanding of counterfactuals. Participants were eye-tracked as they read narratives that depicted novel counterfactual scenarios that violate reality (e.g. “If margarine contained detergent, Mum could use margarine in her washing/baking” Experiment 1), or counterfactual versions of known fictional worlds (e.g. “If Harry Potter had lost all his magic powers, he would use his broom to sweep/fly” Experiment 2). Results revealed anomaly detection effects in the early moments of processing (immediately in Experiment 1, and from the post-critical region in Experiment 2), which were not modulated by group. We discuss these findings in relation to the constraints from real-world and fantasy contexts that compete to influence language comprehension, and identify a dissociation between ToM impairments and counterfactual processing abilities. |
Eva Findelsberger; Florian Hutzler; Stefan Hawelka Spill the load: Mixed evidence for a foveal load effect, reliable evidence for a spillover effect in eye-movement control during reading Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 5, pp. 1442–1453, 2019. @article{Findelsberger2019, It has been hypothesized that the processing difficulty of the fixated word (i.e., “foveal load”) modulates the amount of parafoveal preprocessing of the next word. Evidence for the hypothesis has been provided by the application of parafoveal masks within the boundary paradigm. Other studies that applied alternative means of manipulating the parafoveal preview (i.e., visual degradation) could not replicate the effect of foveal load. The present study examined the effect of foveal load by directly comparing the application of parafoveal masks (Exp. 1) with the alternative manipulation of visually degrading the parafoveal preview (Exp. 2) in adult readers. Contrary to expectation, we did not find the foveal-load interaction in the first experiment with traditional letter masks. We did, however, find the expected interaction in the second experiment with visually degraded previews. Both experiments revealed a spillover effect indicating that the processing of a word is not (always) fully completed when the reader already fixates the next word (i.e., processing “spills over” to the next word). The implications for models of eye-movement control in reading are discussed. |
Gemma Fitzsimmons; Mark J. Weal; Denis Drieghe The impact of hyperlinks on reading text Journal Article In: PLoS ONE, vol. 14, no. 2, pp. e0210900, 2019. @article{Fitzsimmons2019, There has been debate about whether blue hyperlinks on the Web cause disruption to reading. A series of eye tracking experiments were conducted to explore if coloured words in black text had any impact on reading behaviour outside and inside a Web environment. Experiment 1 and 2 explored the saliency of coloured words embedded in single sentences and the impact on reading behaviour. In Experiment 3, the effects of coloured words/hyperlinks in passages of text in a Web-like environment was explored. Experiment 1 and 2 showed that multiple coloured words in text had no negative impact on reading behaviour. However, if the sentence featured only a single coloured word, a reduction in skipping rates was observed. This suggests that the visual saliency associated with a single coloured word may signal to the reader that the word is important, whereas this signalling is reduced when multiple words are coloured. In Experiment 3, when reading passages of text containing hyperlinks in a Web environment, participants showed a tendency to re-read sentences that contained hyperlinked, uncommon words compared to hyperlinked, common words. Hyperlinks highlight important information and suggest additional content, which for more difficult concepts, invites rereading of the preceding text. |
Kathleen C. Fraser; Kristina Lundholm Fors; Marie Eckerström; Fredrik Öhman; Dimitrios Kokkinakis Predicting MCI status from multimodal language data using cascaded classifiers Journal Article In: Frontiers in Aging Neuroscience, vol. 11, pp. 205, 2019. @article{Fraser2019, Recent work has indicated the potential utility of automated language analysis for the detection of mild cognitive impairment (MCI). Most studies combining language processing and machine learning for the prediction of MCI focus on a single language task; here, we consider a cascaded approach to combine data from multiple language tasks. A cohort of 26 MCI participants and 29 healthy controls completed three language tasks: picture description, reading silently, and reading aloud. Information from each task is captured through different modes (audio, text, eye-tracking, and comprehension questions). Features are extracted from each mode, and used to train a series of cascaded classifiers which output predictions at the level of features, modes, tasks, and finally at the overall session level. The best classification result is achieved through combining the data at the task level (AUC = 0.88). |
Cheryl Frenck-Mestre; Seung Kyung Kim; Hyeree Choo; Alain Ghio; Julia Herschensohn; Sungryong Koh Look and listen! The online processing of Korean case by native and non-native speakers Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 3, pp. 385–404, 2019. @article{FrenckMestre2019, We used a forced choice visual world paradigm to examine when listeners integrate case while processing Korean, in native speakers and in two groups of adult L2 learners. The L2 learners varied in the typological proximity between their L1 (French or Kazakh) and Korean. Processing was compared for canonical (SOV) and scrambled (OSV) word order. Nominal case marking was either accusative (NOM-ACC) or dative (NOM-DAT). Native Koreans showed anticipatory looks to the correct image, regardless of word order or case. Neither L2 group showed anticipatory looks to the correct image prior to the final auditory verb. Both L2 groups demonstrated superior performance for the dative. However, the Kazakh group showed better capacities to correctly interpret utterances based on case than the French group. Our results provide evidence of the incremental nature of processing in Korean for native speakers and, for L2 learners, the effect of L1–L2 overlap and specific case marking. |
Lee Friedman; Oleg V. Komogortsev Assessment of the effectiveness of 7 biometric feature normalization techniques Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 14, no. 10, pp. 2528–2536, 2019. @article{Friedman2019, The importance of normalizing biometric features or matching scores is understood in the multimodal biometric case, but there is less attention to the unimodal case. Prior reports assess the effectiveness of normalization directly on biometric performance. We propose that this process logically comprises two independent steps: (1) methods to equalize the effect of each biometric feature on the similarity scores calculated from all the features together and (2) methods of weighting the normalized features to optimize biometric performance. In this report, we address step 1 only and focus exclusively on normally distributed features. We show how differences in the variance of features lead to differences in the strength of the influence of each feature on the similarity scores produced from all the features. Since these differences in variance have nothing to do with importance in the biometric sense, it makes no sense to allow them to have greater weight in the assessment of biometric performance. We employed two types of features: (1) real eye-movement features and (2) synthetic features. We compare six variance normalization methods (histogram equalization, L1-normalization, median normalization, z-score normalization, min-max normalization, and L-infinity normalization) and one distance metric (Mahalanobis distance) in terms of how well they reduce the impact of the variance differences. The effectiveness of different techniques on real data depended on the strength of the inter-correlation of the features. For weakly correlated real features and synthetic features, histogram equalization was the best method followed by L1 normalization. |
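The abstract above contrasts several variance-normalization methods. As a generic illustration of the underlying idea (not the paper's actual pipeline or data), z-score and min-max normalization can be sketched on synthetic features with mismatched variances:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "biometric" features with very different variances:
# unnormalized, the high-variance column dominates any distance-based score.
features = np.column_stack([
    rng.normal(0, 1, 100),    # low-variance feature
    rng.normal(0, 50, 100),   # high-variance feature
])

def zscore(x):
    # Equalize each feature's influence: remove mean, scale to unit SD.
    return (x - x.mean(axis=0)) / x.std(axis=0)

def minmax(x):
    # Rescale each feature to the [0, 1] range.
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

z = zscore(features)
m = minmax(features)

# After z-scoring, both columns have unit variance, so neither
# feature dominates a Euclidean similarity score.
print(np.round(z.std(axis=0), 3))    # -> [1. 1.]
print(m.min(axis=0), m.max(axis=0))  # columns span [0, 1]
```

The paper's point is that this equalization step is logically separate from any subsequent biometric weighting of the features.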
Isidora Gatarić The cognitive processing of derived nouns with ambiguous suffixes: Behavioral and eye-movement study Journal Article In: Primenjena Psihologija, vol. 12, no. 1, pp. 85–104, 2019. @article{Gataric2019, The primary aim of this research was to investigate whether suffix ambiguity affects the lexical processing of derived nouns in Serbian. In Experiment 1, derived nouns were presented to participants in isolation in a visual lexical decision task. Because sentence context is important for lexical processing, Experiment 2 was designed as an eye-movement study using sentences (containing the derived nouns from Experiment 1) as stimuli. To the best of our knowledge, no similar experimental study has previously been conducted in Serbian, so this study represents the first attempt to investigate this phenomenon in that language. Data from both experiments were analyzed with the same statistical approach, Generalized Additive Mixed Models (GAMMs). The results of all GAMM analyses suggested that suffix ambiguity did not affect the lexical processing of derived nouns in Serbian, regardless of whether the nouns were displayed in isolation or in sentence context. The observed results support the a-morphous perspective on morpho-lexical processing, as well as insights from Distributed Morphology in theoretical linguistics. |
Martina Micai; Mila Vulchanova; David Saldaña Do individuals with autism change their reading behavior to adapt to errors in the text? Journal Article In: Journal of Autism and Developmental Disorders, vol. 49, pp. 4232–4243, 2019. @article{Micai2019, Reading monitoring is poorly explored, but it may have an impact on well-documented reading comprehension difficulties in autism. This study explores reading monitoring through the impact of instructions and different error types on reading behavior. Individuals with autism and matched controls read correct sentences and sentences containing orthographic and semantic errors. Prior to the task, participants were given instructions either to focus on semantic or orthographic errors. Analysis of eye movements showed that the group with autism, unlike controls, was less influenced by error type in the regressions-out-to-error measure, showing less change in eye-movement behavior between error types. Individuals with autism might find it more difficult to adapt their reading strategies to various reading materials and task demands. |
Evelyn Milburn; Tessa Warren Idioms show effects of meaning relatedness and dominance similar to those seen for ambiguous words Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 2, pp. 591–598, 2019. @article{Milburn2019, Does the language comprehension system resolve ambiguities for single- and multiple-word units similarly? We investigate this question by examining whether two constructs with robust effects on ambiguous word processing – meaning relatedness and meaning dominance – have similar influences on idiom processing. Eye tracking showed that: (1) idioms with more related figurative and literal meanings were read faster, paralleling findings for ambiguous words, and (2) meaning relatedness and meaning dominance interacted to drive eye movements on idioms just as they do on polysemous ambiguous words. These findings are consistent with a language comprehension system that resolves ambiguities similarly regardless of literality or the number of words in the unit. |
Jonathan Mirault; Joshua Snell; Jonathan Grainger Reading without spaces revisited: The role of word identification and sentence-level constraints Journal Article In: Acta Psychologica, vol. 195, pp. 22–29, 2019. @article{Mirault2019, The present study examined the relative contribution of bottom-up word identification and top-down sentence-level constraints in facilitating the reading of text printed without between-word spacing. We compared reading of grammatically correct sentences and shuffled versions of the same words presented both with normal spacing and without spaces. We found that reading was hampered by removing sentence structure as well as by removing spaces. A significantly greater impact of sentence structure when reading unspaced text was found in probe word identification accuracies and total viewing times per word, whereas the impact of sentence structure on the probability of making a regressive eye movement was greater when reading normally spaced text. Crucially, we also found that the length of the currently fixated word determined the amplitude of forward saccades leaving that word during the reading of unspaced text. We conclude that the relative ease with which skilled readers can read unspaced text is due to a combination of an increased use of bottom-up word identification in guiding the timing and targeting of eye movements, plus an increased interactivity between word identification and sentence level processing. |
Jonathan Mirault; Joshua Snell; Jonathan Grainger Reading without spaces: The role of precise letter order Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 3, pp. 846–860, 2019. @article{Mirault2019a, Prior research points to efficient identification of embedded words as a key factor in facilitating the reading of text printed without spacing between words. Here we further tested the primary role of bottom-up word identification by altering this process with a letter transposition manipulation. In two experiments, we examined silent reading and reading aloud of normal sentences and sentences containing words with letter transpositions, in both normally spaced and unspaced conditions. We predicted that letter transpositions should be particularly harmful for reading unspaced text. In line with our prediction, the majority of our measures of reading fluency showed that unspaced text with letter transpositions was disproportionately difficult to read. These findings provide further support for the claim that reading text without between-word spacing relies principally on efficient bottom-up processing, enabling accurate word identification in the absence of visual cues to identify word boundaries. |
Jelena Mirković; Gerry T. M. Altmann Unfolding meaning in context: The dynamics of conceptual similarity Journal Article In: Cognition, vol. 183, pp. 19–43, 2019. @article{Mirkovic2019, How are relationships between concepts affected by the interplay between short-term contextual constraints and long-term conceptual knowledge? Across two studies we investigate the consequence of changes in visual context for the dynamics of conceptual processing. Participants' eye movements were tracked as they viewed a visual depiction of e.g. a canary in a birdcage (Experiment 1), or a canary and three unrelated objects, each in its own quadrant (Experiment 2). In both studies participants heard either a semantically and contextually similar “robin” (a bird; similar size), an equally semantically similar but not contextually similar “stork” (a bird; bigger than a canary, incompatible with the birdcage), or unrelated “tent”. The changing patterns of fixations across time indicated first, that the visual context strongly influenced the eye movements such that, in the context of a birdcage, early on (by word offset) hearing “robin” engendered more looks to the canary than hearing “stork” or “tent” (which engendered the same number of looks), unlike in the context of unrelated objects (in which case “robin” and “stork” engendered equivalent looks to the canary, and more than did “tent”). Second, within the 500 ms post-word-offset eye movements in both experiments converged onto a common pattern (more looks to the canary after “robin” than after “stork” and for both more than after “tent”). We interpret these findings as indicative of the dynamics of activation within semantic memory accessed via pictures and via words, and reflecting the complex interaction between systems representing context-independent and context-dependent conceptual knowledge driven by predictive processing. |
Holger Mitterer; Sahyang Kim; Taehong Cho The glottal stop between segmental and suprasegmental processing: The case of Maltese Journal Article In: Journal of Memory and Language, vol. 108, pp. 1–20, 2019. @article{Mitterer2019, Many languages mark vowel-initial words with a glottal stop. We show that this occurs in Maltese, even though the glottal stop also occurs as a phoneme in Maltese. As a consequence, words with and without an underlying (phonemic) glottal stop (e.g., a glottal stop-zero minimal pair qal /Ɂɑ:l/ vs. ghal /ɑ:l/ Engl., ‘he said'-‘because') can become homophonous in connected speech. We first tested the extent of this phonetic marking of vowel-initial words in a production experiment and found that even in fluent productions, about half of the vowel-initial words are marked with an epenthetic glottal stop. The epenthetic glottal stop is more likely to occur when the preceding word is longer, showing a kind of preboundary lengthening at a phrase-level prosodic boundary. A subsequent perception study (Experiment 2) using a two-alternative forced-choice task with a minimal pair of a glottal stop-initial and a vowel-initial word indicated that listeners are sensitive to the durationally conditioned prosodic context before the test word, and they are more likely to perceive a vowel-initial word when the preceding word is lengthened. An additional eye-tracking study (Experiment 3) using onset-overlap pairs (e.g., qafla /Ɂɑflɑ/ - afda, /ɑfdɑ/ → [Ɂɑfda], Engl., ‘to trust' - ‘chord') showed no early influence of prosodic cues on segmental processing. But a gating experiment (Experiment 4) replicated the prosodic effect observed in Experiment 2. Taken together, our results indicate an interaction between prosodic processing and segmental processing that comes into effect relatively late in speech processing. |
Jana Annina Müller; Dorothea Wendt; Birger Kollmeier; Stefan Debener; Thomas Brand Effect of speech rate on neural tracking of speech Journal Article In: Frontiers in Psychology, vol. 10, pp. 449, 2019. @article{Mueller2019, Speech comprehension requires effort in demanding listening situations. Selective attention may be required for focusing on a specific talker in a multi-talker environment, may enhance effort by requiring additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener's neural response. The aim of the study was to investigate the relation of listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and using two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants' task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate showed a strong influence on subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence both measures differently. Consequently, the amount of listening effort is not clearly represented in the neural tracking. |
Christian A. Navarro-Torres; Dalia L. Garcia; Vrinda Chidambaram; Judith F. Kroll Cognitive control facilitates attentional disengagement during second language comprehension Journal Article In: Brain Sciences, vol. 9, pp. 1–23, 2019. @article{NavarroTorres2019, Bilinguals learn to resolve conflict between their two languages and that skill has been hypothesized to create long-term adaptive changes in cognitive functioning. Yet, little is known about how bilinguals recruit cognitive control to enable efficient use of one of their languages, especially in the less skilled and more effortful second language (L2). Here we examined how real-time cognitive control engagement influences L2 sentence comprehension (i.e., conflict adaptation). We tested a group of English monolinguals and a group of L2 English speakers using a recently-developed cross-task adaptation paradigm. Stroop sequences were pseudo-randomly interleaved with a visual-world paradigm in which participants were asked to carry out spoken instructions that were either syntactically ambiguous or unambiguous. Consistent with previous research, eye-movement results showed that Stroop-related conflict improved the ability to engage correct-goal interpretations, and disengage incorrect-goal interpretations, during ambiguous instructions. Such cognitive-to-language modulations were similar in both groups, but only in the engagement piece. In the disengagement portion, the modulation emerged earlier in bilinguals than in monolinguals, suggesting group differences in attentional disengagement following cognitive control recruitment. Additionally, incorrect-goal eye-movements were modulated by individual differences in working memory, although differently for each group, suggesting an involvement of both language-specific and domain-general resources. |
Gal Nitsan; Arthur Wingfield; Limor Lavie; Boaz M. Ben-David Differences in working memory capacity affect online spoken word recognition: Evidence from eye movements Journal Article In: Trends in Hearing, vol. 23, 2019. @article{Nitsan2019, Individual differences in working memory capacity have been gaining recognition as playing an important role in speech comprehension, especially in noisy environments. Using the visual world eye-tracking paradigm, a recent study by Hadar and coworkers found that online spoken word recognition was slowed when listeners were required to retain in memory a list of four spoken digits (high load) compared with only one (low load). In the current study, we recognized that the influence of a digit preload might be greater for individuals who have a more limited memory span. We compared participants with higher and lower memory spans on the time course for spoken word recognition by testing eye-fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results show that when a low load was imposed, differences in memory span had no effect on the time course of preferential fixations. However, with a high load, listeners with lower span were delayed by ~550 ms in discriminating target from sound-sharing competitors, relative to higher span listeners. This follows an assumption that the interference effect of a memory preload is not a fixed value, but rather, its effect is greater for individuals with a smaller memory span. Interestingly, span differences affected the timeline for spoken word recognition in noise, but not offline accuracy. This highlights the significance of using eye-tracking as a measure for online speech processing. Results further emphasize the importance of considering differences in cognitive capacity, even when testing normal hearing young adults. |
Henri Olkoniemi; Eerika Johander; Johanna K. Kaakinen The role of look-backs in the processing of written sarcasm Journal Article In: Memory & Cognition, vol. 47, no. 1, pp. 87–105, 2019. @article{Olkoniemi2019, Previous eye-tracking studies suggest that when resolving the meaning of sarcastic utterances in a text, readers often initiate fixations that return to the sarcastic utterance from subsequent parts of the text. We used a modified trailing mask paradigm to examine both the role of these look-back fixations in sarcasm comprehension and whether there are individual differences in how readers resolve sarcasm. Sixty-two adult participants read short paragraphs containing either a literal or a sarcastic utterance while their eye movements were recorded. The texts were presented using a modified trailing mask paradigm: sentences were initially masked with a string of x's and were revealed to the reader one at a time. In the normal reading condition, sentences remained visible on the screen when the reader moved on to the next sentence; in the masked condition, the sentences were replaced with a mask. Individual differences in working memory capacity (WMC) and the processing of emotional information were also measured. The results showed that readers adjusted their reading behavior when a mask prevented them from re-examining the text content. Interestingly, the readers' compensatory strategies depended on spatial WMC. Moreover, the results showed that the ability to process emotional information was related to less processing effort invested in resolving sarcasm. The present study suggests that look-backs are driven by a need to re-examine the text contents but that they are not necessary for the successful comprehension of sarcasm. The strategies used to resolve sarcasm are mediated by individual differences. |
Henri Olkoniemi; Viivi Strömberg; Johanna K. Kaakinen The ability to recognise emotions predicts the time-course of sarcasm processing: Evidence from eye movements Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 5, pp. 1212–1223, 2019. @article{Olkoniemi2019a, A core feature of sarcasm is that there is a discrepancy between the literal meaning of the utterance and the context in which it is presented. This means that a sarcastic statement embedded in a story introduces a break in local coherence. Previous studies have shown that sarcastic statements in written stories often elicit longer processing times than their literal counterparts, possibly reflecting the difficulty of integrating the statement into the story's context. In the present study, we examined how sarcastic statements are processed when the location of the local coherence break is manipulated by presenting the sarcastic dialogues either before or after contextual information. In total, 60 participants read short text paragraphs containing sarcastic or literal target statements, while their eye movements were recorded. Individual differences in ability to recognise emotions and working memory capacity were measured. The results suggest that longer reading times with sarcastic statements not only reflect local inconsistency but also attempt to resolve the meaning of the sarcastic statement. The ability to recognise emotions was reflected in eye-movement patterns, suggesting that readers who are poor at recognising emotions are slower at categorising the statement as sarcastic. Thus, they need more processing effort to resolve the sarcastic meaning. |
Akira Omaki; Zoe Ovans; Anthony Yacovone; Brian W. Dillon Rebels without a clause: Processing reflexives in fronted wh-predicates Journal Article In: Journal of Memory and Language, vol. 107, pp. 80–94, 2019. @article{Omaki2019, English reflexives like herself tend to associate with a structurally prominent local antecedent in online processing. However, past work has primarily investigated reflexives in canonical direct object positions. The present study investigates cataphoric reflexives in fronted wh-predicates (e.g., The mechanic that James hired predicted how annoyed with himself the insurance agent would be). Here, the reflexive is encountered in advance of its grammatical antecedent. We ask two questions. First, will readers engage an anaphoric (backwards-looking) or cataphoric (forwards-looking) search for an antecedent? Second, how similar is this process to the retrieval process for direct object reflexives? In two eye-tracking experiments, we found that readers initially interpret a cataphoric reflexive anaphorically, and tend to associate the reflexive with the more recently encountered antecedent. We propose that structural guidance for reflexive resolution occurs only when the necessary configurational syntactic information is available when the reflexive is encountered. |
2018 |
Clarissa Vries; W. Gudrun Reijnierse; Roel M. Willems Eye movements reveal readers' sensitivity to deliberate metaphors during narrative reading Journal Article In: Scientific Study of Literature, vol. 8, no. 1, pp. 135–164, 2018. @article{Vries2018, Metaphors occur frequently in literary texts. Deliberate Metaphor Theory (DMT; e.g., Steen, 2017) proposes that metaphors that serve a communicative function as metaphor are radically different from metaphors that do not have this function. We investigated differences in processing between deliberate and non-deliberate metaphors, compared to non-metaphorical words, in literary reading. Using the Deliberate Metaphor Identification Procedure (Reijnierse et al., 2018), we identified metaphors in two literary stories. Then, eye-tracking was used to investigate participants' (N = 72) reading behavior. Deliberate metaphors were read slower than non-deliberate metaphors, and both metaphor types were read slower than non-metaphorical words. These differences held when controlling for several psycholinguistic variables. Differences in reading behavior were related to individual differences in reading experience and in absorption and appreciation of the story. These results are in line with predictions from DMT and underline the importance of distinguishing between metaphor types in the experimental study of literary reading. |
Jocelyn R. Folk; Michael A. Eskenazi Eye-tracking to distinguish comprehension-based and oculomotor-based regressive eye movements during reading Journal Article In: Journal of Visualized Experiments, no. 140, pp. 1–6, 2018. @article{Folk2018, Regressive eye movements are eye movements that move backwards through the text and comprise approximately 10-25% of eye movements during reading. As such, understanding the causes and mechanisms of regressions plays an important role in understanding eye movement behavior. Inhibition of return (IOR) is an oculomotor effect that results in increased latency to return attention to a previously attended target versus a target that was not previously attended. Thus, IOR may affect regressions. This paper describes how to design materials to distinguish between regressions caused by comprehension-related and oculomotor processes; the latter is subject to IOR. The method allows researchers to identify IOR and control the causes of regressions. While the method requires tightly controlled materials and large numbers of participants and materials, it allows researchers to distinguish and control the types of regressions that occur in their reading studies. |
Hazel I. Blythe; Jonathan H. Dickins; Colin R. Kennedy; Simon P. Liversedge Phonological processing during silent reading in teenagers who are deaf/hard of hearing: An eye movement investigation Journal Article In: Developmental Science, vol. 21, pp. 1–19, 2018. @article{Blythe2018, There has been considerable variability within the literature concerning the extent to which deaf/hard of hearing individuals are able to process phonological codes during reading. Two experiments are reported in which participants' eye movements were recorded as they read sentences containing correctly spelled words (e.g., church), pseudohomophones (e.g., cherch), and spelling controls (e.g., charch). We examined both foveal processing and parafoveal pre‐processing of phonology for three participant groups—teenagers with permanent childhood hearing loss (PCHL), chronological age‐matched controls, and reading age‐matched controls. The teenagers with PCHL showed a pseudohomophone advantage from both directly fixated words and parafoveal preview, similar to their hearing peers. These data provide strong evidence for phonological recoding during silent reading in teenagers with PCHL. |
Tania S. Zamuner; Stephanie Strahm; Elizabeth Morin-Lessard; Michael P. A. Page Reverse production effect: Children recognize novel words better when they are heard rather than produced Journal Article In: Developmental Science, vol. 21, no. 4, pp. 1–13, 2018. @article{Zamuner2018, This research investigates the effect of production on 4.5‐ to 6‐year‐old children's recognition of newly learned words. In Experiment 1, children were taught four novel words in a produced or heard training condition during a brief training phase. In Experiment 2, children were taught eight novel words, and this time training condition was in a blocked design. Immediately after training, children were tested on their recognition of the trained novel words using a preferential looking paradigm. In both experiments, children recognized novel words that were produced and heard during training, but demonstrated better recognition for items that were heard. These findings are opposite to previous results reported in the literature with adults and children. Our results show that benefits of speech production for word learning are dependent on factors such as task complexity and the developmental stage of the learner. |
Signy Wegener; Hua-Chen Wang; Peter Lissa; Serje Robidoux; Kate Nation; Anne Castles Children reading spoken words: Interactions between vocabulary and orthographic expectancy Journal Article In: Developmental Science, vol. 21, no. 3, pp. 1–9, 2018. @article{Wegener2018, There is an established association between children's oral vocabulary and their word reading but its basis is not well understood. Here, we present evidence from eye movements for a novel mechanism underlying this association. Two groups of 18 Grade 4 children received oral vocabulary training on one set of 16 novel words (e.g., ‘nesh', ‘coib'), but no training on another set. The words were assigned spellings that were either predictable from phonology (e.g., nesh) or unpredictable (e.g., koyb). These were subsequently shown in print, embedded in sentences. Reading times were shorter for orally familiar than unfamiliar items, and for words with predictable than unpredictable spellings but, importantly, there was an interaction between the two: children demonstrated a larger benefit of oral familiarity for predictable than for unpredictable items. These findings indicate that children form initial orthographic expectations about spoken words before first seeing them in print. A video abstract of this article can be viewed at: https://youtu.be/jvpJwpKMM3E. |
Martin R. Vasilev; Timothy J. Slattery; Julie A. Kirkby; Bernhard Angele What are the costs of degraded parafoveal previews during silent reading? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 44, no. 3, pp. 371–386, 2018. @article{Vasilev2018, It has been suggested that the preview benefit effect is actually a combination of preview benefit and preview costs. Marx et al. (2015) proposed that visually degrading the parafoveal preview reduces the costs associated with traditional parafoveal letter masks used in the boundary paradigm (Rayner, 1975), thus leading to a more neutral baseline. We report 2 experiments of skilled adults reading silently. In Experiment 1, we found no compelling evidence that degraded previews reduced processing costs associated with traditional letter masks. Moreover, participants were highly sensitive to detecting degraded display changes. Experiment 2 used the boundary detection paradigm (Slattery, Angele, & Rayner, 2011) to explore whether participants were capable of detecting actual letter changes or if they were responding purely to changes in degradation. Half of the participants were instructed to respond to any noticed display changes; the other half were instructed to respond only to changes in letter identities. Participants were highly sensitive to degraded changes. In fact, these changes were so apparent that they reduced the sensitivity to letter masks. In the context of the model proposed by Angele, Slattery, and Rayner (2016), we suggest that degraded previews interfere with the attentional stage, as evidenced by the general lack of foveal load effects. In summary, we found that increasingly degrading parafoveal letter masks does not reduce their processing costs in adults, but that both degraded valid and invalid previews introduce additional costs in terms of greater display change awareness. |
Shravan Vasishth; Daniela Mertzen; Lena A. Jäger; Andrew Gelman The statistical significance filter leads to overoptimistic expectations of replicability Journal Article In: Journal of Memory and Language, vol. 103, pp. 151–175, 2018. @article{Vasishth2018, It is well-known in statistics (e.g., Gelman & Carlin, 2014) that treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. These effects get published, leading to an overconfident belief in replicability. We demonstrate the adverse consequences of this statistical significance filter by conducting seven direct replication attempts (268 participants in total) of a recent paper (Levy & Keller, 2013). We show that the published claims are so noisy that even non-significant results are fully compatible with them. We also demonstrate the contrast between such small-sample studies and a larger-sample study; the latter generally yields a less noisy estimate but also a smaller effect magnitude, which looks less compelling but is more realistic. We reiterate several suggestions from the methodology literature for improving current practices. |
Outi Veivo; Vincent Porretta; Jukka Hyönä; Juhani Järvikivi Spoken second language words activate native language orthographic information in late second language learners Journal Article In: Applied Psycholinguistics, vol. 39, no. 05, pp. 1011–1032, 2018. @article{Veivo2018, This study investigated the time course of activation of orthographic information in spoken word recognition with two visual world eye-tracking experiments in a task where second language (L2) spoken word forms had to be matched with their printed referents. Participants (n = 64) were native Finnish learners of L2 French ranging from beginners to highly proficient. In Experiment 1, L2 targets (e.g., <cidre> /sidʀ/) were presented with either orthographically overlapping onset competitors (e.g., <cintre> /sɛ̃tʀ/) or phonologically overlapping onset competitors (<cycle> /sikl/). In Experiment 2, L2 targets (e.g., <paume> /pom/) were associated with competitors in Finnish, L1 of the participants, in conditions symmetric to Experiment 1 (<pauhu> /pauhu/ vs. <pommi> /pom:i/). In the within-language experiment (Experiment 1), the difference in target identification between the experimental conditions was not significant. In the between-language experiment (Experiment 2), orthographic information impacted the mapping more in lower proficiency learners, and this effect was observed 600 ms after the target word onset. The influence of proficiency on the matching was nonlinear: proficiency impacted the mapping significantly more in the lower half of the proficiency scale in both experiments. These results are discussed in terms of coactivation of orthographic and phonological information in L2 spoken word recognition. |
Aaron Veldre; Sally Andrews How does foveal processing difficulty affect parafoveal processing during reading? Journal Article In: Journal of Memory and Language, vol. 103, pp. 74–90, 2018. @article{Veldre2018, Models of eye movement control during reading assume that the difficulty of processing word n in a sentence modulates the depth of processing of the upcoming word/s (word n + 1) in the parafovea. This foveal load hypothesis is widely accepted in the literature despite surprisingly few clear replications of the basic effect. We sought to establish whether observing a foveal load effect depends on the type of parafoveal preview used in the boundary paradigm. Participants' eye movements were recorded in two experiments as they read sentences in which a low- or high-frequency word n—a typical manipulation of foveal load—preceded a critical target word. Prior to the reader making a saccade to word n + 1, the parafoveal preview was either identical to word n + 1; an orthographically similar word or nonword; or an unrelated word or nonword. The results revealed that the critical evidence for a foveal load effect—an interaction between word n frequency and word n + 1 preview—was limited to conditions in which the invalid preview baseline was an orthographically illegal nonword. The remaining conditions produced completely additive effects of the two factors. These findings raise questions about the mechanisms underlying the spillover of foveal processing difficulty to parafoveal words. The implications for theories of reading are discussed. |
Aaron Veldre; Sally Andrews Beyond cloze probability: Parafoveal processing of semantic and syntactic information during reading Journal Article In: Journal of Memory and Language, vol. 100, pp. 1–17, 2018. @article{Veldre2018a, Theories of eye movement control in reading assume that early oculomotor decisions are determined by a word's frequency and cloze probability. This assumption is challenged by evidence that readers are sensitive to the contextual plausibility of an upcoming word: First-pass fixation probability and duration are reduced when the parafoveal preview is a plausible, but unpredictable, word relative to an implausible word. The present study sought to establish whether the source of this effect is sensitivity to violations of syntactic acceptability. In two experiments, the gaze-contingent boundary paradigm was used to compare contextually plausible previews to semantically acceptable and anomalous previews that either matched or violated syntactic rules. Results showed that readers benefited from the convergence of semantic and syntactic acceptability early enough in the timecourse of reading to affect skipping. In addition, both semantic and syntactic plausibility yielded preview effects on target fixation duration measures, providing direct evidence of parafoveal syntactic processing in reading. These results highlight the limitations of relying solely on cloze probability to index contextual influences on early lexical processing. The implications of the data for models of eye movement control and language comprehension are discussed. |
Aaron Veldre; Sally Andrews Parafoveal preview effects depend on both preview plausibility and target predictability Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 64–74, 2018. @article{Veldre2018b, Recent studies using the boundary paradigm have shown that readers benefit from a parafoveal preview of a plausible continuation of the sentence. This plausibility preview effect occurs irrespective of the semantic or orthographic relatedness of the preview and target word, suggesting that it depends on the degree to which a preview word fits the preceding context. The present study tested this hypothesis by examining the impact of contextual constraint on processing a plausible word in the parafovea. Participants' eye movements were recorded as they read sentences in which a target word was either highly predictable or unpredictable. The boundary paradigm was used to compare predictable, unpredictable, and implausible previews. The results showed that target predictability significantly modulated the effects of identical and plausible previews. Identical previews yielded significantly more benefit than plausible previews for highly predictable targets, but for unpredictable targets a plausible preview was as beneficial as an identical preview. The results shed light on the role of contextual predictability in early lexical processing. Furthermore, these data support the view that readers activate a set of appropriate words from the preceding sentence context, prior to the presentation of the target word. |
Mirta Vernice; Antonella Sorace Animacy effects on the processing of intransitive verbs: An eye-tracking study Journal Article In: Language, Cognition and Neuroscience, vol. 33, no. 7, pp. 850–866, 2018. @article{Vernice2018, This paper tested an assumption of the gradient model of split intransitivity put forward by Sorace (“Split Intransitivity Hierarchy” (SIH), 2000, 2004), namely that agentivity is a fundamental feature for unergatives but not for unaccusatives. According to this hypothesis, the animacy of the verb's argument should affect the processing of unergative verbs to a greater extent than unaccusative verbs. By using eye-tracking methodology we monitored the online processing and integration costs of the animacy of the verb's argument in intransitive verbs. We observed that inanimate subjects caused longer reading times only for unergative verbs, whereas the animacy of the verb's argument did not influence the pattern of results for unaccusatives. In addition, the unergative verb data directly support the existence of gradient effects on the processing of the subject argument. |
Stuart Wallis; Yit Yang; Stephen J. Anderson Word Mode: A crowding-free reading protocol for individuals with macular disease Journal Article In: Scientific Reports, vol. 8, pp. 1241, 2018. @article{Wallis2018, Central retinal loss through macular disease markedly reduces the ability to read largely because identification of a word using peripheral vision is negatively influenced by nearby text, a phenomenon termed visual crowding. Here, we present a novel peripheral reading protocol, termed Word Mode, that eliminates crowding by presenting each word in isolation but in a position that mimics its natural position in the line of text being read, with each new word elicited using a self-paced button press. We used a gaze-contingent paradigm to simulate a central scotoma in four normally-sighted observers, and measured oral reading speed for text positioned 7.5° in the inferior field. Compared with reading whole sentences, our crowding-free protocol increased peripheral reading speeds by up to a factor of seven, resulted in significantly fewer reading errors and fixations per sentence, and reduced both the critical print size and the text size required for spot reading by 0.2–0.3 logMAR. We conclude that the level of reading efficiency afforded by the crowding-free reading protocol Word Mode may return reading as a viable activity to many individuals with macular disease. |
Jingxin Wang; Lin Li; Sha Li; Fang Xie; Min Chang; Kevin B. Paterson; Sarah J. White; Victoria A. McGowan Adult age differences in eye movements during reading: The evidence from Chinese Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 73, no. 4, pp. 584–593, 2018. @article{Wang2018n, Objectives: Substantial evidence indicates that older readers of alphabetic languages (e.g., English and German) compensate for age-related reading difficulty by employing a more risky reading strategy in which words are skipped more frequently. The effects of healthy aging on reading behavior for nonalphabetic languages, like Chinese, are largely unknown, although this would reveal the extent to which age-related changes in reading strategy are universal. Accordingly, the present research used measures of eye movements to investigate adult age differences in Chinese reading. Method: The eye movements of young (18–30 years) and older (60+ years) Chinese readers were recorded. Results: The older adults exhibited typical patterns of age-related reading difficulty. But rather than employing a more risky reading strategy compared with the younger readers, the older adults read more carefully by skipping words infrequently, making shorter forward eye movements, and fixating closer to the beginnings of two-character target words in sentences. Discussion: In contrast with the findings for alphabetic languages, older Chinese readers appear to compensate for age-related reading difficulty by employing a more careful reading strategy. Age-related changes in reading strategy therefore appear to be language specific, rather than universal, and may reflect the specific visual and linguistic requirements of the writing system. |
Paul A. Warren; Frank Boers; Gina M. Grimshaw; Anna Siyanova-Chanturia The effect of gloss type on learners' intake of new words during reading: Evidence from eye-tracking Journal Article In: Studies in Second Language Acquisition, vol. 40, no. 4, pp. 883–906, 2018. @article{Warren2018, A reading experiment combining online and offline data evaluates the effect on second language learners' reading behaviors and lexical uptake of three gloss types designed to clarify word meaning. These are (a) textual definition, (b) textual definition accompanied by picture, and (c) picture only. We recorded eye movements while intermediate learners of English read a story presented on-screen and containing six glossed pseudowords repeated three times each. Cumulative fixation counts and time spent on the pseudowords predicted posttest performance for form recall and meaning recognition, confirming findings of previous eye-tracking studies of vocabulary acquisition from reading. However, the total visual attention given to pseudowords and glosses was smallest in the condition with picture-only glosses, and yet this condition promoted best retention of word meaning. This suggests that gloss types differentially influence learners' processing of novel words in ways that may elude the quantitative measures of attention captured by eye-tracking. |
Kayleigh L. Warrington; Victoria A. McGowan; Kevin B. Paterson; Sarah J. White Effects of aging, word frequency, and text stimulus quality on reading across the adult lifespan: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 44, no. 11, pp. 1714–1729, 2018. @article{Warrington2018, Reductions in stimulus quality may disrupt the reading performance of older adults more when compared with young adults because of sensory declines that begin early in middle age. However, few studies have investigated adult age differences in the effects of stimulus quality on reading, and none have examined how this affects lexical processing and eye movement control. Accordingly, we report two experiments that examine the effects of reduced stimulus quality on the eye movements of young (18–24 years), middle-aged (41–51 years), and older (65+ years) adult readers. In Experiment 1, participants read sentences that contained a high- or low-frequency critical word and that were presented normally or with contrast reduced so that words appeared faint. Experiment 2 further investigated effects of reduced stimulus quality using a gaze-contingent technique to present upcoming text normally or with contrast reduced. Typical patterns of age-related reading difficulty (e.g., slower reading, more regressions) were observed in both experiments. In addition, eye movements were disrupted more for older than younger adults when all text (Experiment 1) or just upcoming text (Experiment 2) appeared faint. Moreover, there was an interaction between stimulus quality and word frequency (Experiment 1), such that readers fixated faint low-frequency words for disproportionately longer. Crucially, this effect was similar across all age groups. Thus, although older readers suffer more from reduced stimulus quality, this additional difficulty primarily affects their visual processing of text.
These findings have important implications for understanding the role of stimulus quality on reading behavior across the lifespan. |
Kayleigh L. Warrington; Sarah J. White; Kevin B. Paterson Ageing and the misperception of words: Evidence from eye movements during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 75–84, 2018. @article{Warrington2018a, Research with lexical neighbours (words that differ by a single letter while the number and order of letters are preserved) indicates that readers frequently misperceive a word as its higher frequency neighbour (HFN) even during normal reading. But how this lexical influence on word identification changes across the adult lifespan is largely unknown, although slower lexical processing and reduced visual abilities in later adulthood may lead to an increased incidence of word misperception errors. In particular, older adults may be more likely than younger adults to misidentify a word as its HFN, especially when the HFN is congruent with prior sentence context, although this has not been investigated. Accordingly, to address this issue, young and older adults read sentences containing target words with and without an HFN, where the HFN was either congruent with prior sentence context or not. Consistent with previous findings for young adults, eye movements were disrupted more for words with than without an HFN, especially when the HFN was congruent with prior context. Crucially, however, there was no indication of an adult age difference in this word misperception effect. We discuss these findings in relation to the nature of misperception effects in older age. |
Anna Fiona Weiss; Franziska Kretzschmar; Matthias Schlesewsky; Ina Bornkessel-Schlesewsky; Adrian Staub Comprehension demands modulate re-reading, but not first-pass reading behavior Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 198–210, 2018. @article{Weiss2018, Several studies have examined effects of explicit task demands on eye movements in reading. However, there is relatively little prior research investigating the influence of implicit processing demands. In this study, processing demands were manipulated by means of a between-subject manipulation of comprehension question difficulty. Consistent with previous results from Wotschack and Kliegl, the question difficulty manipulation influenced the probability of regressing from late in sentences and re-reading earlier regions; readers who expected difficult comprehension questions were more likely to re-read. However, this manipulation had no reliable influence on eye movements during first-pass reading of earlier sentence regions. Moreover, for the subset of sentences that contained a plausibility manipulation, the disruption induced by implausibility was not modulated by the question manipulation. We interpret these results as suggesting that comprehension demands influence reading behavior primarily by modulating a criterion for comprehension that readers apply after completing first-pass processing. |
Alex L. White; John Palmer; Geoffrey M. Boynton Evidence of serial processing in visual word recognition Journal Article In: Psychological Science, vol. 29, no. 7, pp. 1062–1071, 2018. @article{White2018, To test the limits of parallel processing in vision, we investigated whether people can recognize two words at once. Participants viewed brief, masked pairs of words and were instructed in advance to judge both of the words (dual-task condition) or just one of the words (single-task condition). For judgments of semantic category, the dual-task deficit was so large that it supported all-or-none serial processing: Participants could recognize only one word and had to guess about the other. Moreover, participants were more likely to be correct about one word if they were incorrect about the other, which also supports a serial-processing model. In contrast, judgments of text color with identical stimuli were consistent with unlimited-capacity parallel processing. Thus, under these conditions, serial processing is necessary to judge the meaning of words but not their physical features. Understanding the implications of this result for natural reading will require further investigation. |
Veronica Whitford; Marc F. Joanisse In: Journal of Experimental Child Psychology, vol. 173, pp. 318–337, 2018. @article{Whitford2018, An extensive body of research has examined reading acquisition and performance in monolingual children. Surprisingly, however, much less is known about reading in bilingual children, who outnumber monolingual children globally. Here, we address this important imbalance in the literature by employing eye movement recordings to examine both global (i.e., text-level) and local (i.e., word-level) aspects of monolingual and bilingual children's reading performance across their first-language (L1) and second-language (L2). We also had a specific focus on lexical accessibility, indexed by word frequency effects. We had three main findings. First, bilingual children displayed reduced global and local L1 reading performance relative to monolingual children, including larger L1 word frequency effects. Second, bilingual children displayed reduced global and local L2 versus L1 reading performance, including larger L2 word frequency effects. Third, both groups of children displayed reduced global and local reading performance relative to adult comparison groups (across their known languages), including larger word frequency effects. Notably, our first finding was not captured by traditional offline measures of reading, such as standardized tests, suggesting that these measures may lack the sensitivity to detect such nuanced between-group differences in reading performance. Taken together, our findings demonstrate that bilingual children's simultaneous exposure to two reading systems leads to eye movement reading behavior that differs from that of monolingual children and has important consequences for how lexical information is accessed and integrated in both languages. |
Andreas Widmann; Erich Schröger; Nicole Wetzel In: Biological Psychology, vol. 133, pp. 10–17, 2018. @article{Widmann2018, Novel sounds in the auditory oddball paradigm elicit a biphasic dilation of the pupil (PDR) and P3a as well as novelty P3 event-related potentials (ERPs). The biphasic PDR has been hypothesized to reflect the relaxation of the iris sphincter muscle due to parasympathetic inhibition and the constriction of the iris dilator muscle due to sympathetic activation. We measured the PDR and the P3 to neutral and to emotionally arousing negative novels in dark and moderate lighting conditions. By means of principal component analysis (PCA) of the PDR data we extracted two components: the early one was absent in darkness and, thus, presumably reflects parasympathetic inhibition, whereas the late component occurred in darkness and light and presumably reflects sympathetic activation. Importantly, only this sympathetic late component was enhanced for emotionally arousing (as compared to neutral) sounds supporting the hypothesis that emotional arousal specifically activates the sympathetic nervous system. In the ERPs we observed P3a and novelty P3 in response to novel sounds. Both components were enhanced for emotionally arousing (as compared to neutral) novels. Our results demonstrate that sympathetic and parasympathetic contributions to the PDR can be separated and link emotional arousal to sympathetic nervous system activation. |
Matthew B. Winn; Ashley N. Moore In: Trends in Hearing, vol. 22, 2018. @article{Winn2018, Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal hearing than for cochlear implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to "repair" part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also inefficient use of context and prolonged processing of misperceived utterances competing with perception of incoming speech. |
Jeffrey S. Wood; Matthew Haigh; Andrew J. Stewart An eye-tracking examination of readers' sensitivity to pragmatic scope information during the processing of conditional inducements Journal Article In: Canadian Journal of Experimental Psychology, vol. 72, no. 3, pp. 197–207, 2018. @article{Wood2018, Previous research into conditional inducements has shown that readers are sensitive after reading such conditionals to pragmatic scope differences between promises and threats; specifically, threats can be referred to as promises, but promises cannot be referred to as threats. Crucially, previous work has not revealed whether such scope effects emerge while processing the conditional itself. In the experiment reported here, participants' eye movements were recorded while they read vignettes containing conditional promises and threats. We observed a reading time penalty on the conditional itself when participants read a conditional promise that was described as a 'threat' (e.g., Liam threatened Perry 'if you tell dad, then I'll take equal responsibility'). There was no such penalty when the word 'promise' was presented before a conditional threat. These results suggest that readers are sensitive during reading of the conditional itself to pragmatic scope differences between 'threats' and 'promises.' |
Guoli Yan; Zhu Meng; Nina Liu; Liyuan He; Kevin B. Paterson Effects of irrelevant background speech on eye movements during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 6, pp. 1270–1275, 2018. @article{Yan2018, The irrelevant speech effect (ISE) refers to the impairment of visual information processing by background speech. Prior research on the ISE has focused on short-term memory for visually presented word lists. The present research extends this work by using measurements of eye movements to examine effects of irrelevant background speech during Chinese reading. This enabled an examination of the ISE for a language in which access to semantic representations is not strongly mediated by phonology. Participants read sentences while exposed to meaningful irrelevant speech, meaningless speech (scrambled meaningful speech) or silence. A target word of high or low lexical frequency was embedded in each sentence. The results show that meaningful, but not meaningless, background speech produced increased re-reading. In addition, the appearance of a normal word frequency effect, characterised by longer fixation times on low- compared to high-frequency words, was delayed when meaningful or meaningless speech was present in the background. These findings show that irrelevant background speech can disrupt normal processes of reading comprehension and, in addition, that background noise can interfere with the early processing of words. The findings add to evidence showing that normal reading processes can be disrupted by environmental noise such as irrelevant background speech. |
Michael C. W. Yip; Mingjun Zhai Context effects and spoken word recognition of Chinese: An eye-tracking study Journal Article In: Cognitive Science, vol. 42, pp. 1134–1153, 2018. @article{Yip2018a, This study examined the time-course of context effects on spoken word recognition during Chinese sentence processing. We recruited 60 native Mandarin listeners to participate in an eye-tracking experiment. In this eye-tracking experiment, listeners were told to listen to a sentence carefully, which ended with a Chinese homophone, and look at different visual probes (Chinese characters or different line-drawing pictures) presented concurrently on the computer screen naturally. Different types of context and probe types were manipulated in the experiment. The results showed that (1) preceding sentence context had an early effect on spoken word recognition processes and (2) phonological information of the distracters had only a negligible effect on the spoken word recognition processes. Finally, the patterns of eye-tracking results seemed to favor an interactive approach in spoken word recognition. |