EyeLink Clinical and Oculomotor Eye-Tracking Publications
All EyeLink clinical and oculomotor research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Saccadic Adaptation, Schizophrenia, Nystagmus, etc. You can also search for individual author names, and limit searches by year (choose a year, then click the Search button). If we have missed any EyeLink clinical or oculomotor articles, please email us!
2019 |
Zhuoting Zhu; Yin Hu; Chimei Liao; Stuart Keel; Ren Huang; Yanping Liu; Mingguang He Visual span and cognitive factors affect Chinese reading speed Journal Article In: Journal of Vision, vol. 19, no. 14, pp. 1–11, 2019. @article{Zhu2019d, Visual span, which is the number of recognizable letters seen without moving the eyes, has been proven to impose a sensory limitation for alphabetic reading speed (Chung, 2011; Chung, Legge, & Cheung, 2004; Lee, Kwon, Legge, & Gefroh, 2010; Legge, Ahn, Klitz, & Luebker, 1997; Legge, Hooven, Klitz, Stephen Mansfield, & Tjan, 2002; D. Yu, Cheung, Legge, & Chung, 2010). However, little is known about the effects of visual span on Chinese reading performance. Of note, Chinese text differs greatly from that of the alphabetic writing system. There are no spaces between words, and readers are forced to utilize their lexical knowledge to segment Chinese characters into meaningful words, thus increasing the relative importance of cognitive/linguistic factors in reading performance. Therefore, the aim of the present study is to explore whether visual span and cognitive/linguistic factors have independent effects on Chinese reading speed. Visual span profiles, cognitive/linguistic factors indicated by word frequency, and Chinese sentence-reading performance were collected from 28 native Chinese-speaking subjects. We found that the visual-span size and cognitive/linguistic factors independently contributed to Chinese sentence-reading speed (all ps < 0.05). We concluded that both the visual-span size and cognitive/linguistic factors represented bottlenecks for Chinese sentence-reading speed. |
Zhuoting Zhu; Yin Hu; Chimei Liao; Ren Huang; Stuart Keel; Yanping Liu; Mingguang He Perceptual Learning of Visual Span Improves Chinese Reading Speed Journal Article In: Investigative Ophthalmology & Visual Science, vol. 60, no. 6, pp. 2357–2368, 2019. @article{Zhu2019c, PURPOSE. Evidence has indicated that the size of the visual span (the number of identifiable letters without movement of the eyes) and reading speed can be boosted through perceptual learning in alphabetic scripts. In this study, we investigated whether benefits of perceptual learning could be extended to visual-span size and sentence reading (all characters are presented at the same time) for Chinese characters and explored changes in sensory factors contributing to changes in visual-span size following training. METHODS. We randomly assigned 26 normally sighted subjects to either a control group (n = 13) or a training group (n = 13). Pre- and posttests were administered to evaluate visual-span profiles (VSPs) and reading speed. Training consisted of trigram (sequences of three characters) character-recognition trials over 4 consecutive days. VSPs are plots of recognition accuracy as a function of character position. Visual-span size was quantified as the area under VSPs in bits of information transmitted. A decomposition analysis of VSPs was used to quantify the effects of sensory factors (crowding and mislocation). We compared the size and sensory factors of visual span and reading speed following training. RESULTS. Following training, the visual-span size significantly increased by 11.7 bits, and reading speed increased by 50.8%. The decomposition analysis showed a significant reduction for crowding (−13.1 bits) but a minor increase in the magnitude of mislocation errors (1.46 bits) following training. CONCLUSIONS. 
These results suggest that perceptual learning expands the visual-span size and further improves Chinese text sentence-reading speed, indicating that visual span may be a common sensory limitation on reading that can be overcome with practice. |
Peng Zhou; Likan Zhan; Huimin Ma Predictive language processing in preschool children with autism spectrum disorder: An eye-tracking study Journal Article In: Journal of Psycholinguistic Research, vol. 48, no. 2, pp. 431–452, 2019. @article{Zhou2019, Sentence comprehension relies on the abilities to rapidly integrate different types of linguistic and non-linguistic information. The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) are able to use verb information predictively to anticipate the upcoming linguistic input during real-time sentence comprehension. 26 five-year-olds with ASD, 25 typically developing (TD) five-year-olds and 24 TD four-year-olds were tested using the visual world eye-tracking paradigm. The results showed that the 5-year-olds with ASD, like their TD peers, exhibited verb-based anticipatory eye movements during real-time sentence comprehension. No difference was observed between the ASD and TD groups in the time course of their eye gaze patterns, indicating that Mandarin-speaking preschool children with ASD are able to use verb information as effectively and rapidly as TD peers to predict the upcoming linguistic input. |
Adrian Staub; Sophia Dodge; Andrew L. Cohen Failure to detect function word repetitions and omissions in reading: Are eye movements to blame? Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 1, pp. 340–346, 2019. @article{Staub2019a, We tested whether failure to notice repetitions of function words during reading (e.g., Amanda jumped off the the swing and landed on her feet.) is due to the eyes' tendency to skip one of the instances of the word. Eye movements were recorded during reading of sentences with repetitions of the word the or repetitions of a noun, after which readers were asked whether an error was present. A repeated the was detected on 46% of trials overall. On trials on which both instances of the were fixated, detection was still only 66%. A repeated noun was detected on 90% of trials, with no significant effect of eye movement patterns. Detecting an omitted the also proved difficult, with eye movement patterns having only a small effect. Readers frequently overlook function word errors even when their eye movements provide maximal opportunity for noticing such errors, but they notice content word repetitions regardless of eye movement patterns. We propose that readers overlook function word errors because they attribute the apparent error to noise in the eye movement control system. |
Adrian Staub; Kirk Goddard The role of preview validity in predictability and frequency effects on eye movements in reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 1, pp. 110–127, 2019. @article{Staub2019, A word's predictability, as measured by its cloze probability, has a robust influence on the time a reader's eyes spend on the word, with more predictable words receiving shorter fixations. However, several previous studies using the boundary paradigm have found no apparent effect of predictability on early reading time measures when the reader does not have valid parafoveal preview of the target word. The present study directly assesses this pattern in two experiments, demonstrating evidence for a null effect of predictability on first fixation and gaze duration with invalid preview, supported by Bayes factor analyses. While the effect of context-independent word frequency is shown to survive with invalid preview, consistent with previous studies, the effect of predictability is eliminated with both unrelated word previews and random letter string previews. These results suggest that a word's predictability influences early stages of orthographic processing, and does so only when perceptual evidence is equivocal, as is the case when the word is initially viewed in parafoveal vision. Word frequency may influence not only early orthographic processing, but also later processing stages. |
Marianna Stella; Paul E. Engelhardt Syntactic ambiguity resolution in dyslexia: An examination of cognitive factors underlying eye movement differences and comprehension failures Journal Article In: Dyslexia, vol. 25, no. 2, pp. 115–141, 2019. @article{Stella2019, This study examined eye movements and comprehension of temporary syntactic ambiguities in individuals with dyslexia, as few studies have focused on sentence-level comprehension in dyslexia. We tested 50 participants with dyslexia and 50 typically developing controls, in order to investigate (a) whether dyslexics have difficulty revising temporary syntactic misinterpretations and (b) underlying cognitive factors (i.e., working memory and processing speed) associated with eye movement differences and comprehension failures. In the sentence comprehension task, participants read subordinate-main structures that were either ambiguous or unambiguous, and we also manipulated the type of verb contained in the subordinate clause (i.e., reflexive or optionally transitive). Results showed a main effect of group on comprehension, in which individuals with dyslexia showed poorer comprehension than typically developing readers. In addition, participants with dyslexia showed longer total reading times on the disambiguating region of syntactically ambiguous sentences. With respect to cognitive factors, working memory was more associated with group differences than was processing speed. Conclusions focus on sentence-level syntactic processing issues in dyslexia (a previously under-researched area) and the relationship between online and offline measures of syntactic ambiguity resolution. |
Anastasia Stoops; Kiel Christianson Parafoveal processing of inflectional morphology in Russian: A within-word boundary-change paradigm Journal Article In: Vision Research, vol. 158, pp. 1–10, 2019. @article{Stoops2019, The present study examined whether the inflectional morphology on Russian nouns is processed parafoveally in words longer than five characters while the eyes are fixated on the word. A modified boundary-change paradigm was used to examine parafoveal processing of nominal case markings within a currently fixated word n. The results elicited identical preview benefit for both first and second-pass measures on the post boundary and whole word regions. The morphologically related preview benefit (vs. nonword) was observed for first and second-pass measures as early as pre-boundary, post-boundary, and whole word regions. Additionally the morphologically related preview elicited cost (vs. identical) for first-pass measures on the post-boundary region, total time for the whole word, and regressions into the pre-boundary region. The contribution of the study is two-fold. First, this is the first study to use within-word boundary changes to study the parafoveal processing of inflectional morphology in Russian. Second, we provide additional evidence that inflectional morphology can be integrated parafoveally while reading a language with linear concatenative morphology. |
Hideko Teruya; Vsevolod Kapatsinski Deciding to look: Revisiting the linking hypothesis for spoken word recognition in the visual world Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 7, pp. 861–880, 2019. @article{Teruya2019, The visual world paradigm (VWP) studies of spoken word recognition rely on a linking hypothesis that connects lexical activation to the probability of looking at the referent of a word. The standard hypothesis is that fixation probabilities track activation levels transformed via the Luce Choice Rule. Under this assumption, given enough power, any difference between positive activations should be detectable using VWP. We argue that looking at a referent of a word is a decision, made when the word's activation exceeds a context-specific threshold. Subthreshold activations do not drive saccades, and differences among such activations are undetectable in VWP. Evidence is provided by VWP experiments on Japanese. Bayesian analyses indicate a relatively high threshold: saccades to cohort competitors do not exceed those to unrelated distractors unless the cohort competitor shares the initial CVC with the target. We argue that threshold setting constitutes an understudied source of variability in VWP data. |
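The contrast between the two linking hypotheses in this article can be sketched in a few lines. The following is a hedged illustration, not the authors' code: the activation values and the threshold are made-up numbers, chosen only to show how a subthreshold cohort competitor would become indistinguishable from an unrelated distractor.

```python
# Standard linking hypothesis: fixation probability tracks lexical
# activation transformed by the Luce Choice Rule.
def luce(activations):
    """Luce Choice Rule: probability proportional to activation."""
    total = sum(activations.values())
    return {word: a / total for word, a in activations.items()}

# The article's alternative (sketch): looking at a referent is a decision,
# and activations below a context-specific threshold never drive saccades.
def thresholded_luce(activations, theta):
    above = {w: a for w, a in activations.items() if a >= theta}
    probs = {w: 0.0 for w in activations}
    if above:
        probs.update(luce(above))
    return probs

# Hypothetical activations for a target, a cohort competitor, and an
# unrelated distractor.
acts = {"target": 0.6, "cohort": 0.25, "distractor": 0.15}
print(luce(acts))                    # cohort draws an intermediate share of looks
print(thresholded_luce(acts, 0.3))   # cohort now patterns with the distractor
```

Under the thresholded variant, any difference between two subthreshold activations is invisible in fixation proportions, which is the article's explanation for cohort competitors behaving like unrelated distractors.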
Debra Titone; Kyle Lovseth; Kristina Kasparian; Mehrgol Tiv Are figurative interpretations of idioms directly retrieved, compositionally built, or both? Evidence from eye movement measures of reading Journal Article In: Canadian Journal of Experimental Psychology, vol. 73, no. 4, pp. 216–230, 2019. @article{Titone2019, Idioms are part of a general class of multiword expressions where the overall interpretation cannot be fully determined through a simple syntactic and semantic (i.e., compositional) analysis of their component words (e.g., kick the bucket, save your skin). Idioms are thus simultaneously amenable to direct retrieval from memory, and to an on-demand compositional analysis, yet it is unclear which processes lead to figurative interpretations of idioms during comprehension. In this eye-tracking study, healthy adults read sentences in their native language that contained idioms, which were followed by figurative- or literal-biased disambiguating sentential information. The results showed that the earliest stages of comprehension are driven by direct retrieval of idiomatic forms; however, later stages of comprehension, after which point the intended meaning of an idiom is known, are driven by both direct retrieval and compositional processing. Of note, at later stages, increased idiom decomposability slowed reading time, suggesting more effortful figurative comprehension. Together, these results are most consistent with multidetermined or hybrid models of idiom processing. |
Mehrgol Tiv; Laura Gonnerman; Veronica Whitford; Deanna Friesen; Debra Jared; Debra Titone Figuring out how verb-particle constructions are understood during L1 and L2 reading Journal Article In: Frontiers in Psychology, vol. 10, pp. 1733, 2019. @article{Tiv2019, The aim of this paper was to investigate first-language (L1) and second-language (L2) reading of verb particle constructions (VPCs) among English-French bilingual adults. VPCs, or phrasal verbs, are highly common collocations of a verb paired with a particle, such as eat up or chew out, that often convey a figurative meaning. VPCs vary in form (eat up the candy vs. eat the candy up) and in other factors, such as the semantic contribution of the constituent words to the overall meaning (semantic transparency) and form frequency. Much like classic forms of idioms, VPCs are difficult for L2 users. Here, we present two experiments that use eye-tracking to discover factors that influence the ease with which VPCs are processed by bilingual readers. In Experiment 1, we compared L1 reading of adjacent vs. split VPCs, and then explored whether the general pattern was driven by item-level factors. L1 readers did not generally find adjacent VPCs (eat up the candy) easier to process than split VPCs (eat the candy up); however, VPCs low in co-occurrence strength (i.e., low semantic transparency) and high in frequency were easiest to process in the adjacent form during first pass reading. In Experiment 2, we compared L2 reading of adjacent vs split VPCs, and then explored whether the general pattern varied with item-level or participant-level factors. L2 readers generally allotted more second pass reading time to split vs. adjacent forms, and there was some evidence that this pattern was greater for L2 English readers who had less English experience. In contrast with L1 reading, there was no influence of item differences on L2 reading behavior. 
These data suggest that L1 readers often have lexicalized VPC representations that are directly retrieved during comprehension, whereas L2 readers are more likely to compositionally process VPCs given their more general preference for adjacent particles, as demonstrated by longer second pass reading time for all split items. |
Wilhelmiina Toivo; Christoph Scheepers Pupillary responses to affective words in bilinguals' first versus second language Journal Article In: PLoS ONE, vol. 14, no. 4, pp. e0210450, 2019. @article{Toivo2019, Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness – both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants' first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants' second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence. |
Jacolien van Rij; Petra Hendriks; Hedderik van Rijn; R. Harald Baayen; Simon N. Wood Analyzing the time course of pupillometric data Journal Article In: Trends in Hearing, vol. 23, 2019. @article{Rij2019, This article provides a tutorial for analyzing pupillometric data. Pupil dilation has become increasingly popular in psychological and psycholinguistic research as a measure to trace language processing. However, there is no general consensus about procedures to analyze the data, with most studies analyzing extracted features from the pupil dilation data instead of analyzing the pupil dilation trajectories directly. Recent studies have started to apply nonlinear regression and other methods to analyze the pupil dilation trajectories directly, utilizing all available information in the continuously measured signal. This article applies a nonlinear regression analysis, generalized additive mixed modeling, and illustrates how to analyze the full time course of the pupil dilation signal. The regression analysis is particularly suited for analyzing pupil dilation in the fields of psychological and psycholinguistic research because generalized additive mixed models can include complex nonlinear interactions for investigating the effects of properties of stimuli (e.g., formant frequency) or participants (e.g., working memory score) on the pupil dilation signal. To account for the variation due to participants and items, nonlinear random effects can be included. However, one of the challenges for analyzing time series data is dealing with the autocorrelation in the residuals, which is rather extreme for the pupillary signal. On the basis of simulations, we explain potential causes of this extreme autocorrelation, and on the basis of the experimental data, we show how to reduce their adverse effects, allowing a much more coherent interpretation of pupillary data than possible with feature-based techniques. |
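The residual-autocorrelation problem this tutorial tackles is easy to reproduce. The sketch below is an illustration under assumed parameters (an AR(1) noise coefficient of 0.9 and a linear trend standing in for a real pupil response), not the article's GAMM analysis: after an ordinary least-squares fit, the residuals of an autocorrelated trace remain strongly correlated from one sample to the next, which is what invalidates naive confidence intervals on pupillometric time series.

```python
# Minimal sketch: simulate a pupil-like trace with AR(1) noise, detrend it
# with a least-squares line, and measure the lag-1 autocorrelation left in
# the residuals. All parameter values are illustrative assumptions.
import random

random.seed(1)
n = 500
t = [i / n for i in range(n)]

# AR(1) noise, mimicking the slow, correlated drift of the pupil signal
noise = [0.0] * n
for i in range(1, n):
    noise[i] = 0.9 * noise[i - 1] + random.gauss(0, 0.02)
trace = [0.3 * ti + noise[i] for i, ti in enumerate(t)]  # trend + noise

# Ordinary least-squares line fit (a crude stand-in for a GAM smooth)
mt = sum(t) / n
my = sum(trace) / n
beta = sum((ti - mt) * (yi - my) for ti, yi in zip(t, trace)) / \
       sum((ti - mt) ** 2 for ti in t)
alpha = my - beta * mt
resid = [yi - (alpha + beta * ti) for ti, yi in zip(t, trace)]

# Lag-1 autocorrelation of the residuals: successive samples are far from
# independent, so model criticism on pupil data must address this.
mr = sum(resid) / n
num = sum((resid[i] - mr) * (resid[i + 1] - mr) for i in range(n - 1))
den = sum((ri - mr) ** 2 for ri in resid)
rho1 = num / den
print(f"lag-1 residual autocorrelation: {rho1:.2f}")
```

In the article's framework, an AR(1) term for the residuals (or comparable corrections) is what restores trustworthy inference on the smooth terms.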
Martin R. Vasilev; Simon P. Liversedge; Daniel Rowan; Julie A. Kirkby; Bernhard Angele Reading is disrupted by intelligible background speech: Evidence from eye-tracking Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 11, pp. 1484–1512, 2019. @article{Vasilev2019, It is not well understood whether background speech affects the initial processing of words during reading or only the later processes of sentence integration. Additionally, it is not clear how eye movements support text comprehension in the face of distraction by background speech and noise. In the present research, participants read single sentences (Experiment 1) and short paragraphs (Experiments 2–3) in 4 sound conditions: silence, speech-spectrum Gaussian noise, English speech (intelligible to participants), and Mandarin speech (unintelligible to participants). Intelligible speech did not affect the lexical access of words and had a limited effect on the first-pass fixations of words. However, it led to more regressions and more rereading fixations compared with both unintelligible speech and silence. The results suggested that the distraction is mostly semantic in nature, and there was only limited evidence for a contribution of phonology. Finally, intelligible speech disrupted comprehension only when participants were prevented from rereading previous words. These findings suggest that the semantic properties of irrelevant speech can disrupt the ongoing reading process, but that this disruption occurs in the postlexical stages of reading when participants need to integrate words to form the sentence context and to construct a coherent discourse of the text. |
Martin R. Vasilev; Fabrice B. R. Parmentier; Bernhard Angele; Julie A. Kirkby Distraction by deviant sounds during reading: An eye-movement study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 7, pp. 1863–1875, 2019. @article{Vasilev2019a, Oddball studies have shown that sounds unexpectedly deviating from an otherwise repeated sequence capture attention away from the task at hand. While such distraction is typically regarded as potentially important in everyday life, previous work has so far not examined how deviant sounds affect performance on more complex daily tasks. In this study, we developed a new method to examine whether deviant sounds can disrupt reading performance by recording participants' eye movements. Participants read single sentences in silence and while listening to task-irrelevant sounds. In the latter condition, a 50-ms sound was played contingent on the fixation of five target words in the sentence. On most occasions, the same tone was presented (standard sound), whereas on rare and unexpected occasions it was replaced by white noise (deviant sound). The deviant sound resulted in significantly longer fixation durations on the target words relative to the standard sound. A time-course analysis showed that the deviant sound began to affect fixation durations around 180 ms after fixation onset. Furthermore, deviance distraction was not modulated by the lexical frequency of target words. In summary, fixation durations on the target words were longer immediately after the presentation of the deviant sound, but there was no evidence that it interfered with the lexical processing of these words. The present results are in line with the recent proposition that deviant sounds yield a temporary motor suppression and suggest that deviant sounds likely inhibit the programming of the next saccade. |
Lorenzo Vignali; Stefan Hawelka; Florian Hutzler; Fabio Richlan Processing of parafoveally presented words. An fMRI study Journal Article In: NeuroImage, vol. 184, pp. 1–9, 2019. @article{Vignali2019, The present fMRI study investigated neural correlates of parafoveal preprocessing during reading and the type of information that is accessible from the upcoming - not yet fixated - word. Participants performed a lexical decision flanker task while the constraints imposed by the first three letters (the initial trigram) of parafoveally presented words were controlled. Behavioral results evidenced that the amount of information extracted from parafoveal stimuli was affected by the difficulty of the foveal stimulus. Easy-to-process foveal stimuli (i.e., high frequency nouns) allowed parafoveal information to be extracted up to the lexical level. Conversely, when foveal stimuli were difficult to process (orthographically legal nonwords) only constraining trigrams modulated the task performance. Neuroimaging findings showed no effects of lexicality (i.e., difference between words and pseudowords) in the parafovea independently from the difficulty of the foveal stimulus. The constraints imposed by the initial trigrams, however, modulated the hemodynamic response in the left supramarginal gyrus. We interpreted the supramarginal activation as reflecting sublexical (phonological) processes. The missing parafoveal lexicality effect was discussed in relation to findings of experiments which observed effects of parafoveal semantic congruency on electrophysiological correlates. |
Lorenzo Vignali; Stefan Hawelka; Florian Hutzler; Fabio Richlan No effect of cathodal tDCS of the posterior parietal cortex on parafoveal preprocessing of words Journal Article In: Neuroscience Letters, vol. 705, pp. 219–226, 2019. @article{Vignali2019a, The present study investigated the functional role of the posterior parietal cortex during the processing of parafoveally presented letter strings. To this end, we simultaneously presented two letter strings (word or pseudoword) – one foveally and one parafoveally – and asked the participants to indicate the presence of a word (i.e., lexical decision flanker task). We applied cathodal transcranial direct current stimulation (tDCS) over the posterior parietal cortex in order to establish causal links between brain activity and lexical decision performance (accuracy and latency). The results indicated that foveal stimulus difficulty affected the amount of parafoveally processed information. Bayes factor analysis showed no effects of brain stimulation, suggesting that posterior parietal cathodal tDCS does not modulate attention-related processes during parafoveal preprocessing. This result is discussed in the context of recent tDCS studies on attention and performance. |
Saúl Villameriel; Brendan Costello; Patricia Dias; Marcel Giezen; Manuel Carreiras Language modality shapes the dynamics of word and sign recognition Journal Article In: Cognition, vol. 191, pp. 103979, 2019. @article{Villameriel2019, Spoken words and signs both consist of structured sub-lexical units. While phonemes unfold in time in the case of the spoken signal, visual sub-lexical units such as location and handshape are produced simultaneously in signs. In the current study we investigate the role of sub-lexical units in lexical access in spoken Spanish and in Spanish Sign Language (LSE) in hearing early bimodal bilinguals and in hearing second language (L2) learners of LSE, both native speakers of Spanish, using the visual world paradigm. Experiment 1 investigated phonological competition in spoken Spanish from words sharing onset or rhyme. Experiment 2 investigated competition in LSE from signs sharing handshape or location. For Spanish, the results confirm previous findings for word recognition: onset competition comes first and is more salient than rhyme competition. For sign recognition, native bimodal bilinguals (native speakers of spoken and signed languages) showed earlier competition from location than handshape, and overall stronger competition from handshape compared to location. Hearing bimodal bilinguals who learned LSE as a second language also experienced competition from both signed parameters. However, they showed later effects for location competitors and weaker effects for handshape competitors than native signers. Our results demonstrate that the temporal dynamics of spoken words and signs impact the time course of lexical co-activation. Furthermore, age of acquisition of the signed language modulates sub-lexical processing of signs, and may reflect enhanced abilities of native signers to use early phonological cues in transition movements to constrain sign recognition. |
Hongyan Wang; Zhongling Pi; Weiping Hu The instructor's gaze guidance in video lectures improves learning Journal Article In: Journal of Computer Assisted Learning, vol. 35, no. 1, pp. 42–50, 2019. @article{Wang2019c, Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance. |
Xiaoming Wang; Xinbo Zhao; Jinchang Ren; Jungong Han A new type of eye movement model based on recurrent neural networks for simulating the gaze behavior of human reading Journal Article In: Complexity, vol. 2019, pp. 1–12, 2019. @article{Wang2019, Traditional eye movement models are based on psychological assumptions and empirical data that are not able to simulate eye movement on previously unseen text data. To address this problem, a new type of eye movement model is presented and tested in this paper. In contrast to conventional psychology-based eye movement models, ours is based on a recurrent neural network (RNN) to generate a gaze point prediction sequence, by using the combination of convolutional neural networks (CNN), bidirectional long short-term memory networks (LSTM), and conditional random fields (CRF). The model uses the eye movement data of a reader reading some texts as training data to predict the eye movements of the same reader reading a previously unseen text. A theoretical analysis of the model is presented to show its excellent convergence performance. Experimental results are then presented to demonstrate that the proposed model can achieve similar prediction accuracy while requiring fewer features than current machine learning models. |
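The core idea of such models is that a recurrent hidden state carries the reading history forward when predicting the next gaze point. The toy sketch below illustrates only that state update; it is not the published model, which combines CNN, bidirectional LSTM, and CRF layers, and the scalar weights here are arbitrary stand-ins rather than trained parameters.

```python
# Toy illustration of the recurrent update underlying RNN-based gaze
# prediction: each new input is combined with the previous hidden state,
# so earlier words influence the predicted landing position of later ones.
import math

def rnn_step(x, h, w_xh, w_hh, b):
    # Elman-style update: h_t = tanh(w_xh * x_t + w_hh * h_{t-1} + b)
    return math.tanh(w_xh * x + w_hh * h + b)

def predict_gaze(word_features, w_xh=0.8, w_hh=0.5, b=0.0, w_out=10.0):
    """Map a sequence of (hypothetical) word features to gaze coordinates."""
    h = 0.0
    preds = []
    for x in word_features:
        h = rnn_step(x, h, w_xh, w_hh, b)
        preds.append(w_out * h)  # read a gaze coordinate off the hidden state
    return preds

# Hypothetical per-word features (e.g., normalized length or frequency)
preds = predict_gaze([0.2, 0.9, 0.1, 0.7])
print(preds)
```

Because tanh is bounded, every prediction stays within ±w_out; the trained model instead learns these mappings from a reader's recorded eye-movement data.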
Xiaotong Wang; Xue Sui; Sarah J. White Searching for a word in Chinese text: Insights from eye movement behaviour Journal Article In: Journal of Cognitive Psychology, vol. 31, pp. 145–156, 2019. @article{Wang2019j, Locating relevant information in text is an important aspect of the reading process, however relatively few studies have examined this, especially for logographic languages such as Chinese. The present study examines eye movement behaviour during search for a target word in Chinese sentences, compared with reading the sentences for comprehension. Although there were clear effects of word frequency during reading for comprehension, the study shows no evidence for an influence of the word frequency of non-target words on eye movement behaviour during target word search. The results are in line with previous research undertaken in English (Rayner, K., & Fischer, M. H. (1996). Mindless reading revisited: Eye movements during reading and scanning are different. Perception & Psychophysics, 58, 734–747.), such that during search for a target word, eye movement behaviour for non-target words is largely driven by superficial processing of those words. The study also highlights the prevalence of word skipping, indicating that words are often sampled only in visually degraded parafoveal vision during target word search in Chinese. |
Kayleigh L. Warrington; Victoria A. McGowan; Kevin B. Paterson; Sarah J. White Effects of adult aging on letter position coding in reading: Evidence from eye movements Journal Article In: Psychology and Aging, vol. 34, no. 4, pp. 598–612, 2019. @article{Warrington2019, It is well-established that young adults encode letter position flexibly during natural reading. However, given the visual changes that occur with normal aging, it is important to establish whether letter position coding is equivalent across adulthood. In 2 experiments, young (18–25 years) and older (65+ years) adults' eye movements were recorded while reading sentences with words containing transposed adjacent letters. Transpositions occurred at beginning (rpoblem), internal (porblem), or end (problme) locations in words. In Experiment 1, these transpositions were present throughout reading. By comparison, Experiment 2 used a gaze-contingent paradigm such that once the reader's gaze moved past a word containing a transposition, this word was shown correctly and did not subsequently change. Both age groups showed normal levels of comprehension for text including words with transposed letters. The pattern of letter transposition effects on eye movements was similar for the young and older adults, with greater increases in reading times when external relative to internal letters were transposed. In Experiment 1, however, effects of word beginning transpositions during rereading were larger for the older adults. In Experiment 2 there were no interactions, confirming that letter position coding is similar for both age groups at least during first-pass processing of words. 
These findings show that flexibility in letter position encoding during the initial processing of words is preserved across adulthood, although the interaction effect in rereading in Experiment 1 also suggests that older readers may use more stringent postlexical verification processes, for which the accuracy of word beginning letters is especially important. |
Veronica Whitford; Debra Titone Lexical entrenchment and cross-language activation: Two sides of the same coin for bilingual reading across the adult lifespan Journal Article In: Bilingualism: Language and Cognition, vol. 22, no. 1, pp. 58–77, 2019. @article{Whitford2019, We used eye movement measures of paragraph reading to examine whether two consequences of bilingualism, namely, reduced lexical entrenchment (i.e., reduced lexical quality and accessibility arising from less absolute language experience) and cross-language activation (i.e., simultaneous co-activation of target- and non-target-language lexical representations) interact during word processing in bilingual younger and older adults. Specifically, we focused on the interaction between word frequency (a predictor of lexical entrenchment) and cross-language neighborhood density (a predictor of cross-language activation) during first- and second-language reading. Across both languages and both age groups, greater cross-language (and within-language) neighborhood density facilitated word processing, indexed by smaller word frequency effects. Moreover, word frequency effects and, to a lesser extent, cross-language neighborhood density effects were larger in older versus younger adults, potentially reflecting age-related changes in lexical accessibility and cognitive control. Thus, lexical entrenchment and cross-language activation multiplicatively influence bilingual word processing across the adult lifespan. |
Bogusława Whyatt In search of directionality effects in the translation process and in the end product Journal Article In: Translation, Cognition and Behavior, vol. 2, no. 1, pp. 79–100, 2019. @article{Whyatt2019, This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation, and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality. |
Anne Wienholz; Amy M. Lieberman Semantic processing of adjectives and nouns in American Sign Language: Effects of reference ambiguity and word order across development Journal Article In: Journal of Cultural Cognitive Science, pp. 1–18, 2019. @article{Wienholz2019, When processing spoken language sentences, listeners continuously make and revise predictions about the upcoming linguistic signal. In contrast, during comprehension of American Sign Language (ASL), signers must simultaneously attend to the unfolding linguistic signal and the surrounding scene via the visual modality. This may affect how signers activate potential lexical candidates and allocate visual attention as a sentence unfolds. To determine how signers resolve referential ambiguity during real-time comprehension of ASL adjectives and nouns, we presented deaf adults (n = 18, 19-61 years) and deaf children (n = 20, 4-8 years) with videos of ASL sentences in a visual world paradigm. Sentences had either an adjective-noun (e.g., ''SEE YELLOW WHAT? FLOWER'') or a noun-adjective (e.g., ''SEE FLOWER WHICH? YELLOW'') structure. The degree of ambiguity in the visual scene was manipulated at the adjective and noun levels (e.g., including one or more yellow items and one or more flowers in the visual array). We investigated effects of ambiguity and word order on target looking at early and late points in the sentence. Analysis revealed that adults and children made anticipatory looks to a target when it could be identified early in the sentence. Further, signers looked more to potential lexical candidates than to unrelated competitors in the early window, and more to matched than unrelated competitors in the late window. Children's gaze patterns largely aligned with those of adults, although they made fewer anticipatory fixations to the target in the early window and were more susceptible to competitors in the late window. 
Together, these findings suggest that signers allocate referential attention strategically based on the amount and type of ambiguity at different points in the sentence when processing adjectives and nouns in ASL. |
Glenn P. Williams; Anuenue Kukona; Yuki Kamide Spatial narrative context modulates semantic (but not visual) competition during discourse processing Journal Article In: Journal of Memory and Language, vol. 108, pp. 1–18, 2019. @article{Williams2019b, Recent research highlights the influence of (e.g., task) context on conceptual retrieval. To assess whether conceptual representations are context-dependent rather than static, we investigated the influence of spatial narrative context on accessibility for lexical-semantic information by exploring competition effects. In two visual world experiments, participants listened to narratives describing semantically related (piano-trumpet; Experiment 1) or visually similar (bat-cigarette; Experiment 2) objects in the same or separate narrative locations while viewing arrays displaying these (‘target' and ‘competitor') objects and other distractors. Upon re-mention of the target, we analysed eye movements to the competitor. In Experiment 1, we observed semantic competition only when targets and competitors were described in the same location; in Experiment 2, we observed visual competition regardless of context. We interpret these results as consistent with context-dependent approaches, such that spatial narrative context dampens accessibility for semantic but not visual information in the visual world. |
Matthew B. Winn; Alan Kan; Ruth Y. Litovsky Temporal dynamics and uncertainty in binaural hearing revealed by anticipatory eye movements Journal Article In: The Journal of the Acoustical Society of America, vol. 145, no. 2, pp. 676–691, 2019. @article{Winn2019, Accurate perception of binaural cues is essential for left-right sound localization. Much literature focuses on threshold measures of perceptual acuity and accuracy. This study focused on suprathreshold perception using an anticipatory eye movement (AEM) paradigm designed to capture subtle aspects of perception that might not emerge in behavioral-motor responses, such as the accumulation of certainty, and rapid revisions in decision-making. Participants heard interaural timing differences (ITDs) or interaural level differences in correlated or uncorrelated narrowband noises, respectively. A cartoon ball moved behind an occluder and then emerged from the left or right side, consistent with the binaural cue. Participants anticipated the correct answer (before it appeared) by looking where the ball would emerge. Results showed quicker and more steadfast gaze fixations for stimuli with larger cue magnitudes. More difficult stimuli elicited a wider distribution of saccade times and greater number of corrective saccades before final judgment, implying perceptual uncertainty or competition. Cue levels above threshold elicited some wrong-way saccades that were quickly corrected. Saccades to ITDs were earlier and more reliable for low-frequency noises. The AEM paradigm reveals the time course of uncertainty and changes in perceptual decision-making for supra-threshold binaural stimuli even when behavioral responses are consistently correct. |
Daniel Schmidtke; Victor Kuperman A paradox of apparent brainless behavior: The time-course of compound word recognition Journal Article In: Cortex, vol. 116, pp. 250–267, 2019. @article{Schmidtke2019, A review of the behavioral and neurophysiological estimates of the time-course of compound word recognition brings to light a paradox whereby temporal activity associated with lexical variables in behavioral studies predates temporal activity of seemingly comparable lexical processing in neuroimaging studies. However, under the assumption that brain activity is a cause of behavior, the earliest reliable behavioral effect of a lexical variable must represent an upper temporal bound for the origin of that effect in the neural record. The present research provides these behavioral bounds for lexical variables involved in compound word processing. We report data from five naturalistic reading studies in which participants read sentences containing English compound words, and apply a distributional technique of survival analysis to resulting eye-movement fixation durations (Reingold & Sheridan, 2014). The results of the survival analysis of the eye-movement record place a majority of the earliest discernible onsets of orthographic, morphological, and semantic effects at less than 200 ms (with a range of 138–269 ms). Our results place constraints on the absolute time-course of effects reported in the neurolinguistic literature, and support theories of complex word recognition which posit early simultaneous access of form and meaning. |
Elizabeth R. Schotter; Chuchu Li; Tamar H. Gollan In: Quarterly Journal of Experimental Psychology, vol. 72, no. 8, pp. 2032–2045, 2019. @article{Schotter2019a, Bilinguals occasionally produce language intrusion errors (inadvertent translations of the intended word), especially when attempting to produce function word targets, and often when reading aloud mixed-language paragraphs. We investigate whether these errors are due to a failure of attention during speech planning, or failure of monitoring speech output, by classifying errors based on whether and when they were corrected, and investigating eye movement behaviour surrounding them. Prior research on this topic has primarily tested alphabetic languages (e.g., Spanish-English bilinguals) in which part of speech is confounded with word length, which is related to word skipping (i.e., decreased attention). Therefore, we tested 29 Chinese-English bilinguals whose languages differ in orthography, visually cueing language membership, and for whom part of speech (in Chinese) is less confounded with word length. Despite the strong orthographic cue, Chinese-English bilinguals produced intrusion errors with similar effects as previously reported (e.g., especially with function word targets written in the dominant language). Gaze durations did differ by whether errors were made and corrected or not, but these patterns were similar for function and content words and therefore cannot explain part of speech effects. However, bilinguals regressed to words produced as errors more often than to correctly produced words, but regressions facilitated correction of errors only for content, not for function words. These data suggest that the vulnerability of function words to language intrusion errors primarily reflects automatic retrieval and failures of speech monitoring mechanisms to stop function versus content word errors after they are planned for production. |
Elizabeth R. Schotter; Titus Malsburg; Mallorie Leinenger Forced fixations, trans-saccadic integration, and word recognition: Evidence for a hybrid mechanism of saccade triggering in reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 4, pp. 677–688, 2019. @article{Schotter2019b, Recent studies using the gaze-contingent boundary paradigm reported a reversed preview benefit: shorter fixations on a target word when an unrelated preview was easier to process than the fixated target (Schotter & Leinenger, 2016). This is explained via forced fixations: short fixations on words that would ideally be skipped (because lexical processing has progressed enough) but could not be because saccade planning reached a point of no return. This contrasts with accounts of preview effects via trans-saccadic integration: shorter fixations on a target word when the preview is more similar to it (see Cutter, Drieghe, & Liversedge, 2015). In addition, if the previewed word, not the fixated target, determines subsequent eye movements, is it also this word that enters the linguistic processing stream? We tested these accounts by having 24 subjects read 150 sentences in the boundary paradigm in which both the preview and target were initially plausible but later one, both, or neither became implausible, providing an opportunity to probe which one was linguistically encoded. In an intervening buffer region, both words were plausible, providing an opportunity to investigate trans-saccadic integration. The frequency of the previewed word affected progressive saccades (i.e., forced fixations) as well as when trans-saccadic integration failure increased regressions, but only the implausibility of the target word affected semantic encoding. 
These data support a hybrid account of saccadic control (Reingold, Reichle, Glaholt, & Sheridan, 2012) driven by incomplete (often parafoveal) word recognition, which occurs prior to complete (often foveal) word recognition. |
Zeshu Shao; Jeroen Paridon; Fenna Poletiek; Antje S. Meyer Effects of phrase and word frequencies in noun phrase production Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 1, pp. 147–165, 2019. @article{Shao2019, There is mounting evidence that the ease of producing and understanding language depends not only on the frequencies of individual words but also on the frequencies of word combinations. However, in two picture description experiments, Janssen and Barber (2012) found that French and Spanish speakers' speech onset latencies for short phrases depended exclusively on the frequencies of the phrases but not on the frequencies of the individual words. They suggested that speakers retrieved phrase-sized units from the mental lexicon. In the present study, we examined whether the time required to plan complex noun phrases in Dutch would likewise depend only on phrase frequencies. Participants described line drawings in phrases such as rode schoen [red shoe] (Experiments 1 and 2) or de rode schoen [the red shoe] (Experiment 3). Replicating Janssen and Barber's findings, utterance onset latencies depended on the frequencies of the phrases but, deviating from their findings, also depended on the frequencies of the adjectives in adjective-noun phrases and the frequencies of the nouns in determiner-adjective-noun phrases. We conclude that individual word frequencies and phrase frequencies both affect the time needed to produce noun phrases and discuss how these findings may be captured in models of the mental lexicon and of phrase production. |
Signy Sheldon; Kelly Cool; Nadim El-Asmar The processes involved in mentally constructing event- and scene-based autobiographical representations Journal Article In: Journal of Cognitive Psychology, vol. 31, pp. 261–275, 2019. @article{Sheldon2019, Autobiographical experiences can be mentally constructed as generalised events or as spatial scenes. We investigated the commonalities and distinctions in using episodic and visual imagery processes to imagine autobiographical scenarios as events or scenes. Participants described personal scenarios framed as future events or spatial scenes. We analyzed the number and type of episodic details within the descriptions. To measure imagery processing, we monitored eye movements and examined the impact of viewing an imagery-disrupting stimulus (Dynamic Visual Noise; DVN) when these descriptions were made. We found that events were described with more generalised details and scenes with more perceptual details. DVN reduced the number of episodic details generated for all descriptions, and eye fixation rates negatively correlated with the number of these details that were generated. This suggests that different content is used to imagine event- or scene-based experiences and that imagery contributes similarly to the episodic specificity of these imaginations. |
Anthony Shook; Viorica Marian Covert co-activation of bilinguals' non-target language: Phonological competition from translations Journal Article In: Linguistic Approaches to Bilingualism, vol. 9, no. 2, pp. 228–252, 2019. @article{Shook2019, When listening to spoken language, bilinguals access words in both of their languages at the same time; this co-activation is often driven by phonological input mapping to candidates in multiple languages during online comprehension. Here, we examined whether cross-linguistic activation could occur covertly when the input does not overtly cue words in the non-target language. When asked in English to click an image of a duck, English-Spanish bilinguals looked more to an image of a shovel than to unrelated distractors, because the Spanish translations of the words duck and shovel (pato and pala , respectively) overlap phonologically in the non-target language. Our results suggest that bilinguals access their unused language, even in the absence of phonologically overlapping input. We conclude that during bilingual speech comprehension, words presented in a single language activate translation equivalents, with further spreading activation to unheard phonological competitors. These findings support highly interactive theories of language processing. |
Timothy J. Slattery; Adam J. Parker Return sweeps in reading: Processing implications of undersweep-fixations Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 6, pp. 1948–1957, 2019. @article{Slattery2019, Models of eye-movement control during reading focus on reading single lines of text. However, with multiline texts, return sweeps, which bring fixation from the end of one line to the beginning of the next, occur regularly and influence ~20% of all reading fixations. Our understanding of return sweeps is still limited. One common feature of return sweeps is the prevalence of oculomotor errors. Return sweeps often initially undershoot the start of the line. Corrective saccades then bring fixation closer to the line start. The fixation occurring between the undershoot and the corrective saccade (undersweep-fixation) has important theoretical implications for the serial nature of lexical processing during reading, as it occurs on words ahead of the intended attentional target. Furthermore, since the attentional target of a return sweep will lie far outside the parafovea during the prior fixation, it cannot be lexically preprocessed during this prior fixation. We explore the implications of undersweep-fixations for ongoing processing and models of eye movements during reading by analysing two existing eye-movement data sets of multiline reading. |
Timothy J. Slattery; Martin R. Vasilev An eye-movement exploration into return-sweep targeting during reading Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 5, pp. 1197–1203, 2019. @article{Slattery2019a, Return-sweeps are essential eye movements that take the readers' eyes from the end of one line of text to the start of the next. While return-sweeps are common during normal reading, the eye-movement literature is dominated by single-line reading studies where no return-sweeps are needed. The present experiment was designed to explore what readers are targeting with their return-sweeps. Participants read two short stories by L. Frank Baum while their eye movements were being recorded. In one story, every line-initial word was highlighted by formatting it in bold, while the other story was presented normally (i.e., without any bolding). The bolding manipulation significantly reduced oculomotor error associated with return-sweeps, as these saccades landed closer to the left margin and were less likely to require corrective saccades compared to the control condition. However, despite this reduction in oculomotor error, the bolding had no influence on local fixation durations or global reading-time measures. Moreover, return-sweep landing sites were not impacted by line-initial word length, nor did the effect of bolding interact with the length of the line-initial word, suggesting that readers were not targeting the centre of line-initial words. We discuss the implication of these findings for return-sweep targeting and eye-movement control during reading. |
Renske S. Hoedemaker; Antje S. Meyer Planning and coordination of utterances in a joint naming task Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 4, pp. 732–752, 2019. @article{Hoedemaker2019, Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other's task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers' utterances and Speaker A's eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A's speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B's naming task. This suggests speakers were not fully incorporating their partner's task into their own speech planning. Moreover, Speaker A's eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner's objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor's task and suggest that effective coordination can be achieved without full corepresentation of the partner's task. |
Falk Huettig; Ernesto Guerra Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing Journal Article In: Brain Research, vol. 1706, pp. 196–208, 2019. @article{Huettig2019, There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle') while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition. |
Yueh-Nu Hung Fifth grade students reading a Chinese text with embedded errors: An eye movement miscue analysis study Journal Article In: Reading Psychology, vol. 40, pp. 397–424, 2019. @article{Hung2019a, This study adopted the eye movement miscue analysis research method to examine and illustrate the cognitive and psychological processes of meaning construction and error detection in reading Chinese. Eighteen Taiwanese grade five elementary students read a short Chinese text with six embedded errors. Results show that, as in earlier studies, only about a third of the errors were detected. Unlike in earlier research, the meaning group found more errors than did the error group. Reading miscues, eye movements, and the juxtaposition of the two sources of information helped to more fully illustrate the dynamic and complex processes of seeing, perceiving, reading aloud, and comprehending. |
Yueh-Nu Hung; Hui-Yu Kuo; Shih-Chieh Liao Seeing what they see: Elementary EFL students reading science texts Journal Article In: RELC Journal, pp. 1–15, 2019. @article{Hung2019, Science texts use various text features and multiple representations to communicate meaning to their readers. English science texts are challenging for elementary-level English as a foreign language (EFL) learners in Taiwan because they are familiar with reading language-controlled texts from textbooks. Teaching students to make use of various text features and visual representations will help them achieve a more successful science text reading experience. In this study, 27 Grade 6 Taiwanese students were instructed in science text reading strategies that included understanding text features, creating imagery, and using visual representations. Before and after the instruction, they took an English reading and writing test. Their eye movements during science text reading were recorded before and after the instruction to more fully understand their visual attention while reading English science texts. Eye movement performances such as number of fixations, mean fixation duration, and saccade size were examined. The findings showed that although the participants' English reading and writing performance improved in the post-test, they focussed more on the written language than the visuals in both tests. More visual representation reading strategies should therefore be taught to help young EFL students read and learn from science texts. |
Bernard I. Issa; Kara Morgan-Short Effects of external and internal attentional manipulations on second language grammar development: An eye-tracking study Journal Article In: Studies in Second Language Acquisition, vol. 41, no. 2, pp. 389–417, 2019. @article{Issa2019, The role of attention has been central to theoretical and empirical inquiries in second language (L2) acquisition. The current eye-tracking study examined how external and internal attentional manipulations (Chun, Golomb, & Turk-Browne, 2011) promote L2 grammatical development. Participants (n = 55) were exposed to Spanish direct-object pronouns under external or internal attentional manipulations, which were implemented through textual input enhancement or structured input practice, respectively. Results for both manipulations indicated that (a) learner attentional allocation to the form was affected; (b) L2 gains were evidenced, although only the internal manipulation led to above-chance performance; and (c) L2 gains were related to attention allocated to the form under the external manipulation and to a lesser extent the internal manipulation. Overall, findings may inform theoretical perspectives on attention and elucidate cognitive processes related to L2 instruction. |
Aine Ito Prediction of orthographic information during listening comprehension: A printed-word visual world study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 11, pp. 2584–2596, 2019. @article{Ito2019, Two visual world eye-tracking experiments examined the role of orthographic information in the visual context in pre-activation of orthographic and phonological information using Japanese. Participants heard sentences that contained a predictable target word and viewed a display showing four words in a logogram, kanji (Experiment 1), or in a phonogram, hiragana (Experiment 2). The four words included either the target word (e.g., 魚 /sakana/; fish), an orthographic competitor (e.g., 角 /tuno/; horn), a phonological competitor (e.g., 桜 /sakura/; cherry blossom), or an unrelated word (e.g., 本 /hon/; book), together with three distractor words. The orthographic competitor was orthographically or phonologically dissimilar to the target in hiragana. In Experiment 1, target and orthographic competitor words attracted more fixations than unrelated words before the target word was mentioned, suggesting that participants pre-activated the orthographic form of the target word. In Experiment 2, target and phonological competitor words attracted more predictive fixations than unrelated words, but orthographic competitor words did not, suggesting a critical role of the visual context. This pre-activation pattern does not fit with the pattern of lexical activation in auditory word recognition, where orthography and phonology interact. However, it is compatible with the pattern of lexical activation in spoken word production, where orthographic information is not automatically activated, in line with production-based prediction accounts. |
Juhani Järvikivi; Sarah Schimke; Pirita Pyykkönen-Klauck Understanding indirect reference in a visual context Journal Article In: Discourse Processes, vol. 56, no. 2, pp. 117–135, 2019. @article{Jaervikivi2019, We often use pronouns like it or they without explicitly mentioned antecedents. We asked whether the human processing system that resolves such indirect pronouns uses the immediate visual-sensory context in multimodal discourse. Our results showed that people had no difficulty understanding conceptually central referents, whether explicitly mentioned or not, whereas referents that were conceptually peripheral were much harder to understand when left implicit than when they had been mentioned before. Importantly, we showed that people could not recover this information from the visual environment. The results suggest that the semantic–conceptual relatedness of the potential referent with respect to the defining events and actors in the current discourse representation is a determining factor of how easy it is to establish the referential link. The visual environment is only integrated to the extent that it is relevant or acts as a fall-back when the referential search within the current discourse representation fails. |
Yu-Cin Jian Reading instructions facilitate signaling effect on science text for young readers: An eye-movement study Journal Article In: International Journal of Science and Mathematics Education, vol. 17, no. 3, pp. 503–522, 2019. @article{Jian2019, Science texts often use visual representations (e.g. diagrams, graphs, photographs) to help readers learn science knowledge. Reading an illustrated text for learning is one type of multimedia learning. Empirical research has increasingly confirmed the signaling principle's effectiveness in multimedia learning. Highlighting correspondences between text and pictures benefits learning outcomes. However, the signaling effect's cognitive processes and its generalizability to young readers are unknown. This study clarified these aspects using eye-tracking technology and reading tests. Eighty-nine sixth-grade students read an illustrated science text in one of three conditions: reading material with signals, without signals (identical labels of Diagram 1 and Diagram 2 in text and illustration), and with signals combined with reading instructions. Findings revealed that the signaling principle alone cannot be generalized to young readers. Specifically, “Diagram 1” and “Diagram 2” in parentheses mixed with science text content had limited signaling effect for students and reading instructions are necessary. Eye movements reflected cognitive processes of science reading; students who received reading instructions employed greater cognitive effort and time in reading illustrations and tried to integrate textual and pictorial information using signals. |
Yu-Cin Jian; Jia-Han Su; Yong-Ru Hsiao Differentiated processing strategies for science reading among sixth-grade students: Exploration of eye movements using cluster analysis Journal Article In: Computers and Education, vol. 142, pp. 1–14, 2019. @article{Jian2019a, This study used eye-tracking technology to investigate the different types of reading strategies that sixth graders adopt to comprehend illustrated science articles, as well as the relationship between reading process and reading comprehension. The participants were 122 sixth-grade students whose eye movements were monitored during silent reading of a science article containing one representational diagram and one explanatory diagram. Cluster analysis was performed based on five eye movement indices: first-pass (initial processing)/look-back (late-stage processing) total fixation duration on texts and diagrams, and number of saccades between text and diagram. Results showed that sixth graders adopted four types of reading strategy when reading the science article: Initial-global-scan students (21%), when first reading the science text and examining the science diagram, tended to quickly scan the material, then read it carefully, and engaged in saccade behavior. Shallow-processing students (58%) spent little time on the text or diagram during their first-pass and second-pass reading, and also seldom engaged in saccade behavior. Words-dominated students (12%) spent a long time reading the text during first-pass reading. Diagram-dominated students (9%) spent considerable time and effort on diagrams during first-pass reading, and outperformed the other three groups in the reading comprehension test. Students who were proficient at using diagram information could distinguish the importance of various types of science diagrams; they also spent much mental effort on the explanatory diagram compared with the representational diagram. 
A multiple regression analysis indicated first-pass total fixation durations on the diagram predicted reading comprehension performance. |
Zhen Qin; Annie Tremblay; Jie Zhang In: Journal of Phonetics, vol. 73, pp. 144–157, 2019. @article{Qin2019, This study investigates how within-category tonal information influences native and non-native Mandarin listeners' spoken word recognition. Previous eye-tracking research has shown that the within-category phonetic details of consonants and vowels constrain lexical activation. However, given the highly dynamic and variable nature of lexical tones, it is unclear whether the within-category phonetic details of lexical tones would similarly modulate lexical activation. Native Mandarin listeners and proficient adult English-speaking Mandarin learners were tested in a visual-world eye-tracking experiment. The target word contained a level tone and the competitor word contained a high-rising tone, or vice versa. The auditory stimuli were manipulated such that the target tone was either canonical (Standard condition), phonetically more distant from the competitor (Distant condition), or phonetically closer to the competitor (Close condition). Growth curve analyses on fixations suggest that, compared to the Standard condition, Mandarin listeners' target-over-competitor word activation was enhanced in the Distant condition and inhibited in the Close condition, whereas English listeners' target-over-competitor word activation was inhibited in both the Distant and Close conditions. These results suggest that within-category tonal information influences both native and non-native Mandarin listeners' word recognition, but does so differently for the two groups. |
Qingqing Qu Co-activation of taxonomic and thematic relations in spoken word comprehension: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 10, pp. 964, 2019. @article{Qu2019, Evidence from behavioral, computational-linguistic, and neuroscience studies supports the view that semantic knowledge is represented in (at least) two semantic systems (i.e., taxonomic and thematic systems). It remains unclear whether, when, and to what extent taxonomic and thematic relations are co-activated. The present study investigated the relative strength of the co-activation of the two types of semantic representations when both types of semantic relations are simultaneously presented. In a visual-world task, participants listened to a spoken target word and looked at a visual display consisting of a taxonomic competitor, a thematic competitor, and two distractors. The growth curve analysis revealed that, although taxonomic competitors were fixated more than thematic competitors, both types of competitors started to receive more fixations than distractors in a similar time window, suggesting that taxonomic and thematic relations are co-activated by the spoken word. |
Sadaf Rahmanian; Victor Kuperman Spelling errors impede recognition of correctly spelled word forms Journal Article In: Scientific Studies of Reading, vol. 23, no. 1, pp. 24–36, 2019. @article{Rahmanian2019, Spelling errors are typically thought of as an effect of a word's weak orthographic representation in an individual's mind. What if the existence of spelling errors is a partial cause of effortful orthographic learning and word recognition? We selected words that had homophonic substandard spelling variants of varying frequency (e.g., innocent and inocent occur in 69% and 31% of occurrences of the word, respectively). Conventional spellings were presented for recognition either in context (Experiment 1, eye-tracking sentence reading) or in isolation (Experiment 2, lexical decision). Words elicited longer fixation durations and lexical decision latencies if there was more uncertainty (higher entropy) regarding which spelling is the preferred one. This inhibitory effect was not modulated by spelling or other reading skills. This finding is in line with theories of learning that predict that spelling errors weaken associations between conventional spellings and the word's meaning. |
Tracy Reuter; Arielle Borovsky; Casey Lew-Williams Predict and redirect: Prediction errors support children's word learning Journal Article In: Developmental Psychology, vol. 55, no. 8, pp. 1656–1665, 2019. @article{Reuter2019a, According to prediction-based learning theories, erroneous predictions support learning. However, empirical evidence for a relation between prediction error and children's language learning is currently lacking. Here we investigated whether and how prediction errors influence children's learning of novel words. We hypothesized that word learning would vary as a function of 2 factors: the extent to which children generate predictions, and the extent to which children redirect attention in response to errors. Children were tested in a novel word learning task, which used eye tracking to measure (a) real-time semantic predictions to familiar referents, (b) attention redirection following prediction errors, and (c) learning of novel referents. Results indicated that predictions and prediction errors interdependently supported novel word learning, via children's efficient redirection of attention. This study provides a developmental evaluation of prediction-based theories and suggests that erroneous predictions play a mechanistic role in children's language learning. |
Sarah Risse; Stefan Seelig Stable preview difficulty effects in reading with an improved variant of the boundary paradigm Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 7, pp. 1632–1645, 2019. @article{Risse2019, Using gaze-contingent display changes in the boundary paradigm during sentence reading, it has recently been shown that parafoveal word-processing difficulties affect fixations on words to the right of the boundary. Current interpretations of this post-boundary preview difficulty effect range from delayed parafoveal-on-foveal effects in parallel word-processing models to forced fixations in serial word-processing models. However, these findings are based on an experimental design that, while allowing preview difficulty effects to be isolated, might have established a bias with respect to asymmetries in parafoveal preview benefit for high-frequency and low-frequency target words. Here, we present a revision of this paradigm varying the preview's lexical frequency and keeping the target word constant. We found substantial effects of preview difficulty on fixation durations after the boundary, confirming that preview processing affects oculomotor decisions not only via trans-saccadic integration of preview and target word information. An additional time-course analysis showed that the preview difficulty effect was significant across the full fixation duration distribution on the target word, without any evidence on the pretarget word before the boundary. We discuss implications of the accumulating evidence of post-boundary preview difficulty effects for models of eye movement control during reading. |
Erin K. Robertson; Jennifer E. Gallant Eye tracking reveals subtle spoken sentence comprehension problems in children with dyslexia Journal Article In: Lingua, vol. 228, pp. 1–17, 2019. @article{Robertson2019, Children with dyslexia who did not have SLI (n = 31) and typically-developing (TD |
Isabel R. Rodríguez-Ortiz; Francisco J. Moreno-Pérez; Pablo Delgado; David Saldaña The development of anaphora resolution in Spanish Journal Article In: Journal of Psycholinguistic Research, vol. 48, no. 4, pp. 797–817, 2019. @article{RodriguezOrtiz2019, The present study focuses on the development of Spanish pronominal processing. We investigate whether the pronoun interpretation problem (i.e., reflexive pronouns comprehension is resolved at an earlier age than that of personal pronouns, also known as the Delay of the Principle B Effect), which has been documented in other languages, also occurs in Spanish. For this purpose, we conducted two experiments including pronoun resolution tasks. In Experiment 1, a task adapted from the experimental paradigm proposed by Love et al. (J Psycholinguist Res 38:285–304, 2009. https://doi.org/10.1007/s10936-009-9103-9) was used, which examines the off-line processing of the Spanish pronouns se and le. In Experiment 2, on-line processing of the same pronouns was evaluated with eye-tracking, using a paradigm developed by Thompson and Choy (J Psycholinguist Res 38:255–283, 2009. https://doi.org/10.1007/s10936-009-9105-7). Forty-three participants aged 4–16 years completed both experiments. Results indicated that there is no developmental asymmetry in the acquisition of successful resolution of the two types of anaphora in Spanish: from age 4, reflexive and clitic pronouns are processed with the same degree of accuracy. |
Jens Roeser; Mark Torrance; Thom Baguley Advance planning in written and spoken sentence production Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 11, pp. 1983–2009, 2019. @article{Roeser2019, Response onset latencies for sentences that start with a conjoined noun phrase are typically longer than for sentences starting with a simple noun phrase. This suggests that advance planning has phrasal scope, which may or may not be lexically driven. All previous studies have involved spoken production, leaving open the possibility that effects are, in part, modality-specific. In 3 image-description experiments (Ns = 32) subjects produced sentences with conjoined (e.g., Peter and the hat) and simple initial noun phrases (e.g., Peter) in both speech and writing. Production onset latencies and participants' eye movements were recorded. Ease of lexical retrieval of sentences' second noun was assessed by manipulating codability (Experiment 1) and by gaze-contingent name priming (Experiments 2 and 3). Findings confirmed a modality-independent phrasal scope for advance planning but did not support obligatory lexical retrieval beyond the sentence-initial noun. This research represents the first direct experimental comparison of sentence planning in speech and writing. |
Koen Rummens; Bilge Sayim Disrupting uniformity: Feature contrasts that reduce crowding interfere with peripheral word recognition Journal Article In: Vision Research, vol. 161, pp. 25–35, 2019. @article{Rummens2019, Peripheral word recognition is impaired by crowding, the harmful influence of surrounding objects (flankers) on target identification. Crowding is usually weaker when the target and the flankers differ (for example in color). Here, we investigated whether reducing crowding at syllable boundaries improved peripheral word recognition. In Experiment 1, a target letter was flanked by single letters to the left and right and presented at 8° in the lower visual field. Target and flankers were either the same or different in regard to contrast polarity, color, luminance, and combined color/luminance. Crowding was reduced when the target differed from the flankers in contrast polarity, but not in any of the other conditions. Using the same color and luminance values as in Experiment 1, we measured recognition performance (speed and accuracy) for uniform (e.g., all letters black), congruent (e.g., alternating black and white syllables), and incongruent (e.g., alternating black and white non-syllables) words in Experiment 2. Participants verbally reported the target word, briefly displayed at 8° in the lower visual field. Congruent and incongruent words were recognized more slowly than uniform words in the opposite contrast polarity condition, but not in the other conditions. Our results show that the same feature contrast between the target and the flankers that yielded reduced crowding impaired peripheral word recognition when applied to syllables and non-syllabic word parts. We suggest that a potential advantage of reduced crowding at syllable boundaries in word recognition is counteracted by the disruption of word uniformity. |
Rachel A. Ryskin; Chigusa Kurumada; Sarah Brown-Schmidt Information integration in modulation of pragmatic inferences during online language comprehension Journal Article In: Cognitive Science, vol. 43, no. 8, pp. 1–35, 2019. @article{Ryskin2019, Upon hearing a scalar adjective in a definite referring expression such as "the big…," listeners typically make anticipatory eye movements to an item in a contrast set, such as a big glass in the context of a smaller glass. Recent studies have suggested that this rapid, contrastive interpretation of scalar adjectives is malleable and calibrated to the speaker's pragmatic competence. In a series of eye-tracking experiments, we explore the nature of the evidence necessary for the modulation of pragmatic inferences in language comprehension, focusing on the complementary roles of top-down information (knowledge about the particular speaker's pragmatic competence) and bottom-up cues (distributional information about the use of scalar adjectives in the environment). We find that bottom-up evidence alone (e.g., the speaker says "the big dog" in a context with one dog), in large quantities, can be sufficient to trigger modulation of the listener's contrastive inferences, with or without top-down cues to support this adaptation. Further, these findings suggest that listeners track and flexibly combine multiple sources of information in service of efficient pragmatic communication. |
Astrid Kraal; Paul W. Broek; Arnout W. Koornneef; Lesya Y. Ganushchak; Nadira Saab Differences in text processing by low- and high-comprehending beginning readers of expository and narrative texts: Evidence from eye movements Journal Article In: Learning and Individual Differences, vol. 74, pp. 101752, 2019. @article{Kraal2019, The present study investigated on-line text processing of second-grade low- and high-comprehending readers by recording their eye movements as they read expository and narrative texts. For narrative texts, the reading patterns of low- and high-comprehending readers revealed robust differences consistent with prior findings for good versus struggling readers (e.g., longer first- and second-pass reading times for low-comprehending readers). For expository texts, however, the differences in the reading patterns of low- and high-comprehending readers were attenuated. These results suggest that low-comprehending readers adopt a suboptimal processing approach for expository texts: relative to their processing approach for narrative texts, they either do not adjust their reading strategy or they adjust towards a more cursory strategy. Both processing approaches are suboptimal because expository texts tend to demand more, rather than less, cognitive effort of the reader than narrative texts. We discuss implications for (reading) education. |
Edmundo Kronmüller; Ira Noveck How do addressees exploit conventionalizations? From a negative reference to an ad hoc implicature Journal Article In: Frontiers in Psychology, vol. 10, pp. 1461, 2019. @article{Kronmueller2019, A negative reference, such as "not the sculpture" (where the sculpture is a name the speaker had only just invented to describe an unconventional-looking object and where the negation is saying that she does not currently desire that object), seems like a perilous and linguistically underdetermined way to point to another object, especially when there are three objects to choose from. To succeed, it obliges listeners to rely on contextual elements to determine which object the speaker has in mind. Prior work has shown that pragmatic inference-making plays a crucial role in such an interpretation process. When a negative reference leaves two candidate objects to choose from, listeners avoid an object that had been previously named, preferring instead an unconventional-looking object that had remained unnamed (Kronmüller et al., 2017). In the present study, we build on these findings by maintaining our focus on the two remaining objects (what we call the second and third objects) as we systematically vary two features. With respect to the second object - which is always unconventional looking - we vary whether or not it has been given a name. With respect to the third object - which is never named - we vary whether it is unconventional or conventional looking (for the latter, imagine an object that clearly resembles a bicycle). As revealed by selection patterns and eye movements in a visual-world eye-tracking paradigm, we replicate our previous findings that show that participants choose randomly when both of the remaining objects are unconventional looking and unnamed and that they opt reliably in favor of the most nondescript (the unnamed unconventional-looking) object when the second object is named. 
We show further that (unnamed) conventional-looking objects provide similar outcomes when juxtaposed with an unnamed unconventional object (participants prefer the most nondescript object over the conventional-looking one). Nevertheless, effects emerging from the conventional (unnamed) case are not as strong as those found when an unconventional object is named. In describing participants' choices in the non-random cases, we propose that addressees rely on the construction of an ad hoc implicature that takes into account which object can be eliminated from consideration, given that the speaker did not explicitly name it. |
Dave Kush; Brian Dillon; Ragnhild Eik; Adrian Staub Processing of Norwegian complex verbs: Evidence for early decomposition Journal Article In: Memory & Cognition, vol. 47, no. 2, pp. 335–350, 2019. @article{Kush2019, We examined the processing of Norwegian complex verbs—compounds consisting of a prepositional prefix and a verbal root—to investigate the lexical decomposition of such morphologically complex compounds. In an eyetracking-while-reading study, we tested whether reading time measures were significantly predicted by a compound verb's whole-word frequency, its root family frequency, or some combination thereof. The results suggest that whole-word and root family frequencies make independent contributions to first-fixation durations. Subsequent reading time measures were better predicted by either whole-word frequency, root family frequency, or both in tandem. We interpret these results as providing support for hybrid models of lexical representation, in which complex verbs are associated with an atomic (whole-word) representation linked to the lexical entries for the compound's constituent morphemes. |
Nayoung Kwon; Patrick Sturt Proximity and same case marking do not increase attraction effect in comprehension: Evidence from eye-tracking experiments in Korean Journal Article In: Frontiers in Psychology, vol. 10, pp. 1320, 2019. @article{Kwon2019, Previous studies have suggested that during on-line sentence processing, relevant memory representations are directly accessed based on cues at retrieval (McElree et al., 2003). Under this hypothesis, retrieval cues activate any memory representation with matching features, leading to the so-called attraction effect. This predicts that attraction effects would be modulated by the memory representation of a distractor. Here, we investigated this possibility, focusing on two factors (i.e., proximity to the retrieval point and the number of matching features) that would affect the representation of a distractor, in three Korean eye-tracking experiments. We predicted that if the memory representation of a distractor decays over time, a distractor close to a retrieval point would lead to stronger attraction effects. We also predicted that a distractor would be more likely to lead to interference when it shares a higher number of matching features with the retrieval cues of a dependency, relative to the target of the dependency, due to multiple direct accesses based on multiple matching cues. However, the results did not show evidence that proximity of a distractor to the retrieval point enhanced attraction effects. Likewise, there was no evidence that a greater number of matching cues of a distractor alone would trigger more mis-retrieval, in contrast to a previous finding that a greater number of mismatching cues of a licit antecedent in addition to a greater number of matching cues of a distractor did so (Parker and Phillips, 2017). 
On the other hand, the results suggested that a distractor marked with nominative case was more likely to be mis-retrieved as the subject of a verb, compared to a distractor marked with dative case, suggesting that the subject grammatical role is a critical cue for subject-verb agreement. These results are best compatible with the hypothesis that retrieval cues are weighted, possibly depending on the nature of the dependency that is currently processed. |
Anna K. Laurinavichyute; Irina A. Sekerina; Svetlana Alexeeva; Kristine Bagdasaryan; Reinhold Kliegl Russian Sentence Corpus: Benchmark measures of eye movements in reading in Russian Journal Article In: Behavior Research Methods, vol. 51, no. 3, pp. 1161–1178, 2019. @article{Laurinavichyute2019, This article introduces a new corpus of eye movements in silent reading—the Russian Sentence Corpus (RSC). Russian uses the Cyrillic script, which has not yet been investigated in cross-linguistic eye movement research. As in every language studied so far, we confirmed the expected effects of low-level parameters, such as word length, frequency, and predictability, on the eye movements of skilled Russian readers. These findings allow us to add Slavic languages using Cyrillic script (exemplified by Russian) to the growing number of languages with different orthographies, ranging from the Roman-based European languages to logographic Asian ones, whose basic eye movement benchmarks conform to the universal comparative science of reading (Share, 2008). We additionally report basic descriptive corpus statistics and three exploratory investigations of the effects of Russian morphology on the basic eye movement measures, which illustrate the kinds of questions that researchers can answer using the RSC. The annotated corpus is freely available from its project page at the Open Science Framework: https://osf.io/x5q2r/. |
Justin Lauro; Ana I. Schwartz Cognate effects on anaphor processing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 3, pp. 381–396, 2019. @article{Lauro2019, There are numerous studies demonstrating facilitated processing of cognates relative to noncognates for bilinguals, providing evidence that bilingual lexical access is language nonselective. We tested whether cross-language activation affects comprehension of larger units of meaning, focusing specifically on comprehension of anaphoric references. Highly proficient Spanish–English bilinguals read sentences either in English (Experiment 1) or Spanish (Experiment 2) while their eye movements were recorded. Sentences consisted of an initial clause with 2 nouns that were either cognates or noncognates, and a later clause with an anaphor that referred to either the first or second noun. In the English experiment, cognate status facilitated selection of the sentence's foundational noun, reflected by shorter reading times for cognate nouns in the first position. Processing of pronouns was facilitated when they referred to cognates, reflected by higher skipping rates and shorter reading times. Final selection of cognate referents was also facilitated, reflected by shorter total reading times, but only when the pronoun referred to the first noun. In the Spanish experiment, total reading times for cognate nouns were shorter, irrespective of their order of mention, reflecting a general cognate facilitation effect that was not affected by which noun was selected as the foundational structure. Spillover fixations from anaphors referring to cognates were shorter than noncognates, but only when they were the second-mentioned noun, suggesting that cognate status affected coreferencing for the more recently encountered noun. Implications for theories of cross-language activation and anaphoric reference are discussed. |
Miseon Lee Effects of case-marking on the anticipatory processing of Korean sentences Journal Article In: Journal of Cognitive Science, vol. 20, no. 3, pp. 339–364, 2019. @article{Lee2019b, The goal of this study was to explore the effect of case-marking information from pre-verbal arguments on the anticipatory processing of Korean sentences. More specifically, it was examined whether case-markers can be used to predict an upcoming argument even before it is introduced into the string. In our eye-tracking experiment using the visual-world paradigm, 24 adult native speakers of Korean showed significantly more anticipatory eye movements to the potential referent of a Theme object upon hearing the sequence of a nominative-marked NP and a dative-marked NP, as compared to when the second NP was accusative-marked. These results confirm the predictive mechanism of the parsing system and the case effect on prediction in Korean: that is, guided by the case-marking information available earlier in the input, the parser can predict a forthcoming argument and thus activate a structural representation of the currently processed sentence. In this way, a verb-final sentence can be interpreted incrementally and predictively at each moment of processing. |
Minna Lehtonen; Matti Varjokallio; Henna Kivikari; Annika Hultén; Sami Virpioja; Tero Hakala; Mikko Kurimo; Krista Lagus; Riitta Salmelin Statistical models of morphology predict eye-tracking measures during visual word recognition Journal Article In: Memory & Cognition, vol. 47, no. 7, pp. 1245–1269, 2019. @article{Lehtonen2019, We studied how statistical models of morphology that are built on different kinds of representational units, i.e., models emphasizing either holistic units or decomposition, perform in predicting human word recognition. More specifically, we studied the predictive power of such models at early vs. late stages of word recognition by using eye-tracking during two tasks. The tasks included a standard lexical decision task and a word recognition task that assumedly places less emphasis on postlexical reanalysis and decision processes. The lexical decision results showed good performance of Morfessor models based on the Minimum Description Length optimization principle. Models which segment words at some morpheme boundaries and keep other boundaries unsegmented performed well both at early and late stages of word recognition, supporting dual- or multiple-route cognitive models of morphological processing. Statistical models based on full forms fared better in late than early measures. The results of the second, multi-word recognition task showed that early and late stages of processing often involve accessing morphological constituents, with the exception of short complex words. Late stages of word recognition additionally involve predicting upcoming morphemes on the basis of previous ones in multimorphemic words. The statistical models based fully on whole words did not fare well in this task. Thus, we assume that the good performance of such models in global measures such as gaze durations or reaction times in lexical decision largely stems from postlexical reanalysis or decision processes. 
This finding highlights the importance of considering task demands in the study of morphological processing. |
Chi Yui Leung; Hitoshi Mikami; Lisa Yoshikawa Positive psychology broadens readers' attentional scope during L2 reading: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 10, pp. 2245, 2019. @article{Leung2019, While positive psychology has recently drawn increasing interest among researchers in the second language (L2) acquisition literature, little is known about the relationship between positive psychology and mental processes during L2 reading. To bridge the gap, the present study investigated whether and how positive psychology (self-efficacy) influences word reading strategies during L2 sentence reading. Based on previous studies, eye-movement patterns with first-fixation locations closer to the beginning of a word can be characterized as an attempt to process the word with a local strategy, whereas first-fixation locations farther from the beginning and closer to the center of a word can be considered an attempt to use a global strategy. Eye movements of a group of Japanese learners of English (N = 59) were monitored, and L2 reading self-efficacy was used to assess the participants' positive beliefs about their L2 reading skills. Based on Fredrickson's (1998) broaden-and-build theory, we predicted an effect of L2 reading self-efficacy on participants' first-fixation locations. Results from mixed-effects regression showed that while reading strategies depended in part on other factors such as L2 reading proficiency and word properties, L2 self-efficacy also influenced reading strategy. The present data suggest that while more self-efficacious L2 readers prefer a more efficient global strategy, attempting to read the word as a whole, less self-efficacious L2 readers tend to employ a local strategy, focusing more on sublexical information. These findings lend support to the broaden-and-build theory in the context of L2 processing. 
The present study has implications for how positive psychology works along with L2 proficiency in the development of strategic selection during reading. |
Lin Li; Sha Li; Fang Xie; Min Chang; Victoria A. McGowan; Jingxin Wang; Kevin B. Paterson In: Attention, Perception, and Psychophysics, vol. 81, pp. 2626–2634, 2019. @article{Li2019c, Older adults experience greater difficulty than young adults during both alphabetic and nonalphabetic reading. However, while this age-related reading difficulty may be attributable to visual and cognitive declines in older adulthood, the underlying causes remain unclear. With the present research, we focused on effects related to the visual complexity of written language. Chinese is ideally suited to investigating such effects, as characters in this logographic writing system can vary substantially in complexity (in terms of their number of strokes, i.e., lines and dashes) while always occupying the same square area of space, so that this complexity is not confounded with word length. Nonreading studies suggest that older adults have greater difficulty than young adults when recognizing characters with high compared to low numbers of strokes. The present research used measures of eye movements to investigate adult age differences in these effects during natural reading. Young adult (18–28 years) and older adult (65+ years) participants read sentences that included one of a pair of two-character target words matched for lexical frequency and contextual predictability, but composed of either high-complexity (>9 strokes) or low-complexity (≤7 strokes) characters. Typical patterns of age-related reading difficulty were observed. However, the effect of visual complexity on reading times for words was greater for the older than for the younger adults, due to the older readers experiencing greater difficulty identifying words containing many rather than few strokes. We interpret these findings in terms of the influence of subtle deficits in visual abilities on reading capabilities in older adulthood. |
Monica Y. C. Li; David Braze; Anuenue Kukona; Clinton L. Johns; Whitney Tabor; Julie A. Van Dyke; W. Einar Mencl; Donald P. Shankweiler; Kenneth R. Pugh; James S. Magnuson Individual differences in subphonemic sensitivity and phonological skills Journal Article In: Journal of Memory and Language, vol. 107, pp. 195–215, 2019. @article{Li2019d, Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have been mostly assumed to have underspecified (or “fuzzy”) phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification. |
Qianyu Li; Xuqian Chen; Qiaoning Su; Shun Liu; Jian Huang In: Language and Cognition, vol. 11, pp. 645–668, 2019. @article{Li2019e, We tested whether the proportion of typical sentences in a series of auditory sentences would lead people to adjust the strength of activation of world knowledge (i.e., retrieval rules adaptation) during comprehension. This issue is important because it could help clarify how people efficiently integrate different memory information in cognitive processes. In two experiments, all task materials were presented to participants as a whole package, in which proportions of typical sentences, with typical final locations, varied under different conditions. In Experiment 1, the proportion of typical sentences was equal to the atypical ones (i.e., 50% typical vs. 50% atypical), whereas in Experiment 2, the proportion of typical sentences was not equal to the atypical ones (i.e., 75% typical vs. 25% atypical, and 25% typical vs. 75% atypical). Visual fixation on the critical area in a visual display before/while hearing the critical words was compared across conditions, and across-condition differences were used as an index of the adaptation of the retrieval rule in the activation of world knowledge. The findings indicated that the adaptation of retrieval rules occurs throughout the whole test package of sentence comprehension, and the strength of activation of world knowledge in sentence comprehension can be adjusted. |
Sara T. K. Li; Susana T. L. Chung; Janet H. Hsiao Music-reading expertise modulates the visual span for English letters but not Chinese characters Journal Article In: Journal of Vision, vol. 19, no. 4, pp. 1–16, 2019. @article{Li2019f, Recent research has suggested that the visual span in stimulus identification can be enlarged through perceptual learning. Since both English and music reading involve left-to-right sequential symbol processing, music-reading experience may enhance symbol identification through perceptual learning particularly in the right visual field (RVF). In contrast, as Chinese can be read in all directions, and components of Chinese characters do not consistently form a left-right structure, this hypothesized RVF enhancement effect may be limited in Chinese character identification. To test these hypotheses, here we recruited musicians and nonmusicians who read Chinese as their first language (L1) and English as their second language (L2) to identify music notes, English letters, Chinese characters, and novel symbols (Tibetan letters) presented at different eccentricities and visual field locations on the screen while maintaining central fixation. We found that in English letter identification, significantly more musicians achieved above-chance performance in the center-RVF locations than nonmusicians. This effect was not observed in Chinese character or novel symbol identification. We also found that in music note identification, musicians outperformed nonmusicians in accuracy in the center-RVF condition, consistent with the RVF enhancement effect in the visual span observed in English-letter identification. These results suggest that the modulation of music-reading experience on the visual span for stimulus identification depends on the similarities in the perceptual processes involved. |
Sha Li; Laurien Oliver-Mighten; Lin Li; Sarah J. White; Kevin B. Paterson; Jingxin Wang; Kayleigh L. Warrington; Victoria A. McGowan Adult age differences in effects of text spacing on eye movements during reading Journal Article In: Frontiers in Psychology, vol. 9, pp. 2700, 2019. @article{Li2019, Large-scale changes in text spacing, such as removing the spaces between words, disrupt reading more for older (65+ years) than younger (18-30 years) adults. However, it is unknown whether older readers show greater sensitivity to simultaneous subtle changes in inter-letter and inter-word spacing encountered in everyday reading. To investigate this, we recorded young and older adults' eye movements while reading sentences in which inter-letter and inter-word spacing was normal, condensed (10% or 20% smaller than normal), or expanded (10% or 20% larger than normal). Each sentence included either a high or low frequency target word, matched for length and contextual predictability. Condensing but not expanding text spacing disrupted reading more for the older adults. Moreover, word frequency effects (the reading time cost for low compared to high frequency words) were larger for the older adults, consistent with aging effects on lexical processing in previous research. However, this age difference in the word frequency effect did not vary across spacing conditions, suggesting spacing did not further disrupt older readers' lexical processing. We conclude that visual rather than lexical processing is disrupted more for older readers when text spacing is condensed and discuss this finding in relation to common age-related visual deficits. |
Sunny S. J. Lin; Ming-Yi Hsieh In: International Journal of Human-Computer Interaction, vol. 35, no. 4-5, pp. 299–312, 2019. @article{Lin2019, Compared to reading text alone, pictures are regarded as making it easier for readers to comprehend the context. For EFL readers, reading behavior on text accompanied by pictures needs to be carefully inspected. The study aims to examine how the viewing behaviors of EFL beginners and intermediate readers differ when reading narrative paragraphs and accompanying pictures. Seventeen junior high and twenty-one senior high students served as EFL beginners and intermediate readers, respectively. Both groups read three consecutive screens with narrative texts and pictures illustrating the texts. The results showed that both beginners and intermediate readers paid more attention to the texts than to the pictures. The beginners fixated almost solely on the texts, with few fixations falling on the pictures, while the intermediates had more fixations on both texts and pictures. The eye-movement data in the specific AOIs showed that the intermediates made more references between text and pictures when they encountered difficult words or processed semantic meaning-making. The beginners were less efficient in reading, spent less fixation time on each screen, and encountered greater difficulties in comprehension than the intermediates. Based on the eye-movement data, a personalized strategy altering the display sequence could be provided to support EFL beginners: before narrative reading, a reminding message could be displayed onscreen guiding them to view standalone pictures and to inspect pictorial components carefully, serving as a macro-reading strategy. The personalization could also be realized by posing cognitive- and meta-cognitive-level questions during the inspection of pictures. |
Michael A. Johns; Jorge R. Valdés Kroff; Paola E. Dussias Mixing things up: How blocking and mixing affect the processing of codemixed sentences Journal Article In: International Journal of Bilingualism, vol. 23, no. 2, pp. 584–611, 2019. @article{Johns2019, Aims and objectives/purpose/research questions: The goal of this study is to determine if the way in which codemixed sentences are presented during experimental lab sessions affects the way they are processed, and how experimental design approximates (or not) patterns of language use in bilingual populations. Design/methodology/approach: An eye-tracking study was conducted comparing reading times on codemixed and unilingual Spanish sentences across two modes of presentation: (a) a blocked mode, where one block contained unilingual Spanish sentences and another one contained codemixed sentences; and (b) a mixed mode, where both unilingual and codemixed sentences were mixed together in a randomized fashion. Data and analysis: 20 heritage speakers of Spanish were tested. Four reading measures extracted from the eye-tracking data were subjected to linear mixed-effects regression, with significance determined via backwards likelihood ratio tests, to examine differences across modes of presentation. Findings/conclusions: Codemixes took significantly longer to process in the blocked mode than in the mixed mode. This is in line with corpus data suggesting that intra-sentential codemixing does not occur for long stretches of time and is broken up by unilingual discourse. Originality: While a few studies have hinted at the potential confounds related to the presentation of codemixed or language-switching stimuli, the direct effects of experimental manipulation coupled with insights from sociolinguistic or corpus-based studies have not been tested. 
Significance/implications: To better understand bilingual codemixing, as well as the cost (or lack thereof) associated with it, lab-based studies of codemixing should draw on insights from sociolinguistic and corpus-based research. The results of this study suggest that the experience that participants bring into the lab can interact with experimental design and produce unexpected results. |
Rebecca L. Johnson; Sarah Rose Slate; Allison R. Teevan; Barbara J. Juhasz The processing of blend words in naming and sentence reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 4, pp. 847–857, 2019. @article{Johnson2019, Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long, rated low in word familiarity, but were easily recognisable as blends. |
Barbara J. Juhasz; Heather Sheridan In: Memory & Cognition, vol. 48, no. 1, pp. 83–95, 2019. @article{Juhasz2019, Adults process words that are rated as being learned earlier in life faster than words that are rated as being acquired later in life. This age-of-acquisition (AoA) effect has been observed in a variety of word-recognition tasks when word frequency is controlled. AoA has also previously been found to influence fixation durations when words are embedded into sentences and eye movements are recorded. However, the time course of AoA effects during reading has been inconsistent across studies. The current study further explored the time course of AoA effects on distributions of first-fixation durations during reading. Early and late acquired words were embedded into matched neutral sentence frames. Participants read the sentences while their eye movements were recorded. AoA effects were observed in both early and late fixation duration measures, suggesting that AoA has an early and long-lasting effect on word-recognition processes during reading. Survival analysis revealed that the earliest discernable effect of AoA on distributions of first-fixation durations emerged beginning at 158 ms. This rapid influence of AoA was confirmed through the use of Vincentile plots, which demonstrated that the effect of AoA occurred early and was relatively consistent across the distribution of fixations. This pattern of results provides support for the direct lexical-control hypothesis, as well as the viewpoint that AoA may exert an influence at multiple loci within the mental lexicon. |
Efthymia C. Kapnoula; Arthur G. Samuel Voices in the mental lexicon: Words carry indexical information that can affect access to their meaning Journal Article In: Journal of Memory and Language, vol. 107, pp. 111–127, 2019. @article{Kapnoula2019, The speech signal carries both linguistic and non-linguistic information (e.g., a talker's voice qualities; referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word's meaning. A few studies support a dual-route model, in which inferences about the talker can guide access to meaning via a route external to the mental lexicon. It remains unclear whether indexical information is also encoded within the mental lexicon. The present study tests for indexical effects on spoken word recognition and referent selection within the mental lexicon. In two experiments, we manipulated voice-to-referent co-occurrence, while preventing participants from using indexical information in an explicit way. Participants learned novel words (e.g., bifa) and their meanings (e.g., kite), with each talker's voice linked (via systematic co-occurrence) to a specific referent (e.g., bifa spoken by speaker 1 referred to a specific picture of a kite). In testing, voice-to-referent mapping either matched that of training (congruent), or not (incongruent). Participants' looks to the target's referent were used as an index of lexical activation. Listeners looked faster at a target's referent on congruent than incongruent trials. The same pattern of results was observed in a third experiment, when testing was 24 hrs later. These results show that indexical information can be encoded in lexical representations and affect spoken word recognition and referent selection. Our findings are consistent with episodic and distributed views of the mental lexicon that assume multi-dimensional lexical representations. |
Hossein Karimi; Trevor Brothers; Fernanda Ferreira Phonological versus semantic prediction in focus and repair constructions: No evidence for differential predictions Journal Article In: Cognitive Psychology, vol. 112, pp. 25–47, 2019. @article{Karimi2019, Evidence suggests that the language processing system is predictive. Although past research has established prediction as a general tendency, it is not yet clear whether comprehenders can modulate their anticipatory strategies in response to cues based on sentence constructions. In two visual world eye-tracking experiments, we investigated whether focus constructions (not the hammer but rather the …) and repair disfluencies (the hammer uh I mean the …) would lead listeners to generate different patterns of predictions. In three offline tasks, we observed that participants preferred semantically related continuations (hammer – nail) following focus constructions and phonologically related continuations (hammer – hammock) following disfluencies. However, these offline preferences were not evident in participants' predictive eye-movements during online language processing: Semantically related (nail) and phonologically related words (hammock) received additional predictive looks regardless of whether the target word appeared in a disfluency or in a focus construction. However, significantly less semantic and phonological activation was observed in two "control" linguistic contexts in which predictive processing was discouraged. These findings suggest that although the prediction system is sensitive to sentence construction, it is not flexible enough to alter the type of prediction generated based on preceding context. |
Young-Suk Grace Kim; Yaacov Petscher; Christian Vorstius Unpacking eye movements during oral and silent reading and their relations to reading proficiency in beginning readers Journal Article In: Contemporary Educational Psychology, vol. 58, pp. 102–120, 2019. @article{Kim2019, Our understanding about the developmental similarities and differences between oral and silent reading and their relations to reading proficiency (word reading and reading comprehension) in beginning readers is limited. To fill this gap, we investigated 368 first graders' oral and silent reading using eye-tracking technology at the beginning and end of the school year. Oral reading took a longer time (greater rereading times and refixations) than silent reading, but showed greater development (greater reduction in rereading times and fixations) from the beginning to the end of the year. The relation of eye-movement behaviors to reading proficiency was such that, for example, less rereading time was positively related to reading proficiency, and the relation was stronger in oral reading than in silent reading. Moreover, the nature of relations between eye movements and reading skill varied as a function of the child's reading proficiency such that the relations were weaker for poor readers, particularly at the beginning of the year. The relations between eye movements and reading proficiency stabilized in the spring for children whose reading skill was 0.30 quantile and above, but weaker relations remained for readers below 0.30 quantile. These findings suggest the importance of examining eye-movement behaviors in both oral and silent reading modes and their developmental relations to reading proficiency. |
Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle Does direction matter? Linguistic asymmetries reflected in visual attention Journal Article In: Cognition, vol. 185, pp. 91–120, 2019. @article{Kluth2019, Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language. More specifically, it was assumed that people move their attention from the reference object to the located object. However, recent theoretical and empirical findings challenge the directionality of this attentional shift. In this article, we present the results of an empirical study based on predictions generated by computational cognitive models implementing different directionalities of attention. Moreover, we thoroughly analyze the computational models. While our results do not favor any of the implemented directionalities of attention, we found that two unknown sources of geometric information affect spatial language understanding. We provide modifications to the computational models that substantially improve their performance on empirical data. |
Faye Knickerbocker; Rebecca L. Johnson; Emma L. Starr; Anna M. Hall; Daphne M. Preti; Sarah Rose Slate; Jeanette Altarriba The time course of processing emotion-laden words during sentence reading: Evidence from eye movements Journal Article In: Acta Psychologica, vol. 192, pp. 1–10, 2019. @article{Knickerbocker2019, While recent research has explored the effect that positive and negative emotion words (e.g., happy or sad) have on the eye-movement record during reading, the current study examined the effect of positive and negative emotion-laden words (e.g., birthday or funeral) on eye movements. Emotion-laden words do not express a state of mind but have emotional associations and connotations. The current results indicated that both positive and negative emotion-laden words have a processing advantage over neutral words, although the relative time-course of processing differs between words of positive and negative valence. Specifically, positive emotion-laden words showed advantages in early, late, and post-target measures, while negative emotion-laden words showed effects only in late and post-target measures. |
Tammy Sue-Wynne Liu; Yeu-Ting Liu; Chun-Yin Doris Chen Meaningfulness is in the eye of the reader: Eye-tracking insights of L2 learners reading e-books and their pedagogical implications Journal Article In: Interactive Learning Environments, vol. 27, no. 2, pp. 181–199, 2019. @article{Liu2019a, This study employed eye-tracking technology to probe the online reading behavior of 52 advanced L2 English learners. These participants read an e-book containing six types of multimedia supports for either vocabulary acquisition or comprehension. The six supports consisted of three micro-level supports that provided information about specific words (glosses, vocabulary focus, and footnotes), and three macro-level supports that provided global or background information (illustrations, infographics, and photos). The participants read the e-book under two presentation modes: (1) simultaneous mode, where digital input and supports were presented at the same time; and (2) sequential mode, where the digital content and supports were incrementally presented. Analyses showed that when reading for vocabulary acquisition, vocabulary focus and glosses were significantly fixated on, and when reading for comprehension, illustrations were more intensely fixated on. Additionally, when the digital content was incrementally presented, vocabulary focus received significantly higher total fixation duration. This suggests that reading under the sequential mode has the potential to guide L2 learners' focal attention toward micro-level supports. In contrast, under the simultaneous presentation mode, L2 learners seemed to divide their focal attention among both micro-level and macro-level supports. Pedagogical implications are discussed based on the findings of this study. |
Yanping Liu; Lei Yu; Erik D. Reichle The dynamic adjustment of saccades during Chinese reading: Evidence from eye movements and simulations Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 3, pp. 535–543, 2019. @article{Liu2019d, This article reports an eye-movement experiment in which participants scanned continuous sequences of Landolt-Cs for target circles to examine the visual and oculomotor constraints that might jointly determine where the eyes move in a task that engages many of the perceptual and motor processes involved in Chinese reading but without lexical or linguistic processing. The lengths of the saccades entering the Landolt-C clusters were modulated by the processing difficulty (i.e., gap sizes) of those clusters. Simulations using implemented versions of default-targeting (Yan, Kliegl, Richter, Nuthmann, & Shu, 2010) versus dynamic-adjustment (Liu, Reichle, & Li, 2016) models of saccadic targeting indicated that the latter provided a better account of our participants' eye movements, further supporting the hypothesis that Chinese readers "decide" where to move their eyes by adjusting saccade length in response to processing difficulty rather than by selecting default saccade targets. We discuss this hypothesis in relation to both what is known about saccadic targeting during the reading of English versus Chinese and current models of eye-movement control in reading. |
Yanping Liu; Lei Yu; Erik D. Reichle The influence of parafoveal preview, character transposition, and word frequency on saccadic targeting in Chinese reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 4, pp. 537–552, 2019. @article{Liu2019, This article reports the results of an eye-movement experiment that manipulated the frequency and parafoveal preview (i.e., nonword, transposed-character, or identical) of 2-character Chinese target words using a gaze-contingent boundary paradigm (Rayner, 1975). The key findings were that progressive saccades were longer into high- than low-frequency target words, and that this word-frequency effect was more pronounced for identical than transposed previews. These findings suggest that Chinese readers adjust their saccade lengths in response to variables that influence the rate of parafoveal lexical processing. To examine the feasibility of this hypothesis, 2 computer simulations were completed that pitted this dynamic-adjustment account (Liu, Huang, Gao, & Reichle, 2017) against an account in which readers simply move their eyes to a small number of default saccade targets (e.g., the beginning or center of the upcoming word; Yan, Kliegl, Richter, Nuthmann, & Shu, 2010). The simulation results show that the dynamic-adjustment hypothesis more accurately describes our experimental findings using fewer parameters. The theoretical implications of the dynamic-adjustment account of saccadic targeting are discussed in relation to both models of eye-movement control in reading and models of Chinese word identification. |
Yanping Liu; Lili Yu; Le Fu; Wenwen Li; Ziyi Duan; Erik D. Reichle The effects of parafoveal word frequency and segmentation on saccade targeting during Chinese reading Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 4, pp. 1367–1376, 2019. @article{Liu2019e, Two eye-movement experiments are reported in which a boundary paradigm was used to manipulate the presence versus absence of boundaries for high-frequency and low-frequency target words in the parafovea. In Experiment 1, this was done by introducing a blank space after the target words, whereas in Experiment 2 this was done by rendering the target words in red. In both experiments, higher frequency targets engendered longer saccades, whereas the presence of parafoveal word boundaries engendered shorter saccades. This pattern suggests the operation of two countermanding saccade-targeting mechanisms: one that uses parafoveal processing difficulty to adjust saccade lengths and a second that uses word boundaries to direct the eyes toward specific saccade targets. The implications of these findings for models of eye-movement control during reading are discussed, as are suggestions for integrating dynamic-adjustment and default-targeting accounts. |
Otto Loberg; Jarkko Hautala; Jarmo A. Hämäläinen; Paavo H. T. Leppänen Influence of reading skill and word length on fixation-related brain activity in school-aged children during natural reading Journal Article In: Vision Research, vol. 165, pp. 109–122, 2019. @article{Loberg2019, Word length is one of the main determinants of eye movements during reading and has been shown to influence slow readers more strongly than typical readers. The influence of word length on reading in individuals with different reading skill levels has been shown in separate eye-tracking and electroencephalography studies. However, the influence of reading difficulty on cortical correlates of the word length effect during natural reading is unknown. To investigate how reading skill is related to brain activity during natural reading, we performed an exploratory analysis on our data set from a previous study, where slow-reading (N = 27) and typically reading (N = 65) 12-to-13.5-year-old children read sentences while co-registered ET-EEG was recorded. We extracted fixation-related potentials (FRPs) from the sentences using the linear deconvolution approach. We examined standard eye-movement variables and deconvoluted FRP estimates: the intercept of the response, the categorical effect of first fixation versus additional fixation, and the continuous effect of word length. We replicated the pattern of a stronger word length effect in eye movements for slow readers. We found a difference between typical readers and slow readers in the FRP intercept, which contains activity that is common to all fixations, within a fixation time-window of 50–300 ms. For both groups, the word length effect was present in brain activity during additional fixations; however, this effect was not different between groups. This suggests that the stronger word length effect in the eye movements of slow readers might be mainly due to re-fixations, which are more probable due to the lower efficiency of visual processing. |
Ya Lou; Huajian Cai; Xuewei Liu; Xingshan Li Effects of self-enhancement on eye movements during reading Journal Article In: Frontiers in Psychology, vol. 10, pp. 343, 2019. @article{Lou2019, Previous studies show that readers' eye movements are influenced by text properties and readers' personal cognitive characteristics. In the current study, we further show that readers' eye movements are influenced by a social motivation of self-enhancement. We asked participants to silently read sentences that describe self or others with positive or negative traits while their eyes were monitored. First-fixation duration and gaze duration were longer when positive words were used to describe self than to describe others, but there was no such effect for negative words. These results suggest that eye movements can be influenced by the motivation of self-enhancement in addition to various stimuli features and cognitive factors. This finding indicates that the eye movement methodology can potentially be used to study implicit social cognition. |
Jana Lüdtke; Eva Fröhlich; Arthur M. Jacobs; Florian Hutzler The SLS-Berlin: Validation of a German computer-based screening test to measure reading proficiency in early and late adulthood Journal Article In: Frontiers in Psychology, vol. 10, pp. 1682, 2019. @article{Luedtke2019, Reading proficiency, i.e., successfully integrating early word-based information and utilizing this information in later processes of sentence and text comprehension, and its assessment is subject to extensive research. However, screening tests for German adults across the life span are basically non-existent. Therefore, the present article introduces a standardized computerized sentence-based screening measure for German adult readers to assess reading proficiency, including norm data from 2,148 participants covering an age range from 16-88 years. The test was developed in accordance with the children's version of the Salzburger LeseScreening (SLS, Wimmer & Mayringer, 2014). The SLS-Berlin has a high reliability and can easily be implemented in any research setting using the German language. We present a detailed description of the test and report the distribution of SLS-Berlin scores for the norm sample as well as for two subsamples of younger (below 60 years) and older adults (60 and older). For all three samples, we conducted regression analyses to investigate the relationship between sentence characteristics and SLS-Berlin scores. In a second validation study, SLS-Berlin scores were compared with two (pseudo)word reading tests, a test measuring attention and processing speed and eye-movements recorded during expository text reading. Our results confirm the SLS-Berlin's sensitivity to capture early word decoding and later text related comprehension processes. 
The test distinguished very well between skilled and less skilled readers and also within less skilled readers and is therefore a powerful and efficient screening test for German adults to assess interindividual levels of reading proficiency. |
Sahil Luthra; Sara Guediche; Sheila E. Blumstein; Emily B. Myers Neural substrates of subphonemic variation and lexical competition in spoken word recognition Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 2, pp. 151–169, 2019. @article{Luthra2019, In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g. peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g. beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously-collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps on to the lexicon. |
Guojie Ma; Danxin Li; Xiangling Zhuang Do visual word segmentation cues improve reading performance in Chinese reading? Journal Article In: Ergonomics, vol. 62, no. 8, pp. 1086–1097, 2019. @article{Ma2019, It is controversial whether providing visual word segmentation cues can improve Chinese reading performance. This study investigated this topic by examining how visual word segmentation cues such as grey highlighting, red colour and interword spacing influence global sentence reading and local word recognition during reading Chinese text in three experiments. The results showed that interword spacing could facilitate local word recognition but could not increase reading speed. In contrast, grey highlighting and red colour could improve neither local word recognition nor global sentence reading performance. Instead, these cues increased the number of fixations and saccades, resulting in slower reading speed. These results suggest that even red colour is not a practically useful visual cue for Chinese word segmentation; the corresponding mechanisms are discussed. Practitioner Summary: We studied how visual cues such as grey highlighting, red colour and interword spacing influenced Chinese reading performance. Our data showed that even the red colour was not an efficient cue for Chinese word segmentation. The corresponding mechanisms and future directions regarding how to improve Chinese reading performance are discussed. |
Guojie Ma; Ziang Li; Fengfeng Xu; Xingshan Li The modulation of eye movement control by word length in reading Chinese Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 7, pp. 1620–1631, 2019. @article{Ma2019a, Given that there are no interword spaces marking word boundaries in Chinese text, it remains unclear how information about word length influences eye movement control during the reading of Chinese text. In this research, we set up strict controls for word frequency and other word properties to address this knowledge gap. In Experiment 1A and Experiment 1B, a between-subjects design was used. Forty-eight pairs of one- and two-character words were selected as target words in Experiment 1A, while the same number of two- and three-character words were selected in Experiment 1B. In contrast, a within-subjects design was used in Experiment 2. Sixty sets of one-, two- and three-character words were selected as target words. The results showed that long words were skipped less often and fixated on more often than short words. Total time was shorter for shorter than for longer words but first fixation durations were longer for one- than for two-character words. Most importantly, we did not find reliable evidence to support the view that word length could modulate initial landing position and incoming saccade length in the length-matched region analyses. These findings suggest that word length influences eye movement control during reading Chinese in a way that is slightly different from that in the process of reading English. |
Marloes Mak; Roel M. Willems Mental simulation during literary reading: Individual differences revealed with eye-tracking Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 4, pp. 511–535, 2019. @article{Mak2019, People engage in simulation when reading literary narratives. In this study, we tried to pinpoint how different kinds of simulation (perceptual and motor simulation, mentalising) affect reading behaviour. Eye-tracking (gaze durations, regression probability) and questionnaire data were collected from 102 participants, who read three literary short stories. In a pre-test, 90 additional participants indicated which parts of the stories were high in one of the three kinds of simulation-eliciting content. The results show that motor simulation reduces gaze duration (faster reading), whereas perceptual simulation and mentalising increase gaze duration (slower reading). Individual differences in the effect of simulation on gaze duration were found, which were related to individual differences in aspects of story world absorption and story appreciation. These findings suggest fundamental differences between different kinds of simulation and confirm the role of simulation in absorption and appreciation. |
Michael P. Mansbridge; Katsuo Tamaoka Ambiguity in Japanese relative clause processing Journal Article In: Journal of Japanese Linguistics, vol. 35, no. 1, pp. 75–136, 2019. @article{Mansbridge2019, In Japanese, relative clauses have initial clause-type ambiguity. Because there are no overt RC markers, the structure is realized at a locus of disambiguation, typically the head noun. While previous studies have attenuated this ambiguity, these studies have not effectively investigated the processing asymmetry between subject/object-relatives during reading. The current study investigated RC processing within different ambiguity contexts using eye-tracking with native Japanese speakers. For ambiguous RCs, ORC difficulties were primarily observed during late-processing measures after disambiguation at the head noun and RC verb. This was possibly due to the inherent difficulty of assigning thematic roles when the object appears outside the clause, as the object-before-subject bias predicts, or due to factors such as expectation, structural integration and similarity interference. Because all predict ORC difficulties in ambiguous RCs, the exact nature of the processing remains uncertain. For unambiguous RCs, ORC difficulties were instead observed during early-processing measures at the head noun. We attribute this to expectation-based processing because the clause no longer requires a structural reconfiguration. Specifically, with increased cues for the RC interpretation, expectation-based processing effects became more observable at the head. In conclusion, clause-type ambiguity is an integral factor for Japanese relative clause processing. |
María Teresa Martínez-García Using eye-movements to track bilingual activation Journal Article In: Languages, vol. 4, no. 3, pp. 59, 2019. @article{MartinezGarcia2019, Recent research found that the languages of bilingual listeners are active and interact, such that both lexical representations are activated by the spoken input with which they are compatible. However, the time course of bilingual activation and whether suprasegmental information further modulates this cross-language competition are still not well understood. This study investigates the effect of stress placement on the processing of English–Spanish cognates by beginner-to-intermediate Spanish-speaking second-language (L2) learners of English and intermediate-to-advanced English-speaking L2 learners of Spanish using the visual-world eye-tracking paradigm. In each trial, participants saw a target (asado, ‘roast'), one of two competitors (stress match: asados, ‘roast (pl)'; stress mismatch: asador, ‘rotisserie'), and two unrelated distracters, while hearing the target word. The experiment included a non-cognate condition (asado-asados-asador) and a cognate condition, where the stress pattern of the English word corresponding to the Spanish competitor in the stress-mismatch condition (inventor) instead matched that of the Spanish target (invento, ‘invent'). Growth-curve analyses revealed cognate-status and stress-mismatch effects for Spanish-speaking L2 learners of English, and cognate-status and stress-mismatch effects, and an interaction for English-speaking L2 learners of Spanish. This suggests that both groups use stress for word recognition, but the English stress pattern only affects the processing of Spanish words in the English-speaking L2 learners of Spanish. |
Eliana Mastrantuono; Michele Burigo; Isabel R. Rodríguez-Ortiz; David Saldaña The role of multiple articulatory channels of sign-supported speech revealed by visual processing Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 62, pp. 1625–1656, 2019. @article{Mastrantuono2019, Purpose: The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method: Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements' perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results: In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions: All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions. |
Bob McMurray; Tyler P. Ellis; Keith S. Apfelbaum How do you deal with uncertainty? Cochlear implant users differ in the dynamics of lexical processing of noncanonical inputs Journal Article In: Ear & Hearing, vol. 40, no. 4, pp. 961–980, 2019. @article{McMurray2019, Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. 
Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations. |
Ascensión Pagán; Kate Nation Learning words via reading: Contextual diversity, spacing, and retrieval effects in adults Journal Article In: Cognitive Science, vol. 43, no. 1, pp. 1–24, 2019. @article{Pagan2019a, We examined whether variations in contextual diversity, spacing, and retrieval practice influenced how well adults learned new words from reading experience. Eye movements were recorded as adults read novel words embedded in sentences. In the learning phase, unfamiliar words were presented either in the same sentence repeated four times (same context) or in four different sentences (diverse context). Spacing was manipulated by presenting the sentences under distributed or non-distributed practice. After learning, half of the participants were asked to retrieve the new words, and half had an extra exposure to the new words. Although words experienced in diverse contexts were acquired more slowly during learning, they enjoyed a greater benefit of learning at immediate posttest. Distributed practice also slowed learning, but no benefit was observed at posttest. Although participants who had an extra exposure showed the greatest learning benefit overall, learning also benefited from retrieval opportunity, when words were experienced in diverse contexts. These findings demonstrate that variation in the content and structure of the learning environment impacts on word learning via reading. |
Pauline Palma; Veronica Whitford; Debra Titone Cross-language activation and executive control modulate within-language ambiguity resolution: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 46, no. 3, pp. 507–528, 2019. @article{Palma2019, An important question within psycholinguistics is how knowledge of multiple languages impacts the coactivation of word forms and meanings during language comprehension. To the extent that a bilingual's known languages are always partially active, as predicted by models such as the bilingual interactive activation plus model (Dijkstra & Van Heuven, 2002), cross-language activation should influence which meanings are accessed and in which order. Here, we monitored the eye movements of 48 French-English and 40 English-French bilingual adults as they read within-language homonyms embedded in more or less semantically constraining English sentences. The within-language homonyms were either cognate homonyms, whose subordinate meanings were also French cognates (e.g., sage, "herb" or, less frequently, "wise man" in English and, also, "wise man" in French), or uniquely English (e.g., chest). French-English bilinguals processed cognate homonyms with subordinate meanings more quickly than uniquely English homonyms with subordinate meanings, and individual differences in executive control capacity modulated their processing of uniquely English homonyms only. In contrast, English-French bilinguals processed all within-language homonyms similarly, regardless of cognate status and executive control capacity. Our findings suggest that cross-language activation impacts within-language ambiguity resolution by changing the relative dominance of the meanings associated with a word form, and that cross-language activation varies as a function of the language tested (first or second language). |
Jinger Pan; Ming Yan; Jochen Laubrock; Hua Shu Lexical and sublexical phonological effects in Chinese silent and oral reading Journal Article In: Scientific Studies of Reading, vol. 23, no. 5, pp. 403–418, 2019. @article{Pan2019, What is the time course of activation of phonological information in logographic writing systems like Chinese, in which meaning is prioritized over sound? We used a manipulation of phonological regularity to examine foveal and parafoveal phonological processing of Chinese phonograms at lexical and sublexical levels during Chinese sentence reading in 2 eye-tracking experiments. In Experiment 1, using an error disruption task during silent reading, we observed foveal lexical phonological activation in second-pass reading. In Experiment 2, using the boundary paradigm, both parafoveal lexical and sublexical phonological preview benefits were found in first-fixation duration in oral reading, whereas only lexical phonological benefits were found in gaze duration during silent reading. Thus, phonological information had earlier and more pronounced parafoveal effects in oral reading, and these extended to sublexical processing. These results are compatible with the view that oral reading prioritizes parafoveal phonological processing in Chinese. |
Adam J. Parker; Timothy J. Slattery Word frequency, predictability, and return-sweep saccades: Towards the modeling of eye movements during paragraph reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 12, pp. 1614–1633, 2019. @article{Parker2019a, Models of eye movement control during reading focus on the reading of single lines of text. Within these models, word frequency and predictability are important input variables which influence fixation probabilities and durations. However, a comprehensive model of eye movement control will have to account for readers' eye movements across multiline texts. Line-initial words are unlike those presented midline; they are routinely unavailable for parafoveal preprocessing. Therefore, it is unclear whether and how word frequency and predictability influence reading times on line-initial words. To address this, we present an analysis of the Provo Corpus (Luke & Christianson, 2018) followed by a novel eye-movement experiment. We conclude that word frequency and predictability impact single-fixation and gaze durations on line-initial words. We also observed that return-sweep error (undersweep-fixations) may, among several other possibilities, allow for parafoveal processing of line-initial words prior to their direct fixation. Implications for models of eye movement control during reading are discussed. |
Adam J. Parker; Timothy J. Slattery; Julie A. Kirkby Return-sweep saccades during reading in adults and children Journal Article In: Vision Research, vol. 155, pp. 35–43, 2019. @article{Parker2019, During reading, eye movement patterns differ between children and adults. Children make more fixations that are longer in duration and make shorter saccades. Return-sweeps are saccadic eye movements that move a reader's fixation to a new line of text. Return-sweeps move fixation further than intra-line saccades and often undershoot their target. This necessitates a corrective saccade to bring fixation closer to the start of the line. There have been few empirical investigations of return-sweep saccades in adults, and even fewer in children. In the present study, we examined return-sweeps of 47 adults and 48 children who read identical multiline texts. We found that children launch their return-sweeps closer to the end of the line and target a position closer to the left margin. Therefore, children fixate more extreme positions on the screen when reading for comprehension. Furthermore, children required a corrective saccade following a return-sweep more often than adults. Analysis of the duration of the fixation preceding the corrective saccade indicated that children are as efficient as adults at responding to retinal feedback following a saccade. Rather than consider differences in adults' and children's return-sweep behaviour an artefact of oculomotor control, we believe that these differences represent adults' ability to utilise parafoveal processing to encode text at extreme positions. |
Giovanni Parodi; Cristóbal Julio; Laura Nadal; Adriana Cruz; Gina Burdiles Stepping back to look ahead: Neuter encapsulation and referent extension in counter-argumentative and causal relations in Spanish Journal Article In: Language and Cognition, vol. 11, pp. 431–454, 2019. @article{Parodi2019, In discourse comprehension, if all goes well, people tend to create a rich and coherent mental representation of the events described in the text. To do so, referential and relational coherence must be established in order to construct a connected discourse. The objective of this follow-up eye-tracking study (N = 72) is to explore the existence of an interaction effect between two factors: (a) the extension of the referent (short and long antecedent), and (b) the semantic relation (counter-argumentative a pesar de, and causal por), when processing the neuter pronoun ello in texts written in Spanish. No previous study has systematically compared the on-line processing of texts in which different extensions of the encapsulated anaphoric antecedent by the neuter pronoun ello ('this' or 'it' in English) are presented in diverse marked semantic relations (causal and counter-argumentative). Based on three eye-tracking measures, we found distinctive patterns of reading behavior when anaphoric neuter reference and semantic relations must be processed conjointly in order to construct a coherent mental representation. The main findings show that reading longer and more complex antecedents encapsulated by the neuter pronoun ello demands more cognitive effort in late processing (Look Back measure), particularly when simultaneously and in the same discourse construction there is an explicitly marked counter-argumentative semantic relation. Implications for theories of referential and relational coherence are discussed. |
Clare Patterson; Claudia Felser Delayed application of binding condition C during cataphoric pronoun resolution Journal Article In: Journal of Psycholinguistic Research, vol. 48, no. 2, pp. 453–475, 2019. @article{Patterson2019, Previous research has shown that during cataphoric pronoun resolution, the predictive search for an antecedent is restricted by a structure-sensitive constraint known as ‘Condition C', such that an antecedent is only considered when the constraint does not apply. Evidence has mainly come from self-paced reading (SPR), a method which may not be able to pick up on short-lived effects over the timecourse of processing. This study investigates whether or not the active search mechanism is constrained by Condition C at all points in time during cataphoric processing. We carried out one eye-tracking during reading and a parallel SPR experiment, accompanied by offline coreference judgment tasks. Although offline judgments about coreference were constrained by Condition C, the eye-tracking experiment revealed temporary consideration of antecedents that should be ruled out by Condition C. The SPR experiment using exactly the same materials indicated, conversely, that only structurally appropriate antecedents were considered. Taken together, our results suggest that the application of Condition C may be delayed during naturalistic reading. |
Jovana Pejovic; Eiling Yee; Monika Molnar Speaker matters: Natural inter-speaker variation affects 4-month-olds' perception of audio-visual speech Journal Article In: First Language, pp. 1–15, 2019. @article{Pejovic2019, In the language development literature, studies often make inferences about infants' speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences can affect infants' ability to detect a mismatch between the auditory and visual components of vowels. Using an eye-tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to AV mismatch. This speaker also produced a visually more distinct /i/ as compared to the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that when making inferences about infants' perceptual abilities, characteristics of the speaker should be taken into account. |