EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications through 2020 (with some early 2021 publications) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language article, please email us!
Jamie Reilly; Bonnie Zuckerman; Alexandra Kelly; Maurice Flurie; Sagar Rao
In: Brain and Language, 206 , pp. 1–8, 2020.
Many neurological disorders are associated with excessive and/or uncontrolled cursing. The right prefrontal cortex has long been implicated in a diverse range of cognitive processes that underlie the propensity for cursing, including non-propositional language representation, emotion regulation, theory of mind, and affective arousal. Neurogenic cursing often poses significant negative social consequences, and there is no known behavioral intervention for this communicative disorder. We examined whether right vs. left lateralized prefrontal neurostimulation via tDCS could modulate taboo word production in neurotypical adults. We employed a pre/post design with a bilateral frontal electrode montage. Half the participants received left anodal and right cathodal stimulation; the remainder received the opposite polarity stimulation at the same anatomical loci. We employed physiological (pupillometry) and behavioral (reaction time) dependent measures as participants read aloud taboo and non-taboo words. Pupillary responses demonstrated a crossover interaction, suggestive of modulation of phasic arousal during cursing. Participants in the right anodal condition showed elevated pupil responses for taboo words post stimulation. In contrast, participants in the right cathodal condition showed relative dampening of pupil responses for taboo words post stimulation. We observed no effects of stimulation on response times. We interpret these findings as supporting modulation of right hemisphere affective arousal that disproportionately impacts taboo word processing. We discuss alternate accounts of the data and future applications to neurological disorders.
Johannes Rennig; Kira Wegner-Clemens; Michael S Beauchamp
In: Psychonomic Bulletin & Review, 27 , pp. 70–77, 2020.
Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2–58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3–98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
Tracy Reuter; Kavindya Dalawella; Casey Lew-Williams
In: Language, Cognition and Neuroscience, pp. 1–17, 2020.
Prior research suggests that prediction supports language processing and learning. However, the ecological validity of such findings is unclear because experiments usually include constrained stimuli. While theoretically suggestive, previous conclusions will be largely irrelevant if listeners cannot generate predictions in response to complex and variable perceptual input. Taking a step toward addressing this limitation, three eye-tracking experiments evaluated how adults (N = 72) and 4- and 5-year-old children (N = 72) generated predictions in contexts with complex visual stimuli (Experiment 1), variable speech stimuli (Experiment 2), and both concurrently (Experiment 3). Results indicated that listeners generated predictions in contexts with complex visual stimuli or variable speech stimuli. When both were more naturalistic, listeners used informative verbs to generate predictions, but not adjectives or number markings. This investigation provides a test for theories claiming that prediction is a central learning mechanism, and calls for further evaluations of prediction in naturalistic settings.
Samy Rima; Grace Kerbyson; Elizabeth Jones; Michael C Schmid
In: Vision Research, 169 , pp. 41–48, 2020.
Visual perception is often not homogeneous across the visual field and can vary depending on situational demands. The reasons behind this inhomogeneity are not clear. Here we show that directing attention in a manner consistent with the Western left-to-right reading habit results in a ~32% higher sensitivity to detect transient visual events in the right hemifield. This right visual field advantage was largely reduced in individuals with reading difficulties from developmental dyslexia. Similarly, visual detection became more symmetric in skilled readers when attention was guided opposite to the reading pattern. Taken together, these findings highlight a higher sensitivity in the right visual field for detecting the onset of sudden visual events that is well accounted for by the left-hemisphere-dominated reading habit.
Christopher M Robus; Christopher J Hand; Ruth Filik; Melanie Pitchford
In: Computers in Human Behavior, 109 , pp. 1–11, 2020.
Digital images of faces such as emoji in virtual communication have become increasingly popular, but current research findings are inconsistent regarding their emotional effects on perceptions of text. Similarly, emoji effects on reading behaviours are largely unknown and require further examination. The present study (N = 41) investigated how the position and emotional valence of emoji in neutral narrative sentences influenced eye movements during reading and perceptions of sentence valence. Participants read neutral narrative sentences containing smiling or frowning emoji in sentence-initial or sentence-final positions and rated the perceived emotional valence of the sentence. Results from linear mixed-effects models demonstrated significantly longer fixations on sentence-final emoji and longer sentence reading times when emoji were in sentence-final positions. These findings are comparable to the sentence ‘wrap-up' effects witnessed in the processing of lexical units during sentence reading, providing new evidence about the way readers integrate emoji into contextual processing. However, no impact of emoji valence or position on first-pass target word processing or sentence-valence ratings was found. This would refute previous suggestions that digital faces influence text valence, raising questions about reader preference for emoji or sentence sentiment, and about the influence of sentence formatting and delivery/display mechanism on these effects.
Andre Roelke; Christian Vorstius; Ralph Radach; Markus J Hofmann
In: NeuroImage, 215 , pp. 1–11, 2020.
While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movements.
Martina Micai; Mila Vulchanova; David Saldaña
In: Autism Research, pp. 1–18, 2020.
The sources of reading comprehension difficulties in people with autism spectrum disorder (ASD) are still open to discussion. We explored their ability to adapt reading strategies to different reading goals using eye-tracking technology. A group of participants with ASD, and intelligence-, receptive oral language- and reading skills-matched control peers, read three stories under three different reading goals conditions: read for entertainment; read for study; and read fast and search information for a previously presented question. Each text required participants to answer comprehension questions. The ASD group was less accurate in question answering. The control group was faster in reading questions, displayed more fixations on the text, and reported to be more confident in question answering during reading for study compared to reading for entertainment. These differences between reading goals were not observed in the ASD group. The control group adopted and was aware of using different reading strategies according to different reading goals. In contrast, the ASD group did not change their reading behavior and strategies between entertainment and study reading goal condition, showing less of a tendency to adopt deep-level processing strategies when necessary. Planning, as measured by Tower of Hanoi, was the only executive task that predicted individual differences in text reading time across conditions. Participants with better planning ability were also able to adapt their reading behavior to different reading instructions. Difficulties in adjusting the reading behavior according to the task, evaluating own performance and planning may be partly involved in reading comprehension problems in ASD. Lay abstract: The control group read questions faster, reported to be more confident in question answering during reading for study compared to reading for entertainment, and were aware of using different reading strategies according to different reading goals. 
In contrast, the autistic group did not change their reading behavior and strategies according to the reading goal. Difficulties in adjusting the reading behavior according to the task, in evaluating own performance and in planning may be partly involved in reading comprehension problems in autism.
Marije Michel; Andrea Révész; Xiaojun Lu; Nektaria Efstathia Kourtali; Minjin Lee; Lais Borges
In: Second Language Research, 36 (3), pp. 307–334, 2020.
Most research into second language (L2) writing has focused on the products of writing tasks; much less empirical work has examined the behaviours in which L2 writers engage and the cognitive processes that underlie writing behaviours. We aimed to fill this gap by investigating the extent to which writing speed fluency, pausing, eye-gaze behaviours and the cognitive processes associated with pausing may vary across independent and integrated tasks throughout the whole, and at five different stages, of the writing process. Sixty L2 writers performed two independent and two integrated TOEFL iBT writing tasks counterbalanced across participants. While writing, we logged participants' keystrokes and captured their eye-movements. Participants took part in a stimulated recall interview based on the last task they had completed. Mixed effects regressions and qualitative analyses revealed that, apart from source use on the integrated task, L2 writers engaged in similar writing behaviours and cognitive processes during the independent and integrated tasks. The integrated task, however, elicited more dynamic and varied behaviours and cognitive processes across writing stages. Adopting a mixed-methods approach enabled us to gain more complete and specific insights than using a single method.
Sara V Milledge; Hazel I Blythe; Simon P Liversedge
In: Psychonomic Bulletin & Review, pp. 1–12, 2020.
Although previous research has demonstrated that for adults external letters of words are more important than internal letters for lexical processing during reading, no comparable research has been conducted with children. This experiment explored, using the boundary paradigm during silent sentence reading, whether parafoveal pre-processing in English is more affected by the manipulation of external letters or internal letters, and whether this differs between skilled adult and beginner child readers. Six previews were generated: identity (e.g., monkey); external letter manipulations where either the beginning three letters of the word were substituted (e.g., rackey) or the last three letters of the word were substituted (e.g., monhig); internal letter manipulations (e.g., machey, mochiy); and an unrelated control condition (e.g., rachig). Results indicate that both adults and children undertook pre-processing of words in their entirety in the parafovea, and that the manipulation of external letters in preview was more harmful to participants' parafoveal pre-processing than internal letters. The data also suggest developmental change in the time course of pre-processing, with children's pre-processing delayed compared to that of adults. These results not only provide further evidence for the importance of external letters to parafoveal processing and lexical identification for adults, but also demonstrate that such findings can be extended to children.
Ailsa E Millen; Lorraine Hope; Anne P Hillstrom
In: Cognitive Research: Principles and Implications, 5 (38), pp. 1–18, 2020.
Background: In criminal investigations, uncooperative witnesses might deny knowing a perpetrator, the location of a murder scene or knowledge of a weapon. We sought to identify markers of recognition in eye fixations and confidence judgments whilst participants told the truth and lied about recognising faces (Experiment 1) and scenes and objects (Experiment 2) that varied in familiarity. To detect recognition we calculated effect size differences in markers of recognition between familiar and unfamiliar items that varied in familiarity (personally familiar, newly learned). Results: In Experiment 1, recognition of personally familiar faces was reliably detected across multiple fixation markers (e.g. fewer fixations, fewer interest areas viewed, fewer return fixations) during honest and concealed recognition. In Experiment 2, recognition of personally familiar non-face items (scenes and objects) was detected solely by fewer fixations during honest and concealed recognition; differences in other fixation measures were not consistent. In both experiments, fewer fixations exposed concealed recognition of newly learned faces, scenes and objects, but the same pattern was not observed during honest recognition. Confidence ratings were higher for recognition of personally familiar faces than for unfamiliar faces. Conclusions: Robust memories of personally familiar faces were detected in patterns of fixations and confidence ratings, irrespective of task demands required to conceal recognition. Crucially, we demonstrate that newly learned faces should not be used as a proxy for real-world familiarity, and that conclusions should not be generalised across different types of familiarity or stimulus class.
Krista A Miller; Gary E Raney; Alexander P Demos
In: Journal of Psycholinguistic Research, 49 (5), pp. 885–913, 2020.
The goal of the current research was to determine if conceptual metaphors are activated when people read idioms within a text. Participants read passages that included idioms that were consistent (blow your top) or inconsistent (bite his head off) with an underlying conceptual metaphor (ANGER IS HEATED FLUID IN A CONTAINER) followed by target words that were related (heat) or unrelated (lead) to the conceptual metaphor. Reading time (Experiment 1) or lexical decision time (Experiment 2) for the target words were measured. We found no evidence supporting conceptual metaphor activation. Target word reading times were unaffected by whether they were related or unrelated to underlying conceptual metaphors. Lexical decision times were facilitated for related target words in both the consistent and inconsistent idiom conditions. We suggest that the conceptual (target) domain, not a specific underlying conceptual metaphor, facilitates processing of related target words.
Jonathan Mirault; Jeremy Yeaton; Fanny Broqua; Stéphane Dufau; Phillip J Holcomb; Jonathan Grainger
In: Psychophysiology, 57 , pp. 1–18, 2020.
When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that this should be the case. Here we focus on perhaps the simplest and the strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded electroencephalogram and eye-movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important is that we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1, while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel.
In: IRAL - International Review of Applied Linguistics in Language Teaching, 58 (3), pp. 323–349, 2020.
This study examined whether native Japanese speakers and second language (L2) speakers of Japanese use information from numeral classifiers to predict possible referents. Using a visual-world eye-tracking paradigm, we asked participants to identify picture objects that take either the same or different numeral classifiers while they listened to Japanese sentences referring to one object. The results showed that native speakers looked to the target predictively more often when the classifier was informative about noun identity than when it was not. L2 learners also showed a facilitative effect of classifiers that was comparable to that of native speakers. In addition, we found that the level of proficiency played a role in the speed of real-time referent resolution when the participants heard the target nouns in the audio input. However, such an effect was not observed during the period when the predictions were generated.
Holger Mitterer; Sahyang Kim; Taehong Cho
In: Data in Brief, 30 , pp. 1–9, 2020.
This article provides some supplementary analysis data of speech production and perception of glottal stops in the Semitic language Maltese. In Maltese, a glottal stop can occur as a phoneme, but also as a phonetic marker of vowel-initial words (as is the case with Germanic languages like English). Data from four experiments are provided, which will allow other researchers to reproduce the results and apply their own data-analysis techniques to these data for further data exploration. A production experiment (Experiment 1) investigates how often the glottal marking of vowel-initial words occurs (causing vowel-initial words to be ambiguous with words starting with a glottal stop as a phoneme) and whether the glottal gesture for this marking can be differentiated from an underlying (phonemic) glottal stop in its acoustic properties. Experiments 2 to 4 investigate how and to what extent Maltese listeners perceive glottal markings as lexical (phonemic) or epenthetic (phonetic), using a two-alternative forced choice task (Experiment 2), a visual-world eye tracking task with printed target words (Experiment 3) and a gating task (Experiment 4). A full account of theoretical consequences of these data can be found in the full length article entitled “The glottal stop between segmental and suprasegmental processing: The case of Maltese”.
Francisco J Moreno-Pérez; Isabel R Rodríguez-Ortiz; Gema Tavares; David Saldaña
In: International Journal of Language and Communication Disorders, 55 (6), pp. 884–898, 2020.
Background: It has been established that people with autism spectrum disorder (ASD) often have difficulties understanding spoken language. Understanding reflexive and clitic pronouns is vital to establishing reference-based inference, but it is as yet unclear whether such constructions pose specific difficulties for those with ASD. Pronoun interpretation seems to be connected to the development of pragmatic abilities, and can therefore be considered a plausible marker in the differential diagnosis between ASD and developmental language disorder (DLD). Aims: To establish whether or not there are differences between ASD and DLD in relation to their understanding of pronoun constructions (both reflexive and clitic). The working hypothesis was that although no differences were expected between groups in relation to automatic (online) pronoun processing, the comprehension of reflexive pronouns would constitute a diagnostic marker between the group with ASD and language disorder and the DLD group. Methods & Procedures: The study carried out two experiments with three clinical groups (two with ASD and different levels of language proficiency and one with DLD) and two control groups with typically developing people (with equivalent language levels), analysing their on- and offline processing in pronoun resolution tasks. The first experiment uses an online method (eye-tracking) to record pronoun processing in real time. The second uses an offline method to analyse comprehension accuracy. Outcomes & Results: The results of the two experiments indicated no differences in the way in which the clinical and control groups resolved the tasks, but a shorter reaction time was observed only in the age-matched control group in comparison with the ASD group without language disorder in the first experiment, perhaps due to the fact that processing pronouns involves a greater cognitive load among the latter group.
Conclusions & Implications: The comprehension of reflexive pronouns cannot be considered a diagnostic marker for distinguishing ASD from DLD. What this paper adds What is already known on the subject Previous studies have found that the performance of children with ASD in the comprehension of personal pronouns is equivalent to that of the youngest control groups, but poorer regarding the interpretation of reflexive pronouns. However, children with DLD do not usually have problems with the use of pronouns, which suggests that their pronoun processing is not affected. As pronoun interpretation seems to be connected to the development of pragmatic abilities, it could be considered a plausible marker in the differential diagnosis between ASD and DLD. What this paper adds to existing knowledge This paper presents the results of two experiments involving pronoun processing by those with ASD (both with and without language disorder) and those with DLD. The design enables us to analyse the reflexive and clitic pronoun processing in people with ASD and DLD, regardless of their language proficiency. One experiment uses an eye-tracking methodology that allows us to obtain data about how the pronouns are processed in real time. It represents an attempt to identify language markers that may help distinguish between the two groups and adapt the interventions to the specific problems experienced by each one. What are the potential or actual clinical implications of this work? The results indicate that it is not possible to identify any specific impairment in pronoun processing among the clinical groups (ASD and DLD).
Laura M Morett; Jennifer M Roche; Scott H Fraundorf; James C McPartland
In: Cognitive Science, 44 (10), pp. 1–46, 2020.
We investigated how two cues to contrast—beat gesture and contrastive pitch accenting—affect comprehenders' cognitive load during processing of spoken referring expressions. In two visual-world experiments, we orthogonally manipulated the presence of these cues and their felicity, or fit, with the local (sentence-level) referential context in critical referring expressions while comprehenders' task-evoked pupillary responses (TEPRs) were examined. In Experiment 1, beat gesture and contrastive accenting always matched the referential context of filler referring expressions and were therefore relatively felicitous on the global (experiment) level, whereas in Experiment 2, beat gesture and contrastive accenting never fit the referential context of filler referring expressions and were therefore infelicitous on the global level. The results revealed that both beat gesture and contrastive accenting increased comprehenders' cognitive load. For beat gesture, this increase in cognitive load was driven by both local and global infelicity. For contrastive accenting, this increase in cognitive load was unaffected when cues were globally felicitous but exacerbated when cues were globally infelicitous. Together, these results suggest that comprehenders' cognitive resources are taxed by processing infelicitous use of beat gesture and contrastive accenting to convey contrast on both the local and global levels.
Adam M Morgan; Titus von der Malsburg; Victor S Ferreira; Eva Wittenberg
In: Cognition, 205 , pp. 1–21, 2020.
Language comprehension and production are generally assumed to use the same representations, but resumption poses a problem for this view: This structure is regularly produced, but judged highly unacceptable. Production-based solutions to this paradox explain resumption in terms of processing pressures, whereas the Facilitation Hypothesis suggests resumption is produced to help listeners comprehend. Previous research purported to support the Facilitation Hypothesis did not test its keystone prediction: that resumption improves accuracy of interpretation. Here, we test this prediction directly, controlling for factors that previous work did not. Results show that resumption in fact hinders comprehension in the same sentences that native speakers produced, a finding which replicated across four high-powered experiments with varying paradigms: sentence-picture matching (N=300), self-paced reading (N=96), visual world eye-tracking (N=96), and multiple-choice comprehension question (N=150). These findings are consistent with production-based accounts, indicating that comprehension and production may indeed share representations, although our findings point toward a limit on the degree of overlap. Methodologically speaking, the findings highlight the importance of measuring interpretation when studying comprehension.
Petroula Mousikou; Lorena Nüesch; Jana Hasenäcker; Sascha Schroeder
In: Language, Cognition and Neuroscience, pp. 1–14, 2020.
German verb stems may be combined with a particle or a prefix, forming particle and prefixed verbs, respectively. Both types of verbs are morphologically complex, yet particles are free morphemes, which are routinely separated from their stem and can stand alone in a sentence, whereas prefixes are bound morphemes, which are attached to their stem and cannot stand alone in a sentence. Morphologically complex words are thought to be segmented into their constituent morphemes during reading. On this assumption, we took advantage of the separability feature of the constituent morphemes of particle verbs to investigate how the segmentation process occurs in skilled reading. Thirty German adults participated in a sentence-reading task that employed the eye-contingent boundary paradigm in eye-tracking. We observed no differences in the processing of particle and prefixed verbs, which suggests that idiosyncratic linguistic characteristics do not modulate the way morphologically complex words are segmented in skilled reading.
Gábor Müller; Emese Bodnár; Stavros Skopeteas; Julia Marina Kröger
In: Language and Speech, pp. 1–32, 2020.
Thematic-role assignment is influenced by several classes of cues during sentence comprehension, ranging from morphological exponents of syntactic relation such as case and agreement to probabilistic cues such as prosody. The effect of these cues varies cross-linguistically, presumably reflecting their language-specific robustness in signaling thematic roles. However, language-specific frequencies are not mapped onto the cue strength in a one-to-one fashion. The present article reports two eye-tracking studies on Hungarian examining the interaction of case and prosody during the processing of case-unambiguous (Experiment 1) and case-ambiguous (Experiment 2) clauses. Eye fixations reveal that case is a strong cue for thematic role assignment, but stress only enhances the effect of case in case-unambiguous clauses. This result differs from findings reported for Italian and German, in which initial stress reduces the expectation for subject-first clauses. Furthermore, the sentence comprehension facts are not explained by corpus frequencies in Hungarian. After considering an array of hypotheses about the roots of cross-linguistic variation, we conclude that the crucial difference lies in the high reliability/availability of case cues in Hungarian in contrast to the further languages examined within this experimental paradigm.
Malik M Naeem Mannan; Ahmad M Kamran; Shinil Kang; Hak Soo Choi; Myung Yung Jeong
In: Sensors, 20 (3), pp. 1–20, 2020.
Steady‐state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to the advantages of robustness, large number of commands, high classification accuracies, and information transfer rates (ITRs). However, the use of several simultaneous flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose to design a stimuli‐responsive hybrid speller by using electroencephalography (EEG) and video‐based eye-tracking to increase user comfortability levels when presented with large numbers of simultaneously flickering stimuli. Interestingly, a canonical correlation analysis (CCA)‐based framework was useful to identify target frequency with a 1 s duration of flickering signal. Our proposed BCI‐speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI‐spellers use a number of frequencies equal to the number of targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued‐spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free‐spelling task. Consequently, our proposed speller is superior to the other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye tracking and SSVEP BCI‐based system will ultimately enable a truly high-speed communication channel.
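The ITR figures reported in this abstract can be related to the number of targets and classification accuracy via the standard Wolpaw ITR formula. The sketch below is illustrative only: the abstract does not state the total time per selection, so the ~1.5 s used here is an assumption chosen to show how a figure in the neighborhood of the reported 184 bits per minute can arise from 48 targets at 90.35% accuracy.

```python
from math import log2

def itr_bits_per_trial(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer rate, in bits per selection."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return log2(n)  # perfect accuracy: full log2(N) bits per selection
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

def itr_bits_per_minute(n_targets: int, accuracy: float,
                        seconds_per_selection: float) -> float:
    """Scale bits per selection by the number of selections per minute."""
    return itr_bits_per_trial(n_targets, accuracy) * 60.0 / seconds_per_selection

# 48 targets at the reported 90.35% accuracy:
bits = itr_bits_per_trial(48, 0.9035)       # ≈ 4.59 bits per selection
# With an assumed ~1.5 s per selection (1 s flicker plus overhead):
bpm = itr_bits_per_minute(48, 0.9035, 1.5)  # ≈ 184 bits per minute
```

Note that the per-selection time is the dominant free parameter here; a shorter selection window raises the computed ITR proportionally.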
Leanne Nagels; Roelien Bastiaanse; Deniz Başkent; Anita Wagner
In: Journal of Speech, Language, and Hearing Research, 63 , pp. 286–304, 2020.
Purpose: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word–nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word–nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions: The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word–nonword sensitivity explained different parts of individual variability than clinical speech perception scores. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation.
Chie Nakamura; Manabu Arai; Yuki Hirose; Suzanne Flynn
In: Frontiers in Psychology, 10 , pp. 1–14, 2020.
It has long been debated whether non-native speakers can process sentences in the same way as native speakers do, or whether they suffer from a qualitative deficit in their ability to comprehend language. The current study examined the influence of prosodic and visual information in processing sentences with a temporarily ambiguous prepositional phrase ("Put the cake on the plate in the basket") with native English speakers and Japanese learners of English. Specifically, we investigated (1) whether native speakers assign different pragmatic functions to the same prosodic cues used in different contexts and (2) whether L2 learners can reach the correct analysis by integrating prosodic cues with syntax with reference to the visually presented contextual information. The results from native speakers showed that contrastive accents helped to resolve the referential ambiguity when a contrastive pair was present in the visual scene. However, without a contrastive pair in the visual scene, native speakers were slower to reach the correct analysis with the contrastive accent, which supports the view that the pragmatic function of intonation categories is highly context dependent. The results from L2 learners showed that visually presented context alone helped L2 learners to reach the correct analysis. However, L2 learners were unable to assign contrastive meaning to the prosodic cues when there were two potential referents in the visual scene. The results suggest that L2 learners are not capable of integrating multiple sources of information in an interactive manner during real-time language comprehension.
M J Nelson; S Moeller; A Basu; L Christopher; E J Rogalski; M Greicius; S Weintraub; B Bonakdarpour; R S Hurley; M M Mesulam
In: Cerebral Cortex, 30 (4), pp. 2529–2541, 2020.
Phonemic paraphasias are thought to reflect phonological (post-semantic) deficits in language production. Here we present evidence that phonemic paraphasias in non-semantic primary progressive aphasia (PPA) may be associated with taxonomic interference. Agrammatic and logopenic PPA patients and control participants performed a word-to-picture visual search task where they matched a stimulus noun to 1 of 16 object pictures as their eye movements were recorded. Participants were subsequently asked to name the same items. We measured taxonomic interference (ratio of time spent viewing related vs. unrelated foils) during the search task for each item. Target items that elicited a phonemic paraphasia during object naming elicited increased taxonomic interference during the search task in agrammatic but not logopenic PPA patients. These results could reflect either very subtle sub-clinical semantic distortions of word representations or partial degradation of specific phonological word forms in agrammatic PPA during both word-to-picture matching (input stage) and picture naming (output stage). The mechanism for phonemic paraphasias in logopenic patients seems to be different and to be operative at the pre-articulatory stage of phonological retrieval. Glucose metabolic imaging suggests that degeneration in the left posterior frontal lobe and left temporo-parietal junction, respectively, might underlie these different patterns of phonemic paraphasia.
Jinghui Ouyang; Lingshan Huang; Jingyang Jiang
In: Journal of Research in Reading, 43 (4), pp. 496–515, 2020.
Providing glosses that explain the meanings of unknown words is a common method of promoting learners' acquisition of new words. Numerous studies have shown that, compared with a no-gloss condition, glosses benefit the learning of new word meanings. This study combines online (i.e., eye-tracking) and offline (i.e., immediate vocabulary tests) measures to investigate the influence of glosses on incidental vocabulary learning and to evaluate the degree to which glossing influences reading behaviour during second language (L2) reading. The eye movements of 45 high-intermediate adult learners of English were recorded while they read a text presented on-screen. Two text versions (both with 17 new words) were presented to two different groups of participants: first language (L1) textual glossed and no-glossed. After reading, unannounced vocabulary tests were administered to gauge learners' recall and recognition of vocabulary meaning. Learners performed better in meaning recall and meaning recognition tests under the L1-glossed condition. Eye-tracking measures of the target words differed significantly between the two conditions, and eye-tracking measures of new words and their glosses in the L1-glossed condition were significantly correlated with learners' vocabulary test scores. L1 glosses promote the learning of new word meanings in an incidental condition, and the attention allocated to new words differs between the L1-glossed and no-glossed conditions. More importantly, there is a relationship between online reading behaviour and vocabulary test performance in the gloss condition.
Ayşegül Özkan; Figen Beken Fikri; Bilal Kırkıcı; Reinhold Kliegl; Cengiz Acartürk
Eye movement control in Turkish sentence reading
In: Quarterly Journal of Experimental Psychology, pp. 1–20, 2020.
Reading requires the assembly of cognitive processes across a wide spectrum from low-level visual perception to high-level discourse comprehension. One approach of unravelling the dynamics associated with these processes is to determine how eye movements are influenced by the characteristics of the text, in particular which features of the words within the perceptual span maximise the information intake due to foveal, spillover, parafoveal, and predictive processing. One way to test the generalisability of current proposals of such distributed processing is to examine them across different languages. For Turkish, an agglutinative language with a shallow orthography–phonology mapping, we replicate the well-known canonical main effects of frequency and predictability of the fixated word as well as effects of incoming saccade amplitude and fixation location within the word on single-fixation durations with data from 35 adults reading 120 nine-word sentences. Evidence for previously reported effects of the characteristics of neighbouring words and interactions was mixed. There was no evidence for the expected Turkish-specific morphological effect of the number of inflectional suffixes on single-fixation durations. To control for word-selection bias associated with single-fixation durations, we also tested effects on word skipping, single-fixation, and multiple-fixation cases with a base-line category logit model, assuming an increase of difficulty for an increase in the number of fixations. With this model, significant effects of word characteristics and number of inflectional suffixes of foveal word on probabilities of the number of fixations were observed, while the effects of the characteristics of neighbouring words and interactions were mixed.
Ascensión Pagán; Megan Bird; Yaling Hsiao; Kate Nation
In: Scientific Studies of Reading, 24 (4), pp. 356–364, 2020.
Semantic diversity, a metric that captures variations in previous contextual experience with a word, influences children's lexical decision and reading aloud. We investigated the effects of semantic diversity and frequency on children's reading of words embedded in sentences, while eye movements were recorded. If semantic diversity and frequency reflect different aspects of experience that influence reading in different ways, they should show independent effects and perhaps even different processing signatures during reading. Forty-nine 9-year-olds read sentences containing high/low frequency and high/low diversity words, manipulated orthogonally. We observed main effects of both variables, with high frequency and high semantic diversity words being read more easily. These results show that variations in the amount and nature of contextual experience influence how easily words are processed during reading.
Jinger Pan; Jochen Laubrock; Ming Yan
Phonological consistency effects in Chinese sentence reading
In: Scientific Studies of Reading, pp. 1–17, 2020.
In two eye-tracking experiments, we investigated the processing of information about the phonological consistency of Chinese phonograms during sentence reading. In Experiment 1, we adopted the error disruption paradigm in silent reading and found significant effects of phonological consistency and homophony in foveal vision, but only in a late processing stage. Adding oral reading in Experiment 2, we found that both effects shifted to earlier indices of parafoveal processing. Specifically, low-consistency characters led to a stronger homophonic foveal recovery effect in Experiment 1 and stronger homophonic preview benefits in Experiment 2. These findings suggest that phonological consistency information can be obtained during sentence reading and that, compared to low-consistency previews, high-consistency previews are processed faster, which leads to greater interference with the recognition of target characters.
Jinger Pan; Miaomiao Liu; Hong Li; Ming Yan
In: Reading and Writing, pp. 1–15, 2020.
Word boundaries are not marked explicitly in Chinese sentences, and word ambiguity is common in Chinese texts. This makes it difficult to parse characters into words when reading Chinese sentences, especially for beginning readers. In an eye-tracking study, we tested whether explicit word boundary information, provided by alternating text colors, affects the reading performance of Chinese children and how such an effect is influenced by individual differences in word segmentation ability. Results showed that, across a number of eye-movement measures, grade three children overall benefited from the explicit marking of word boundaries. Additionally, children with the highest word segmentation ability showed the largest benefits in reading speed. We discuss possible implications for education.
Jinger Pan; Ming Yan; Jochen Laubrock
In: Cognition, 205 , pp. 1–10, 2020.
How is semantic information in the mental lexicon accessed and selected during reading? Readers process information of both the foveal and parafoveal words. Recent eye-tracking studies hint at bi-phasic lexical activation dynamics, demonstrating that semantically related parafoveal previews can either facilitate, or interfere with lexical processing of target words in comparison to unrelated previews, with the size and direction of the effect depending on exposure time to parafoveal previews. However, evidence to date is only correlational, because exposure time was determined by participants' pre-target fixation durations. Here we experimentally controlled parafoveal preview exposure duration using a combination of the gaze-contingent fast-priming and boundary paradigms. We manipulated preview duration and examined the time course of parafoveal semantic activation during the oral reading of Chinese sentences in three experiments. Semantic previews led to faster lexical access of target words than unrelated previews only when the previews were presented briefly (80 ms in Experiments 1 and 3). Longer exposure time (100 ms or 150 ms) eliminated semantic preview effects, and full preview without duration limit resulted in preview cost, i.e., a reversal of preview benefit. Our results indicate that high-level semantic information can be obtained from parafoveal words and the size and direction of the parafoveal semantic effect depends on the level of lexical activation.
Jinger Pan; Caicai Zhang; Xunan Huang; Ming Yan
In: Reading and Writing, pp. 1–17, 2020.
The current study examined whether or not lexical access is influenced by detailed phonological features during the silent reading of Chinese sentences. We used two types of two-character target words (Mandarin sandhi-tone and base-tone). The first characters of the words in the sandhi-tone condition had a tonal alternation, but no tonal alternation was involved in the base-tone condition. Recordings of eye movements revealed that native Mandarin Chinese readers viewed the base-tone target words more briefly than the sandhi-tone target words when they were infrequent. Such articulation-specific effects on visual word processing, however, diminished for frequent words. We suggest that a conflict in tonal representation at a character/morpheme level and at a word level induces prolongation in fixation duration on infrequent sandhi-tone words, and conclude that these tonal effects appear to reflect articulation simulation of words during the silent reading of Chinese sentences.
Nick B Pandža; Ian Phillips; Valerie P Karuzis; Polly O'Rourke; Stefanie E Kuchinsky
In: Annual Review of Applied Linguistics, 40 , pp. 56–77, 2020.
This paper begins by discussing new trends in the use of neurostimulation techniques in cognitive science and learning research, as well as the nascent research on their application in second language learning. To illustrate this, an experiment designed to investigate the impact of transcutaneous vagus nerve stimulation (tVNS), which is delivered via earbuds, on how learners process and learn Mandarin tones is reported. Pupillometry, which is an index of cognitive effort, is explained and illustrated as one way to assess the impact of tVNS. Participants in the study were native English speakers, naïve to tone languages, pseudorandomly assigned to active or control conditions, while balancing for nonlinguistic pitch ability and musical experience. Their performance after tVNS was assessed using a range of more traditional language outcome measures, including accuracy and reaction times from lexical recognition and recall tasks and was triangulated with pupillometry during word-learning to help understand the mechanism through which tVNS operates. Findings are discussed in light of the literatures on lexical tone learning, cognitive effort, and neurostimulation, including specific benefits for learners of tone languages. Recommendations are made for future work on the increasingly popular area of neurostimulation for the field of applied linguistics in the 40th anniversary issue of ARAL.
Fabio Parente; Kathy Conklin; Josephine M Guy; Rebekah Scott
In: Language and Literature, pp. 1–16, 2020.
The popularity of literary biographies and the importance publishers place on author publicity materials suggest that the concept of an author's creative intentions is important to readers' appreciation of literary works. However, the question of how this kind of contextual information informs literary interpretation is contentious. One area of dispute concerns the extent to which readers' constructions of an author's creative intentions are text-centred and can therefore be adequately understood from linguistic evidence alone. The current study shows how the relationship between linguistic and contextual factors in readers' constructions of an author's creative intentions may be investigated empirically. We use eye-tracking to determine whether readers' responses to textual features (changes to lexis and punctuation) are affected by prior, extra-textual prompts concerning an author's creative intentions. We showed participants pairs of sentences from Oscar Wilde and Henry James while monitoring their eye movements. The first sentence was followed by a prompt denoting a different attribution (Authorial, Editorial/Publisher, and Typographic) for the change that, if present, would appear in the second sentence. After reading the second sentence, participants were asked whether they had detected a change and, if so, to describe it. If the concept of an author's creative intentions is implicated in literary reading, it should influence participants' reading behaviour and their ability to accurately report a change based on the prompt. The findings showed that readers' noticing of textual variants was sensitive to the prior prompt about its authorship, in the sense of producing an effect on attention and re-reading times. But they also showed that these effects did not follow the predicted pattern, based on prior assumptions about readers' cultures. This last finding points to the importance, as well as the challenges, of further investigating the role of contextual information in readers' constructions of an author's creative intentions.
Adam J Parker; Julie A Kirkby; Timothy J Slattery
Undersweep fixations during reading in adults and children
In: Journal of Experimental Child Psychology, 192 , pp. 1–23, 2020.
Return sweeps take a reader's fixation from the end of one line to the start of the next. Return sweeps frequently undershoot their target and are followed by a corrective saccade toward the left margin. The pauses prior to corrective saccades are typically considered to be uninvolved in linguistic processing. However, recent findings indicate that these undersweep fixations influence skilled adult readers' subsequent reading pass across the line and provide a preview of line-initial words. The current research examined these effects in children. First, a children's reading corpus analysis revealed that words receiving an undersweep fixation were more likely skipped and received shorter gaze durations during a subsequent pass. Second, a novel eye movement experiment that directly compared adults' and children's eye movements indicated that, during an undersweep fixation, readers very briefly allocate their attention to the fixated word—as indicated by inhibition of return effects during a subsequent pass—prior to deploying attention toward the line-initial word. We argue that prior to the redeployment of attention, readers extract information at the point of fixation that facilitates later encoding and saccade targeting. Given similar patterns of results for adults and children, we conclude that the mechanisms controlling for oculomotor coordination and attention necessary for reading across line boundaries are established from a very early point in reading development.
Leigh B Fernandez; Paul E Engelhardt; Angela G Patarroyo; Shanley E M Allen
In: Quarterly Journal of Experimental Psychology, 73 (12), pp. 2348–2361, 2020.
Research has shown that suprasegmental cues in conjunction with visual context can lead to anticipatory (or predictive) eye movements. However, the impact of speech rate on anticipatory eye movements has received little empirical attention. The purpose of the current study was twofold. From a methodological perspective, we tested the impact of speech rate on anticipatory eye movements by systemically varying speech rate (3.5, 4.5, 5.5, and 6.0 syllables per second) in the processing of filler-gap dependencies. From a theoretical perspective, we examined two groups thought to show fewer anticipatory eye movements, and thus likely to be more impacted by speech rate. Experiment 1 compared anticipatory eye movements across the lifespan with younger (18–24 years old) and older adults (40–75 years old). Experiment 2 compared L1 speakers of English and L2 speakers of English with an L1 of German. Results showed that all groups made anticipatory eye movements. However, L2 speakers only made anticipatory eye movements at 3.5 syllables per second, older adults at 3.5 and 4.5 syllables per second, and younger adults at speech rates up to 5.5 syllables per second. At the fastest speech rate, all groups showed a marked decrease in anticipatory eye movements. This work highlights (1) the importance of speech rate on anticipatory eye movements, and (2) group-level performance differences in filler-gap prediction.
Leigh Fernandez; Christoph Scheepers; Shanley Allen
In: Journal of Eye Movement Research, 13 (6), pp. 1–26, 2020.
Much reading research has found that informative parafoveal masks lead to a reading benefit for native speakers (see Schotter et al., 2012). However, little reading research has tested the impact of uninformative parafoveal masks during reading. Additionally, parafoveal processing research is primarily restricted to native speakers. In the current study we manipulated the type of uninformative preview using a gaze-contingent boundary paradigm with a group of L1 English speakers and a group of late L2 English speakers (L1 German). We were interested in how different types of uninformative masks impact parafoveal processing, whether L1 and L2 speakers are similarly affected, and whether they are sensitive to parafoveally viewed language-specific sub-lexical orthographic information. We manipulated six types of uninformative masks to test these objectives: an identical mask, an English pseudo-word, a German pseudo-word, an illegal string of letters, a series of X's, and a blank mask. We found that X masks affect reading the most, with slight graded differences across the other masks; that L1 and L2 speakers are impacted similarly; and that neither group is sensitive to sub-lexical orthographic information. Overall, these data show that not all previews are equal, and research should be aware of the way uninformative masks affect reading behavior. Additionally, we hope that future research starts to approach models of eye-movement behavior during reading not only from a monolingual but also from a multilingual perspective.
Gemma Fitzsimmons; Lewis T Jayes; Mark J Weal; Denis Drieghe
In: PLoS ONE, 15 (9), pp. 1–23, 2020.
It has been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect lexical processing of words. Across two experiments, we utilised eye-tracking methodology to explore how hyperlinks and navigating webpages affect reading behaviour. In Experiment 1, participants read static webpages either for comprehension or whilst skim reading, while in Experiment 2, participants additionally read through a navigable Web environment. Embedded target words were either hyperlinks or not, and were either high-frequency or low-frequency words. Results from Experiment 1 show that readers fully lexically processed both linked and unlinked words when reading for comprehension, but only fully lexically processed linked words when skim reading, as evidenced by a frequency effect that was absent for the unlinked words. In Experiment 2, which allowed for navigating, readers only fully lexically processed linked words compared to unlinked words, regardless of whether they were skim reading or reading for comprehension. We suggest that readers engage in an efficient reading strategy whereby they attempt to minimise comprehension loss while maintaining a high reading speed. Readers use hyperlinks as markers of important information and use them to navigate through the text in an efficient and effective way. The task of reading on the Web causes readers to lexically process words in a markedly different way from typical reading experiments.
Francesca Foppolo; Adrian Staub
The puzzle of number agreement with disjunction
In: Cognition, 198 , pp. 1–20, 2020.
In English, when two nouns in a disjunctive subject differ in number (e.g., the dogs or the cat), the verb tends to agree with the number of the nearer noun. This is exceptional, as a noun's linear proximity to the verb does not generally play a role in agreement. In the present study, we investigate a further puzzle about agreement with disjunction, namely, the existence of a pattern in which two singular disjuncts trigger plural agreement (e.g., The lawyer or the accountant are…). Two eye-tracking studies in English show that plural agreement with a disjunction of singulars does not reliably disrupt readers' eye movements, in contrast to the immediate disruptive effect of other agreement violations. Three off-line rating studies in English show that plural agreement results in only a small decrement in acceptability, compared to other agreement violations, and that in some structural configurations there is no decrement at all. On the whole, the data do not support the hypothesis that plural agreement is licensed only when or has an inclusive reading; even when it has an exclusive reading, there is only a small penalty for plural agreement. Finally, we explored this issue in Italian, which has a richer system of inflectional morphology. Italian speakers showed a plural preference in a completion experiment, and singular and plural agreement did not differ in acceptability in a rating experiment. We conclude that agreement with disjunction is a grammatical lacuna or gap, in the sense that speakers' grammar simply does not prescribe a verb number following a disjunctive subject.
Ana Laura Frapiccini; Jessica A Del Punta; Karina V Rodriguez; Leonardo Dimieri; Gustavo Gasaneo
In: European Physical Journal B, 93 (2), pp. 1–10, 2020.
Starting with a proposal to model horizontal eye movements, we study the parameters involved in it. In particular, we investigate the values that best fit the parameters describing the activation force responsible for horizontal saccades, independently of the task being performed. The fitting process is based on data sets gathered with an eye-tracker device. The simplicity of the model allows us to profit from analytical expressions that simplify the fitting process. Finally, we use our model to obtain the activation force corresponding to a reading task, finding very good agreement with the experimental data.
Deanna C Friesen; Olivia Ward; Jessica Bohnet; Pierre Cormier; Debra Jared
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (9), pp. 1754–1767, 2020.
The current study investigated whether shared phonology across languages activates cross-language meaning when reading in context. Eighty-five bilinguals read English sentences while their eye movements were tracked. Critical sentences contained English members of English-French interlingual homophone pairs (e.g., mow; French homophone mate mot means "word") or they contained spelling control words (e.g., mop). Only the meaning of the unseen French homophone mate fit the context (e.g., Hannah wrote another mow/mop on the blackboard for the spelling test). Differences in fixation durations between homophone errors and spelling control errors provided evidence for cross-language activation that extended to semantic representations. When the unseen French homophone was of high frequency, shorter first fixations and gaze durations were observed on English interlingual homophones than on English control words, providing evidence that the French meaning associated with the shared phonology was activated during early stage word identification. Individual differences analyses showed that these effects were larger when bilinguals were using the nontarget language (i.e., French) more regularly in daily life. Results provide evidence that cross-language activation of phonology can be sufficiently strong to activate corresponding semantic representations during single language sentence processing.
Deanna C Friesen; Veronica Whitford; Debra Titone; Debra Jared
In: Bilingualism, 23 (2), pp. 323–343, 2020.
We investigated how individual differences in language proficiency and executive control impact cross-language meaning activation through phonology. Ninety-six university students read English sentences that contained French target words. Target words were high- and low-frequency French interlingual homophones (i.e., words that share pronunciation, but not meaning, across languages; mot means 'word' in French and sounds like 'mow' in English) and matched French control words (e.g., mois - 'month' in French). Readers could use the homophones' shared phonology to activate their English meanings and, ultimately, make sense of the sentence (e.g., Tony was too lazy to mot/mois the grass on Sunday). Shorter reading times were observed on interlingual homophones than control words, suggesting that phonological representations in one language activate cross-language semantic representations. Importantly, the magnitude of the effect was modulated by word frequency, and several participant-level characteristics, including French proficiency, English word knowledge, and executive control ability.
Rebecca L A Frost; Andrew Jessop; Samantha Durrant; Michelle S Peter; Amy Bidgood; Julian M Pine; Caroline F Rowland; Padraic Monaghan
In: Cognitive Psychology, 120 , pp. 1–19, 2020.
To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants' capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants' statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants' vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants' segmentation performance was significantly related to their vocabulary size (receptive and expressive) both currently, and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size.
Hiroki Fujita; Ian Cunnings
In: Journal of Memory and Language, 115 , pp. 104154, 2020.
Research on temporarily ambiguous “garden-path” sentences (e.g., After Mary dressed the baby laughed) has shown that initially assigned misinterpretations linger after reanalysis of the temporarily ambiguous phrase in both native (L1) and non-native (L2) readers. L2 speakers have particular difficulty with reanalysis, but the source of this L1/L2 difference is debated. Furthermore, how lingering misinterpretation may influence other aspects of language processing has not been systematically examined. We report three offline and two online experiments investigating reanalysis and misinterpretation of filler-gap dependencies (e.g., Elisa noticed the truck which the policeman watched the car from). Our results showed that L1 and L2 speakers are prone to lingering misinterpretation during dependency resolution. L1/L2 differences were observed such that L2 speakers had increased difficulty reanalysing some filler-gap dependencies; however, this depended on how the dependency was disambiguated. These results are compatible with the “good-enough” approach to language processing, and suggest that L1/L2 differences are more likely when reanalysis is particularly difficult.
Hiroki Fujita; Ian Cunnings
In: Applied Psycholinguistics, pp. 1–30, 2020.
Native (L1) and nonnative (L2) speakers sometimes misinterpret temporarily ambiguous sentences like "When Mary dressed the baby laughed happily." Recent studies suggest that the initially assigned misinterpretation ("Mary dressed the baby") may persist even after disambiguation, and that L2 speakers may have particular difficulty discarding initial misinterpretations. The present study investigated whether L2 speakers are more persistent in their misinterpretations than L1 speakers during sentence processing, using structural priming combined with eye tracking while reading. In the experiment, participants read prime sentences followed by target sentences. Reading times revealed that unambiguous but not ambiguous prime sentences facilitated processing of the globally correct interpretation of ambiguous target sentences. However, this priming effect was only observed when the prime and target sentence shared the same verb. Comprehension accuracy rates were not significantly influenced by priming effects but did provide evidence of lingering misinterpretation. We did not find significant L1/L2 differences in either priming effects or persistence of misinterpretation. Together, these results suggest that initially assigned misinterpretations linger in both L1 and L2 readers during sentence processing and that L1 and L2 comprehension priming is strongly lexically mediated.
Xiaolei Gao; Xiaowei Li; Min Sun; Xuejun Bai; Lei Gao
In: Acta Psychologica Sinica, 52 (10), pp. 1143–1155, 2020.
In the process of reading, readers mainly obtain information through the foveal region, but the parafovea also plays an important role in information acquisition. Readers can obtain certain information from the parafovea through preview processing, thus improving reading efficiency; this is called the “previewing effect”. The effect of the processing load of the fovea on the previewing effect of the parafovea has become a popular research focus of late. For example, studies based on alphabetic languages have found that the previewing effect of the parafovea is greater for high-frequency and short words than for low-frequency and long words. While Tibetan is an alphabetic language, it also belongs to the Sino-Tibetan language family and has many similarities with Chinese. However, it is still largely unclear how this foveal-load effect plays out in Tibetan reading: will it show only the common characteristics of alphabetic languages, or will it also show some Chinese characteristics? The present study aimed to provide experimental evidence to address these research questions. Two experiments were carried out on 119 Tibetan undergraduate students. More specifically, participants were asked to read Tibetan sentences while their eye movements were recorded using an SR Research EyeLink 1000 Plus eye tracker (sampling rate = 1000 Hz). Experiment 1 manipulated fovea word frequency (i.e., high vs. low frequency) to investigate the word frequency effect and word frequency delay effect of fovea words in Tibetan reading. The results showed a word frequency effect and a word frequency delay effect in Tibetan reading. Experiment 2 manipulated both fovea word frequency and parafovea preview word type with the aid of the boundary paradigm to investigate the previewing effect of the parafovea and the effect of fovea word frequency on that previewing effect in Tibetan reading.
The results showed a previewing effect of the parafovea in Tibetan reading and that, compared with low-frequency fovea words, high-frequency fovea words had a greater promoting effect on the previewing effect of the parafovea. The primary findings can be summarized as follows: (1) a significant word frequency effect exists in Tibetan reading, which is reflected in the whole process of lexical processing; (2) there is a significant word frequency delay effect in Tibetan reading, which runs through the whole process of lexical processing; (3) there is a significant previewing effect of the parafovea in Tibetan reading, through which the reader can extract phonological and orthographic information; and (4) in Tibetan reading, fovea word frequency affects the size of the previewing effect of the parafovea—moreover, word frequency only affects the extraction of shape preview information in the early stage of lexical processing; that is, the previewing effect of high-frequency words is greater under the shape preview condition. In conclusion, the effect of the processing load of the fovea on the previewing effect of the parafovea in Tibetan reading shows the common characteristics of alphabetic languages. In addition, this study found that Tibetan reading involves a word frequency delay effect and a previewing effect of the parafovea; these findings support the serial parafoveal processing assumption of the E-Z Reader model.
Alan Garnham; Scarlett Child; Sam Hutton
Anticipating causes and consequences Journal Article
In: Journal of Memory and Language, 114 , pp. 1–13, 2020.
Two visual world eye-tracking experiments investigated anticipatory looks to implicit causes and implicit consequences in two-clause sentences with mental state verbs (Stimulus-Experiencer and Experiencer-Stimulus) in the first main clause, and an explicit cause or consequence in the second. The first experiment showed that, for the same verbs, just as people look early at the implicit cause when all continuations are causes, they look early at the implicit consequence when all continuations are consequences. When causes and consequences are intermixed, people direct their looks at the cause or consequence on a trial-by-trial basis depending on the connective (“because” or “and so”). Numerically, causes were favored overall, even when all the endings were consequences, but the effect was only significant at the end of the sentences in Experiment 2. The results are discussed in terms of rapid deployment of causal and consequential information implicit in mental state verbs, and in relation to conflicting accounts of why causes or consequences might generally be favored.
Hallie Garrison; Gladys Baudet; Elise Breitfeld; Alexis Aberman; Elika Bergelson
In: Infancy, 25 (4), pp. 458–477, 2020.
Infants amass thousands of hours of experience with particular items, each of which is representative of a broader category that often shares perceptual features. Robust word comprehension requires generalizing known labels to new category members. While young infants have been found to look at common nouns when they are named aloud, the role of item familiarity has not been well examined. This study compares 12- to 18-month-olds' word comprehension in the context of pairs of their own items (e.g., photographs of their own shoe and ball) versus new tokens from the same category (e.g., a new shoe and ball). Our results replicate previous work showing that noun comprehension improves rapidly over the second year, while also suggesting that item familiarity appears to play a far smaller role in comprehension in this age range. This in turn suggests that even before age 2, ready generalization beyond particular experiences is an intrinsic component of lexical development.
Thomas Geyer; Franziska Günther; Hermann J Müller; Jim Kacian; Heinrich René Liesefeld; Stella Pierides
In: Journal of Eye Movement Research, 13 (2), pp. 1–29, 2020.
The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the 'cut' (a more or less sharp semantic-conceptual break) in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) 'phrase' image and a one-line 'fragment' image, in order to determine how they process the conceptual gap between these images when constructing the poem's meaning, as reflected in their patterns of reading eye movements. In addition to replicating the basic 'cut effect', i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of 'uncut', one-image haiku. For 'cut' vs. 'uncut' haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context-action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems.
Emilie Ginestet; Jalyssa Shadbolt; Rebecca Tucker; Marie Line Bosse; Hélène S Deacon
In: Journal of Research in Reading, pp. 2–20, 2020.
Background: Efficient word identification is directly tied to strong mental representations of words, which include spellings, meanings and pronunciations. Orthographic learning is the process by which spellings for individual words are acquired. Methods: In the present study, we combined the classic self-teaching paradigm with eye tracking to detail the process by which complex pseudowords are learned. With this methodology, we explored the visual processing and learning of complex pseudowords, as well as the transfer of that learning. We explored visual processing across exposures during the initial reading task and then measured learning and transfer in orthographic choice and spelling tasks. Results: Online eye movement monitoring during the repeated reading of complex pseudowords revealed that visual processing varied across exposures, with key differences based on word type. Further, data from both dictation and eye movements recorded during the orthographic choice task suggested stronger learning of morphologically than orthographically complex pseudowords after four encounters. Finally, results suggested that learning transfer occurred, with higher levels of accurate recognition of new pseudowords that were morphologically or orthographically related to pseudowords learned during the reading phase than of new pseudowords never read. Conclusions: The present study provides new insights into theory and methodological discussions of orthographic learning.
Margaret Grant; Shayne Sloggett; Brian Dillon
Processing ambiguities in attachment and pronominal reference Journal Article
In: Glossa: a journal of general linguistics, 5 (1), pp. 1–30, 2020.
The nature of ambiguity resolution has important implications for models of sentence processing in general. Studies of structural ambiguities, such as modifier attachment ambiguities, have generally supported a model in which a single analysis of ambiguous material is adopted without a cost to processing. Concurrently, a separate literature has observed a processing penalty for ambiguities in pronominal reference, suggesting that potential referents compete for selection during the processing of ambiguous pronouns. We argue that the apparent distinction between the ambiguity resolution mechanisms in attachment and pronominal reference ambiguities warrants further study. We present evidence from two experiments measuring eye movements during reading, showing that the separation held in the literature between these two ambiguity types is, at least, not uniformly supported.
Matteo Greco; Paolo Canal; Valentina Bambini; Andrea Moro
In: Journal of Psycholinguistic Research, 49 (3), pp. 415–434, 2020.
This work focuses on a particular case of negative sentences, the Surprise Negation sentences (SNEGs). SNEGs belong to the class of expletive negation sentences, i.e., they are affirmative in meaning but involve a clausal negation. A clear example is offered by Italian: 'E non mi è scesa dal treno Maria?!' (lit. 'and not CLITIC.to_me is got-off the train Mary' = 'The surprise was that Maria got off the train!'). From a theoretical point of view, the interpretation of SNEGs as affirmative can be derived from their specific syntactic and semantic structure. Here we offer an implementation of the visual world paradigm to test how SNEGs are interpreted. Participants listened to affirmative, negative or expletive negative clauses while four objects (two relevant—either expected or unexpected—and two unrelated) were shown on the screen and their eye movements were recorded. Growth Curve Analysis showed that the fixation patterns to the relevant objects were very similar for affirmative and expletive negative sentences, while striking differences were observed between negative and affirmative sentences. These results showed that negation plays a different role in the mental representation of a sentence depending on its syntactic derivation. Moreover, we also found that, compared to affirmative sentences, SNEGs require greater processing effort due to both their syntactic complexity and pragmatic integration, with slower response times and lower accuracy.
Jeffrey J Green; Michael McCourt; Ellen Lau; Alexander Williams
In: Glossa: A Journal of General Linguistics, 5 (1), pp. 1–33, 2020.
The comprehension of anaphoric relations may be guided not only by discourse information, but also by syntactic information. In the literature on online processing, however, the focus has been on audible pronouns and descriptions whose reference is resolved mainly via the former. This paper examines one relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, or the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eye tracking, we compare the timecourse of interpreting this null subject and overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether or not the dependency is cued by an overt referring expression.
Kathleen Hall; Masaya Yoshida
Coreference and parallelism Journal Article
In: Language, Cognition and Neuroscience, pp. 1–24, 2020.
Previous studies have demonstrated a reliable effect of parallelism in a variety of domains. These studies have suggested that parallelism is preferred during both production and comprehension, and that parallelism can result in facilitation during sentence processing. There is, however, some debate about whether such effects are truly limited to coordination. In both coordinate and subordinate environments we examined whether parallelism affects pronoun resolution, and furthermore investigated whether distance between a pronoun and an antecedent (locality) affects the retrieval process. Two experiments, each consisting of an offline forced choice task as well as an eye tracking task, indicated that both locality and parallelism influence the pronoun resolution process in both coordinate and subordinate contexts. These findings are discussed in light of popular retrieval models.
Andreas Hallberg; Diederick C Niehorster
In: Reading and Writing, pp. 1–22, 2020.
In Arabic, morphologically marked case is a feature exclusive to the Standard Arabic variety, with no parallel in the spoken varieties, and it is orthographically marked only on some word classes in specific grammatical situations. In this study we test the hypothesis that readers of Arabic do not parse sentences for case and that orthographically marked case can therefore be removed with no effect on reading. Twenty-nine participants read sentences in which one of the two most frequent types of orthographically marked case was either retained or omitted, while their eye movements were monitored. The removal of case marking from subjects in the sound masculine plural declension (changing the suffix ‑ūn ـون to ‑īn ـين) had no negative effect on gaze duration, regressions out, or go-past time. The removal of case marking from direct objects in the triptote declension (omitting the suffix -an ـاً) did, however, result in an increase in these measures. These results indicate that only some forms of case marking are required in the grammar used by readers for parsing written text.
Juan Su; Guoen Yin; Xuejun Bai; Guoli Yan; Stoyan Kurtev; Kayleigh L Warrington; Victoria A McGowan; Simon P Liversedge; Kevin B Paterson
In: Attention, Perception, and Psychophysics, 82 (4), pp. 1566–1572, 2020.
Readers can acquire useful information from only a narrow region of text around each fixation (the perceptual span), which extends asymmetrically in the direction of reading. Studies with bilingual readers have additionally shown that this asymmetry reverses with changes in horizontal reading direction. However, little is known about the perceptual span's flexibility following orthogonal (vertical vs. horizontal) changes in reading direction, because of the scarcity of vertical writing systems and because changes in reading direction often are confounded with text orientation. Accordingly, we assessed effects in a language (Mongolian) that avoids this confound, in which text is conventionally read vertically but can also be read horizontally. Sentences were presented normally or in a gaze-contingent paradigm in which a restricted region of text was displayed normally around each fixation and other text was degraded. The perceptual span effects on reading rates were similar in both reading directions. These findings therefore provide a unique (nonconfounded) demonstration of perceptual span flexibility.
Alan Taitz; Florencia M Assaneo; Diego E Shalom; Marcos A Trevisan
In: Scientific Reports, 10 , pp. 1–10, 2020.
Silent reading is a cognitive operation that produces verbal content with no vocal output. One relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories and lip dynamics during the reading of consonant-consonant-vowel (CCV) combinations which are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlates with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require a greater laryngeal and/or articulatory effort to be pronounced. Our results support the view that a speech motor code is used for the recognition of infrequent text strings during silent reading.
Philip Thierfelder; Gillian Wigglesworth; Gladys Tang
In: Quarterly Journal of Experimental Psychology, 73 (12), pp. 2217–2235, 2020.
We used an error disruption paradigm to investigate how deaf readers from Hong Kong, who had varying levels of reading fluency, use orthographic, phonological, and mouth-shape-based (i.e., “visemic”) codes during Chinese sentence reading while also examining the role of contextual information in facilitating lexical retrieval and integration. Participants had their eye movements recorded as they silently read Chinese sentences containing orthographic, homophonic, homovisemic, or unrelated errors. Sentences varied in terms of how much contextual information was available leading up to the target word. Fixation time analyses revealed that in early fixation measures, deaf readers activated word meanings primarily through orthographic representations. However, in contexts where targets were highly predictable, fixation times on homophonic errors decreased relative to those on unrelated errors, suggesting that higher levels of contextual predictability facilitated early phonological activation. In the measure of total reading time, results indicated that deaf readers activated word meanings primarily through orthographic representations, but they also appeared to activate word meanings through visemic representations in late error recovery processes. Examining the influence of reading fluency level on error recovery processes, we found that, in comparison to deaf readers with lower reading fluency levels, those with higher reading fluency levels could more quickly resolve homophonic and orthographic errors in the measures of gaze duration and total reading time, respectively. We conclude with a discussion of the theoretical implications of these findings as they relate to the lexical quality hypothesis and the dual-route cascaded model of reading by deaf adults.
Philip Thierfelder; Gillian Wigglesworth; Gladys Tang
In: Cognition, 201 , pp. 1–14, 2020.
Research has found that deaf readers unconsciously activate sign translations of written words while reading. However, the ways in which the different sign phonological parameters associated with these sign translations tie into reading processes have received little attention in the literature. In this study on Chinese reading, we used a parafoveal preview paradigm to investigate how four different types of sign phonologically related previews affect reading processes in adult deaf signers of Hong Kong Sign Language (HKSL). The four types of sign phonologically related preview-target pairs were: (1) pairs with HKSL translations that overlapped in three parameters—handshape, location, and movement; (2) pairs that overlapped in only handshape and location; (3) pairs that overlapped in only handshape and movement; and (4) pairs that overlapped in only location and movement. Results showed that the handshape parameter was of particular importance, as only sign translation pairs that had handshape among their overlapping sign phonological parameters led to early sign activation. Furthermore, we found that, compared to control previews, deaf readers took longer to read targets when the sign translation previews overlapped with targets in either handshape and movement or handshape, movement, and location. In contrast, fixation times on targets were shorter when previews and targets overlapped in location and any single additional parameter—either handshape or movement. These results indicate that the phonological parameters of handshape, location, and movement are activated via orthography during Chinese reading and can have different effects on parafoveal processing in deaf signers of HKSL.
Simon P Tiffin-Richards; Sascha Schroeder
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (9), pp. 1701–1713, 2020.
Words are seldom read in isolation. Predicting or anticipating upcoming words in a text, based on the context in which they are read, is an important aspect of efficient language processing. In sentence reading, words with congruent preceding context have been shown to be processed faster than words read in neutral or incongruous contexts. The onset of contextual facilitation effects is found very early in the first-pass-reading eye-movement and electroencephalogram (EEG) measures of skilled adult readers. However, the effect of contextual facilitation on children's eye movements during reading remains largely unexplored. To fill this gap, we tracked children's and adults' eye movements while reading stories with embedded words that were either strongly or weakly related to a clear narrative theme. Our central finding is that children showed late contextual facilitation effects during text reading as opposed to both early and late facilitation effects found in skilled adult readers. Contextual constraint had a similar effect on children's and adults' initiation of regressive saccades, whereas children invested more time in rereading relative to adults after encountering weakly contextually constrained words. Quantile regression analyses revealed that contextual facilitation effects had an early onset in adults' first-pass reading, whereas they only had a late onset in children's gaze durations.
In: Laterality, pp. 1–25, 2020.
Previous research suggests that the right visual field advantage on the lexical decision task occurs independent of the visual quality of stimuli [Chiarello, C., Senehi, J., & Soulier, M. (1986). Viewing conditions and hemisphere asymmetry for the lexical decision. Neuropsychologia, 24(4), 521–529]. However, previous studies examining these effects have had methodological limitations that were addressed and controlled for in the present study. Participants performed a divided visual field, lexical decision task for words that varied in size (Experiment 1) and visibility (Experiment 2). Results showed a quality by visual field interaction effect. In both experiments, response times were faster for targets presented to the right visual field in the high quality (i.e., large font, high visibility) conditions; however, visual quality resulted in no differences for targets presented to the left visual field. Furthermore, this quality by visual field interaction effect was only observed when the target was a word. These results suggest that the left hemisphere advantage for lexical decision depends on the perceptual quality of targets, consistent with an early stage of processing account of hemispheric asymmetry during lexical decision. Findings are discussed within the context of word recognition and decision-based models.
Xiuli Xiuhong Tong; Wei Shen; Zhao Li; Mengdi Xu; Liping Pan; Shelley Xiuli Tong
In: Quarterly Journal of Experimental Psychology, 73 (4), pp. 617–628, 2020.
Combining an eye-tracking technique with a revised visual world paradigm, this study examined how the positional, phonological, and semantic information of radicals is activated in visual Chinese character recognition. Participants' eye movements were tracked as they looked at four types of invented logographic characters: characters containing a semantic radical in a legal or an illegal position, and characters containing a phonetic radical in a legal or an illegal position. These logographic characters were presented simultaneously with either a sound cue (e.g., /qiao2/) or a meaning cue (e.g., a picture of a bridge). Participants appeared to allocate more visual attention towards radicals in legal, rather than illegal, positions. In addition, more eye fixations occurred on phonetic, rather than on semantic, radicals across both sound- and meaning-cued conditions, indicating participants' strong preference for phonetic over semantic radicals in visual character processing. These results underscore the universal phonology principle in processing non-alphabetic Chinese logographic characters.
Kristen M Tooley
Contrasting mechanistic accounts of the lexical boost Journal Article
In: Memory and Cognition, 48 (5), pp. 815–838, 2020.
While many recent studies focused on abstract syntactic priming effects have implicated an error-based learning mechanism, there is little consensus on the most likely mechanism underlying the lexical boost. The current study aimed at refining understanding of the mechanism that leads to this priming effect. In two eye-tracking during reading experiments, the nature of the lexical boost was investigated by comparing predictions from competing accounts in terms of decay and the requirement of structural overlap between primes and targets. Experiment 1 revealed facilitation of target structure processing for shorter relative to longer primes, when there were fewer intervening words between prime and target verbs. In Experiment 2, significant lexically boosted priming effects were observed, but only when the target structure also appeared in the prime, and not when the prime had a different structure but a high degree of lexical overlap with the target. Overall, these results are most consistent with a short-lived mechanistic account rather than an error-based learning account of the lexical boost. Furthermore, these results align with dual-mechanism accounts of syntactic priming whereby different mechanisms are claimed to produce abstract syntactic priming effects and the lexical boost.
Josef Toon; Anuenue Kukona
In: Cognitive Science, 44 (1), pp. 1–22, 2020.
Two visual world experiments investigated the activation of semantically related concepts during the processing of environmental sounds and spoken words. Participants heard environmental sounds such as barking or spoken words such as “puppy” while viewing visual arrays with objects such as a bone (semantically related competitor) and candle (unrelated distractor). In Experiment 1, a puppy (target) was also included in the visual array; in Experiment 2, it was not. During both types of auditory stimuli, competitors were fixated significantly more than distractors, supporting the coactivation of semantically related concepts in both cases; comparisons of the two types of auditory stimuli also revealed significantly larger effects with environmental sounds than spoken words. We discuss implications of these results for theories of semantic knowledge.
Fatemeh Torabi Asr; Vera Demberg
In: Discourse Processes, 57 (4), pp. 376–399, 2020.
Connectives can facilitate the processing of discourse relations by helping comprehenders to infer the intended coherence relation holding between two text spans. Previous experimental studies have focused on pairs of connectives that are very different from one another to be able to compare and formalize the distinguishing effects of these particles in discourse comprehension. In this article, we compare two connectives, but and although, which overlap in terms of the relations they can signal. We demonstrate in a set of carefully controlled studies that while a connective can be a marker of several discourse relations, it can have a specific fine-grained biasing effect on linguistic inferences and that this bias can be derived (or predicted) from the connectives' distribution of relations found in production data. The effects that we find speak to the ambiguity of discourse connectives, in general, and the different functions of but and although, in particular. These effects cannot be explained within the earlier accounts of discourse connectives, which propose that each connective has a core meaning or processing instruction. Instead, we here lay out a probabilistic account of connective meaning and interpretation, which is based on the distribution of connectives in production and is supported by our experimental findings.
Alexandra Ţurcan; Hannah Howman; Ruth Filik
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (10), pp. 1966–1976, 2020.
This article addresses a current theoretical debate between modular and interactive accounts of sarcasm processing, by investigating the role of context (specifically, knowing that a character has been sarcastic before) in the comprehension of a sarcastic remark. An eye-tracking experiment was conducted in which participants were asked to read texts that introduced a character as being either sarcastic or not and ended in either a literal or an unfamiliar sarcastic remark. The results indicated that when the character was previously literal, a subsequent sarcastic remark was more difficult to process than its literal counterpart. However, when the context was supportive of the sarcastic interpretation (i.e., the character was known to be sarcastic), subsequent sarcastic remarks were as easy to read as literal equivalents, which would support the predictions of interactive accounts. Importantly, this effect was not preceded by a main effect of literality, which constitutes evidence against the predictions of modular accounts.
Anastasia Ulicheva; Hannah Harvey; Mark Aronoff; Kathleen Rastle
In: Cognition, 195 , pp. 103810, 2020.
Substantial research has been undertaken to understand the relationship between spelling and sound, but we know little about the relationship between spelling and meaning in alphabetic writing systems. We present a computational analysis of English writing in which we develop new constructs to describe this relationship. Diagnosticity captures the amount of meaningful information in a given spelling, whereas specificity estimates the degree of dispersion of this meaning across different spellings for a particular sound sequence. Using these two constructs, we demonstrate that particular suffix spellings tend to be reserved for particular meaningful functions. We then show across three paradigms (nonword classification, spelling, and eye tracking during sentence reading) that this form of regularity between spelling and meaning influences the behaviour of skilled readers, and that the degree of this behavioural sensitivity mirrors the strength of spelling-to-meaning regularities in the writing system. We close by arguing that English spelling may have become fractionated such that the high degree of spelling-sound inconsistency maximises the transmission of meaningful information.
Franziska Usée; Arthur M Jacobs; Jana Lüdtke
In: Frontiers in Psychology, 11 , pp. 1–21, 2020.
Reading is known to be a highly complex, emotion-inducing process, usually involving connected and cohesive sequences of sentences and paragraphs. However, most empirical results, especially from studies using eye tracking, are either restricted to simple linguistic materials (e.g., isolated words, single sentences) or disregard valence-driven effects. The present study addressed the need for ecologically valid stimuli by examining the emotion potential of, and reading behavior in, emotional vignettes, often used in applied psychological contexts and discourse comprehension. To allow for a cross-domain comparison in the area of emotion induction, negatively and positively valenced vignettes were constructed based on pre-selected emotional pictures from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014). We collected ratings of perceived valence and arousal for both material groups and recorded eye movements of 42 participants during reading and picture viewing. Linear mixed-effects models were performed to analyze effects of valence (i.e., valence category, valence rating) and stimulus domain (i.e., textual, pictorial) on ratings of perceived valence and arousal, eye movements in reading, and eye movements in picture viewing. Results supported the success of our experimental manipulation: emotionally positive stimuli (i.e., vignettes, pictures) were perceived as more positive and less arousing than emotionally negative ones. The cross-domain comparison indicated that vignettes are able to induce stronger valence effects than their pictorial counterparts; no differences between vignettes and pictures were found regarding effects on perceived arousal. Analyses of eye movements in reading replicated results from experiments using isolated words and sentences: perceived positive text valence attracted shorter reading times than perceived negative valence at both the supralexical and lexical level. In line with previous findings, no emotion effects on eye movements in picture viewing were found. This is the first eye tracking study reporting superior valence effects for vignettes compared to pictures and valence-specific effects on eye movements in reading at the supralexical level.
Marloes L van Moort; Arnout Koornneef; Paul W van den Broek
In: Discourse Processes, pp. 1–20, 2020.
To build a coherent accurate mental representation of a text, readers routinely validate information they read against the preceding text and their background knowledge. It is clear that both sources affect processing, but when and how they exert their influence remains unclear. To examine the time course and cognitive architecture of text-based and knowledge-based validation processes, we used eye-tracking methodology. Participants read versions of texts that varied systematically in (in)coherence with prior text or background knowledge. Contradictions with respect to prior text and background knowledge both were found to disrupt reading but in different ways: The two types of contradiction led to distinct patterns of processes, and, importantly, these differences were evident already in early processing stages. Moreover, knowledge-based incoherence triggered more pervasive and longer (repair) processes than did text-based incoherence. Finally, processing of text-based and knowledge-based incoherence was not influenced by readers' working memory capacity.
Martin R Vasilev; Mark Yates; Ethan Prueitt; Timothy J Slattery
In: Quarterly Journal of Experimental Psychology, pp. 1–23, 2020.
There is a growing understanding that the parafoveal preview effect during reading may represent a combination of preview benefits and preview costs due to interference from parafoveal masks. It has been suggested that visually degrading the parafoveal masks may reduce their costs, but adult readers were later shown to be highly sensitive to degraded display changes. Four experiments examined how preview benefits and preview costs are influenced by the perception of distinct parafoveal degradation at the target word location. Participants read sentences with four preview types (identity, orthographic, phonological, and letter-mask preview) and two levels of visual degradation (0% vs. 20%). The distinctiveness of the target word degradation was either eliminated by degrading all words in the sentence (Experiments 1a–2a) or remained present, as in previous research (Experiments 1b–2b). Degrading the letter masks resulted in a reduction in preview costs, but only when all words in the sentence were degraded. When degradation at the target word location was perceptually distinct, it induced costs of its own, even for orthographically and phonologically related previews. These results confirm previous reports that traditional parafoveal masks introduce preview costs that overestimate the size of the true benefit. However, they also show that parafoveal degradation has the unintended consequence of introducing additional costs when participants are aware of distinct degradation on the target word. Parafoveal degradation appears to be easily perceived and may temporarily orient attention away from the reading task, thus delaying word processing.
Awel Vaughan-Evans; Simon P Liversedge; Gemma Fitzsimmons; Manon W Jones
Syntactic co-activation in natural reading Journal Article
In: Visual Cognition, 28 (10), pp. 541–556, 2020.
The extent to which syntactic co-activation occurs during natural reading is currently unknown. Here, we measured the eye movements of Welsh-English bilinguals and English monolinguals as they read English sentences. Target words were manipulated to create nonwords that were consistent or inconsistent with the rules of Welsh soft mutation (a morphosyntactic process that alters the initial consonant of words). Nonwords were only visible in parafoveal preview, and a direct fixation triggered the presentation of the normal English word. Linear mixed effects analyses revealed a robust parafoveal preview benefit for identity previews (television) compared with mutated (delevision) and aberrant previews (belevision), and a parafoveal-on-foveal effect in our bilingual sample. Bilingual readers' sentence reanalysis was affected by the implicit Welsh mutation, but only in contexts that would elicit a mutation in Welsh. Our findings suggest that morphosyntactic rules are co-activated during natural reading; however, further investigations must evaluate the robustness of this effect.
Aaron Veldre; Erik D Reichle; Roslyn Wong; Sally Andrews
In: Cognition, 197 , pp. 1–14, 2020.
Recent eye-movement evidence suggests readers are more likely to skip a high-frequency word than a low-frequency word independently of the semantic or syntactic acceptability of the word in the sentence. This has been interpreted as strong support for a serial processing mechanism in which the decision to skip a word is based on the completion of a preliminary stage of lexical processing prior to any assessment of contextual fit. The present large-scale study was designed to reconcile these findings with the plausibility preview effect: higher skipping and reduced first-pass reading times for words that are previewed by contextually plausible, compared to implausible, sentence continuations that are unrelated to the target word. Participants' eye movements were recorded as they read sentences containing a short (3–4 letters) or long (6 letters) target word. The boundary paradigm was used to present parafoveal previews which were either higher or lower frequency than the target, and either plausible or implausible in the sentence context. The results revealed strong, independent effects of all three factors on target skipping and early measures of target fixation duration, while frequency and plausibility interacted on later measures of target fixation duration. Simulations using the E-Z Reader model of eye-movement control in reading demonstrated that plausibility effects on skipping are potentially consistent with the assumption that higher-level contextual information only affects post-lexical integration processes. However, no current model of eye movements in reading provides an explicit account of the information or processes that allow readers to rapidly detect an integration failure.
Jorrig Vogels; David M Howcroft; Elli Tourtouri; Vera Demberg
How speakers adapt object descriptions to listeners under load Journal Article
In: Language, Cognition and Neuroscience, 35 (1), pp. 78–92, 2020.
A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.
Margreet Vogelzang; Francesca Foppolo; Maria Teresa Guasti; Hedderik van Rijn; Petra Hendriks
In: Discourse Processes, 57 (2), pp. 158–183, 2020.
Different words generally have different meanings. However, some words seemingly share similar meanings. An example is the pair of null and overt pronouns in Italian, both of which refer to an individual in the discourse. Is the interpretation and processing of a form affected by the existence of another form with a similar meaning? With a pupillary response study, we show that null and overt pronouns are processed differently. Specifically, null pronouns are found to be less costly to process than overt pronouns. We argue that this difference is caused by an additional reasoning step that is needed to process marked overt pronouns but not unmarked null pronouns. A comparison with data from Dutch, a language with overt but no null pronouns, demonstrates that Italian pronouns are processed differently from Dutch pronouns. These findings suggest that the processing of a marked form is influenced by alternative forms within the same language, making its processing costly.
Aiping Wang; Ming Yan; Bei Wang; Gaoding Jia; Albrecht W Inhoff
The perceptual span in Tibetan reading Journal Article
In: Psychological Research, pp. 1–10, 2020.
Tibetan script differs from other alphabetic writing systems in that word forms can be composed of horizontally and vertically arrayed characters. To examine information extraction during the reading of this script, eye movements of native readers were recorded and used to control the size of a window of legible text that moved in synchrony with the eyes. Letters outside the window were masked, and no viewing constraints were imposed in a control condition. Comparisons of window conditions with the control condition showed that reading speed and oculomotor activity matched the control condition, when windows revealed three letters to the left and seven to eight letters to the right of a fixated letter location. Cross-script comparisons indicate that this perceptual span is smaller than for English and larger than for Chinese script. We suggest that the information density of a writing system influences the perceptual span during reading.
Xiaoming Wang; Xinbo Zhao; Yanning Zhang
In: Neurocomputing, 2020.
Eye-movement recognition is a new type of biometric recognition technology. Existing eye-movement recognition technology is based on eye-movement trajectory similarity measurements and uses many eye-movement features, without considering the characteristics of the stimuli. Related studies in reading psychology have shown that when reading text, human eye movements differ between individuals yet are stable for a given individual. This paper proposes a technology for aiding biometric recognition based on reading eye movements. By introducing a deep-learning framework, a computational model for reading eye-movement recognition (REMR) was constructed. The model takes the text, fixation, and text-based linguistic feature sequences as inputs and identifies a human subject by measuring the similarity distance between the predicted fixation sequence and the actual one (to be identified). The experimental results show that the fixation-sequence similarity recognition algorithm obtained an equal error rate of 19.4% on the test set, and the model obtained an 86.5% Rank-1 recognition rate on the test set.
Xin Kang; Gitte H Joergensen; Gerry T M Altmann
In: Acta Psychologica, 210 , pp. 1–11, 2020.
Understanding the time-course of event knowledge activation is crucial for theories of language comprehension. We report two experiments using the 'visual world paradigm' (VWP) that investigated the dynamic mapping between object-state representations and real-time language processing. In Experiment 1, participants heard sentences that described events resulting in either a substantial change of state (e.g. The chef will chop the onion) or a minimal change of state (e.g. The chef will weigh the onion). Concurrently, they viewed pictures depicting two versions of the target object (e.g., an onion) corresponding to the intact and changed states, and two unrelated distractors. A second sentence referred to the object with either a backward or a forward shift in event time (e.g. But first/And then, he will smell the onion). In Experiment 2, Degree of Change was manipulated by using different nouns in the first sentence (e.g. The girl will stomp on the penny/egg). The second sentence was similar to the ones used in Experiment 1 (e.g., But first/And then, she will look at the penny/egg). The results from both experiments showed that participants looked more at the 'appropriate' state of the object that matched the language context, but the shift of visual attention emerged only when the object name was heard. Our findings suggest that situationally appropriate object representations do trigger eye movements to the corresponding states of the target object, but inappropriate representations are not necessarily eliminated from consideration until the language forces it.
Greta Kaufeld; Wibke Naumann; Antje S Meyer; Hans Rutger Bosker; Andrea E Martin
In: Language, Cognition and Neuroscience, 35 (7), pp. 933–948, 2020.
Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implication of these demonstrations for theories of language processing.
Greta Kaufeld; Anna Ravenschlag; Antje S Meyer; Andrea E Martin; Hans Rutger Bosker
In: Journal of experimental psychology. Learning, memory, and cognition, 46 (3), pp. 1–14, 2020.
During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue's reliability. Moreover, we found speech rate normalization effects in participants' gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
Nayoun Kim; Katy Carlson; Mike Dickey; Masaya Yoshida
Processing gapping: Parallelism and grammatical constraints Journal Article
In: Quarterly Journal of Experimental Psychology, 73 (5), pp. 781–798, 2020.
This study aims to test two hypotheses about the online processing of Gapping: whether the parser inserts an ellipsis site in an incremental fashion in certain coordinated structures (the Incremental Ellipsis Hypothesis), or whether ellipsis is a late and dispreferred option (the Ellipsis as a Last Resort Hypothesis). We employ two offline acceptability rating experiments and a sentence fragment completion experiment to investigate to what extent the distribution of Gapping is controlled by grammatical and extra-grammatical constraints. Furthermore, an eye-tracking while reading experiment demonstrated that the parser inserts an ellipsis site incrementally but only when grammatical and extra-grammatical constraints allow for the insertion of the ellipsis site. This study shows that incremental building of the Gapping structure follows from the parser's general preference to keep the structure of the two conjuncts maximally parallel in a coordination structure as well as from grammatical restrictions on the distribution of Gapping such as the Coordination Constraint.
Young Suk Grace Kim; Yaacov Petscher; Christopher Vorstius
In: Scientific Studies of Reading, pp. 1–20, 2020.
We examined the relations of working memory, emergent literacy skills (e.g., phonological awareness, orthographic awareness, rapid automatized naming), word reading, and listening comprehension to online reading processes (eye movements), and the relations of these variables to reading comprehension. A total of 292 students were assessed on working memory and emergent literacy skills in Grade 1, and on eye movements, language, and reading skills in Grade 3. Structural equation model results showed that word reading was related to gaze duration and rereading duration, but listening comprehension was not. Working memory and emergent literacy skills were related to eye movements, but their relations to eye movements were largely mediated by word reading. Eye movements were related to reading comprehension, but not after accounting for word reading and listening comprehension. These results expand our understanding of reading development by revealing the nature of the relations of emergent literacy skills, reading, and listening comprehension to online processes.
S Kleijn; W M Mak; Ted J M Sanders
In: Cognitive Linguistics, pp. 1–31, 2020.
Research has shown that it requires less time to process information that is part of an objective causal relation describing states of affairs in the world (She was out of breath because she was running), than information that is part of a subjective relation (She must have been in a hurry because she was running) expressing a claim or conclusion and a supporting argument. Representing subjectivity seems to require extra cognitive operations. In Mental Spaces Theory (MST; Fauconnier, Gilles. 1994. Mental spaces: Aspects of meaning construction in natural language. Cambridge: MIT Press) the difference between these two relation types can be described in terms of an extra mental space in the discourse representation of subjective relations: representing the Subject of Consciousness (SoC). In processing terms, this might imply that the processing difference is not present if this SoC has already been established in the discourse. We tested this prediction in two eye tracking experiments. The results of Experiment 1 showed that signaling the subjectivity of the relation by introducing a subject of consciousness beforehand did not diminish the processing asymmetry compared to a neutral context. However, the relative complexity of subjective relations was diminished in the context of Free Indirect Speech (No! He was absolutely sure. There was no doubt about it. She was running so she was in a hurry; Experiment 2). In terms of MST and the representation of subjectivity in general, this implies that not only creating a representation of a thinking subject, but also assigning a claim to this thinking subject requires extra processing effort.
Sebastian P Korinth; Kerstin Gerstenberger; Christian J Fiebach
In: Frontiers in Psychology, 11 , pp. 1–17, 2020.
Previous reports of improved oral reading performance for dyslexic children but not for regular readers when between-letter spacing was enlarged led to the proposal of a dyslexia-specific deficit in visual crowding. However, it is in this context also critical to understand how letter spacing affects visual word recognition and reading in unimpaired readers. Adopting an individual differences approach, the present study, accordingly, examined whether wider letter spacing improves reading performance also for non-impaired adults during silent reading and whether there is an association between letter spacing and crowding sensitivity. We report eye movement data of 24 German students who silently read texts presented either with normal or wider letter spacing. Foveal and parafoveal crowding sensitivity were estimated using two independent tests. Wider spacing reduced first fixation durations, gaze durations, and total fixation time for all participants, with slower readers showing stronger effects. However, wider letter spacing also reduced skipping probabilities and elicited more fixations, especially for faster readers. In terms of words read per minute, wider letter spacing did not provide a benefit, and faster readers in particular were slowed down. Neither foveal nor parafoveal crowding sensitivity correlated with the observed letter-spacing effects. In conclusion, wide letter spacing reduces single word processing time in typically developed readers during silent reading, but affects reading rates negatively since more words must be fixated. We tentatively propose that wider letter spacing reinforces serial letter processing in slower readers, but disrupts parallel processing of letter chunks in faster readers. These effects of letter spacing do not seem to be mediated by individual differences in crowding sensitivity.
A A Korneev; Yu E Matveeva; T V Akhutina
In: Human Physiology, 46 (3), pp. 235–243, 2020.
We studied the reading skills of primary schoolchildren (8–9 years of age) using neuropsychological and eye tracking methods. We analyzed possible correlations between the level of reading skill and the preferred reading strategy, on the one hand, and the features of eye movements and the cognitive functions of children, on the other. The study involved 46 third-graders. Their reading skill was evaluated using words with regular and irregular spelling. Based on a cluster analysis of reading performance, these children were divided into four groups according to the level and quality of reading development. Group 1 read all types of words well enough; group 2 read regular words well and irregular words slightly worse; children from groups 3 and 4 read regular words at a satisfactory level, while irregular words were read significantly worse than regular ones in group 3 and were not read by group 4 children. An eye tracking study allowed us to suggest that children with good reading skills are more likely to use the lexical strategy, and children with relatively poor reading skills use the sublexical strategy, which is more available to them. Moreover, analysis of the individual differences in poor readers showed that some of them were also able to recruit the lexical strategy in the reading process.
Anna A Kosovicheva; Peter J Bex
In: Perception, 49 (1), pp. 21–38, 2020.
When making a sequence of fixations, how does the timing of visual experience compare with the timing of fixation onsets? Previous studies have tracked shifts of attention or perceived gaze direction using self-report methods. We used a similar method, a dynamic color technique, to measure subjective timing in continuous tasks involving fixation sequences. Does the time that observers report reading a word coincide with their fixation on it, or is there an asynchrony, and does this relationship depend on the observer's task? Observers read sentences that continuously changed in hue and identified the color of a word at the time that they read it using a color palette. We compared responses with a nonreading condition, where observers reproduced their fixations, but viewed nonword stimuli. Results showed a delay between the color of stimuli at fixation onset and the reported color during perception. For nonword tasks, the delay was constant. However, in the reading task, the delay was larger for earlier compared with later words in the sentence. Our results offer a new method for measuring awareness or subjective progress within fixation sequences, which can be extended to other continuous tasks.
In: Journal of Experimental Psychology: Learning Memory and Cognition, 46 (11), pp. 2153–2162, 2020.
Two visual world experiments investigated the priming of form (e.g., phonology) during language processing. In Experiment 1, participants heard high cloze probability sentences like "In order to have a closer look, the dentist asked the man to open his . . ." while viewing visual arrays with objects like a predictable target mouth, phonological competitor mouse, and unrelated distractors. In Experiment 2, participants heard target-associated nouns like "dentist" that were isolated from the sentences in Experiment 1 while viewing the same visual arrays. In both experiments, participants fixated the target (e.g., mouth) most, but also fixated the phonological competitor (e.g., mouse) more than unrelated distractors. Taken together, these results are interpreted as supporting association-based mechanisms in prediction, such that activation spreads across both semantics and form within the mental lexicon (e.g., dentist-mouth-mouse) and likewise primes (i.e., preactivates) the form of upcoming words during sentence processing.
Alper Kumcu; Robin L Thompson
In: Psychological Research, 84 (3), pp. 667–684, 2020.
People revisit spatial locations of visually encoded information when they are asked to retrieve that information, even when the visual image is no longer present. Such "looking at nothing" during retrieval is likely modulated by memory load (i.e., mental effort to maintain and reconstruct information) and the strength of mental representations. We investigated whether words that are more difficult to remember also lead to more looks to relevant, blank locations. Participants were presented with four nouns on a two-by-two grid. A number of lexico-semantic variables were controlled to form high-difficulty and low-difficulty noun sets. Results reveal more frequent looks to blank locations during retrieval of high-difficulty nouns compared to low-difficulty ones. Mixed-effects modelling demonstrates that imagery-related semantic factors (imageability and concreteness) predict looking at nothing during retrieval. Results provide the first direct evidence that looking at nothing is modulated by word difficulty and in particular, word imageability. Overall, the research provides substantial support to the integrated memory account for linguistic stimuli and looking at nothing as a form of mental imagery.
Victor Kuperman; Avital Deutsch
In: Quarterly Journal of Experimental Psychology, 73 (12), pp. 2177–2187, 2020.
Hebrew noun–noun compounds offer a valuable opportunity to study the long-standing question of how morphologically complex words are processed during reading. Specifically, in some morpho-syntactic environments, the first (head) noun of a compound carries a suffix—a clear orthographic marker of being part of a compound—whereas in others it is homographic with a stand-alone noun. In addition to this morphological cue, Hebrew occasionally employs hyphenation as a visual signal that two nouns, which are typically separated by a space, are combined in a compound. In a factorial design, we orthogonally manipulated the morphological and the visual cues and recorded eye movements of 75 proficient Hebrew readers while they read sentences with embedded compounds. The effect of hyphenation on reading times was inhibitory. This slow-down was significantly weaker in compounds where the syntactic relation between constituents was overtly marked by a suffix compared with compounds without a morphological marker. We interpret these findings as evidence that hyphenation is largely a redundant cue but morphological markers of compounding are psychologically valid cues for semantic integration of compounds. We discuss the implications of this finding for accounts of morphological processing.
Marianna Kyriacou; Kathy Conklin; Dominic Thompson
Passivizability of idioms: Has the wrong tree been barked up?
In: Language and Speech, 63 (2), pp. 404–435, 2020.
A growing number of studies support the partial compositionality of idiomatic phrases, while idioms are thought to vary in their syntactic flexibility. Some idioms, like kick the bucket, have been classified as inflexible and incapable of being passivized without losing their figurative interpretation (i.e., the bucket was kicked ≠ died). Crucially, this has never been substantiated by empirical findings. In the current study, we used eye-tracking to examine whether the passive forms of (flexible and inflexible) idioms retain or lose their figurative meaning. Active and passivized idioms (he kicked the bucket/the bucket was kicked) and incongruous active and passive control phrases (he kicked the apple/the apple was kicked) were inserted in sentences biasing the figurative meaning of the respective idiom (die). Active idioms served as a baseline. We hypothesized that if passivized idioms retain their figurative meaning (the bucket was kicked = died), they should be processed more efficiently than the control phrases, since their figurative meaning would be congruous in the context. If, on the other hand, passivized idioms lose their figurative interpretation (the bucket was kicked = the pail was kicked), then their meaning should be just as incongruous as that of both control phrases, in which case we would expect no difference in their processing. Eye movement patterns demonstrated a processing advantage for passivized idioms (flexible and inflexible) over control phrases, thus indicating that their figurative meaning was not compromised. These findings challenge classifications of idiom flexibility and highlight the creative nature of language.
In: Aphasiology, 34 (4), pp. 391–410, 2020.
Background & Aims: Healthy speakers use both word-level and structure-level information to ease sentence production processes. Structural priming facilitates message-structure mapping in aphasia. However, it remains unclear if and how word-level information affects off-line and on-line sentence production in persons with aphasia (PWA). This eye-tracking-while-speaking study examined the effect of lexical priming on production of syntactic (active/passive) structures in PWA. Methods: Eleven PWA and twenty healthy older adults (HOA) described transitive actions (woman pulling horse) following lexical priming, wherein the relative ease of lexical retrieval for the Agent or Theme was manipulated via an auditory probe (what is happening with the woman/horse?). It was examined whether or not PWA produce the sentence structure that allows earlier production of the primed word (e.g., passives when Theme was primed). Participants' eye fixation times to each character (Agent, Theme) were also monitored to examine if PWA show priming-induced preferential looks to one character from the earliest stage of production, consistent with word-driven planning. Results: HOA showed increased production of passives over actives in the Theme vs. Agent prime condition. In eye fixation data, HOA showed a priming-induced Theme advantage from the earliest time window (picture onset–400 milliseconds). PWA also showed a significant priming effect in off-line sentence production, with this priming effect being greater for the individuals whose syntactic processing is better preserved. In eye fixation data, however, PWA showed preferential fixations to the primed character at a later stage of sentence planning (400–800 milliseconds), following equal fixation time to Agent and Theme during the earliest time window. Conclusion: HOA showed word-driven production in both off-line and real-time (eye fixations) production. Lexical accessibility effectively drove off-line syntactic production in PWA, especially for those whose syntactic capacity remains relatively preserved. However, PWA showed advanced processing of both characters in the earliest eye fixation data, suggesting that successful word-driven off-line syntactic production was associated with atypical real-time sentence planning in aphasia.
Francesco Ruotolo; Solène Kalénine; Angela Bartolo
In: Journal of Experimental Psychology: Human Perception and Performance, 46 (1), pp. 66–90, 2020.
This study aimed at comparing the time course of the activation of function and manipulation knowledge during object identification. The influence of visual similarity and context information was also assessed. In 3 eye-tracking experiments, conducted with the Visual-World-Paradigm, participants heard the name of an object and had to identify it among four pictures. The target object (e.g., shopping cart) could be presented along with objects related by (a) function (e.g., basket), (b) manipulation (e.g., lawnmower), (c) context (e.g., cash register), (d) visual similarity (e.g., toaster), and (e) completely unrelated objects. Growth curve analyses were used to assess competition effects among semantically (a, b, and c), visually related (d), and unrelated competitors (e). Results showed that manipulation- and function-related, but not context-related objects received more fixations than the unrelated ones, with a temporal advantage for the manipulation-related objects (Experiment 1). However, the visually similar objects faded the semantic competition effects, especially for function-related objects (Experiment 2). Finally, no temporal differences appeared when manipulation- and function-related objects were shown within the same visual array (Experiment 3). These results support the idea that both function and manipulation are relevant features of object semantic representations, but in the absence of other semantic competitors the activation of manipulation features appears prioritized during object identification.
Edin Šabić; Daniel Henning; Hunter Myüz; Audrey Morrow; Michael C Hout; Justin A MacDonald
In: Frontiers in Psychology, 11 , pp. 1–11, 2020.
Speech comprehension is often thought of as an entirely auditory process, but both normal hearing and hearing-impaired individuals sometimes use visual attention to disambiguate speech, particularly when it is difficult to hear. Many studies have investigated how visual attention (or the lack thereof) impacts the perception of simple speech sounds such as isolated consonants, but there is a gap in the literature concerning visual attention during natural speech comprehension. This issue needs to be addressed, as individuals process sounds and words in everyday speech differently than when they are separated into individual elements with no competing sound sources or noise. Moreover, further research is needed to explore patterns of eye movements during speech comprehension, especially in the presence of noise, as such an investigation would allow us to better understand how people strategically use visual information while processing speech. To this end, we conducted an experiment to track eye-gaze behavior during a series of listening tasks as a function of the number of speakers, background noise intensity, and the presence or absence of simulated hearing impairment. Our specific aims were to discover how individuals might adapt their oculomotor behavior to compensate for the difficulty of the listening scenario, such as when listening in noisy environments or experiencing simulated hearing loss. Speech comprehension difficulty was manipulated by simulating hearing loss and varying background noise intensity. Results showed that eye movements were affected by the number of speakers, simulated hearing impairment, and the presence of noise. Further, findings showed that differing levels of signal-to-noise ratio (SNR) led to changes in eye-gaze behavior. Most notably, we found that the addition of visual information (i.e., videos vs. auditory information only) led to enhanced speech comprehension, highlighting the strategic usage of visual information during this process.
Cailey A Salagovic; Carly J Leonard
In: Attention, Perception, and Psychophysics, 1 , pp. 1–10, 2020.
Successful navigation of information-rich, multimodal environments involves processing both auditory and visual information. The extent to which information within each modality is processed varies because of many factors, but the influence of auditory stimuli on the processing of visual stimuli in these multimodal environments is not well understood. Previous research has shown that a preceding sound leads to decreased reaction times in visual tasks (Bertelson, Quarterly Journal of Experimental Psychology 19(3), 272–279, 1967). The current study examines whether a nonspatial, task-irrelevant sound additionally alters processing of visual distractors that flank a central target. We used a version of a flanker task in which participants responded to a central letter surrounded by two irrelevant flanker letters. When these flankers are associated with a conflicting response, a congruency effect occurs such that reaction time to the target is slowed (Eriksen & Eriksen, Perception & Psychophysics, 16(1), 143–149, 1974). In two experiments using this task, results showed that a preceding tone caused general speeding of reaction time across flanker types, consistent with alerting. The tone also caused decreased variation in response time. Critically, the tone modulated the congruency effect, with a greater speeding for congruent flankers than for incongruent flankers. This suggests that the influence of flanker identity was more intense after tone presentation, consistent with a nonspatial sound increasing perceptual and/or response-association processing of flanking stimuli.
Daniel Schmidtke; Anna L Moro
In: Reading Research Quarterly, pp. 1–36, 2020.
We investigated the word-reading development of adult second-language learners of English. A sample of 70 (Mandarin or Cantonese) Chinese-speaking students enrolled in a university-level English bridging program at a Canadian university silently read passages of text at the beginning and end of the program while their eye movements were recorded. At each timepoint, we also administered a battery of tests that measure key component skills of second-language reading (phonological processing, vocabulary knowledge, and listening comprehension). We found longitudinal changes in lexical processing for long words in early (refixation probability and gaze duration) and late (go-past time and total reading time) eye movement measures, indicating a shift from a sublexical to a holistic word-processing strategy. We found the largest gains in sublexical processing among students with stronger phonological awareness upon entry to the program and students who acquired more vocabulary than their peers during the program. We interpret the results of this study as evidence of a transition from a lexical processing strategy that is heavily reliant on phonological decoding to word-reading behavior that is more actively engaged in higher order cognitive processes, such as meaning integration. This research offers novel insights into predictors of reading skill in postsecondary English-language bridging programs.
Daniel Schmidtke; Julie A Van Dyke; Victor Kuperman
In: Behavior Research Methods, pp. 1–19, 2020.
The CompLex database presents a large-scale collection of eye-movement studies on English compound-word processing. A combined total of 440 participants completed eye-tracking experiments in which they silently read unspaced English compound words (e.g., goalpost) embedded in sentence contexts (e.g., Dylan hit the goalpost when he was aiming for the net.). Three studies were conducted using participants representing the non-college-bound population (300 participants), and four studies included participants recruited from the student population (140 participants). The database comprises trial-level eye-movement data (47,763 trials), participant data (including a measure of reading experience estimated via the Author Recognition Test), and lexical characteristics for the set of 931 English compound words used as critical stimuli in the studies. One contribution of the present paper is a set of regression analyses conducted on the full database and individual experiments. We report that the most reliable and consistent main effects were those elicited by compound word length, left constituent frequency, right constituent frequency, compound frequency, and semantic transparency. Separately, we also found that the effects of left constituent frequency and compound word length are weaker among more frequent compounds. Another contribution is a power analysis, in which we determined the sample sizes required to reliably detect effect sizes that are comparable to those observed in our regression models. These sample size estimates serve as a recommendation for researchers wishing either to collect eye-movement data for compound-word reading or to use the current database as a resource for the study of English compound-word processing.
Elizabeth R Schotter; Emily Johnson; Amy M Lieberman
In: Journal of Experimental Psychology: Human Perception and Performance, 46 (11), pp. 1397–1410, 2020.
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Sarah Schuster; Stefan Hawelka; Nicole Alexandra Himmelstoss; Fabio Richlan; Florian Hutzler
In: Language, Cognition and Neuroscience, 35 (5), pp. 613–624, 2020.
By means of combining eye-tracking and fMRI, the present study aimed to investigate aspects of higher linguistic processing during natural reading which were formerly hard to assess with traditional paradigms. Specifically, we investigated the haemodynamic effects of incremental sentence comprehension, as operationalised by word position, and its relation to context-based word-level effects of lexical predictability. We observed that an increasing amount of words being processed was associated with an increase in activation in the left posterior middle temporal and angular gyri. At the same time, left occipito-temporal regions showed a decrease in activation with increasing word position. Region of interest (ROI) analyses revealed differential effects of word position and predictability within dissociable parts of the semantic network, showing that it is expedient to consider these effects conjointly.
Kilian G Seeber; Laura Keller; Alexis Hervais-Adelman
In: Language, Cognition and Neuroscience, 35 (10), pp. 1480–1494, 2020.
In our study we analyse the online processing of visual-verbal input during simultaneous interpreting with text. To that end, we compared 15 professional interpreters' eye movements during simultaneous interpreting with text (SIMTXT) to a baseline collected during reading while listening (RWL). We found that interpreters have a preference for a visual lead during RWL, following the pattern well-documented in silent and oral reading studies. During SIMTXT, in contrast, interpreters show a clear preference for a visual lag. We tentatively conclude that during SIMTXT the visual input might be used first and foremost to support the production of the output rather than the comprehension of the input. Importantly, we submit that the availability of the written text of the orally presented discourse might negatively affect predictive processing.
Adi Shechter; David L Share
In: Psychological Science, pp. 1–16, 2020.
Rapid and seemingly effortless word recognition is a virtually unquestioned characteristic of skilled reading, yet the definition and operationalization of the concept of cognitive effort have proven elusive. We investigated the cognitive effort involved in oral and silent word reading using pupillometry among adults (Experiment 1
Wei Shen; Jukka Hyönä; Youxi Wang; Meiling Hou; Jing Zhao
In: Memory and Cognition, pp. 1–12, 2020.
Two experiments were conducted to investigate the extent to which the lexical tone can affect spoken-word recognition in Chinese using a printed-word paradigm. Participants were presented with a visual display of four words—namely, a target word (e.g., 象限, xiang4xian4, “quadrant”), a tone-consistent phonological competitor (e.g., 相册, xiang4ce4, “photo album”), or a tone-inconsistent phonological competitor (e.g., 香菜, xiang1cai4, “coriander”), and two unrelated distractors. Simultaneously, they were asked to listen to a spoken target word presented in isolation (Experiment 1) or embedded in neutral/predictive sentence contexts (Experiment 2), and then click on the target word on the screen. Results showed significant phonological competitor effects (i.e., the fixation proportion on the phonological competitor was higher than that on the distractors) under both tone conditions. Specifically, a larger phonological competitor effect was observed in the tone-consistent condition than in the tone-inconsistent condition when the spoken word was presented in isolation and in neutral sentence contexts. This finding suggests a partial role of lexical tone in constraining spoken-word recognition. However, when embedded in a predictive sentence context, the phonological competitor effect was only observed in the tone-consistent condition and absent in the tone-inconsistent condition. This result indicates that the predictive sentence context can strengthen the role of lexical tone.
Matthias J Sjerps; Caitlin Decuyper; Antje S Meyer
In: Quarterly Journal of Experimental Psychology, 73 (3), pp. 357–374, 2020.
In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants' timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants' speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.
Bryor Snefjella; Nadia Lana; Victor Kuperman
In: Journal of Memory and Language, 115 , pp. 1–18, 2020.
The present paper addresses two under-studied dimensions of novel word learning. We ask (a) whether originally meaningless novel words can acquire emotional connotations from their linguistic contexts, and (b) whether these acquired connotations can affect the quality of orthographic and semantic word learning and its retention over time. In five experiments using three stimulus sets, L1 speakers of English learned nine novel words embedded in contexts that were consistently positive, neutral, or negative. Reading times were recorded during the learning phase, and vocabulary post-tests were administered immediately after that phase and after one week to assess learning. With two of three stimulus sets, the answer to (a) was positive: readers learned the forms, definitional meanings, and emotional connotations of novel words from their contexts. We confirmed (b) in two of three stimulus sets as well. Items were learned more accurately (by 10% to 20%) in positive rather than negative or neutral contexts. We propose the transfer of affect to a word from its collocations to be a virtually unstudied yet efficient mechanism of learning affective meanings. We further demonstrate that the transfer that occurs over a few exposures to a novel word in context is sufficient to elicit a long-lasting positivity advantage previously shown only in existing words. Null results in one stimulus set suggest that contextual transfer of affect is contingent on other contextual properties, such as text complexity. These findings are pitted against theories of vocabulary acquisition.