EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2021 (with some early 2022s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
Roberta Daini; Silvia Primativo; Andrea Albonico; Laura Veronelli; Manuela Malaspina; Massimo Corbo; Marialuisa Martelli; Lisa S. Arduino
In: Brain Sciences, vol. 11, no. 2, pp. 1–18, 2021.
Acquired Neglect Dyslexia is often associated with right-hemisphere brain damage and is mainly characterized by omissions and substitutions in reading single words. Martelli et al. proposed in 2011 that these two types of error are due to different mechanisms. Omissions should depend on neglect plus an oculomotor deficit, whilst substitutions on the difficulty with which the letters are perceptually segregated from each other (i.e., crowding phenomenon). In this study, we hypothesized that a deficit of focal attention could determine a pathological crowding effect, leading to imprecise letter identification and consequently substitution errors. In Experiment 1, three brain-damaged patients, suffering from peripheral dyslexia, mainly characterized by substitutions, underwent an assessment of error distribution in reading pseudowords and a T detection task as a function of cue size and timing, in order to measure focal attention. Each patient, when compared to a control group, showed a deficit in adjusting the attentional focus. In Experiment 2, a group of 17 right-brain-damaged patients were asked to perform the focal attention task and to read single words and pseudowords as a function of inter-letter spacing. The results allowed us to confirm a more general association between substitution-type reading errors and the performance in the focal attention task.
Kelly M. Dann; Aaron Veldre; Sally Andrews
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 47, no. 8, pp. 1–50, 2021.
Much of the evidence for morphological decomposition accounts of complex word identification has relied on the masked-priming paradigm. However, morphologically complex words are typically encountered in sentence contexts and processing begins before a word is fixated, when it is in the parafovea. To evaluate whether the single word-identification data generalize to natural reading, Experiment 1 investigated the contribution of morphological structure to the very earliest stages of lexical processing indexed by preview effects during sentence reading in the gaze-contingent boundary paradigm. Preview conditions systematically assessed the impact of prefixed and suffixed nonword previews that manipulated stem and affix overlap, and affix status, against an orthographically legal control baseline. Initial fixations on suffixed target words showed a preview benefit from nonwords that combined the target stem with a legitimate affix, but not with a nonaffix, whereas prefixed targets only benefited from an identical preview. When presented in a masked prime lexical-decision task in Experiment 2, the same stimuli yielded equivalent stem priming from suffixed and prefixed primes regardless of affix status, consistent with previous masked priming studies using similar nonword primes. The early effects of morphological structure selectively observed on parafoveal processing of suffixed words are inconsistent with recent nonmorphological, position-invariant accounts of embedded stem activation. These results provide the first evidence of morphological parafoveal processing in English and contribute to recent evidence that readers extract a higher level of information from the parafovea during natural reading than was previously assumed.
Catherine Davies; Jamie Lingwood; Bissera Ivanova; Sudha Arunachalam
In: Cognition, vol. 212, pp. 104707, 2021.
Combining information from adjectives with the nouns they modify is essential for comprehension. Previous research suggests that preschoolers do not always integrate adjectives and nouns, and may instead over-rely on noun information when processing referring expressions (Fernald, Thorpe, & Marchman, 2010; Thorpe, Baumgartner, & Fernald, 2006). This disjointed processing has implications for pragmatics, apparently preventing under-fives from making contrastive inferences (Huang & Snedeker, 2013). Using a novel experimental design that allows preschoolers time to demonstrate their abilities in adjective-noun integration and in contrastive inference, two visual world experiments investigate how English-speaking three-year-olds (N = 73
Minke J. de Boer; Deniz Başkent; Frans W. Cornelissen
In: Multisensory Research, vol. 34, no. 1, pp. 17–47, 2021.
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
Minke J. de Boer; Tim Jürgens; Deniz Başkent; Frans W. Cornelissen
In: Trends in Hearing, vol. 25, pp. 1–20, 2021.
Since emotion recognition involves integration of the visual and auditory signals, it is likely that sensory impairments worsen emotion recognition. In emotion recognition, young adults can compensate for unimodal sensory degradations if the other modality is intact. However, most sensory impairments occur in the elderly population and it is unknown whether older adults are similarly capable of compensating for signal degradations. As a step towards studying potential effects of real sensory impairments, this study examined how degraded signals affect emotion recognition in older adults with normal hearing and vision. The degradations were designed to approximate some aspects of sensory impairments. Besides emotion recognition accuracy, we recorded eye movements to capture perceptual strategies for emotion recognition. Overall, older adults were as good as younger adults at integrating auditory and visual information and at compensating for degraded signals. However, accuracy was lower overall for older adults, indicating that aging leads to a general decrease in emotion recognition. In addition to decreased accuracy, older adults showed smaller adaptations of perceptual strategies in response to video degradations. Concluding, this study showed that emotion recognition declines with age, but that integration and compensation abilities are retained. In addition, we speculate that the reduced ability of older adults to adapt their perceptual strategies may be related to the increased time it takes them to direct their attention to scene aspects that are relatively far away from fixation.
Minke J. de Boer; Tim Jürgens; Frans W. Cornelissen; Deniz Başkent
In: Vision Research, vol. 180, pp. 51–62, 2021.
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss only, we degraded the visual and auditory information in audiovisual video-recordings, and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Alex Carvalho; Cécile Crimon; Axel Barrault; John Trueswell; Anne Christophe
In: Developmental Science, vol. 24, no. 4, pp. e13085, 2021.
Two word-learning experiments were conducted to investigate the understanding of negative sentences in 18- and 24-month-old children. In Experiment 1, after learning that bamoule means “penguin” and pirdaling means “cartwheeling,” 18-month-olds (n = 48) increased their looking times when listening to negative sentences rendered false by their visual context (“Look! It is not a bamoule!” while watching a video showing a penguin cartwheeling); however, they did not change their looking behavior when negative sentences were rendered true by their context (“Look! It is not pirdaling!” while watching a penguin spinning). In Experiment 2, 24-month-olds (n = 48) were first exposed to a teaching phase in which they saw a new cartoon character on a television (e.g., a blue monster). Participants in the affirmative condition listened to sentences like “It's a bamoule!” and participants in the negative condition listened to sentences like “It's not a bamoule!.” At test, all participants were asked to find the bamoule while viewing two images: the familiar character from the teaching phase versus a novel character (e.g., a red monster). Results showed that participants in the affirmative condition looked more to the familiar character (i.e., they learned the familiar character was a bamoule) than participants in the negative condition. Together, these studies provide the first evidence for the understanding of negative sentences during the second year of life. The ability to understand negative sentences so early might support language acquisition, providing infants with a tool to constrain the space of possibilities for word meanings.
Alex Carvalho; Isabelle Dautriche; Anne Caroline Fiévet; Anne Christophe
In: Journal of Experimental Child Psychology, vol. 203, pp. 105017, 2021.
Because linguistic communication is often noisy and uncertain, adults flexibly rely on different information sources during sentence processing. We tested whether toddlers engage in a similar process and how that process interacts with verb learning. Across two experiments, we presented French 28-month-olds with right-dislocated sentences featuring a novel verb (“He_i is VERBing, the boy_i”), where a clear prosodic boundary after the verb indicates that the sentence is intransitive (such that the NP “the boy” is coreferential with the pronoun “he” and the sentence means “The boy is VERBing”). By default, toddlers incorrectly interpreted the sentence based on the number of NPs (assuming, e.g., that someone is VERBing the boy). Yet, when children were provided with additional information about the syntactic contexts (Experiment 1
J. C. F. de Winter; S. M. Petermeijer; L. Kooijman; D. Dodou
Replicating five pupillometry studies of Eckhard Hess
In: International Journal of Psychophysiology, vol. 165, pp. 145–205, 2021.
Several papers by Eckhard Hess from the 1960s and 1970s report that the pupils dilate or constrict according to the interest value, arousing content, or mental demands of visual stimuli. However, Hess mostly used small sample sizes and undocumented luminance control. In a first experiment (N = 182) and a second preregistered experiment (N = 147), we replicated five studies of Hess using modern equipment. Our experiments (1) did not support the hypothesis of gender differences in pupil diameter change with respect to baseline (PC) when viewing stimuli of different interest value, (2) showed that solving more difficult multiplications yields a larger PC in the seconds before providing an answer and a larger maximum PC, but a smaller PC at a fixed time after the onset of the multiplication, (3) did not support the hypothesis that participants' PC mimics the pupil diameter in a pair of schematic eyes but not in single-eyed or three-eyed stimuli, (4) did not support the hypothesis of gender differences in PC when watching a video of a male trying to escape a mob, and (5) supported the hypothesis that arousing words yield a higher PC than non-arousing words. Although we did not observe consistent gender differences in PC, additional analyses showed gender differences in eye movements towards erogenous zones. Furthermore, PC strongly correlated with the luminance of the locations where participants looked. Overall, our replications confirm Hess's findings that pupils dilate in response to mental demands and stimuli of an arousing nature. Hess's hypotheses regarding pupil mimicry and gender differences in pupil dilation did not replicate.
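The measure this abstract abbreviates as PC (pupil diameter change with respect to baseline) can be illustrated with a minimal sketch. The sampling rate, window lengths, and pupil trace below are invented for illustration and are not taken from the study:

```python
import numpy as np

# Illustrative sketch (not from the paper): baseline-corrected pupil change.
# A 1 s pre-stimulus baseline is averaged, and PC is the post-onset diameter
# minus that baseline. Sampling rate and values here are assumptions.
fs = 500                                             # samples/second, assumed
trace = np.concatenate([np.full(fs, 4.0),            # 1 s baseline at 4.0 mm
                        np.linspace(4.0, 4.6, fs)])  # dilation after onset
baseline = trace[:fs].mean()      # mean pupil diameter before stimulus onset
pc = trace[fs:] - baseline        # PC: change from baseline, in mm
max_pc = pc.max()                 # the "maximum PC" reported in the abstract
# max_pc is approximately 0.6 mm for this synthetic trace
```

Analyses like Hess's then compare PC (or maximum PC) across stimulus conditions, rather than raw diameters, to control for individual differences in baseline pupil size.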
Gayle DeDe; Denis Kelleher
In: Journal of Neurolinguistics, vol. 57, pp. 100950, 2021.
The present study examined how healthy aging and aphasia influence the capacity for readers to generate structural predictions during online reading, and how animacy cues influence this process. Non-brain-damaged younger (n = 24) and older (n = 12) adults (Experiment 1) and individuals with aphasia (IWA; n = 11; Experiment 2) read subject relative and object relative sentences in an eye-tracking experiment. Half of the sentences included animate sentential subjects, and the other half included inanimate sentential subjects. All three groups used animacy information to mitigate effects of syntactic complexity. These effects were greater in older than younger adults. IWA were sensitive to structural frequency, with longer reading times for object relative than subject relative sentences. As in previous work, effects of structural complexity did not emerge on IWA's first pass through the sentence, but were observed when IWA reread critical segments of the sentences. Thus, IWA may adopt atypical reading strategies when they encounter low frequency or complex sentence structures, but they are able to use animacy information to reduce the processing disruptions associated with these structures.
Federica Degno; Otto Loberg; Simon P. Liversedge
In: Collabra: Psychology, vol. 7, no. 1, pp. 1–28, 2021.
A growing number of studies are using co-registration of eye movement (EM) and fixation-related potential (FRP) measures to investigate reading. However, the number of co-registration experiments remains small when compared to the number of studies in the literature conducted with EMs and event-related potentials (ERPs) alone. One reason for this is the complexity of the experimental design and data analyses. The present paper is designed to support researchers who have expertise in conducting reading experiments with EM or ERP techniques and wish to take their first steps towards co-registration research. The objective of this paper is threefold. First, to provide an overview of the issues that such researchers would face. Second, to provide a critical overview of the methodological approaches available to date to deal with these issues. Third, to offer an example pipeline and a full set of scripts for data preprocessing that may be adopted and adapted for one's own needs. The data preprocessing steps are based on EM data parsing via Data Viewer (SR Research), and the provided scripts are written in Matlab and R. Ultimately, with this paper we hope to encourage other researchers to run co-registration experiments to study reading and human cognition more generally.
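As a purely illustrative aside (the paper's own scripts are written in Matlab and R and are not reproduced here), one core step in any EM–FRP co-registration pipeline is mapping eye-tracker event times onto EEG sample indices via a shared synchronization trigger, so that epochs can be cut around fixation onsets. All names and numbers below are hypothetical:

```python
# Hypothetical sketch of eye-tracker-to-EEG time alignment for FRP epoching.
# Both systems record the same sync trigger; fixation onsets (eye-tracker
# clock, ms) are converted to EEG sample indices relative to that trigger.
eeg_rate = 1000             # EEG sampling rate in Hz (assumed)
trigger_eye_ms = 12_345     # sync trigger timestamp on the eye-tracker clock
trigger_eeg_sample = 4_321  # the same trigger's EEG sample index

def fixation_to_eeg_sample(fix_onset_ms):
    """Map a fixation onset (eye-tracker ms) to the nearest EEG sample."""
    offset_ms = fix_onset_ms - trigger_eye_ms
    return trigger_eeg_sample + int(round(offset_ms * eeg_rate / 1000))

fixation_onsets_ms = [12_400, 12_650, 13_010]
samples = [fixation_to_eeg_sample(f) for f in fixation_onsets_ms]
# e.g. 12_400 ms is 55 ms after the trigger, i.e. sample 4_321 + 55 = 4_376
```

Real pipelines additionally correct for clock drift between recording systems (typically by regressing EEG trigger samples on eye-tracker trigger times across many triggers), which this single-trigger sketch omits.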
Kristina DeRoy Milvae; Stefanie E. Kuchinsky; Olga A. Stakhovskaya; Matthew J. Goupell
In: The Journal of the Acoustical Society of America, vol. 150, no. 2, pp. 920–935, 2021.
One potential benefit of bilateral cochlear implants is reduced listening effort in speech-on-speech masking situations. However, the symmetry of the input across ears, possibly related to spectral resolution, could impact binaural benefits. Fifteen young adults with normal hearing performed digit recall with target and interfering digits presented to separate ears and attention directed to the target ear. Recall accuracy and pupil size over time (used as an index of listening effort) were measured for unprocessed, 16-channel vocoded, and 4-channel vocoded digits. Recall accuracy was significantly lower for dichotic (with interfering digits) than for monotic listening. Dichotic recall accuracy was highest when the target was less degraded and the interferer was more degraded. With matched target and interferer spectral resolution, pupil dilation was lower with more degradation. Pupil dilation grew more shallowly over time when the interferer had more degradation. Overall, interferer spectral resolution more strongly affected listening effort than target spectral resolution. These results suggest that interfering speech both lowers performance and increases listening effort, and that the relative spectral resolution of target and interferer affect the listening experience. Ignoring a clearer interferer is more effortful.
Félix Desmeules-Trudel; Tania S. Zamuner
In: Second Language Research, pp. 1–30, 2021.
Spoken word recognition depends on variations in fine-grained phonetics as listeners decode speech. However, many models of second language (L2) speech perception focus on units such as isolated syllables, and not on words. In two eye-tracking experiments, we investigated how fine-grained phonetic details (i.e. duration of nasalization on contrastive and coarticulatory nasalized vowels in Canadian French) influenced spoken word recognition in an L2, as compared to a group of native (L1) listeners. Results from L2 listeners (native English speakers) indicated that fine-grained phonetics impacted the recognition of words, i.e. they were able to use nasalization duration variability in a way similar to L1-French listeners, providing evidence that lexical representations can be highly specified in an L2. Specifically, L2 listeners were able to distinguish minimal word pairs (differentiated by the presence of phonological vowel nasalization in French) and were able to use variability in a way approximating L1-French listeners. Furthermore, the robustness of the French “nasal vowel” category in L2 listeners depended on age of exposure. Early bilinguals displayed greater sensitivity to some ambiguity in the stimuli than late bilinguals, suggesting that early bilinguals had greater sensitivity to small variations in the signal and thus better knowledge of the phonetic cue associated with phonological vowel nasalization in French, similarly to L1 listeners.
Avital Deutsch; Hadas Velan; Yiska Merzbach; Tamar Michaly
In: Journal of Memory and Language, vol. 116, pp. 1–13, 2021.
In Hebrew, as in other Semitic languages, most words are formed in a non-concatenated way, with a root morpheme embedded in a word-pattern morpheme consisting of only vowels or vowels plus consonants. Previous research on visual word recognition in Hebrew has revealed a robust morphological root-priming effect, with word recognition facilitated by the prior sub-perceptual presentation of the root morpheme, along with a less stable and more fragile word-pattern priming effect, particularly in the nominal system. These findings support the theory that morphological principles govern lexical access, with the root morpheme as a main organizational unit of the mental lexicon. However, less research has been done to delineate the algorithm underlying decomposition. The current study explores the importance of the natural lexical orthographic context of a complex root + pattern word structure for root extraction, using on-line measures based on tracking eye-movements in sentence reading. A series of four experiments using a fast-priming paradigm demonstrated that detaching the root morpheme from its lexical orthographic structure hinders the root-priming effect. Presenting the root in a non-word or a pseudo-word, that is, a non-existent combination of a real root + a real pattern, did not make any difference. These results suggest that mapping the orthographic root onto its morphological mental representation depends on the orthographic context in which its letters appear. This finding constrains the role of the root in visual word-recognition, highlighting the crucial conditions for extracting it in the natural setting of reading.
Sara Dhaene; Nicolas Dirix; Hélène Van Marcke; Evy Woumans
In: Bilingualism: Language and Cognition, pp. 1–15, 2021.
Research among bilinguals suggests a foreign language effect for various tasks requiring a more systematic processing style. For instance, bilinguals seem less prone to heuristic reasoning when solving problem statements in their foreign (FL) as opposed to their native (NL) language. The present study aimed to determine whether such an effect might also be observed in the detection of semantic anomalies. Participants were presented NL and FL questions with and without anomalies while their eye movements were recorded. Overall, they failed to detect the anomaly in more than half of the trials. Furthermore, more illusions occurred for questions presented in the FL, indicating an FL disadvantage. Additionally, eye movement analyses suggested that reading patterns for anomalies are predominantly similar across languages. Our results therefore substantiate theories suggesting that FL use induces cognitive load, causing increased susceptibility to illusions due to partial semantic processing.
Monica L. Do; Elsi Kaiser
In: Language, Cognition and Neuroscience, pp. 1–23, 2021.
We use the visual world eye-tracking paradigm to investigate how the mapping from thematic event structures to grammatical structures, known as sentence formulation, unfolds during real time sentence production. Experiment 1 contrasted production of SubjExp (“Leslie_EXP loves Ann_STIM”) versus ObjExp (“Leslie_STIM scares Ann_EXP”) sentences. Experiment 2 investigated passivized SubjExp (“Leslie_STIM was loved by Ann_EXP”) and passivized ObjExp sentences (“Leslie_EXP was scared by Ann_STIM”). In both studies, we found that speakers were faster to begin speaking and to preferentially fixate the subject when they were able to assign the thematically prominent Experiencer role to the subject of the sentence. We conclude that sentence formulation is easier when speakers can make use of a tight, systematic correspondence between event structures and linguistic structures. We discuss the implications of our work for the relationship between language and thought and for the formal accounts of SubjExp and ObjExp verbs.
Linda Drijvers; Ole Jensen; Eelke Spaak
In: Human Brain Mapping, vol. 42, no. 4, pp. 1138–1152, 2021.
During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
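The intermodulation logic described in this abstract can be demonstrated numerically: a nonlinear (here, multiplicative) interaction of two frequency-tagged signals produces spectral power at the difference frequency, whereas purely linear mixing does not. The sketch below is illustrative only; it borrows the paper's tagging frequencies and refresh rate, but everything else is assumed:

```python
import numpy as np

# Two tagged sinusoids: mixing them linearly (sum) yields power only at
# 61 and 68 Hz, while a nonlinear interaction (product) also yields power
# at the intermodulation frequencies 68 - 61 = 7 Hz and 68 + 61 = 129 Hz,
# since sin(a)sin(b) = 0.5[cos(a - b) - cos(a + b)].
fs = 1440                             # sampling rate; the projector's 1,440 Hz
t = np.arange(0, 10, 1 / fs)          # 10 s of signal
audio = np.sin(2 * np.pi * 61 * t)    # auditory tag at 61 Hz
visual = np.sin(2 * np.pi * 68 * t)   # visual tag at 68 Hz

linear = audio + visual               # no intermodulation components
nonlinear = audio * visual            # intermodulation at 7 Hz and 129 Hz

freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(signal, f):
    """Spectral magnitude at the bin nearest frequency f."""
    return np.abs(np.fft.rfft(signal))[np.argmin(np.abs(freqs - f))]

# The 7 Hz peak exists only for the nonlinearly combined signal.
assert power_at(nonlinear, 7) > 100 * power_at(linear, 7)
```

In the study, an analogous peak at 7 Hz in the MEG spectrum is what licenses the inference that the 61 Hz auditory and 68 Hz visual inputs interacted nonlinearly somewhere in cortex.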
Dawn Liu Holford; Marie Juanchich; Tom Foulsham; Miroslav Sirota; Alasdair D. F. Clarke
In: Judgment and Decision Making, vol. 16, no. 4, pp. 969–1009, 2021.
When people are given quantified information (e.g., ‘there is a 60% chance of rain'), the format of quantifiers (i.e., numerical: ‘a 60% chance' vs. verbal: ‘it is likely') might affect their decisions. Previous studies with indirect cues of judgements and decisions (e.g., response times, decision outcomes) give inconsistent findings that could support either a more intuitive process for verbal than numerical quantifiers or a greater focus on the context (e.g., rain) for verbal than numerical quantifiers. We used two pre-registered eye-tracking experiments (n1 = 148, n2 = 133) to investigate decision-making processes with verbal and numerical quantifiers. Participants evaluated multiple verbally or numerically quantified nutrition labels (Experiment 1) and weather forecasts (Experiment 2) with different context valence (positive or negative), and quantities (‘low', ‘medium', or ‘high' in Experiment 1 and ‘possible', ‘likely', or ‘very likely' in Experiment 2) presented in a fully within-subjects design. Participants looked longer at verbal than numerical quantifiers, and longer at the contextual information with verbal quantifiers. Quantifier format also affected judgements and decisions: in Experiment 1, participants judged positive labels to be better in the verbal compared to the equivalent numerical condition (and to be worse for negative labels). In Experiment 2, participants decided on rain protection more for a verbal forecast of rain than the equivalent numerical forecast. The results fit the explanation that verbal quantifiers put more focus on the informational context than do numerical quantifiers, rather than prompting more intuitive decisions.
Nina S. Hsu; Stefanie E. Kuchinsky; Jared M. Novick
In: Language, Cognition and Neuroscience, vol. 36, no. 2, pp. 211–239, 2021.
Incremental language processing means that listeners confront temporary ambiguity about how to structure the input, which can generate misinterpretations. In four “visual-world” experiments, we tested whether engaging cognitive control–which detects and resolves conflict–assists revision during comprehension. We recorded listeners' eye-movements and actions while following instructions that were ripe for misanalysis. In Experiments 1 and 3, sentences followed trials from a nonverbal conflict task that manipulated cognitive-control engagement, to test its impact on the ability to revise. To isolate conflict-driven effects of cognitive-control on comprehension, we manipulated attention in a non-conflict task in Experiments 2 and 4. We observed fewer comprehension errors, and earlier revision, when cognitive control (more than attention) was elicited on an immediately preceding trial. These results extend previous correlations between cognitive control and language processing by revealing the influence of domain-general cognitive-control engagement on the temporal unfolding of error-revision processes during language comprehension.
Kuan Jung Huang; Adrian Staub
In: Cognition, vol. 216, pp. 104846, 2021.
Previous research (Mirault, Snell, & Grainger, 2018) has demonstrated that subjects sometimes incorrectly judge an ungrammatical sentence as grammatical when it is created by the transposition of two words in a grammatical sentence (e.g., The white was cat big). Here we present two eye-tracking experiments designed to assess the prevalence of this phenomenon in a more natural reading task, and to explore theoretical explanations. Readers failed to notice transpositions at about the same rate as in Mirault et al. (2018). Failure to notice the transposition was more common when both words were short, and when readers' eyes skipped, rather than directly fixated, one of the two words. The status of the transposed words as open- or closed-class did not have a reliable effect. The transposed words caused disruption in the eye movement record only on trials when participants ultimately judged the sentence to be ungrammatical, not when they judged the sentence to be grammatical. We argue that the results are not entirely consistent with the account offered by Mirault et al. (2018), which attributes failure to notice transpositions to parallel processing of adjacent words, or with a late, post-perceptual rational inference account (Gibson, Bergen, & Piantadosi, 2013). We propose that word recognition is serial, but post-lexical integration of each word into its context may not be perfectly incremental.
Linjieqiong Huang; Adrian Staub; Xingshan Li
In: Journal of Memory and Language, vol. 118, pp. 104218, 2021.
We report three eye-movement experiments that investigated the effect of prior sentence context on the processing of overlapping ambiguous strings (OASs) during Chinese reading. An OAS is a Chinese character string (ABC) in which the middle character can form a distinct word with both the character on its left (word AB) and on its right (word BC). In three experiments, we manipulated the extent to which the right-side word (BC) was plausible as an immediate continuation following the prior context; the left-side word AB was always plausible given the prior context, and the sentence continued in a manner that was compatible with word AB. Compared with a less plausible word BC, first-pass reading times on the OAS were longer with a more plausible word BC. The results suggest that in reading of Chinese strings with ambiguous word boundaries, plausibility influences an early stage of competition between words, rather than only a later checking process that occurs after the initial segmentation.
Yujing Huang; Fernanda Ferreira
In: Journal of Memory and Language, vol. 121, pp. 104288, 2021.
A key question in research on sentence processing concerns how sentences that have been misanalyzed are reinterpreted, and to what extent the parser's attempts at revision are successful. Past work has shown that misinterpretations associated with a syntactic misparse linger even after the entire sentence has been processed (Christianson, Hollingworth, Halliwell, & Ferreira, 2001; Slattery, Sturt, Christianson, Yoshida, & Ferreira, 2013). In two reading experiments, we sought to evaluate the level of representation that is responsible for misinterpretations of garden-path sentences. We combined reading measures with an offline comprehension task, which enabled us to conditionalize reading time analyses on correct versus incorrect question-answering performance. Our results suggest that reanalysis does not always result in a correct interpretation, either because the final interpretation does not always reflect the global structure or because reanalysis processes result in the creation of licit local trees but fail to generate a complete global parse for the entire sentence.
Isabell Hubert Lyall; Juhani Järvikivi
In: Frontiers in Psychology, vol. 12, pp. 699071, 2021.
Individuals' moral views have been shown to affect their event-related potential (ERP) responses to spoken statements, and people's political ideology has been shown to guide their sentence completion behavior. Using pupillometry, we asked whether political ideology and disgust sensitivity affect online spoken language comprehension. Sixty native speakers of English listened to spoken utterances while their pupil size was tracked. Some of those utterances contained grammatical errors, semantic anomalies, or socio-cultural violations (statements incongruent with existing gender stereotypes and perceived speaker identity, such as “I sometimes buy my bras at Hudson's Bay” spoken by a male speaker). An individual's disgust sensitivity is associated with the Behavioral Immune System and may be correlated with socio-political attitudes, for example regarding out-group stigmatization. We found that more disgust-sensitive individuals showed greater pupil dilation in response to semantic anomalies and socio-cultural violations. However, political views differently affected the processing of the two types of violations: whereas more conservative listeners showed a greater pupil response to socio-cultural violations, more progressive listeners engaged more with semantic anomalies, but this effect appeared much later in the pupil record.
Isabell Hubert Lyall; Juhani Järvikivi
In: Scientific Reports, vol. 11, pp. 5443, 2021.
Research suggests that listeners' comprehension of spoken language is concurrently affected by linguistic and non-linguistic factors, including individual difference factors. However, there is no systematic research on whether general personality traits affect language processing. We correlated 88 native English-speaking participants' Big-5 traits with their pupillary responses to spoken sentences that included grammatical errors, "He frequently have burgers for dinner"; semantic anomalies, "Dogs sometimes chase teas"; and statements incongruent with gender stereotyped expectations, such as "I sometimes buy my bras at Hudson's Bay", spoken by a male speaker. Generalized additive mixed models showed that the listener's Openness, Extraversion, Agreeableness, and Neuroticism traits modulated resource allocation to the three different types of unexpected stimuli. No personality trait affected changes in pupil size across the board: less open participants showed greater pupil dilation when processing sentences with grammatical errors; and more introverted listeners showed greater pupil dilation in response to both semantic anomalies and socio-cultural clashes. Our study is the first to demonstrate that personality traits systematically modulate listeners' online language processing. Our results suggest that individuals with different personality profiles exhibit different patterns of the allocation of cognitive resources during real-time language comprehension.
Robert S. Hurley; Jonathan Sander; Kayleigh Nemeth; Brittany R. Lapin; Wei Huang; Mustafa Seckin
Differential eye movements in verbal and nonverbal search
In: Frontiers in Communication, vol. 6, pp. 654575, 2021.
In addition to "nonverbal search" for objects, modern life also necessitates "verbal search" for written words in variable configurations. We know less about how we locate words in novel spatial arrangements, as occurs on websites and menus, than when words are located in passages. In this study we leveraged eye tracking technology to examine the hypothesis that objects are simultaneously screened in parallel while words can only be found when each is directly foveated in serial fashion. Participants were provided with a cue (e.g. rabbit) and tasked with finding a thematically-related target (e.g. carrot) embedded within an array including a dozen distractors. The cues and arrays were comprised of object pictures on nonverbal trials, and of written words on verbal trials. In keeping with the well-established "picture superiority effect," picture targets were identified more rapidly than word targets. Eye movement analysis showed that picture superiority was promoted by parallel viewing of objects, while words were viewed serially. Different factors influenced performance in each stimulus modality; lexical characteristics such as word frequency modulated viewing times during verbal search, while taxonomic category affected viewing times during nonverbal search. In addition to within-platform task conditions, performance was examined in cross-platform conditions where picture cues were followed by word arrays, and vice versa. Although taxonomically-related words did not capture gaze on verbal trials, they were viewed disproportionately when preceded by cross-platform picture cues. Our findings suggest that verbal and nonverbal search are associated with qualitatively different search strategies and forms of distraction, and cross-platform search incorporates characteristics of both.
Jukka Hyönä; Timo T. Heikkilä; Seppo Vainio; Reinhold Kliegl
In: Cognition, vol. 208, pp. 104547, 2021.
Previous studies (Hyönä, Yan, & Vainio, 2018; Yan et al., 2014) have demonstrated that in morphologically rich languages a word's morphological status is processed parafoveally to be used in modulating saccadic programming in reading. In the present parafoveal preview study conducted in Finnish, we examined the exact nature of this effect by comparing reading of morphologically complex words (a stem + two suffixes) to that of monomorphemic words. In the preview-change condition, the final 3–4 letters were replaced with other letters making the target word a pseudoword; for suffixed words, the word stem remained intact but the suffix information was unavailable; for monomorphemic words, only part of the stem was parafoveally available. Three alternative predictions were put forth. According to the first alternative, the morphological effect in initial fixation location is due to parafoveally perceiving the suffix as a highly frequent letter cluster and then adjusting the saccade program to land closer to the word beginning for suffixed than monomorphemic words. The second alternative, the processing difficulty hypothesis, assumes a morphological complexity effect: suffixed words are more complex than monomorphemic words. Therefore, the attentional window is narrower and the saccade is shorter. The third alternative posits that the effect reflects parafoveal access to the word's stem. The results for the initial fixation location and fixation durations were consistent with the parafoveal stem-access view.
Aine Ito; Hiromu Sakai
In: Frontiers in Psychology, vol. 12, pp. 607474, 2021.
We investigated the effects of everyday language exposure on the prediction of orthographic and phonological forms of a highly predictable word during listening comprehension. Native Japanese speakers in Tokyo (Experiment 1) and Berlin (Experiment 2) listened to sentences that contained a predictable word and viewed four objects. The critical object represented the target word (e.g., /sakana/; fish), an orthographic competitor (e.g., /tuno/; horn), a phonological competitor (e.g., /sakura/; cherry blossom), or an unrelated word (e.g., /hon/; book). The three other objects were distractors. The Tokyo group fixated the target and the orthographic competitor over the unrelated objects before the target word was mentioned, suggesting that they pre-activated the orthographic form of the target word. The Berlin group showed a weaker bias toward the target than the Tokyo group, and they showed a tendency to fixate the orthographic competitor only when the orthographic similarity was very high. Thus, prediction effects were weaker in the Berlin group than in the Tokyo group. We found no evidence for the prediction of phonological information. The obtained group differences support probabilistic models of prediction, which regard the built-up language experience as a basis of prediction.
Debra Jared; Alyssa Pandolfo; Gatot Subroto Achmad Rifai
The effect of speaker age on the perception of ironic insults
In: Canadian Journal of Experimental Psychology, vol. 75, no. 2, pp. 146–154, 2021.
We investigated a cue that readers may use in determining whether a remark such as “You are so helpful!” is intended as a compliment or as an ironic insult. The cue was the age of the speaker. Remarks were preceded by a sentence that either invited a literal or ironic interpretation of the remark. Data were collected on the familiarity of the remark as an ironic statement, and the incongruity of the remark with the prior context. In Experiment 1, participants were asked to rate the intent of the speaker as to how ironic, mocking, polite, and funny they intended their remark to be. In Experiment 2, participants read the scenarios as their eye movements were tracked. The results showed that age of the speaker had an impact on first pass reading times when statements were not familiar as ironic statements. Our younger adult participants did not appear to immediately activate a nonliteral interpretation of an ambiguous remark made by an older adult unless they had evidence from past experience that the remark is often used as an insult. However, ratings of the ironic intent of the statements were unaffected by speaker age; the age of the speaker affects the ease of interpretation but not the final outcome. The results are consistent with constraint-based theories of sentence comprehension.
Yu Cin Jian
In: Reading and Writing, vol. 34, no. 3, pp. 727–752, 2021.
Reading strategy instruction has been an important area in educational psychology for decades; however, research has primarily focused on its influence on learning outcomes rather than learning processes, on reading pure texts rather than illustrated texts, and on immediate effects rather than retention effects. This study used an eye-tracker to investigate the immediate and delayed effects of text–diagram reading instruction on reading comprehension and learning processes in illustrated text reading. Fourth-grade students with high (N = 66) and low reading ability (N = 66) were randomly assigned to one of three groups: a text–diagram group who received text–diagram instruction which emphasized diagram decoding and integration of relevant textual and pictorial information, a placebo group who received instruction which emphasized comprehension monitoring, and a control group which received no reading instruction. All participants read four illustrated science texts for a baseline check, instructional example, immediate testing, and delayed testing. The results showed that the effect of text–diagram instruction was more evident in the immediate test than the delayed test. The eye-movement pattern showed that the students who received text–diagram reading instruction spent significantly more reading time on illustrations, made more integrative transitions between text and illustrations, and spent a higher proportion of total reading time on illustrations in immediate and delayed reading situations than the other groups. Overall, this study developed an effective text–diagram instruction method to promote reading comprehension, identified the reading processes underlying the effect of text–diagram strategy instruction, and depicted the changing appearances of reading instruction intervention over time.
Jia Jin; Chenchen Lin; Fenghua Wang; Ting Xu; Wuke Zhang
In: Electronic Commerce Research, pp. 1–22, 2021.
Few studies have focused on summary descriptions of online product reviews regarding purchase decisions, and there is a gap between individual product reviews and summary descriptions of online product reviews. The current study applied eye-tracking to explore how the product type moderates the framing effect of summary descriptions of product reviews on e-consumers' purchase decisions. The results showed that product type moderated the framing effect of summary reviews on e-consumers' purchase intention. Specifically, for search products, compared with a negative frame, a positive frame increased e-consumers' attention to function attributes and led to higher purchase intention. However, with experience products, e-consumers' attention and purchase intention did not vary across framing messages. Referring to information asymmetry theory and signal theory, we posit that the cognitive effort involved in summary review information is high for search products and low for experience products since summary reviews are a more useful signal in reducing information asymmetry for search products than for experience products. The theoretical and practical implications are also discussed.
Michael A. Johns; Paola E. Dussias
In: Journal of Second Language Studies, vol. 4, no. 2, pp. 375–411, 2021.
The transfer of words from one language to another is ubiquitous in many of the world's languages. While loanwords have a rich literature in the fields of historical linguistics, language contact, and sociolinguistics, little work has been done examining how loanwords are processed by bilinguals with knowledge of both the source and recipient languages. The present study uses pupillometry to compare the online processing of established loanwords in Puerto Rican Spanish to native Spanish words by highly proficient Puerto Rican Spanish-English bilinguals. Established loanwords elicited a significantly larger pupillary response than native Spanish words, with the pupillary response modulated by both the frequency of the loanword itself and of the native Spanish counterpart. These findings suggest that established loanwords are processed differently than native Spanish words and compete with their native equivalents, potentially due to both intra- and inter-lingual effects of saliency.
Michael A. Johns; Laura Rodrigo; Rosa E. Guzzardo Tamargo; Aliza Winneg; Paola E. Dussias
In: Bilingualism: Language and Cognition, vol. 24, no. 4, pp. 681–693, 2021.
Most studies on lexical priming have examined single words presented in isolation, despite language users rarely encountering words in such cases. The present study builds upon this by examining both within-language identity priming and across-language translation priming in sentential contexts. Highly proficient Spanish-English bilinguals read sentence-question pairs, where the sentence contained the prime and the question contained the target. At earlier stages of processing, we find evidence only of within-language identity priming; at later stages of processing, however, across-language translation priming surfaces, and becomes as strong as within-language identity priming. Increasing the time between the prime sentence and target question results in strengthened priming at the latest stages of processing. These results replicate previous findings at the single-word level but do so within sentential contexts, which has implications both for accounts of priming via automatic spreading activation as well as for accounts of persistence attested in spontaneous speech corpora.
Holly Joseph; Elizabeth Wonnacott; Kate Nation
In: Quarterly Journal of Experimental Psychology, vol. 74, no. 7, pp. 1202–1224, 2021.
Inference generation and comprehension monitoring are essential elements of successful reading comprehension. While both improve with age and reading development, little is known about when and how children make inferences and monitor their comprehension during the reading process itself. Over two experiments, we monitored the eye movements of two groups of children (age 8–13 years) as they read short passages and answered questions that tapped local (Experiment 1) and global (Experiment 2) inferences. To tap comprehension monitoring, the passages contained target words which were consistent or inconsistent with the context. Comprehension question location was also manipulated with the question appearing before or after the passage. Children made local inferences during reading, but the evidence was less clear for global inferences. Children were sensitive to inconsistencies that relied on the generation of an inference, consistent with successful comprehension monitoring, although this was seen only very late in the eye movement record. Although question location had a large effect on reading times, it had no effect on global comprehension in one experiment and reading the question first had a detrimental effect in the other. We conclude that children appear to prioritise efficiency over completeness when reading, generating inferences spontaneously only when they are necessary for establishing a coherent representation of the text.
Ana I. Schwartz; Karla S. Tarin
In: Bilingualism: Language and Cognition, vol. 24, no. 5, pp. 879–890, 2021.
Four hypotheses regarding the impact of discourse context on cross-language lexical activation were tested. Highly-proficient, Spanish-English bilinguals read all-English paragraphs containing non-identical and identical cognates or noncognate controls while their eye-movements were tracked. There were four paragraph conditions based on a full crossing of semantic bias from the topic sentence and sentence containing the critical word. In analyses in which cognate status was treated categorically there was an interaction between global bias and cognate status such that the observed inhibitory effects of cognate status were attenuated in global-neutral contexts. Follow-up analyses on the non-identical cognates in which orthographic overlap was treated continuously revealed a U-shaped function between orthographic overlap and processing time, which was more pronounced in global-neutral contexts. The overall pattern of findings is consistent with a combined operation of resonant-based and feature-restriction mechanisms of context effects.
Adi Shechter; David L. Share
In: Psychological Science, vol. 32, no. 1, pp. 80–95, 2021.
Rapid and seemingly effortless word recognition is a virtually unquestioned characteristic of skilled reading, yet the definition and operationalization of the concept of cognitive effort have proven elusive. We investigated the cognitive effort involved in oral and silent word reading using pupillometry among adults (Experiment 1
In: JASA Express Letters, vol. 1, no. 11, pp. 115202, 2021.
Dynamic pitch, also known as intonation, conveys both semantic and pragmatic meaning in speech communication. While alteration of this cue is detrimental to speech intelligibility in noise, the mechanism involved is poorly understood. Using the psychophysiological measure of task-evoked pupillary response, this study examined the perceptual effect of altered dynamic pitch cues on speech perception in noise. The data showed that pupil dilation increased with dynamic pitch strength in a sentence recognition in noise task. Taken together with recognition accuracy data, the results suggest the involvement of perceptual arousal in speech perception with dynamic pitch alteration.
Wei Shen; Jukka Hyönä; Youxi Wang; Meiling Hou; Jing Zhao
In: Memory and Cognition, vol. 49, no. 1, pp. 181–192, 2021.
Two experiments were conducted to investigate the extent to which the lexical tone can affect spoken-word recognition in Chinese using a printed-word paradigm. Participants were presented with a visual display of four words—namely, a target word (e.g., 象限, xiang4xian4, “quadrant”), a tone-consistent phonological competitor (e.g., 相册, xiang4ce4, “photo album”), or a tone-inconsistent phonological competitor (e.g., 香菜, xiang1cai4, “coriander”), and two unrelated distractors. Simultaneously, they were asked to listen to a spoken target word presented in isolation (Experiment 1) or embedded in neutral/predictive sentence contexts (Experiment 2), and then click on the target word on the screen. Results showed significant phonological competitor effects (i.e., the fixation proportion on the phonological competitor was higher than that on the distractors) under both tone conditions. Specifically, a larger phonological competitor effect was observed in the tone-consistent condition than in the tone-inconsistent condition when the spoken word was presented in isolation and the neutral sentence contexts. This finding suggests a partial role of lexical tone in constraining spoken-word recognition. However, when embedded in a predictive sentence context, the phonological competitor effect was only observed in the tone-consistent condition and absent in the tone-inconsistent condition. This result indicates that the predictive sentence context can strengthen the role of lexical tone.
Timothy G. Shepard; Zhong Lin Lu; Deyue Yu
In: Optometry and Vision Science, vol. 98, no. 8, pp. 936–946, 2021.
SIGNIFICANCE We recently developed a novel Bayesian adaptive method, qReading, to measure reading function. The qReading method combines efficiency with excellent test-retest reliability in normally sighted young adults, making it an excellent candidate for future studies of its value in diagnosis and in longitudinal evaluation of treatment and/or rehabilitation outcomes. PURPOSE A novel Bayesian adaptive method, qReading, was recently developed to measure reading function. Here we performed a systematic assessment of the test-retest reliability of the qReading method. METHODS The variability of five repeated measurements of the reading curve was examined in two settings: within session and between sessions. For the within-session design, we considered two subpopulations: naive observers and experienced observers. All observers were normally sighted young adults. For each set of data, in addition to examining the intrinsic precision of the qReading method (the half width of the credible interval of the posterior distribution of the estimated performance), we computed four metrics to assess repeatability: standard deviation, Bland-Altman coefficient of repeatability, correlation coefficient, and Fractional Rank Precision. RESULTS Extrinsic factors such as observer, time interval between repeated measures, and observer experience all contribute to the variation across measurements. Nevertheless, the four metrics consistently show that the variability across five repeated measurements is small for each set of data. This is true even without taking learning effects into account (standard deviations, ≤0.092 log10 units; Bland-Altman coefficient of repeatability, ≤0.15 (log10)2 units; correlation coefficient, ≥0.91; and Fractional Rank Precision, ≥0.81). CONCLUSIONS The qReading method has excellent test-retest reliability in normally sighted young adults.
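Two of the repeatability metrics in this abstract have standard closed forms: the per-observer standard deviation of repeated measures, and the Bland-Altman coefficient of repeatability, conventionally 1.96·√2 times the within-subject standard deviation. The sketch below is illustrative only — it is not the authors' analysis code, and the function name and data layout are hypothetical:

```python
import numpy as np

def repeatability_metrics(measures):
    """Illustrative test-retest variability metrics.

    measures: 2-D array-like, shape (n_observers, n_repeats),
    e.g. repeated reading-curve estimates in log10 units.
    Returns (mean per-observer SD, Bland-Altman coefficient of repeatability).
    """
    measures = np.asarray(measures, dtype=float)
    # Sample SD across repeats, averaged over observers
    sd = measures.std(axis=1, ddof=1).mean()
    # Within-subject SD pools the per-observer sample variances
    sw = np.sqrt(measures.var(axis=1, ddof=1).mean())
    # Bland-Altman coefficient of repeatability: 1.96 * sqrt(2) * sw
    coeff_repeatability = 1.96 * np.sqrt(2) * sw
    return sd, coeff_repeatability
```

With five repeats per observer, small values of both outputs (as reported in the abstract) indicate that repeated qReading estimates cluster tightly around each observer's own mean.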
Les Sikos; Katharina Stein; Maria Staudte
In: Frontiers in Psychology, vol. 12, pp. 661898, 2021.
Recent work has shown that linguistic and visual contexts jointly modulate linguistic expectancy and, thus, the processing effort for a (more or less) expected critical word. According to these findings, uncertainty about the upcoming referent in a visually-situated sentence can be reduced by exploiting the selectional restrictions of a preceding word (e.g., a verb or an adjective), which then reduces processing effort on the critical word (e.g., a referential noun). Interestingly, however, no such modulation was observed in these studies on the expectation-generating word itself. The goal of the current study is to investigate whether the reduction of uncertainty (i.e., the generation of expectations) simply does not modulate processing effort, or whether the particular subject-verb-object (SVO) sentence structure used in these studies (which emphasizes the referential nature of the noun as direct pointer to visually co-present objects) accounts for the observed pattern. To test these questions, the current design reverses the functional roles of nouns and verbs by using sentence constructions in which the noun reduces uncertainty about upcoming verbs, and the verb provides the disambiguating and reference-resolving piece of information. Experiment 1 (a Visual World Paradigm study) and Experiment 2 (a Grammaticality Maze study) both replicate the effect found in previous work (i.e., the effect of visually-situated context on the word which uniquely identifies the referent), albeit on the verb in the current study. Results on the noun, where uncertainty is reduced and expectations are generated in the current design, were mixed and were most likely influenced by design decisions specific to each experiment. These results show that processing of the reference-resolving word—whether it be a noun or a verb—reliably benefits from the prior linguistic and visual information that leads to the generation of concrete expectations.
Jack W. Silcox; Brennan R. Payne
In: Cortex, vol. 142, pp. 296–316, 2021.
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words (“The prisoners were planning their escape/party”) or were low-constraint sentences with unexpected sentence-final words (“All day she thought about the party”). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Ziming Song; Xiaowei Liang; Yongsheng Wang; Guoli Yan
In: Reading and Writing, vol. 34, no. 10, pp. 2627–2643, 2021.
There is no obvious boundary information in Chinese reading. It has been shown that the introduction of word boundary information presented with alternating colors without changing the text distribution could significantly improve the reading speed of Chinese children in grade 2 (Perea and Wang in Mem Cognit 45(7):1160−1170, 2017. https://doi.org/10.3758/s13421-017-0717-0). However, few studies have examined how the effect of word boundary information on children's oral reading develops and changes as children's grade increases. The present study asked Chinese children in grades 2–5 to read alternating-color and mono-color text orally and used eye-tracking technology to explore the developmental trajectory of the influence of word boundary information on oral reading. The results indicated that children in grade 2 and grade 3 showed faster reading speeds in the alternating-color condition than in the mono-color condition. In contrast, there was no difference between the two conditions in children in grade 4 and grade 5. We discuss the mechanisms of the findings and the implications for education.
Thomas St. Pierre; Elizabeth K. Johnson
In: Cognitive Science, vol. 45, no. 8, pp. e13028, 2021.
To help infer the meanings of novel words, children frequently capitalize on their current linguistic knowledge to constrain the hypothesis space. Children's syntactic knowledge of function words has been shown to be especially useful in helping to infer the meanings of novel words, with most previous research focusing on how children use preceding determiners and pronouns/auxiliaries to infer whether a novel word refers to an entity or an action, respectively. In the current visual world experiment, we examined whether 28- to 32-month-olds could exploit their lexical semantic knowledge of an additional class of function words—prepositions—to learn novel nouns. During the experiment, children were tested on their ability to use the prepositions in, on, under, and next to to identify novel creatures displayed on a screen (e.g., The wug is on the table), as well as their ability to later identify the creature without accompanying prepositions (e.g., Look at the wug). Children overall demonstrated understanding of all the prepositions but next to and were able to use their knowledge of prepositions to learn the associations between novel words and their intended referents, as shown by greater-than-chance looks to the target referent when no prepositional phrase was provided.
Prosodic prominence effects in the processing of spectral cues
In: Language, Cognition and Neuroscience, vol. 36, no. 5, pp. 586–611, 2021.
Two experiments test how phrasal prominence influences listeners' perception of vowel contrasts and how prominence information and vowel formant cues are integrated in processing. Experiment 1 finds that listeners incorporate phrasal prominence in their perception of vowels, in line with how spectral structure is modulated by prominence in speech. Experiment 2 explores how prominence information is integrated with formant cues in a visual world eyetracking task. Prominence shows an overall later influence in processing in line with current models of prosodic and segmental integration. However, listeners' perception of formants was also impacted more subtly by prominence immediately in processing such that prominence information directly shapes how formant cues are perceived. Results are discussed in terms of their implications for models of prosodic effects in segmental perception and possible differences between prosodic prominence and prosodic boundaries in this regard.
Marianna Stella; Paul E. Engelhardt
In: Brain Sciences, vol. 11, pp. 915, 2021.
In this study, we examined eye movements and comprehension in sentences containing a relative clause. To date, few studies have focused on syntactic processing in dyslexia, and so one goal of the study is to address this gap in the experimental literature. A second goal is to contribute to the theoretical psycholinguistic debate concerning the cause and the location of the processing difficulty associated with object-relative clauses. We compared dyslexic readers (n = 50) to a group of non-dyslexic controls (n = 50). We also assessed two key individual differences variables (working memory and verbal intelligence), which have been theorised to impact reading times and comprehension of subject- and object-relative clauses. The results showed that dyslexics and controls had similar comprehension accuracy. However, reading times showed participants with dyslexia spent significantly longer reading the sentences compared to controls (i.e., a main effect of dyslexia). In general, sentence type did not interact with dyslexia status. With respect to individual differences and the theoretical debate, we found that processing difficulty between the subject and object relatives was no longer significant when individual differences in working memory were controlled. Thus, our findings support theories which assume that working memory demands are responsible for the processing difficulty incurred by (1) individuals with dyslexia and (2) object-relative clauses as compared to subject-relative clauses.
Kate Stone; João Veríssimo; Daniel J. Schad; Elise Oltrogge; Shravan Vasishth; Sol Lago
In: Language, Cognition and Neuroscience, vol. 36, no. 9, pp. 1159–1179, 2021.
Previous research has found that comprehenders sometimes predict information that is grammatically unlicensed by sentence constraints. An open question is why such grammatically unlicensed predictions occur. We examined the possibility that unlicensed predictions arise in situations of information conflict, for instance when comprehenders try to predict upcoming words while simultaneously building dependencies with previously encountered elements in memory. German possessive pronouns are a good testing ground for this hypothesis because they encode two grammatically distinct agreement dependencies: a retrospective one between the possessive and its previously mentioned referent, and a prospective one between the possessive and its following nominal head. In two visual world eye-tracking experiments, we estimated the onset of predictive effects in participants' fixations. The results showed that the retrospective dependency affected resolution of the prospective dependency by shifting the onset of predictive effects. We attribute this effect to an interaction between predictive and memory retrieval processes.
Benjamin Swets; Susanne Fuchs; Jelena Krivokapić; Caterina Petrone
In: Frontiers in Psychology, vol. 12, pp. 655516, 2021.
Although previous research has shown that there exist individual and cross-linguistic differences in planning strategies during language production, little is known about how such individual differences might vary depending on which language a speaker is planning. The present series of studies examines individual differences in planning strategies exhibited by speakers of American English, French, and German. Participants were asked to describe images on a computer monitor while their eye movements were monitored. In addition, we measured participants' working memory capacity and speed of processing. The results indicate that in the present study, English and German were planned less incrementally (further in advance) prior to speech onset compared to French, which was planned more incrementally (not as far in advance). Crucially, speed of processing predicted the scope of planning for French speakers, but not for English or German speakers. These results suggest that the different planning strategies that are invoked by syntactic choices available in different languages are associated with the tendency for speakers to rely on different cognitive support systems as they plan sentences.
Debra Titone; Julie Mercier; Aruna Sudarshan; Irina Pivneva; Jason Gullifer; Shari Baum
Spoken word processing in bilingual older adults
In: Linguistic Approaches to Bilingualism, vol. 11, no. 4, pp. 578–610, 2021.
We investigated whether bilingual older adults experience within- and cross-language competition during spoken word recognition similarly to younger adults matched on age of second language (L2) acquisition, objective and subjective L2 proficiency, and current L2 exposure. In a visual world eye-tracking paradigm, older and younger adults, who were French-dominant or English-dominant English-French bilinguals, listened to English words, and looked at pictures including the target (field), a within-language competitor (feet) or cross-language (French) competitor (fille, “girl”), and unrelated filler pictures while their eye movements were monitored. Older adults showed evidence of greater within-language competition as a function of increased target and competitor phonological overlap. There was some evidence of age-related differences in cross-language competition; however, it was quite small overall and varied as a function of target language proficiency. These results suggest that greater within- and possibly cross-language lexical competition during spoken word recognition may underlie some of the communication difficulties encountered by healthy bilingual older adults.
Aleksandra Tomić; Jorge R. Valdés Kroff
In: Bilingualism: Language and Cognition, pp. 1–12, 2021.
Despite the prominent use of code-switching (CS) among bilinguals, psycholinguistic studies have reported code-switch processing costs (e.g., Meuter & Allport, 1999). This paradox may partly be due to the focus on the code-switch itself instead of its potential subsequent benefits. Motivated by corpus studies on CS patterns and sociopragmatic functions of CS, we asked whether bilinguals use code-switches as a cue to the lexical characteristics of upcoming speech. We report a visual world study testing whether code-switching facilitates the anticipation of lower-frequency words. Results confirm that US Spanish-English bilinguals (n = 30) use minority (Spanish) to majority (English) language code-switches in real-time language processing as a cue that a less frequent word would ensue, as indexed by increased looks at images representing lower- vs. higher-frequency words in the code-switched condition, prior to the target word onset. These results highlight the need to further integrate sociolinguistic and corpus observations into the experimental study of code-switching.
In: Laterality, vol. 26, no. 5, pp. 539–563, 2021.
Previous research suggests that the right visual field advantage on the lexical decision task occurs independent of the visual quality of stimuli [Chiarello, C., Senehi, J., & Soulier, M. (1986). Viewing conditions and hemisphere asymmetry for the lexical decision. Neuropsychologia, 24(4), 521–529]. However, previous studies examining these effects have had methodological limitations that were addressed and controlled for in the present study. Participants performed a divided visual field, lexical decision task for words that varied in size (Experiment 1) and visibility (Experiment 2). Results showed a quality by visual field interaction effect. In both experiments, response times were faster for targets presented to the right visual field in the high quality (i.e., large font, high visibility) conditions; however, visual quality resulted in no differences for targets presented to the left visual field. Furthermore, this quality by visual field interaction effect was only observed when the target was a word. These results suggest that the left hemisphere advantage for lexical decision depends on the perceptual quality of targets, consistent with an early stage of processing account of hemispheric asymmetry during lexical decision. Findings are discussed within the context of word recognition and decision-based models.
Débora Torres; Wagner R. Sena; Humberto A. Carmona; André A. Moreira; Hernán A. Makse; José S. Andrade
Eye-tracking as a proxy for coherence and complexity of texts
In: PLoS ONE, vol. 16, no. 12, pp. e0260236, 2021.
Reading is a complex cognitive process that involves primary oculomotor function and high-level activities like attention focus and language processing. When we read, our eyes move by primary physiological functions while responding to language-processing demands. In fact, the eyes perform discontinuous twofold movements, namely, successive long jumps (saccades) interposed by small steps (fixations) in which the gaze “scans” confined locations. It is only through the fixations that information is effectively captured for brain processing. Since individuals can express similar as well as entirely different opinions about a given text, it is therefore expected that the form, content and style of a text could induce different eye-movement patterns among people. A question that naturally arises is whether these individuals' behaviours are correlated, so that eye-tracking while reading can be used as a proxy for text subjective properties. Here we perform a set of eye-tracking experiments with a group of individuals reading different types of texts, including children's stories, random word generated texts and excerpts from literary works. In parallel, an extensive Internet survey was conducted for categorizing these texts in terms of their complexity and coherence, considering a large number of individuals selected according to different ages, gender and levels of education. The computational analysis of the fixation maps obtained from the gaze trajectories of the subjects for a given text reveals that the average “magnetization” of the fixation configurations correlates strongly with their complexity observed in the survey. Moreover, we perform a thermodynamic analysis using the Maximum-Entropy Model and find that coherent texts were closer to their corresponding “critical points” than non-coherent ones, as computed from the Pairwise Maximum-Entropy method, suggesting that different texts may induce distinct cohesive reading activities.
Matthew J. Traxler; Timothy Banh; Madeline M. Craft; Kurt Winsler; Trevor A. Brothers; Liv J. Hoversten; Pilar Piñar; David P. Corina
In: Applied Psycholinguistics, vol. 42, no. 3, pp. 601–630, 2021.
Deaf readers may have larger perceptual spans than ability-matched hearing native English readers, allowing them to read more efficiently (Belanger & Rayner, 2015). To further test the hypothesis that deaf and hearing readers have different perceptual spans, the current study uses eye-movement data from two experiments in which deaf American Sign Language-English bilinguals, hearing native English speakers, and hearing Chinese-English bilinguals read semantically unrelated sentences and answered comprehension questions after a proportion of them. We analyzed skip rates, fixation times, and accuracy on comprehension questions. In addition, we analyzed how lexical properties of words affected skipping behavior and fixation durations. Deaf readers skipped words more often than native English speakers, who skipped words more often than Chinese-English bilinguals. Deaf readers had shorter first-pass fixation times than the other two groups. All groups' skipping behaviors were affected by lexical frequency. Deaf readers' comprehension did not differ from that of hearing Chinese-English bilinguals, despite greater skipping and shorter fixation times. Overall, the eye-tracking findings align with Belanger's word processing efficiency hypothesis. Effects of lexical frequency on skipping behavior indicated further that eye movements during reading remain under cognitive control in deaf readers.
Annie Tremblay; Sahyang Kim; Seulgi Shin; Taehong Cho
In: Bilingualism: Language and Cognition, vol. 24, no. 2, pp. 1–13, 2021.
This study investigates how phonological and phonetic aspects of the native-language (L1) intonation modulate the use of tonal cues in second-language (L2) speech segmentation. Previous research suggested that prosodic learning is more difficult if the L1 and L2 intonations are phonologically similar but phonetically different (French-Korean) than if they are phonologically different (English-French/Korean) (Prosodic-Learning Interference Hypothesis; Tremblay, Broersma, Coughlin & Choi, 2016). This study provides another test of this hypothesis. Korean listeners and French-speaking and English-speaking L2 learners of Korean in Korea completed an eye-tracking experiment investigating the effects of phrase tones in Korean. All groups patterned similarly with the phrase-final tone, but, unlike Korean and French listeners, English listeners showed early benefits from the phrase-initial tone (signaling word-initial boundaries in English). Importantly, French listeners patterned like Korean listeners with both tones. The Prosodic-Learning Interference Hypothesis is refined to suggest that prosodic learning difficulties may not be persistent for immersed L2 learners.
Christina Ralph-Nearman; Madison A. Hooper; Ruth Filik
In: Cognition and Emotion, vol. 35, no. 8, pp. 1543–1558, 2021.
Eating disorder prevalence is increasing in males, perhaps more rapidly than in females. Theorists have proposed that cognitive biases are important factors underpinning disordered eating, especially those related to food, body, and perfectionism. We investigated these factors in relation to males' eating disorder symptomatology in the general population by using eye-tracking during reading as a novel and implicit measure. 180 males' eye movements were monitored while they read scenarios (third-person in Experiment 1: n = 90, aged 18–38, Mage = 21.50).
Theresa Redl; Stefan L. Frank; Peter de Swart; Helen de Hoop
In: PLoS ONE, vol. 16, no. 4, pp. e0249309, 2021.
Two experiments tested whether the Dutch possessive pronoun zijn ‘his' gives rise to a gender inference and thus causes a male bias when used generically in sentences such as Everyone was putting on his shoes. Experiment 1 (N = 120, 48 male) was a conceptual replication of a previous eye-tracking study that had not found evidence of a male bias. The results of the current eye-tracking experiment showed the generically-intended masculine pronoun to trigger a gender inference and cause a male bias, but only for male participants and only in stereotypically neutral contexts. Thus, no evidence for a male bias was found in stereotypically female or male contexts, nor for female participants at all. Experiment 2 (N = 80, 40 male) used the same stimuli as Experiment 1, but employed the sentence evaluation paradigm. No evidence of a male bias was found in Experiment 2. Taken together, the results suggest that the generically-intended masculine pronoun zijn ‘his' can cause a male bias for male participants even when the referents are previously introduced by inclusive and grammatically gender-unmarked iedereen ‘everyone'. This male bias surfaces with eye-tracking, which taps directly into early language processing, but not in offline sentence evaluations. Furthermore, the results suggest that the intended generic reading of the masculine possessive pronoun zijn ‘his' is more readily available for women than for men.
Gwendolyn Rehrig; Reese A. Cullimore; John M. Henderson; Fernanda Ferreira
In: Cognitive Research: Principles and Implications, vol. 6, no. 10, pp. 1–20, 2021.
Abstract: According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers do tend to violate the Maxim of Quantity often, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration, or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant, but non-contrastive modifiers were included in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension. 
Significance statement: This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.
Tracy Reuter; Kavindya Dalawella; Casey Lew-Williams
In: Language, Cognition and Neuroscience, vol. 36, no. 4, pp. 474–490, 2021.
Prior research suggests that prediction supports language processing and learning. However, the ecological validity of such findings is unclear because experiments usually include constrained stimuli. While theoretically suggestive, previous conclusions will be largely irrelevant if listeners cannot generate predictions in response to complex and variable perceptual input. Taking a step toward addressing this limitation, three eye-tracking experiments evaluated how adults (N = 72) and 4- and 5-year-old children (N = 72) generated predictions in contexts with complex visual stimuli (Experiment 1), variable speech stimuli (Experiment 2), and both concurrently (Experiment 3). Results indicated that listeners generated predictions in contexts with complex visual stimuli or variable speech stimuli. When both were more naturalistic, listeners used informative verbs to generate predictions, but not adjectives or number markings. This investigation provides a test for theories claiming that prediction is a central learning mechanism, and calls for further evaluations of prediction in naturalistic settings.
Tracy Reuter; Mia Sullivan; Casey Lew-Williams
In: Language Acquisition, pp. 1–26, 2021.
Prediction-based theories posit that interlocutors use prediction to process language efficiently and to coordinate dialogue. The present study evaluated whether listeners can use spatial deixis (i.e., this, that, these, and those) to predict the plurality and proximity of a speaker's upcoming referent. In two eye-tracking experiments with varying referential complexity (N = 168), native English-speaking adults, native English-learning 5-year-olds, and nonnative English-learning adults viewed images while listening to sentences with or without informative deictic determiners, e.g., Look at the/this/that/these/those wonderful cookie(s). Results showed that all groups successfully exploited plurality information. However, they varied in using deixis to anticipate the proximity of the referent; specifically, L1 adults showed more robust prediction than L2 adults, and L1 children did not show evidence of prediction. By evaluating listeners with varied language experiences, this investigation helps refine proposed mechanisms of prediction and suggests that linguistic experience is key to the development of such mechanisms.
Samy Rima; Michael C. Schmid
In: Frontiers in Neuroscience, vol. 15, pp. 663242, 2021.
Small fixational eye-movements are a fundamental aspect of vision and thought to reflect fine shifts in covert attention during active viewing. While the perceptual benefits of these small eye movements have been demonstrated during a wide range of experimental tasks including during free viewing, their function during reading remains surprisingly unclear. Previous research demonstrated that readers with increased microsaccade rates displayed longer reading times. To what extent increased fixational eye movements are specific to reading and might be indicative of reading-skill deficits remains, however, unknown. To address this topic, we compared the eye movement scan paths of 13 neurotypical individuals and 13 subjects diagnosed with developmental dyslexia during short story reading and free viewing of natural scenes. We found that during reading only, dyslexics tended to display small eye movements more frequently compared to neurotypicals, though this effect was not significant at the population level, as it could also occur in slow readers not diagnosed as dyslexics. In line with previous research, neurotypical readers had twice as many regressive as progressive microsaccades, a pattern that did not occur during free viewing. In contrast, dyslexics showed similar amounts of regressive and progressive small fixational eye movements during both reading and free viewing. We also found that participants with smaller fixational saccades from both neurotypical and dyslexic samples displayed reduced reading speeds and lower scores during independent tests of reading skill. Slower readers also displayed greater variability in the landing points and temporal occurrence of their fixational saccades. Both the rate and spatio-temporal variability of fixational saccades were associated with lower phonemic awareness scores.
As none of the observed differences between dyslexics and neurotypical readers occurred during control experiments with free viewing, the reported effects appear to be directly related to reading. In summary, our results highlight the predictive value of small saccades for reading skill, but not necessarily for developmental dyslexia.
Bojana Ristic; Simona Mancini; Nicola Molinaro; Adrian Staub
Maintenance cost in the processing of subject–verb dependencies
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–31, 2021.
Although research in sentence comprehension has suggested that processing long-distance dependencies involves maintenance between the elements that form the dependency, studies on maintenance of long-distance subject–verb (SV) dependencies are scarce. The few relevant studies have delivered mixed results using self-paced reading or phoneme-monitoring tasks. In the current study, we used eye tracking during reading to test whether maintaining a long-distance SV dependency results in a processing cost on an intervening adverbial clause. In Experiment 1, we studied this question in Spanish and found that both go-past reading times and regressions out of an adverbial clause to the previous regions were significantly increased when the clause interrupts an SV dependency compared to when the same clause does not interrupt this dependency. We then replicated these findings in English (Experiment 2), observing significantly increased go-past reading times on a clause interrupting an SV dependency. The current study provides the first eye-tracking data showing a maintenance cost in the processing of SV dependencies cross-linguistically. Sentence comprehension models should account for the maintenance cost generated by SV dependency processing, and future research should focus on the nature of the maintained representation.
Miriam Rivero-Contreras; Paul E. Engelhardt; David Saldaña
In: Annals of Dyslexia, vol. 71, no. 1, pp. 170–187, 2021.
Easy-to-read guidelines recommend visual support and lexical simplification to facilitate text processing, but few empirical studies confirm a positive effect from these recommendations in individuals with dyslexia. This study examined the influence of the visual support and lexical simplification on sentence processing through eye movements at both the text- and word-level, and the differences between readers with and without dyslexia. Furthermore, we explored the influence of reading experience and vocabulary, as control variables. We tested 20 young adults with dyslexia and 20 chronological age-matched controls. Participants read 60 sentences in total. Half the sentences contained an image and the other half did not, and half contained a low-frequency word and half a high-frequency word. Results showed that visual support and lexical simplification facilitated sentence processing, potentially by jointly facilitating lexical semantic access. We also found that participants with lower print exposure and lower vocabulary benefited more from word-level lexical simplification. We conclude that both adaptations could benefit readers with low print exposure and smaller vocabularies, and therefore, to many dyslexic readers who show these characteristics.
John Ross Rizzo; Todd E. Hudson; John Martone; Weiwei Dai; Oluchi Ihionu; Yash Chaudhry; Ivan Selesnick; Laura J. Balcer; Steven L. Galetta; Janet C. Rucker
In: Brain Injury, vol. 35, no. 4, pp. 426–435, 2021.
Background: Sideline diagnostic tests for concussion are vulnerable to volitional poor performance (“sandbagging”) on baseline assessments, motivated by desire to subvert concussion detection and potential removal from play. We investigated eye movements during sandbagging versus best effort on the King-Devick (KD) test, a rapid automatized naming (RAN) task. Methods: Participants performed KD testing during oculography following instructions to sandbag or give best effort. Results: Twenty healthy participants without concussion history were included (mean age 27 ± 8 years). Sandbagging resulted in longer test times (89.6 ± 39.2 s vs 48.2 ± 8.5 s, p < .001), longer inter-saccadic intervals (459.5 ± 125.4 ms vs 311.2 ± 79.1 ms, p < .001) and greater numbers of saccades (171.4 ± 47 vs 138 ± 24.2, p < .001) and reverse saccades (wrong direction for reading) (21.2% vs 11.3%, p < .001). Sandbagging was detectable using a logistic model with KD times as the only predictor, though more robustly detectable using eye movement metrics. Conclusions: KD sandbagging results in eye movement differences that are detectable by eye movement recordings and suggest an invalid test score. Objective eye movement recording during the KD test shows promise for distinguishing between best effort and post-injury performance, as well as for identifying sandbagging red flags.
Jennifer M. Roche; Arkady Zgonnikov; Laura M. Morett; Stephen M. Camarata; Susan Nittrouer
In: Journal of Speech, Language, and Hearing Research, vol. 64, no. 1, pp. 159–175, 2021.
Purpose: The purpose of the current study was to evaluate the social and cognitive underpinnings of miscommunication during an interactive listening task. Method: An eye- and computer mouse-tracking visual-world paradigm was used to investigate how a listener's cognitive effort (local and global) and decision-making processes were affected by a speaker's use of ambiguity that led to a miscommunication. Results: Experiments 1 and 2 found that an environmental cue that made a miscommunication more or less salient impacted listener language processing effort (eye-tracking). Experiment 2 also indicated that listeners may develop different processing heuristics dependent upon the speaker's use of ambiguity that led to a miscommunication, exerting a significant impact on cognition and decision making. We also found that perspective-taking effort and decision-making complexity metrics (computer mouse tracking) predict language processing effort, indicating that instances of miscommunication produced cognitive consequences of indecision, thinking, and cognitive pull. Conclusion: Together, these results indicate that listeners behave both reciprocally and adaptively when miscommunications occur, but the way they respond is largely dependent upon the type of ambiguity and how often it is produced by the speaker.
Douglas Roland; Gail Mauner; Yuki Hirose
In: Journal of Memory and Language, vol. 119, pp. 104244, 2021.
Relative clauses have played a key role in distinguishing between different theories of language comprehension. A reversal in processing costs between full NP and pronominal relative clauses reported by Reali and Christiansen (2007) has been used to argue for expectation-based theories of comprehension (e.g., Hale, 2001; Levy, 2008), and against memory-based theories of comprehension (e.g., Gibson, 1998, 2000; Gordon, Hendrick, & Johnson, 2001; Lewis, Vasishth, & Van Dyke, 2006). We present results relying on eye-movements during reading, in conjunction with modeling of differences between self-paced reading and eye movement data, to argue that the results observed by Reali and Christiansen and others are due to the self-paced reading paradigm, and do not reflect an actual reversal in processing costs. Overall, our results suggest that a combination of memory-based factors and spillover explains the pattern of reading times observed in various relative clause experiments such as those in Reali and Christiansen (2007), and that while comprehenders' expectations undeniably play a role in language comprehension, the role may be less dramatic than is suggested by previous studies.
Camilo R. Ronderos; Ernesto Guerra; Pia Knoeferle
In: Frontiers in Psychology, vol. 11, pp. 556624, 2021.
When a word is used metaphorically (for example “walrus” in the sentence “The president is a walrus”), some features of that word's meaning (“very fat,” “slow-moving”) are carried across to the metaphoric interpretation while other features (“has large tusks,” “lives near the north pole”) are not. What happens to these features that relate only to the literal meaning during processing of novel metaphors? In four experiments, the present study examined the role of the feature of physical containment during processing of verbs of physical containment. That feature is used metaphorically to signify difficulty, such as “fenced in” in the sentence “the journalist's opinion was fenced in after the change in regime.” Results of a lexical decision task showed that video clips displaying a ball being trapped by a box facilitated comprehension of verbs of physical containment when the words were presented in isolation. However, when the verbs were embedded in sentences that rendered their interpretation metaphorical in a novel way, no such facilitation was found, as evidenced by two eye-tracking reading studies. We interpret this as suggesting that features that are critical for understanding the encoded meaning of verbs but are not part of the novel metaphoric interpretation are ignored during the construction of metaphorical meaning. Results and limitations of the paradigm are discussed in relation to previous findings in the literature both on metaphor comprehension and on the interaction between language comprehension and the visual world.
Mylène Ross-Plourde; Mylène Lachance-Grzela; Andréanne Charbonneau; Mylène Dumont; Annie Roy-Charland
In: Journal of Gender Studies, pp. 1–9, 2021.
While the characteristics associated with fathers have increasingly taken on maternal traits in recent years, a similar shift has not been observed for maternal characteristics. The role of mother remains stereotyped, and those who do not adhere to it often face criticism. This study examines the impact of parental stereotypes on the cognitive processes associated with reading. A sample of 32 individuals read 24 experimental passages introducing a parent (mother or father) in a traditional or non-traditional role, and in a neutral or disambiguating context. Results show a significant interaction between the type of role and gender of the parent on reading times. Simple main effect tests revealed that for traditional roles, fixation durations were longer when the protagonist was a father than when the protagonist was a mother. There was no effect of role type for fathers, yet for mothers, fixation durations were longer when they were depicted in non-traditional roles than when they were depicted in traditional roles. This disruption of information processing of schema-incongruent content suggests that mothers' parenting stereotypes remain anchored in society and are more rigid than those of fathers, supporting the idea of a double standard in parenting roles.
Mikael Rubin; Nilavra Bhattacharya; Jacek Gwizdka; Zenzi Griffin; Michael Telch
In: Cognition and Emotion, pp. 1–8, 2021.
A large body of research has provided evidence that Posttraumatic Stress Disorder (PTSD) symptoms are associated with broad changes in attentional processes which are in turn implicated in core facets of emotion regulation. However, prior research has primarily focused on specific task-based evaluations of attention. In the current study, we evaluated eye movement behaviour among adults that endorsed a traumatic event meeting Criterion A and were experiencing a range of PTSD symptoms (N = 55) while they read short trauma-related or neutral passages. We found evidence that PTSD symptoms were associated with a small difference in attentional processes between the two types of passages, with longer first fixations to words in trauma-related passages (b = 1.92, 95% CI [0.31, 3.56]). Moreover, within the trauma-related texts we found that greater PTSD symptoms were associated with longer total fixation times (b = 9.53, 95% CI [2.20, 16.83]) and a greater number of regressions (b = 0.07, 95% CI [0.01, 0.13]) to trauma-related words. Inclusion of an additional 25 participants not endorsing a trauma that met Criterion A did not influence the results in any meaningful way. For the first time, we provide evidence that PTSD symptoms are linked to bias for trauma-related information during a naturalistic, everyday activity – reading.
Danila Rusich; Lisa S Arduino; Marika Mauti; Marialuisa Martelli; Silvia Primativo
In: Brain Sciences, vol. 11, pp. 28, 2021.
This study explores whether semantic processing in parafoveal reading in the Italian language is modulated by the perceptual and lexical features of stimuli by analyzing the results of the rapid parallel visual presentation (RPVP) paradigm experiment, which simultaneously presented two words, with one in the fovea and one in the parafovea. The words were randomly sampled from a set of semantically related and semantically unrelated pairs. The accuracy and reaction times in reading the words were measured as a function of the stimulus length and written word frequency. Fewer errors were observed in reading parafoveal words when they were semantically related to the foveal ones, and a larger semantic facilitatory effect was observed when the foveal word was highly frequent and the parafoveal word was short. Analysis of the reaction times suggests that the semantic relation between the two words sped up the naming of the foveal word when both words were short and highly frequent. Altogether, these results add further evidence in favor of the semantic processing of words in the parafovea during reading, modulated by the orthographic and lexical features of the stimuli. The results are discussed within the context of the most prominent models of word processing and eye movement control in reading.
Anthony J. Ryals; Megan E. Kelly; Anne M. Cleary
Increased pupil dilation during tip-of-the-tongue states
In: Consciousness and Cognition, vol. 92, pp. 103152, 2021.
Tip-of-the-tongue states (TOTs) are feelings of impending word retrieval success during a current failure to retrieve a target word. Though much is known and understood about TOT states from decades of research, research on potential psychophysiological correlates of the TOT state is still in its infancy, and existing studies point toward the involvement of neural processes that are associated with enhanced attention, motivation, and information-seeking. In the present study, we demonstrate that, during instances of target retrieval failure, TOT states are associated with greater pupillary dilation (i.e., autonomic arousal) in 91% of our sample. This is the first study to demonstrate a pupillometric correlate of the TOT experience, and this finding provides an important step toward understanding emotional attributes associated with TOT states. Mean pupil dilation also increased such that instances of target identification failure that were unaccompanied by TOT states < instances in which TOTs occurred < instances of target identification success. It is possible that TOTs reflect an intermediary state between complete target retrieval failure and full target retrieval.
Cailey A. Salagovic; Carly J. Leonard
In: Attention, Perception, and Psychophysics, vol. 83, no. 2, pp. 800–809, 2021.
Successful navigation of information-rich, multimodal environments involves processing both auditory and visual information. The extent to which information within each modality is processed varies because of many factors, but the influence of auditory stimuli on the processing of visual stimuli in these multimodal environments is not well understood. Previous research has shown that a preceding sound leads to decreased reaction times in visual tasks (Bertelson, Quarterly Journal of Experimental Psychology 19(3), 272–279, 1967). The current study examines whether a nonspatial, task-irrelevant sound additionally alters processing of visual distractors that flank a central target. We used a version of a flanker task in which participants responded to a central letter surrounded by two irrelevant flanker letters. When these flankers are associated with a conflicting response, a congruency effect occurs such that reaction time to the target is slowed (Eriksen & Eriksen, Perception & Psychophysics, 16(1), 143–149, 1974). In two experiments using this task, results showed that a preceding tone caused general speeding of reaction time across flanker types, consistent with alerting. The tone also caused decreased variation in response time. Critically, the tone modulated the congruency effect, with a greater speeding for congruent flankers than for incongruent flankers. This suggests that the influence of flanker identity was more intense after tone presentation, consistent with a nonspatial sound increasing perceptual and/or response-association processing of flanking stimuli.
McCall E. Sarrett; Christine Shea; Bob McMurray
In: Language, Cognition and Neuroscience, pp. 1–41, 2021.
Second language (L2) learners must not only acquire L2 knowledge (i.e. vocabulary and grammar), but they must also rapidly access this knowledge. In monolinguals, efficient spoken word recognition is accomplished via lexical competition, by which listeners activate a range of candidates that compete for recognition as the signal unfolds. We examined this in adult L2 learners, investigating lexical competition both amongst words of the L2, and between L2 and native language (L1) words. Adult L2 learners (N = 33) in their third semester of college Spanish completed a cross-linguistic Visual World Paradigm task to assess lexical activation, along with a proficiency assessment (LexTALE-Esp). L2 learners showed typical incremental processing activating both within-L2 and cross-linguistic competitors, similar to fluent bilinguals. Proficiency correlated with both the speed of activating the target (which prior work links to the developmental progression in L1) and the degree to which competition ultimately resolves (linked to robustness of the lexicon).
Raheleh Saryazdi; Daniel DeSantis; Elizabeth K. Johnson; Craig G. Chambers
In: Psychology and Aging, vol. 36, no. 8, pp. 928–942, 2021.
Past research suggests listeners treat disfluencies as informative cues during spoken language processing. For example, studies have shown that child and younger adult listeners use filled pauses to rapidly anticipate discourse-new objects. The present study explores whether older adults show a similar pattern, or if this ability is reduced in light of age-related declines in language and cognitive abilities. The study also examines whether the processing of disfluencies differs depending on the talker's age. Stereotyped ideas about older adults' speech could lead listeners to treat disfluencies as uninformative, similar to the way in which listeners react to disfluencies produced by non-native speakers or individuals with a cognitive disorder. Experiment 1 used eye tracking to capture younger and older listeners' real-time reactions to filled pauses produced by younger and older talkers. On critical trials, participants followed fluent or disfluent instructions, and both younger and older listeners treated disfluencies produced by both younger and older talkers as cues for reference to discourse-new objects, despite holding stereotypes regarding older adults' speech. Experiment 2 further explored listeners' biased judgments of talkers' fluency, using auditory materials from Experiment 1. Speech produced by an older talker was rated as slower and more disfluent than that of a younger talker even though these features were matched across recordings. Together, the findings demonstrate (a) older listeners' effective use of disfluency cues in real-time comprehension and (b) that listeners treat both older and younger talkers' disfluencies as informative despite their stereotyped views of older adults' speech.
Gaston Saux; Nicolas Vibert; Julien Dampuré; Debora I. Burin; M. Anne Britt; Jean François Rouet
In: Acta Psychologica, vol. 212, pp. 103191, 2021.
The study examined how readers integrate information from and about multiple information sources into a memory representation. In two experiments, college students read brief news reports containing two critical statements, each attributed to a source character. In half of the texts, the statements were consistent with each other, in the other half they were discrepant. Each story also featured a non-source character (who made no statement). The hypothesis was that discrepant statements, as compared to consistent statements, would promote distinct attention and memory only for the source characters. Experiment 1 used short interviews to assess participants' ability to recognize the source of one of the statements after reading. Experiment 2 used eye-tracking to collect data during reading and during a source-content recognition task after reading. As predicted, discrepancies only enhanced memory of, and attention to source-related segments of the texts. Discrepancies also enhanced the link between the two source characters in memory as opposed to the non-source character, as indicated by the participants' justifications (Experiment 1) and their visual inspection of the recognition items (Experiment 2). The results are interpreted within current theories of text comprehension and document literacy.
Karly M. Schleicher; Ana I. Schwartz
In: Discourse Processes, pp. 1–25, 2021.
In the present study we examined whether overlap in language across texts influences the integration of information into a coherent discourse representation for bilingual readers. Across two experiments highly proficient Spanish–English bilinguals read pairs of expository passages describing two fictional science facts while their eye movements were monitored. One of the facts was revised in the second passage, requiring discourse updating. The language of the two passages and follow-up questions was fully crossed. Accuracy was lower for questions pertaining to revised facts when the second passage was in the second language (L2). This cost was exacerbated when the first passage was in the dominant language, suggesting strong interference from the representation of the first passage which impeded updating the discourse model in the L2. This interference was eliminated in Experiment 2 when second passages were written based on a refutation-style text structure. Analyses of reading times on the pseudoterms before and after the revised fact was stated indicated that the previous version of the fact was reactivated and interfered with processing. This interference was similar regardless of whether passages were written in the same or different languages.
Daniel Schmidtke; Anna L. Moro
In: Reading Research Quarterly, vol. 56, no. 4, pp. 819–854, 2021.
We investigated the word-reading development of adult second-language learners of English. A sample of 70 (Mandarin or Cantonese) Chinese-speaking students enrolled in a university-level English bridging program at a Canadian university silently read passages of text at the beginning and end of the program while their eye movements were recorded. At each timepoint, we also administered a battery of tests that measure key component skills of second-language reading (phonological processing, vocabulary knowledge, and listening comprehension). We found longitudinal changes in lexical processing for long words in early (refixation probability and gaze duration) and late (go-past time and total reading time) eye movement measures, indicating a shift from a sublexical to a holistic word-processing strategy. We found the largest gains in sublexical processing among students with stronger phonological awareness upon entry to the program and students who acquired more vocabulary than their peers during the program. We interpret the results of this study as evidence of a transition from a lexical processing strategy that is heavily reliant on phonological decoding to word-reading behavior that is more actively engaged in higher order cognitive processes, such as meaning integration. This research offers novel insights into predictors of reading skill in postsecondary English-language bridging programs.
Daniel Schmidtke; Julie A. Van Dyke; Victor Kuperman
In: Behavior Research Methods, vol. 53, no. 1, pp. 59–77, 2021.
The CompLex database presents a large-scale collection of eye-movement studies on English compound-word processing. A combined total of 440 participants completed eye-tracking experiments in which they silently read unspaced English compound words (e.g., goalpost) embedded in sentence contexts (e.g., Dylan hit the goalpost when he was aiming for the net.). Three studies were conducted using participants representing the non-college-bound population (300 participants), and four studies included participants recruited from the student population (140 participants). The database comprises trial-level eye-movement data (47,763 trials), participant data (including a measure of reading experience estimated via the Author Recognition Test), and lexical characteristics for the set of 931 English compound words used as critical stimuli in the studies. One contribution of the present paper is a set of regression analyses conducted on the full database and individual experiments. We report that the most reliable and consistent main effects were those elicited by compound word length, left constituent frequency, right constituent frequency, compound frequency and semantic transparency. Separately, we also found that the effect of left frequency and compound word length is weaker among more frequent compounds. Another contribution is a power analysis, in which we determined the sample sizes required to reliably detect effect sizes that are comparable to those observed in our regression models. These sample size estimates serve as a recommendation for researchers wishing to either collect eye-movement data for compound word reading, or use the current database as a resource for the study of English compound word processing.
Lea-Maria Schmitt; Julia Erb; Sarah Tune; Anna U. Rysop; Gesa Hartwigsen; Jonas Obleser
In: Science Advances, vol. 7, no. 49, pp. eabi6070, 2021.
How do predictions in the brain incorporate the temporal unfolding of context in our natural environment? We here provide evidence for a neural coding scheme that sparsely updates contextual representations at the boundary of events. This yields a hierarchical, multilayered organization of predictive language comprehension. Training artificial neural networks to predict the next word in a story at five stacked time scales and then using model-based functional magnetic resonance imaging, we observe an event-based “surprisal hierarchy” evolving along a temporoparietal pathway. Along this hierarchy, surprisal at any given time scale gated bottom-up and top-down connectivity to neighboring time scales. In contrast, surprisal derived from continuously updated context influenced temporoparietal activity only at short time scales. Representing context in the form of increasingly coarse events constitutes a network architecture for making predictions that is both computationally efficient and contextually diverse.
Sarah Schuster; Nicole Alexandra; Florian Hutzler; Fabio Richlan; Martin Kronbichler; Stefan Hawelka
In: NeuroImage, vol. 228, pp. 117687, 2021.
Evidence accrues that readers form multiple hypotheses about upcoming words. The present study investigated the hemodynamic effects of predictive processing during natural reading by means of combining fMRI and eye movement recordings. In particular, we investigated the neural and behavioral correlates of precision-weighted prediction errors, which are thought to be indicative of subsequent belief updating. Participants silently read sentences in which we manipulated the cloze probability and the semantic congruency of the final word, which served as indices of precision and prediction error, respectively. With respect to the neural correlates, our findings indicate an enhanced activation within the left inferior frontal and middle temporal gyrus suggesting an effect of precision on prediction update in higher (lexico-)semantic levels. Despite being evident at the neural level, we did not observe any evidence in participants' eye movements that this mechanism resulted in disproportionate reading times. The results speak against discrete predictions, but favor the notion that multiple words are activated in parallel during reading.
Anne Wienholz; Derya Nuhbalaoglu; Markus Steinbach; Annika Herrmann; Nivedita Mani
Phonological priming in German Sign Language
In: Sign Language & Linguistics, vol. 24, no. 1, pp. 4–35, 2021.
A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters and that the specific phonological parameters modulated in the priming effect can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.
Matthew B. Winn; Katherine H. Teece
In: Ear and Hearing, vol. 42, no. 3, pp. 584–595, 2021.
OBJECTIVES: Slowed speaking rate was examined for its effects on speech intelligibility, its interaction with the benefit of contextual cues, and the impact of these factors on listening effort in adults with cochlear implants. DESIGN: Participants (n = 21 cochlear implant users) heard high- and low-context sentences that were played at the original speaking rate, as well as a slowed (1.4× duration) speaking rate, using uniform pitch-synchronous time warping. In addition to intelligibility measures, changes in pupil dilation were measured as a time-varying index of processing load or listening effort. Slope of pupil size recovery to baseline after the sentence was used as an index of resolution of perceptual ambiguity. RESULTS: Speech intelligibility was better for high-context compared to low-context sentences and slightly better for slower compared to original-rate speech. Speech rate did not affect magnitude and latency of peak pupil dilation relative to sentence offset. However, baseline pupil size recovered more substantially for slower-rate sentences, suggesting easier processing in the moment after the sentence was over. The effect of slowing speech rate was comparable to changing a sentence from low context to high context. The effect of context on pupil dilation was not observed until after the sentence was over, and one of two analyses suggested that context had greater beneficial effects on listening effort when the speaking rate was slower. These patterns maintained even at perfect sentence intelligibility, suggesting that correct speech repetition does not guarantee efficient or effortless processing. With slower speaking rates, there was less variability in pupil dilation slopes following the sentence, implying mitigation of some of the difficulties shown by individual listeners who would otherwise demonstrate prolonged effort after a sentence is heard. 
CONCLUSIONS: Slowed speaking rate provides release from listening effort when hearing an utterance, particularly relieving effort that would have lingered after a sentence is over. Context arguably provides even more release from listening effort when speaking rate is slower. The pattern of prolonged pupil dilation for faster speech is consistent with increased need to mentally correct errors, although that exact interpretation cannot be verified with intelligibility data alone or with pupil data alone. A pattern of needing to dwell on a sentence to disambiguate misperceptions likely contributes to difficulty in running conversation where there are few opportunities to pause and resolve recently heard utterances.
Matthew B. Winn; Katherine H. Teece
Listening effort is not the same as speech intelligibility score
In: Trends in Hearing, vol. 25, pp. 1–26, 2021.
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size among 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Sanmei Wu; Liangsu Tian; Jiaqiao Chen; Guangyao Chen; Jingxin Wang
In: Acta Psychologica Sinica, vol. 53, no. 7, pp. 729–745, 2021.
A wealth of research shows that irrelevant background speech can interfere with reading behavior. This effect is often described as the irrelevant speech effect (ISE). Two key theories have been proposed to account for this effect; namely, the Phonological-Interference Hypothesis and the Semantic-Interference Hypothesis. Few studies have investigated the irrelevant speech effect in Chinese reading. Moreover, the underlying mechanisms for the effect also remain unclear. Accordingly, with the present research we examined the irrelevant speech effect in Chinese using eye movement measures. Three experiments were conducted to explore the effects of different kinds of background speech. Experiment 1 used simple sentences, Experiment 2 used complex sentences, and Experiment 3 used paragraphs. The participants in each experiment were skilled readers, undergraduates recruited from the university, who read the sentences while their eye movements were recorded using an EyeLink 1000 eye tracker (SR Research Ltd.). The three experiments used the same background speech conditions. In an unintelligible background speech condition, participants heard irrelevant speech in Spanish (which none of the participants could understand), while in an intelligible background speech condition, they heard irrelevant speech in Chinese. Finally, in a third condition, the participants read in silence, with no background speech present. The results showed no significant difference in key eye movement measures (total reading time, average fixation duration, number of fixations, number of regressions, total fixation time, and regression path reading time) for the silent compared to the unintelligible background speech condition across all three experiments. In Experiment 1, which used simple sentences as stimuli, there was also no significant difference between the silent and intelligible background speech condition. 
However, in Experiment 2, which used more complex sentences, normal reading was disrupted in the intelligible background speech condition compared to silence, revealing an ISE for these more difficult sentences. Compared with the silent condition, the intelligible background speech produced longer reading times and average fixation durations, more fixations and regressions, and longer regression path reading times and total fixation times. Finally, Experiment 3 also produced evidence for an ISE, with longer total reading times, more fixations, and longer regression path reading times and total reading times in the intelligible background speech condition compared with silence. To sum up, the results of the current three experiments suggest that: (1) unintelligible speech does not disrupt normal reading significantly, contrary to the Phonological-Interference Hypothesis; (2) intelligible background speech can disrupt the reading of complex (but not simpler) sentences and also paragraph reading, supporting the Semantic-Interference Hypothesis. Such findings suggest that irrelevant speech might disrupt later stages of lexical processing and semantic integration in reading, and that this effect is modulated by the difficulty of the reading task.
Xue-Zhen Xiao; Gaoding Jia; Aiping Wang
In: Language Learning and Development, pp. 1–15, 2021.
When reading Chinese, skilled native readers regularly gain a preview benefit (PB) when the parafoveal word is orthographically or semantically related to the target word. Evidence shows that non-native, beginning Chinese readers can obtain an orthographic PB during Chinese reading, which indicates the parafoveal processing of low-level visual information. However, whether non-native Chinese readers who are more proficient in Chinese can make use of high-level parafoveal information remains unknown. Therefore, this study examined parafoveal processing during Chinese reading among Tibetan-Chinese bilinguals with high Chinese proficiency and compared their PB effects with those from native Chinese readers. Tibetan-Chinese bilinguals demonstrated both orthographic and semantic PB but did not show phonological PB, and only differed from native Chinese readers in the identical PB, when preview characters were identical to the targets. These findings demonstrate that non-native Chinese readers can extract semantic information from parafoveal preview during Chinese reading and highlight the modulation of parafoveal processing efficiency by reading proficiency. The results are in line with the direct route to access the mental lexicon of visual Chinese characters among non-native Chinese speakers.
Guoli Yan; Zebo Lan; Zhu Meng; Yingchao Wang; Valerie Benson
In: Scientific Studies of Reading, vol. 25, no. 4, pp. 287–303, 2021.
Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students.
Bo Yao; Jason R. Taylor; Briony Banks; Sonja A. Kotz
In: NeuroImage, vol. 239, pp. 118313, 2021.
Growing evidence shows that theta-band (4–7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: “This dress is lovely!”) elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250–500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
Panpan Yao; Reem Alkhammash; Xingshan Li
In: Scientific Studies of Reading, pp. 1–19, 2021.
We aimed to tackle the question of the time course of the plausibility effect in online processing of Chinese nouns in temporarily ambiguous structures, and whether L2ers can immediately use the plausibility information generated from classifier-noun associations in analyzing ambiguous structures. Two eye-tracking experiments were conducted to explore how native Chinese speakers (Experiment 1) and high-proficiency Dutch-Chinese learners (Experiment 2) online process 4-character novel noun-noun combinations in Chinese. In each pair of nominal phrases (Numeral+Classifier+Noun1+Noun2), the plausibility of Classifier-Noun1 varied (plausible vs. implausible) while the whole nominal phrases were always plausible. Results showed that the plausibility of Classifier-Noun1 associations had an immediate effect on Noun1, and a reversed effect on Noun2 for both groups of participants. These findings indicated that plausibility plays an immediate role in incremental semantic integration during online processing of Chinese. Similar to native Chinese speakers, high-proficiency L2ers can also use the plausibility information of classifier-noun associations in syntactic reanalysis.
Panpan Yao; Timothy J. Slattery; Xingshan Li
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–40, 2021.
In the current study, we conducted two eye-tracking reading experiments to explore whether sentence context can influence neighbor effects in word recognition during Chinese reading. Chinese readers read sentences in which the targets' orthographic neighbors were either plausible or implausible with the pre-target context. The results revealed that the neighbor effect was influenced by context: the context in the biased condition (where only targets but not neighbors can fit in the pre-target context) evoked a significantly weaker inhibitory neighbor effect than in the neutral condition (where both targets and neighbors can fit in the pre-target context). These results indicate that contextual information can be used to modulate neighbor effects during on-line sentence reading in Chinese.
Panpan Yao; Adrian Staub; Xingshan Li
In: Psychonomic Bulletin & Review, pp. 1–10, 2021.
Previous research has demonstrated effects of both orthographic neighborhood size and neighbor frequency in word recognition in Chinese. A large neighborhood—where neighborhood size is defined by the number of words that differ from a target word by a single character—appears to facilitate word recognition, while the presence of a higher-frequency neighbor has an inhibitory effect. The present study investigated modulation of these effects by a word's predictability in context. In two eye-movement experiments, the predictability of a target word in each sentence was manipulated. Target words differed in their neighborhood size (Experiment 1) and in whether they had a higher-frequency neighbor (Experiment 2). The study replicated the previously observed effects of neighborhood size and neighbor frequency when the target word was unpredictable, but in both experiments neighborhood effects were absent when the target was predictable. These results suggest that when a word is preactivated by context, the activation of its neighbors may be diminished to such an extent that these neighbors do not effectively compete for selection.
Lili Yu; Yanping Liu; Erik D. Reichle
In: Journal of Experimental Psychology: General, vol. 150, no. 8, pp. 1612–1641, 2021.
Chinese words consist of a variable number of characters that are normally written in continuous lines, without the blank spaces that are used to separate words in most alphabetic writing systems. These conventions raise questions about the relative roles of character versus whole-word processing in word identification, and how words are segmented from strings of characters for the purpose of their identification and saccade targeting. The present article attempts to address these questions by reporting an eye-movement experiment in which 60 participants read a corpus of sentences containing two-character target words that varied in terms of their overall frequency and the frequency of their initial characters. We examine participants' eye movements using both corpus-based statistical models and more standard analyses of our target words. In addition to documenting how key lexical variables influence eye movements and highlighting a few discrepancies between the results obtained using our two statistical approaches, our experiment shows that high-frequency initial characters can actually slow word identification. We discuss the theoretical significance of this finding and others for current models of Chinese reading, and then describe a new computational model of eye-movement control during the reading of Chinese. Finally, we report simulations showing that this model can account for our findings.
Chuanli Zang; Ying Fu; Xuejun Bai; Guoli Yan; Simon P. Liversedge
In: Journal of Memory and Language, vol. 119, pp. 1–15, 2021.
Chinese idioms are likely to be represented and processed as Multi-Constituent Units (MCUs, a multi-word unit with a single lexical representation, see Zang, 2019). Chinese idioms with a 1-character verb and 2-character noun structure are processed foveally, but not parafoveally, as a single lexical unit (Yu et al., 2016), probably because the verb only loosely constrains noun identity. By contrast, Chinese idioms with modifier-noun structure are more likely MCU candidates due to significant modifier constraint over the subsequent noun. We investigated whether idioms of this type are parafoveally and foveally processed as MCUs during natural reading. In Experiment 1, we manipulated phrase type (idiom or matched phrase) and preview of the noun (identity, unrelated character or pseudocharacter) using the boundary paradigm (Rayner, 1975). A larger preview effect occurred for idioms on the modifier with shorter fixations for identical than unrelated and pseudocharacter previews. This suggests idioms are parafoveally processed to a greater extent than matched phrases. In Experiment 2, preview of the modifier and noun of idioms and phrases (identity or pseudocharacter) was orthogonally manipulated (c.f., Cutter, Drieghe & Liversedge, 2014). For identity modifiers, a greater noun preview effect occurred for idioms relative to phrases providing further evidence that modifier-noun idioms are lexicalised MCUs and processed parafoveally as single, unified representations.
Alessandra Zarcone; Vera Demberg
In: Discourse Processes, vol. 58, no. 9, pp. 804–819, 2021.
There is now a well-established literature showing that people anticipate upcoming concepts and words during language processing. Commonsense knowledge about typical event sequences and verbal selectional preferences can contribute to anticipating what will be mentioned next. We here investigate how temporal discourse connectives (before, after), which signal event ordering along a temporal dimension, modulate predictions for upcoming discourse referents. Our study analyses anticipatory gaze in the visual world and supports the idea that script knowledge, temporal connectives (before eating → menu, appetizer), and the verb's selectional preferences (order → appetizer) jointly contribute to shaping rapid prediction of event participants.
Tao Zeng; Wen Mao; Yarong Gao
In: Journal of Psycholinguistic Research, pp. 1–26, 2021.
The present study explored the abstract priming effects from mathematical equations to the Mandarin Chinese structure NP1 + You + NP2 + Hen + AP in an online comprehension task, with the aim of identifying the mechanism underlying these effects. The results revealed that, compared with baseline priming conditions, participants tended to choose more high-attachment options in high-attachment priming conditions and more low-attachment options in low-attachment priming conditions. This difference was statistically significant, providing evidence for a shared structural representation across the mathematical and linguistic domains. Additionally, the fixation sequences during arithmetic calculations indicated that the equations were processed hierarchically and could be extracted in parallel, rather than being scanned sequentially from left to right. Our results provide some evidence for the Representational Account.
Tao Zeng; Yating Mu; Taoyan Zhu
In: Cognitive Processing, vol. 22, no. 2, pp. 185–207, 2021.
This article explores the domain generality of hierarchical representation between linguistic and mathematical cognition by adopting the structural priming paradigm in an eye-tracking reading experiment. The experiment investigated whether simple arithmetic equations with high-attachment (e.g., (7 + 2) × 3 + 1) or low-attachment (e.g., 7 + 2 × 3 + 1) structure influence language users' interpretation of Chinese ambiguous structures (NP1 + He + NP2 + De + NP3; Quantifier + NP1 + De + NP2; NP1 + Kan/WangZhe + NP2 + AP). On the one hand, behavioral results showed that high-attachment primes led to more high-attachment interpretations, while low-attachment primes led to more low-attachment interpretations. On the other hand, the eye movement data indicated that structural priming substantially reduced dwell time on the ambiguous structure. Structural priming effects from simple arithmetic to three different structures in Chinese were found, providing new evidence of cross-domain priming from simple arithmetic to language. Beyond the attachment priming effect at the global level, online sentence integration at the local level was found to be structure-dependent, as indicated by differences in some eye movement measures. Our results provide some evidence for the Representational Account.
Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li
In: Brain and Language, vol. 213, pp. 1–10, 2021.
Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.
Tamás Káldi; Anna Babarczy
In: Journal of Memory and Language, vol. 116, pp. 104187, 2021.
Focus is a linguistic device that marks a piece of information within an utterance as most relevant, as when emphasis is placed by the speaker on a word using phonological stress, special intonation, or prosodic prominence. The question addressed in the present study is whether the use of linguistic focus is best seen as a means of directing the listener's attention. We investigated attention allocation on the part of the listener to linguistically focused elements in working memory in a series of eye-tracking experiments. We concentrated on two processes: the encoding of the focused element and its retention. Attentional load during encoding was measured by pupil dilation, and attention allocation during retention was estimated from fixations to locations of previously present visual stimuli on a blank screen. It was found that i) more attention was allocated during the processing of sentences with linguistic focus and ii) linguistically focused elements received more attention during memory retention. However, when the task demanded the sharing of attention, the advantage of the focused element during retention disappeared. Further experiments showed that when verbal stimuli whose prominence was not linguistically marked were presented, the patterns of attention allocation associated with linguistic focus during retention replicated. These results lend further support to the claim that linguistic focus is a grammaticalized means of expressing prominence, and as such, functions as an attention capturing device.
Tinghu Kang; Ping Wang; Hui Zhang
In: Psychology Research and Behavior Management, vol. 14, pp. 251–260, 2021.
Purpose: Calligraphy is the most unique form of artistic expression in Chinese culture. However, most studies that used calligraphy as a research object only explored its artistic value from an artistic perspective, so we know little about the information processing and influencing factors of calligraphic perception. We therefore aimed to determine whether there are differences in attention distribution due to cognitive style in the process of calligraphic perception. Methods: The calligraphy of Lan Ting Ji Xu, known as the first running script in the history of Chinese calligraphy, was selected as the experimental material. The study used eye movement experiments to explore differences between cognitive styles in attention distribution when perceiving calligraphy, through the analysis of participants' eye movement data. Results: The results showed that field-independent participants had longer fixation durations, more fixations, and larger saccade angles when perceiving calligraphic works than those who were field-dependent. In other words, field-independent individuals spend more attentional resources in the perceptual process. In addition, the data analysis found that fixation duration, number of fixations, and saccade angle were larger in the middle position of the calligraphy than on both sides. In other words, when individuals perceive calligraphy, the content in the middle position attracts more attentional resources than the content on both sides. Conclusion: We found that individuals with different cognitive styles differ in attention distribution in the process of perceiving calligraphy.
Elif Canseza Kaplan; Anita E. Wagner; Paolo Toffanin; Deniz Başkent
In: Frontiers in Psychology, vol. 12, pp. 623787, 2021.
Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (sentence recall task), which reveals a post-task response, and online measures of real time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words' images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, while the time course dynamics did differ between groups as the two-talker masker was added to the target signal. As the level of two-talker masker increased, musicians showed reduced lexical competition as indicated by the gaze fixations to the competitor. The pupil dilation data showed differences mainly in one target-to-masker ratio. This does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or sound processing.
Efthymia C. Kapnoula; Bob McMurray
In: Brain and Language, vol. 223, pp. 105031, 2021.
Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing.
Young Suk Grace Kim; Yaacov Petscher; Christopher Vorstius
In: Scientific Studies of Reading, vol. 25, no. 4, pp. 351–369, 2021.
We examined the relations between working memory, emergent literacy skills (e.g., phonological awareness, orthographic awareness, rapid-automatized naming), word reading, and listening comprehension to online reading processes (eye movements), and their relations to reading comprehension. A total of 292 students were assessed on working memory and emergent literacy skills in Grade 1, and eye movements, language, and reading skills in Grade 3. Structural equation model results showed that word reading was related to gaze duration and rereading duration, but listening comprehension was not. Working memory and emergent literacy skills were related to eye movements, but their relations to eye movements were largely mediated by word reading. Eye movements were related to reading comprehension, but not after accounting for word reading and listening comprehension. These results expand our understanding of reading development by revealing the nature of relations of emergent literacy skills, reading, and listening comprehension to online processes.
Suzanne Kleijn; Willem M. Mak; Ted J. M. Sanders
In: Cognitive Linguistics, vol. 32, no. 1, pp. 35–65, 2021.
Research has shown that it requires less time to process information that is part of an objective causal relation describing states of affairs in the world (She was out of breath because she was running), than information that is part of a subjective relation (She must have been in a hurry because she was running) expressing a claim or conclusion and a supporting argument. Representing subjectivity seems to require extra cognitive operations. In Mental Spaces Theory (MST; Fauconnier, Gilles. 1994. Mental spaces: Aspects of meaning construction in natural language. Cambridge: MIT Press) the difference between these two relation types can be described in terms of an extra mental space in the discourse representation of subjective relations: representing the Subject of Consciousness (SoC). In processing terms, this might imply that the processing difference is not present if this SoC has already been established in the discourse. We tested this prediction in two eye tracking experiments. The results of Experiment 1 showed that signaling the subjectivity of the relation by introducing a subject of consciousness beforehand did not diminish the processing asymmetry compared to a neutral context. However, the relative complexity of subjective relations was diminished in the context of Free Indirect Speech (No! He was absolutely sure. There was no doubt about it. She was running so she was in a hurry; Experiment 2). In terms of MST and the representation of subjectivity in general, this implies that not only creating a representation of a thinking subject, but also assigning a claim to this thinking subject requires extra processing effort.