EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications through 2019 (plus some early 2020 papers) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, or Speech Production, and you can also search for individual author names. If we missed any EyeLink reading or language article, please email us!
All EyeLink reading and language publications are also available for download and import into reference management software as a single BibTeX (.bib) file.
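For reference, each record in the downloadable .bib file is a standard BibTeX @article entry. The example below reproduces the record for the first 2020 publication listed on this page, reflowed for readability (abstract and empty fields omitted for brevity); importing the file into reference managers such as Zotero, Mendeley, or JabRef makes the keyword and author searches described above easy to run locally.

@article{Ulicheva2020,
  title     = {Skilled readers' sensitivity to meaningful regularities in English writing},
  author    = {Anastasia Ulicheva and Hannah Harvey and Mark Aronoff and Kathleen Rastle},
  journal   = {Cognition},
  volume    = {195},
  pages     = {103810},
  publisher = {Elsevier},
  year      = {2020},
  doi       = {10.1016/j.cognition.2018.09.013}
}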
2020
Anastasia Ulicheva; Hannah Harvey; Mark Aronoff; Kathleen Rastle. Skilled readers' sensitivity to meaningful regularities in English writing. Journal Article. Cognition, 195, pp. 103810, 2020. doi: 10.1016/j.cognition.2018.09.013
Abstract: Substantial research has been undertaken to understand the relationship between spelling and sound, but we know little about the relationship between spelling and meaning in alphabetic writing systems. We present a computational analysis of English writing in which we develop new constructs to describe this relationship. Diagnosticity captures the amount of meaningful information in a given spelling, whereas specificity estimates the degree of dispersion of this meaning across different spellings for a particular sound sequence. Using these two constructs, we demonstrate that particular suffix spellings tend to be reserved for particular meaningful functions. We then show across three paradigms (nonword classification, spelling, and eye tracking during sentence reading) that this form of regularity between spelling and meaning influences the behaviour of skilled readers, and that the degree of this behavioural sensitivity mirrors the strength of spelling-to-meaning regularities in the writing system. We close by arguing that English spelling may have become fractionated such that the high degree of spelling-sound inconsistency maximises the transmission of meaningful information.
Margreet Vogelzang; Francesca Foppolo; Maria Teresa Guasti; Hedderik van Rijn; Petra Hendriks. Reasoning about alternative forms is costly: The processing of null and overt pronouns in Italian using pupillary responses. Journal Article. Discourse Processes, 57(2), pp. 158–183, 2020. doi: 10.1080/0163853X.2019.1591127
Abstract: Different words generally have different meanings. However, some words seemingly share similar meanings. An example is the pair of null and overt pronouns in Italian, which both refer to an individual in the discourse. Is the interpretation and processing of a form affected by the existence of another form with a similar meaning? With a pupillary response study, we show that null and overt pronouns are processed differently. Specifically, null pronouns are found to be less costly to process than overt pronouns. We argue that this difference is caused by an additional reasoning step that is needed to process marked overt pronouns but not unmarked null pronouns. A comparison with data from Dutch, a language with overt but no null pronouns, demonstrates that Italian pronouns are processed differently from Dutch pronouns. These findings suggest that the processing of a marked form is influenced by alternative forms within the same language, making its processing costly.
Anke Weidmann; Laura Richert; Maximilian Bernecker; Miriam Knauss; Kathlen Priebe; Benedikt Reuter; Martin Bohus; Meike Müller-Engelmann; Thomas Fydrich. Dwelling on verbal but not pictorial threat cues: An eye-tracking study with adult survivors of childhood interpersonal violence. Journal Article. Psychological Trauma: Theory, Research, Practice, and Policy, 12(1), pp. 46–54, 2020. doi: 10.1037/tra0000424
Abstract: Objective: Previous studies have found evidence of an attentional bias for trauma-related stimuli in posttraumatic stress disorder (PTSD) using eye-tracking (ET) technology. However, it is unclear whether findings for PTSD after traumatic events in adulthood can be transferred to PTSD after interpersonal trauma in childhood. The latter is often accompanied by more complex symptom features, including, for example, affective dysregulation, and has not yet been studied using ET. The aim of this study was to explore which components of attention are biased in adult victims of childhood trauma with PTSD compared to those without PTSD. Method: Female participants with (n = 27) or without (n = 27) PTSD who had experienced interpersonal violence in childhood or adolescence watched different trauma-related stimuli (Experiment 1: words, Experiment 2: facial expressions). We analyzed whether trauma-related stimuli were primarily detected (vigilance bias) and/or dwelled on longer (maintenance bias) compared to stimuli of other emotional qualities. Results: For trauma-related words, there was evidence of a maintenance bias but not of a vigilance bias. For trauma-related facial expressions, there was no evidence of any bias. Conclusions: At present, an attentional bias to trauma-related stimuli cannot be considered as robust in PTSD following trauma in childhood compared to that of PTSD following trauma in adulthood. The findings are discussed with respect to difficulties attributing effects specifically to PTSD in this highly comorbid though understudied population.
Vincent Porretta; Lori Buchanan; Juhani Järvikivi. When processing costs impact predictive processing: The case of foreign-accented speech and accent experience. Journal Article. Attention, Perception, & Psychophysics, pp. 1–8, 2020.
Abstract: Listeners use linguistic information and real-world knowledge to predict upcoming spoken words. However, studies of predictive processing have focused on prediction under optimal listening conditions. We examined the effect of foreign-accented speech on predictive processing. Furthermore, we investigated whether accent-specific experience facilitates predictive processing. Using the visual world paradigm, we demonstrated that although the presence of an accent impedes predictive processing, it does not preclude it. We further showed that as listener experience increases, predictive processing for accented speech increases and begins to approximate the pattern seen for native speech. These results speak to the limitation of the processing resources that must be allocated, leading to a trade-off when listeners are faced with increased uncertainty and more effortful recognition due to a foreign accent.
Johannes Rennig; Kira Wegner-Clemens; Michael S Beauchamp. Face viewing behavior predicts multisensory gain during speech perception. Journal Article. Psychonomic Bulletin & Review, 27, pp. 70–77, 2020. doi: 10.3758/s13423-019-01665-y
Abstract: Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2–58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3–98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
Christophe Carlei; Dirk Kerzel. Looking up improves performance in verbal tasks. Journal Article. Laterality: Asymmetries of Body, Brain and Cognition, 25(2), pp. 198–214, 2020. doi: 10.1080/1357650X.2019.1646755
Abstract: Earlier research suggested that gaze direction has an impact on cognitive processing. It is likely that horizontal gaze direction increases activation in specific areas of the contralateral cerebral hemisphere. Consistent with the lateralization of memory functions, we previously showed that shifting gaze to the left improves visuo-spatial short-term memory. In the current study, we investigated the effect of unilateral gaze on verbal processing. We expected better performance with gaze directed to the right because language is lateralized in the left hemisphere. Also, an advantage of gaze directed upward was expected because local processing and object recognition are facilitated in the upper visual field. Observers directed their gaze at one of the corners of the computer screen while they performed lexical decision, grammatical gender and semantic discrimination tasks. Contrary to expectations, we did not observe performance differences between gaze directed to the left or right, which is consistent with the inconsistent literature on horizontal asymmetries with verbal tasks. However, RTs were shorter when observers looked at words in the upper compared to the lower part of the screen, suggesting that looking upwards enhances verbal processing.
Leanne Nagels; Roelien Bastiaanse; Deniz Başkent; Anita Wagner. Individual differences in lexical access among cochlear implant users. Journal Article. Journal of Speech, Language, and Hearing Research, 63, pp. 286–304, 2020.
Abstract: Purpose: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word–nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word–nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions: The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word–nonword sensitivity explained different parts of individual variability than clinical speech perception scores. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation.
Chie Nakamura; Manabu Arai; Yuki Hirose; Suzanne Flynn. An extra cue is beneficial for native speakers but can be disruptive for second language learners: Integration of prosody and visual context in syntactic ambiguity resolution. Journal Article. Frontiers in Psychology, 10, pp. 1–14, 2020. doi: 10.3389/fpsyg.2019.02835
Abstract: It has long been debated whether non-native speakers can process sentences in the same way as native speakers do or whether they suffer from a qualitative deficit in their language comprehension ability. The current study examined the influence of prosodic and visual information in processing sentences with a temporarily ambiguous prepositional phrase (“Put the cake on the plate in the basket”) with native English speakers and Japanese learners of English. Specifically, we investigated (1) whether native speakers assign different pragmatic functions to the same prosodic cues used in different contexts and (2) whether L2 learners can reach the correct analysis by integrating prosodic cues with syntax with reference to the visually presented contextual information. The results from native speakers showed that contrastive accents helped to resolve the referential ambiguity when a contrastive pair was present in visual scenes. However, without a contrastive pair in the visual scene, native speakers were slower to reach the correct analysis with the contrastive accent, which supports the view that the pragmatic function of intonation categories is highly context dependent. The results from L2 learners showed that visually presented context alone helped L2 learners to reach the correct analysis. However, L2 learners were unable to assign contrastive meaning to the prosodic cues when there were two potential referents in the visual scene. The results suggest that L2 learners are not capable of integrating multiple sources of information in an interactive manner during real-time language comprehension.
Elisabeth Beyersmann; Signy Wegener; Kate Nation; Ayako Prokupzcuk; Hua-chen Wang; Anne Castles. Learning morphologically complex spoken words: Orthographic expectations of embedded stems are formed prior to print exposure. Journal Article. Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–13, 2020.
Abstract: It is well known that information from spoken language is integrated into reading processes, but the nature of these links and how they are acquired is less well understood. Recent evidence has suggested that predictions about the written form of newly learned spoken words are already generated prior to print exposure. We extend this work to morphologically complex words and ask whether the information that is available in spoken words goes beyond the mappings between phonology and orthography. Adults were taught the oral form of a set of novel morphologically complex words (e.g., “neshing”, “neshed”, “neshes”), with a 2nd set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., nesh), embedded in sentences, and their eye movements were monitored. Half of the stems were allocated a predictable and half an unpredictable spelling. Reading times were shorter for orally trained than untrained stems and for stems with predictable rather than unpredictable spellings. Crucially, there was an interaction between spelling predictability and training. This suggests that orthographic expectations of embedded stems are formed during spoken word learning. Reading aloud and spelling tests complemented the eye movement data, and findings are discussed in the context of theories of reading acquisition.
Anna Kosovicheva; Peter J Bex. What color was it? A psychophysical paradigm for tracking subjective progress in continuous tasks. Journal Article. Perception, 49(1), pp. 21–38, 2020. doi: 10.1177/0301006619886247
Abstract: When making a sequence of fixations, how does the timing of visual experience compare with the timing of fixation onsets? Previous studies have tracked shifts of attention or perceived gaze direction using self-report methods. We used a similar method, a dynamic color technique, to measure subjective timing in continuous tasks involving fixation sequences. Does the time that observers report reading a word coincide with their fixation on it, or is there an asynchrony, and does this relationship depend on the observer's task? Observers read sentences that continuously changed in hue and identified the color of a word at the time that they read it using a color palette. We compared responses with a nonreading condition, where observers reproduced their fixations, but viewed nonword stimuli. Results showed a delay between the color of stimuli at fixation onset and the reported color during perception. For nonword tasks, the delay was constant. However, in the reading task, the delay was larger for earlier compared with later words in the sentence. Our results offer a new method for measuring awareness or subjective progress within fixation sequences, which can be extended to other continuous tasks.
2019
Chun-Ting Hsu; Roy Clariana; Benjamin Schloss; Ping Li. Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study. Journal Article. Scientific Reports, 9, pp. 1–16, 2019. doi: 10.1038/s41598-019-47176-7
Abstract: How do students gain scientific knowledge while reading expository text? This study examines the underlying neurocognitive basis of textual knowledge structure and individual readers' cognitive differences and reading habits, including the influence of text and reader characteristics, on outcomes of scientific text comprehension. By combining fixation-related fMRI and multiband data acquisition, the study is among the first to consider self-paced naturalistic reading inside the MRI scanner. Our results revealed the underlying neurocognitive patterns associated with information integration of different time scales during text reading, and significant individual differences due to the interaction between text characteristics (e.g., optimality of the textual knowledge structure) and reader characteristics (e.g., electronic device use habits). Individual differences impacted the amount of neural resources deployed for multitasking and information integration for constructing the underlying scientific mental models based on the text being read. Our findings have significant implications for understanding science reading in a population that is increasingly dependent on electronic devices.
Dexiang Zhang; Jukka Hyönä; Lei Cui; Zhaoxia Zhu; Shouxin Li. Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study. Journal Article. Learning and Instruction, 64, pp. 1–15, 2019. doi: 10.1016/j.learninstruc.2019.101246
Abstract: Effects of task instructions and topic signaling on text processing among adult readers with different reading styles were studied by eye-tracking. In Experiment 1, readers read two multiple-topic expository texts guided either by a summary or a verification task. In Experiment 2, readers read a text with or without the topic sentences underlined. Four types of readers emerged: topic structure processors (TSPs), fast linear readers (FLRs), slow linear readers (SLRs), and nonselective reviewers (NSRs). TSPs paid ample fixation time on topic sentences regardless of their signaling. FLRs were characterized by fast first-pass reading, little rereading of previous text, and some signs of structure processing. The common feature of SLRs and NSRs was their slow first-pass reading. They differed from each other in that NSRs were characterized by spending ample time also during second-pass reading. They only showed some signs of topic structure processing when cued by task instructions or topic signaling.
Jarkko Hautala; Otto Loberg; Najla Azaiez; Sara Taskinen; Simon P Tiffin-Richards; Paavo H T Leppänen. What information should I look for again? Attentional difficulties distracts reading of task assignments. Journal Article. Learning and Individual Differences, 75, pp. 1–12, 2019. doi: 10.1016/j.lindif.2019.101775
Abstract: This large-scale eye-movement study (N = 164) investigated how students read short task assignments to complete information search problems and how their cognitive resources are associated with this reading behavior. These cognitive resources include information searching subskills, prior knowledge, verbal memory, reading fluency, and attentional difficulties. In this study, the task assignments consisted of four sentences. The first and last sentences provided context, while the second or third sentence was the relevant or irrelevant sentence under investigation. The results of linear mixed-model and latent change score analyses showed the ubiquitous influence of reading fluency on first-pass eye movement measures, and the effects of sentence relevancy on making more and longer reinspections and look-backs to the relevant than irrelevant sentence. In addition, the look-backs to the relevant sentence were associated with better information search subskills. Students with attentional difficulties made substantially fewer look-backs specifically to the relevant sentence. These results provide evidence that selective look-backs are used as an important index of comprehension monitoring independent of reading fluency. In this framework, slow reading fluency was found to be associated with laborious decoding but with intact comprehension monitoring, whereas attention difficulty was associated with intact decoding but with deficiency in comprehension monitoring.
Bob McMurray; Jamie Klein-Packard; Bruce J Tomblin. A real-time mechanism underlying lexical deficits in developmental language disorder: Between-word inhibition. Journal Article. Cognition, 191, pp. 1–13, 2019. doi: 10.1016/j.cognition.2019.06.012
Abstract: Eight to 11% of children have a clinical disorder in oral language (Developmental Language Disorder, DLD). Language deficits in DLD can affect all levels of language and persist through adulthood. Word-level processing may be critical as words link phonology, orthography, syntax and semantics. Thus, a lexical deficit could cascade throughout language. Cognitively, word recognition is a competition process: as the input (e.g., lizard) unfolds, multiple candidates (liver, wizard) compete for recognition. Children with DLD do not fully resolve this competition, but it is unclear what cognitive mechanisms underlie this. We examined lexical inhibition—the ability of more active words to suppress competitors—in 79 adolescents with and without DLD. Participants heard words (e.g., net) in which the onset was manipulated to briefly favor a competitor (neck). This was predicted to inhibit the target, slowing recognition. Word recognition was measured using a task in which participants heard the stimulus and clicked on a picture of the item from an array of competitors, while eye movements were monitored as a measure of how strongly the participant was committed to that interpretation over time. TD listeners showed evidence of inhibition with greater interference for stimuli that briefly activated a competitor word. DLD listeners did not. This suggests deficits in DLD may stem from a failure to engage lexical inhibition. This in turn could have ripple effects throughout the language system. This supports theoretical approaches to DLD that emphasize lexical-level deficits, and deficits in real-time processing.
Michelle Perdomo; Edith Kaan. Prosodic cues in second-language speech processing: A visual world eye-tracking study. Journal Article. Second Language Research, pp. 1–27, 2019. doi: 10.1177/0267658319879196
Abstract: Listeners interpret cues in speech processing immediately rather than waiting until the end of a sentence. In particular, prosodic cues in auditory speech processing can aid listeners in building information structure and contrast sets. Native speakers even use this information in combination with syntactic and semantic information to build mental representations predictively. Research on second-language (L2) learners suggests that learners have difficulty integrating linguistic information across various domains, likely subject to L2 proficiency levels. The current study investigated eye-movement behavior of native speakers of English and Chinese learners of English in their use of contrastive intonational cues to restrict the set of upcoming referents in a visual world paradigm. Both native speakers and learners used contrastive pitch accent to restrict the set of referents. Whereas native speakers anticipated the upcoming set of referents, this was less clear in the L2 learners. This suggests that learners are able to integrate information across multiple domains to build information structure in the L2 but may not do so predictively. Prosodic processing was not affected by proficiency or working memory in the L2 speakers.
Georgia Zellou; Delphine Dahan. Listeners maintain phonological uncertainty over time and across words: The case of vowel nasality in English. Journal Article. Journal of Phonetics, 76, pp. 1–20, 2019. doi: 10.1016/j.wocn.2019.06.001
Abstract: While the fact that phonetic information is evaluated in a non-discrete, probabilistic fashion is well established, there is less consensus regarding how long such encoding is maintained. Here, we examined whether people maintain in memory the amount of vowel nasality present in a word when processing a subsequent word that holds a semantic dependency with the first one. Vowel nasality in English is an acoustic correlate of the oral vs. nasal status of an adjacent consonant, and sometimes it is the only distinguishing phonetic feature (e.g., bet vs. bent). In Experiment 1, we show that people can perceive differences in nasality between two vowels above and beyond differences in the categorization of those vowels. In Experiment 2, we tracked listeners' eye-movements as they heard a sentence that mentioned one of four displayed images (e.g., ‘money’) following a prime word (e.g., ‘bet’) that held a semantic relationship with the target word. Recognition of the target was found to be modulated by the degree of nasality in the first word's vowel: Slightly greater uncertainty regarding the oral status of the post-vocalic consonant in the first word translated into a weaker semantic cue for the identification of the second word. Thus, listeners appear to maintain in memory the degree of vowel nasality they perceived on the first word and bring this information to bear onto the interpretation of a subsequent, semantically-dependent word. Probabilistic cue integration across words that hold semantic coherence, we argue, contributes to achieving robust language comprehension despite the inherent ambiguity of the speech signal.
Elizabeth R Schotter; Anna Marie Fennell. Readers can identify the meanings of words without looking at them: Evidence from regressive eye movements. Journal Article. Psychonomic Bulletin & Review, 26(5), pp. 1697–1704, 2019. doi: 10.3758/s13423-019-01662-1
Abstract: Previewing words prior to fixating them leads to faster reading, but does it lead to word identification (i.e., semantic encoding)? We tested this with a gaze-contingent display change study and a subsequent plausibility manipulation. Both the preview and the target words were plausible when encountered, and we manipulated the end of the sentence so that the different preview was rendered implausible (in critical sentences) or remained plausible (in neutral sentences). Regressive saccades from the end of the sentence increased when the preview was rendered implausible compared to when it was plausible, especially when the preview was high frequency. These data add to a growing body of research suggesting that linguistic information can be obtained during preview, to the point where word meaning is accessed. In addition, these findings suggest that the meaning of the fixated target does not always override the semantic information obtained during preview.
Jessica E Goold; Wonil Choi; John M Henderson Cortical control of eye movements in natural reading: Evidence from MVPA Journal Article Experimental Brain Research, 237 , pp. 3099–3107, 2019. @article{Goold2019, title = {Cortical control of eye movements in natural reading: Evidence from MVPA}, author = {Jessica E Goold and Wonil Choi and John M Henderson}, doi = {10.1007/s00221-019-05655-3}, year = {2019}, date = {2019-09-01}, journal = {Experimental Brain Research}, volume = {237}, pages = {3099--3107}, abstract = {Language comprehension during reading requires fine-grained management of saccadic eye movements. A critical question, therefore, is how the brain controls eye movements in reading. Neural correlates of simple eye movements have been found in multiple cortical regions, but little is known about how this network operates in reading. To investigate this question in the present study, participants were presented with normal text, pseudo-word text, and consonant string text in a magnetic resonance imaging (MRI) scanner with eye tracking. Participants read naturally in the normal text condition and moved their eyes “as if they were reading” in the other conditions. Multi-voxel pattern analysis was used to analyze the fMRI signal in the oculomotor network. We found that activation patterns in a subset of network regions differentiated between stimulus types. These results suggest that the oculomotor network reflects more than simple saccade generation and are consistent with the hypothesis that specific network areas interface with cognitive systems.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Language comprehension during reading requires fine-grained management of saccadic eye movements. A critical question, therefore, is how the brain controls eye movements in reading. Neural correlates of simple eye movements have been found in multiple cortical regions, but little is known about how this network operates in reading. To investigate this question in the present study, participants were presented with normal text, pseudo-word text, and consonant string text in a magnetic resonance imaging (MRI) scanner with eye tracking. Participants read naturally in the normal text condition and moved their eyes “as if they were reading” in the other conditions. Multi-voxel pattern analysis was used to analyze the fMRI signal in the oculomotor network. We found that activation patterns in a subset of network regions differentiated between stimulus types. These results suggest that the oculomotor network reflects more than simple saccade generation and are consistent with the hypothesis that specific network areas interface with cognitive systems. |
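The MVPA logic Goold and colleagues describe can be illustrated compactly. Below is a minimal sketch, not the authors' pipeline: it decodes stimulus type (normal text vs. pseudo-words vs. consonant strings) from simulated multi-voxel patterns in a single region, using leave-one-run-out cross-validation. All sizes, signal strengths, and variable names are illustrative assumptions.

```python
# Illustrative MVPA sketch: decode stimulus type from simulated voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)

n_runs, trials_per_run, n_voxels = 6, 30, 200          # assumed design
labels = np.tile(np.repeat([0, 1, 2], trials_per_run // 3), n_runs)
runs = np.repeat(np.arange(n_runs), trials_per_run)    # run membership per trial

# Simulated data: each stimulus type has a weak but consistent voxel signature
signatures = rng.normal(0.0, 0.5, size=(3, n_voxels))
X = signatures[labels] + rng.normal(0.0, 1.0, size=(len(labels), n_voxels))

# Leave-one-run-out cross-validated classification within the "ROI"
scores = cross_val_score(LinearSVC(max_iter=5000), X, labels,
                         groups=runs, cv=LeaveOneGroupOut())
print("mean decoding accuracy: %.2f (chance = 0.33)" % scores.mean())
```

Above-chance decoding in a region is what licenses the claim that its activation patterns differentiate stimulus types, which is the inference the abstract reports.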
Jesse A Harris; Katy Carlson Correlate not optional: PP sprouting and parallelism in “much less” ellipsis Journal Article Glossa: A Journal of General Linguistics, 4 (1), pp. 1–35, 2019. @article{Harris2019, title = {Correlate not optional: PP sprouting and parallelism in “much less” ellipsis}, author = {Jesse A Harris and Katy Carlson}, doi = {10.5334/gjgl.707}, year = {2019}, date = {2019-07-01}, journal = {Glossa: A Journal of General Linguistics}, volume = {4}, number = {1}, pages = {1--35}, publisher = {Ubiquity Press}, abstract = {Clauses that are parallel in form and meaning show processing advantages in ellipsis and coordination structures (Frazier et al. 1984; Kehler 2000; Carlson 2002). However, the constructions that have been used to show a parallelism advantage do not always require a strong semantic relationship between clauses. We present two eye tracking while reading studies on focus-sensitive coordination structures, an understudied form of ellipsis which requires the generation of a contextually salient semantic relation or scale between conjuncts. However, when the remnant of ellipsis lacks an overt correlate in the matrix clause and must be “sprouted” in the ellipsis site, the relation between clauses is simplified to entailment. Instead of facilitation for sentences with an entailment relation between clauses, our online processing results suggest that violating parallelism is costly, even when doing so could ease the semantic relations required for interpretation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Clauses that are parallel in form and meaning show processing advantages in ellipsis and coordination structures (Frazier et al. 1984; Kehler 2000; Carlson 2002). However, the constructions that have been used to show a parallelism advantage do not always require a strong semantic relationship between clauses. We present two eye tracking while reading studies on focus-sensitive coordination structures, an understudied form of ellipsis which requires the generation of a contextually salient semantic relation or scale between conjuncts. However, when the remnant of ellipsis lacks an overt correlate in the matrix clause and must be “sprouted” in the ellipsis site, the relation between clauses is simplified to entailment. Instead of facilitation for sentences with an entailment relation between clauses, our online processing results suggest that violating parallelism is costly, even when doing so could ease the semantic relations required for interpretation. |
Yipu Wei; Willem M Mak; Jacqueline Evers-Vermeul; Ted J M Sanders Causal connectives as indicators of source information: Evidence from the visual world paradigm Journal Article Acta Psychologica, 198 , pp. 1–13, 2019. @article{Wei2019, title = {Causal connectives as indicators of source information: Evidence from the visual world paradigm}, author = {Yipu Wei and Willem M Mak and Jacqueline Evers-Vermeul and Ted J M Sanders}, doi = {10.1016/J.ACTPSY.2019.102866}, year = {2019}, date = {2019-07-01}, journal = {Acta Psychologica}, volume = {198}, pages = {1--13}, publisher = {North-Holland}, abstract = {Causal relations can be presented as subjective, involving someone's reasoning, or objective, depicting a real-world cause-consequence relation. Subjective relations require longer processing times than objective relations. We hypothesize that the extra time is due to the involvement of a Subject of Consciousness (SoC) in the mental representation of subjective information. To test this hypothesis, we conducted a Visual World Paradigm eye-tracking experiment on Dutch and Chinese connectives that differ in the degree of subjectivity they encode. In both languages, subjective connectives triggered an immediate increase in attention to the SoC, compared to objective connectives. Only when the subjectivity information was not expressed by the connective did modal verbs presented later in the sentence induce an increase in looks at the SoC. This focus on the SoC due to the linguistic cues can be explained as the tracking of the information source in the situation models, which continues throughout the sentence.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Causal relations can be presented as subjective, involving someone's reasoning, or objective, depicting a real-world cause-consequence relation. Subjective relations require longer processing times than objective relations. We hypothesize that the extra time is due to the involvement of a Subject of Consciousness (SoC) in the mental representation of subjective information. To test this hypothesis, we conducted a Visual World Paradigm eye-tracking experiment on Dutch and Chinese connectives that differ in the degree of subjectivity they encode. In both languages, subjective connectives triggered an immediate increase in attention to the SoC, compared to objective connectives. Only when the subjectivity information was not expressed by the connective did modal verbs presented later in the sentence induce an increase in looks at the SoC. This focus on the SoC due to the linguistic cues can be explained as the tracking of the information source in the situation models, which continues throughout the sentence. |
Benjamin T Carter; Steven G Luke The effect of convolving word length, word frequency, function word predictability and first pass reading time in the analysis of a fixation-related fMRI dataset Journal Article Data in Brief, 25 , pp. 1–21, 2019. @article{Carter2019a, title = {The effect of convolving word length, word frequency, function word predictability and first pass reading time in the analysis of a fixation-related fMRI dataset}, author = {Benjamin T Carter and Steven G Luke}, doi = {10.1016/J.DIB.2019.104171}, year = {2019}, date = {2019-07-01}, journal = {Data in Brief}, volume = {25}, pages = {1--21}, publisher = {Elsevier}, abstract = {The data presented in this document was created to explore the effect of including or excluding word length, word frequency, the lexical predictability of function words and first pass reading time (or the duration of the first fixation on a word) as either baseline regressors or duration modulators on the final analysis for a fixation-related fMRI investigation of linguistic processing. The effect of these regressors was a central question raised during the review of Linguistic networks associated with lexical, semantic and syntactic predictability in reading: A fixation-related fMRI study [1]. Three datasets were created and compared to the original dataset to determine their effect. The first examines the effect of adding word length and word frequency as baseline regressors. The second examines the effect of removing first pass reading time as a duration modulator. The third examines the inclusion of function word predictability into the baseline hemodynamic response function. Statistical maps were created for each dataset and compared to the primary dataset (published in [1]) across the linguistic conditions of the initial dataset (lexical predictability, semantic predictability or syntax predictability).}, keywords = {}, pubstate = {published}, tppubtype = {article} } The data presented in this document was created to explore the effect of including or excluding word length, word frequency, the lexical predictability of function words and first pass reading time (or the duration of the first fixation on a word) as either baseline regressors or duration modulators on the final analysis for a fixation-related fMRI investigation of linguistic processing. The effect of these regressors was a central question raised during the review of Linguistic networks associated with lexical, semantic and syntactic predictability in reading: A fixation-related fMRI study [1]. Three datasets were created and compared to the original dataset to determine their effect. The first examines the effect of adding word length and word frequency as baseline regressors. The second examines the effect of removing first pass reading time as a duration modulator. The third examines the inclusion of function word predictability into the baseline hemodynamic response function. Statistical maps were created for each dataset and compared to the primary dataset (published in [1]) across the linguistic conditions of the initial dataset (lexical predictability, semantic predictability or syntax predictability). |
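The modelling choice Carter and Luke probe, entering word-level variables as baseline regressors versus duration modulators, comes down to how event regressors are built before convolution with a haemodynamic response function. The sketch below is a simplified illustration rather than their analysis code: it contrasts an impulse-like regressor with one whose boxcars last each word's first-pass reading time. The HRF parameters and all event timings are assumed for the example.

```python
# Illustrative fixation-related regressor construction (not the authors' pipeline).
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=0.1, length=32.0):
    """Double-gamma HRF sampled every `tr` seconds (SPM-like parameters)."""
    t = np.arange(0.0, length, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # peak minus undershoot
    return hrf / hrf.max()

def build_regressor(onsets, durations, total_time, tr=0.1):
    """Boxcar (or impulse-like) regressor convolved with the canonical HRF."""
    n = int(total_time / tr)
    box = np.zeros(n)
    for onset, dur in zip(onsets, durations):
        i = int(onset / tr)
        j = int((onset + max(dur, tr)) / tr)          # at least one sample wide
        box[i:j] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n]

# Assumed fixation onsets (s) and first-pass reading times (s) for a few words
onsets = np.array([0.50, 0.82, 1.10, 1.55])
first_pass = np.array([0.21, 0.18, 0.35, 0.24])

with_duration = build_regressor(onsets, first_pass, total_time=20.0)
impulse_only = build_regressor(onsets, np.zeros_like(first_pass), total_time=20.0)
```

Comparing model fits with `with_duration` versus `impulse_only` style regressors is, in spirit, the comparison the dataset in this article was built to support.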
Lijing Chen; Kevin B Paterson; Xingshan Li; Lin Li; Yufang Yang Pragmatic influences on sentence integration: Evidence from eye movements Journal Article Quarterly Journal of Experimental Psychology, pp. 1–10, 2019. @article{Chen2019b, title = {Pragmatic influences on sentence integration: Evidence from eye movements}, author = {Lijing Chen and Kevin B Paterson and Xingshan Li and Lin Li and Yufang Yang}, doi = {10.1177/1747021819859829}, year = {2019}, date = {2019-07-01}, journal = {Quarterly Journal of Experimental Psychology}, pages = {1--10}, publisher = {SAGE Publications, London, England}, abstract = {To understand a discourse, readers must rapidly process semantic and syntactic information and extract the pragmatic information these sources imply. An important question concerns how this pragmatic information influences discourse processing in return. We address this issue in two eye movement experiments that investigate the influence of pragmatic inferences on the processing of inter-sentence integration. In Experiments 1a and 1b, participants read two-sentence discourses in Chinese in which the first sentence introduced an event and the second described its consequence, where the sentences were linked using either the causal connective “suoyi” (meaning “so” or “therefore”) or not. The second sentence included a target word that was unmarked or marked using the focus particle “zhiyou” (meaning “only”) in Experiment 1a or “shi” (equivalent to an it-cleft) in Experiment 1b. These particles have the pragmatic function of implying a contrast between a target element and its alternatives. The results showed that while the causal connective facilitated the processing of unmarked words in causal contexts (a connective facilitation effect), this effect was eliminated by the presence of the focus particle. This implies that contrastive information is inferred sufficiently rapidly during reading that it can influence semantic processes involved in sentence integration. Experiment 2 showed that disruption due to conflict between the processing requirements of focus and inter-sentence integration occurred only in causal and not adversative connective contexts, confirming that processing difficulty occurred when a contrastive relationship was not possible.}, keywords = {}, pubstate = {published}, tppubtype = {article} } To understand a discourse, readers must rapidly process semantic and syntactic information and extract the pragmatic information these sources imply. An important question concerns how this pragmatic information influences discourse processing in return. We address this issue in two eye movement experiments that investigate the influence of pragmatic inferences on the processing of inter-sentence integration. In Experiments 1a and 1b, participants read two-sentence discourses in Chinese in which the first sentence introduced an event and the second described its consequence, where the sentences were linked using either the causal connective “suoyi” (meaning “so” or “therefore”) or not. The second sentence included a target word that was unmarked or marked using the focus particle “zhiyou” (meaning “only”) in Experiment 1a or “shi” (equivalent to an it-cleft) in Experiment 1b. These particles have the pragmatic function of implying a contrast between a target element and its alternatives. 
The results showed that while the causal connective facilitated the processing of unmarked words in causal contexts (a connective facilitation effect), this effect was eliminated by the presence of the focus particle. This implies that contrastive information is inferred sufficiently rapidly during reading that it can influence semantic processes involved in sentence integration. Experiment 2 showed that disruption due to conflict between the processing requirements of focus and inter-sentence integration occurred only in causal and not adversative connective contexts, confirming that processing difficulty occurred when a contrastive relationship was not possible. |
Arielle Borovsky; Ryan E Peters Vocabulary size and structure affects real-time lexical recognition in 18-month-olds Journal Article PLOS ONE, 14 (7), pp. 1–21, 2019. @article{Borovsky2019, title = {Vocabulary size and structure affects real-time lexical recognition in 18-month-olds}, author = {Arielle Borovsky and Ryan E Peters}, editor = {Masatoshi Koizumi}, doi = {10.1371/journal.pone.0219290}, year = {2019}, date = {2019-07-01}, journal = {PLOS ONE}, volume = {14}, number = {7}, pages = {1--21}, publisher = {Public Library of Science}, abstract = {The mature lexicon encodes semantic relations between words, and these connections can alternately facilitate and interfere with language processing. We explore the emergence of these processing dynamics in 18-month-olds (N = 79) using a novel approach that calculates individualized semantic structure at multiple granularities in participants' productive vocabularies. Participants completed two interleaved eye-tracked word recognition tasks involving semantically unrelated and related picture contexts, which sought to measure the impact of lexical facilitation and interference on processing, respectively. Semantic structure and vocabulary size differentially impacted processing in each task. Category level structure facilitated word recognition in 18-month-olds with smaller productive vocabularies, while overall lexical connectivity interfered with word recognition for toddlers with relatively larger vocabularies. The results suggest that, while semantic structure at multiple granularities is measurable even in small lexicons, mechanisms of semantic interference and facilitation are driven by the development of structure at different granularities. We consider these findings in light of accounts of adult word recognition that posit that different levels of structure index strong and weak activation from nearby and distant semantic neighbors. We also consider further directions for developmental change in these patterns.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The mature lexicon encodes semantic relations between words, and these connections can alternately facilitate and interfere with language processing. We explore the emergence of these processing dynamics in 18-month-olds (N = 79) using a novel approach that calculates individualized semantic structure at multiple granularities in participants' productive vocabularies. Participants completed two interleaved eye-tracked word recognition tasks involving semantically unrelated and related picture contexts, which sought to measure the impact of lexical facilitation and interference on processing, respectively. Semantic structure and vocabulary size differentially impacted processing in each task. Category level structure facilitated word recognition in 18-month-olds with smaller productive vocabularies, while overall lexical connectivity interfered with word recognition for toddlers with relatively larger vocabularies. The results suggest that, while semantic structure at multiple granularities is measurable even in small lexicons, mechanisms of semantic interference and facilitation are driven by the development of structure at different granularities. We consider these findings in light of accounts of adult word recognition that posit that different levels of structure index strong and weak activation from nearby and distant semantic neighbors. We also consider further directions for developmental change in these patterns. |
Shira Klorfeld-Auslender; Nitzan Censor Visual-oculomotor interactions facilitate consolidation of perceptual learning Journal Article Journal of Vision, 19 (6), pp. 1–10, 2019. @article{Klorfeld-Auslender2019, title = {Visual-oculomotor interactions facilitate consolidation of perceptual learning}, author = {Shira Klorfeld-Auslender and Nitzan Censor}, doi = {10.1167/19.6.11}, year = {2019}, date = {2019-06-01}, journal = {Journal of Vision}, volume = {19}, number = {6}, pages = {1--10}, publisher = {The Association for Research in Vision and Ophthalmology}, abstract = {Visual skill learning is commonly considered a manifestation of brain plasticity. Following encoding, consolidation of the skill may result in between-session performance gains. A great volume of studies have demonstrated that during the offline consolidation interval, the skill is susceptible to external inputs that modify the preformed representation of the memory, affecting future performance. However, while basic visual perceptual learning is thought to be mediated by sensory brain regions or their higher-order readout pathways, the possibility of visual-oculomotor interactions affecting the consolidation interval and reshaping visual learning remains uncharted. Motivated by findings mapping connections between oculomotor behavior and visual performance, we examined whether visual consolidation can be facilitated by visual-oculomotor interactions. To this aim, we paired reactivation of an oculomotor memory with consolidation of a typical visual texture discrimination task. Importantly, the oculomotor memory was encoded by learning of the pure motor component of the movement, removing visual cues. When brief reactivation of the oculomotor memory preceded the visual task, visual gains were substantially enhanced compared with those achieved by visual practice per se and were strongly related to the magnitude of oculomotor gains, suggesting that the brain utilizes oculomotor memory to enhance basic visual perception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual skill learning is commonly considered a manifestation of brain plasticity. Following encoding, consolidation of the skill may result in between-session performance gains. A great volume of studies have demonstrated that during the offline consolidation interval, the skill is susceptible to external inputs that modify the preformed representation of the memory, affecting future performance. However, while basic visual perceptual learning is thought to be mediated by sensory brain regions or their higher-order readout pathways, the possibility of visual-oculomotor interactions affecting the consolidation interval and reshaping visual learning remains uncharted. Motivated by findings mapping connections between oculomotor behavior and visual performance, we examined whether visual consolidation can be facilitated by visual-oculomotor interactions. To this aim, we paired reactivation of an oculomotor memory with consolidation of a typical visual texture discrimination task. Importantly, the oculomotor memory was encoded by learning of the pure motor component of the movement, removing visual cues. When brief reactivation of the oculomotor memory preceded the visual task, visual gains were substantially enhanced compared with those achieved by visual practice per se and were strongly related to the magnitude of oculomotor gains, suggesting that the brain utilizes oculomotor memory to enhance basic visual perception. |
Timothy G Shepard; Fang Hou; Peter J Bex; Luis A Lesmes; Zhong-Lin Lu; Deyue Yu Assessing reading performance in the periphery with a Bayesian adaptive approach: The qReading method Journal Article Journal of Vision, 19 (5), pp. 1–14, 2019. @article{Shepard2019, title = {Assessing reading performance in the periphery with a Bayesian adaptive approach: The qReading method}, author = {Timothy G Shepard and Fang Hou and Peter J Bex and Luis A Lesmes and Zhong-Lin Lu and Deyue Yu}, doi = {10.1167/19.5.5}, year = {2019}, date = {2019-05-01}, journal = {Journal of Vision}, volume = {19}, number = {5}, pages = {1--14}, publisher = {The Association for Research in Vision and Ophthalmology}, abstract = {Reading is a crucial visual activity and a fundamental skill in daily life. Rapid Serial Visual Presentation (RSVP) is a text-presentation paradigm that has been extensively used in the laboratory to study basic characteristics of reading performance. However, measuring reading function (reading speed vs. print size) is time-consuming for RSVP reading using conventional testing procedures. In this study, we develop a novel method, qReading, utilizing the Bayesian adaptive testing framework to measure reading function in the periphery. We perform both a psychophysical experiment and computer simulations to validate the qReading method. In the experiment, words are presented using an RSVP paradigm at 10° in the lower visual field. The reading function obtained from the qReading method with 50 trials exhibits good agreement (i.e., high accuracy) with the reading function obtained from a conventional method (method of constant stimuli [MCS]) with 186 trials (mean root mean square error: 0.12 log10 units). Simulations further confirm that the qReading method provides an unbiased measure. The qReading procedure also demonstrates excellent precision (half width of 68.2% credible interval: 0.02 log10 units with 50 trials) compared to the MCS method (0.03 log10 units with 186 trials). This investigation establishes that the qReading method can adequately measure the reading function in the normal periphery with high accuracy, precision, and efficiency, and is a potentially valuable tool for both research and clinical assessments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Reading is a crucial visual activity and a fundamental skill in daily life. Rapid Serial Visual Presentation (RSVP) is a text-presentation paradigm that has been extensively used in the laboratory to study basic characteristics of reading performance. However, measuring reading function (reading speed vs. print size) is time-consuming for RSVP reading using conventional testing procedures. In this study, we develop a novel method, qReading, utilizing the Bayesian adaptive testing framework to measure reading function in the periphery. We perform both a psychophysical experiment and computer simulations to validate the qReading method. In the experiment, words are presented using an RSVP paradigm at 10° in the lower visual field. The reading function obtained from the qReading method with 50 trials exhibits good agreement (i.e., high accuracy) with the reading function obtained from a conventional method (method of constant stimuli [MCS]) with 186 trials (mean root mean square error: 0.12 log10 units). Simulations further confirm that the qReading method provides an unbiased measure. 
The qReading procedure also demonstrates excellent precision (half width of 68.2% credible interval: 0.02 log10 units with 50 trials) compared to the MCS method (0.03 log10 units with 186 trials). This investigation establishes that the qReading method can adequately measure the reading function in the normal periphery with high accuracy, precision, and efficiency, and is a potentially valuable tool for both research and clinical assessments. |
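The adaptive engine behind methods of this family can be sketched in a few lines. The following is a generic grid-based Bayesian procedure, not the published qReading implementation: it maintains a posterior over a single print-size threshold and, on each trial, presents the size that minimises the expected posterior entropy. The psychometric function, parameter grids, and simulated observer are all illustrative assumptions.

```python
# Generic one-parameter Bayesian adaptive testing loop (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

thresholds = np.linspace(-0.5, 1.0, 151)   # candidate thresholds (log10 print size)
sizes = np.linspace(-0.5, 1.0, 31)         # print sizes available for testing
posterior = np.ones_like(thresholds) / len(thresholds)   # flat prior

def p_correct(size, threshold, slope=8.0, guess=0.04, lapse=0.04):
    """Logistic psychometric function with fixed slope, guess and lapse rates."""
    core = 1.0 / (1.0 + np.exp(-slope * (size - threshold)))
    return guess + (1.0 - guess - lapse) * core

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

true_threshold = 0.2                       # simulated observer
for trial in range(50):
    expected_h = []
    for s in sizes:
        pc = p_correct(s, thresholds)                  # P(correct | each theta)
        p_yes = (posterior * pc).sum()                 # predictive P(correct)
        post_yes = posterior * pc / p_yes              # posterior if correct
        post_no = posterior * (1.0 - pc) / (1.0 - p_yes)
        expected_h.append(p_yes * entropy(post_yes) + (1.0 - p_yes) * entropy(post_no))
    s_next = sizes[int(np.argmin(expected_h))]         # most informative stimulus
    correct = rng.random() < p_correct(s_next, true_threshold)
    likelihood = p_correct(s_next, thresholds) if correct else 1.0 - p_correct(s_next, thresholds)
    posterior *= likelihood
    posterior /= posterior.sum()

print("estimated threshold:", round(float(thresholds[np.argmax(posterior)]), 3))
```

In roughly 50 trials the posterior concentrates near the simulated threshold, which is the efficiency argument such adaptive methods make against the method of constant stimuli.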
Daniel Schmidtke; Victor Kuperman A paradox of apparent brainless behavior: The time-course of compound word recognition Journal Article Cortex, 116 , pp. 250–267, 2019. @article{Schmidtke2019, title = {A paradox of apparent brainless behavior: The time-course of compound word recognition}, author = {Daniel Schmidtke and Victor Kuperman}, doi = {10.1016/j.cortex.2018.07.003}, year = {2019}, date = {2019-01-01}, journal = {Cortex}, volume = {116}, pages = {250--267}, publisher = {Elsevier Ltd}, abstract = {A review of the behavioral and neurophysiological estimates of the time-course of compound word recognition brings to light a paradox whereby temporal activity associated with lexical variables in behavioral studies predates temporal activity of seemingly comparable lexical processing in neuroimaging studies. However, under the assumption that brain activity is a cause of behavior, the earliest reliable behavioral effect of a lexical variable must represent an upper temporal bound for the origin of that effect in the neural record. The present research provides these behavioral bounds for lexical variables involved in compound word processing. We report data from five naturalistic reading studies in which participants read sentences containing English compound words, and apply a distributional technique of survival analysis to resulting eye-movement fixation durations (Reingold & Sheridan, 2014). The results of the survival analysis of the eye-movement record place a majority of the earliest discernible onsets of orthographic, morphological, and semantic effects at less than 200 ms (with a range of 138–269 ms). Our results place constraints on the absolute time-course of effects reported in the neurolinguistic literature, and support theories of complex word recognition which posit early simultaneous access of form and meaning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A review of the behavioral and neurophysiological estimates of the time-course of compound word recognition brings to light a paradox whereby temporal activity associated with lexical variables in behavioral studies predates temporal activity of seemingly comparable lexical processing in neuroimaging studies. However, under the assumption that brain activity is a cause of behavior, the earliest reliable behavioral effect of a lexical variable must represent an upper temporal bound for the origin of that effect in the neural record. The present research provides these behavioral bounds for lexical variables involved in compound word processing. We report data from five naturalistic reading studies in which participants read sentences containing English compound words, and apply a distributional technique of survival analysis to resulting eye-movement fixation durations (Reingold & Sheridan, 2014). The results of the survival analysis of the eye-movement record place a majority of the earliest discernible onsets of orthographic, morphological, and semantic effects at less than 200 ms (with a range of 138–269 ms). Our results place constraints on the absolute time-course of effects reported in the neurolinguistic literature, and support theories of complex word recognition which posit early simultaneous access of form and meaning. |
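The survival-analysis technique cited here (Reingold & Sheridan, 2014) asks when two distributions of fixation durations first reliably diverge. Below is a minimal sketch of that logic, with simulated data and arbitrary threshold choices rather than the authors' materials: compute each condition's survival curve (the proportion of fixations still ongoing at time t), bootstrap the difference, and take the earliest time point whose confidence interval excludes zero.

```python
# Illustrative divergence point analysis on simulated fixation durations.
import numpy as np

rng = np.random.default_rng(0)

def survival(durations, t_grid):
    """S(t): proportion of fixation durations greater than each t."""
    return (np.asarray(durations)[None, :] > t_grid[:, None]).mean(axis=1)

def divergence_point(slow, fast, t_grid, n_boot=1000, alpha=0.05):
    """Earliest t where S_slow(t) - S_fast(t) is reliably positive (bootstrap CI)."""
    diffs = np.empty((n_boot, len(t_grid)))
    for b in range(n_boot):
        s = rng.choice(slow, size=len(slow), replace=True)
        f = rng.choice(fast, size=len(fast), replace=True)
        diffs[b] = survival(s, t_grid) - survival(f, t_grid)
    lower = np.percentile(diffs, 100 * alpha / 2, axis=0)
    reliable = lower > 0
    return t_grid[np.argmax(reliable)] if reliable.any() else None

# Simulated first-fixation durations (ms): the "hard" condition runs ~30 ms longer
easy = rng.gamma(shape=9.0, scale=25.0, size=400)        # mean ~225 ms
hard = rng.gamma(shape=9.0, scale=25.0, size=400) + 30.0

t_grid = np.arange(0, 600, 5)
print("divergence point (ms):", divergence_point(hard, easy, t_grid))
```

It is the divergence point, rather than a mean difference, that supports claims about the earliest onset of a lexical effect, which is the quantity the abstract reports in the 138 to 269 ms range.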
Elizabeth R Schotter; Chuchu Li; Tamar H Gollan What reading aloud reveals about speaking: Regressive saccades implicate a failure to monitor, not inattention, in the prevalence of intrusion errors on function words Journal Article Quarterly Journal of Experimental Psychology, 72 (8), pp. 2032–2045, 2019. @article{Schotter2019a, title = {What reading aloud reveals about speaking: Regressive saccades implicate a failure to monitor, not inattention, in the prevalence of intrusion errors on function words}, author = {Elizabeth R Schotter and Chuchu Li and Tamar H Gollan}, doi = {10.1177/1747021818819480}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {72}, number = {8}, pages = {2032--2045}, abstract = {Bilinguals occasionally produce language intrusion errors (inadvertent translations of the intended word), especially when attempting to produce function word targets, and often when reading aloud mixed-language paragraphs. We investigate whether these errors are due to a failure of attention during speech planning, or failure of monitoring speech output by classifying errors based on whether and when they were corrected, and investigating eye movement behaviour surrounding them. Prior research on this topic has primarily tested alphabetic languages (e.g., Spanish-English bilinguals) in which part of speech is confounded with word length, which is related to word skipping (i.e., decreased attention). Therefore, we tested 29 Chinese-English bilinguals whose languages differ in orthography, visually cueing language membership, and for whom part of speech (in Chinese) is less confounded with word length. Despite the strong orthographic cue, Chinese-English bilinguals produced intrusion errors with similar effects as previously reported (e.g., especially with function word targets written in the dominant language). Gaze durations did differ by whether errors were made and corrected or not, but these patterns were similar for function and content words and therefore cannot explain part of speech effects. However, bilinguals regressed to words produced as errors more often than to correctly produced words, but regressions facilitated correction of errors only for content, not for function words. These data suggest that the vulnerability of function words to language intrusion errors primarily reflects automatic retrieval and failures of speech monitoring mechanisms from stopping function versus content word errors after they are planned for production.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Bilinguals occasionally produce language intrusion errors (inadvertent translations of the intended word), especially when attempting to produce function word targets, and often when reading aloud mixed-language paragraphs. We investigate whether these errors are due to a failure of attention during speech planning, or failure of monitoring speech output by classifying errors based on whether and when they were corrected, and investigating eye movement behaviour surrounding them. Prior research on this topic has primarily tested alphabetic languages (e.g., Spanish-English bilinguals) in which part of speech is confounded with word length, which is related to word skipping (i.e., decreased attention). Therefore, we tested 29 Chinese-English bilinguals whose languages differ in orthography, visually cueing language membership, and for whom part of speech (in Chinese) is less confounded with word length. Despite the strong orthographic cue, Chinese-English bilinguals produced intrusion errors with similar effects as previously reported (e.g., especially with function word targets written in the dominant language). 
Gaze durations did differ by whether errors were made and corrected or not, but these patterns were similar for function and content words and therefore cannot explain part of speech effects. However, bilinguals regressed to words produced as errors more often than to correctly produced words, but regressions facilitated correction of errors only for content, not for function words. These data suggest that the vulnerability of function words to language intrusion errors primarily reflects automatic retrieval and failures of speech monitoring mechanisms from stopping function versus content word errors after they are planned for production. |
Elizabeth R Schotter; Titus von der Malsburg; Mallorie Leinenger Forced fixations, trans-saccadic integration, and word recognition: Evidence for a hybrid mechanism of saccade triggering in reading Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (4), pp. 677–688, 2019. @article{Schotter2019b, title = {Forced fixations, trans-saccadic integration, and word recognition: Evidence for a hybrid mechanism of saccade triggering in reading}, author = {Elizabeth R Schotter and Titus von der Malsburg and Mallorie Leinenger}, doi = {10.1037/xlm0000617}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {4}, pages = {677--688}, abstract = {Recent studies using the gaze-contingent boundary paradigm reported a reversed preview benefit: shorter fixations on a target word when an unrelated preview was easier to process than the fixated target (Schotter & Leinenger, 2016). This is explained via forced fixations: short fixations on words that would ideally be skipped (because lexical processing has progressed enough) but could not be because saccade planning reached a point of no return. This contrasts with accounts of preview effects via trans-saccadic integration: shorter fixations on a target word when the preview is more similar to it (see Cutter, Drieghe, & Liversedge, 2015). In addition, if the previewed word, not the fixated target, determines subsequent eye movements, is it also this word that enters the linguistic processing stream? We tested these accounts by having 24 subjects read 150 sentences in the boundary paradigm in which both the preview and target were initially plausible but later one, both, or neither became implausible, providing an opportunity to probe which one was linguistically encoded. In an intervening buffer region, both words were plausible, providing an opportunity to investigate trans-saccadic integration. The frequency of the previewed word affected progressive saccades (i.e., forced fixations) as well as when trans-saccadic integration failure increased regressions, but only the implausibility of the target word affected semantic encoding. These data support a hybrid account of saccadic control (Reingold, Reichle, Glaholt, & Sheridan, 2012) driven by incomplete (often parafoveal) word recognition, which occurs prior to complete (often foveal) word recognition.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent studies using the gaze-contingent boundary paradigm reported a reversed preview benefit: shorter fixations on a target word when an unrelated preview was easier to process than the fixated target (Schotter & Leinenger, 2016). This is explained via forced fixations: short fixations on words that would ideally be skipped (because lexical processing has progressed enough) but could not be because saccade planning reached a point of no return. This contrasts with accounts of preview effects via trans-saccadic integration: shorter fixations on a target word when the preview is more similar to it (see Cutter, Drieghe, & Liversedge, 2015). In addition, if the previewed word, not the fixated target, determines subsequent eye movements, is it also this word that enters the linguistic processing stream? 
We tested these accounts by having 24 subjects read 150 sentences in the boundary paradigm in which both the preview and target were initially plausible but later one, both, or neither became implausible, providing an opportunity to probe which one was linguistically encoded. In an intervening buffer region, both words were plausible, providing an opportunity to investigate trans-saccadic integration. The frequency of the previewed word affected progressive saccades (i.e., forced fixations) as well as when trans-saccadic integration failure increased regressions, but only the implausibility of the target word affected semantic encoding. These data support a hybrid account of saccadic control (Reingold, Reichle, Glaholt, & Sheridan, 2012) driven by incomplete (often parafoveal) word recognition, which occurs prior to complete (often foveal) word recognition. |
Sarah Schuster; Stefan Hawelka; Nicole Alexandra Himmelstoss; Fabio Richlan; Florian Hutzler The neural correlates of word position and lexical predictability during sentence reading: Evidence from fixation-related fMRI Journal Article Language, Cognition and Neuroscience, pp. 1–12, 2019. @article{Schuster2019, title = {The neural correlates of word position and lexical predictability during sentence reading: Evidence from fixation-related fMRI}, author = {Sarah Schuster and Stefan Hawelka and Nicole Alexandra Himmelstoss and Fabio Richlan and Florian Hutzler}, doi = {10.1080/23273798.2019.1575970}, year = {2019}, date = {2019-01-01}, journal = {Language, Cognition and Neuroscience}, pages = {1--12}, publisher = {Taylor & Francis}, abstract = {By means of combining eye-tracking and fMRI, the present study aimed to investigate aspects of higher linguistic processing during natural reading which were formerly hard to assess with traditional paradigms. Specifically, we investigated the haemodynamic effects of incremental sentence comprehension–as operationalised by word position–and its relation to context-based word-level effects of lexical predictability. We observed that an increasing number of words being processed was associated with an increase in activation in the left posterior middle temporal and angular gyri. At the same time, left occipito-temporal regions showed a decrease in activation with increasing word position. Region of interest (ROI) analyses revealed differential effects of word position and predictability within dissociable parts of the semantic network–showing that it is expedient to consider these effects conjointly.}, keywords = {}, pubstate = {published}, tppubtype = {article} } By means of combining eye-tracking and fMRI, the present study aimed to investigate aspects of higher linguistic processing during natural reading which were formerly hard to assess with traditional paradigms. Specifically, we investigated the haemodynamic effects of incremental sentence comprehension–as operationalised by word position–and its relation to context-based word-level effects of lexical predictability. We observed that an increasing number of words being processed was associated with an increase in activation in the left posterior middle temporal and angular gyri. At the same time, left occipito-temporal regions showed a decrease in activation with increasing word position. Region of interest (ROI) analyses revealed differential effects of word position and predictability within dissociable parts of the semantic network–showing that it is expedient to consider these effects conjointly. |
Zeshu Shao; Jeroen van Paridon; Fenna Poletiek; Antje S Meyer Effects of phrase and word frequencies in noun phrase production Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (1), pp. 147–165, 2019. @article{Shao2019, title = {Effects of phrase and word frequencies in noun phrase production}, author = {Zeshu Shao and Jeroen van Paridon and Fenna Poletiek and Antje S Meyer}, doi = {10.1037/xlm0000570}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {1}, pages = {147--165}, abstract = {There is mounting evidence that the ease of producing and understanding language depends not only on the frequencies of individual words but also on the frequencies of word combinations. However, in two picture description experiments, Janssen and Barber (2012) found that French and Spanish speakers' speech onset latencies for short phrases depended exclusively on the frequencies of the phrases but not on the frequencies of the individual words. They suggested that speakers retrieved phrase-sized units from the mental lexicon. In the present study, we examined whether the time required to plan complex noun phrases in Dutch would likewise depend only on phrase frequencies. Participants described line drawings in phrases such as rode schoen [red shoe] (Experiments 1 and 2) or de rode schoen [the red shoe] (Experiment 3). Replicating Janssen and Barber's findings, utterance onset latencies depended on the frequencies of the phrases but, deviating from their findings, also depended on the frequencies of the adjectives in adjective-noun phrases and the frequencies of the nouns in determiner-adjective-noun phrases. We conclude that individual word frequencies and phrase frequencies both affect the time needed to produce noun phrases and discuss how these findings may be captured in models of the mental lexicon and of phrase production.}, keywords = {}, pubstate = {published}, tppubtype = {article} } There is mounting evidence that the ease of producing and understanding language depends not only on the frequencies of individual words but also on the frequencies of word combinations. However, in two picture description experiments, Janssen and Barber (2012) found that French and Spanish speakers' speech onset latencies for short phrases depended exclusively on the frequencies of the phrases but not on the frequencies of the individual words. They suggested that speakers retrieved phrase-sized units from the mental lexicon. In the present study, we examined whether the time required to plan complex noun phrases in Dutch would likewise depend only on phrase frequencies. Participants described line drawings in phrases such as rode schoen [red shoe] (Experiments 1 and 2) or de rode schoen [the red shoe] (Experiment 3). Replicating Janssen and Barber's findings, utterance onset latencies depended on the frequencies of the phrases but, deviating from their findings, also depended on the frequencies of the adjectives in adjective-noun phrases and the frequencies of the nouns in determiner-adjective-noun phrases. We conclude that individual word frequencies and phrase frequencies both affect the time needed to produce noun phrases and discuss how these findings may be captured in models of the mental lexicon and of phrase production. |
Signy Sheldon; Kelly Cool; Nadim El-Asmar The processes involved in mentally constructing event- and scene-based autobiographical representations Journal Article Journal of Cognitive Psychology, 31 , pp. 261–275, 2019. @article{Sheldon2019, title = {The processes involved in mentally constructing event- and scene-based autobiographical representations}, author = {Signy Sheldon and Kelly Cool and Nadim El-Asmar}, doi = {10.1080/20445911.2019.1614004}, year = {2019}, date = {2019-01-01}, journal = {Journal of Cognitive Psychology}, volume = {31}, pages = {261--275}, publisher = {Taylor & Francis}, abstract = {Autobiographical experiences can be mentally constructed as generalised events or as spatial scenes. We investigated the commonalities and distinctions in using episodic and visual imagery processes to imagine autobiographical scenarios as events or scenes. Participants described personal scenarios framed as future events or spatial scenes. We analyzed the number and type of episodic details within the descriptions. To measure imagery processing, we monitored eye-movements and examined the impact of viewing an imagery-disrupting stimulus (Dynamic Visual Noise; DVN) when these descriptions were made. We found that events were described with more generalised details and scenes with more perceptual details. DVN reduced the number of episodic details generated for all descriptions and eye fixation rates negatively correlated with the number of these details that were generated. This suggests that different content is used to imagine event- or scene-based experiences and imagery contributes similarly to the episodic specificity of these imaginations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Autobiographical experiences can be mentally constructed as generalised events or as spatial scenes. We investigated the commonalities and distinctions in using episodic and visual imagery processes to imagine autobiographical scenarios as events or scenes. Participants described personal scenarios framed as future events or spatial scenes. We analyzed the number and type of episodic details within the descriptions. To measure imagery processing, we monitored eye-movements and examined the impact of viewing an imagery-disrupting stimulus (Dynamic Visual Noise; DVN) when these descriptions were made. We found that events were described with more generalised details and scenes with more perceptual details. DVN reduced the number of episodic details generated for all descriptions and eye fixation rates negatively correlated with the number of these details that were generated. This suggests that different content is used to imagine event- or scene-based experiences and imagery contributes similarly to the episodic specificity of these imaginations. |
Anthony Shook; Viorica Marian Covert co-activation of bilinguals' non-target language: Phonological competition from translations Journal Article Linguistic Approaches to Bilingualism, 9 (2), pp. 228–252, 2019. @article{Shook2019, title = {Covert co-activation of bilinguals' non-target language: Phonological competition from translations}, author = {Anthony Shook and Viorica Marian}, doi = {10.1075/lab.17022.sho}, year = {2019}, date = {2019-01-01}, journal = {Linguistic Approaches to Bilingualism}, volume = {9}, number = {2}, pages = {228--252}, abstract = {When listening to spoken language, bilinguals access words in both of their languages at the same time; this co-activation is often driven by phonological input mapping to candidates in multiple languages during online comprehension. Here, we examined whether cross-linguistic activation could occur covertly when the input does not overtly cue words in the non-target language. When asked in English to click an image of a duck, English-Spanish bilinguals looked more to an image of a shovel than to unrelated distractors, because the Spanish translations of the words duck and shovel (pato and pala , respectively) overlap phonologically in the non-target language. Our results suggest that bilinguals access their unused language, even in the absence of phonologically overlapping input. We conclude that during bilingual speech comprehension, words presented in a single language activate translation equivalents, with further spreading activation to unheard phonological competitors. These findings support highly interactive theories of language processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When listening to spoken language, bilinguals access words in both of their languages at the same time; this co-activation is often driven by phonological input mapping to candidates in multiple languages during online comprehension. Here, we examined whether cross-linguistic activation could occur covertly when the input does not overtly cue words in the non-target language. When asked in English to click an image of a duck, English-Spanish bilinguals looked more to an image of a shovel than to unrelated distractors, because the Spanish translations of the words duck and shovel (pato and pala , respectively) overlap phonologically in the non-target language. Our results suggest that bilinguals access their unused language, even in the absence of phonologically overlapping input. We conclude that during bilingual speech comprehension, words presented in a single language activate translation equivalents, with further spreading activation to unheard phonological competitors. These findings support highly interactive theories of language processing. |
Matthias J Sjerps; Caitlin Decuyper; Antje S Meyer Initiation of utterance planning in response to pre-recorded and “live” utterances Journal Article Quarterly Journal of Experimental Psychology, pp. 1–18, 2019. @article{Sjerps2019, title = {Initiation of utterance planning in response to pre-recorded and “live” utterances}, author = {Matthias J Sjerps and Caitlin Decuyper and Antje S Meyer}, doi = {10.1177/1747021819881265}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, pages = {1--18}, abstract = {In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants' timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants' speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants' timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants' speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings. |
Timothy J Slattery; Adam J Parker Return sweeps in reading: Processing implications of undersweep-fixations Journal Article Psychonomic Bulletin & Review, 26 , pp. 1948–1957, 2019. @article{Slattery2019, title = {Return sweeps in reading: Processing implications of undersweep-fixations}, author = {Timothy J Slattery and Adam J Parker}, doi = {10.3758/s13423-019-01636-3}, year = {2019}, date = {2019-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {26}, pages = {1948--1957}, publisher = {Psychonomic Bulletin & Review}, abstract = {Models of eye-movement control during reading focus on reading single lines of text. However, with multiline texts, return sweeps, which bring fixation from the end of one line to the beginning of the next, occur regularly and influence ~20% of all reading fixations. Our understanding of return sweeps is still limited. One common feature of return sweeps is the prevalence of oculomotor errors. Return sweeps often initially undershoot the start of the line. Corrective saccades then bring fixation closer to the line start. The fixation occurring between the undershoot and the corrective saccade (undersweep-fixation) has important theoretical implications for the serial nature of lexical processing during reading, as they occur on words ahead of the intended attentional target. Furthermore, since the attentional target of a return sweep will lie far outside the parafovea during the prior fixation, it cannot be lexically preprocessed during this prior fixation. We explore the implications of undersweep-fixations for ongoing processing and models of eye movements during reading by analysing two existing eye-movement data sets of multiline reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Models of eye-movement control during reading focus on reading single lines of text. However, with multiline texts, return sweeps, which bring fixation from the end of one line to the beginning of the next, occur regularly and influence ~20% of all reading fixations. Our understanding of return sweeps is still limited. One common feature of return sweeps is the prevalence of oculomotor errors. Return sweeps often initially undershoot the start of the line. Corrective saccades then bring fixation closer to the line start. The fixation occurring between the undershoot and the corrective saccade (undersweep-fixation) has important theoretical implications for the serial nature of lexical processing during reading, as they occur on words ahead of the intended attentional target. Furthermore, since the attentional target of a return sweep will lie far outside the parafovea during the prior fixation, it cannot be lexically preprocessed during this prior fixation. We explore the implications of undersweep-fixations for ongoing processing and models of eye movements during reading by analysing two existing eye-movement data sets of multiline reading. |
Timothy J Slattery; Martin R Vasilev An eye-movement exploration into return-sweep targeting during reading Journal Article Attention, Perception, & Psychophysics, 81 (5), pp. 1197–1203, 2019. @article{Slattery2019a, title = {An eye-movement exploration into return-sweep targeting during reading}, author = {Timothy J Slattery and Martin R Vasilev}, doi = {10.3758/s13414-019-01742-3}, year = {2019}, date = {2019-01-01}, journal = {Attention, Perception, & Psychophysics}, volume = {81}, number = {5}, pages = {1197--1203}, publisher = {Attention, Perception, & Psychophysics}, abstract = {Return-sweeps are essential eye movements that take the readers' eyes from the end of one line of text to the start of the next. While return-sweeps are common during normal reading, the eye-movement literature is dominated by single-line reading studies where no return-sweeps are needed. The present experiment was designed to explore what readers are targeting with their return-sweeps. Participants read two short stories by L. Frank Baum while their eye-movements were being recorded. In one story, every line-initial word was highlighted by formatting it in bold, while the other story was presented normally (i.e., without any bolding). The bolding manipulation significantly reduced oculomotor error associated with return-sweeps, as these saccades landed closer to the left margin and were less likely to require corrective saccades compared to the control condition. However, despite this reduction in oculomotor error, the bolding had no influence on local fixation durations or global reading-time measures. Moreover, return-sweep landing sites were not impacted by line-initial word length nor did the effect of bolding interact with the length of the line-initial word, suggesting that readers were not targeting the centre of line-initial words. We discuss the implications of these findings for return-sweep targeting and eye-movement control during reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Return-sweeps are essential eye movements that take the readers' eyes from the end of one line of text to the start of the next. While return-sweeps are common during normal reading, the eye-movement literature is dominated by single-line reading studies where no return-sweeps are needed. The present experiment was designed to explore what readers are targeting with their return-sweeps. Participants read two short stories by L. Frank Baum while their eye-movements were being recorded. In one story, every line-initial word was highlighted by formatting it in bold, while the other story was presented normally (i.e., without any bolding). The bolding manipulation significantly reduced oculomotor error associated with return-sweeps, as these saccades landed closer to the left margin and were less likely to require corrective saccades compared to the control condition. However, despite this reduction in oculomotor error, the bolding had no influence on local fixation durations or global reading-time measures. Moreover, return-sweep landing sites were not impacted by line-initial word length nor did the effect of bolding interact with the length of the line-initial word, suggesting that readers were not targeting the centre of line-initial words. We discuss the implications of these findings for return-sweep targeting and eye-movement control during reading. |
Adrian Staub; Sophia Dodge; Andrew L Cohen Failure to detect function word repetitions and omissions in reading: Are eye movements to blame? Journal Article Psychonomic Bulletin & Review, 26 , pp. 340–346, 2019. @article{Staub2019a, title = {Failure to detect function word repetitions and omissions in reading: Are eye movements to blame?}, author = {Adrian Staub and Sophia Dodge and Andrew L Cohen}, doi = {10.3758/s13423-018-1492-z}, year = {2019}, date = {2019-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {26}, pages = {340--346}, publisher = {Psychonomic Bulletin & Review}, abstract = {We tested whether failure to notice repetitions of function words during reading (e.g., Amanda jumped off the the swing and landed on her feet.) is due to the eyes' tendency to skip one of the instances of the word. Eye movements were recorded during reading of sentences with repetitions of the word the or repetitions of a noun, after which readers were asked whether an error was present. A repeated the was detected on 46% of trials overall. On trials on which both instances of the were fixated, detection was still only 66%. A repeated noun was detected on 90% of trials, with no significant effect of eye movement patterns. Detecting an omitted the also proved difficult, with eye movement patterns having only a small effect. Readers frequently overlook function word errors even when their eye movements provide maximal opportunity for noticing such errors, but they notice content word repetitions regardless of eye movement patterns. We propose that readers overlook function word errors because they attribute the apparent error to noise in the eye movement control system.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We tested whether failure to notice repetitions of function words during reading (e.g., Amanda jumped off the the swing and landed on her feet.) is due to the eyes' tendency to skip one of the instances of the word. Eye movements were recorded during reading of sentences with repetitions of the word the or repetitions of a noun, after which readers were asked whether an error was present. A repeated the was detected on 46% of trials overall. On trials on which both instances of the were fixated, detection was still only 66%. A repeated noun was detected on 90% of trials, with no significant effect of eye movement patterns. Detecting an omitted the also proved difficult, with eye movement patterns having only a small effect. Readers frequently overlook function word errors even when their eye movements provide maximal opportunity for noticing such errors, but they notice content word repetitions regardless of eye movement patterns. We propose that readers overlook function word errors because they attribute the apparent error to noise in the eye movement control system. |
Adrian Staub; Kirk Goddard The role of preview validity in predictability and frequency effects on eye movements in reading Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (1), pp. 110–127, 2019. @article{Staub2019b, title = {The role of preview validity in predictability and frequency effects on eye movements in reading}, author = {Adrian Staub and Kirk Goddard}, doi = {10.1037/xlm0000561}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {1}, pages = {110--127}, abstract = {A word's predictability, as measured by its cloze probability, has a robust influence on the time a reader's eyes spend on the word, with more predictable words receiving shorter fixations. However, several previous studies using the boundary paradigm have found no apparent effect of predictability on early reading time measures when the reader does not have valid parafoveal preview of the target word. The present study directly assesses this pattern in two experiments, demonstrating evidence for a null effect of predictability on first fixation and gaze duration with invalid preview, supported by Bayes factor analyses. While the effect of context-independent word frequency is shown to survive with invalid preview, consistent with previous studies, the effect of predictability is eliminated with both unrelated word previews and random letter string previews. These results suggest that a word's predictability influences early stages of orthographic processing, and does so only when perceptual evidence is equivocal, as is the case when the word is initially viewed in parafoveal vision. Word frequency may influence not only early orthographic processing, but also later processing stages.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A word's predictability, as measured by its cloze probability, has a robust influence on the time a reader's eyes spend on the word, with more predictable words receiving shorter fixations. However, several previous studies using the boundary paradigm have found no apparent effect of predictability on early reading time measures when the reader does not have valid parafoveal preview of the target word. The present study directly assesses this pattern in two experiments, demonstrating evidence for a null effect of predictability on first fixation and gaze duration with invalid preview, supported by Bayes factor analyses. While the effect of context-independent word frequency is shown to survive with invalid preview, consistent with previous studies, the effect of predictability is eliminated with both unrelated word previews and random letter string previews. These results suggest that a word's predictability influences early stages of orthographic processing, and does so only when perceptual evidence is equivocal, as is the case when the word is initially viewed in parafoveal vision. Word frequency may influence not only early orthographic processing, but also later processing stages. |
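A note on the statistic this abstract leans on: a Bayes factor quantifies evidence for the null (here, no predictability effect under invalid preview) as a ratio of marginal likelihoods. The expression below is the standard textbook definition, not the article's specific model, whose priors are not reported here:

\[ \mathrm{BF}_{01} = \frac{p(D \mid H_0)}{p(D \mid H_1)} \]

where \(D\) denotes the fixation-duration data, \(H_0\) the hypothesis of no predictability effect, and \(H_1\) the alternative. By Jeffreys' convention, \(\mathrm{BF}_{01} > 3\) is read as substantial evidence for the null.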
Marianna Stella; Paul E Engelhardt Syntactic ambiguity resolution in dyslexia: An examination of cognitive factors underlying eye movement differences and comprehension failures Journal Article Dyslexia, 25 (2), pp. 115–141, 2019. @article{Stella2019, title = {Syntactic ambiguity resolution in dyslexia: An examination of cognitive factors underlying eye movement differences and comprehension failures}, author = {Marianna Stella and Paul E Engelhardt}, doi = {10.1002/dys.1613}, year = {2019}, date = {2019-01-01}, journal = {Dyslexia}, volume = {25}, number = {2}, pages = {115--141}, abstract = {This study examined eye movements and comprehension of temporary syntactic ambiguities in individuals with dyslexia, as few studies have focused on sentence-level comprehension in dyslexia. We tested 50 participants with dyslexia and 50 typically developing controls, in order to investigate (a) whether dyslexics have difficulty revising temporary syntactic misinterpretations and (b) underlying cognitive factors (i.e., working memory and processing speed) associated with eye movement differences and comprehension failures. In the sentence comprehension task, participants read subordinate-main structures that were either ambiguous or unambiguous, and we also manipulated the type of verb contained in the subordinate clause (i.e., reflexive or optionally transitive). Results showed a main effect of group on comprehension, in which individuals with dyslexia showed poorer comprehension than typically developing readers. In addition, participants with dyslexia showed longer total reading times on the disambiguating region of syntactically ambiguous sentences. With respect to cognitive factors, working memory was more associated with group differences than was processing speed. Conclusions focus on sentence-level syntactic processing issues in dyslexia (a previously under-researched area) and the relationship between online and offline measures of syntactic ambiguity resolution.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study examined eye movements and comprehension of temporary syntactic ambiguities in individuals with dyslexia, as few studies have focused on sentence-level comprehension in dyslexia. We tested 50 participants with dyslexia and 50 typically developing controls, in order to investigate (a) whether dyslexics have difficulty revising temporary syntactic misinterpretations and (b) underlying cognitive factors (i.e., working memory and processing speed) associated with eye movement differences and comprehension failures. In the sentence comprehension task, participants read subordinate-main structures that were either ambiguous or unambiguous, and we also manipulated the type of verb contained in the subordinate clause (i.e., reflexive or optionally transitive). Results showed a main effect of group on comprehension, in which individuals with dyslexia showed poorer comprehension than typically developing readers. In addition, participants with dyslexia showed longer total reading times on the disambiguating region of syntactically ambiguous sentences. With respect to cognitive factors, working memory was more associated with group differences than was processing speed. Conclusions focus on sentence-level syntactic processing issues in dyslexia (a previously under-researched area) and the relationship between online and offline measures of syntactic ambiguity resolution. |
Anastasia Stoops; Kiel Christianson Parafoveal processing of inflectional morphology in Russian: A within-word boundary-change paradigm Journal Article Vision Research, 158 , pp. 1–10, 2019. @article{Stoops2019, title = {Parafoveal processing of inflectional morphology in Russian: A within-word boundary-change paradigm}, author = {Anastasia Stoops and Kiel Christianson}, doi = {10.1016/j.visres.2019.01.012}, year = {2019}, date = {2019-01-01}, journal = {Vision Research}, volume = {158}, pages = {1--10}, publisher = {Elsevier}, abstract = {The present study examined whether the inflectional morphology on Russian nouns is processed parafoveally in words longer than five characters while the eyes are fixated on the word. A modified boundary-change paradigm was used to examine parafoveal processing of nominal case markings within a currently fixated word n. The results revealed an identical-preview benefit for both first- and second-pass measures on the post-boundary and whole-word regions. A morphologically related preview benefit (vs. nonword previews) was observed for first- and second-pass measures on the pre-boundary, post-boundary, and whole-word regions. Additionally, the morphologically related preview elicited a cost (vs. identical previews) for first-pass measures on the post-boundary region, total time on the whole word, and regressions into the pre-boundary region. The contribution of the study is two-fold. First, this is the first study to use within-word boundary changes to study the parafoveal processing of inflectional morphology in Russian. Second, we provide additional evidence that inflectional morphology can be integrated parafoveally while reading a language with linear concatenative morphology.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study examined whether the inflectional morphology on Russian nouns is processed parafoveally in words longer than five characters while the eyes are fixated on the word. A modified boundary-change paradigm was used to examine parafoveal processing of nominal case markings within a currently fixated word n. The results revealed an identical-preview benefit for both first- and second-pass measures on the post-boundary and whole-word regions. A morphologically related preview benefit (vs. nonword previews) was observed for first- and second-pass measures on the pre-boundary, post-boundary, and whole-word regions. Additionally, the morphologically related preview elicited a cost (vs. identical previews) for first-pass measures on the post-boundary region, total time on the whole word, and regressions into the pre-boundary region. The contribution of the study is two-fold. First, this is the first study to use within-word boundary changes to study the parafoveal processing of inflectional morphology in Russian. Second, we provide additional evidence that inflectional morphology can be integrated parafoveally while reading a language with linear concatenative morphology. |
Hideko Teruya; Vsevolod Kapatsinski Deciding to look: Revisiting the linking hypothesis for spoken word recognition in the visual world Journal Article Language, Cognition and Neuroscience, 34 (7), pp. 861–880, 2019. @article{Teruya2019, title = {Deciding to look: Revisiting the linking hypothesis for spoken word recognition in the visual world}, author = {Hideko Teruya and Vsevolod Kapatsinski}, doi = {10.1080/23273798.2019.1588338}, year = {2019}, date = {2019-01-01}, journal = {Language, Cognition and Neuroscience}, volume = {34}, number = {7}, pages = {861--880}, publisher = {Taylor & Francis}, abstract = {The visual world paradigm (VWP) studies of spoken word recognition rely on a linking hypothesis that connects lexical activation to the probability of looking at the referent of a word. The standard hypothesis is that fixation probabilities track activation levels transformed via the Luce Choice Rule. Under this assumption, given enough power, any difference between positive activations should be detectable using VWP. We argue that looking at a referent of a word is a decision, made when the word's activation exceeds a context-specific threshold. Subthreshold activations do not drive saccades, and differences among such activations are undetectable in VWP. Evidence is provided by VWP experiments on Japanese. Bayesian analyses indicate a relatively high threshold: saccades to cohort competitors do not exceed those to unrelated distractors unless the cohort competitor shares the initial CVC with the target. We argue that threshold setting constitutes an understudied source of variability in VWP data.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The visual world paradigm (VWP) studies of spoken word recognition rely on a linking hypothesis that connects lexical activation to the probability of looking at the referent of a word. The standard hypothesis is that fixation probabilities track activation levels transformed via the Luce Choice Rule. Under this assumption, given enough power, any difference between positive activations should be detectable using VWP. We argue that looking at a referent of a word is a decision, made when the word's activation exceeds a context-specific threshold. Subthreshold activations do not drive saccades, and differences among such activations are undetectable in VWP. Evidence is provided by VWP experiments on Japanese. Bayesian analyses indicate a relatively high threshold: saccades to cohort competitors do not exceed those to unrelated distractors unless the cohort competitor shares the initial CVC with the target. We argue that threshold setting constitutes an understudied source of variability in VWP data. |
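For context on the linking hypothesis this abstract revisits: under the standard account, the probability of fixating a referent tracks lexical activation via the Luce Choice Rule, whereas the authors argue that looks reflect a thresholded decision. The formulations below are a sketch; the first is the standard textbook rule, and the second is one hypothetical way of writing the thresholded alternative described in the abstract (the symbol \(\theta\) for the context-specific threshold is not from the article):

\[ P(\text{fixate } i) = \frac{a_i}{\sum_j a_j} \qquad \text{vs.} \qquad P(\text{fixate } i) \propto \begin{cases} a_i & \text{if } a_i > \theta \\ 0 & \text{otherwise} \end{cases} \]

where \(a_i\) is the activation of lexical candidate \(i\). Under the first rule, any difference between positive activations should eventually surface in fixation proportions; under the second, subthreshold activations never drive saccades, so differences among them are undetectable in the paradigm.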
Debra Titone; Kyle Lovseth; Kristina Kasparian; Mehrgol Tiv Are figurative interpretations of idioms directly retrieved, compositionally built, or both? Evidence from eye movement measures of reading Journal Article Canadian Journal of Experimental Psychology, 73 (4), pp. 216–230, 2019. @article{Titone2019, title = {Are figurative interpretations of idioms directly retrieved, compositionally built, or both? Evidence from eye movement measures of reading}, author = {Debra Titone and Kyle Lovseth and Kristina Kasparian and Mehrgol Tiv}, doi = {10.1037/cep0000175}, year = {2019}, date = {2019-01-01}, journal = {Canadian Journal of Experimental Psychology}, volume = {73}, number = {4}, pages = {216--230}, abstract = {Idioms are part of a general class of multiword expressions where the overall interpretation cannot be fully determined through a simple syntactic and semantic (i.e., compositional) analysis of their component words (e.g., kick the bucket, save your skin). Idioms are thus simultaneously amenable to direct retrieval from memory, and to an on-demand compositional analysis, yet it is unclear which processes lead to figurative interpretations of idioms during comprehension. In this eye-tracking study, healthy adults read sentences in their native language that contained idioms, which were followed by figurative- or literal-biased disambiguating sentential information. The results showed that the earliest stages of comprehension are driven by direct retrieval of idiomatic forms; however, later stages of comprehension, after which point the intended meaning of an idiom is known, are driven by both direct retrieval and compositional processing. Of note, at later stages, increased idiom decomposability slowed reading time, suggesting more effortful figurative comprehension. Together, these results are most consistent with multidetermined or hybrid models of idiom processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Idioms are part of a general class of multiword expressions where the overall interpretation cannot be fully determined through a simple syntactic and semantic (i.e., compositional) analysis of their component words (e.g., kick the bucket, save your skin). Idioms are thus simultaneously amenable to direct retrieval from memory, and to an on-demand compositional analysis, yet it is unclear which processes lead to figurative interpretations of idioms during comprehension. In this eye-tracking study, healthy adults read sentences in their native language that contained idioms, which were followed by figurative- or literal-biased disambiguating sentential information. The results showed that the earliest stages of comprehension are driven by direct retrieval of idiomatic forms; however, later stages of comprehension, after which point the intended meaning of an idiom is known, are driven by both direct retrieval and compositional processing. Of note, at later stages, increased idiom decomposability slowed reading time, suggesting more effortful figurative comprehension. Together, these results are most consistent with multidetermined or hybrid models of idiom processing. |
Mehrgol Tiv; Laura Gonnerman; Veronica Whitford; Deanna Friesen; Debra Jared; Debra Titone Figuring out how verb-particle constructions are understood during L1 and L2 reading Journal Article Frontiers in Psychology, 10 (JULY), pp. 1–18, 2019. @article{Tiv2019, title = {Figuring out how verb-particle constructions are understood during L1 and L2 reading}, author = {Mehrgol Tiv and Laura Gonnerman and Veronica Whitford and Deanna Friesen and Debra Jared and Debra Titone}, doi = {10.3389/fpsyg.2019.01733}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, number = {JULY}, pages = {1--18}, abstract = {The aim of this paper was to investigate first-language (L1) and second-language (L2) reading of verb particle constructions (VPCs) among English-French bilingual adults. VPCs, or phrasal verbs, are highly common collocations of a verb paired with a particle, such as eat up or chew out, that often convey a figurative meaning. VPCs vary in form (eat up the candy vs. eat the candy up) and in other factors, such as the semantic contribution of the constituent words to the overall meaning (semantic transparency) and form frequency. Much like classic forms of idioms, VPCs are difficult for L2 users. Here, we present two experiments that use eye-tracking to discover factors that influence the ease with which VPCs are processed by bilingual readers. In Experiment 1, we compared L1 reading of adjacent vs. split VPCs, and then explored whether the general pattern was driven by item-level factors. L1 readers did not generally find adjacent VPCs (eat up the candy) easier to process than split VPCs (eat the candy up); however, VPCs low in co-occurrence strength (i.e., low semantic transparency) and high in frequency were easiest to process in the adjacent form during first pass reading. In Experiment 2, we compared L2 reading of adjacent vs split VPCs, and then explored whether the general pattern varied with item-level or participant-level factors. L2 readers generally allotted more second pass reading time to split vs. adjacent forms, and there was some evidence that this pattern was greater for L2 English readers who had less English experience. In contrast with L1 reading, there was no influence of item differences on L2 reading behavior. These data suggest that L1 readers often have lexicalized VPC representations that are directly retrieved during comprehension, whereas L2 readers are more likely to compositionally process VPCs given their more general preference for adjacent particles, as demonstrated by longer second pass reading time for all split items.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of this paper was to investigate first-language (L1) and second-language (L2) reading of verb particle constructions (VPCs) among English-French bilingual adults. VPCs, or phrasal verbs, are highly common collocations of a verb paired with a particle, such as eat up or chew out, that often convey a figurative meaning. VPCs vary in form (eat up the candy vs. eat the candy up) and in other factors, such as the semantic contribution of the constituent words to the overall meaning (semantic transparency) and form frequency. Much like classic forms of idioms, VPCs are difficult for L2 users. Here, we present two experiments that use eye-tracking to discover factors that influence the ease with which VPCs are processed by bilingual readers. In Experiment 1, we compared L1 reading of adjacent vs. 
split VPCs, and then explored whether the general pattern was driven by item-level factors. L1 readers did not generally find adjacent VPCs (eat up the candy) easier to process than split VPCs (eat the candy up); however, VPCs low in co-occurrence strength (i.e., low semantic transparency) and high in frequency were easiest to process in the adjacent form during first pass reading. In Experiment 2, we compared L2 reading of adjacent vs split VPCs, and then explored whether the general pattern varied with item-level or participant-level factors. L2 readers generally allotted more second pass reading time to split vs. adjacent forms, and there was some evidence that this pattern was greater for L2 English readers who had less English experience. In contrast with L1 reading, there was no influence of item differences on L2 reading behavior. These data suggest that L1 readers often have lexicalized VPC representations that are directly retrieved during comprehension, whereas L2 readers are more likely to compositionally process VPCs given their more general preference for adjacent particles, as demonstrated by longer second pass reading time for all split items. |
Wilhelmiina Toivo; Christoph Scheepers Pupillary responses to affective words in bilinguals' first versus second language Journal Article PLoS ONE, 14 (4), pp. 1–20, 2019. @article{Toivo2019, title = {Pupillary responses to affective words in bilinguals' first versus second language}, author = {Wilhelmiina Toivo and Christoph Scheepers}, doi = {10.1371/journal.pone.0210450}, year = {2019}, date = {2019-01-01}, journal = {PLoS ONE}, volume = {14}, number = {4}, pages = {1--20}, abstract = {Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness, both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants' first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants' second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness, both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants' first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants' second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence. |
Xiuhong Tong; Wei Shen; Zhao Li; Mengdi Xu; Liping Pan; Xiuli Tong Phonological, not semantic, activation dominates Chinese character recognition: Evidence from a visual world eye-tracking study Journal Article Quarterly Journal of Experimental Psychology, pp. 1–12, 2019. @article{Tong2019, title = {Phonological, not semantic, activation dominates Chinese character recognition: Evidence from a visual world eye-tracking study}, author = {Xiuhong Tong and Wei Shen and Zhao Li and Mengdi Xu and Liping Pan and Xiuli Tong}, doi = {10.1177/1747021819887956}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, pages = {1--12}, abstract = {Combining the eye-tracking technique with a revised visual world paradigm, this study examined how positional, phonological, and semantic information of radicals is activated in visual Chinese character recognition. Participants' eye movements were tracked when they looked at four types of invented logographic characters including a semantic radical in the legal and illegal positions and a phonetic radical in the legal and illegal positions. These logographic characters were presented simultaneously with either a sound cue (e.g., /qiao2/) or a meaning cue (e.g., a picture of a bridge). Participants appeared to allocate more visual attention towards radicals in legal, rather than illegal, positions. In addition, more eye fixations occurred on phonetic, rather than on semantic, radicals across both sound- and meaning-cued conditions, indicating participants' strong preference for phonetic over semantic radicals in visual character processing. These results underscore the universal phonology principle in processing non-alphabetic Chinese logographic characters.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Combining the eye-tracking technique with a revised visual world paradigm, this study examined how positional, phonological, and semantic information of radicals is activated in visual Chinese character recognition. Participants' eye movements were tracked when they looked at four types of invented logographic characters including a semantic radical in the legal and illegal positions and a phonetic radical in the legal and illegal positions. These logographic characters were presented simultaneously with either a sound cue (e.g., /qiao2/) or a meaning cue (e.g., a picture of a bridge). Participants appeared to allocate more visual attention towards radicals in legal, rather than illegal, positions. In addition, more eye fixations occurred on phonetic, rather than on semantic, radicals across both sound- and meaning-cued conditions, indicating participants' strong preference for phonetic over semantic radicals in visual character processing. These results underscore the universal phonology principle in processing non-alphabetic Chinese logographic characters. |
Bernard I Issa; Kara Morgan-Short Effects of external and internal attentional manipulations on second language grammar development: An eye-tracking study Journal Article Studies in Second Language Acquisition, 41 (2), pp. 389–417, 2019. @article{Issa2019b, title = {Effects of external and internal attentional manipulations on second language grammar development: An eye-tracking study}, author = {Bernard I Issa and Kara Morgan-Short}, doi = {10.1017/S027226311800013X}, year = {2019}, date = {2019-01-01}, journal = {Studies in Second Language Acquisition}, volume = {41}, number = {2}, pages = {389--417}, abstract = {The role of attention has been central to theoretical and empirical inquiries in second language (L2) acquisition. The current eye-tracking study examined how external and internal attentional manipulations (Chun, Golomb, & Turk-Browne, 2011) promote L2 grammatical development. Participants (n = 55) were exposed to Spanish direct-object pronouns under external or internal attentional manipulations, which were implemented through textual input enhancement or structured input practice, respectively. Results for both manipulations indicated that (a) learner attentional allocation to the form was affected; (b) L2 gains were evidenced, although only the internal manipulation led to above-chance performance; and (c) L2 gains were related to attention allocated to the form under the external manipulation and to a lesser extent the internal manipulation. Overall, findings may inform theoretical perspectives on attention and elucidate cognitive processes related to L2 instruction.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The role of attention has been central to theoretical and empirical inquiries in second language (L2) acquisition. The current eye-tracking study examined how external and internal attentional manipulations (Chun, Golomb, & Turk-Browne, 2011) promote L2 grammatical development. Participants (n = 55) were exposed to Spanish direct-object pronouns under external or internal attentional manipulations, which were implemented through textual input enhancement or structured input practice, respectively. Results for both manipulations indicated that (a) learner attentional allocation to the form was affected; (b) L2 gains were evidenced, although only the internal manipulation led to above-chance performance; and (c) L2 gains were related to attention allocated to the form under the external manipulation and to a lesser extent the internal manipulation. Overall, findings may inform theoretical perspectives on attention and elucidate cognitive processes related to L2 instruction. |
Aine Ito Prediction of orthographic information during listening comprehension: A printed-word visual world study Journal Article Quarterly Journal of Experimental Psychology, 72 (11), pp. 2584–2596, 2019. @article{Ito2019, title = {Prediction of orthographic information during listening comprehension: A printed-word visual world study}, author = {Aine Ito}, doi = {10.1177/1747021819851394}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {72}, number = {11}, pages = {2584--2596}, abstract = {Two visual world eye-tracking experiments examined the role of orthographic information in the visual context in pre-activation of orthographic and phonological information using Japanese. Participants heard sentences that contained a predictable target word and viewed a display showing four words in a logogram, kanji (Experiment 1) or in a phonogram, hiragana (Experiment 2). The four words included either the target word (e.g., 魚 /sakana/; fish), an orthographic competitor (e.g., 角 /tuno/; horn), a phonological competitor (e.g., 桜 /sakura/; cherry blossom), or an unrelated word (e.g., 本 /hon/; book), together with three distractor words. The orthographic competitor was orthographically or phonologically dissimilar to the target in hiragana. In Experiment 1, target and orthographic competitor words attracted more fixations than unrelated words before the target word was mentioned, suggesting that participants pre-activated the orthographic form of the target word. In Experiment 2, target and phonological competitor words attracted more predictive fixations than unrelated words, but orthographic competitor words did not, suggesting a critical role of the visual context. This pre-activation pattern does not fit with the pattern of lexical activation in auditory word recognition, where orthography and phonology interact. However, it is compatible with the pattern of lexical activation in spoken word production, where orthographic information is not automatically activated, in line with production-based prediction accounts.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Two visual world eye-tracking experiments examined the role of orthographic information in the visual context in pre-activation of orthographic and phonological information using Japanese. Participants heard sentences that contained a predictable target word and viewed a display showing four words in a logogram, kanji (Experiment 1) or in a phonogram, hiragana (Experiment 2). The four words included either the target word (e.g., 魚 /sakana/; fish), an orthographic competitor (e.g., 角 /tuno/; horn), a phonological competitor (e.g., 桜 /sakura/; cherry blossom), or an unrelated word (e.g., 本 /hon/; book), together with three distractor words. The orthographic competitor was orthographically or phonologically dissimilar to the target in hiragana. In Experiment 1, target and orthographic competitor words attracted more fixations than unrelated words before the target word was mentioned, suggesting that participants pre-activated the orthographic form of the target word. In Experiment 2, target and phonological competitor words attracted more predictive fixations than unrelated words, but orthographic competitor words did not, suggesting a critical role of the visual context. This pre-activation pattern does not fit with the pattern of lexical activation in auditory word recognition, where orthography and phonology interact. 
However, it is compatible with the pattern of lexical activation in spoken word production, where orthographic information is not automatically activated, in line with production-based prediction accounts. |
Juhani Järvikivi; Sarah Schimke; Pirita Pyykkönen-Klauck Understanding indirect reference in a visual context Journal Article Discourse Processes, 56 (2), pp. 117–135, 2019. @article{Jaervikivi2019, title = {Understanding indirect reference in a visual context}, author = {Juhani Järvikivi and Sarah Schimke and Pirita Pyykkönen-Klauck}, doi = {10.1080/0163853X.2017.1386521}, year = {2019}, date = {2019-01-01}, journal = {Discourse Processes}, volume = {56}, number = {2}, pages = {117--135}, abstract = {We often use pronouns like it or they without explicitly mentioned antecedents. We asked whether the human processing system that resolves such indirect pronouns uses the immediate visual-sensory context in multi-modal discourse. Our results showed that people had no difficulty understanding conceptually central referents, whether explicitly mentioned or not, whereas referents that were conceptually peripheral were much harder to understand when left implicit than when they had been mentioned before. Importantly, we showed that people could not recover this information from the visual environment. The results suggest that the semantic-conceptual relatedness of the potential referent with respect to the defining events and actors in the current discourse representation is a determining factor of how easy it is to establish the referential link. The visual environment is only integrated to the extent that it is relevant or acts as a fall-back when the referential search within the current discourse representation fails.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We often use pronouns like it or they without explicitly mentioned antecedents. We asked whether the human processing system that resolves such indirect pronouns uses the immediate visual-sensory context in multi-modal discourse. Our results showed that people had no difficulty understanding conceptually central referents, whether explicitly mentioned or not, whereas referents that were conceptually peripheral were much harder to understand when left implicit than when they had been mentioned before. Importantly, we showed that people could not recover this information from the visual environment. The results suggest that the semantic-conceptual relatedness of the potential referent with respect to the defining events and actors in the current discourse representation is a determining factor of how easy it is to establish the referential link. The visual environment is only integrated to the extent that it is relevant or acts as a fall-back when the referential search within the current discourse representation fails. |
Jill Jegerski; Irina A Sekerina The processing of input with differential object marking by heritage Spanish speakers Journal Article Bilingualism: Language and Cognition, pp. 1–9, 2019. @article{Jegerski2019, title = {The processing of input with differential object marking by heritage Spanish speakers}, author = {Jill Jegerski and Irina A Sekerina}, doi = {10.1017/S1366728919000087}, year = {2019}, date = {2019-01-01}, journal = {Bilingualism: Language and Cognition}, pages = {1--9}, abstract = {Heritage Spanish speakers and adult immigrant bilinguals listened to wh-questions with the differential object marker a (quién/a quién 'who/who.ACC') while their eye movements across four referent pictures were tracked. The heritage speakers were less accurate than the adult immigrants in their verbal responses to the questions, leaving objects unmarked for case at a rate of 18%, but eye movement data suggested that the two groups were similar in their comprehension, with both starting to look at the target picture at the same point in the question and identifying the target sooner with a quién 'who.ACC' than with quién 'who' questions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Heritage Spanish speakers and adult immigrant bilinguals listened to wh-questions with the differential object marker a (quién/a quién 'who/who.ACC') while their eye movements across four referent pictures were tracked. The heritage speakers were less accurate than the adult immigrants in their verbal responses to the questions, leaving objects unmarked for case at a rate of 18%, but eye movement data suggested that the two groups were similar in their comprehension, with both starting to look at the target picture at the same point in the question and identifying the target sooner with a quién 'who.ACC' than with quién 'who' questions. |
Yu Cin Jian Reading instructions facilitate signaling effect on science text for young readers: An eye-movement study Journal Article International Journal of Science and Mathematics Education, 17 (3), pp. 503–522, 2019. @article{Jian2019, title = {Reading instructions facilitate signaling effect on science text for young readers: An eye-movement study}, author = {Yu Cin Jian}, doi = {10.1007/s10763-018-9878-y}, year = {2019}, date = {2019-01-01}, journal = {International Journal of Science and Mathematics Education}, volume = {17}, number = {3}, pages = {503--522}, publisher = {International Journal of Science and Mathematics Education}, abstract = {Science texts often use visual representations (e.g. diagrams, graphs, photographs) to help readers learn science knowledge. Reading an illustrated text for learning is one type of multimedia learning. Empirical research has increasingly confirmed the signaling principle's effectiveness in multimedia learning. Highlighting correspondences between text and pictures benefits learning outcomes. However, the signaling effect's cognitive processes and its generalizability to young readers are unknown. This study clarified these aspects using eye-tracking technology and reading tests. Eighty-nine sixth-grade students read an illustrated science text in one of three conditions: reading material with signals, without signals (identical labels of Diagram 1 and Diagram 2 in text and illustration), and with signals combined with reading instructions. Findings revealed that the signaling principle alone cannot be generalized to young readers. Specifically, “Diagram 1” and “Diagram 2” in parentheses mixed with science text content had limited signaling effect for students and reading instructions are necessary. Eye movements reflected cognitive processes of science reading; students who received reading instructions employed greater cognitive effort and time in reading illustrations and tried to integrate textual and pictorial information using signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Science texts often use visual representations (e.g. diagrams, graphs, photographs) to help readers learn science knowledge. Reading an illustrated text for learning is one type of multimedia learning. Empirical research has increasingly confirmed the signaling principle's effectiveness in multimedia learning. Highlighting correspondences between text and pictures benefits learning outcomes. However, the signaling effect's cognitive processes and its generalizability to young readers are unknown. This study clarified these aspects using eye-tracking technology and reading tests. Eighty-nine sixth-grade students read an illustrated science text in one of three conditions: reading material with signals, without signals (identical labels of Diagram 1 and Diagram 2 in text and illustration), and with signals combined with reading instructions. Findings revealed that the signaling principle alone cannot be generalized to young readers. Specifically, “Diagram 1” and “Diagram 2” in parentheses mixed with science text content had limited signaling effect for students and reading instructions are necessary. Eye movements reflected cognitive processes of science reading; students who received reading instructions employed greater cognitive effort and time in reading illustrations and tried to integrate textual and pictorial information using signals. |
Rebecca L Johnson; Sarah Rose Slate; Allison R Teevan; Barbara J Juhasz The processing of blend words in naming and sentence reading Journal Article Quarterly Journal of Experimental Psychology, 72 (4), pp. 847–857, 2019. @article{Johnson2019, title = {The processing of blend words in naming and sentence reading}, author = {Rebecca L Johnson and Sarah Rose Slate and Allison R Teevan and Barbara J Juhasz}, doi = {10.1177/1747021818768441}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {72}, number = {4}, pages = {847--857}, abstract = {Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long, rated low in word familiarity, but were easily recognisable as blends.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long, rated low in word familiarity, but were easily recognisable as blends. |
Barbara J Juhasz; Heather Sheridan The time course of age-of-acquisition effects on eye movements during reading: Evidence from survival analyses Journal Article Memory and Cognition, 48 (1), pp. 83–95, 2019. @article{Juhasz2019, title = {The time course of age-of-acquisition effects on eye movements during reading: Evidence from survival analyses}, author = {Barbara J Juhasz and Heather Sheridan}, doi = {10.3758/s13421-019-00963-z}, year = {2019}, date = {2019-01-01}, journal = {Memory and Cognition}, volume = {48}, number = {1}, pages = {83--95}, abstract = {Adults process words that are rated as being learned earlier in life faster than words that are rated as being acquired later in life. This age-of-acquisition (AoA) effect has been observed in a variety of word-recognition tasks when word frequency is controlled. AoA has also previously been found to influence fixation durations when words are embedded into sentences and eye movements are recorded. However, the time course of AoA effects during reading has been inconsistent across studies. The current study further explored the time course of AoA effects on distributions of first-fixation durations during reading. Early and late acquired words were embedded into matched neutral sentence frames. Participants read the sentences while their eye movements were recorded. AoA effects were observed in both early and late fixation duration measures, suggesting that AoA has an early and long-lasting effect on word-recognition processes during reading. Survival analysis revealed that the earliest discernable effect of AoA on distributions of first-fixation durations emerged beginning at 158 ms. This rapid influence of AoA was confirmed through the use of Vincentile plots, which demonstrated that the effect of AoA occurred early and was relatively consistent across the distribution of fixations. This pattern of results provides support for the direct lexical-control hypothesis, as well as the viewpoint that AoA may exert an influence at multiple loci within the mental lexicon.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Adults process words that are rated as being learned earlier in life faster than words that are rated as being acquired later in life. This age-of-acquisition (AoA) effect has been observed in a variety of word-recognition tasks when word frequency is controlled. AoA has also previously been found to influence fixation durations when words are embedded into sentences and eye movements are recorded. However, the time course of AoA effects during reading has been inconsistent across studies. The current study further explored the time course of AoA effects on distributions of first-fixation durations during reading. Early and late acquired words were embedded into matched neutral sentence frames. Participants read the sentences while their eye movements were recorded. AoA effects were observed in both early and late fixation duration measures, suggesting that AoA has an early and long-lasting effect on word-recognition processes during reading. Survival analysis revealed that the earliest discernable effect of AoA on distributions of first-fixation durations emerged beginning at 158 ms. This rapid influence of AoA was confirmed through the use of Vincentile plots, which demonstrated that the effect of AoA occurred early and was relatively consistent across the distribution of fixations. This pattern of results provides support for the direct lexical-control hypothesis, as well as the viewpoint that AoA may exert an influence at multiple loci within the mental lexicon. |
Efthymia C Kapnoula; Arthur G Samuel Voices in the mental lexicon: Words carry indexical information that can affect access to their meaning Journal Article Journal of Memory and Language, 107 , pp. 111–127, 2019. @article{Kapnoula2019, title = {Voices in the mental lexicon: Words carry indexical information that can affect access to their meaning}, author = {Efthymia C Kapnoula and Arthur G Samuel}, doi = {10.1016/j.jml.2019.05.001}, year = {2019}, date = {2019-01-01}, journal = {Journal of Memory and Language}, volume = {107}, pages = {111--127}, publisher = {Elsevier}, abstract = {The speech signal carries both linguistic and non-linguistic information (e.g., a talker's voice qualities; referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word's meaning. A few studies support a dual-route model, in which inferences about the talker can guide access to meaning via a route external to the mental lexicon. It remains unclear whether indexical information is also encoded within the mental lexicon. The present study tests for indexical effects on spoken word recognition and referent selection within the mental lexicon. In two experiments, we manipulated voice-to-referent co-occurrence, while preventing participants from using indexical information in an explicit way. Participants learned novel words (e.g., bifa) and their meanings (e.g., kite), with each talker's voice linked (via systematic co-occurrence) to a specific referent (e.g., bifa spoken by speaker 1 referred to a specific picture of a kite). In testing, voice-to-referent mapping either matched that of training (congruent), or not (incongruent). Participants' looks to the target's referent were used as an index of lexical activation. Listeners looked faster at a target's referent on congruent than incongruent trials. The same pattern of results was observed in a third experiment, when testing was 24 hrs later. These results show that indexical information can be encoded in lexical representations and affect spoken word recognition and referent selection. Our findings are consistent with episodic and distributed views of the mental lexicon that assume multi-dimensional lexical representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The speech signal carries both linguistic and non-linguistic information (e.g., a talker's voice qualities; referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word's meaning. A few studies support a dual-route model, in which inferences about the talker can guide access to meaning via a route external to the mental lexicon. It remains unclear whether indexical information is also encoded within the mental lexicon. The present study tests for indexical effects on spoken word recognition and referent selection within the mental lexicon. In two experiments, we manipulated voice-to-referent co-occurrence, while preventing participants from using indexical information in an explicit way. Participants learned novel words (e.g., bifa) and their meanings (e.g., kite), with each talker's voice linked (via systematic co-occurrence) to a specific referent (e.g., bifa spoken by speaker 1 referred to a specific picture of a kite). 
In testing, voice-to-referent mapping either matched that of training (congruent), or not (incongruent). Participants' looks to the target's referent were used as an index of lexical activation. Listeners looked faster at a target's referent on congruent than incongruent trials. The same pattern of results was observed in a third experiment, when testing was 24 hrs later. These results show that indexical information can be encoded in lexical representations and affect spoken word recognition and referent selection. Our findings are consistent with episodic and distributed views of the mental lexicon that assume multi-dimensional lexical representations. |
Hossein Karimi; Trevor Brothers; Fernanda Ferreira Phonological versus semantic prediction in focus and repair constructions: No evidence for differential predictions Journal Article Cognitive Psychology, 112 , pp. 25–47, 2019. @article{Karimi2019, title = {Phonological versus semantic prediction in focus and repair constructions: No evidence for differential predictions}, author = {Hossein Karimi and Trevor Brothers and Fernanda Ferreira}, doi = {10.1016/j.cogpsych.2019.04.001}, year = {2019}, date = {2019-01-01}, journal = {Cognitive Psychology}, volume = {112}, pages = {25--47}, publisher = {Elsevier}, abstract = {Evidence suggests that the language processing system is predictive. Although past research has established prediction as a general tendency, it is not yet clear whether comprehenders can modulate their anticipatory strategies in response to cues based on sentence constructions. In two visual world eye-tracking experiments, we investigated whether focus constructions (not the hammer but rather the …) and repair disfluencies (the hammer uh I mean the …) would lead listeners to generate different patterns of predictions. In three offline tasks, we observed that participants preferred semantically related continuations (hammer – nail) following focus constructions and phonologically related continuations (hammer – hammock) following disfluencies. However, these offline preferences were not evident in participants' predictive eye-movements during online language processing: Semantically related (nail) and phonologically related words (hammock) received additional predictive looks regardless of whether the target word appeared in a disfluency or in a focus construction. However, significantly less semantic and phonological activation was observed in two “control” linguistic contexts in which predictive processing was discouraged. These findings suggest that although the prediction system is sensitive to sentence construction, it is not flexible enough to alter the type of prediction generated based on preceding context.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Evidence suggests that the language processing system is predictive. Although past research has established prediction as a general tendency, it is not yet clear whether comprehenders can modulate their anticipatory strategies in response to cues based on sentence constructions. In two visual world eye-tracking experiments, we investigated whether focus constructions (not the hammer but rather the …) and repair disfluencies (the hammer uh I mean the …) would lead listeners to generate different patterns of predictions. In three offline tasks, we observed that participants preferred semantically related continuations (hammer – nail) following focus constructions and phonologically related continuations (hammer – hammock) following disfluencies. However, these offline preferences were not evident in participants' predictive eye-movements during online language processing: Semantically related (nail) and phonologically related words (hammock) received additional predictive looks regardless of whether the target word appeared in a disfluency or in a focus construction. However, significantly less semantic and phonological activation was observed in two “control” linguistic contexts in which predictive processing was discouraged.
These findings suggest that although the prediction system is sensitive to sentence construction, it is not flexible enough to alter the type of prediction generated based on preceding context. |
Greta Kaufeld; Wibke Naumann; Antje S Meyer; Hans Rutger Bosker; Andrea E Martin Contextual speech rate influences morphosyntactic prediction and integration Journal Article Language, Cognition and Neuroscience, pp. 1–16, 2019. @article{Kaufeld2019, title = {Contextual speech rate influences morphosyntactic prediction and integration}, author = {Greta Kaufeld and Wibke Naumann and Antje S Meyer and Hans Rutger Bosker and Andrea E Martin}, doi = {10.1080/23273798.2019.1701691}, year = {2019}, date = {2019-01-01}, journal = {Language, Cognition and Neuroscience}, pages = {1--16}, publisher = {Taylor & Francis}, abstract = {Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implication of these demonstrations for theories of language processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implication of these demonstrations for theories of language processing. |
Greta Kaufeld; Anna Ravenschlag; Antje S Meyer; Andrea E Martin; Hans Rutger Bosker Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–14, 2019. @article{Kaufeld2019a, title = {Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension}, author = {Greta Kaufeld and Anna Ravenschlag and Antje S Meyer and Andrea E Martin and Hans Rutger Bosker}, doi = {10.1037/xlm0000744}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, pages = {1--14}, abstract = {During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue's reliability. Moreover, we found speech rate normalization effects in participants' gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue's reliability. Moreover, we found speech rate normalization effects in participants' gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects. |
Young-Suk Grace Kim; Yaacov Petscher; Christian Vorstius Unpacking eye movements during oral and silent reading and their relations to reading proficiency in beginning readers Journal Article Contemporary Educational Psychology, 58 , pp. 102–120, 2019. @article{Kim2019i, title = {Unpacking eye movements during oral and silent reading and their relations to reading proficiency in beginning readers}, author = {Young-Suk Grace Kim and Yaacov Petscher and Christian Vorstius}, doi = {10.1016/j.cedpsych.2019.03.002}, year = {2019}, date = {2019-01-01}, journal = {Contemporary Educational Psychology}, volume = {58}, pages = {102--120}, abstract = {Our understanding about the developmental similarities and differences between oral and silent reading and their relations to reading proficiency (word reading and reading comprehension) in beginning readers is limited. To fill this gap, we investigated 368 first graders' oral and silent reading using eye-tracking technology at the beginning and end of the school year. Oral reading took a longer time (greater rereading times and refixations) than silent reading, but showed greater development (greater reduction in rereading times and fixations) from the beginning to the end of the year. The relation of eye-movement behaviors to reading proficiency was such that, for example, less rereading time was positively related to reading proficiency, and the relation was stronger in oral reading than in silent reading. Moreover, the nature of relations between eye movements and reading skill varied as a function of the child's reading proficiency such that the relations were weaker for poor readers, particularly at the beginning of the year. The relations between eye movements and reading proficiency stabilized in the spring for children whose reading skill was 0.30 quantile and above, but weaker relations remained for readers below 0.30 quantile. These findings suggest the importance of examining eye-movement behaviors in both oral and silent reading modes and their developmental relations to reading proficiency.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Our understanding about the developmental similarities and differences between oral and silent reading and their relations to reading proficiency (word reading and reading comprehension) in beginning readers is limited. To fill this gap, we investigated 368 first graders' oral and silent reading using eye-tracking technology at the beginning and end of the school year. Oral reading took a longer time (greater rereading times and refixations) than silent reading, but showed greater development (greater reduction in rereading times and fixations) from the beginning to the end of the year. The relation of eye-movement behaviors to reading proficiency was such that, for example, less rereading time was positively related to reading proficiency, and the relation was stronger in oral reading than in silent reading. Moreover, the nature of relations between eye movements and reading skill varied as a function of the child's reading proficiency such that the relations were weaker for poor readers, particularly at the beginning of the year. The relations between eye movements and reading proficiency stabilized in the spring for children whose reading skill was 0.30 quantile and above, but weaker relations remained for readers below 0.30 quantile. 
These findings suggest the importance of examining eye-movement behaviors in both oral and silent reading modes and their developmental relations to reading proficiency. |
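The quantile-specific pattern reported here (relations stabilizing at and above the 0.30 quantile, weaker below it) is the kind of result quantile regression yields. A minimal sketch under assumed column names follows; the data file and variables are hypothetical, and the authors' actual longitudinal models were more elaborate.

```python
# Sketch of quantile regressions relating an eye-movement measure to reading
# proficiency, in the spirit of the quantile-specific relations reported above.
# Not the authors' code; the file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grade1_eye_movements.csv")  # hypothetical: one row per child

model = smf.quantreg("reading_score ~ rereading_time", df)
for q in (0.10, 0.30, 0.50, 0.70, 0.90):
    fit = model.fit(q=q)
    # Flatter slopes at low quantiles would mirror the weaker relations
    # reported for the poorer readers.
    print(f"q={q:.2f}  slope={fit.params['rereading_time']:.3f}")
```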
Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle Does direction matter? Linguistic asymmetries reflected in visual attention Journal Article Cognition, 185 , pp. 91–120, 2019. @article{Kluth2019, title = {Does direction matter? Linguistic asymmetries reflected in visual attention}, author = {Thomas Kluth and Michele Burigo and Holger Schultheis and Pia Knoeferle}, doi = {10.1016/j.cognition.2018.09.006}, year = {2019}, date = {2019-01-01}, journal = {Cognition}, volume = {185}, pages = {91--120}, publisher = {Elsevier}, abstract = {Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language. More specifically, it was assumed that people move their attention from the reference object to the located object. However, recent theoretical and empirical findings challenge the directionality of this attentional shift. In this article, we present the results of an empirical study based on predictions generated by computational cognitive models implementing different directionalities of attention. Moreover, we thoroughly analyze the computational models. While our results do not favor any of the implemented directionalities of attention, we found that two unknown sources of geometric information affect spatial language understanding. We provide modifications to the computational models that substantially improve their performance on empirical data.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language. More specifically, it was assumed that people move their attention from the reference object to the located object. However, recent theoretical and empirical findings challenge the directionality of this attentional shift. In this article, we present the results of an empirical study based on predictions generated by computational cognitive models implementing different directionalities of attention. Moreover, we thoroughly analyze the computational models. While our results do not favor any of the implemented directionalities of attention, we found that two unknown sources of geometric information affect spatial language understanding. We provide modifications to the computational models that substantially improve their performance on empirical data. |
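The computational models compared in this paper are attentional vector-sum models, whose implemented directionalities differ in whether the summed vectors run from the reference object toward the located object or the reverse. A heavily simplified sketch of the vector-sum idea follows; the decay parameter and the angle-to-rating mapping are placeholder choices, not the authors' implementations.

```python
# Highly simplified attentional-vector-sum style rating for "X is above Y".
# Illustrative only: lambda_ and the linear angle-to-rating mapping are
# placeholder assumptions, not fitted model parameters.
import numpy as np

def avs_rating(ref_points: np.ndarray, located: np.ndarray,
               lambda_: float = 1.0) -> float:
    """Acceptability of 'located is above ref' from a weighted vector sum."""
    # Attentional focus: the reference-object point closest to the located object.
    dists_to_lo = np.linalg.norm(ref_points - located, axis=1)
    focus = ref_points[np.argmin(dists_to_lo)]
    sigma = np.linalg.norm(located - focus) or 1.0
    # Attention decays with distance from the focus.
    weights = np.exp(-np.linalg.norm(ref_points - focus, axis=1) / (lambda_ * sigma))
    # One directionality: vectors from reference-object points to the located object.
    direction = (weights[:, None] * (located - ref_points)).sum(axis=0)
    # Deviation from upright vertical, mapped linearly to a 0-1 rating.
    deviation = np.degrees(np.arccos(direction[1] / np.linalg.norm(direction)))
    return max(0.0, 1.0 - deviation / 90.0)

# ref = np.array([[x, y] for x in range(5) for y in range(2)], dtype=float)
# print(avs_rating(ref, np.array([2.0, 6.0])))  # high rating: directly above
```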
Faye Knickerbocker; Rebecca L Johnson; Emma L Starr; Anna M Hall; Daphne M Preti; Sarah Rose Slate; Jeanette Altarriba The time course of processing emotion-laden words during sentence reading: Evidence from eye movements Journal Article Acta Psychologica, 192 , pp. 1–10, 2019. @article{Knickerbocker2019, title = {The time course of processing emotion-laden words during sentence reading: Evidence from eye movements}, author = {Faye Knickerbocker and Rebecca L Johnson and Emma L Starr and Anna M Hall and Daphne M Preti and Sarah Rose Slate and Jeanette Altarriba}, doi = {10.1016/j.actpsy.2018.10.008}, year = {2019}, date = {2019-01-01}, journal = {Acta Psychologica}, volume = {192}, pages = {1--10}, publisher = {Elsevier}, abstract = {While recent research has explored the effect that positive and negative emotion words (e.g., happy or sad) have on the eye-movement record during reading, the current study examined the effect of positive and negative emotion-laden words (e.g., birthday or funeral) on eye movements. Emotion-laden words do not express a state of mind but have emotional associations and connotations. The current results indicated that both positive and negative emotion-laden words have a processing advantage over neutral words, although the relative time-course of processing differs between words of positive and negative valence. Specifically, positive emotion-laden words showed advantages in early, late, and post-target measures, while negative emotion-laden words showed effects only in late and post-target measures.}, keywords = {}, pubstate = {published}, tppubtype = {article} } While recent research has explored the effect that positive and negative emotion words (e.g., happy or sad) have on the eye-movement record during reading, the current study examined the effect of positive and negative emotion-laden words (e.g., birthday or funeral) on eye movements. Emotion-laden words do not express a state of mind but have emotional associations and connotations. The current results indicated that both positive and negative emotion-laden words have a processing advantage over neutral words, although the relative time-course of processing differs between words of positive and negative valence. Specifically, positive emotion-laden words showed advantages in early, late, and post-target measures, while negative emotion-laden words showed effects only in late and post-target measures. |
Astrid Kraal; Paul W van den Broek; Arnout W Koornneef; Lesya Y Ganushchak; Nadira Saab Differences in text processing by low- and high-comprehending beginning readers of expository and narrative texts: Evidence from eye movements Journal Article Learning and Individual Differences, 74 , pp. 1–14, 2019. @article{Kraal2019, title = {Differences in text processing by low- and high-comprehending beginning readers of expository and narrative texts: Evidence from eye movements}, author = {Astrid Kraal and Paul W van den Broek and Arnout W Koornneef and Lesya Y Ganushchak and Nadira Saab}, doi = {10.1016/j.lindif.2019.101752}, year = {2019}, date = {2019-01-01}, journal = {Learning and Individual Differences}, volume = {74}, pages = {1--14}, abstract = {The present study investigated on-line text processing of second-grade low- and high-comprehending readers by recording their eye movements as they read expository and narrative texts. For narrative texts, the reading patterns of low- and high-comprehending readers revealed robust differences consistent with prior findings for good versus struggling readers (e.g., longer first- and second-pass reading times for low-comprehending readers). For expository texts, however, the differences in the reading patterns of low- and high-comprehending readers were attenuated. These results suggest that low-comprehending readers adopt a suboptimal processing approach for expository texts: relative to their processing approach for narrative texts, they either do not adjust their reading strategy or they adjust towards a more cursory strategy. Both processing approaches are suboptimal because expository texts tend to demand more, rather than less, cognitive effort of the reader than narrative texts. We discuss implications for (reading) education.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study investigated on-line text processing of second-grade low- and high-comprehending readers by recording their eye movements as they read expository and narrative texts. For narrative texts, the reading patterns of low- and high-comprehending readers revealed robust differences consistent with prior findings for good versus struggling readers (e.g., longer first- and second-pass reading times for low-comprehending readers). For expository texts, however, the differences in the reading patterns of low- and high-comprehending readers were attenuated. These results suggest that low-comprehending readers adopt a suboptimal processing approach for expository texts: relative to their processing approach for narrative texts, they either do not adjust their reading strategy or they adjust towards a more cursory strategy. Both processing approaches are suboptimal because expository texts tend to demand more, rather than less, cognitive effort of the reader than narrative texts. We discuss implications for (reading) education. |
Edmundo Kronmüller; Ira Noveck How do addressees exploit conventionalizations? From a negative reference to an ad hoc implicature Journal Article Frontiers in Psychology, 10 , pp. 1–10, 2019. @article{Kronmueller2019, title = {How do addressees exploit conventionalizations? From a negative reference to an ad hoc implicature}, author = {Edmundo Kronmüller and Ira Noveck}, doi = {10.3389/fpsyg.2019.01461}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--10}, abstract = {A negative reference, such as "not the sculpture" (where the sculpture is a name the speaker had only just invented to describe an unconventional-looking object and where the negation is saying that she does not currently desire that object), seems like a perilous and linguistically underdetermined way to point to another object, especially when there are three objects to choose from. To succeed, it obliges listeners to rely on contextual elements to determine which object the speaker has in mind. Prior work has shown that pragmatic inference-making plays a crucial role in such an interpretation process. When a negative reference leaves two candidate objects to choose from, listeners avoid an object that had been previously named, preferring instead an unconventional-looking object that had remained unnamed (Kronmüller et al., 2017). In the present study, we build on these findings by maintaining our focus on the two remaining objects (what we call the second and third objects) as we systematically vary two features. With respect to the second object - which is always unconventional looking - we vary whether or not it has been given a name. With respect to the third object - which is never named - we vary whether it is unconventional or conventional looking (for the latter, imagine an object that clearly resembles a bicycle). As revealed by selection patterns and eye-movements in a visual-world eye-tracking paradigm, we replicate our previous findings that show that participants choose randomly when both of the remaining objects are unconventional looking and unnamed and that they opt reliably in favor of the most nondescript (the unnamed unconventional looking) object when the second object is named. We show further that (unnamed) conventional-looking objects provide similar outcomes when juxtaposed with an unnamed unconventional object (participants prefer the most non-descript as opposed to the conventional-looking object). Nevertheless, effects emerging from the conventional (unnamed) case are not as strong as those found when an unconventional object is named. In describing participants' choices in the non-random cases, we propose that addressees rely on the construction of an ad hoc implicature that takes into account which object can be eliminated from consideration, given that the speaker did not explicitly name it.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A negative reference, such as "not the sculpture" (where the sculpture is a name the speaker had only just invented to describe an unconventional-looking object and where the negation is saying that she does not currently desire that object), seems like a perilous and linguistically underdetermined way to point to another object, especially when there are three objects to choose from. To succeed, it obliges listeners to rely on contextual elements to determine which object the speaker has in mind.
Prior work has shown that pragmatic inference-making plays a crucial role in such an interpretation process. When a negative reference leaves two candidate objects to choose from, listeners avoid an object that had been previously named, preferring instead an unconventional-looking object that had remained unnamed (Kronmüller et al., 2017). In the present study, we build on these findings by maintaining our focus on the two remaining objects (what we call the second and third objects) as we systematically vary two features. With respect to the second object - which is always unconventional looking - we vary whether or not it has been given a name. With respect to the third object - which is never named - we vary whether it is unconventional or conventional looking (for the latter, imagine an object that clearly resembles a bicycle). As revealed by selection patterns and eye-movements in a visual-world eye-tracking paradigm, we replicate our previous findings that show that participants choose randomly when both of the remaining objects are unconventional looking and unnamed and that they opt reliably in favor of the most nondescript (the unnamed unconventional looking) object when the second object is named. We show further that (unnamed) conventional-looking objects provide similar outcomes when juxtaposed with an unnamed unconventional object (participants prefer the most non-descript as opposed to the conventional-looking object). Nevertheless, effects emerging from the conventional (unnamed) case are not as strong as those found when an unconventional object is named. In describing participants' choices in the non-random cases, we propose that addressees rely on the construction of an ad hoc implicature that takes into account which object can be eliminated from consideration, given that the speaker did not explicitly name it. |
Dato Abashidze; Maria Nella Carminati; Pia Knoeferle Anticipating a future versus integrating a recent event? Evidence from eye-tracking Journal Article Acta Psychologica, 200 , pp. 1–16, 2019. @article{Abashidze2019, title = {Anticipating a future versus integrating a recent event? Evidence from eye-tracking}, author = {Dato Abashidze and Maria Nella Carminati and Pia Knoeferle}, doi = {10.1016/j.actpsy.2019.102916}, year = {2019}, date = {2019-01-01}, journal = {Acta Psychologica}, volume = {200}, pages = {1--16}, publisher = {Elsevier}, abstract = {When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely on further, experience-based (e.g., frequency or an actor's gaze) cues. How comprehenders reconcile these different cues in real time is an open issue. Extant results suggest that comprehenders preferentially relate their unfolding interpretation to a recent event by inspecting its target object. We investigated to what extent this recent-event preference could be overridden by short-term experiential and situation-specific cues. In Experiments 1–2 participants saw substantially more future than recent events and listened to more sentences about future events (75% in Experiment 1 and 88% in Experiment 2). Experiment 3 cued future target objects and event possibilities via an actor's gaze. The event frequency increase yielded a reduction in the recent event inspection preference early during sentence processing in Experiments 1–2 compared with Experiment 3 (where event frequency and utterance tense were balanced) but did not eliminate the overall recent-event preference. Actor gaze also modulated the recent-event preference, and jointly with future tense led to its reversal in Experiment 3. However, our results showed that people overall preferred to focus on recent (vs. future) events in their interpretation, suggesting that while two cues (actor gaze and short-term event frequency) can partially override the recent-event preference, the latter still plays a key role in shaping participants' interpretation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely on further, experience-based (e.g., frequency or an actor's gaze) cues. How comprehenders reconcile these different cues in real time is an open issue. Extant results suggest that comprehenders preferentially relate their unfolding interpretation to a recent event by inspecting its target object. We investigated to what extent this recent-event preference could be overridden by short-term experiential and situation-specific cues. In Experiments 1–2 participants saw substantially more future than recent events and listened to more sentences about future events (75% in Experiment 1 and 88% in Experiment 2). Experiment 3 cued future target objects and event possibilities via an actor's gaze.
The event frequency increase yielded a reduction in the recent event inspection preference early during sentence processing in Experiments 1–2 compared with Experiment 3 (where event frequency and utterance tense were balanced) but did not eliminate the overall recent-event preference. Actor gaze also modulated the recent-event preference, and jointly with future tense led to its reversal in Experiment 3. However, our results showed that people overall preferred to focus on recent (vs. future) events in their interpretation, suggesting that while two cues (actor gaze and short-term event frequency) can partially override the recent-event preference, the latter still plays a key role in shaping participants' interpretation. |
Irene Ablinger; Anne Friede; Ralph Radach A combined lexical and segmental therapy approach in a participant with pure alexia Journal Article Aphasiology, 33 (5), pp. 579–605, 2019. @article{Ablinger2019, title = {A combined lexical and segmental therapy approach in a participant with pure alexia}, author = {Irene Ablinger and Anne Friede and Ralph Radach}, doi = {10.1080/02687038.2018.1485073}, year = {2019}, date = {2019-01-01}, journal = {Aphasiology}, volume = {33}, number = {5}, pages = {579--605}, publisher = {Routledge}, abstract = {Background: Pure alexia is characterized by effortful left-to-right word processing, leading to a pathological length effect during reading aloud. Results of previous therapy outcome research suggest that patients with pure alexia tend to develop and maintain an adaptive sequential reading strategy in an effort to cope with their severe deficit and at least master a slow and laborious reading mode. Aim: We applied a theory-based, strategy-driven and eye-movement-supported therapy approach to HC, a participant with pure alexia. Our intention was to help optimize his very persistent sequential reading strategy, while concurrently facilitating fast parallel word processing. Methods & Procedures: Therapy included a systematic combination of segmental and holistic reading as well as text reading components. Exposure duration and font size were gradually reduced. Following a single case experimental reading design with follow-up testing, we assessed reading performance at four testing points focusing on analyses of linguistic errors and word viewing patterns. Outcomes & Results: With respect to reading accuracy and oculomotor measures, the combined therapy approach resulted in sustained training effects evident in significant improvements for trained and untrained word materials. Text reading intervention only led to therapy-specific improvements. Spatio-temporal analyses of eye fixation positions revealed a more and more efficient adaptive strategy to compensate for reading difficulties. However, spatial changes in fixation position were less pronounced at T4, suggesting some diminishing of success at follow-up. Conclusions: Our results underscore the need for a continuous systematic training of underlying reading strategies in pure alexia to develop and sustain more economical reading procedures.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Pure alexia is characterized by effortful left-to-right word processing, leading to a pathological length effect during reading aloud. Results of previous therapy outcome research suggest that patients with pure alexia tend to develop and maintain an adaptive sequential reading strategy in an effort to cope with their severe deficit and at least master a slow and laborious reading mode. Aim: We applied a theory-based, strategy-driven and eye-movement-supported therapy approach to HC, a participant with pure alexia. Our intention was to help optimize his very persistent sequential reading strategy, while concurrently facilitating fast parallel word processing. Methods & Procedures: Therapy included a systematic combination of segmental and holistic reading as well as text reading components. Exposure duration and font size were gradually reduced. Following a single case experimental reading design with follow-up testing, we assessed reading performance at four testing points focusing on analyses of linguistic errors and word viewing patterns.
Outcomes & Results: With respect to reading accuracy and oculomotor measures, the combined therapy approach resulted in sustained training effects evident in significant improvements for trained and untrained word materials. Text reading intervention only led to therapy-specific improvements. Spatio-temporal analyses of eye fixation positions revealed a more and more efficient adaptive strategy to compensate for reading difficulties. However, spatial changes in fixation position were less pronounced at T4, suggesting some diminishing of success at follow-up. Conclusions: Our results underscore the need for a continuous systematic training of underlying reading strategies in pure alexia to develop and sustain more economical reading procedures. |
Luis Aguado; Karisa B Parkington; Teresa Dieguez-Risco; José A Hinojosa; Roxane J Itier Joint modulation of facial expression processing by contextual congruency and task demands Journal Article Brain Sciences, 9 , pp. 1–20, 2019. @article{Aguado2019, title = {Joint modulation of facial expression processing by contextual congruency and task demands}, author = {Luis Aguado and Karisa B Parkington and Teresa Dieguez-Risco and José A Hinojosa and Roxane J Itier}, doi = {10.3390/brainsci9050116}, year = {2019}, date = {2019-01-01}, journal = {Brain Sciences}, volume = {9}, pages = {1--20}, abstract = {Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions. |
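The frontal positivity reported between 250-450 ms is a windowed mean-amplitude measure. A minimal sketch of how such a measure is computed from epoched EEG follows, assuming a hypothetical (trials × channels × samples) array and channel list rather than the authors' recording setup.

```python
# Sketch of a windowed ERP measure: mean amplitude in a 250-450 ms window
# at selected (e.g., frontal) sites, per condition. Illustrative only;
# the epochs array, channel indices, and time vector are hypothetical.
import numpy as np

def mean_window_amplitude(epochs: np.ndarray, times_ms: np.ndarray,
                          chan_idx: list, lo: float = 250.0,
                          hi: float = 450.0) -> np.ndarray:
    """epochs: (n_trials, n_channels, n_samples) baseline-corrected EEG."""
    window = (times_ms >= lo) & (times_ms <= hi)
    # Average over the window samples and the selected channels,
    # yielding one amplitude value per trial.
    return epochs[:, chan_idx, :][:, :, window].mean(axis=(1, 2))

# amp_emotion = mean_window_amplitude(epochs_emotion_task, times, frontal_idx)
# amp_congruency = mean_window_amplitude(epochs_congruency_task, times, frontal_idx)
# Larger positive values in the congruency task would mirror the reported task effect.
```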
Scott P Ardoin; Katherine S Binder; Andrea M Zawoyski; Eloise Nimocks; Tori E Foster Measuring the behavior of reading comprehension test takers: What do they do, and should they do it? Journal Article Reading Research Quarterly, 54 (4), pp. 507–529, 2019. @article{Ardoin2019, title = {Measuring the behavior of reading comprehension test takers: What do they do, and should they do it?}, author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Eloise Nimocks and Tori E Foster}, doi = {10.1002/rrq.246}, year = {2019}, date = {2019-01-01}, journal = {Reading Research Quarterly}, volume = {54}, number = {4}, pages = {507--529}, abstract = {The authors sought to further the understanding of reading processes and their links to comprehension using two reading tasks for elementary-grade students. One hundred sixty-six students in grades 2–5 were randomly assigned to one of two conditions: reading with questions presented concurrently with text or reading with questions presented after reading the text (with the text unavailable when answering questions). Eye movement data suggested different processes for each task: Rereading occurred and more time was spent on higher level processing measures in the with-text condition, and in particular, those who did not reread had more accurate answers than those who engaged in rereading. Measurement of students' precision in returning directly to the portion of the passage with information corresponding to a question also predicted students' response accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The authors sought to further the understanding of reading processes and their links to comprehension using two reading tasks for elementary-grade students. One hundred sixty-six students in grades 2–5 were randomly assigned to one of two conditions: reading with questions presented concurrently with text or reading with questions presented after reading the text (with the text unavailable when answering questions). Eye movement data suggested different processes for each task: Rereading occurred and more time was spent on higher level processing measures in the with-text condition, and in particular, those who did not reread had more accurate answers than those who engaged in rereading. Measurement of students' precision in returning directly to the portion of the passage with information corresponding to a question also predicted students' response accuracy. |
Vahid Aryadoust; Bee Hoon Ang Exploring the frontiers of eye tracking research in language studies: A novel co-citation scientometric review Journal Article Computer Assisted Language Learning, pp. 1–36, 2019. @article{Aryadoust2019, title = {Exploring the frontiers of eye tracking research in language studies: A novel co-citation scientometric review}, author = {Vahid Aryadoust and Bee Hoon Ang}, doi = {10.1080/09588221.2019.1647251}, year = {2019}, date = {2019-01-01}, journal = {Computer Assisted Language Learning}, pages = {1--36}, publisher = {Routledge}, abstract = {Eye tracking technology has become an increasingly popular methodology in language studies. Using data from 27 journals in language sciences indexed in the Social Science Citation Index and/or Scopus, we conducted an in-depth scientometric analysis of 341 research publications together with their 14,866 references between 1994 and 2018. We identified a number of countries, researchers, universities, and institutes with large numbers of publications in eye tracking research in language studies. We further discovered a mixed multitude of connected research trends that have shaped the nature and development of eye tracking research. Specifically, a document co-citation analysis revealed a number of major research clusters, their key topics, connections, and bursts (sudden citation surges). For example, the foci of clusters #0 through #5 were found to be perceptual learning, regressive eye movement(s), attributive adjective(s), stereotypical gender, discourse processing, and bilingual adult(s). The content of all the major clusters was closely examined and synthesized in the form of an in-depth review. Finally, we grounded the findings within a data-driven theory of scientific revolution and discussed how the observed patterns have contributed to the emergence of new trends. As the first scientometric investigation of eye tracking research in language studies, the present study offers several implications for future research that are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Eye tracking technology has become an increasingly popular methodology in language studies. Using data from 27 journals in language sciences indexed in the Social Science Citation Index and/or Scopus, we conducted an in-depth scientometric analysis of 341 research publications together with their 14,866 references between 1994 and 2018. We identified a number of countries, researchers, universities, and institutes with large numbers of publications in eye tracking research in language studies. We further discovered a mixed multitude of connected research trends that have shaped the nature and development of eye tracking research. Specifically, a document co-citation analysis revealed a number of major research clusters, their key topics, connections, and bursts (sudden citation surges). For example, the foci of clusters #0 through #5 were found to be perceptual learning, regressive eye movement(s), attributive adjective(s), stereotypical gender, discourse processing, and bilingual adult(s). The content of all the major clusters was closely examined and synthesized in the form of an in-depth review. Finally, we grounded the findings within a data-driven theory of scientific revolution and discussed how the observed patterns have contributed to the emergence of new trends. 
As the first scientometric investigation of eye tracking research in language studies, the present study offers several implications for future research that are discussed. |
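Document co-citation analysis of the kind described links two references whenever the same publication cites both, then reads clusters off the weighted network. A minimal sketch with networkx follows; the input format is a toy assumption, and the study itself relied on dedicated scientometric software with additional machinery such as burst detection.

```python
# Minimal document co-citation sketch: edge weight = number of publications
# that cite both references; clusters via modularity maximization.
# Toy input; not the tooling used in the study.
from itertools import combinations
import networkx as nx

# One list of cited-reference IDs per citing publication (hypothetical).
refs_per_paper = [["ref_a", "ref_b", "ref_c"], ["ref_b", "ref_c"], ["ref_a", "ref_c"]]

G = nx.Graph()
for refs in refs_per_paper:
    for u, v in combinations(sorted(set(refs)), 2):
        w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)  # accumulate co-citation counts

clusters = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(clusters):
    print(f"cluster #{i}: {sorted(members)}")
```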
Mahsa Barzy; Jo Black; David Williams; Heather Ferguson Autistic adults anticipate and integrate meaning based on the speaker's voice: Evidence from eye-tracking and event-related potentials Journal Article Journal of Experimental Psychology: General, pp. 1–19, 2019. @article{Barzy2019, title = {Autistic adults anticipate and integrate meaning based on the speaker's voice: Evidence from eye-tracking and event-related potentials}, author = {Mahsa Barzy and Jo Black and David Williams and Heather Ferguson}, doi = {10.1037/xge0000705}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: General}, pages = {1--19}, abstract = {Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults, and tested their timecourse in two pre-registered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker's voice and message were either consistent or inconsistent (e.g. “When we go shopping, I usually look for my favourite wine”, spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g. wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias towards the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker's voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group compared to the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. Overall, contrary to research that has characterised autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information, and were comparably sensitive to speaker-meaning inconsistency effects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults, and tested their timecourse in two pre-registered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker's voice and message were either consistent or inconsistent (e.g. “When we go shopping, I usually look for my favourite wine”, spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g. wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias towards the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker's voice to anticipate the intended meaning.
However, this target bias emerged earlier in the TD group compared to the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. Overall, contrary to research that has characterised autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information, and were comparably sensitive to speaker-meaning inconsistency effects. |
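Group differences in when a looking bias emerges, like the 2240 ms vs. 1800 ms divergence above, are often estimated with bootstrap onset procedures. A rough sketch of that logic follows; it is not the authors' reported method, and the bin width, run-length criterion, and resampling scheme are arbitrary choices.

```python
# Rough sketch of a bootstrap onset estimate for a looking bias: resample
# participants, recompute the consistent-minus-inconsistent fixation curve,
# and take the first time bin that starts a sustained positive run.
import numpy as np

rng = np.random.default_rng(0)

def onset_bin(curves: np.ndarray, run: int = 10) -> int:
    """curves: (n_participants, n_bins) bias scores. Returns the first bin
    starting `run` consecutive bins of positive group-mean bias; -1 if none."""
    positive = curves.mean(axis=0) > 0
    for i in range(len(positive) - run + 1):
        if positive[i:i + run].all():
            return i
    return -1

def bootstrap_onsets(curves: np.ndarray, n_boot: int = 2000) -> np.ndarray:
    n = curves.shape[0]
    samples = rng.integers(0, n, size=(n_boot, n))  # resample participants
    return np.array([onset_bin(curves[idx]) for idx in samples])

# onsets_td, onsets_asd = bootstrap_onsets(td_curves), bootstrap_onsets(asd_curves)
# Percentile CIs on the two onset distributions support the earlier-onset claim.
```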
Stéphanie Bellocchi; Delphine Massendari; Jonathan Grainger; Stéphanie Ducrot Effects of inter-character spacing on saccade programming in beginning readers and dyslexics Journal Article Child Neuropsychology, 25 (4), pp. 482–506, 2019. @article{Bellocchi2019, title = {Effects of inter-character spacing on saccade programming in beginning readers and dyslexics}, author = {Stéphanie Bellocchi and Delphine Massendari and Jonathan Grainger and Stéphanie Ducrot}, doi = {10.1080/09297049.2018.1504907}, year = {2019}, date = {2019-01-01}, journal = {Child Neuropsychology}, volume = {25}, number = {4}, pages = {482--506}, publisher = {Routledge}, abstract = {The present study investigated the impact of inter-character spacing on saccade programming in beginning readers and dyslexic children. In two experiments, eye movements were recorded while dyslexic children, reading-age, and chronological-age controls performed an oculomotor lateralized bisection task on words and strings of hashes presented either with default inter-character spacing or with extra spacing between the characters. The results of Experiment 1 showed that (1) only proficient readers had already developed highly automatized procedures for programming both left- and rightward saccades, depending on the discreteness of the stimuli and (2) children of all groups were disrupted (i.e., had trouble landing close to the beginning of the stimuli) by extra spacing between the characters of the stimuli, and particularly for stimuli presented in the left visual field. Experiment 2 was designed to disentangle the role of inter-character spacing and spatial width. Stimuli were made the same physical length in the default and extra-spacing conditions by having more characters in the default spacing condition. Our results showed that inter-letter spacing still influenced saccade programming when controlling for spatial width, thus confirming the detrimental effect of extra spacing for saccade programming. We conclude that the beneficial effect of increased inter-letter spacing on reading can be better explained in terms of decreased visual crowding than improved saccade targeting.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study investigated the impact of inter-character spacing on saccade programming in beginning readers and dyslexic children. In two experiments, eye movements were recorded while dyslexic children, reading-age, and chronological-age controls performed an oculomotor lateralized bisection task on words and strings of hashes presented either with default inter-character spacing or with extra spacing between the characters. The results of Experiment 1 showed that (1) only proficient readers had already developed highly automatized procedures for programming both left- and rightward saccades, depending on the discreteness of the stimuli and (2) children of all groups were disrupted (i.e., had trouble landing close to the beginning of the stimuli) by extra spacing between the characters of the stimuli, and particularly for stimuli presented in the left visual field. Experiment 2 was designed to disentangle the role of inter-character spacing and spatial width. Stimuli were made the same physical length in the default and extra-spacing conditions by having more characters in the default spacing condition.
Our results showed that inter-letter spacing still influenced saccade programming when controlling for spatial width, thus confirming the detrimental effect of extra spacing for saccade programming. We conclude that the beneficial effect of increased inter-letter spacing on reading can be better explained in terms of decreased visual crowding than improved saccade targeting. |
Jean Baptiste Bernard; Eric Castet The optimal use of non-optimal letter information in foveal and parafoveal word recognition Journal Article Vision Research, 155 , pp. 44–61, 2019. @article{Bernard2019, title = {The optimal use of non-optimal letter information in foveal and parafoveal word recognition}, author = {Jean Baptiste Bernard and Eric Castet}, doi = {10.1016/j.visres.2018.12.006}, year = {2019}, date = {2019-01-01}, journal = {Vision Research}, volume = {155}, pages = {44--61}, publisher = {Elsevier}, abstract = {Letters and words across the visual field can be difficult to identify due to limiting visual factors such as acuity, crowding and position uncertainty. Here, we show that when human readers identify words presented at foveal and parafoveal locations, they act like theoretical observers making optimal use of letter identity and letter position information independently extracted from each letter after an unavoidable and non-optimal letter recognition guess. The novelty of our approach is that we carefully considered foveal and parafoveal letter identity and position uncertainties by measuring crowded letter recognition performance in five subjects without any word context influence. Based on these behavioral measures, lexical access was simulated for each subject by an observer making optimal use of each subject's uncertainties. This free-parameter model was able to predict individual behavioral recognition rates of words presented at different positions across the visual field. Importantly, the model was also able to predict individual mislocation and identity letter errors made during behavioral word recognition. These results reinforce the view that human readers recognize foveal and parafoveal words by parts (the word letters) in a first stage, independently of word context. They also suggest a second step where letter identity and position uncertainties are generated based on letter first guesses and positions. During the third lexical access stage, identity and position uncertainties from each letter look remarkably combined together through an optimal word recognition decision process.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Letters and words across the visual field can be difficult to identify due to limiting visual factors such as acuity, crowding and position uncertainty. Here, we show that when human readers identify words presented at foveal and parafoveal locations, they act like theoretical observers making optimal use of letter identity and letter position information independently extracted from each letter after an unavoidable and non-optimal letter recognition guess. The novelty of our approach is that we carefully considered foveal and parafoveal letter identity and position uncertainties by measuring crowded letter recognition performance in five subjects without any word context influence. Based on these behavioral measures, lexical access was simulated for each subject by an observer making optimal use of each subject's uncertainties. This free-parameter model was able to predict individual behavioral recognition rates of words presented at different positions across the visual field. Importantly, the model was also able to predict individual mislocation and identity letter errors made during behavioral word recognition. These results reinforce the view that human readers recognize foveal and parafoveal words by parts (the word letters) in a first stage, independently of word context.
They also suggest a second step where letter identity and position uncertainties are generated based on letter first guesses and positions. During the third lexical access stage, identity and position uncertainties from each letter look remarkably combined together through an optimal word recognition decision process. |
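The three-stage account in this abstract amounts to scoring each lexical candidate by combining per-letter identity and position likelihoods. A toy version follows; the confusion probabilities, priors, and Gaussian position kernel are made-up stand-ins for the measured uncertainties, not the authors' fitted model.

```python
# Toy ideal-observer word recognition: each letter guess contributes an
# identity likelihood (confusion probability) and a position likelihood
# (uncertainty about which slot produced it); lexicon entries are scored
# by combining both with a prior. All numbers are placeholder assumptions.
import numpy as np

def word_posteriors(guesses: list, lexicon: dict,
                    confusion: dict, pos_sd: float = 0.7) -> dict:
    """guesses: [(letter, observed_position), ...]; lexicon: word -> prior."""
    scores = {}
    for word, prior in lexicon.items():
        score = prior
        for letter, pos in guesses:
            # Marginalize over which slot of the word produced this guess;
            # the position kernel lets mislocation errors arise naturally.
            like = sum(
                confusion.get((word[j], letter), 0.01) *
                np.exp(-0.5 * ((pos - j) / pos_sd) ** 2)
                for j in range(len(word))
            )
            score *= like
        scores[word] = score
    total = sum(scores.values()) or 1.0
    return {w: s / total for w, s in scores.items()}

lexicon = {"cat": 0.5, "cot": 0.3, "act": 0.2}          # hypothetical priors
confusion = {("c", "c"): 0.8, ("a", "a"): 0.7, ("o", "a"): 0.2,
             ("t", "t"): 0.9, ("a", "c"): 0.1, ("c", "a"): 0.1}
print(word_posteriors([("c", 0), ("a", 1), ("t", 2)], lexicon, confusion))
```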
Raymond Bertram; Victor Kuperman The English disease in Finnish compound processing: Backward transfer effects in Finnish-English bilinguals Journal Article Bilingualism: Language and Cognition, pp. 1–12, 2019. @article{Bertram2019, title = {The English disease in Finnish compound processing: Backward transfer effects in Finnish-English bilinguals}, author = {Raymond Bertram and Victor Kuperman}, doi = {10.1017/S1366728919000312}, year = {2019}, date = {2019-01-01}, journal = {Bilingualism: Language and Cognition}, pages = {1--12}, abstract = {Most English compounds are spaced compounds, whereas spelling regulations prescribe Finnish compounds to be written in a concatenated format. However, as in English, Finnish compounds are commonly spaced nowadays (e.g., piha juhla 'garden party'), a phenomenon that we labeled the 'English disease'. In this eye movement study with Finnish-English bilinguals we investigate whether the reading of a concatenated or illegally spaced Finnish compound is affected by the spelling of an English translation equivalent (ETE). We found that spaced Finnish compounds were read slower than their concatenated counterparts, but this effect was attenuated when ETEs were thought to be spaced. Similarly, concatenated Finnish compounds were read faster when their ETEs were also concatenated. These backward transfer effects are in line with studies that show that processing behavior in L1 is affected by a strong concurrent L2, even when the L1 is the native language as well as the dominant community language.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Most English compounds are spaced compounds, whereas spelling regulations prescribe Finnish compounds to be written in a concatenated format. However, as in English, Finnish compounds are commonly spaced nowadays (e.g., piha juhla 'garden party'), a phenomenon that we labeled the 'English disease'. In this eye movement study with Finnish-English bilinguals we investigate whether the reading of a concatenated or illegally spaced Finnish compound is affected by the spelling of an English translation equivalent (ETE). We found that spaced Finnish compounds were read slower than their concatenated counterparts, but this effect was attenuated when ETEs were thought to be spaced. Similarly, concatenated Finnish compounds were read faster when their ETEs were also concatenated. These backward transfer effects are in line with studies that show that processing behavior in L1 is affected by a strong concurrent L2, even when the L1 is the native language as well as the dominant community language. |
Nicoletta Biondo; Francesco Vespignani; Brian Dillon Attachment and concord of temporal adverbs: Evidence from eye movements Journal Article Frontiers in Psychology, 10 , pp. 1–17, 2019. @article{Biondo2019, title = {Attachment and concord of temporal adverbs: Evidence from eye movements}, author = {Nicoletta Biondo and Francesco Vespignani and Brian Dillon}, doi = {10.3389/fpsyg.2019.00983}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--17}, abstract = {The present study examined the processing of temporal adverbial phrases such as "last week," which must agree in temporal features with the verb they modify. We investigated readers' sensitivity to this feature match or mismatch in two eye-tracking studies. The main aim of this study was to expand the range of concord phenomena which have been investigated in real-time processing in order to understand how linguistic dependencies are formed during sentence comprehension (Felser et al., 2017). Under a cue-based perspective, linguistic dependency formation relies on an associative cue-based retrieval mechanism (Lewis et al., 2006; McElree, 2006), but how such a mechanism is deployed over diverse linguistic dependencies remains a matter of debate. Are all linguistic features candidate cues that guide retrieval? Are all cues given similar weight? Are different cues differently weighted based on the dependency being processed? To address these questions, we implemented a mismatch paradigm (Sturt, 2003) adapted for temporal concord dependencies. This paradigm tested whether readers were sensitive to a temporal agreement between a temporal adverb like last week and a linearly distant, but structurally accessible verb, as well as a linearly proximate but structurally inaccessible verb. We found clear evidence that readers were sensitive to feature match between the adverb and the linearly distant, structurally accessible verb. We found no clear evidence on whether feature match with the inaccessible verb impacted the processing of a temporal adverb. Our results suggest syntactic positional information plays an important role during the processing of the temporal concord relation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study examined the processing of temporal adverbial phrases such as "last week," which must agree in temporal features with the verb they modify. We investigated readers' sensitivity to this feature match or mismatch in two eye-tracking studies. The main aim of this study was to expand the range of concord phenomena which have been investigated in real-time processing in order to understand how linguistic dependencies are formed during sentence comprehension (Felser et al., 2017). Under a cue-based perspective, linguistic dependency formation relies on an associative cue-based retrieval mechanism (Lewis et al., 2006; McElree, 2006), but how such a mechanism is deployed over diverse linguistic dependencies remains a matter of debate. Are all linguistic features candidate cues that guide retrieval? Are all cues given similar weight? Are different cues differently weighted based on the dependency being processed? To address these questions, we implemented a mismatch paradigm (Sturt, 2003) adapted for temporal concord dependencies. 
This paradigm tested whether readers were sensitive to temporal agreement between a temporal adverb like last week and a linearly distant, but structurally accessible, verb, as well as a linearly proximate but structurally inaccessible verb. We found clear evidence that readers were sensitive to feature match between the adverb and the linearly distant, structurally accessible verb. We found no clear evidence as to whether feature match with the inaccessible verb impacted the processing of the temporal adverb. Our results suggest that syntactic positional information plays an important role during the processing of the temporal concord relation. |
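As background to the cue-based retrieval mechanism this abstract invokes (Lewis et al., 2006; McElree, 2006), the sketch below shows the core scoring idea in Python: each memory item gains activation for every retrieval cue it matches, with per-cue weights. The feature names and weights here are invented for illustration; they are not the authors' parameters.

    # Illustrative cue-based retrieval scoring (ACT-R style, weights invented).
    def retrieval_activation(item, cues, weights, base=0.0):
        # Activation = base level + summed weights of the cues the item matches.
        return base + sum(weights[c] for c, v in cues.items() if item.get(c) == v)

    items = {
        "accessible_verb":   {"category": "V", "tense": "past",   "accessible": True},
        "inaccessible_verb": {"category": "V", "tense": "future", "accessible": False},
    }
    # Hypothetical probe launched by the temporal adverb "last week".
    cues = {"category": "V", "tense": "past", "accessible": True}
    weights = {"category": 1.0, "tense": 1.5, "accessible": 2.0}

    for name, item in items.items():
        print(name, retrieval_activation(item, cues, weights))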
Frances Blanchette; Cynthia Lukyanenko Unacceptable grammars? An eye-tracking study of English negative concord Journal Article Language and Cognition, 11 (1), pp. 1–40, 2019. @article{Blanchette2019, title = {Unacceptable grammars? An eye-tracking study of English negative concord}, author = {Frances Blanchette and Cynthia Lukyanenko}, doi = {10.1017/langcog.2019.4}, year = {2019}, date = {2019-01-01}, journal = {Language and Cognition}, volume = {11}, number = {1}, pages = {1--40}, abstract = {This paper uses eye-tracking while reading to examine Standard English speakers' processing of sentences with two syntactic negations: a negative auxiliary and either a negative subject (e.g., Nothing didn't fall from the shelf) or a negative object (e.g., She didn't answer nothing in that interview). Sentences were read in Double Negation (DN; the 'she answered something' reading of she didn't answer nothing) and Negative Concord (NC; the 'she answered nothing' reading of she didn't answer nothing) biasing contexts. Despite the social stigma associated with NC, and linguistic assumptions that Standard English has a DN grammar, in which each syntactic negation necessarily contributes a semantic negation, our results show that Standard English speakers generate both NC and DN interpretations, and that their interpretation is affected by the syntactic structure of the negative sentence. Participants spent more time reading the critical sentence and rereading the context sentence when negative object sentences were paired with DN-biasing contexts and when negative subject sentences were paired with NC-biasing contexts. This suggests that, despite not producing NC, they find NC interpretations of negative object sentences easier to generate than DN interpretations. The results illustrate the utility of online measures when investigating socially stigmatized construction types.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This paper uses eye-tracking while reading to examine Standard English speakers' processing of sentences with two syntactic negations: a negative auxiliary and either a negative subject (e.g., Nothing didn't fall from the shelf) or a negative object (e.g., She didn't answer nothing in that interview). Sentences were read in Double Negation (DN; the 'she answered something' reading of she didn't answer nothing) and Negative Concord (NC; the 'she answered nothing' reading of she didn't answer nothing) biasing contexts. Despite the social stigma associated with NC, and linguistic assumptions that Standard English has a DN grammar, in which each syntactic negation necessarily contributes a semantic negation, our results show that Standard English speakers generate both NC and DN interpretations, and that their interpretation is affected by the syntactic structure of the negative sentence. Participants spent more time reading the critical sentence and rereading the context sentence when negative object sentences were paired with DN-biasing contexts and when negative subject sentences were paired with NC-biasing contexts. This suggests that, despite not producing NC, they find NC interpretations of negative object sentences easier to generate than DN interpretations. The results illustrate the utility of online measures when investigating socially stigmatized construction types. |
Hazel I Blythe; Barbara J Juhasz; Lee W Tbaily; Keith Rayner; Simon P Liversedge Reading sentences of words with rotated letters: An eye movement study Journal Article Quarterly Journal of Experimental Psychology, 72 (7), pp. 1790–1804, 2019. @article{Blythe2019, title = {Reading sentences of words with rotated letters: An eye movement study}, author = {Hazel I Blythe and Barbara J Juhasz and Lee W Tbaily and Keith Rayner and Simon P Liversedge}, doi = {10.1177/1747021818810381}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {72}, number = {7}, pages = {1790--1804}, abstract = {Participants' eye movements were measured as they read sentences in which individual letters within words were rotated. Both the consistency of direction and the magnitude of rotation were manipulated (letters rotated all in the same direction, or alternately clockwise and anti-clockwise, by 30° or 60°). Each sentence included a target word that was manipulated for frequency of occurrence. Our objectives were threefold: To quantify how change in the visual presentation of individual letters disrupted word identification, and whether disruption was consistent with systematic change in visual presentation; to determine whether inconsistent letter transformation caused more disruption than consistent letter transformation; and to determine whether such effects were comparable for words that were high and low frequency to explore the extent to which they were visually or linguistically mediated. We found that disruption to reading was greater as the magnitude of letter rotation increased, although even small rotations affected processing. The data also showed that alternating letter rotations were significantly more disruptive than consistent rotations; this result is consistent with models of lexical identification in which encoding occurs over units of more than one adjacent letter. These rotation manipulations also showed significant interactions with word frequency on the target word: Gaze durations and total fixation duration times increased disproportionately for low-frequency words when they were presented at more extreme rotations. These data provide a first step towards quantifying the relative contribution of the spatial relationships between individual letters to word recognition and eye movement control in reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Participants' eye movements were measured as they read sentences in which individual letters within words were rotated. Both the consistency of direction and the magnitude of rotation were manipulated (letters rotated all in the same direction, or alternately clockwise and anti-clockwise, by 30° or 60°). Each sentence included a target word that was manipulated for frequency of occurrence. Our objectives were threefold: To quantify how change in the visual presentation of individual letters disrupted word identification, and whether disruption was consistent with systematic change in visual presentation; to determine whether inconsistent letter transformation caused more disruption than consistent letter transformation; and to determine whether such effects were comparable for words that were high and low frequency to explore the extent to which they were visually or linguistically mediated. We found that disruption to reading was greater as the magnitude of letter rotation increased, although even small rotations affected processing. 
The data also showed that alternating letter rotations were significantly more disruptive than consistent rotations; this result is consistent with models of lexical identification in which encoding occurs over units of more than one adjacent letter. These rotation manipulations also showed significant interactions with word frequency on the target word: Gaze durations and total fixation duration times increased disproportionately for low-frequency words when they were presented at more extreme rotations. These data provide a first step towards quantifying the relative contribution of the spatial relationships between individual letters to word recognition and eye movement control in reading. |
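Rotated-letter stimuli of the kind used in this study can be approximated in a few lines of image code. The Python sketch below (our own illustration using the Pillow library, not the authors' stimulus software) renders each letter of a word rotated alternately clockwise and anti-clockwise by a fixed angle.

    from PIL import Image, ImageDraw, ImageFont

    def rotated_word(word, angle=30, alternate=True, size=48):
        # Render each letter on its own tile, rotate it, then paste the tiles in a row.
        font = ImageFont.load_default()   # substitute a proper font for real stimuli
        canvas = Image.new("L", (size * len(word), size), 255)
        for i, ch in enumerate(word):
            tile = Image.new("L", (size, size), 255)
            ImageDraw.Draw(tile).text((size // 3, size // 4), ch, font=font, fill=0)
            a = angle if (not alternate or i % 2 == 0) else -angle
            canvas.paste(tile.rotate(a, fillcolor=255), (i * size, 0))
        return canvas

    rotated_word("reading", angle=60).save("rotated_word.png")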
Hans Rutger Bosker; Marjolein Van Os; Rik Does; Geertje Van Bergen Counting ‘uhm's: How tracking the distribution of native and non-native disfluencies influences online language comprehension Journal Article Journal of Memory and Language, 106 , pp. 189–202, 2019. @article{Bosker2019, title = {Counting ‘uhm's: How tracking the distribution of native and non-native disfluencies influences online language comprehension}, author = {Hans Rutger Bosker and Marjolein {Van Os} and Rik Does and Geertje {Van Bergen}}, doi = {10.1016/j.jml.2019.02.006}, year = {2019}, date = {2019-01-01}, journal = {Journal of Memory and Language}, volume = {106}, pages = {189--202}, abstract = {Disfluencies, like uh, have been shown to help listeners anticipate reference to low-frequency words. The associative account of this ‘disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Disfluencies, like uh, have been shown to help listeners anticipate reference to low-frequency words. The associative account of this ‘disfluency bias' proposes that listeners learn to associate disfluency with low-frequency referents based on prior exposure to non-arbitrary disfluency distributions (i.e., greater probability of low-frequency words after disfluencies). However, there is limited evidence for listeners actually tracking disfluency distributions online. The present experiments are the first to show that adult listeners, exposed to a typical or more atypical disfluency distribution (i.e., hearing a talker unexpectedly say uh before high-frequency words), flexibly adjust their predictive strategies to the disfluency distribution at hand (e.g., learn to predict high-frequency referents after disfluency). However, when listeners were presented with the same atypical disfluency distribution but produced by a non-native speaker, no adjustment was observed. This suggests pragmatic inferences can modulate distributional learning, revealing the flexibility of, and constraints on, distributional learning in incremental language comprehension. |
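The distributional tracking proposed under the associative account can be made concrete with a running frequency count. In this toy Python sketch (all counts invented), a listener's estimate of P(low-frequency referent | 'uh') shifts once a talker's disfluency distribution turns atypical, mirroring the adjustment reported for the native-speaker condition.

    from collections import Counter

    class DisfluencyTracker:
        """Tracks P(referent class | 'uh') from exposure, with add-one smoothing."""
        def __init__(self):
            self.counts = Counter()

        def observe(self, referent_class):   # 'low' or 'high' frequency referent
            self.counts[referent_class] += 1

        def p_low_given_uh(self):
            low, high = self.counts["low"] + 1, self.counts["high"] + 1
            return low / (low + high)

    tracker = DisfluencyTracker()
    for _ in range(8):                 # typical talker: 'uh' precedes rare words
        tracker.observe("low")
    print(round(tracker.p_low_given_uh(), 2))   # high: predict a rare referent

    for _ in range(12):                # atypical talker: 'uh' precedes common words
        tracker.observe("high")
    print(round(tracker.p_low_given_uh(), 2))   # drops: the prediction flips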
Bettina Braun; Yuki Asano; Nicole Dehé When (not) to look for contrastive alternatives: The role of pitch accent type and additive particles Journal Article Language and Speech, 62 (4), pp. 751–778, 2019. @article{Braun2019, title = {When (not) to look for contrastive alternatives: The role of pitch accent type and additive particles}, author = {Bettina Braun and Yuki Asano and Nicole Dehé}, doi = {10.1177/0023830918814279}, year = {2019}, date = {2019-01-01}, journal = {Language and Speech}, volume = {62}, number = {4}, pages = {751--778}, abstract = {This study investigates how pitch accent type and additive particles affect the activation of contrastive alternatives. In Experiment 1, German listeners heard declarative utterances (e.g., The swimmer wanted to put on flippers) and saw four printed words displayed on screen: one that was a contrastive alternative to the subject noun (e.g., diver), one that was non-contrastively related (e.g., pool), the object (e.g., flippers), and an unrelated distractor. Experiment 1 manipulated pitch accent type, comparing a broad focus control condition to two narrow focus conditions: with a contrastive or non-contrastive accent on the subject noun (nuclear L+H* vs. H+L*, respectively, followed by deaccentuation). In Experiment 2, the utterances in the narrow focus conditions were preceded by the unstressed additive particle auch (“also”), which may trigger alternatives itself. It associated with the accented subject. Results showed that, compared to the control condition, participants directed more fixations to the contrastive alternative when the subject was realized with a contrastive accent (nuclear L+H*) than when it was realized with non-contrastive H+L*, while additive particles had no effect. Hence, accent type is the primary trigger for signaling the presence of alternatives (i.e., contrast). Implications for theories of information structure and the processing of additive particles are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigates how pitch accent type and additive particles affect the activation of contrastive alternatives. In Experiment 1, German listeners heard declarative utterances (e.g., The swimmer wanted to put on flippers) and saw four printed words displayed on screen: one that was a contrastive alternative to the subject noun (e.g., diver), one that was non-contrastively related (e.g., pool), the object (e.g., flippers), and an unrelated distractor. Experiment 1 manipulated pitch accent type, comparing a broad focus control condition to two narrow focus conditions: with a contrastive or non-contrastive accent on the subject noun (nuclear L+H* vs. H+L*, respectively, followed by deaccentuation). In Experiment 2, the utterances in the narrow focus conditions were preceded by the unstressed additive particle auch (“also”), which may trigger alternatives itself. It associated with the accented subject. Results showed that, compared to the control condition, participants directed more fixations to the contrastive alternative when the subject was realized with a contrastive accent (nuclear L+H*) than when it was realized with non-contrastive H+L*, while additive particles had no effect. Hence, accent type is the primary trigger for signaling the presence of alternatives (i.e., contrast). Implications for theories of information structure and the processing of additive particles are discussed. |
Laurel Brehm; Linda Taschenberger; Antje Meyer Mental representations of partner task cause interference in picture naming Journal Article Acta Psychologica, 199 , pp. 1–13, 2019. @article{Brehm2019, title = {Mental representations of partner task cause interference in picture naming}, author = {Laurel Brehm and Linda Taschenberger and Antje Meyer}, doi = {10.1016/j.actpsy.2019.102888}, year = {2019}, date = {2019-01-01}, journal = {Acta Psychologica}, volume = {199}, pages = {1--13}, abstract = {Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli. |
Emma Bridgwater; Aki Juhani Kyröläinen; Victor Kuperman The influence of syntactic expectations on reading comprehension is malleable and strategic: An eye-tracking study of English dative alternation Journal Article Canadian Journal of Experimental Psychology, 73 (3), pp. 179–192, 2019. @article{Bridgwater2019, title = {The influence of syntactic expectations on reading comprehension is malleable and strategic: An eye-tracking study of English dative alternation}, author = {Emma Bridgwater and Aki Juhani Kyröläinen and Victor Kuperman}, doi = {10.1037/cep0000173}, year = {2019}, date = {2019-01-01}, journal = {Canadian Journal of Experimental Psychology}, volume = {73}, number = {3}, pages = {179--192}, abstract = {Language processing is incremental and inherently predictive. Against this theoretical backdrop, we investigated the role of upcoming structural information in the comprehension of the English dative alternation. The use of eye-tracking enabled us to examine both the time course and locus of the effect associated with (a) structural expectations based on a lifetime of experience with language, and (b) rapid adaptation of the reader to the local statistics of the experiment. We quantified (a) as a verb subcategorization bias toward dative alternatives, and (b) as distributional biases in the syntactic input during the experiment. A reliable facilitatory effect of the verb bias was only observed in the double-object datives and only in the disambiguation region of the second object. Furthermore, structural priming led to an earlier locus of the verb bias effect, suggesting an interaction between (a) and (b). Our results offer a new outlook on the utilization of syntactic expectations during reading, in conjunction with rapid adaptation to the immediate linguistic environment. We demonstrate that this utilization is both malleable and strategic.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Language processing is incremental and inherently predictive. Against this theoretical backdrop, we investigated the role of upcoming structural information in the comprehension of the English dative alternation. The use of eye-tracking enabled us to examine both the time course and locus of the effect associated with (a) structural expectations based on a lifetime of experience with language, and (b) rapid adaptation of the reader to the local statistics of the experiment. We quantified (a) as a verb subcategorization bias toward dative alternatives, and (b) as distributional biases in the syntactic input during the experiment. A reliable facilitatory effect of the verb bias was only observed in the double-object datives and only in the disambiguation region of the second object. Furthermore, structural priming led to an earlier locus of the verb bias effect, suggesting an interaction between (a) and (b). Our results offer a new outlook on the utilization of syntactic expectations during reading, in conjunction with rapid adaptation to the immediate linguistic environment. We demonstrate that this utilization is both malleable and strategic. |
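The verb subcategorization bias in (a) is conventionally estimated from corpus counts of each verb's dative continuations. Below is a minimal Python sketch with invented counts (not the authors' corpus figures), expressing the bias as log-odds toward the double-object frame.

    import math

    # Invented counts of double-object (DO) vs. prepositional (PO) dative uses.
    counts = {"give": {"DO": 1200, "PO": 400},
              "send": {"DO": 350,  "PO": 700}}

    def dative_bias(verb):
        # Log-odds toward the double-object frame (positive = DO-biased).
        c = counts[verb]
        return math.log((c["DO"] + 0.5) / (c["PO"] + 0.5))

    for verb in counts:
        print(verb, round(dative_bias(verb), 2))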
Clare Patterson; Claudia Felser Delayed application of binding condition C during cataphoric pronoun resolution Journal Article Journal of Psycholinguistic Research, 48 (2), pp. 453–475, 2019. @article{Patterson2019, title = {Delayed application of binding condition C during cataphoric pronoun resolution}, author = {Clare Patterson and Claudia Felser}, doi = {10.1007/s10936-018-9613-4}, year = {2019}, date = {2019-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {48}, number = {2}, pages = {453--475}, publisher = {Springer US}, abstract = {Previous research has shown that during cataphoric pronoun resolution, the predictive search for an antecedent is restricted by a structure-sensitive constraint known as ‘Condition C', such that an antecedent is only considered when the constraint does not apply. Evidence has mainly come from self-paced reading (SPR), a method which may not be able to pick up on short-lived effects over the time course of processing. This study investigates whether or not the active search mechanism is constrained by Condition C at all points in time during cataphoric processing. We carried out an eye-tracking-during-reading experiment and a parallel SPR experiment, accompanied by offline coreference judgment tasks. Although offline judgments about coreference were constrained by Condition C, the eye-tracking experiment revealed temporary consideration of antecedents that should be ruled out by Condition C. The SPR experiment using exactly the same materials indicated, conversely, that only structurally appropriate antecedents were considered. Taken together, our results suggest that the application of Condition C may be delayed during naturalistic reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Previous research has shown that during cataphoric pronoun resolution, the predictive search for an antecedent is restricted by a structure-sensitive constraint known as ‘Condition C', such that an antecedent is only considered when the constraint does not apply. Evidence has mainly come from self-paced reading (SPR), a method which may not be able to pick up on short-lived effects over the time course of processing. This study investigates whether or not the active search mechanism is constrained by Condition C at all points in time during cataphoric processing. We carried out an eye-tracking-during-reading experiment and a parallel SPR experiment, accompanied by offline coreference judgment tasks. Although offline judgments about coreference were constrained by Condition C, the eye-tracking experiment revealed temporary consideration of antecedents that should be ruled out by Condition C. The SPR experiment using exactly the same materials indicated, conversely, that only structurally appropriate antecedents were considered. Taken together, our results suggest that the application of Condition C may be delayed during naturalistic reading. |
Jovana Pejovic; Eiling Yee; Monika Molnar Speaker matters: Natural inter-speaker variation affects 4-month-olds' perception of audio-visual speech Journal Article First Language, pp. 1–15, 2019. @article{Pejovic2019, title = {Speaker matters: Natural inter-speaker variation affects 4-month-olds' perception of audio-visual speech}, author = {Jovana Pejovic and Eiling Yee and Monika Molnar}, doi = {10.1177/0142723719876382}, year = {2019}, date = {2019-01-01}, journal = {First Language}, pages = {1--15}, abstract = {In the language development literature, studies often make inferences about infants' speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences can affect infants' ability to detect a mismatch between the auditory and visual components of vowels. Using an eye-tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to AV mismatch. This speaker also produced a visually more distinct /i/ as compared to the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that when making inferences about infants' perceptual abilities, characteristics of the speaker should be taken into account.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the language development literature, studies often make inferences about infants' speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences can affect infants' ability to detect a mismatch between the auditory and visual components of vowels. Using an eye-tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to AV mismatch. This speaker also produced a visually more distinct /i/ as compared to the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that when making inferences about infants' perceptual abilities, characteristics of the speaker should be taken into account. |
Michelle S Peter; Samantha Durrant; Andrew Jessop; Amy Bidgood; Julian M Pine; Caroline F Rowland Does speed of processing or vocabulary size predict later language growth in toddlers? Journal Article Cognitive Psychology, 115 , pp. 1–25, 2019. @article{Peter2019a, title = {Does speed of processing or vocabulary size predict later language growth in toddlers?}, author = {Michelle S Peter and Samantha Durrant and Andrew Jessop and Amy Bidgood and Julian M Pine and Caroline F Rowland}, doi = {10.1016/j.cogpsych.2019.101238}, year = {2019}, date = {2019-01-01}, journal = {Cognitive Psychology}, volume = {115}, pages = {1--25}, publisher = {Elsevier}, abstract = {It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UK-CDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size - though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UK-CDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. 
Processing speed correlated with vocabulary size - though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth. |
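In the looking-while-listening paradigm used here, processing speed is standardly operationalized as the latency to shift gaze from the distracter to the named target after word onset. A minimal Python sketch over frame-coded gaze data follows; the frame rate and coding scheme are our assumptions, not the study's exact pipeline.

    def shift_latency(gaze, word_onset_ms, frame_ms=33):
        """Latency (ms) of the first distracter-to-target gaze shift after word onset.

        gaze: one code per video frame from trial onset, e.g. 'D' (distracter),
        'T' (target), '.' (away). Returns None for non-codable trials.
        """
        start = word_onset_ms // frame_ms
        if start >= len(gaze) or gaze[start] != "D":
            return None   # convention: analyze only distracter-initial trials
        for i in range(start + 1, len(gaze)):
            if gaze[i] == "T":
                return i * frame_ms - word_onset_ms
        return None

    trial = list("DDDDDDDDTTTTTTT")                # gaze reaches target on frame 8
    print(shift_latency(trial, word_onset_ms=99))  # -> 165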
Mikhail Y Pokhoday; Yury Y Shtyrov; Andriy Myachykov Effects of visual priming and event orientation on word order choice in Russian sentence production Journal Article Frontiers in Psychology, 10 , pp. 1–8, 2019. @article{Pokhoday2019, title = {Effects of visual priming and event orientation on word order choice in Russian sentence production}, author = {Mikhail Y Pokhoday and Yury Y Shtyrov and Andriy Myachykov}, doi = {10.3389/fpsyg.2019.01661}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--8}, abstract = {Existing research shows that the distribution of the speaker's attention among an event's protagonists affects syntactic choice during sentence production. One of the debated issues concerns the extent of the attentional contribution to syntactic choice in languages that put stronger emphasis on word order arrangement rather than the choice of the overall syntactic frame. To address this, the current study used a sentence production task, in which Russian native speakers were asked to verbally describe visually perceived transitive events. Prior to describing the target event, a visual cue directed the participants' attention to the location of either the agent or the patient of the subsequently presented visual event. In addition, we also manipulated event orientation (agent-left vs agent-right) as another potential contributor to syntactic choice. The number of patient-initial sentences was the dependent variable compared between conditions. First, the obtained results replicated the effect of visual cueing on word order in Russian: more patient-initial sentences in the patient-cued condition. Second, we registered a novel effect of event orientation: Russian native speakers produced more patient-initial sentences after seeing events developing from right to left as opposed to left-to-right events. Our study provides new evidence about the role of the speaker's attention and event orientation in syntactic choice in a language with flexible word order.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Existing research shows that the distribution of the speaker's attention among an event's protagonists affects syntactic choice during sentence production. One of the debated issues concerns the extent of the attentional contribution to syntactic choice in languages that put stronger emphasis on word order arrangement rather than the choice of the overall syntactic frame. To address this, the current study used a sentence production task, in which Russian native speakers were asked to verbally describe visually perceived transitive events. Prior to describing the target event, a visual cue directed the participants' attention to the location of either the agent or the patient of the subsequently presented visual event. In addition, we also manipulated event orientation (agent-left vs agent-right) as another potential contributor to syntactic choice. The number of patient-initial sentences was the dependent variable compared between conditions. First, the obtained results replicated the effect of visual cueing on word order in Russian: more patient-initial sentences in the patient-cued condition. Second, we registered a novel effect of event orientation: Russian native speakers produced more patient-initial sentences after seeing events developing from right to left as opposed to left-to-right events. 
Our study provides new evidence about the role of the speaker's attention and event orientation in syntactic choice in a language with flexible word order. |
Vincent Porretta; Aki Juhani Kyröläinen Influencing the time and space of lexical competition: The effect of gradient foreign accentedness Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (10), pp. 1832–1851, 2019. @article{Porretta2019, title = {Influencing the time and space of lexical competition: The effect of gradient foreign accentedness}, author = {Vincent Porretta and Aki Juhani Kyröläinen}, doi = {10.1037/xlm0000674}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {10}, pages = {1832--1851}, abstract = {This article examines the influence of gradient foreign accentedness on lexical competition during spoken word recognition. Using native and Mandarin-accented English words ranging in degree of foreign accentedness, we investigate the effect of increased accentedness on (a) the size of the competitor space and (b) the strength and duration of competitor activation. Here, we analyze the number of misperceptions in a transcription task, as well as the time course of competitor activation in a Visual World Paradigm eye-tracking task. The transcription data show that as accentedness increases, the number of unique misperceptions increases. This indicates that greater accent strength induces the activation of many additional competitors within the competition space relative to native speech. The eye-tracking data further show that, as accentedness increases, looks to competitors (not produced in the transcription task) increase both in likelihood and duration. This indicates that greater accentedness boosts the strength of competitor activation as well as the duration of the competition process, even when comprehension is ultimately successful, suggesting strong and diffuse competition within the lexicon. The results provide evidence of changes in the underlying dynamics, which lead to the pervasive processing costs associated with foreign-accented speech that are commonly observed in behavioral data.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This article examines the influence of gradient foreign accentedness on lexical competition during spoken word recognition. Using native and Mandarin-accented English words ranging in degree of foreign accentedness, we investigate the effect of increased accentedness on (a) the size of the competitor space and (b) the strength and duration of competitor activation. Here, we analyze the number of misperceptions in a transcription task, as well as the time course of competitor activation in a Visual World Paradigm eye-tracking task. The transcription data show that as accentedness increases, the number of unique misperceptions increases. This indicates that greater accent strength induces the activation of many additional competitors within the competition space relative to native speech. The eye-tracking data further show that, as accentedness increases, looks to competitors (not produced in the transcription task) increase both in likelihood and duration. This indicates that greater accentedness boosts the strength of competitor activation as well as the duration of the competition process, even when comprehension is ultimately successful, suggesting strong and diffuse competition within the lexicon. 
The results provide evidence of changes in the underlying dynamics, which lead to the pervasive processing costs associated with foreign-accented speech that are commonly observed in behavioral data. |
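The time course measures reported in visual world studies like this one start from binned fixation proportions per interest area. A generic version of that aggregation step in Python is sketched below; the bin count and coding are our assumptions, and the paper's analyses are model-based on top of such curves.

    import numpy as np

    def fixation_proportions(trials, rois=("target", "competitor"), n_bins=5):
        # trials: (n_trials, n_samples) array of interest-area labels per sample.
        trials = np.asarray(trials)
        bins = np.array_split(np.arange(trials.shape[1]), n_bins)
        return {roi: [float(np.mean(trials[:, b] == roi)) for b in bins]
                for roi in rois}

    demo = [["competitor"] * 30 + ["target"] * 70,
            ["competitor"] * 50 + ["target"] * 50]
    print(fixation_proportions(demo)["competitor"])  # competitor looks fade over bins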
Céline Pozniak; Barbara Hemforth; Yair Haendler; Andrea Santi; Nino Grillo Seeing events vs. entities: The processing advantage of Pseudo Relatives over Relative Clauses Journal Article Journal of Memory and Language, 107 , pp. 128–151, 2019. @article{Pozniak2019, title = {Seeing events vs. entities: The processing advantage of Pseudo Relatives over Relative Clauses}, author = {Céline Pozniak and Barbara Hemforth and Yair Haendler and Andrea Santi and Nino Grillo}, doi = {10.1016/j.jml.2019.04.001}, year = {2019}, date = {2019-01-01}, journal = {Journal of Memory and Language}, volume = {107}, pages = {128--151}, publisher = {Elsevier}, abstract = {We present the results of three offline questionnaires (one attachment preference study and two acceptability judgments) and two eye-tracking studies in French and English, investigating the resolution of the ambiguity between pseudo relative and relative clause interpretations. This structural and interpretive ambiguity has recently been shown to play a central role in the explanation of apparent cross-linguistic asymmetries in relative clause attachment (Grillo and Costa, 2014; Grillo et al., 2015). This literature has argued that pseudo relatives are preferred to relative clauses because of their structural and interpretive simplicity. This paper adds to this growing body of literature in two ways. First we show that, in contrast to previous findings, French speakers prefer to attach relative clauses to the most local antecedent once pseudo relative availability is controlled for. We then provide direct support for the pseudo relative preference: grammatically forced disambiguation to a relative clause interpretation leads to degraded acceptability and greater processing cost in a pseudo relative environment than maintaining compatibility with a pseudo relative.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We present the results of three offline questionnaires (one attachment preference study and two acceptability judgments) and two eye-tracking studies in French and English, investigating the resolution of the ambiguity between pseudo relative and relative clause interpretations. This structural and interpretive ambiguity has recently been shown to play a central role in the explanation of apparent cross-linguistic asymmetries in relative clause attachment (Grillo and Costa, 2014; Grillo et al., 2015). This literature has argued that pseudo relatives are preferred to relative clauses because of their structural and interpretive simplicity. This paper adds to this growing body of literature in two ways. First we show that, in contrast to previous findings, French speakers prefer to attach relative clauses to the most local antecedent once pseudo relative availability is controlled for. We then provide direct support for the pseudo relative preference: grammatically forced disambiguation to a relative clause interpretation leads to degraded acceptability and greater processing cost in a pseudo relative environment than maintaining compatibility with a pseudo relative. |
Esha Prakash; Rebecca J McLean; Sarah J White; Kevin B Paterson; Irene Gottlob; Frank A Proudlock Reading individual words within sentences in infantile nystagmus Journal Article Investigative Ophthalmology & Visual Science, 60 , pp. 2226–2236, 2019. @article{Prakash2019, title = {Reading individual words within sentences in infantile nystagmus}, author = {Esha Prakash and Rebecca J McLean and Sarah J White and Kevin B Paterson and Irene Gottlob and Frank A Proudlock}, doi = {10.1167/iovs.18-25793}, year = {2019}, date = {2019-01-01}, journal = {Investigative Ophthalmology & Visual Science}, volume = {60}, pages = {2226--2236}, abstract = {PURPOSE. Normal readers make immediate and precise adjustments in eye movements during sentence reading in response to individual word features, such as lexical difficulty (e.g., common or uncommon words) or word length. Our purpose was to assess the effect of infantile nystagmus (IN) on these adaptive mechanisms. METHODS. Eye movements were recorded from 29 participants with IN (14 albinism, 12 idiopathic, and 3 congenital stationary night blindness) and 15 controls when reading sentences containing either common/uncommon words or long/short target words. Parameters assessed included: duration of first foveation/fixation, number of first-pass and percentage second-pass foveations/fixations, percentage words skipped, gaze duration, acquisition time (gaze + nongaze duration), landing site locations, clinical and experimental reading speeds. RESULTS. Participants with IN could not modify first foveation durations in contrast to controls who made longer first fixations on uncommon words (P < 0.001). Participants with IN made more first-pass foveations on uncommon and long words (P < 0.001) to increase gaze durations. However, this also increased nongaze durations (P < 0.001) delaying acquisition times. Participants with IN reread shorter words more often (P < 0.005). Similar to controls, participants with IN landed more first foveations between the start and center of long words. Reading speeds during experiments were lower in IN participants compared to controls (P < 0.01). CONCLUSIONS. People with IN make more first-pass foveations on uncommon and long words influencing reading speeds. This demonstrates that the “slow to see” phenomenon occurs during word reading in IN. These deficits are not captured by clinical reading charts.}, keywords = {}, pubstate = {published}, tppubtype = {article} } PURPOSE. Normal readers make immediate and precise adjustments in eye movements during sentence reading in response to individual word features, such as lexical difficulty (e.g., common or uncommon words) or word length. Our purpose was to assess the effect of infantile nystagmus (IN) on these adaptive mechanisms. METHODS. Eye movements were recorded from 29 participants with IN (14 albinism, 12 idiopathic, and 3 congenital stationary night blindness) and 15 controls when reading sentences containing either common/uncommon words or long/short target words. Parameters assessed included: duration of first foveation/fixation, number of first-pass and percentage second-pass foveations/fixations, percentage words skipped, gaze duration, acquisition time (gaze + nongaze duration), landing site locations, clinical and experimental reading speeds. RESULTS. Participants with IN could not modify first foveation durations in contrast to controls who made longer first fixations on uncommon words (P < 0.001). 
Participants with IN made more first-pass foveations on uncommon and long words (P < 0.001) to increase gaze durations. However, this also increased nongaze durations (P < 0.001) delaying acquisition times. Participants with IN reread shorter words more often (P < 0.005). Similar to controls, participants with IN landed more first foveations between the start and center of long words. Reading speeds during experiments were lower in IN participants compared to controls (P < 0.01). CONCLUSIONS. People with IN make more first-pass foveations on uncommon and long words influencing reading speeds. This demonstrates that the “slow to see” phenomenon occurs during word reading in IN. These deficits are not captured by clinical reading charts. |
Zhen Qin; Annie Tremblay; Jie Zhang Influence of within-category tonal information in the recognition of Mandarin-Chinese words by native and non-native listeners: An eye-tracking study Journal Article Journal of Phonetics, 73 , pp. 144–157, 2019. @article{Qin2019, title = {Influence of within-category tonal information in the recognition of Mandarin-Chinese words by native and non-native listeners: An eye-tracking study}, author = {Zhen Qin and Annie Tremblay and Jie Zhang}, doi = {10.1016/j.wocn.2019.01.002}, year = {2019}, date = {2019-01-01}, journal = {Journal of Phonetics}, volume = {73}, pages = {144--157}, publisher = {Elsevier Ltd}, abstract = {This study investigates how within-category tonal information influences native and non-native Mandarin listeners' spoken word recognition. Previous eye-tracking research has shown that the within-category phonetic details of consonants and vowels constrain lexical activation. However, given the highly dynamic and variable nature of lexical tones, it is unclear whether the within-category phonetic details of lexical tones would similarly modulate lexical activation. Native Mandarin listeners and proficient adult English-speaking Mandarin learners were tested in a visual-world eye-tracking experiment. The target word contained a level tone and the competitor word contained a high-rising tone, or vice versa. The auditory stimuli were manipulated such that the target tone was either canonical (Standard condition), phonetically more distant from the competitor (Distant condition), or phonetically closer to the competitor (Close condition). Growth curve analyses on fixations suggest that, compared to the Standard condition, Mandarin listeners' target-over-competitor word activation was enhanced in the Distant condition and inhibited in the Close condition, whereas English listeners' target-over-competitor word activation was inhibited in both the Distant and Close conditions. These results suggest that within-category tonal information influences both native and non-native Mandarin listeners' word recognition, but does so differently for the two groups.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigates how within-category tonal information influences native and non-native Mandarin listeners' spoken word recognition. Previous eye-tracking research has shown that the within-category phonetic details of consonants and vowels constrain lexical activation. However, given the highly dynamic and variable nature of lexical tones, it is unclear whether the within-category phonetic details of lexical tones would similarly modulate lexical activation. Native Mandarin listeners and proficient adult English-speaking Mandarin learners were tested in a visual-world eye-tracking experiment. The target word contained a level tone and the competitor word contained a high-rising tone, or vice versa. The auditory stimuli were manipulated such that the target tone was either canonical (Standard condition), phonetically more distant from the competitor (Distant condition), or phonetically closer to the competitor (Close condition). Growth curve analyses on fixations suggest that, compared to the Standard condition, Mandarin listeners' target-over-competitor word activation was enhanced in the Distant condition and inhibited in the Close condition, whereas English listeners' target-over-competitor word activation was inhibited in both the Distant and Close conditions. These results suggest that within-category tonal information influences both native and non-native Mandarin listeners' word recognition, but does so differently for the two groups. |
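The growth curve analyses mentioned in this abstract regress a fixation measure on orthogonal polynomial time terms. The paper fits mixed-effects models; the Python sketch below only illustrates the construction of an orthogonal time basis and an ordinary least-squares fit to a simulated fixation curve.

    import numpy as np

    def orth_poly(t, degree):
        # Orthogonal polynomial basis via QR decomposition of a Vandermonde matrix.
        X = np.vander(t, degree + 1, increasing=True).astype(float)
        Q, _ = np.linalg.qr(X)
        return Q[:, 1:]   # drop the constant column

    t = np.linspace(0, 1, 50)          # normalized analysis window
    basis = orth_poly(t, degree=3)     # linear, quadratic, and cubic time terms
    rng = np.random.default_rng(1)
    y = -1 + 3 * t - 1.2 * t**2 + rng.normal(0, 0.1, t.size)  # simulated curve
    X = np.column_stack([np.ones_like(t), basis])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(beta, 2))           # intercept plus time-term coefficients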
Sadaf Rahmanian; Victor Kuperman Spelling errors impede recognition of correctly spelled word forms Journal Article Scientific Studies of Reading, 23 , pp. 24–36, 2019. @article{Rahmanian2019, title = {Spelling errors impede recognition of correctly spelled word forms}, author = {Sadaf Rahmanian and Victor Kuperman}, doi = {10.1080/10888438.2017.1359274}, year = {2019}, date = {2019-01-01}, journal = {Scientific Studies of Reading}, volume = {23}, pages = {24--36}, publisher = {Routledge}, abstract = {Spelling errors are typically thought of as an effect of a word's weak orthographic representation in an individual mind. What if the existence of spelling errors is a partial cause of effortful orthographic learning and word recognition? We selected words that had homophonic substandard spelling variants of varying frequency (e.g., innocent and inocent occur in 69% and 31% of occurrences of the word, respectively). Conventional spellings were presented for recognition either in context (Experiment 1, eye-tracking sentence reading) or in isolation (Experiment 2, lexical decision). Words elicited longer fixation durations and lexical decision latencies if there was more uncertainty (higher entropy) regarding which spelling is the preferred one. The inhibitory effect of frequency was not modulated by spelling or other reading skill. This finding is in line with theories of learning that predict that spelling errors weaken associations between conventional spellings and the word's meaning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Spelling errors are typically thought of as an effect of a word's weak orthographic representation in an individual mind. What if the existence of spelling errors is a partial cause of effortful orthographic learning and word recognition? We selected words that had homophonic substandard spelling variants of varying frequency (e.g., innocent and inocent occur in 69% and 31% of occurrences of the word, respectively). Conventional spellings were presented for recognition either in context (Experiment 1, eye-tracking sentence reading) or in isolation (Experiment 2, lexical decision). Words elicited longer fixation durations and lexical decision latencies if there was more uncertainty (higher entropy) regarding which spelling is the preferred one. The inhibitory effect of frequency was not modulated by spelling or other reading skill. This finding is in line with theories of learning that predict that spelling errors weaken associations between conventional spellings and the word's meaning. |
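The uncertainty measure used in this study is Shannon entropy over a word's distribution of attested spellings. Using the abstract's own example (innocent at 69%, inocent at 31%), a short Python check:

    import math

    def spelling_entropy(probs):
        # Shannon entropy (in bits) of the spelling-variant distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(round(spelling_entropy([0.69, 0.31]), 3))  # ~0.893 bits: high uncertainty
    print(round(spelling_entropy([0.99, 0.01]), 3))  # ~0.081 bits: spelling nearly settled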
Tracy Reuter; Arielle Borovsky; Casey Lew-Williams Predict and redirect: Prediction errors support children's word learning Journal Article Developmental Psychology, 55 (8), pp. 1656–1665, 2019. @article{Reuter2019a, title = {Predict and redirect: Prediction errors support children's word learning}, author = {Tracy Reuter and Arielle Borovsky and Casey Lew-Williams}, doi = {10.1037/dev0000754}, year = {2019}, date = {2019-01-01}, journal = {Developmental Psychology}, volume = {55}, number = {8}, pages = {1656--1665}, abstract = {According to prediction-based learning theories, erroneous predictions support learning. However, empirical evidence for a relation between prediction error and children's language learning is currently lacking. Here we investigated whether and how prediction errors influence children's learning of novel words. We hypothesized that word learning would vary as a function of 2 factors: the extent to which children generate predictions, and the extent to which children redirect attention in response to errors. Children were tested in a novel word learning task, which used eye tracking to measure (a) real-time semantic predictions to familiar referents, (b) attention redirection following prediction errors, and (c) learning of novel referents. Results indicated that predictions and prediction errors interdependently supported novel word learning, via children's efficient redirection of attention. This study provides a developmental evaluation of prediction-based theories and suggests that erroneous predictions play a mechanistic role in children's language learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } According to prediction-based learning theories, erroneous predictions support learning. However, empirical evidence for a relation between prediction error and children's language learning is currently lacking. Here we investigated whether and how prediction errors influence children's learning of novel words. We hypothesized that word learning would vary as a function of 2 factors: the extent to which children generate predictions, and the extent to which children redirect attention in response to errors. Children were tested in a novel word learning task, which used eye tracking to measure (a) real-time semantic predictions to familiar referents, (b) attention redirection following prediction errors, and (c) learning of novel referents. Results indicated that predictions and prediction errors interdependently supported novel word learning, via children's efficient redirection of attention. This study provides a developmental evaluation of prediction-based theories and suggests that erroneous predictions play a mechanistic role in children's language learning. |
Sarah Risse; Stefan Seelig Stable preview difficulty effects in reading with an improved variant of the boundary paradigm Journal Article Quarterly Journal of Experimental Psychology, 72 (7), pp. 1632–1645, 2019. @article{Risse2019, title = {Stable preview difficulty effects in reading with an improved variant of the boundary paradigm}, author = {Sarah Risse and Stefan Seelig}, doi = {10.1177/1747021818819990}, year = {2019}, date = {2019-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {72}, number = {7}, pages = {1632--1645}, abstract = {Using gaze-contingent display changes in the boundary paradigm during sentence reading, it has recently been shown that parafoveal word-processing difficulties affect fixations on words to the right of the boundary. Current interpretations of this post-boundary preview difficulty effect range from delayed parafoveal-on-foveal effects in parallel word-processing models to forced fixations in serial word-processing models. However, these findings are based on an experimental design that, while allowing preview difficulty effects to be isolated, might have established a bias with respect to asymmetries in parafoveal preview benefit for high-frequency and low-frequency target words. Here, we present a revision of this paradigm varying the preview's lexical frequency and keeping the target word constant. We found substantial effects of preview difficulty on fixation durations after the boundary, confirming that preview processing affects the oculomotor decisions not only via trans-saccadic integration of preview and target word information. An additional time-course analysis showed that the preview difficulty effect was significant across the full fixation duration distribution on the target word without any evidence on the pretarget word before the boundary. We discuss implications of the accumulating evidence of post-boundary preview difficulty effects for models of eye movement control during reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Using gaze-contingent display changes in the boundary paradigm during sentence reading, it has recently been shown that parafoveal word-processing difficulties affect fixations on words to the right of the boundary. Current interpretations of this post-boundary preview difficulty effect range from delayed parafoveal-on-foveal effects in parallel word-processing models to forced fixations in serial word-processing models. However, these findings are based on an experimental design that, while allowing preview difficulty effects to be isolated, might have established a bias with respect to asymmetries in parafoveal preview benefit for high-frequency and low-frequency target words. Here, we present a revision of this paradigm varying the preview's lexical frequency and keeping the target word constant. We found substantial effects of preview difficulty on fixation durations after the boundary, confirming that preview processing affects the oculomotor decisions not only via trans-saccadic integration of preview and target word information. An additional time-course analysis showed that the preview difficulty effect was significant across the full fixation duration distribution on the target word without any evidence on the pretarget word before the boundary. We discuss implications of the accumulating evidence of post-boundary preview difficulty effects for models of eye movement control during reading. |
Erin K Robertson; Jennifer E Gallant Eye tracking reveals subtle spoken sentence comprehension problems in children with dyslexia Journal Article Lingua, 228 , pp. 1–17, 2019. @article{Robertson2019, title = {Eye tracking reveals subtle spoken sentence comprehension problems in children with dyslexia}, author = {Erin K Robertson and Jennifer E Gallant}, doi = {10.1016/j.lingua.2019.06.009}, year = {2019}, date = {2019-01-01}, journal = {Lingua}, volume = {228}, pages = {1--17}, publisher = {Elsevier B.V.}, abstract = {Children with dyslexia who did not have SLI (n = 31) and typically-developing (TD)}, keywords = {}, pubstate = {published}, tppubtype = {article} } Children with dyslexia who did not have SLI (n = 31) and typically-developing (TD) |
Isabel R Rodríguez-Ortiz; Francisco J Moreno-Pérez; Pablo Delgado; David Saldaña The development of anaphora resolution in Spanish Journal Article Journal of Psycholinguistic Research, 48 (4), pp. 797–817, 2019. @article{Rodriguez-Ortiz2019, title = {The development of anaphora resolution in Spanish}, author = {Isabel R Rodríguez-Ortiz and Francisco J Moreno-Pérez and Pablo Delgado and David Salda{ñ}a}, doi = {10.1007/s10936-019-09632-3}, year = {2019}, date = {2019-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {48}, number = {4}, pages = {797--817}, publisher = {Springer US}, abstract = {The present study focuses on the development of Spanish pronominal processing. We investigate whether the pronoun interpretation problem (i.e., reflexive pronouns comprehension is resolved at an earlier age than that of personal pronouns, also known as the Delay of the Principle B Effect), which has been documented in other languages, also occurs in Spanish. For this purpose, we conducted two experiments including pronoun resolution tasks. In Experiment 1, a task adapted from the experimental paradigm proposed by Love et al. (J Psycholinguist Res 38:285–304, 2009. https://doi.org/10.1007/s10936-009-9103-9) was used, which examines the off-line processing of the Spanish pronouns se and le. In Experiment 2, on-line processing of the same pronouns was evaluated with eye-tracking, using a paradigm developed by Thompson and Choy (J Psycholinguist Res 38:255–283, 2009. https://doi.org/10.1007/s10936-009-9105-7). Forty-three participants aged 4–16 years completed both experiments. Results indicated that there is no developmental asymmetry in the acquisition of successful resolution of the two types of anaphora in Spanish: from age 4, reflexive and clitic pronouns are processed with the same degree of accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study focuses on the development of Spanish pronominal processing. We investigate whether the pronoun interpretation problem (i.e., reflexive pronouns comprehension is resolved at an earlier age than that of personal pronouns, also known as the Delay of the Principle B Effect), which has been documented in other languages, also occurs in Spanish. For this purpose, we conducted two experiments including pronoun resolution tasks. In Experiment 1, a task adapted from the experimental paradigm proposed by Love et al. (J Psycholinguist Res 38:285–304, 2009. https://doi.org/10.1007/s10936-009-9103-9) was used, which examines the off-line processing of the Spanish pronouns se and le. In Experiment 2, on-line processing of the same pronouns was evaluated with eye-tracking, using a paradigm developed by Thompson and Choy (J Psycholinguist Res 38:255–283, 2009. https://doi.org/10.1007/s10936-009-9105-7). Forty-three participants aged 4–16 years completed both experiments. Results indicated that there is no developmental asymmetry in the acquisition of successful resolution of the two types of anaphora in Spanish: from age 4, reflexive and clitic pronouns are processed with the same degree of accuracy. |
Jens Roeser; Mark Torrance; Thom Baguley Advance planning in written and spoken sentence production Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (11), pp. 1983–2009, 2019. @article{Roeser2019, title = {Advance planning in written and spoken sentence production}, author = {Jens Roeser and Mark Torrance and Thom Baguley}, doi = {10.1037/xlm0000685}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {11}, pages = {1983--2009}, abstract = {Response onset latencies for sentences that start with a conjoined noun phrase are typically longer than for sentences starting with a simple noun phrase. This suggests that advance planning has phrasal scope, which may or may not be lexically driven. All previous studies have involved spoken production, leaving open the possibility that effects are, in part, modality-specific. In 3 image-description experiments (Ns = 32) subjects produced sentences with conjoined (e.g., Peter and the hat) and simple initial noun phrases (e.g., Peter) in both speech and writing. Production onset latencies and participants' eye movements were recorded. Ease of lexical retrieval of sentences' second noun was assessed by manipulating codability (Experiment 1) and by gaze-contingent name priming (Experiments 2 and 3). Findings confirmed a modality-independent phrasal scope for advance planning but did not support obligatory lexical retrieval beyond the sentence-initial noun. This research represents the first direct experimental comparison of sentence planning in speech and writing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Response onset latencies for sentences that start with a conjoined noun phrase are typically longer than for sentences starting with a simple noun phrase. This suggests that advance planning has phrasal scope, which may or may not be lexically driven. All previous studies have involved spoken production, leaving open the possibility that effects are, in part, modality-specific. In 3 image-description experiments (Ns = 32) subjects produced sentences with conjoined (e.g., Peter and the hat) and simple initial noun phrases (e.g., Peter) in both speech and writing. Production onset latencies and participants' eye movements were recorded. Ease of lexical retrieval of sentences' second noun was assessed by manipulating codability (Experiment 1) and by gaze-contingent name priming (Experiments 2 and 3). Findings confirmed a modality-independent phrasal scope for advance planning but did not support obligatory lexical retrieval beyond the sentence-initial noun. This research represents the first direct experimental comparison of sentence planning in speech and writing. |
Koen Rummens; Bilge Sayim Disrupting uniformity: Feature contrasts that reduce crowding interfere with peripheral word recognition Journal Article Vision Research, 161 , pp. 25–35, 2019. @article{Rummens2019, title = {Disrupting uniformity: Feature contrasts that reduce crowding interfere with peripheral word recognition}, author = {Koen Rummens and Bilge Sayim}, doi = {10.1016/j.visres.2019.05.006}, year = {2019}, date = {2019-01-01}, journal = {Vision Research}, volume = {161}, pages = {25--35}, publisher = {Elsevier}, abstract = {Peripheral word recognition is impaired by crowding, the harmful influence of surrounding objects (flankers) on target identification. Crowding is usually weaker when the target and the flankers differ (for example in color). Here, we investigated whether reducing crowding at syllable boundaries improved peripheral word recognition. In Experiment 1, a target letter was flanked by single letters to the left and right and presented at 8° in the lower visual field. Target and flankers were either the same or different in regard to contrast polarity, color, luminance, and combined color/luminance. Crowding was reduced when the target differed from the flankers in contrast polarity, but not in any of the other conditions. Using the same color and luminance values as in Experiment 1, we measured recognition performance (speed and accuracy) for uniform (e.g., all letters black), congruent (e.g., alternating black and white syllables), and incongruent (e.g., alternating black and white non-syllables) words in Experiment 2. Participants verbally reported the target word, briefly displayed at 8° in the lower visual field. Congruent and incongruent words were recognized more slowly than uniform words in the opposite contrast polarity condition, but not in the other conditions. Our results show that the same feature contrast between the target and the flankers that yielded reduced crowding deteriorated peripheral word recognition when applied to syllables and non-syllabic word parts. We suggest that a potential advantage of reduced crowding at syllable boundaries in word recognition is counteracted by the disruption of word uniformity.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Peripheral word recognition is impaired by crowding, the harmful influence of surrounding objects (flankers) on target identification. Crowding is usually weaker when the target and the flankers differ (for example in color). Here, we investigated whether reducing crowding at syllable boundaries improved peripheral word recognition. In Experiment 1, a target letter was flanked by single letters to the left and right and presented at 8° in the lower visual field. Target and flankers were either the same or different in regard to contrast polarity, color, luminance, and combined color/luminance. Crowding was reduced when the target differed from the flankers in contrast polarity, but not in any of the other conditions. Using the same color and luminance values as in Experiment 1, we measured recognition performance (speed and accuracy) for uniform (e.g., all letters black), congruent (e.g., alternating black and white syllables), and incongruent (e.g., alternating black and white non-syllables) words in Experiment 2. Participants verbally reported the target word, briefly displayed at 8° in the lower visual field. Congruent and incongruent words were recognized more slowly than uniform words in the opposite contrast polarity condition, but not in the other conditions. Our results show that the same feature contrast between the target and the flankers that yielded reduced crowding deteriorated peripheral word recognition when applied to syllables and non-syllabic word parts. We suggest that a potential advantage of reduced crowding at syllable boundaries in word recognition is counteracted by the disruption of word uniformity. |
Rachel Ryskin; Chigusa Kurumada; Sarah Brown-Schmidt Information integration in modulation of pragmatic inferences during online language comprehension Journal Article Cognitive Science, 43 , pp. 1–35, 2019. @article{Ryskin2019, title = {Information integration in modulation of pragmatic inferences during online language comprehension}, author = {Rachel Ryskin and Chigusa Kurumada and Sarah Brown-Schmidt}, doi = {10.1111/cogs.12769}, year = {2019}, date = {2019-01-01}, journal = {Cognitive Science}, volume = {43}, pages = {1--35}, abstract = {Upon hearing a scalar adjective in a definite referring expression such as "the big ...," listeners typically make anticipatory eye movements to an item in a contrast set, such as a big glass in the context of a smaller glass. Recent studies have suggested that this rapid, contrastive interpretation of scalar adjectives is malleable and calibrated to the speaker's pragmatic competence. In a series of eye-tracking experiments, we explore the nature of the evidence necessary for the modulation of pragmatic inferences in language comprehension, focusing on the complementary roles of top-down information (knowledge about the particular speaker's pragmatic competence) and bottom-up cues (distributional information about the use of scalar adjectives in the environment). We find that bottom-up evidence alone (e.g., the speaker says "the big dog" in a context with one dog), in large quantities, can be sufficient to trigger modulation of the listener's contrastive inferences, with or without top-down cues to support this adaptation. Further, these findings suggest that listeners track and flexibly combine multiple sources of information in service of efficient pragmatic communication.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Upon hearing a scalar adjective in a definite referring expression such as "the big ...," listeners typically make anticipatory eye movements to an item in a contrast set, such as a big glass in the context of a smaller glass. Recent studies have suggested that this rapid, contrastive interpretation of scalar adjectives is malleable and calibrated to the speaker's pragmatic competence. In a series of eye-tracking experiments, we explore the nature of the evidence necessary for the modulation of pragmatic inferences in language comprehension, focusing on the complementary roles of top-down information (knowledge about the particular speaker's pragmatic competence) and bottom-up cues (distributional information about the use of scalar adjectives in the environment). We find that bottom-up evidence alone (e.g., the speaker says "the big dog" in a context with one dog), in large quantities, can be sufficient to trigger modulation of the listener's contrastive inferences, with or without top-down cues to support this adaptation. Further, these findings suggest that listeners track and flexibly combine multiple sources of information in service of efficient pragmatic communication. |
Eliana Mastrantuono; Michele Burigo; Isabel R Rodríguez-Ortiz; David Saldaña The role of multiple articulatory channels of sign-supported speech revealed by visual processing Journal Article Journal of Speech, Language, and Hearing Research, 62 , pp. 1625–1656, 2019. @article{Mastrantuono2019, title = {The role of multiple articulatory channels of sign-supported speech revealed by visual processing}, author = {Eliana Mastrantuono and Michele Burigo and Isabel R Rodríguez-Ortiz and David Salda{ñ}a}, doi = {10.1044/2019_JSLHR-S-17-0433}, year = {2019}, date = {2019-01-01}, journal = {Journal of Speech, Language, and Hearing Research}, volume = {62}, pages = {1625--1656}, abstract = {Purpose: The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method: Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements' perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results: In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions: All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Purpose: The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method: Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements' perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results: In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions: All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions. |
Bob McMurray; Tyler P Ellis; Keith S Apfelbaum How do you deal with uncertainty? Cochlear implant users differ in the dynamics of lexical processing of noncanonical inputs Journal Article Ear and Hearing, 40 (4), pp. 961–980, 2019. @article{McMurray2019, title = {How do you deal with uncertainty? Cochlear implant users differ in the dynamics of lexical processing of noncanonical inputs}, author = {Bob McMurray and Tyler P Ellis and Keith S Apfelbaum}, doi = {10.1097/AUD.0000000000000681}, year = {2019}, date = {2019-01-01}, journal = {Ear and Hearing}, volume = {40}, number = {4}, pages = {961--980}, abstract = {Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric-only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric-only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations. |
Martina Micai; Mila Vulchanova; David Saldaña Do individuals with autism change their reading behavior to adapt to errors in the text? Journal Article Journal of Autism and Developmental Disorders, 49 , pp. 4232–4243, 2019. @article{Micai2019, title = {Do individuals with autism change their reading behavior to adapt to errors in the text?}, author = {Martina Micai and Mila Vulchanova and David Salda{ñ}a}, doi = {10.1007/s10803-019-04108-8}, year = {2019}, date = {2019-01-01}, journal = {Journal of Autism and Developmental Disorders}, volume = {49}, pages = {4232--4243}, abstract = {Reading monitoring is poorly explored, but it may have an impact on well-documented reading comprehension difficulties in autism. This study explores reading monitoring through the impact of instructions and different error types on reading behavior. Individuals with autism and matched controls read correct sentences and sentences containing orthographic and semantic errors. Prior to the task, participants were given instructions to focus on either semantic or orthographic errors. Analysis of eye movements showed that the group with autism, unlike controls, was less influenced by error type in the regression-out-to-error measure, showing less change in eye-movement behavior between error types. Individuals with autism might find it more difficult to adapt their reading strategies to various reading materials and task demands.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Reading monitoring is poorly explored, but it may have an impact on well-documented reading comprehension difficulties in autism. This study explores reading monitoring through the impact of instructions and different error types on reading behavior. Individuals with autism and matched controls read correct sentences and sentences containing orthographic and semantic errors. Prior to the task, participants were given instructions to focus on either semantic or orthographic errors. Analysis of eye movements showed that the group with autism, unlike controls, was less influenced by error type in the regression-out-to-error measure, showing less change in eye-movement behavior between error types. Individuals with autism might find it more difficult to adapt their reading strategies to various reading materials and task demands. |
Evelyn Milburn; Tessa Warren Idioms show effects of meaning relatedness and dominance similar to those seen for ambiguous words Journal Article Psychonomic Bulletin & Review, 26 , pp. 591–598, 2019. @article{Milburn2019, title = {Idioms show effects of meaning relatedness and dominance similar to those seen for ambiguous words}, author = {Evelyn Milburn and Tessa Warren}, doi = {10.3758/s13423-019-01589-7}, year = {2019}, date = {2019-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {26}, pages = {591--598}, publisher = {Psychonomic Bulletin & Review}, abstract = {Does the language comprehension system resolve ambiguities for single- and multiple-word units similarly? We investigate this question by examining whether two constructs with robust effects on ambiguous word processing – meaning relatedness and meaning dominance – have similar influences on idiom processing. Eye tracking showed that: (1) idioms with more related figurative and literal meanings were read faster, paralleling findings for ambiguous words, and (2) meaning relatedness and meaning dominance interacted to drive eye movements on idioms just as they do on polysemous ambiguous words. These findings are consistent with a language comprehension system that resolves ambiguities similarly regardless of literality or the number of words in the unit.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Does the language comprehension system resolve ambiguities for single- and multiple-word units similarly? We investigate this question by examining whether two constructs with robust effects on ambiguous word processing – meaning relatedness and meaning dominance – have similar influences on idiom processing. Eye tracking showed that: (1) idioms with more related figurative and literal meanings were read faster, paralleling findings for ambiguous words, and (2) meaning relatedness and meaning dominance interacted to drive eye movements on idioms just as they do on polysemous ambiguous words. These findings are consistent with a language comprehension system that resolves ambiguities similarly regardless of literality or the number of words in the unit. |
Jonathan Mirault; Joshua Snell; Jonathan Grainger Reading without spaces revisited: The role of word identification and sentence-level constraints Journal Article Acta Psychologica, 195 , pp. 22–29, 2019. @article{Mirault2019, title = {Reading without spaces revisited: The role of word identification and sentence-level constraints}, author = {Jonathan Mirault and Joshua Snell and Jonathan Grainger}, doi = {10.1016/j.actpsy.2019.03.001}, year = {2019}, date = {2019-01-01}, journal = {Acta Psychologica}, volume = {195}, pages = {22--29}, abstract = {The present study examined the relative contribution of bottom-up word identification and top-down sentence-level constraints in facilitating the reading of text printed without between-word spacing. We compared reading of grammatically correct sentences and shuffled versions of the same words presented both with normal spacing and without spaces. We found that reading was hampered by removing sentence structure as well as by removing spaces. A significantly greater impact of sentence structure when reading unspaced text was found in probe word identification accuracies and total viewing times per word, whereas the impact of sentence structure on the probability of making a regressive eye movement was greater when reading normally spaced text. Crucially, we also found that the length of the currently fixated word determined the amplitude of forward saccades leaving that word during the reading of unspaced text. We conclude that the relative ease with which skilled readers can read unspaced text is due to a combination of an increased use of bottom-up word identification in guiding the timing and targeting of eye movements, plus an increased interactivity between word identification and sentence level processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study examined the relative contribution of bottom-up word identification and top-down sentence-level constraints in facilitating the reading of text printed without between-word spacing. We compared reading of grammatically correct sentences and shuffled versions of the same words presented both with normal spacing and without spaces. We found that reading was hampered by removing sentence structure as well as by removing spaces. A significantly greater impact of sentence structure when reading unspaced text was found in probe word identification accuracies and total viewing times per word, whereas the impact of sentence structure on the probability of making a regressive eye movement was greater when reading normally spaced text. Crucially, we also found that the length of the currently fixated word determined the amplitude of forward saccades leaving that word during the reading of unspaced text. We conclude that the relative ease with which skilled readers can read unspaced text is due to a combination of an increased use of bottom-up word identification in guiding the timing and targeting of eye movements, plus an increased interactivity between word identification and sentence level processing. |
Jonathan Mirault; Joshua Snell; Jonathan Grainger Reading without spaces: The role of precise letter order Journal Article Attention, Perception, & Psychophysics, 81 , pp. 846–860, 2019. @article{Mirault2019a, title = {Reading without spaces: The role of precise letter order}, author = {Jonathan Mirault and Joshua Snell and Jonathan Grainger}, doi = {10.3758/s13414-018-01648-6}, year = {2019}, date = {2019-01-01}, journal = {Attention, Perception, & Psychophysics}, volume = {81}, pages = {846--860}, abstract = {Prior research points to efficient identification of embedded words as a key factor in facilitating the reading of text printed without spacing between words. Here we further tested the primary role of bottom-up word identification by altering this process with a letter transposition manipulation. In two experiments, we examined silent reading and reading aloud of normal sentences and sentences containing words with letter transpositions, in both normally spaced and unspaced conditions. We predicted that letter transpositions should be particularly harmful for reading unspaced text. In line with our prediction, the majority of our measures of reading fluency showed that unspaced text with letter transpositions was disproportionately difficult to read. These findings provide further support for the claim that reading text without between-word spacing relies principally on efficient bottom-up processing, enabling accurate word identification in the absence of visual cues to identify word boundaries.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Prior research points to efficient identification of embedded words as a key factor in facilitating the reading of text printed without spacing between words. Here we further tested the primary role of bottom-up word identification by altering this process with a letter transposition manipulation. In two experiments, we examined silent reading and reading aloud of normal sentences and sentences containing words with letter transpositions, in both normally spaced and unspaced conditions. We predicted that letter transpositions should be particularly harmful for reading unspaced text. In line with our prediction, the majority of our measures of reading fluency showed that unspaced text with letter transpositions was disproportionately difficult to read. These findings provide further support for the claim that reading text without between-word spacing relies principally on efficient bottom-up processing, enabling accurate word identification in the absence of visual cues to identify word boundaries. |
Jelena Mirković; Gerry T M Altmann Unfolding meaning in context: The dynamics of conceptual similarity Journal Article Cognition, 183 , pp. 19–43, 2019. @article{Mirkovic2019, title = {Unfolding meaning in context: The dynamics of conceptual similarity}, author = {Jelena Mirkovi{ć} and Gerry T M Altmann}, doi = {10.1016/j.cognition.2018.10.018}, year = {2019}, date = {2019-01-01}, journal = {Cognition}, volume = {183}, pages = {19--43}, publisher = {Elsevier}, abstract = {How are relationships between concepts affected by the interplay between short-term contextual constraints and long-term conceptual knowledge? Across two studies we investigate the consequence of changes in visual context for the dynamics of conceptual processing. Participants' eye movements were tracked as they viewed a visual depiction of e.g. a canary in a birdcage (Experiment 1), or a canary and three unrelated objects, each in its own quadrant (Experiment 2). In both studies participants heard either a semantically and contextually similar “robin” (a bird; similar size), an equally semantically similar but not contextually similar “stork” (a bird; bigger than a canary, incompatible with the birdcage), or unrelated “tent”. The changing patterns of fixations across time indicated first, that the visual context strongly influenced the eye movements such that, in the context of a birdcage, early on (by word offset) hearing “robin” engendered more looks to the canary than hearing “stork” or “tent” (which engendered the same number of looks), unlike in the context of unrelated objects (in which case “robin” and “stork” engendered equivalent looks to the canary, and more than did “tent”). Second, within the 500 ms post-word-offset, eye movements in both experiments converged onto a common pattern (more looks to the canary after “robin” than after “stork” and for both more than after “tent”). We interpret these findings as indicative of the dynamics of activation within semantic memory accessed via pictures and via words, and reflecting the complex interaction between systems representing context-independent and context-dependent conceptual knowledge driven by predictive processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How are relationships between concepts affected by the interplay between short-term contextual constraints and long-term conceptual knowledge? Across two studies we investigate the consequence of changes in visual context for the dynamics of conceptual processing. Participants' eye movements were tracked as they viewed a visual depiction of e.g. a canary in a birdcage (Experiment 1), or a canary and three unrelated objects, each in its own quadrant (Experiment 2). In both studies participants heard either a semantically and contextually similar “robin” (a bird; similar size), an equally semantically similar but not contextually similar “stork” (a bird; bigger than a canary, incompatible with the birdcage), or unrelated “tent”. The changing patterns of fixations across time indicated first, that the visual context strongly influenced the eye movements such that, in the context of a birdcage, early on (by word offset) hearing “robin” engendered more looks to the canary than hearing “stork” or “tent” (which engendered the same number of looks), unlike in the context of unrelated objects (in which case “robin” and “stork” engendered equivalent looks to the canary, and more than did “tent”). Second, within the 500 ms post-word-offset, eye movements in both experiments converged onto a common pattern (more looks to the canary after “robin” than after “stork” and for both more than after “tent”). We interpret these findings as indicative of the dynamics of activation within semantic memory accessed via pictures and via words, and reflecting the complex interaction between systems representing context-independent and context-dependent conceptual knowledge driven by predictive processing. |