EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2021 (with some early 2022s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
Jon Andoni Duñabeitia; Alberto Avilés; Manuel Carreiras
In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1072–1077, 2008.
The main aim of this study was to explore the extent to which the number of associates of a word (NoA) influences lexical access, in four tasks that focus on different processes of visual word recognition: lexical decision, reading aloud, progressive demasking, and online sentence reading. Results consistently showed that words with a dense associative neighborhood (high-NoA words) were processed faster than words with a sparse neighborhood (low-NoA words), extending previous findings from English lexical decision and categorization experiments. These results are interpreted in terms of the higher degree of semantic richness of high-NoA words as compared with low-NoA words.
Ralf Engbert; Antje Nuthmann
In: PLoS ONE, vol. 3, no. 2, pp. e1534, 2008.
During reading, we generate saccadic eye movements to move words into the center of the visual field for word processing. However, due to systematic and random errors in the oculomotor system, distributions of within-word landing positions are rather broad and show overlapping tails, which suggests that a fraction of fixations is mislocated and falls on words to the left or right of the selected target word. Here we propose a new procedure for the self-consistent estimation of the likelihood of mislocated fixations in normal reading. Our approach is based on iterative computation of the proportions of several types of oculomotor errors, the underlying probabilities for word-targeting, and corrected distributions of landing positions. We found that the average fraction of mislocated fixations ranges from about 10% to more than 30% depending on word length. These results show that fixation probabilities are strongly affected by oculomotor errors.
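The iterative, self-consistent estimation the abstract describes can be sketched in a much-simplified form. The sketch below is a toy illustration only, not the authors' published algorithm: it assumes a Gaussian landing-position distribution, treats any probability mass falling outside the target word's boundaries as mislocated, and re-estimates until the fraction stabilises. All function names and parameter values are invented for illustration.

```python
import math

def landing_pdf(x, mu, sigma):
    """Gaussian density for within-word landing positions (letter units)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mislocation_fraction(word_len, mu, sigma, step=0.01):
    """Probability mass falling outside [0, word_len], i.e. fixations that
    land on a neighbouring word rather than on the intended target."""
    inside, x = 0.0, 0.0
    while x < word_len:
        inside += landing_pdf(x, mu, sigma) * step
        x += step
    return max(0.0, 1.0 - inside)

def estimate_self_consistent(word_len, mu0, sigma, n_iter=20, tol=1e-6):
    """Crude stand-in for an iterative procedure: re-centre the assumed
    'true' landing distribution using only the well-located mass, then
    re-estimate the mislocation fraction until it no longer changes."""
    mu, frac = mu0, mislocation_fraction(word_len, mu0, sigma)
    for _ in range(n_iter):
        mu = word_len / 2 + (mu - word_len / 2) * (1 - frac)
        new_frac = mislocation_fraction(word_len, mu, sigma)
        if abs(new_frac - frac) < tol:
            break
        frac = new_frac
    return frac
```

Consistent with the word-length dependence reported above, this toy estimate comes out much larger for a short word than for a long one at the same oculomotor noise level (e.g. a 3-letter versus an 8-letter word).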
Paola Escudero; Rachel Hayes-Harb; Holger Mitterer
Novel second-language words and asymmetric lexical access Journal Article
In: Journal of Phonetics, vol. 36, no. 2, pp. 345–360, 2008.
The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /ɛ/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /ɛ/ symmetrically, i.e., both /æ/ and /ɛ/ auditory tokens triggered looks to pictures containing both /æ/ and /ɛ/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /ɛ/ when presented with /ɛ/ target tokens, but looked at pictures of words containing both /æ/ and /ɛ/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.
Steven Frisson; Brian McElree
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 1, pp. 1–11, 2008.
An eye-movement study examined the processing of expressions requiring complement coercion (J. Pustejovsky, 1995), in which a noun phrase that does not denote an event (e.g., the book) appears as the complement of an event-selecting verb (e.g., began the book). Previous studies demonstrated that these expressions are more costly to process than are control expressions that can be processed with basic compositional operations (L. Pylkkänen & B. McElree, 2006). Complement coercion is thought to be costly because comprehenders need to construct an event sense of the complement to satisfy the semantic restrictions of the verb (e.g., began writing the book). The reported experiment tests the alternative hypotheses that the cost arises from the need to select 1 interpretation from several or from competition between alternative interpretations. Expressions with weakly constrained interpretations (no dominant interpretation and several alternative interpretations) were not more costly to process than expressions with a strongly constrained interpretation (1 dominant interpretation and few alternative interpretations). These results are consistent with the hypothesis that the cost reflects the on-line construction of an event sense for the complement.
Steven Frisson; Elizabeth Niswander-Klement; Alexander Pollatsek
In: British Journal of Psychology, vol. 99, no. 1, pp. 87–107, 2008.
Experiment 1 examined whether the semantic transparency of an English unspaced compound word affected how long it took to process it in reading. Three types of opaque words were each compared with a matched set of transparent words (i.e. matched on the length and frequency of the constituents and the frequency of the word as a whole). Two sets of the opaque words were partially opaque: either the first constituent was not related to the meaning of the compound (opaque-transparent) or the second constituent was not related to the meaning of the compound (transparent-opaque). In the third set (opaque-opaque), neither constituent was related to the meaning of the compound. For all three sets, there was no significant difference between the opaque and the transparent words on any eye-movement measure. This replicates an earlier finding with Finnish compound words (Pollatsek & Hyönä, 2005) and indicates that, although there is now abundant evidence that the component constituents play a role in the encoding of compound words, the meaning of the compound word is not constructed from the parts, at least for compound words for which a lexical entry exists. Experiment 2 used the same compounds but with a space between the constituents. This presentation resulted in a transparency effect, indicating that when an assembly route is 'forced', transparency does play a role.
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg
Adaptive strategies for reading with a forced retinal location Journal Article
In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008.
Forcing normal-sighted participants to use a distinct parafoveal retinal location for reading, we studied which part of the visual field is best suited to take over functions of the fovea during early stages of macular degeneration (MD). A region to the right of fixation led to best reading performance and most natural gaze behavior, whereas reading performance was severely impaired when a region to the left or below fixation had to be used. An analysis of the underlying oculomotor behavior revealed that practice effects were accompanied by a larger number of saccades in text direction and decreased fixation durations, whereas no adjustment of saccade amplitudes was observed. We provide an explanation for the observed performance differences at different retinal locations based on the interplay of attention and eye movements. Our findings have important implications for the development of training methods for MD patients targeted at reading, suggesting that it would be beneficial for MD patients to use a region to the right of their central scotoma.
James S. Magnuson; Michael K. Tanenhaus; Richard N. Aslin
In: Cognition, vol. 108, no. 3, pp. 866–873, 2008.
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.
Falk Huettig; Robert J. Hartsuiker
In: Memory and Cognition, vol. 36, no. 2, pp. 341–360, 2008.
Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 x 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
Albrecht W. Inhoff; Matthew S. Solomon; Bradley A. Seymour; Ralph Radach
In: Vision Research, vol. 48, no. 8, pp. 1027–1039, 2008.
Intra-fixation location changes were measured when one-line sentences written in lower or aLtErNaTiNg case were read. Intra-fixation location changes were common and their size was normally distributed except for a relatively high proportion of fixations without a discernible location change. Location changes that did occur were systematically biased toward the right when alternating case was read. Irrespective of case type, changes of the right eye were biased toward the right at the onset of sentence reading, and this spatial bias decreased as sentence reading progressed from left to right. The left eye showed a relatively stable right-directed bias. These results show that processing demands can pull the two fixated eyes in the same direction and that the response to this pull can differ for the right and left eye.
Albrecht W. Inhoff; Matthew S. Starr; Matthew S. Solomon; Lars Placke
In: Memory and Cognition, vol. 36, no. 3, pp. 675–687, 2008.
We examined the use of lexeme meaning during the processing of spatially unified bilexemic compound words by manipulating both the location and the word frequency of the lexeme that primarily defined the meaning of a compound (i.e., the dominant lexeme). The semantically dominant and nondominant lexemes occupied either the beginning or the ending compound word location, and the beginning and ending lexemes could be either high- or low-frequency words. Three tasks were used--lexical decision, naming, and sentence reading--all of which focused on the effects of lexeme frequency as a function of lexeme dominance. The results revealed a larger word frequency effect for the dominant lexeme in all three tasks. Eye movements during sentence reading further revealed larger word frequency effects for the dominant lexeme via several oculomotor measures, including the duration of the first fixation on a compound word. These findings favor theoretical conceptions in which the use of lexeme meaning is an integral part of the compound recognition process.
Manon W. Jones; Mateo Obregón; M. Louise Kelly; Holly P. Branigan
In: Cognition, vol. 109, no. 3, pp. 389–407, 2008.
The relationship between rapid automatized naming (RAN) and reading fluency is well documented (see Wolf, M. & Bowers, P.G. (1999). The double-deficit hypothesis for the developmental dyslexias. Journal of Educational Psychology, 91(3), 415-438, for a review), but little is known about which component processes are important in RAN, and why developmental dyslexics show longer latencies on these tasks. Researchers disagree as to whether these delays are caused by impaired phonological processing or whether extra-phonological processes also play a role (e.g., Clarke, P., Hulme, C., & Snowling, M. (2005). Individual differences in RAN and reading: A response timing analysis. Journal of Research in Reading, 28(2), 73-86; Wolf, M., Bowers, P.G., & Biddle, K. (2000). Naming-speed processes, timing, and reading: A conceptual review. Journal of learning disabilities, 33(4), 387-407). We conducted an eye-tracking study that manipulated phonological and visual information (as representative of extra-phonological processes) in RAN. Results from linear mixed (LME) effects analyses showed that both phonological and visual processes influence naming-speed for both dyslexic and non-dyslexic groups, but the influence on dyslexic readers is greater. Moreover, dyslexic readers' difficulties in these domains primarily emerge in a measure that explicitly includes the production phase of naming. This study elucidates processes underpinning RAN performance in non-dyslexic readers and pinpoints areas of difficulty for dyslexic readers. We discuss these findings with reference to phonological and extra-phonological hypotheses of naming-speed deficits.
Jesse A. Harris; Liina Pylkkänen; Brian McElree; Steven Frisson
The cost of question concealment: Eye-tracking and MEG evidence Journal Article
In: Brain and Language, vol. 107, no. 1, pp. 44–61, 2008.
Although natural language appears to be largely compositional, the meanings of certain expressions cannot be straightforwardly recovered from the meanings of their parts. This study examined the online processing of one such class of expressions: concealed questions, in which the meaning of a complex noun phrase (the proof of the theorem) shifts to a covert question (what the proof of the theorem is) when mandated by a sub-class of question-selecting verbs (e.g., guess). Previous behavioral and magnetoencephalographic (MEG) studies have reported a cost associated with converting an entity denotation to an event. Our study tested whether both types of meaning-shift affect the same computational resources by examining the effects elicited by concealed questions in eye-tracking and MEG. Experiment 1 found evidence from eye-movements that verbs requiring the concealed question interpretation require more processing time than verbs that do not support a shift in meaning. Experiment 2 localized the cost of the concealed question interpretation in the left posterior temporal region, an area distinct from that affected by complement coercion. Experiment 3 presented the critical verbs in isolation and found no posterior temporal effect, confirming that the effect of Experiment 2 reflected sentential, and not lexical-level, processing.
Seppo Vainio; Jukka Hyönä; Anneli Pajunen
In: Memory and Cognition, vol. 36, no. 2, pp. 329–340, 2008.
The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access.
Matteo Valsecchi; Sven Saage; Brian J. White; Karl R. Gegenfurtner
In: Journal of Eye Movement Research, vol. 6, no. 5:2, pp. 1–15, 2008.
Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers, using corpus analysis for the identification of lexical bundles and eye-tracking to measure reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers, which suggests that native-like processing of lexical bundles comes only late in the acquisition process.
Suiping Wang; Hsuan-Chih Chen; Jinmian Yang; Lei Mo
In: Language and Cognitive Processes, vol. 23, no. 2, pp. 241–257, 2008.
An eye-movement study was conducted to examine whether Chinese readers immediately activate and integrate related background information during discourse comprehension. Participants were asked to read short passages, each containing a critical word that fitted well within the local context but was inconsistent or neutral with background information from the early part of the passage. This manipulation of textual consistency produced reliable effects on both first-pass reading fixations in the target region and second-pass reading times in the pre-target and target regions. These results indicate that integration processes start very rapidly in reading text in a writing system with properties that encourage delayed processing, suggesting that immediate processing is likely a universal principle in discourse comprehension.
Tessa Warren; Kerry McConnell; Keith Rayner
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 1001–1010, 2008.
Plausibility violations resulting in impossible scenarios lead to earlier and longer lasting eye movement disruption than violations resulting in highly unlikely scenarios (K. Rayner, T. Warren, B. J. Juhasz, & S. P. Liversedge, 2004; T. Warren & K. McConnell, 2007). This could reflect either differences in the timing of availability of different kinds of information (e.g., selectional restrictions, world knowledge, and context) or differences in their relative power to guide semantic interpretation. The authors investigated eye movements to possible and impossible events in real-world and fantasy contexts to determine when contextual information influences detection of impossibility cued by a semantic mismatch between a verb and an argument. Gaze durations on a target word were longer to impossible events independent of context. However, a measure of the time elapsed from first fixating the target word to moving past it showed disruption only in the real-world context. These results suggest that contextual information did not eliminate initial disruption but moderated it quickly thereafter.
Sarah J. White; Raymond Bertram; Jukka Hyönä
Semantic processing of previews within compound words Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 988–993, 2008.
Previous studies have suggested that previews of words prior to fixation can be processed orthographically, but not semantically, during reading of sentences (K. Rayner, D. A. Balota, & A. Pollatsek, 1986). The present study tested whether semantic processing of previews can occur within words. The preview of the second constituent of 2-constituent Finnish compound nouns was manipulated. The previews were either identical to the 2nd constituent or they were incorrect in the form of a semantically related word, a semantically unrelated word, or a semantically meaningless nonword. The results indicate that previews of 2nd constituents within compound words can be semantically processed. The results have important implications for understanding the nature of preview and compound word processing. These issues are crucial to developing comprehensive models of eye-movement control and word recognition during reading.
Joana Acha; Manuel Perea
In: Cognition, vol. 108, pp. 290–300, 2008.
Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading.
Bernhard Angele; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner
In: Visual Cognition, vol. 16, no. 6, pp. 697–707, 2008.
The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.
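The boundary paradigm used here can be illustrated with a minimal, hypothetical sketch. The names, preview strings, and pixel values below are invented for illustration; a real gaze-contingent implementation performs the swap within a single display refresh while the eyes are mid-saccade, so the reader never sees the change.

```python
def boundary_update(gaze_x, boundary_x, current, preview, target):
    """Gaze-contingent display rule: show the preview string until the
    gaze crosses the invisible boundary, then show the target word.
    Once swapped, the target stays on screen (the swap is one-way)."""
    if current == target or gaze_x >= boundary_x:
        return target
    return preview

# simulated gaze samples (pixels) approaching a boundary at x = 300
samples = [100, 180, 240, 310, 355]
display = "pcrviow"              # incorrect (visually similar) preview
history = []
for x in samples:
    display = boundary_update(x, 300, display, "pcrviow", "preview")
    history.append(display)
# the preview is replaced by the target once the gaze passes x = 300
```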
Jennifer E. Arnold
In: Cognition, vol. 108, no. 1, pp. 69–99, 2008.
Two eye-tracking experiments examine whether adults and 4- and 5-year-old children use the presence or absence of accenting to guide their interpretation of noun phrases (e.g., the bacon) with respect to the discourse context. Unaccented nouns tend to refer to contextually accessible referents, while accented variants tend to be used for less accessible entities. Experiment 1 confirms that accenting is informative for adults, who show a bias toward previously-mentioned objects beginning 300 ms after the onset of unaccented nouns and pronouns. But contrary to findings in the literature, accented words produced no observable bias. In Experiment 2, 4- and 5-year-olds were also biased toward previously-mentioned objects with unaccented nouns and pronouns. This builds on findings of limits on children's on-line reference comprehension [Arnold, J. E., Brown-Schmidt, S., & Trueswell, J. C. (2007). Children's use of gender and order-of-mention during pronoun comprehension. Language and Cognitive Processes], showing that children's interpretation of unaccented nouns and pronouns is constrained in contexts with a single highly accessible object.
Jennifer E. Arnold; Shin Yi C. Lao
In: Language and Cognitive Processes, vol. 23, no. 2, pp. 282–295, 2008.
Research has shown that the comprehension of definite referring expressions (e.g., "the triangle") tends to be faster for "given" (previously mentioned) referents, compared with new referents. This has been attributed to the presence of given information in the consciousness of discourse participants (e.g., Chafe, 1994) suggesting that given is always more accessible. By contrast, we find a bias toward new referents during the on-line comprehension of the direct object in heavy-NP-shifted word orders, e.g., "Put on the star the...." This order tends to be used for new direct objects; canonical unshifted orders are more common with given direct objects. Thus, word order provides probabilistic information about the givenness or newness of the direct object. Results from eyetracking and gating experiments show that the traditional given bias only occurs with unshifted orders; with heavy-NP-shifted orders, comprehenders expect the object to be new, and comprehension for new referents is facilitated.
Xuejun Bai; Guoli Yan; Simon P. Liversedge; Chuanli Zang; Keith Rayner
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 5, pp. 1277–1287, 2008.
Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with unfamiliar word spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading.
M. S. Baptista; C. Bohn; Reinhold Kliegl; Ralf Engbert; Jürgen Kurths
Reconstruction of eye movements during blinks Journal Article
In: Chaos, vol. 18, no. 1, pp. 1–15, 2008.
In eye movement research in reading, the amount of data plays a crucial role for the validation of results. A methodological problem for the analysis of eye movements in reading is blinks, which occur when readers close their eyes. Blinking rate increases with increasing reading time, resulting in high data losses, especially for older adults or reading-impaired subjects. We present a method, based on the symbolic sequence dynamics of the eye movements, that reconstructs the horizontal position of the eyes while the reader blinks. The method makes use of the observed fact that the movements of the eyes before closing or after opening contain information about the eye movements during blinks. Test results indicate that our reconstruction method is superior to methods that use simpler interpolation approaches. In addition, analyses of the reconstructed data show no significant deviation from the usual behavior observed in readers.
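The "simpler interpolation approaches" that the reconstruction method is tested against can be sketched as plain linear interpolation across the blink gap. The snippet below is a hypothetical minimal baseline, not the authors' method; function and variable names are illustrative.

```python
def interpolate_blink(x_positions, blink_start, blink_end):
    """Fill a blink gap (samples recorded as None) by linearly
    interpolating between the last valid sample before the blink
    and the first valid sample after it."""
    x0 = x_positions[blink_start - 1]   # last valid sample before the blink
    x1 = x_positions[blink_end + 1]     # first valid sample after the blink
    span = blink_end - blink_start + 2  # number of inter-sample steps
    filled = list(x_positions)          # work on a copy
    for i in range(blink_start, blink_end + 1):
        t = (i - (blink_start - 1)) / span
        filled[i] = x0 + t * (x1 - x0)
    return filled

# horizontal gaze positions with a blink at samples 2-4
trace = [10.0, 12.0, None, None, None, 20.0, 21.0]
filled = interpolate_blink(trace, 2, 4)
# → [10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 21.0]
```

Such a baseline ignores the oculomotor dynamics around the blink, which is what the symbolic-sequence method exploits.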
Dale J. Barr
In: Cognition, vol. 109, no. 1, pp. 18–40, 2008.
When listeners search for the referent of a speaker's expression, they experience interference from privileged knowledge, knowledge outside of their 'common ground' with the speaker. Evidence is presented that this interference reflects limitations in lexical processing. In three experiments, listeners' eye movements were monitored as they searched for the target of a speaker's referring expression in a display that also contained a phonological competitor (e.g., bucket/buckle). Listeners anticipated that the speaker would refer to something in common ground, but they did not experience less interference from a competitor in privileged ground than from a matched competitor in common ground. In contrast, interference from the competitor was eliminated when it was ruled out by a semantic constraint. These findings support a view of comprehension as relying on multiple systems with distinct access to information and present a challenge for constraint-based views of common ground.
Elizabeth Wonnacott; Elissa L. Newport; Michael K. Tanenhaus
In: Cognitive Psychology, vol. 56, no. 3, pp. 165–209, 2008.
Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.
Mark Yates; John Friend; Danielle M. Ploetz
In: Cognition, vol. 107, no. 2, pp. 685–692, 2008.
Recent research has indicated that phonological neighbors speed processing in a variety of isolated word recognition tasks. Nevertheless, as these tasks do not represent how we normally read, it is not clear if phonological neighborhood has an effect on the reading of sentences for meaning. In the research reported here, we evaluated whether phonological neighborhood density influences reading of target words embedded in sentences. The eye movement data clearly revealed that phonological neighborhood facilitated reading. This was evidenced by shorter fixations for words with large neighborhoods relative to words with small neighborhoods. These results are important in indicating that phonology is a crucial component of reading and that it affects early lexical processing.
Eiling Yee; Sheila E. Blumstein; Julie C. Sedivy
In: Journal of Cognitive Neuroscience, vol. 20, no. 4, pp. 592–612, 2008.
Lexical processing requires both activating stored representations, and selecting among active candidates. The current work uses an eye-tracking paradigm to conduct a detailed temporal investigation of lexical processing. Patients with Broca's and Wernicke's aphasia are studied to shed light on the roles of anterior and posterior brain regions in lexical processing as well as the effects of lexical competition on such processing. Experiment 1 investigates whether objects semantically related to an uttered word are preferentially fixated, e.g., given the auditory target 'hammer', do participants fixate a picture of a nail? Results show that, like normals, both groups of patients are more likely to fixate on an object semantically related to the target than an unrelated object. Experiment 2 explores whether Broca's and Wernicke's aphasics show competition effects when words share onsets with the uttered word, e.g., given the auditory target 'hammer', do participants fixate a picture of a hammock? Experiment 3 investigates whether these patients activate words semantically related to onset competitors of the uttered word, e.g., given the auditory target 'hammock' do participants fixate a nail due to partial activation of the onset competitor hammer? Results of Experiments 2 and 3 show pathological patterns of performance for both Broca's and Wernicke's aphasics under conditions of lexical onset competition. However, the patterns of deficit differed, suggesting different functional and computational roles for anterior and posterior areas in lexical processing. Implications of the findings for the functional architecture of the lexical processing system and its potential neural substrates are considered.
Miao-Hsuan Yen; Jie-Li Tsai; Ovid J. L. Tzeng; Daisy L. Hung
Eye movements and parafoveal word processing in reading Chinese Journal Article
In: Memory and Cognition, vol. 36, no. 5, pp. 1033–1045, 2008.
In two experiments, a parafoveal lexicality effect in the reading of Chinese (a script that does not physically mark word boundaries) was examined. Both experiments used the boundary paradigm (Rayner, 1975) and indicated that the lexical properties of parafoveal words influenced eye movements. In Experiment 1, the preview stimulus was either a real word or a pseudoword. Targets with word previews, even unrelated ones, were more likely to be skipped than were those with pseudowords. In Experiment 2, all of the preview stimuli had the same first character as the target. Target words with same-morpheme previews were fixated for less time than were those with pseudoword previews, suggesting that morphological processing may be involved in extracting information from parafoveal words in Chinese reading. Together, the two experiments dealing with how words are processed in Chinese may provide some constraints on current computational models of reading.
Paola E. Dussias; Nuria Sagarra
In: Bilingualism: Language and Cognition, vol. 10, no. 1, pp. 101–116, 2007.
An eye tracking experiment examined how exposure to a second language (L2) influences sentence parsing in the first language. Forty-four monolingual Spanish speakers, 24 proficient Spanish-English bilinguals with limited immersion experience in the L2 environment and 20 proficient Spanish-English bilinguals with extensive L2 immersion experience read temporarily ambiguous constructions. The ambiguity concerned whether a relative clause (RC) that appeared after a complex noun phrase (NP) was interpreted as modifying the first or the second noun in the complex NP (El policía arrestó a la hermana del criado que estaba enferma desde hacía tiempo). The results showed that whereas the Spanish monolingual speakers and the Spanish-English bilinguals with limited exposure reliably attached the relative clause to the first noun, the Spanish-English bilinguals with extensive exposure attached the relative clause to the second noun. Results are discussed in terms of the models of sentence parsing most consistent with the findings.
Wouter Duyck; Eva Van Assche; Denis Drieghe; Robert J. Hartsuiker
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 4, pp. 663–679, 2007.
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment, Dutch-English bilinguals performing a 2nd language (L2) lexical decision task were faster to recognize identical and nonidentical cognate words (e.g., banaan-banana) presented in isolation than control words. A second experiment replicated this effect when the same set of cognates was presented as the final words of low-constraint sentences. In a third experiment that used eyetracking, the authors showed that early target reading time measures also yield cognate facilitation but only for identical cognates. These results suggest that a sentence context may influence, but does not nullify, cross-lingual lexical interactions during early visual word recognition by bilinguals.
Géry D'Ydewalle; Wim De Bruycker
In: European Psychologist, vol. 12, no. 3, pp. 196–205, 2007.
Eye movements of children (Grade 5–6) and adults were monitored while they were watching a foreign language movie with either standard (foreign language soundtrack and native language subtitling) or reversed (foreign language subtitles and native language soundtrack) subtitling. With standard subtitling, reading behavior in the subtitle was observed, but there was a difference between one- and two-line subtitles. As two lines of text contain verbal information that cannot easily be inferred from the pictures on the screen, more regular reading occurred; a single text line is often redundant to the information in the picture, and accordingly less reading of one-line text was apparent. Reversed subtitling showed even more irregular reading patterns (e.g., more subtitles skipped, fewer fixations, longer latencies). No substantial age differences emerged, except that children took longer to shift attention to the subtitle at its onset, and showed longer fixations and shorter saccades in the text. On the whole, the results demonstrated the flexibility of the attentional system and its tuning to the several information sources available (image, soundtrack, and subtitles).
Julie A. Van Dyke
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 2, pp. 407–430, 2007.
Evidence from 3 experiments reveals interference effects from structural relationships that are inconsistent with any grammatical parse of the perceived input. Processing disruption was observed when items occurring between a head and a dependent overlapped with either (or both) syntactic or semantic features of the dependent. Effects of syntactic interference occur in the earliest online measures in the region where the retrieval of a long-distance dependent occurs. Semantic interference effects occur in later online measures at the end of the sentence. Both effects endure in offline comprehension measures, suggesting that interfering items participate in incorrect interpretations that resist reanalysis. The data are discussed in terms of a cue-based retrieval account of parsing, which reconciles the fact that the parser must violate the grammar in order for these interference effects to occur. Broader implications of this research indicate a need for a precise specification of the interface between the parsing mechanism and the memory system that supports language comprehension.
Stephani Foraker; Brian McElree
In: Journal of Memory and Language, vol. 56, no. 3, pp. 357–383, 2007.
A prominent antecedent facilitates anaphor resolution. Speed-accuracy tradeoff modeling in Experiments 1 and 3 indicated that clefting did not affect the speed of accessing an antecedent representation, which is inconsistent with claims that discourse-focused information is actively maintained in focal attention [e.g., Gundel, J. K. (1999). On different kinds of focus. In P. Bosch & R. van der Sandt, (Eds.), Focus: Linguistic, cognitive, and computational perspectives. Cambridge: Cambridge University Press]. Rather, clefting simply increased the likelihood of retrieving the antecedent representation, suggesting that clefting only increases the strength of a representation in memory. Eye fixation measures in Experiment 2 showed that clefting did not affect early bonding of the pronoun and antecedent, but did ease later integration. Collectively, the results indicate that clefting made antecedent representations more distinctive in working memory, hence more available for subsequent discourse operations. Pronoun type also affected resolution processes. Gendered pronouns (he or she) were interpreted more accurately than an ungendered pronoun (it), and in one case, earlier in time-course. We argue that both effects are due to the greater ambiguity of it, as a cue to retrieve the correct antecedent representation.
James M. McQueen; Malte C. Viebahn
In: Quarterly Journal of Experimental Psychology, vol. 60, no. 5, pp. 661–671, 2007.
Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
Antje S. Meyer; Eva Belke; Christine Häcker; Linda Mortensen
Use of word length information in utterance planning Journal Article
In: Journal of Memory and Language, vol. 57, no. 2, pp. 210–231, 2007.
Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
Antje Nuthmann; Ralf Engbert; Reinhold Kliegl
The IOVP effect in mindless reading: Experiment and modeling Journal Article
In: Vision Research, vol. 47, no. 7, pp. 990–1002, 2007.
Fixation durations in reading are longer for within-word fixation positions close to word center than for positions near word boundaries. This counterintuitive result was termed the Inverted-Optimal Viewing Position (IOVP) effect. We proposed an explanation of the effect based on error-correction of mislocated fixations [Nuthmann, A., Engbert, R., & Kliegl, R. (2005). Mislocated fixations during reading and the inverted optimal viewing position effect. Vision Research, 45, 2201-2217], that suggests that the IOVP effect is not related to word processing. Here we demonstrate the existence of an IOVP effect in "mindless reading", a z-string scanning task. We compare the results from experimental data with results obtained from computer simulations of a simple model of the IOVP effect and discuss alternative accounts. We conclude that oculomotor errors, which often induce mislocalized fixations, represent the most important source of the IOVP effect.
Ian Cunnings; Harald Clahsen
In: Cognition, vol. 104, no. 3, pp. 476–494, 2007.
Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater vs. mice eater), a contrast that has been ascribed to the semantics of compounds. Measuring eye-movements during reading, this study examined how morphological and semantic information become available over time during the processing of a compound. We found that the morphological constraint affected both early and late eye-movement measures, whereas the semantic constraint for singular non-heads only affected late measures of processing. These results indicate that morphological information becomes available earlier than semantic information during the processing of compounds.
Delphine Dahan; M. Gareth Gaskell
In: Journal of Memory and Language, vol. 57, no. 4, pp. 483–501, 2007.
Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants' responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
Meredyth Daneman; Tracy Lennertz; Brenda Hannon
Shallow semantic processing of text: Evidence from eye movements Journal Article
In: Language and Cognitive Processes, vol. 22, no. 1, pp. 83–105, 2007.
Evidence for shallow semantic processing has depended on paradigms that required readers to explicitly report whether they noticed an anomalous noun phrase (NP) after reading text such as 'Amanda was bouncing all over because she had taken too many tranquillizing sedatives in one day'. We replicated previous research by showing that readers frequently fail to report the anomaly, and that less-skilled readers have particular difficulty reporting locally anomalous NPs such as tranquillizing stimulants. In addition, we examined the time course of anomaly detection by monitoring readers' eye movements for spontaneous disruptions when encountering the anomalous NPs. The eye fixation data provided evidence for on-line detection of anomalies; however, the detection was delayed. Readers who later reported the anomaly did not spend longer processing the anomalous NP when first encountering it; however, they did spend longer refixating it. Our results challenge orthodox models of comprehension that assume that semantic analysis is exhaustive and complete.
Denis Drieghe; Timothy Desmet; Marc Brysbaert
In: British Journal of Psychology, vol. 98, no. 1, pp. 157–171, 2007.
The probability of skipping a word is influenced by its processing ease. For instance, a word that is predictable from the preceding context is skipped more often than an unpredictable word. A meta-analysis of studies examining this predictability effect reported effect sizes ranging from 0 to 13%, with an average of 8%. One study does not fit within this picture and reported 23% more skipping of Dutch pronouns in sentences in which the pronoun had no disambiguating value (e.g. 'Mary was envious of Helen because she never looked so good') than in sentences where it did have a disambiguating value (e.g. 'Mary was envious of Albert because she never looked so good'). We re-examined this ambiguity in Dutch using a task that more closely resembles normal reading and observed only a 9% difference in skipping of the pronoun, bringing this linguistic effect in line with the other findings.
Johanna K. Kaakinen; Jukka Hyönä
Perspective effects in repeated reading: An eye movement study Journal Article
In: Memory and Cognition, vol. 35, no. 6, pp. 1323–1336, 2007.
The present study examined the influence of perspective instructions on online processing of expository text during repeated reading. Sixty-two participants read either a high or a low prior knowledge (HPK vs. LPK) text twice from a given perspective while their eye movements were recorded. They switched perspective before a third reading. Reading perspective affected the first-pass reading and also increased sentence wrap-up processing time in the perspective-relevant sentences. Prior knowledge facilitated the recognition of the (ir)relevance of text information and resulted in relatively earlier perspective effects in the HPK versus LPK text. Repeated reading facilitated processing, as indicated by all eye movement measures. After the perspective switch, a repetition benefit was observed for the previously relevant text information, whereas a repetition cost was found for the previously irrelevant text information. These results indicate that reading perspective and prior knowledge have a significant influence on how readers allocate visual attention during reading.
Johanna K. Kaakinen; Jukka Hyönä
In: Memory, vol. 15, no. 6, pp. 634–646, 2007.
Strategy use in the traditional reading span test was examined by recording participants' eye movements during the task (Experiment 1) and by interviewing participants about their strategy use (Experiment 2). In Experiment 1, no differences between individuals with a low, medium, and high span were observed in how they distributed processing time between task elements. In all three groups, fixation times on words up to the to-be-remembered (TBR) word became shorter and the time spent on the TBR longer as memory load in the task increased. The results of Experiment 2, however, show that span groups differ in the use of memory encoding strategies: individuals with a low span use mainly rehearsal, whereas individuals with a high span use almost exclusively semantic elaboration. The results indicate that the use of elaborative strategies may enhance span performance but that not all individuals are necessarily able to use such strategies efficiently.
Reinhold Kliegl; Sarah Risse; Jochen Laubrock
Preview benefit and parafoveal-on-foveal effects from word n + 2 Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 33, no. 5, pp. 1250–1255, 2007.
Using the gaze-contingent boundary paradigm with the boundary placed after word n, the experiment manipulated preview of word n + 2 for fixations on word n. There was no preview benefit for 1st-pass reading on word n + 2, replicating the results of K. Rayner, B. J. Juhasz, and S. J. Brown (2007), but there was a preview benefit on the 3-letter word n + 1, that is, after the boundary but before word n + 2. Additionally, both word n + 1 and word n + 2 exhibited parafoveal-on-foveal effects on word n. Thus, during a fixation on word n and given a short word n + 1, some information is extracted from word n + 2, supporting the hypothesis of distributed processing in the perceptual span.
Pia Knoeferle; Matthew W. Crocker
In: Journal of Memory and Language, vol. 57, no. 4, pp. 519–543, 2007.
Evidence from recent experiments that monitored attention in clipart scenes during spoken comprehension suggests that people preferably rely on non-stereotypical depicted events over stereotypical thematic knowledge for incremental interpretation. The Coordinated Interplay Account [Knoeferle, P., & Crocker, M. W. (2006). The coordinated interplay of scene, utterance, and world knowledge: evidence from eye tracking. Cognitive Science, 30, 481-529.] accounts for this preference through referential processing (e.g., the verb mediates a depicted event) and the preferred use of scene event information that is associated with the referent (e.g., the agent of the depicted event). Three eye-tracking experiments examined the generality of this account. While the rapid use of depicted events was replicated in all three studies, the preference to rely on them was modulated by the decay of events that were no longer co-present. Our findings motivate the extension of the Coordinated Interplay Account with an explicit working memory. The coordinated interplay mechanism, together with working memory and decay, is shown to account for the influence of scene-derived versus stored knowledge both when events are co-present and when they have recently been perceived.
Kerry Ledoux; Peter C. Gordon; C. Christine Camblin; Tamara Y. Swaab
In: Memory and Cognition, vol. 35, no. 4, pp. 801–815, 2007.
The use of repeated expressions to establish coreference allows an investigation of the relationship between basic processes of word recognition and higher level language processes that involve the integration of information into a discourse model. In two experiments on reading, we used eye tracking and event-related potentials to examine whether repeated expressions that are coreferential within a local discourse context show the kind of repetition priming that is shown in lists of words. In both experiments, the effects of lexical repetition were modulated by the effects of local discourse context that arose from manipulations of the linguistic prominence of the antecedent of a coreferentially repeated name. These results are interpreted within the context of discourse prominence theory, which suggests that processes of coreferential interpretation interact with basic mechanisms of memory integration during the construction of a model of discourse.
Christine Burton; Meredyth Daneman
In: Reading Psychology, vol. 28, no. 2, pp. 163–186, 2007.
Although working memory capacity is an important contributor to reading comprehension performance, it is not the only contributor. Studies have shown that epistemic knowledge (or knowledge about knowledge and learning) is related to comprehension success and may enable low-span readers to compensate for their limited resources. By comparing the eye movements of epistemically mature versus epistemically naïve low-span readers, this study provided evidence for how the compensation occurs. Metacognitively mature low-span readers spent more time engaged in selective backtracking to unfamiliar and task-relevant text information. These selective look-backs would have reinstated the difficult and important information into working memory, thereby allowing these readers to offset some of the disadvantages of a limited temporary storage capacity.
C. Christine Camblin; Peter C. Gordon; Tamara Y. Swaab
In: Journal of Memory and Language, vol. 56, no. 1, pp. 103–128, 2007.
Five experiments used ERPs and eye tracking to determine the interplay of word-level and discourse-level information during sentence processing. Subjects read sentences that were locally congruent but whose congruence with discourse context was manipulated. Furthermore, critical words in the local sentence were preceded by a prime word that was associated or not. Violations of discourse congruence had early and lingering effects on ERP and eye-tracking measures. This indicates that discourse representations have a rapid effect on lexical semantic processing even in locally congruous texts. In contrast, effects of association were more malleable: Very early effects of associative priming were only robust when the discourse context was absent or not cohesive. Together these results suggest that the global discourse model quickly influences lexical processing in sentences, and that spreading activation from associative priming does not contribute to natural reading in discourse contexts.
Aoju Chen; Els Den Os; Jan Peter De Ruiter
In: The Linguistic Review, vol. 24, no. 2-3, pp. 317–344, 2007.
Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
Ed H. Chi; Michelle Gumbrecht; Lichan Hong
Visual foraging of highlighted text: An eye-tracking study Journal Article
In: Human-Computer Interaction, pp. 589–598, 2007.
The wide availability of digital reading material online is causing a major shift in everyday reading activities. Readers are skimming instead of reading in depth [Nielsen 1997]. Highlights are increasingly used in digital interfaces to direct attention toward relevant passages within texts. In this paper, we study the eye-gaze behavior of subjects using both keyword highlighting and ScentHighlights [Chi et al. 2005]. In this first eye-tracking study of highlighting interfaces, we show that there is direct evidence of the von Restorff isolation effect [von Restorff 1933] in the eye-tracking data, in that subjects focused on highlighted areas when highlighting cues are present. The results point to future design possibilities in highlighting interfaces.
In: Journal of Memory and Language, vol. 57, no. 2, pp. 232–251, 2007.
The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107-142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88-125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
Annie Roy-Charland; Jean Saint-Aubin; Raymond M. Klein; Michael A. Lawrence
In: Perception and Psychophysics, vol. 69, no. 3, pp. 324–337, 2007.
When asked to detect target letters while reading a text, participants miss more letters in frequently occurring function words than in less frequent content words. To account for this pattern of results, known as the missing-letter effect, Greenberg, Healy, Koriat, and Kreiner proposed the guidance-organization (GO) model, which integrates the two leading models of the missing-letter effect while incorporating innovative assumptions based on the literature on eye movements during reading. The GO model was evaluated by monitoring the eye movements of participants while they searched for a target letter in a continuous text display. Results revealed the usual missing-letter effect, and many empirical benchmark effects in eye movement literature were observed. However, contrary to the predictions of the GO model, response latencies were longer for function words than for content words. Alternative models are discussed that can accommodate both error and response latency data for the missing-letter effect.
Miia Sainio; Jukka Hyönä; Kazuo Bingushi; Raymond Bertram
In: Vision Research, vol. 47, no. 20, pp. 2575–2584, 2007.
The present study investigated the role of interword spacing in a naturally unspaced language, Japanese. Eye movements were registered of native Japanese readers reading pure Hiragana (syllabic) and mixed Kanji-Hiragana (ideographic and syllabic) text in spaced and unspaced conditions. Interword spacing facilitated both word identification and eye guidance when reading syllabic script, but not when the script contained ideographic characters. We conclude that in reading Hiragana interword spacing serves as an effective segmentation cue. In contrast, spacing information in mixed Kanji-Hiragana text is redundant, since the visually salient Kanji characters serve as effective segmentation cues by themselves.
Gerry T. M. Altmann; Yuki Kamide
In: Journal of Memory and Language, vol. 57, no. 4, pp. 502–518, 2007.
Two experiments explored the representational basis for anticipatory eye movements. Participants heard 'the man will drink ...' or 'the man has drunk ...' (Experiment 1) or 'the man will drink all of ...' or 'the man has drunk all of ...' (Experiment 2). They viewed a concurrent scene depicting a full glass of beer and an empty wine glass (amongst other things). There were more saccades towards the empty wine glass in the past tensed conditions than in the future tense conditions; the converse pattern obtained for looks towards the full glass of beer. We argue that these anticipatory eye movements reflect sensitivity to objects' affordances, and develop an account of the linkage between language processing and visual attention that can account not only for looks towards named objects, but also for those cases (including anticipatory eye movements) where attention is directed towards objects that are not being named.
Manabu Arai; Roger P. G. Gompel; Christoph Scheepers
Priming ditransitive structures in comprehension Journal Article
In: Cognitive Psychology, vol. 54, no. 3, pp. 218–250, 2007.
Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.
Jennifer E. Arnold; Carla L. Hudson Kam; Michael K. Tanenhaus
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 5, pp. 914–930, 2007.
Eye-tracking and gating experiments examined reference comprehension with fluent (Click on the red. . .) and disfluent (Click on [pause] thee uh red . . .) instructions while listeners viewed displays with 2 familiar (e.g., ice cream cones) and 2 unfamiliar objects (e.g., squiggly shapes). Disfluent instructions made unfamiliar objects more expected, which influenced listeners' on-line hypotheses from the onset of the color word. The unfamiliarity bias was sharply reduced by instructions that the speaker had object agnosia, and thus difficulty naming familiar objects (Experiment 2), but was not affected by intermittent sources of speaker distraction (beeps and construction noises; Experiment 3). The authors conclude that listeners can make situation-specific inferences about likely sources of disfluency, but there are some limitations to these attributions.
Jean-Baptiste Bernard; Anne-Catherine Scherlen; Eric Castet
In: Vision Research, vol. 47, no. 28, pp. 3447–3459, 2007.
Crowding is thought to be one potent limiting factor of reading in peripheral vision. While several studies investigated how crowding between horizontally adjacent letters or words can influence eccentric reading, little attention has been paid to the influence of vertically adjacent lines of text. The goal of this study was to examine the dependence of page mode reading performance (speed and accuracy) on interline spacing. A gaze-contingent visual display was used to simulate a visual central scotoma while normally sighted observers read meaningful French sentences following MNREAD principles. The sensitivity of this new material to low-level factors was confirmed by showing strong effects of perceptual learning, print size and scotoma size on reading performance. In contrast, reading speed was only slightly modulated by interline spacing even for the largest range tested: a 26% gain for a 178% increase in spacing. This modest effect sharply contrasts with the dramatic influence of vertical word spacing found in a recent RSVP study. This discrepancy suggests either that vertical crowding is minimized when reading meaningful sentences, or that the interaction between crowding and other factors such as attention and/or visuo-motor control is dependent on the paradigm used to assess reading speed (page vs. RSVP mode).
Falk Huettig; Gerry T. M. Altmann
In: Visual Cognition, vol. 15, no. 8, pp. 985–1018, 2007.
Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing "snake", participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word, sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of "snake", but they did not look at the visually similar cable until hearing "snake". Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness.
Falk Huettig; James M. McQueen
In: Journal of Memory and Language, vol. 57, no. 4, pp. 460–482, 2007.
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker', for example, the display contained phonological (a beaver, bever), shape (a bobbin, klos), and semantic (a fork, vork) competitors. When the display appeared at sentence onset, fixations to phonological competitors preceded fixations to shape and semantic competitors. When display onset was 200 ms before (e.g.) beker, fixations were directed to shape and then semantic competitors, but not phonological competitors. In Experiments 3 and 4, displays contained the printed names of the previously-pictured entities; only phonological competitors were fixated preferentially. These findings suggest that retrieval of phonological, shape and semantic knowledge in the spoken-word and picture-recognition systems is cascaded, and that visual attention shifts are co-determined by the time-course of retrieval of all three knowledge types and by the nature of the information in the visual environment.
In: Journal of Psycholinguistic Research, vol. 36, no. 6, pp. 431–456, 2007.
Two eye-tracking studies assessed effects of grammatical and conceptual gender cues in generic role name processing in German. Participants read passages about a social or occupational group introduced by way of a generic role name (e.g., Soldaten/soldiers, Künstler/artists). Later in the passage the gender of this group was specified by the anaphoric expression diese Männer/these men or diese Frauen/these women. Testing masculine generic role names of male, female or neutral conceptual gender (Exp. 1) showed that a gender mismatch between the role name's conceptual gender and the anaphor significantly slowed reading immediately before and after the anaphoric noun. A mismatch between the antecedent's grammatical gender and the anaphor slowed down the reading of the anaphoric noun itself. Testing grammatically gender-unmarked role names (Exp. 2) revealed a general male bias in participants' understanding, irrespective of grammatical or conceptual gender. The experiments extend previous findings on gender effects to non-referential role names and generic contexts. Theoretical aspects of gender and plural reference as well as gender information in mental models are discussed.
Frank Joosten; Gert De Sutter; Denis Drieghe; Stefan Grondelaers; Robert J. Hartsuiker; Dirk Speelman
Dutch collective nouns and conceptual profiling Journal Article
In: Linguistics, vol. 45, no. 1, pp. 85–132, 2007.
Collective nouns such as committee, family, or team are conceptually (and in English also syntactically) complex in the sense that they are both singular ("one") and plural ("more than one"): they refer to a multiplicity that is conceptualized as a unity. In this article, which focuses on Dutch collective nouns, it is argued that some collective nouns are rather "one", whereas others are rather "more than one". Collective nouns are shown to be different from one another in member level accessibility. Whereas all collective nouns have both a conceptual collection level ("one") and a conceptual member level ("more than one"), the latter is not always conceptually profiled (i.e., focused on) to the same extent. A gradient is sketched in which collective nouns such as bemanning ('crew') (member level highly accessible) and vereniging ('association') (member level scarcely accessible) form the extremes. Arguments in favor of the conceptual phenomenon of variable member level accessibility derive from an analysis of property distribution, from corpus research on verbal and pronominal singular-plural variation, and from a psycholinguistic eye-tracking experiment.
The parser doesn't ignore intransitivity, after all Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 3, pp. 550–569, 2007.
Several previous studies (B. C. Adams, C. Clifton, & D. C. Mitchell, 1998; D. C. Mitchell, 1987; R. P. G. van Gompel & M. J. Pickering, 2001) have explored the question of whether the parser initially analyzes a noun phrase that follows an intransitive verb as the verb's direct object. Three eye-tracking experiments examined this issue in more detail. Experiment 1 replicated the finding that readers experience difficulty on this noun phrase in normal reading and found that this difficulty occurs even with intransitive verbs for which a direct object is categorically prohibited. Experiment 2, however, demonstrated that this effect is not due to syntactic misanalysis but to disruption that occurs when a comma is absent at a subordinate clause/main clause boundary. Experiment 3 replicated the finding (M. J. Pickering & M. J. Traxler, 2003; M. J. Traxler & M. J. Pickering, 1996) that when a noun phrase "filler" is an implausible direct object for an optionally transitive relative clause verb, processing difficulty results; however, there was no evidence for such difficulty when the relative clause verb was strictly intransitive. Taken together, the 3 experiments undermine the support for the claim that the parser initially ignores a verb's subcategorization restrictions.
Yung Chi Sung; Da-Lun Tang
In: Consciousness and Cognition, vol. 16, no. 2, pp. 339–348, 2007.
The current study aims to separate conscious and unconscious behaviors by employing both online and offline measures while the participants were consciously performing a task. Using an eye-movement tracking paradigm, we observed participants' response patterns for distinguishing within-word-boundary and across-word-boundary reverse errors while reading Chinese sentences (also known as the "word inferiority effect"). The results showed that when the participants consciously detected errors, their gaze time for target words associated with across-word-boundary reverse errors was significantly longer than that for target words associated with within-word-boundary reverse errors. Surprisingly, the same gaze time pattern was found even when the readers were not consciously aware of the reverse errors. The results were statistically robust, providing converging evidence for the feasibility of our experimental paradigm in decoupling offline behaviors and the online, automatic, and unconscious aspects of cognitive processing in reading.
Yoonhyoung Lee; Hanjung Lee; Peter C. Gordon
In: Cognition, vol. 104, no. 3, pp. 495–534, 2007.
The nature of the memory processes that support language comprehension and the manner in which information packaging influences online sentence processing were investigated in three experiments that used eye-tracking during reading to measure the ease of understanding complex sentences in Korean. All three experiments examined reading of embedded complement sentences; the third experiment additionally examined reading of sentences with object-modifying, object-extracted relative clauses. In Korean, both of these structures place two NPs with nominative case marking early in the sentence, with the embedded and matrix verbs following later. The type (pronoun, name or description) of these two critical NPs was varied in the experiments. When the initial NPs were of the same type, comprehension was slowed after participants had read the sentence-final verbs, a finding that supports the view that working memory in language comprehension is constrained by similarity-based interference during the retrieval of information necessary to determine the syntactic or semantic relations between noun phrases and verb phrases. Ease of comprehension was also influenced by the association between type of NP and syntactic position, with the best performance being observed when more definite NPs (pronouns and names) were in a prominent syntactic position (e.g., matrix subject) and less definite NPs (descriptions) were in a non-prominent syntactic position (embedded subject). This pattern provides evidence that the interpretation of sentences is facilitated by consistent packaging of information in different linguistic elements.
James S. Magnuson; James A. Dixon; Michael K. Tanenhaus; Richard N. Aslin
In: Cognitive Science, vol. 31, no. 1, pp. 133–156, 2007.
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set: precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and 2 partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.
Ulrich W. Weger; Albrecht W. Inhoff
In: Memory and Cognition, vol. 35, no. 6, pp. 1293–1306, 2007.
To examine the nature of the information that guides eye movements to previously read text during reading (regressions), we used a relatively novel technique to request a regression to a particular target word when the eyes reached a predefined location during sentence reading. A regression was to be directed to a close or a distant target when either the first or the second line of a complex two-line sentence was read. In addition, conditions were created that pitted effects of spatial and linguistic distance against each other. Initial regressions were more accurate when the target was spatially near, and effects of spatial distance dominated effects of verbal distance. Initial regressions rarely moved the eyes onto the target, however, and subsequent "corrective" regressions that homed in on the target were subject to general linguistic processing demands, being more accurate during first-line reading than during second-line reading. The results suggest that spatial and verbal memory guide regressions in reading. Initial regressions are primarily guided by fixation-centered spatial memory, and corrective regressions are primarily guided by linguistic knowledge.
Meredyth Daneman; Brenda Hannon; Christine Burton
In: Discourse Processes, vol. 42, no. 2, pp. 177–203, 2006.
After reading text such as Amanda was bouncing all over because she had taken too many tranquilizing sedatives in one day, young adult readers frequently fail to report that they noticed the anomalous noun phrase (NP). Although young readers of all skill levels are susceptible to this kind of shallow semantic processing, less-skilled readers are more susceptible and have particular difficulty detecting locally anomalous NPs such as tranquilizing stimulants. This article explores whether aging has a similar impact on a reader's propensity toward shallow semantic processing. Postreading responses showed that older readers frequently failed to report the anomalous NPs, but no more frequently than did younger readers. The eye-fixation behavior revealed that older readers actually detected the locally coherent anomalous NPs (e.g., tranquilizing sedatives) sooner than did younger readers, but had to allocate disproportionately more processing resources looking back to the locally incoherent anomalous NPs (tranquilizing stimulants) to achieve comparable levels of detection success as their younger counterparts.
Sarah C. Creel; Michael K. Tanenhaus; Richard N. Aslin
Consequences of lexical stress on learning an artificial lexicon Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 32, no. 1, pp. 15–32, 2006.
Four experiments examined effects of lexical stress on lexical access for recently learned words. Participants learned artificial lexicons (48 words) containing phonologically similar items and were tested on their knowledge in a 4-alternative forced-choice (4AFC) referent-selection task. Lexical stress differences did not reduce confusions between cohort items: KAdazu and kaDAzeI were confused with one another in a 4AFC task and in gaze fixations as often as BOsapeI and BOsapaI. However, lexical stress did affect the relative likelihood of stress-initial confusions when words were embedded in running nonsense speech. Words with medial stress, regardless of initial vowel quality, were more prone to confusions than words with initial stress. The authors concluded that noninitial stress, particularly when word segmentation is difficult, may serve as “noise” that alters lexical learning and lexical access.
Michael D. Crossland; Gary S. Rubin
In: Vision Research, vol. 46, no. 4, pp. 590–597, 2006.
Reduced perceptual span is one factor which limits reading speed in patients with macular disease. This study measured the perceptual span and the number of saccades to locate a target in 18 patients with macular disease and seven control subjects on two occasions separated by up to 12 months. Perceptual span changed by up to two letters. Changes in perceptual span were significantly related to changes in reading speed (r² = 0.43, p < 0.005), and were independent of changes in the number of saccades used to observe a target (r² = 0.003).
Anne Cutler; Andrea Weber; Takashi Otake
In: Journal of Phonetics, vol. 34, no. 2, pp. 269–284, 2006.
The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners' mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic-phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
Timothy Desmet; Constantijn De Baecke; Denis Drieghe; Marc Brysbaert; Wietske Vonk
In: Language and Cognitive Processes, vol. 21, no. 4, pp. 453–485, 2006.
Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., 'Someone shot the servant of the actress who was on the balcony') was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
Andrea Weber; Bettina Braun; Matthew W. Crocker
In: Language and Speech, vol. 49, no. 3, pp. 367–392, 2006.
In two eye-tracking experiments the role of contrastive pitch accents during the on-line determination of referents was examined. In both experiments, German listeners looked earlier at the picture of a referent belonging to a contrast pair (red scissors, given purple scissors) when instructions to click on it carried a contrastive accent on the color adjective (L + H*) than when the adjective was not accented. In addition to this prosodic facilitation, a general preference to interpret adjectives contrastively was found in Experiment 1: Along with the contrast pair, a noncontrastive referent was displayed (red vase) and listeners looked more often at the contrastive referent than at the noncontrastive referent even when the adjective was not focused. Experiment 2 differed from Experiment 1 in that the first member of the contrast pair (purple scissors) was introduced with a contrastive accent, thereby strengthening the salience of the contrast. In Experiment 2, listeners no longer preferred a contrastive interpretation of adjectives when the accent in a subsequent instruction was not contrastive. In sum, the results support both an early role for prosody in reference determination and an interpretation of contrastive focus that is dependent on preceding prosodic context.
Andrea Weber; Martine Grice; Matthew W. Crocker
In: Cognition, vol. 99, no. 2, pp. B63–B72, 2006.
An eye-tracking experiment examined whether prosodic cues can affect the interpretation of grammatical functions in the absence of clear morphological information. German listeners were presented with scenes depicting three potential referents while hearing temporarily ambiguous SVO and OVS sentences. While case marking on the first noun phrase (NP) was ambiguous, clear case marking on the second NP disambiguated sentences towards SVO or OVS. Listeners interpreted case-ambiguous NP1s more often as Subject, and thus expected an Object as upcoming argument, only when sentence beginnings carried an SVO-type intonation. This was revealed by more anticipatory eye movements to suitable Patients (Objects) than Agents (Subjects) in the visual scenes. No such preference was found when sentence beginnings had an OVS-type intonation. Prosodic cues were integrated rapidly enough to affect listeners' interpretation of grammatical function before disambiguating case information was available. We conclude that in addition to manipulating attachment ambiguities, prosody can influence the interpretation of constituent order ambiguities.
Ulrich W. Weger; Albrecht W. Inhoff
In: Psychological Science, vol. 17, no. 3, pp. 187–191, 2006.
A spatial cuing task was used to identify two types of readers, those with a relatively fast and those with a relatively slow buildup of inhibition of return (IOR). Backward-directed eye movements (regressions) during sentence reading were then examined as a function of the two IOR types. The results revealed that readers with fast IOR executed larger regressions than readers with slow IOR, as they directed the eyes away from the most recently attended area of text. Forward-directed eye movements (saccades), by contrast, were not a function of IOR type. Ease of sentence comprehension influenced the size of regressions, but this effect was also independent of IOR type. Multiple mechanisms of spatial attention, including IOR, bias eye movements toward upcoming words in the text during reading.
Scott A. McDonald
In: Visual Cognition, vol. 13, no. 1, pp. 89–98, 2006.
Word length is an important determinant of eye movement behaviour in reading. The current study is the first attempt to disconfound a word's number of letters from its spatial extent. In a sentence-reading experiment using closely matched stimuli, clear differences were observed between target words that subtended the same visual angle but differed in number of letters: the more letters in the word, the more fixations made on the word, and the longer the duration of these fixations. Analyses of the full set of sentence words confirmed these results for a wider range of word lengths, and are consistent with a role for number-of-letters distinct from spatial extent. The most plausible explanation for these findings is that long words are subject to a greater degree of visual crowding, which is costly for both temporal and spatial eye movement systems.
Scott A. McDonald
In: Vision Research, vol. 46, no. 26, pp. 4416–4424, 2006.
Previous research has demonstrated that reading is less efficient when parafoveal visual information about upcoming words is invalid or unavailable; the benefit from a valid preview is realised as reduced reading times on the subsequently foveated word, and has been explained with reference to the allocation of attentional resources to parafoveal word(s). This paper presents eyetracking evidence that preview benefit is obtained only for words that are selected as the saccade target. Using a gaze-contingent display change paradigm (Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81), the position of the triggering boundary was set near the middle of the pretarget word. When a refixation saccade took the eye across the boundary in the pretarget word, there was no reliable effect of the validity of the target word preview. However, when the triggering boundary was positioned just after the pretarget word, a robust preview benefit was observed, replicating previous research. The current results complement findings from studies of basic visual function, suggesting that for the case of preview benefit in reading, attentional and oculomotor processes are obligatorily coupled.
Scott A. McDonald; Galina Spitsyna; Richard C. Shillcock; Richard J. S. Wise; Alexander P. Leff
In: Brain, vol. 129, no. 1, pp. 158–167, 2006.
Patients with an acquired homonymous hemianopia often adapt over a period of a few months to compensate for some of the impairments caused by their visual field defect. Changes in their eye movement patterns have been demonstrated as performance on visual tasks improves with time; however, these patients often complain of persistent text reading problems. Using a video-based eye-movement tracking system, we investigated the text reading behaviour of patients with established hemianopic alexia (>6 months post deficit), a condition affecting left-to-right readers, with a homonymous field defect that encroaches into their right foveal/parafoveal visual field. Word-based analyses of text reading are standard in experiments involving normal readers, but this is the first time these methods have been extended to patients with hemianopic alexia. Using this method, we compared the patients' reading scanpaths to those generated by normal controls reading the same passages, and a random model generated by matching the patients' eye movement data to random permutations of the text they read. We demonstrate that patients adopt an inefficient reading strategy, fixating to the left of the preferred viewing location of words of four letters and longer. Fixating to the left of the normal preferred viewing location not only results in less of the fixated word being processed by the language system; ensuing fixations are also more likely to land within the same word (a refixation). It is this refixation rate that is the main factor in slowing reading times in these patients. Our data suggest that patients are able to extract some useful visual information from text to aid the planning of reading scanpaths, as their behaviour differs critically from the random model. Potential reasons for this patient group failing to produce an effective reading strategy are discussed.
Jong-yoon Myung; Sheila E. Blumstein; Julie C. Sedivy
In: Cognition, vol. 98, no. 3, pp. 223–243, 2006.
Two experiments investigated sensory/motor-based functional knowledge of man-made objects: manipulation features associated with the actual usage of objects. In Experiment 1, a series of prime-target pairs was presented auditorily, and participants were asked to make a lexical decision on the target word. Participants made a significantly faster decision about the target word (e.g. 'typewriter') following a related prime that shared manipulation features with the target (e.g. 'piano') than an unrelated prime (e.g. 'blanket'). In Experiment 2, participants' eye movements were monitored when they viewed a visual display on a computer screen while listening to a concurrent auditory input. Participants were instructed to simply identify the auditory input and touch the corresponding object on the computer display. Participants fixated an object picture (e.g. "typewriter") related to a target word (e.g. 'piano') significantly more often than an unrelated object picture (e.g. "bucket") as well as a visually matched control (e.g. "couch"). Results of the two experiments suggest that manipulation knowledge of words is retrieved without conscious effort and that manipulation knowledge constitutes a part of the lexical-semantic representation of objects.
Guoli Yan; Hongjie Tian; Xuejun Bai; Keith Rayner
In: British Journal of Psychology, vol. 97, no. 2, pp. 259–268, 2006.
Eye movements of Chinese readers were monitored as they read sentences containing target words whose predictability from the preceding context was high, medium, or low. Readers fixated for less time on high- and medium-predictable target words than on low-predictable target words. They were also more likely to fixate on low-predictable target words than on high- or medium-predictable target words. The results were highly similar to those of a study by Rayner and Well (1996) with English readers and demonstrate that Chinese readers, like readers of English, exploit target word predictability during reading.
Eiling Yee; Julie C. Sedivy
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 32, no. 1, pp. 1–14, 2006.
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.
In: Cognitive Systems Research, vol. 7, no. 1, pp. 70–95, 2006.
Random variables and probabilistic decision making are important elements in most theories of reading eye movements, but they tend to receive little theoretical attention. This paper attempts to address this problem by introducing the Stochastic, Hierarchical Architecture for Reading Eye-movements (SHARE). The SHARE framework formalizes reading eye movements as observable outcomes of a latent stochastic process. By modeling eye movements as time-series random variables, the goal of the model is to uncover statistical regularities in the data, which help to identify conditions and constraints the underlying mechanism must satisfy. In the univariate analysis, it is shown that a 3-component Lognormal mixture model provides a good fit to the marginal distribution function of fixation duration, and a hierarchical model is required for modeling saccade length. As a comprehensive model of reading eye movements, SHARE was implemented as an Input-Output Hidden Markov model. With a few simple hypotheses, SHARE is able to capture reading eye-movement patterns of beginning readers and proficient adults, and to reproduce well-known psycholinguistic effects. The rationale of the model, its relations with other modeling endeavors, and its implications are discussed.
Angélica Pérez Fornos; Jörg Sommerhalder; Benjamin Rappaz; Marco Pelizzone; Avinoam B. Safran
Processes involved in oculomotor adaptation to eccentric reading Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 47, no. 4, pp. 1439–1447, 2006.
PURPOSE: Adaptation to eccentric viewing in subjects with a central scotoma remains poorly understood. The purpose of this study was to analyze the adaptation stages of oculomotor control to forced eccentric reading in normal subjects. METHODS: Three normal adults (25.7 +/- 3.8 years of age) were trained to read full-page texts using a restricted 10 degrees x 7 degrees viewing window stabilized at 15 degrees eccentricity (lower visual field). Gaze position was recorded throughout the training period (1 hour per day for approximately 6 weeks). RESULTS: In the first sessions, eye movements appeared inappropriate for reading, mainly consisting of reflexive vertical (foveating) saccades. In early adaptation phases, both vertical saccade count and amplitude dramatically decreased. Horizontal saccade frequency increased in the first experimental sessions, then slowly decreased after 7 to 15 sessions. Amplitude of horizontal saccades increased with training. Gradually, accurate line jumps appeared, the proportion of progressive saccades increased, and the proportion of regressive saccades decreased. At the end of the learning process, eye movements mainly consisted of horizontal progressions, line jumps, and a few horizontal regressions. CONCLUSIONS: Two main adaptation phases were distinguished: a "faster" vertical process aimed at suppressing reflexive foveation and a "slower" restructuring of the horizontal eye movement pattern. The vertical phase consisted of a rapid reduction in the number of vertical saccades and a rapid but more progressive adjustment of remaining vertical saccades. The horizontal phase involved the amplitude adjustment of horizontal saccades (mainly progressions) to the text presented and the reduction of regressions required.
Stamatina A. Kabanarou; Gary S. Rubin
Reading with central scotomas: Is there a binocular gain? Journal Article
In: Optometry and Vision Science, vol. 83, no. 11, pp. 789–796, 2006.
PURPOSE: The purpose of this study was to compare reading performance under binocular versus monocular viewing conditions in patients with bilateral age-related macular degeneration (AMD). METHODS: Twenty-two patients with AMD participated. Distance acuity, reading acuity, and contrast sensitivity were recorded binocularly and monocularly with the better eye. An infrared eye tracker recorded eye movements during reading. Reading speed and reading eye movement parameters, including number of fixations and regressions, fixation duration, and number of saccades to find the next line, were calculated for both viewing conditions. The difference between binocular and monocular performance (binocular gain) was computed. Regression analysis was used to determine whether intraocular differences in distance and reading acuity and contrast sensitivity were predictive of binocular gain. RESULTS: Reading speed when using both eyes was highly correlated with the reading speed for the better eye. There was a small, but not significant, advantage of binocular viewing (6.9 words/minute).
Reinhold Kliegl; Antje Nuthmann; Ralf Engbert
In: Journal of Experimental Psychology: General, vol. 135, no. 1, pp. 12–35, 2006.
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to neighboring words, based on a corpus of 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.
Pia Knoeferle; Matthew W. Crocker
In: Cognitive Science, vol. 30, pp. 481–529, 2006.
Two studies investigated the interaction between utterance and scene processing by monitoring eye movements in agent-action-patient events, while participants listened to related utterances. The aim of Experiment 1 was to determine if and when depicted events are used for thematic role assignment and structural disambiguation of temporarily ambiguous English sentences. Shortly after the verb identified relevant depicted actions, eye movements in the event scenes revealed disambiguation. Experiment 2 investigated the relative importance of linguistic/world knowledge and scene information. When the verb identified either only the stereotypical agent of a (nondepicted) action, or the (nonstereotypical) agent of a depicted action as relevant, verb-based thematic knowledge and depicted action each rapidly influenced comprehension. In contrast, when the verb identified both of these agents as relevant, the gaze pattern suggested a preferred reliance of comprehension on depicted events over stereotypical thematic knowledge for thematic interpretation. We relate our findings to language comprehension and acquisition theories.
Marjolein Korvorst; Ardi Roelofs; Willem J. M. Levelt
In: Quarterly Journal of Experimental Psychology, vol. 59, no. 2, pp. 296–311, 2006.
Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported in which pairs of complex numerals were named (arabic format, Experiment 1) or read aloud (alphabetic format, Experiment 2) as house numbers and as clock times. We examined whether the degree of incrementality is differentially influenced by the production task (naming vs. reading) and mode (house numbers vs. clock time expressions), by comparing gaze durations and speech onset latencies. In both tasks and modes, dissociations were obtained between speech onset latencies (reflecting articulation) and gaze durations (reflecting planning), indicating incrementality. Furthermore, whereas none of the factors that determined gaze durations were reflected in the reading and naming latencies for the house numbers, the dissociation between gaze durations and response latencies for the clock times concerned mainly numeral length in both tasks. These results suggest that the degree of incrementality is influenced by the type of utterance (house number vs. clock time) rather than by task (reading vs. naming). The results highlight the importance of the utterance structure in determining the degree of incrementality.
Falk Huettig; Philip T. Quinlan; Scott A. McDonald; Gerry T. M. Altmann
In: Acta Psychologica, vol. 121, no. 1, pp. 65–80, 2006.
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word, than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 813-839]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world and provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on the evidence that may help adjudicate between different theoretical accounts of the psychological semantics.
Jukka Hyönä; Mika Koivisto
The role of eye movements in lateralised word recognition Journal Article
In: Laterality, vol. 11, no. 2, pp. 155–169, 2006.
The present study examined the role of eye movements and attention in lateralised word recognition, where words and pseudowords are presented to the right or left of the fixation point, and participants are asked to decide whether or not the presented letter string is a word. In the move condition, our participants were instructed to launch a saccade towards the target letter string, which was erased from the screen after 100 ms (i.e., prior to the eyes reaching the target). It was assumed that a preparation of an eye movement simultaneously with an attention shift results in the attention being more readily allocated to the target. In the fixate condition, participants were asked to fixate on the central fixation point throughout the trial. The data on response accuracy demonstrated that word recognition in the left visual field (LVF) benefited from a preparation to make an eye movement, whereas the performance in the right visual field (RVF) did not benefit. The results are consistent with the attentional advantage account (Mondor & Bryden, 1992), according to which the performance deficit of the right hemisphere for verbal stimuli may be overcome by orienting attention to the LVF prior to the presentation of a letter string.
Jukka Hyönä; Anna-Mari Nurminen
In: British Journal of Psychology, vol. 97, pp. 31–50, 2006.
The present study was carried out to investigate individual differences in reading styles among competent adult readers and to examine whether readers are aware of their reading style. Individual reading strategies were studied by having the participants read a long expository text while their eye fixation patterns were registered. A cluster analysis was performed on the eye movement data to distinguish between different reading styles. The analysis revealed three types of readers that were coined, following Hyönä, Lorch, and Kaakinen (2002), fast linear readers, slow linear readers, and topic structure processors. Readers' procedural awareness of their reading behaviour was assessed by a questionnaire. The verbal reports obtained by the questionnaire were then correlated with the corresponding eye behaviour to investigate the extent to which the readers behave the way they report doing. The correlations showed that adult readers are well aware of their general reading speed and reasonably aware of their lookback and rereading behaviour. The amount of time spent looking back in text also correlated positively with the relative success in recalling the main points expressed in the text. It is concluded that systematic and extensive looking back in text is indicative of strategic behaviour.
In: Computers in Human Behavior, vol. 22, no. 4, pp. 657–671, 2006.
Even though eye movements during reading have been studied intensively for decades, applications that track the reading of longer passages of text in real time are rare. The problems encountered in developing such an application (a reading aid, iDict), and the solutions to the problems are described. Some of the issues are general and concern the broad family of Attention Aware Systems. Others are specific to the modality of interest: eye gaze. One of the most difficult problems when using eye tracking to identify the focus of visual attention is the inaccuracy of the eye trackers used to measure the point of gaze. The inaccuracy inevitably affects the design decisions of any application exploiting the point of gaze for localizing the point of visual attention. The problem is demonstrated with examples from our experiments. The principles of the drift correction algorithms that automatically correct the vertical inaccuracy are presented and the performance of the algorithms is evaluated.
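The iDict entry above describes drift-correction algorithms that compensate for vertical eye-tracker inaccuracy when mapping gaze to lines of text. As a minimal illustrative sketch (not the iDict algorithm itself — which models drift over time rather than correcting each fixation independently), one can snap each fixation's vertical coordinate to the nearest text-line baseline within a plausibility threshold; all names and thresholds below are assumptions:

```python
import numpy as np

def correct_vertical_drift(fix_y, line_ys, max_offset=30.0):
    """Snap each fixation's vertical coordinate to the nearest
    text-line baseline, provided the offset is within max_offset
    pixels; otherwise leave the fixation untouched."""
    fix_y = np.asarray(fix_y, dtype=float)
    line_ys = np.asarray(line_ys, dtype=float)
    # Distance from every fixation to every line baseline
    d = np.abs(fix_y[:, None] - line_ys[None, :])
    nearest = d.argmin(axis=1)
    corrected = line_ys[nearest]
    # Keep the raw coordinate when no line is plausibly close
    too_far = d[np.arange(fix_y.size), nearest] > max_offset
    corrected[too_far] = fix_y[too_far]
    return corrected

# Three fixations against baselines at y = 100, 200, 300:
# the first two drift slightly, the third is off-text.
out = correct_vertical_drift([105, 212, 400], [100, 200, 300])
```

In this toy call the first two fixations snap to their baselines while the third, more than 30 px from any line, is left as-is — mirroring the paper's point that vertical inaccuracy, not horizontal, is the main obstacle to localizing visual attention in text.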
Keith Rayner; Kathryn H. Chace; Timothy J. Slattery; Jane Ashby
In: Scientific Studies of Reading, vol. 10, no. 3, pp. 241–255, 2006.
In this article, we discuss the use of eye movement data to assess moment-to-moment comprehension processes. We first review some basic characteristics of eye movements during reading and then present two studies in which eye movements are monitored to confirm that eye movements are sensitive to (a) global text passage difficulty and (b) inconsistencies in text. We demonstrate that processing times increased (and especially that the number of fixations increased) when text was difficult. When there was an inconsistency, readers fixated longer on the region where the inconsistency occurred. In both studies, the probability of making a regressive eye movement increased as well. Finally, we discuss the use of eye movement recording as a research tool to further study moment-to-moment comprehension processes and the possibility of using this tool in more applied school settings.
Keith Rayner; Simon P. Liversedge; Sarah J. White
In: Vision Research, vol. 46, no. 3, pp. 310–323, 2006.
In a series of experiments, the currently fixated word (word n) and/or the word to the right of fixation (word n + 1) either disappeared or was masked during readers' eye fixations. Consistent with prior research, when only word n disappeared or was masked, there was little disruption to reading. However, when word n + 1 either disappeared or was masked (either at the onset of fixation on word n or after 60 ms), there was considerable disruption to reading. Independent of whether word n and/or word n + 1 disappeared or was masked, there were robust frequency effects on the fixation on word n. These results not only confirm the robust influence of cognitive/linguistic processing on fixation times in reading, but also again confirm the importance of preprocessing the word to the right of fixation for fluent reading.
Ronan G. Reilly; Ralph Radach
In: Cognitive Systems Research, vol. 7, no. 1, pp. 34–55, 2006.
This paper describes some empirical tests of an interactive activation model of eye movement control in reading (the "Glenmore" model). Qualitatively, the Glenmore model can account within one mechanism for preview and spillover effects, regressions, progressions, and refixations. It decouples the decision about when to move the eyes from the word recognition process. The time course of activity in a fixate centre (FC) determines the triggering of a saccade. The other main feature of the model is the use of a saliency map that acts as an arena for the interplay of bottom-up visual features of the text, and top-down lexical features. These factors combine to create a pattern of activation that selects one word as the saccade target. Even within the relatively simple framework proposed here, a coherent account can be provided for a range of eye movement control phenomena that have hitherto proved problematic to reconcile. The paper examines the performance of the model compared to data gathered in an empirical study of subjects reading a German text. The quantitative fit of the model, while reasonable, highlighted some limitations in the model that will need to be addressed in future versions.
Eyal M. Reingold; Keith Rayner
In: Psychological Science, vol. 17, no. 9, pp. 742–746, 2006.
A critical prediction of the E-Z Reader model is that experimental manipulations that disrupt early encoding of visual and orthographic features of the fixated word without affecting subsequent lexical processing should influence the processing difficulty of the fixated word without affecting the processing of the next word. We tested this prediction by monitoring participants' eye movements while they read sentences in which a target word was presented either normally or altered. In the critical condition, the contrast between the target word and the background was substantially reduced. Such a reduction in stimulus quality is typically assumed to have an impact that is largely confined to a very early stage of word recognition. Results were consistent with the E-Z Reader model: This faint presentation had a robust influence on the duration of fixations on the target word without substantially altering the processing of the next word.
Visual determinants of preferred adjective order Journal Article
In: Visual Cognition, vol. 14, no. 3, pp. 261–294, 2006.
In referential communication, speakers refer to a target object among a set of context objects. The NPs they produce are characterized by a canonical order of prenominal adjectives: The dimensions that are easiest to detect (e.g., absolute dimensions) are commonly placed closer to the noun than other dimensions (e.g., relative dimensions). This stands in stark contrast to the assumption that language production is an incremental process. According to this incremental-procedural view, the dimensions that are easiest to detect should be named first. In the present paper, an alternative account of the canonical order effect is presented, suggesting that the prenominal adjective ordering rules are a result of the perceptual analysis processes underlying the evaluation of distinctive target features. Analyses of speakers' eye movements during referential communication (Experiment 1) and analyses of utterance formats produced under time pressure (Experiment 2) provide evidence for the suggested perceptual classification account.
Christopher R. Sears; Crystal R. Campbell; Stephen J. Lupker
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 32, no. 4, pp. 1040–1062, 2006.
What is the effect of a word's higher frequency neighbors on its identification time? According to activation-based models of word identification (J. Grainger & A. M. Jacobs, 1996; J. L. McClelland & D. E. Rumelhart, 1981), words with higher frequency neighbors will be processed more slowly than words without higher frequency neighbors because of the lexical competition mechanism embodied in these models. Although a critical prediction of these models, this inhibitory neighborhood frequency effect has been elusive in studies that have used English stimuli. In the present experiments, the effect of higher frequency neighbors was examined in the lexical decision task and when participants were reading sentences while their eye movements were monitored. Results suggest that higher frequency neighbors have little, if any, effect on the identification of English words. The implications for activation-based models of word identification are discussed.
Keren B. Shatzman; James M. McQueen
The modulation of lexical competition by segment duration Journal Article
In: Psychonomic Bulletin & Review, vol. 13, no. 6, pp. 966–971, 2006.
In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners' eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp "pipe") was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker "nail") when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec.
Keren B. Shatzman; James M. McQueen
In: Perception and Psychophysics, vol. 68, no. 1, pp. 1–16, 2006.
In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants' fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.
Keren B. Shatzman; James M. McQueen
In: Psychological Science, vol. 17, no. 5, pp. 372–377, 2006.
An eye-tracking study examined the involvement of prosodic knowledge--specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words--in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners' fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
Peter C. Gordon; Randall Hendrick; Marcus L. Johnson; Yoonhyoung Lee
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 32, no. 6, pp. 1304–1321, 2006.
The nature of working memory operation during complex sentence comprehension was studied by means of eye-tracking methodology. Readers had difficulty when the syntax of a sentence required them to hold 2 similar noun phrases (NPs) in working memory before syntactically and semantically integrating either of the NPs with a verb. In sentence structures that placed these NPs at the same linear distances from one another but allowed integration with a verb for 1 of the NPs, the comprehension difficulty was not seen. These results are interpreted as indicating that similarity-based interference occurs online during the comprehension of complex sentences and that the degree of memory accessibility conventionally associated with different types of NPs does not have a strong effect on sentence processing.
Seth N. Greenberg; Albrecht W. Inhoff; Ulrich W. Weger
In: Quarterly Journal of Experimental Psychology, vol. 59, no. 6, pp. 987–995, 2006.
A comparison was made between reading tasks performed with and without the additional requirement of detecting target letters. At issue was whether eye movement measures are affected by the additional requirement of detection. Global comparisons showed robust effects of task type with longer fixations and fewer word skippings when letter detection was required. Detailed analyses of target words, however, further showed that reading with and without letter detection yielded virtually identical effects of word class and text predictability for word-skipping rate and similar effects for different word viewing duration measures. The overall oculomotor pattern suggested that detection does not substantially shift normal reading movements in response to lexical cues and thereby indicated that detection tasks are informative about word and specifically word class processing in normal reading.