EyeLink Reading and Language Eye-Tracking Publications
All EyeLink reading and language research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual World, Comprehension, Speech Production, etc. You can also search for individual author names. If we missed any EyeLink reading or language articles, please email us!
2012
Patrick Plummer; Keith Rayner Effects of parafoveal word length and orthographic features on initial fixation landing positions in reading Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 950–963, 2012. Previous research has demonstrated that readers use word length and word boundary information in targeting saccades into upcoming words while reading. Previous studies have also revealed that the initial landing positions for fixations on words are affected by parafoveal processing. In the present study, we examined the effects of word length and orthographic legality on targeting saccades into parafoveal words. Long (8-9 letters) and short (4-5 letters) target words, which were matched on lexical frequency and initial letter trigram, were paired and embedded into identical sentence frames. The gaze-contingent boundary paradigm (Rayner, 1975) was used to manipulate the parafoveal information available to the reader before direct fixation on the target word. The parafoveal preview was either identical to the target word or was a visually similar nonword. The nonword previews contained orthographically legal or orthographically illegal initial letters. The results showed that orthographic preprocessing of the word to the right of fixation affected eye movement targeting, regardless of word length. Additionally, the lexical status of an upcoming saccade target in the parafovea generally did not influence preprocessing.
Matthew W. Lowder; Peter C. Gordon The pistol that injured the cowboy: Difficulty with inanimate subject-verb integration is reduced by structural separation Journal Article In: Journal of Memory and Language, vol. 66, no. 4, pp. 819–832, 2012. Previous work has suggested that the difficulty normally associated with processing an object-extracted relative clause (ORC) compared to a subject-extracted relative clause (SRC) is increased when the head noun phrase (NP1) is animate and the embedded noun phrase (NP2) is inanimate, compared to the reverse animacy configuration. Two eye-tracking experiments were conducted to determine whether the apparent effects of NP animacy on the ORC-SRC asymmetry reflect distinct processes of interpretation that operate at NP2 and NP1. Experiment 1 revealed a localized difficulty interpreting the embedded action verb when the preceding NP2 was inanimate as compared to animate, but this difficulty in subject-verb integration did not extend to the broader region of words in the RC and matrix verb where difficulty was observed in processing ORCs as compared to SRCs. Experiment 2 demonstrated that the difficulty associated with integrating an inanimate NP with an action verb is reduced when the two appear in separate clauses, as in the case of an SRC.
Marco Marelli; Claudio Luzzatti Frequency effects in the processing of Italian nominal compounds: Modulation of headedness and semantic transparency Journal Article In: Journal of Memory and Language, vol. 66, no. 4, pp. 644–664, 2012. There is a general debate as to whether constituent representations are accessed in compound processing. The present study addresses this issue, exploiting the properties of Italian compounds to test the role of headedness and semantic transparency in constituent access. In a first experiment, a lexical decision task was run on nominal compounds. Significant interactions between constituent-frequencies, headedness and semantic transparency emerged, indicating facilitatory frequency effects for transparent and head-final compounds, thus highlighting the importance of the semantic and structural properties of the compounds in lexical access. In a second experiment, converging evidence was sought in an eye-tracking study. The compounds were embedded into sentence contexts, and fixation durations were measured. The results did in fact confirm the effect observed in the first experiment. The results are consistent with a multi-route model of compound processing, but also indicate the importance of a semantic route dedicated to the conceptual combination of constituent meanings.
Kathleen M. Masserang; Alexander Pollatsek Transposed letter effects in prefixed words: Implications for morphological decomposition Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 4, pp. 476–495, 2012. A crucial issue in word encoding is whether morphemes are involved in early stages. One paradigm that tests for this employs the transposed letter (TL) effect, the difference in the times to process a word (misfile) when it is preceded by a transposed letter (TL) prime (mifsile) versus a substitute letter (SL) prime (mintile), and examines whether the TL effect is smaller when the two adjacent letters cross a morpheme boundary. The evidence from prior studies is not consistent. Experiments 1 and 2 employed a parafoveal preview paradigm in which the transposed letters either crossed the prefix-stem boundary or did not, and found a clear TL effect regardless of whether the two letters crossed the morpheme boundary. Experiment 3 replicated this finding employing a masked priming lexical-decision paradigm. It thus appears that morphemes are not involved in early processes in English that are sensitive to letter order. There is some evidence for morphemic modulation of the TL effect in other languages; thus, the properties of the language may modulate when morphemes influence early letter position encoding.
Nathalie N. Bélanger; Timothy J. Slattery; Rachel I. Mayberry; Keith Rayner Skilled deaf readers have an enhanced perceptual span in reading Journal Article In: Psychological Science, vol. 23, no. 7, pp. 816–823, 2012. Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading.
Valerie Benson; Magdalena Ietswaart; David Milner Eye movements and verbal report in a single case of visual neglect Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e43743, 2012. In this single case study, visuospatial neglect patient P1 demonstrated a dissociation between an intact ability to make appropriate reflexive eye movements to targets in the neglected field with latencies of <400 ms, while failing to report targets presented at such durations in a separate verbal detection task. In contrast, there was a failure to evoke the usually robust Remote Distractor Effect in P1, even though distractors in the neglected field were presented at above threshold durations. Together those data indicate that the tight coupling that is normally shown between attention and eye movements appears to be disrupted for low-level orienting in P1. A comparable disruption was also found for high-level cognitive processing tasks, namely reading and scene scanning. The findings are discussed in relation to sampling, attention and awareness in neglect.
Arielle Borovsky; Jeffrey L. Elman; Anne Fernald Knowing a lot for one's age: Vocabulary skill and not age is associated with anticipatory incremental sentence interpretation in children and adults Journal Article In: Journal of Experimental Child Psychology, vol. 112, no. 4, pp. 417–436, 2012. Adults can incrementally combine information from speech with astonishing speed to anticipate future words. Concurrently, a growing body of work suggests that vocabulary ability is crucially related to lexical processing skills in children. However, little is known about this relationship with predictive sentence processing in children or adults. We explore this question by comparing the degree to which an upcoming sentential theme is anticipated by combining information from a prior agent and action. Eye movements of 48 children, aged 3 to 10, and 48 college-aged adults were recorded as they heard a sentence (e.g., The pirate hides the treasure) in which the object referred to one of four images that included an agent-related, an action-related, and an unrelated distractor image. Pictures were rotated so that, across all versions of the study, each picture appeared in all conditions, yielding a completely balanced within-subjects design. Adults and children quickly made use of combinatory information available at the action to generate anticipatory looks to the target object. Speed of anticipatory fixations did not vary with age. When controlling for age, individuals with higher vocabularies were faster to look to the target than those with lower vocabulary scores. Together, these results support and extend current views of incremental processing in which adults and children make use of linguistic information to continuously update their mental representation of ongoing language.
Susanne Brouwer; Holger Mitterer; Falk Huettig Speech reductions change the dynamics of competition during spoken word recognition Journal Article In: Language and Cognitive Processes, vol. 27, no. 4, pp. 539–571, 2012. Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system.
Susanne Brouwer; Holger Mitterer; Falk Huettig Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 11, pp. 2193–2220, 2012. In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a "real" onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.
Sarah Brown-Schmidt Beyond common and privileged: Gradient representations of common ground in real-time language use Journal Article In: Language and Cognitive Processes, vol. 27, no. 1, pp. 62–89, 2012. The present research tested the hypothesis that on-line language processing is guided by gradient representations of linguistic common ground that reflect details of how common ground was established, including the discourse context and partner feedback. This hypothesis was contrasted with a simpler hypothesis that interpretation processes are only sensitive to simple binary representations of whether a potential discourse referent is or is not common ground. In order to evaluate these hypotheses, participants engaged in a task-based conversation with an experimenter in which some of the participant's game-pieces were hidden from the experimenter. On critical trials, the participant revealed the identity of the hidden game-pieces. Critical utterances contained referring expressions temporarily ambiguous between a visually shared game-piece, and a hidden game-piece. Analysis of participant eye movements during interpretation of these utterances revealed that participants were more likely to consider the hidden game-piece a potential referent if the experimenter had initially asked about its identity; whether the experimenter provided clear feedback that s/he understood its identity modulated this effect somewhat. These results provide key evidence for the richness of common ground representations, and are discussed in terms of the implications for models of the underlying representations of common ground.
Julie N. Buchan; Kevin G. Munhall The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information Journal Article In: Seeing and Perceiving, vol. 25, no. 1, pp. 87–106, 2012. Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.
Robyn Burton; David P. Crabb; Nicholas D. Smith; Fiona C. Glen; David F. Garway-Heath Glaucoma and reading: Exploring the effects of contrast lowering of text Journal Article In: Optometry and Vision Science, vol. 89, no. 9, pp. 1282–1287, 2012. PURPOSE: Past research has not fully ascertained the extent to which people with glaucoma have difficulties with reading. This study measures change in reading speed when letter contrast is reduced, to test the hypothesis that patients with glaucoma are more sensitive to letter contrast than age-similar visually healthy people. METHODS: Fifty-three patients with glaucoma [mean age: 66 years (standard deviation: 9)] with bilateral visual field (VF) defects and 40 age-similar visually healthy control subjects [mean age: 69 (standard deviation: 8) years] had reading speeds measured using sets of fixed size, non-scrolling texts on a computer setup that incorporated an eye tracking device. All participants had visual acuity ≥6/9, and they underwent standard tests of visual function including Humphrey 24-2 and 10-2 VFs. Potential non-visual confounders were also tested, including cognitive ability (Middlesex Elderly Assessment of Mental Status Test) and general reading ability. Individual average raw reading speeds were calculated from 8 trials (different passages of text) at both 100% and 20% letter contrast. RESULTS: Patients had an average 24-2 VF MD of -6.5 (range: 0.7 to -17.3) dB in the better eye. The overall median reduction in reading speed due to decreasing the contrast of the text in the patients was 20%, but with considerable between-individual variation (interquartile range, 8%-44%). This reduction was significantly greater (p = 0.01) than the controls [median: 11% (interquartile range, 6%-17%)]. Patients and controls had similar average performance on the Middlesex Elderly Assessment of Mental Status Test (p = 0.71), a modified Burt Reading ability test (p = 0.33), and a computer-based lexical decision task (p = 0.53) and had similar self-reported day-to-day reading frequency (p = 0.12). CONCLUSIONS: Average reduction in reading speed caused by a difference in letter contrast between 100% and 20% is significantly more apparent in patients with glaucoma when compared with visually healthy people with a similar age and similar cognitive/reading ability.
James E. Cane; Fabrice Cauchard; Ulrich W. Weger The time-course of recovery from interruption during reading: Eye movement evidence for the role of interruption lag and spatial memory Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 7, pp. 1397–1413, 2012. Two experiments examined how interruptions impact reading and how interruption lags and the reader's spatial memory affect the recovery from such interruptions. Participants read paragraphs of text and were interrupted unpredictably by a spoken news story while their eye movements were monitored. Time made available for consolidation prior to responding to the interruption did not aid reading resumption. However, providing readers with a visual cue that indicated the interruption location did aid task resumption substantially in Experiment 2. Taken together, the findings show that the recovery from interruptions during reading draws on spatial memory resources and can be aided by processes that support spatial memory. Practical implications are discussed.
Margaret Grant; Charles Clifton; Lyn Frazier The role of Non-Actuality Implicatures in processing elided constituents Journal Article In: Journal of Memory and Language, vol. 66, no. 1, pp. 326–343, 2012. When an elided constituent and its antecedent do not match syntactically, the presence of a word implying the non-actuality of the state of affairs described in the antecedent seems to improve the example. (This information should be released but Gorbachev didn't. vs. This information was released but Gorbachev didn't.) We model this effect in terms of Non-Actuality Implicatures (NAIs) conveyed by non-epistemic modals like should and other words such as want to and be eager to that imply non-actuality. We report three studies. A rating and interpretation study showed that such implicatures are drawn and that they improve the acceptability of mismatch ellipsis examples. An interpretation study showed that adding a NAI trigger to ambiguous examples increases the likelihood of choosing an antecedent from the NAI clause. An eye movement study shows that a NAI trigger also speeds on-line reading of the ellipsis clause. By introducing alternatives (the desired state of affairs vs. the actual state of affairs), the NAI trigger introduces a potential Question Under Discussion (QUD). Processing an ellipsis clause is easier, and the processor is more confident of its analysis, when the ellipsis clause comments on the QUD.
Katherine Guérard; Jean Saint-Aubin; Marie Poirier Assessing the influence of letter position in reading normal and transposed texts using a letter detection task Journal Article In: Canadian Journal of Experimental Psychology, vol. 66, no. 4, pp. 227–238, 2012. During word recognition, some letters appear to play a more important role than others. Although some studies have suggested that the first and last letters of a word have a privileged status, there is no consensus with regards to the importance of the different letter positions when reading connected text. In the current experiments, we used a simple letter search task to examine the impact of letter position on word identification in connected text using a classic paper and pencil procedure (Experiment 1) and an eye movement monitoring procedure (Experiment 2). In Experiments 3 and 4, a condition with transposed letters was included. Our results show that the first letter of a word is detected more easily than the other letters, and transposing letters in a word revealed the importance of the final letter. It is concluded that both the initial and final letters play a special role in word identification during reading but that the underlying processes might differ.
Christopher J. Hand; Patrick J. O'Donnell; Sara C. Sereno Word-initial letters influence fixation durations during fluent reading Journal Article In: Frontiers in Psychology, vol. 3, pp. 85, 2012. The present study examined how word-initial letters influence lexical access during reading. Eye movements were monitored as participants read sentences containing target words. Three factors were independently manipulated. First, target words had either high or low constraining word-initial letter sequences (e.g., dwarf or clown, respectively). Second, targets were either high or low in frequency of occurrence (e.g., train or stain, respectively). Third, targets were embedded in either biasing or neutral contexts (i.e., targets were high or low in their predictability). This 2 (constraint) × 2 (frequency) × 2 (context) design allowed us to examine the conditions under which a word's initial letter sequence could facilitate processing. Analyses of fixation duration data revealed significant main effects of constraint, frequency, and context. Moreover, in measures taken to reflect "early" lexical processing (i.e., first and single fixation duration), there was a significant interaction between constraint and context. The overall pattern of findings suggests lexical access is facilitated by highly constraining word-initial letters. Results are discussed in comparison to recent studies of lexical features involved in word recognition during reading.
Adriana Hanulíková; Andrea Weber Sink positive: Linguistic experience with th substitutions influences nonnative word recognition Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 613–629, 2012. We used eyetracking, perceptual discrimination, and production tasks to examine the influences of perceptual similarity and linguistic experience on word recognition in nonnative (L2) speech. Eye movements to printed words were tracked while German and Dutch learners of English heard words containing one of three pronunciation variants (/t/, /s/, or /f/) of the interdental fricative /θ/. Irrespective of whether the speaker was Dutch or German, looking preferences for target words with /θ/ matched the preferences for producing /s/ variants in German speakers and /t/ variants in Dutch speakers (as determined via the production task), while a control group of English participants showed no such preferences. The perceptually most similar and most confusable /f/ variant (as determined via the discrimination task) was never preferred as a match for /θ/. These results suggest that linguistic experience with L2 pronunciations facilitates recognition of variants in an L2, with effects of frequency outweighing effects of perceptual similarity.
John M. Henderson; Steven G. Luke Oculomotor inhibition of return in normal and mindless reading Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1101–1107, 2012. Oculomotor inhibition of return (O-IOR) is an increase in saccade latency prior to an eye movement to a recently fixated location, as compared with other locations. To investigate O-IOR in reading, subjects participated in two conditions while their eye movements were recorded: normal reading and mindless reading with words replaced by geometric shapes. We investigated the manifestation of O-IOR in reading and whether it is related to extracting meaning from the text or is an oculomotor phenomenon. The results indicated that fixation durations prior to a saccade returning to the immediately preceding fixated word were longer than those to other words, consistent with O-IOR. Furthermore, fixation durations were longest prior to a saccade that returned the eyes to the specific character position in the word that had previously been fixated and dropped off as the distance between the previously fixated character and landing position increased. This result is consistent with the hypothesis that O-IOR is relatively precise in its application during reading and drops off as a gradient. Both of these results were found for text reading and for mindless reading, suggesting that they are consequences of oculomotor control, and not of language processing. Finally, although these temporal IOR effects were robust, no spatial consequences of IOR were observed: Previously fixated words and characters were as likely to be refixated as new words and characters.
Antje S. Meyer; Linda Wheeldon; Femke Meulen; Agnieszka E. Konopka Effects of speech rate and practice on the allocation of visual attention in multiple object naming Journal Article In: Frontiers in Psychology, vol. 3, pp. 39, 2012. Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye-speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast.
Daniel Mirman; Kristen M. Graziano Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension Journal Article In: Neuropsychologia, vol. 50, no. 8, pp. 1990–1997, 2012. Both taxonomic and thematic semantic relations have been studied extensively in behavioral studies and there is an emerging consensus that the anterior temporal lobe plays a particularly important role in the representation and processing of taxonomic relations, but the neural basis of thematic semantics is less clear. We used eye tracking to examine incidental activation of taxonomic and thematic relations during spoken word comprehension in participants with aphasia. Three groups of participants were tested: neurologically intact control participants (N=14), individuals with aphasia resulting from lesions in left hemisphere BA 39 and surrounding temporo-parietal cortex regions (N=7), and individuals with the same degree of aphasia severity and semantic impairment and anterior left hemisphere lesions (primarily inferior frontal gyrus and anterior temporal lobe) that spared BA 39 (N=6). The posterior lesion group showed reduced and delayed activation of thematic relations, but not taxonomic relations. In contrast, the anterior lesion group exhibited longer-lasting activation of taxonomic relations and did not differ from control participants in terms of activation of thematic relations. These results suggest that taxonomic and thematic semantic knowledge are functionally and neuroanatomically distinct, with the temporo-parietal cortex playing a particularly important role in thematic semantics.
Andriy Myachykov; Simon Garrod; Christoph Scheepers Determinants of structural choice in visually situated sentence production Journal Article In: Acta Psychologica, vol. 141, no. 3, pp. 304–315, 2012. Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence.
Andriy Myachykov; Dominic Thompson; Simon Garrod; Christoph Scheepers Referential and visual cues to structural choice in visually situated sentence production Journal Article In: Frontiers in Psychology, vol. 2, pp. 396, 2012. We investigated how conceptually informative (referent preview) and conceptually uninformative (pointer to referent's location) visual cues affect structural choice during production of English transitive sentences. Cueing the Agent or the Patient prior to presenting the target-event reliably predicted the likelihood of selecting this referent as the sentential Subject, triggering, correspondingly, the choice between active and passive voice. Importantly, there was no difference in the magnitude of the general Cueing effect between the informative and uninformative cueing conditions, suggesting that attentionally driven structural selection relies on a direct automatic mapping mechanism from attentional focus to the Subject's position in a sentence. This mechanism is, therefore, independent of accessing conceptual, and possibly lexical, information about the cued referent provided by referent preview.
Chie Nakamura; Manabu Arai; Reiko Mazuka Immediate use of prosody and context in predicting a syntactic structure Journal Article In: Cognition, vol. 125, no. 2, pp. 317–323, 2012. @article{Nakamura2012, Numerous studies have reported an effect of prosodic information on parsing but whether prosody can impact even the initial parsing decision is still not evident. In a visual world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on processing temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, thus eliminating the explanation that listeners have simply associated marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences the initial syntactic analysis and also the subsequent cost at disambiguating information. The results also provide first evidence for pre-head structural prediction driven by prosodic and contextual information with a head-final construction. |
Jens K. Apel; John M. Henderson; Fernanda Ferreira Targeting regressions: Do readers pay attention to the left? Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1108–1113, 2012. @article{Apel2012a, The perceptual span during normal reading extends approximately 14 to 15 characters to the right and three to four characters to the left of a current fixation. In the present study, we investigated whether the perceptual span extends farther than three to four characters to the left immediately before readers execute a regression. We used a display-change paradigm in which we masked words beyond the three-to-four-character range to the left of a fixation. We hypothesized that if reading behavior was affected by this manipulation before regressions but not before progressions, we would have evidence that the perceptual span extends farther left before leftward eye movements. We observed significantly shorter regressive saccades and longer fixation and gaze durations in the masked condition when a regression was executed. Forward saccades were entirely unaffected by the manipulations. We concluded that the perceptual span during reading changes, depending on the direction of a following saccade. |
Jane Ashby; Jinmian Yang; Kris H. C. Evans; Keith Rayner Eye movements and the perceptual span in silent and oral reading Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 4, pp. 634–640, 2012. @article{Ashby2012, Previous research has examined parafoveal processing during silent reading, but little is known about the role of these processes in oral reading. Given that masking parafoveal information slows down silent reading, we asked whether a similar effect also occurs in oral reading. To investigate the role of parafoveal processing in silent and oral reading, we manipulated the parafoveal information available to readers by changing the size of a gaze-contingent moving window. Participants read silently and orally in a one-word window and a three-word window condition as we monitored their eye movements. The lack of parafoveal information slowed reading speed in both oral and silent reading. However, the effects of parafoveal information were larger in silent reading than in oral reading, because of different effects of preview information on both when the eyes move and how often. Parafoveal information benefitted silent reading for faster readers more than for slower readers. |
Fabrice Cauchard; James E. Cane; Ulrich W. Weger Influence of background speech and music in interrupted reading: An eye-tracking study Journal Article In: Applied Cognitive Psychology, vol. 26, no. 3, pp. 381–390, 2012. @article{Cauchard2012, The current study examined the influence of interruption, background speech and music on reading, using an eye movement paradigm. Participants either read paragraphs while being exposed to background speech or music or read the texts in silence. On half of the trials, participants were interrupted by a 60-second audio story before resuming reading the paragraph. Interruptions increased overall reading time, but the reading of text following the interruption was quicker compared with baseline. Background speech and music did not modulate the interruption effects, but the background speech slowed down the reading rate compared with reading in the presence of music or reading in silence. The increase in reading time was primarily due to an increase in the time spent rereading previously read words. We argue that the observed interruption effects are in line with a theory of long-term working memory, and we present practical implications for the reported background speech effects. |
Charles Clifton; Lyn Frazier Discourse integration guided by the 'Question under Discussion' Journal Article In: Cognitive Psychology, vol. 65, no. 2, pp. 352–379, 2012. @article{Clifton2012a, What makes a discourse coherent? One potential factor has been discussed in the linguistic literature in terms of a Question under Discussion (QUD). This approach claims that discourse proceeds by continually raising explicit or implicit questions, viewed as sets of alternatives, or competing descriptions of the world. If the interlocutor accepts the question, it becomes the QUD, a narrowed set of alternatives to be addressed (Roberts, in press). Three eye movement recording studies are reported that investigated the effect of a preceding explicit QUD (Experiment 1) or implicit QUD (Experiments 2 and 3) on the processing of following text. Experiment 1 revealed an effect of whether the question queried alternative propositions or alternative entities. Reading times in the answer were faster when the answer it provided was of the same semantic type as was queried. Experiment 2 tested QUDs implied by the alternative description of reality introduced by a non-actuality implicature trigger such as should X or want to X. The results, when combined with the results of Experiment 3 (which ruled out a possible alternative interpretation) showed disrupted reading of a following verb phrase that failed to resolve the implicit QUD (Did the discourse participant actually X?), compared to reading the same material in the absence of a clear QUD. The findings support an online role for QUDs in guiding readers' structuring and interpretation of discourse. |
Moreno I. Coco; Frank Keller Scan patterns predict sentence production in the cross-modal processing of visual scenes Journal Article In: Cognitive Science, vol. 36, no. 7, pp. 1204–1223, 2012. @article{Coco2012, Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what he or she will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, within and between visual scenes; and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict associated sentences by utilizing similarity-based retrieval. |
Claudia Felser; Ian Cunnings Processing reflexives in a second language: The timing of structural and discourse-level constraints Journal Article In: Applied Psycholinguistics, vol. 33, no. 3, pp. 571–603, 2012. @article{Felser2012, We report the results from two eye-movement monitoring experiments examining the processing of reflexive pronouns by proficient German-speaking learners of second language (L2) English. Our results show that the nonnative speakers initially tried to link English argument reflexives to a discourse-prominent but structurally inaccessible antecedent, thereby violating binding condition A. Our native speaker controls, in contrast, showed evidence of applying condition A immediately during processing. Together, our findings show that L2 learners' initial focusing on a structurally inaccessible antecedent cannot be due to first language influence and is also independent of whether the inaccessible antecedent c-commands the reflexive. This suggests that unlike native speakers, nonnative speakers of English initially attempt to interpret reflexives through discourse-based coreference assignment rather than syntactic binding. |
Claudia Felser; Ian Cunnings; Claire Batterham; Harald Clahsen The timing of island effects in nonnative sentence processing Journal Article In: Studies in Second Language Acquisition, vol. 34, no. 1, pp. 67–98, 2012. @article{Felser2012a, Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension. |
Heather J. Ferguson Eye movements reveal rapid concurrent access to factual and counterfactual interpretations of the world Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 5, pp. 939–961, 2012. @article{Ferguson2012, Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events. |
Stephani Foraker; Gregory L. Murphy Polysemy in sentence comprehension: Effects of meaning dominance Journal Article In: Journal of Memory and Language, vol. 67, no. 4, pp. 407–425, 2012. @article{Foraker2012, Words like church are polysemous, having two related senses (a building and an organization). Three experiments investigated how polysemous senses are represented and processed during sentence comprehension. On one view, readers retrieve an underspecified, core meaning, which is later specified more fully with contextual information. On another view, readers retrieve one or more specific senses. In a reading task, context that was neutral or biased towards a particular sense preceded a polysemous word. Disambiguating material consistent with only one sense followed, in a second sentence (Experiment 1) or the same sentence (Experiments 2 and 3). Reading the disambiguating material was faster when it was consistent with that context, and dominant senses were committed to more strongly than subordinate senses. Critically, following neutral context, the continuation was read more quickly when it selected the dominant sense, and the degree of sense dominance partially explained the reading time advantage. Similarity of the senses also affected reading times. Across experiments, we found that sense selection may not be completed immediately following a polysemous word but is completed at a sentence boundary. Overall, the results suggest that readers select an individual sense when reading a polysemous word, rather than a core meaning. |
Steven Frisson; Mary Wakefield Psychological essentialist reasoning and perspective taking during reading: A donkey is not a zebra, but a plate can be a clock Journal Article In: Memory & Cognition, vol. 40, no. 2, pp. 297–310, 2012. @article{Frisson2012, In an eyetracking study, we examined whether readers use psychological essentialist reasoning and perspective taking online. Stories were presented in which an animal or an artifact was transformed into another animal (e.g., a donkey into a zebra) or artifact (e.g., a plate into a clock). According to psychological essentialism, the essence of the animal did not change in these stories, while the transformed artifact would be thought to have changed categories. We found evidence that readers use this kind of reasoning online: When reference was made to the transformed animal, the nontransformed term ("donkey") was preferred, but the opposite held for the transformed artifact ("clock" was read faster than "plate"). The immediacy of the effect suggests that this kind of reasoning is employed automatically. Perspective taking was examined within the same stories by the introduction of a novel story character. This character, who was naïve about the transformation, commented on the transformed animal or artifact. If the reader were to take this character's perspective immediately and exclusively for reference solving, then only the transformed term ("zebra" or "clock") would be felicitous. However, the results suggested that while this character's perspective could be taken into account, it seems difficult to completely discard one's own perspective at the same time. |
2011 |
William S. Evans; David Caplan; Gloria Waters Effects of concurrent arithmetical and syntactic complexity on self-paced reaction times and eye fixations Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1203–1211, 2011. @article{Evans2011, Two dual-task experiments (replications of Experiments 1 and 2 in Fedorenko, Gibson, & Rohde, Journal of Memory and Language, 56, 246-269 2007) were conducted to determine whether syntactic and arithmetical operations share working memory resources. Subjects read object- or subject-extracted relative clause sentences phrase by phrase in a self-paced task while simultaneously adding or subtracting numbers. Experiment 2 measured eye fixations as well as self-paced reaction times. In both experiments, there were main effects of syntax and of mathematical operation on self-paced reading times, but no interaction of the two. In the Experiment 2 eye-tracking results, there were main effects of syntax on first-pass reading time and total reading time and an interaction between syntax and math in total reading time on the noun phrase within the relative clause. The findings point to differences in the ways individuals process sentences under these dual-task conditions, as compared with viewing sentences during "normal" reading conditions, and do not support the view that arithmetical and syntactic integration operations share a working memory system. |
Ruth Filik; Emma Barber Inner speech during silent reading reflects the reader's regional accent Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e25782, 2011. @article{Filik2011, While reading silently, we often have the subjective experience of inner speech. However, there is currently little evidence regarding whether this inner voice resembles our own voice while we are speaking out loud. To investigate this issue, we compared reading behaviour of Northern and Southern English participants who have differing pronunciations for words like 'glass', in which the vowel duration is short in a Northern accent and long in a Southern accent. Participants' eye movements were monitored while they silently read limericks in which the end words of the first two lines (e.g., glass/class) would be pronounced differently by Northern and Southern participants. The final word of the limerick (e.g., mass/sparse) then either did or did not rhyme, depending on the reader's accent. Results showed disruption to eye movement behaviour when the final word did not rhyme, determined by the reader's accent, suggesting that inner speech resembles our own voice. |
Gemma Fitzsimmons; Denis Drieghe The influence of number of syllables on word skipping during reading Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 4, pp. 736–741, 2011. @article{Fitzsimmons2011, In an eye-tracking experiment, participants read sentences containing a monosyllabic (e.g., grain) or a disyllabic (e.g., cargo) five-letter word. Monosyllabic target words were skipped more often than disyllabic target words, indicating that syllabic structure was extracted from the parafovea early enough to influence the decision of saccade target selection. Fixation times on the target word when it was fixated did not show an influence of number of syllables, demonstrating that number of syllables differentially impacts skipping rates and fixation durations during reading. |
Angélica Pérez Fornos; Jörg Sommerhalder; Marco Pelizzone Reading with a simulated 60-channel implant Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 57, 2011. @article{Fornos2011, First generation retinal prostheses containing 50-60 electrodes are currently in clinical trials. The purpose of this study was to evaluate the theoretical upper limit (best possible) reading performance attainable with a state-of-the-art 60-channel retinal implant and to find the optimum viewing conditions for the task. Four normal volunteers performed full-page text reading tasks with a low-resolution, 60-pixel viewing window that was stabilized in the central visual field. Two parameters were systematically varied: (1) spatial resolution (image magnification) and (2) the orientation of the rectangular viewing window. Performance was measured in terms of reading accuracy (% of correctly read words) and reading rates (words/min). Maximum reading performances were reached at spatial resolutions between 3.6 and 6 pixels/char. Performance declined outside this range for all subjects. In optimum viewing conditions (4.5 pixels/char), subjects achieved almost perfect reading accuracy and mean reading rates of 26 words/min for the vertical viewing window and of 34 words/min for the horizontal viewing window. These results suggest that, theoretically, some reading abilities can be restored with actual state-of-the-art retinal implant prototypes if "image magnification" is within an "optimum range." Future retinal implants providing higher pixel resolutions, thus allowing for a wider visual span might allow faster reading rates. |
Tom Foulsham; Geoffrey Underwood If visual saliency predicts search, then why? Evidence from normal and gaze-contingent search tasks in natural scenes Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 48–63, 2011. @article{Foulsham2011a, The Itti and Koch (Vision Research 40: 1489–1506, 2000) saliency map model has inspired a wealth of research testing the claim that bottom-up saliency determines the placement of eye fixations in natural scenes. Although saliency seems to correlate with (although not necessarily cause) fixation in free-viewing or encoding tasks, it has been suggested that visual saliency can be overridden in a search task, with saccades being planned on the basis of target features, rather than being captured by saliency. Here, we find that target regions of a scene that are salient according to this model are found quicker than control regions (Experiment 1). However, this does not seem to be altered by filtering features in the periphery using a gaze-contingent display (Experiment 2), and a deeper analysis of the eye movements made suggests that the saliency effect is instead due to the meaning of the scene regions. Experiment 3 supports this interpretation, showing that scene inversion reduces the saliency effect. These results suggest that saliency effects on search may have nothing to do with bottom-up saccade guidance. |
Daniel Mirman; Eiling Yee; Sheila E. Blumstein; James S. Magnuson Theories of spoken word recognition deficits in aphasia: Evidence from eye-tracking and computational modeling Journal Article In: Brain and Language, vol. 117, no. 2, pp. 53–68, 2011. @article{Mirman2011, We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot-parrot) and cohort (e.g., beaker-beetle) competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke's aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke's aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. |
Holger Mitterer The mental lexicon is fully specified: Evidence from eye-tracking Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 2, pp. 496–513, 2011. @article{Mitterer2011, Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input (pin) activates lexical entries with underspecified coronal stops (tin), but lexical entries with specified labial stops (pin) are not activated by mismatching input (tin). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than did unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs (tin-pin) and in Experiments 2 and 3 with words with an onset overlap (peacock-teacake). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as was predicted by an optimal perception account. |
Vanessa Baudiffier; David Caplan; Daniel Gaonac'h; David Chesnet The effect of noun animacy on the processing of unambiguous sentences: Evidence from French relative clauses Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1896–1905, 2011. @article{Baudiffier2011, Two experiments, one using self-paced reading and one using eye tracking, investigated the influence of noun animacy on the processing of subject relative (SR) clauses, object relative (OR) clauses, and object relative clauses with stylistic inversion (OR-SI) in French. Each sentence type was presented in two versions: either with an animate relative clause (RC) subject and an inanimate object (AS/IO), or with an inanimate RC subject and an animate object (IS/AO). There was an interaction between the RC structure and noun animacy. The advantage of SR sentences over OR and OR-SI sentences disappeared in AS/IO sentences. The interaction between animacy and structure occurred in self-paced reading times and in total fixation times on the RCs, but not in first-pass reading times. The results are consistent with a late interaction between animacy and structural processing during parsing and provide data relevant to several models of parsing. |
Boaz M. Ben-David; Craig G. Chambers; Meredyth Daneman; M. Kathleen Pichora-Fuller; Eyal M. Reingold; Bruce A. Schneider Effects of aging and noise on real-time spoken word recognition: Evidence from eye movements Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 54, pp. 243–262, 2011. @article{BenDavid2011, PURPOSE: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. METHOD: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted objects, for example, "Look at the candle." Eye movements captured listeners' ability to differentiate the target noun (candle) from a similar-sounding phonological competitor (e.g., candy or sandal). Manipulations included the presence/absence of noise, the type of phonological overlap in target-competitor pairs, and the number of syllables. RESULTS: Having controlled for age-related differences in word recognition accuracy (by tailoring noise levels), similar online processing profiles were found for younger and older adults when targets were discriminated from competitors that shared onset sounds. Age-related differences were found when target words were differentiated from rhyming competitors and were more extensive in noise. CONCLUSIONS: Real-time spoken word recognition processes appear similar for younger and older adults in most conditions; however, age-related differences may be found in the discrimination of rhyming words (especially in noise), even when there are no age differences in word recognition accuracy. These results highlight the utility of eye movement methodologies for studying speech processing across the life span. |
Raymond Bertram; Victor Kuperman; R. Harald Baayen; Jukka Hyönä The hyphen as a segmentation cue in triconstituent compound processing: It's getting better all the time Journal Article In: Scandinavian Journal of Psychology, vol. 52, no. 6, pp. 530–544, 2011. @article{Bertram2011, Inserting a hyphen in Dutch and Finnish compounds is most often illegal given spelling conventions. However, the current two eye movement experiments on triconstituent Dutch compounds like voetbalbond "footballassociation" (Experiment 1) and triconstituent Finnish compounds like lentokenttätaksi "airporttaxi" (Experiment 2) show that inserting a hyphen at constituent boundaries does not have to be detrimental to compound processing. In fact, when hyphens were inserted at the major constituent boundary (voetbal-bond "football-association"; lentokenttä-taksi "airport-taxi"), processing of the first part (voetbal "football"; lentokenttä "airport") turns out to be faster when it is followed by a hyphen than when it is legally concatenated. Inserting a hyphen caused a delay in later eye movement measures, which is probably due to the illegality of inserting hyphens in normally concatenated compounds. However, in both Dutch and Finnish we found a learning effect in the course of the experiment, such that by the end of the experiments hyphenated compounds are read faster than in the beginning of the experiment. By the end of the experiment, compounds with a hyphen at the major constituent boundary were actually processed equally fast as (Dutch) or even faster than (Finnish) their concatenated counterparts. In contrast, hyphenation at the minor constituent boundary (voet-balbond "foot-ballassociation"; lento-kenttätaksi "air-porttaxi") was detrimental to compound processing speed throughout the experiment. The results imply that the hyphen may be an efficient segmentation cue and that spelling illegalities can be overcome easily, as long as they make sense. |
Hazel I. Blythe; Tuomo Häikiö; Raymond Bertram; Simon P. Liversedge; Jukka Hyönä Reading disappearing text: Why do children refixate words? Journal Article In: Vision Research, vol. 51, no. 1, pp. 84–92, 2011. @article{Blythe2011, We compared Finnish adults' and children's eye movements on long (8-letter) and short (4-letter) target words embedded in sentences, presented either normally or as disappearing text. When reading disappearing text, where refixations did not provide new information, the 8- to 9-year-old children made fewer refixations but more regressions back to long words compared to when reading normal text. This difference was not observed in the adults or 10- to 11-year-old children. We conclude that the younger children required a second visual sample on the long words, and they adapted their eye movement behaviour when reading disappearing text accordingly. |
Mara Breen; Charles Clifton Stress matters: Effects of anticipated lexical stress on silent reading Journal Article In: Journal of Memory and Language, vol. 64, no. 2, pp. 153–170, 2011. @article{Breen2011, This paper presents findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, participants read stress-alternating noun-verb or noun-adjective homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading. |
Meredith Brown; Anne Pier Salverda; Laura C. Dilley; Michael K. Tanenhaus Expectations from preceding prosody influence segmentation in online sentence processing Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1189–1196, 2011. @article{Brown2011, Previous work examining prosodic cues in online spoken-word recognition has focused primarily on local cues to word identity. However, recent studies have suggested that utterance-level prosodic patterns can also influence the interpretation of subsequent sequences of lexically ambiguous syllables (Dilley, Mattys, & Vinke, Journal of Memory and Language, 63:274–294, 2010; Dilley & McAuley, Journal of Memory and Language, 59:294–311, 2008). To test the hypothesis that these distal prosody effects are based on expectations about the organization of upcoming material, we conducted a visual-world experiment. We examined fixations to competing alternatives such as pan and panda upon hearing the target word panda in utterances in which the acoustic properties of the preceding sentence material had been manipulated. The proportions of fixations to the monosyllabic competitor were higher beginning 200 ms after target word onset when the preceding prosody supported a prosodic constituent boundary following pan-, rather than following panda. These findings support the hypothesis that expectations based on perceived prosodic patterns in the distal context influence lexical segmentation and word recognition. |
Sarah Brown-Schmidt; Agnieszka E. Konopka Experimental approaches to referential domains and the on-line processing of referring expressions in unscripted conversation Journal Article In: Information, vol. 2, no. 4, pp. 302–326, 2011. @article{BrownSchmidt2011, This article describes research investigating the on-line processing of language in unscripted conversational settings. In particular, we focus on the process of formulating and interpreting definite referring expressions. Within this domain we present results of two eye-tracking experiments addressing the problem of how speakers interrogate the referential domain in preparation to speak, how they select an appropriate expression for a given referent, and how addressees interpret these expressions. We aim to demonstrate that it is possible, and indeed fruitful, to examine unscripted, conversational language using modified experimental designs and standard hypothesis testing procedures. |
Tamar H. Gollan; Timothy J. Slattery; Diane Goldenberg; Eva Van Assche; Wouter Duyck; Keith Rayner Frequency drives lexical access in reading but not in speaking: The frequency-lag hypothesis Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 2, pp. 186–209, 2011. @article{Gollan2011, To contrast mechanisms of lexical access in production versus comprehension we compared the effects of word frequency (high, low), context (none, low constraint, high constraint), and level of English proficiency (monolingual, Spanish-English bilingual, Dutch-English bilingual) on picture naming, lexical decision, and eye fixation times. Semantic constraint effects were larger in production than in reading. Frequency effects were larger in production than in reading without constraining context but larger in reading than in production with constraining context. Bilingual disadvantages were modulated by frequency in production but not in eye fixation times, were not smaller in low-constraint contexts, and were reduced by high-constraint contexts only in production and only at the lowest level of English proficiency. These results challenge existing accounts of bilingual disadvantages and reveal fundamentally different processes during lexical access across modalities, entailing a primarily semantically driven search in production but a frequency-driven search in comprehension. The apparently more interactive process in production than comprehension could simply reflect a greater number of frequency-sensitive processing stages in production. |
Andreas Hartwig; Emma Gowen; W. Neil Charman; Hema Radhakrishnan Working distance and eye and head movements during near work in myopes and non-myopes Journal Article In: Clinical and Experimental Optometry, vol. 94, no. 6, pp. 536–544, 2011. @article{Hartwig2011a, PURPOSE: Reasons for the development and progression of myopia remain unclear. Some studies show a high prevalence of myopia in certain occupational groups. This might imply that certain head and eye movements lead to ocular elongation, perhaps as a result of forces from the extraocular muscles, lids or other structures. The present study aims to analyse head and eye movements in myopes and non-myopes for near-vision tasks. METHODS: The study analysed head and eye movements in a cohort of 14 myopic and 16 non-myopic young adults. Eye and head movements were monitored by an eye tracker and a motion sensor while the subjects performed three near tasks, which included reading on a screen, reading a book and writing. Horizontal eye and head movements were measured in terms of angular amplitudes. Vertical eye and head movements were analysed in terms of the range of the whole movement during the recording. Values were also assessed as a ratio based on the width of the printed text, which changed between participants due to individual working distances. RESULTS: Horizontal eye and head movements were significantly different among the three tasks (p = 0.03 and p = 0.014, for eye and head movements, respectively, repeated measures ANOVA). Horizontal and vertical eye and head movements did not differ significantly between myopes and non-myopes. As expected, eye movements preponderated over head movements for all tasks and in both meridians. A positive correlation was found between mean spherical equivalent and the working distance for reading a book (r = 0.41; p = 0.025). 
CONCLUSIONS: The results show a similar pattern of eye movements in all participating subjects, although the amplitude of these movements varied considerably between the individuals. It is likely that some individuals when exposed to certain occupational tasks might show different eye and head movement patterns. |
Minglei Chen; Hwawei Ko Exploring the eye-movement patterns as Chinese children read texts: A developmental perspective Journal Article In: Journal of Research in Reading, vol. 34, no. 2, pp. 232–246, 2011. @article{Chen2011, This study investigated Chinese children's eye-movement patterns while reading different text genres from a developmental perspective. Eye movements were recorded while children in the second through sixth grades read two expository texts and two narrative texts. Across passages, overall word frequency was not significantly different between the two genres. Results showed that all children had longer fixation durations for low-frequency words. They also had longer fixation durations on content words. These results indicate that children adopted a word-based processing strategy like skilled readers do. However, only older children's rereading times were affected by genre. Overall, eye-movement patterns of older children reported in this study are in accordance with those of skilled Chinese readers, but younger children are more likely to be responsive to word characteristics than text level when reading a Chinese text. |
Reinier Cozijn; Edwin Commandeur; Wietske Vonk; Leo G. M. Noordman The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 381–403, 2011. @article{Cozijn2011, Several theoretical accounts have been proposed with respect to the issue of how quickly the implicit causality verb bias affects the understanding of sentences such as "John beat Pete at the tennis match, because he had played very well." They can be considered as instances of two viewpoints: the focusing and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task to resolve the ambiguous pronoun. To exclude strategic processing, in the second experiment, the task was to answer simple comprehension questions and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account. |
Sarah C. Creel; Melanie A. Tumlin On-line acoustic and semantic interpretation of talker information Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 264–285, 2011. @article{Creel2011, Recent work demonstrates that listeners utilize talker-specific information in the speech signal to inform real-time language processing. However, there are multiple representational levels at which this may take place. Listeners might use acoustic cues in the speech signal to access the talker's identity and information about what they tend to talk about, which then immediately constrains processing. Alternatively, or simultaneously, listeners might compare the signal to acoustically-detailed representations of words, without awareness of the talker's identity. In a series of eye-tracked comprehension experiments, we explore the circumstances under which listeners utilize talker-specific information. Experiments 1 and 2 demonstrate talker-specific recognition benefits for newly-learned words both in isolation (Experiment 1) and with preceding context (Experiment 2), but suggest that listeners do not strongly semantically associate talkers with referents. Experiment 3 demonstrates that listeners can recognize talkers rapidly, almost as soon as acoustic information is available, and can associate talkers with multiple arbitrary referents. Experiment 4 demonstrates that if talker identity is highly diagnostic on each trial, listeners readily associate talkers with specific referents, but do not seem to make such associations when diagnostic value is low. Implications for speech processing, talker processing, and learning are discussed. |
Sebastian J. Crutch; Manja Lehmann; Nikos Gorgoraptis; Diego Kaski; Natalie Ryan; Masud Husain; Elizabeth K. Warrington Abnormal visual phenomena in posterior cortical atrophy Journal Article In: Neurocase, vol. 17, no. 2, pp. 160–177, 2011. @article{Crutch2011, Individuals with posterior cortical atrophy (PCA) report a host of unusual and poorly explained visual disturbances. This preliminary report describes a single patient (CRO), and documents and investigates abnormally prolonged colour afterimages (concurrent and prolonged perception of colours complementary to the colour of an observed stimulus), perceived motion of static stimuli, and better reading of small than large letters. We also evaluate CRO's visual and vestibular functions in an effort to understand the origin of her experience of room tilt illusion, a disturbing phenomenon not previously observed in individuals with cortical degenerative disease. These visual symptoms are set in the context of a 4-year longitudinal neuropsychological and neuroimaging investigation of CRO's visual and other cognitive skills. We hypothesise that prolonged colour afterimages are attributable to relative sparing of V1 inhibitory interneurons; perceived motion of static stimuli reflects weak magnocellular function; better reading of small than large letters indicates a reduced effective field of vision; and room tilt illusion effects are caused by disordered integration of visual and vestibular information. This study contributes to the growing characterisation of PCA whose atypical early visual symptoms are often heterogeneous and frequently under-recognised. |
Chelsie L. Cushman; Rebecca L. Johnson Age-of-acquisition effects in pure alexia Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 9, pp. 1726–1742, 2011. @article{Cushman2011, Pure alexia is an acquired reading disorder in which previously literate adults adopt a letter-by-letter processing strategy. Though these individuals display impaired reading, research shows that they are still able to use certain lexical information in order to facilitate visual word processing. The current experiment investigates the role that a word's age of acquisition (AoA) plays in the reading processes of an individual with pure alexia (G.J.) when other lexical variables have been controlled. Results from a sentence reading task in which eye movement patterns were recorded indicated that G.J. shows a strong effect of AoA, where late-acquired words are more difficult to process than early-acquired words. Furthermore, it was observed that the AoA effect is much greater for G.J. than for age-matched control participants. This indicates that patients with pure alexia rely heavily on intact top-down information, supporting the interactive activation model of reading. |
Gerry T. M. Altmann Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 190–200, 2011. @article{Altmann2011, The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed; with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man…' or 'the girl…', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word. |
Richard Andersson; Fernanda Ferreira; John M. Henderson I see what you're saying: The integration of complex speech and scenes during language comprehension Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 208–216, 2011. @article{Andersson2011, The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load. |
Bernhard Angele; Keith Rayner Parafoveal processing of word n + 2 during reading: Do the preceding words matter? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1210–1220, 2011. @article{Angele2011, We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment. |
Keith S. Apfelbaum; Sheila E. Blumstein; Bob McMurray Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 141–149, 2011. @article{Apfelbaum2011, Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words. |
Brian Bartek; Richard L. Lewis; Shravan Vasishth; Mason R. Smith In search of on-line locality effects in sentence comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1178–1198, 2011. @article{Bartek2011, Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing. |
Shravan Vasishth; Heiner Drenhaus Locality in German Journal Article In: Dialogue and Discourse, vol. 2, no. 1, pp. 59–82, 2011. @article{Vasishth2011, Three experiments (self-paced reading, eyetracking and an ERP study) show that in relative clauses, increasing the distance between the relativized noun and the relative-clause verb makes it more difficult to process the relative-clause verb (the so-called locality effect). This result is consistent with the predictions of several theories (Gibson, 2000; Lewis and Vasishth, 2005), and contradicts the recent claim (Levy, 2008) that in relative-clause structures increasing argument-verb distance makes processing easier at the verb. Levy's expectation-based account predicts that the expectation for a verb becomes sharper as distance is increased and therefore processing becomes easier at the verb. We argue that, in addition to expectation effects (which are seen in the eyetracking study in first-pass regression probability), processing load also increases with increasing distance. This contradicts Levy's claim that heightened expectation leads to lower processing cost. Dependency-resolution cost and expectation-based facilitation are jointly responsible for determining processing cost. |
Eduardo Vidal-Abarca; Tomás Martinez; Ladislao Salmerón; Raquel Cerdán; Ramiro Gilabert; Laura Gil; Amelia Mañá; Ana C. Llorens; Ricardo Ferris Recording online processes in task-oriented reading with Read&Answer Journal Article In: Behavior Research Methods, vol. 43, no. 1, pp. 179–192, 2011. @article{VidalAbarca2011, We present an application to study task-oriented reading processes called Read&Answer. The application mimics paper-and-pencil situations in which a reader interacts with one or more documents to perform a specific task, such as answering questions, writing an essay, or similar activities. Read&Answer presents documents and questions with a mask. The reader unmasks documents and questions so that only a piece of information is available at a time. This way the entire interaction between the reader and the documents on the task is recorded and can be analyzed. We describe Read&Answer and present its applications for research and assessment. Finally, we explain two studies that compare readers' performance on Read&Answer with students' reading times and comprehension levels on a paper-and-pencil task, and on a computer task recorded with eye-tracking. The use of Read&Answer produced similar comprehension scores, although it changed the pattern of reading times. |
Tessa Warren; Erik D. Reichle; Nikole D. Patson Lexical and post-lexical complexity effects on eye movements Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–10, 2011. @article{Warren2011, The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spillover differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). |
Sarah J. White; Tessa Warren; Erik D. Reichle Parafoveal preview during reading: Effects of sentence position Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1221–1238, 2011. @article{White2011, Two experiments examined parafoveal preview for words located in the middle of sentences and at sentence boundaries. Parafoveal processing was shown to occur for words at sentence-initial, mid-sentence, and sentence-final positions. Both Experiments 1 and 2 showed reduced effects of preview on regressions out for sentence-initial words. In addition, Experiment 2 showed reduced preview effects on first-pass reading times for sentence-initial words. These effects of sentence position on preview could result from either reduced parafoveal processing for sentence-initial words or other processes specific to word reading at sentence boundaries. In addition to the effects of preview, the experiments also demonstrate variability in the effects of sentence wrap-up on different reading measures, indicating that the presence and time course of wrap-up effects may be modulated by text-specific factors. We also report simulations of Experiment 2 using version 10 of E-Z Reader (Reichle, Warren, & McConnell, 2009), designed to explore the possible mechanisms underlying parafoveal preview at sentence boundaries. |
Adrian Staub The effect of lexical predictability on distributions of eye fixation durations Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 2, pp. 371–376, 2011. @article{Staub2011, A word's predictability in context has a well-established effect on fixation durations in reading. To investigate how this effect is manifested in distributional terms, an experiment was carried out in which subjects read each of 50 target words twice, once in a high-predictability context and once in a low-predictability context. The ex-Gaussian distribution was fit to each subject's first-fixation durations and single-fixation durations. For both measures, the μ parameter increased when a word was unpredictable, while the τ parameter was not significantly affected, indicating that a predictability manipulation shifts the distribution of fixation durations but does not affect the degree of skew. Vincentile plots showed that the mean ex-Gaussian parameters described the typical distribution shapes extremely well. These results suggest that the predictability and frequency effects are functionally distinct, since a frequency manipulation has been shown to influence both μ and τ. The results may also be seen as consistent with the finding from single-word recognition paradigms that semantic priming affects only μ. |
Adrian Staub Word recognition and syntactic attachment in reading: Evidence for a staged architecture Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 407–433, 2011. @article{Staub2011a, In 3 experiments, the author examined how readers' eye movements are influenced by joint manipulations of a word's frequency and the syntactic fit of the word in its context. In the critical conditions of the first 2 experiments, a high- or low-frequency verb was used to disambiguate a garden-path sentence, while in the last experiment, a high- or low-frequency verb constituted a phrase structure violation. The frequency manipulation always influenced the early eye movement measures of first-fixation duration and gaze duration. The context manipulation had a delayed effect in Experiment 1, influencing only the probability of a regressive eye movement from later in the sentence. However, the context manipulation influenced the same early eye movement measures as the frequency effect in Experiments 2 and 3, though there was no statistical interaction between the effects of these variables. The context manipulation also influenced the probability of a regressive eye movement from the verb, though the frequency manipulation did not. These results are shown to confirm predictions emerging from the serial, staged architecture for lexical and integrative processing of the E-Z Reader 10 model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). It is argued, more generally, that the results provide an important constraint on how the relationship between visual word recognition and syntactic attachment is treated in processing models. |
Maria Staudte; Matthew W. Crocker Investigating joint attention mechanisms through spoken human-robot interaction Journal Article In: Cognition, vol. 120, no. 2, pp. 268–291, 2011. @article{Staudte2011, Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human-robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similar to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms. |
Debra Titone; Maya R. Libben; Julie Mercier; Veronica Whitford; Irina Pivneva Bilingual lexical access during L1 sentence reading: The effects of L2 knowledge, semantic constraint, and L1-L2 intermixing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 6, pp. 1412–1431, 2011. @article{Titone2011, Libben and Titone (2009) recently observed that cognate facilitation and interlingual homograph interference were attenuated by increased semantic constraint during bilingual second language (L2) reading, using eye movement measures. We now investigate whether cross-language activation also occurs during first language (L1) reading as a function of age of L2 acquisition and task demands (i.e., inclusion of L2 sentences). In Experiment 1, participants read high and low constraint English (L1) sentences containing interlingual homographs, cognates, or control words. In Experiment 2, we included French (L2) filler sentences to increase salience of the L2 during L1 reading. The results suggest that bilinguals reading in their L1 show nonselective activation to the extent that they acquired their L2 early in life. Similar to our previous work on L2 reading, high contextual constraint attenuated cross-language activation for cognates. The inclusion of French filler items promoted greater cross-language activation, especially for late stage reading measures. Thus, L1 bilingual reading is modulated by L2 knowledge, semantic constraint, and task demands. |
Annie Tremblay Learning to parse liaison-initial words: An eye-tracking study Journal Article In: Bilingualism: Language and Cognition, vol. 14, no. 3, pp. 257–279, 2011. @article{Tremblay2011, This study investigates the processing of resyllabified words by native English speakers at three proficiency levels in French and by native French speakers. In particular, it examines non-native listeners' development of a parsing procedure for recognizing vowel-initial words in the context of liaison, a process that creates a misalignment of the syllable and word boundaries in French. The participants completed an eye-tracking experiment in which they identified liaison- and consonant-initial real and nonce words in auditory stimuli. The results show that the non-native listeners had little difficulty recognizing liaison-initial real words, and they recognized liaison-initial nonce words more rapidly than consonant-initial ones. By contrast, native listeners recognized consonant-initial real and nonce words more rapidly than liaison-initial ones. These results suggest that native and non-native listeners used different parsing procedures for recognizing liaison-initial words in the task, with the non-native listeners' ability to segment liaison-initial words being phonologically abstract rather than lexical. |
Cara Tsang; Craig G. Chambers Appearances aren't everything: Shape classifiers and referential processing in Cantonese Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1065–1080, 2011. @article{Tsang2011, Cantonese shape classifiers encode perceptual information that is characteristic of their associated nouns, although certain nouns are exceptional. For example, the classifier tiu occurs primarily with nouns for long-narrow-flexible objects (e.g., scarves, snakes, and ropes) and also occurs with the noun for a (short, rigid) key. In 3 experiments, we explored how the semantic information encoded in shape classifiers influences language comprehension. When judging the fit between classifiers and depicted objects in an explicit ranking task, Cantonese speakers evaluated classifier-noun pairings solely in terms of grammatical well-formedness and showed no separate sensitivity to the shape features of objects. In an eye-tracking task (Experiment 2), we also found little sensitivity to shape classifier semantics during real-time comprehension. However, in a subsequent experiment in which referent objects lacked the prototypical features for their accompanying classifiers (Experiment 3), an influence of shape semantics was found in participants' incidental fixations to nontarget objects. We conclude that shape classifiers influence referential interpretation primarily through their grammatical constraints, consistent with the agreement-like nature of classifiers in general. The role of shape classifiers' semantics on processing is apparent only in specific circumstances. |
Gurmit Uppal; Mary P. Feely; Michael D. Crossland; Luke Membrey; John Lee; Lyndon Cruz; Gary S. Rubin Assessment of reading behavior with an infrared eye tracker after 360° macular translocation for age-related macular degeneration Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 9, pp. 6486–6496, 2011. @article{Uppal2011, Purpose. Macular translocation (MT360) is complex surgery used to restore reading in exudative age-related macular degeneration (AMD). MT360 involves retinal rotation and subsequent oculomotor globe counterrotation and is not without significant surgical risk. This study attempts to gauge the optimal potential of MT360 in restoring reading ability and describe the quality and extent of recovery. Methods. The six best outcomes were examined from a consecutive series of 23 MT360 cases. Reading behavior and fixation characteristics were examined with an infrared eye tracker. Results were compared to age-matched normal subjects and patients with untreated exudative and nonexudative AMD. Retinal sensitivity was examined with microperimetry to establish threshold visual function. Results. MT360 produced significant improvements in visual function over untreated disease and approximated normal function for reading speed and fixation quality. Relative to the comparative groups, eye tracking revealed the MT360 cohort generated a greater number of horizontal and vertical saccades, of longer latency and reduced velocity. In contrast, saccadic behavior when reading (forward and regressive saccades) closely matched normal function. Microperimetry revealed a reduction in the central scotoma with three patients recovering normal foveal sensitivity. Conclusions. Near normal reading function is recovered despite profound surgical disruption to the anatomy (retinal/oculomotor). MT360 restores foveal function sufficient to produce a single stable locus of fixation, with marked reduction of the central scotoma. Despite the limitations on saccadic function, the quality of reading saccadic behavior is maintained with good reading ability. Oculomotor surgery appears not to limit reading ability, and the results of retinal surgery approximate normal macular function. |
Seppo Vainio; Raymond Bertram; Anneli Pajunen; Jukka Hyönä Processing modifier-head agreement in long Finnish words: Evidence from eye movements Journal Article In: Acta Linguistica Hungarica, vol. 58, no. 1, pp. 134–156, 2011. @article{Vainio2011, The present study investigates whether processing of an inflected Finnish noun is facilitated when preceded by a modifier in the same case ending. In Finnish, modifiers agree with their head nouns both in case and in number and the agreement is expressed by means of suffixes (e.g., vanha/ssa talo/ssa 'old/in house/in' –> 'in the old house'). Vainio et al. (2003; 2008) showed processing benefits for this kind of modifier-head agreement, when the head nouns were relatively short. However, the effect showed up relatively late in the processing stream, such that word n + 1, the word following the target noun talo/ssa, was read faster when it was preceded by an agreeing modifier (vanha/ssa) than when no modifier was present. This led Vainio et al. to the conclusion that agreement exerts its effect at a later stage, namely at the level of syntactic integration and not at the level of lexical access. The current study investigates whether the same holds when head nouns are considerably longer (e.g., kaupungin/talo/ssa 'city house/in' –> 'in the city hall'). Our results show that the effect of agreement is facilitative in the case of longer head nouns as well, but – in contrast to what was found for shorter words – the effect not only appeared late, but was also observed in earlier processing measures. It thus seems that, in processing long words, benefits related to modifier-head agreement are not confined to post-lexical syntactic integration processes, but extend to lexical identification of the head. |
Eva Van Assche; Denis Drieghe; Wouter Duyck; Marijke Welvaert; Robert J. Hartsuiker The influence of semantic constraints on bilingual word recognition during sentence reading Journal Article In: Journal of Memory and Language, vol. 64, no. 1, pp. 88–107, 2011. @article{VanAssche2011, The present study investigates how semantic constraint of a sentence context modulates language-non-selective activation in bilingual visual word recognition. We recorded Dutch-English bilinguals' eye movements while they read cognates and controls in low and high semantically constraining sentences in their second language. Early and late eye-movement measures yielded cognate facilitation, both for low- and high-constraint sentences. Facilitation increased gradually as a function of cross-lingual overlap between translation equivalents. A control experiment showed that the same stimuli did not yield cognate effects in English monolingual controls, ensuring that these effects were not due to any uncontrolled stimulus characteristics. The present study supports models of bilingual word recognition with a limited role for top-down influences of semantic constraints on lexical access in both early and later stages of bilingual word recognition. |
Lise Van der Haegen; Marc Brysbaert The mechanisms underlying the interhemispheric integration of information in foveal word recognition: Evidence for transcortical inhibition Journal Article In: Brain and Language, vol. 118, no. 3, pp. 81–89, 2011. @article{VanderHaegen2011, Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex in two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: A bilateral projection of visual input in foveal vision, or interhemispheric inhibition and integration as proposed by the SERIOL model of visual word recognition. Experiment 1 made use of pairs of words and nonwords with a length of four letters each. Participants had to name the word and ignore the nonword. The visual field in which the word was presented and the distance between the word and the nonword were manipulated. The results showed that the typical right visual field advantage was observed only when the word and the nonword were clearly separated. When the distance between them became smaller, the right visual field advantage turned into a left visual field advantage, in line with the interhemispheric inhibition mechanism postulated by the SERIOL model. Experiment 2, using 5-letter stimuli, confirmed that this result was not due to the eccentricity of the word relative to the fixation location but to the distance between the word and the nonword. |
Lise Van der Haegen; Qing Cai; Ruth Seurinck; Marc Brysbaert Further fMRI validation of the visual half field technique as an indicator of language laterality: A large-group analysis Journal Article In: Neuropsychologia, vol. 49, no. 10, pp. 2879–2888, 2011. @article{VanderHaegen2011a, The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to replace the invasive Wada-test. We evaluated the usefulness of behavioral visual half field (VHF) tasks for screening a large sample of healthy left-handers. Laterality indices (LIs) calculated on the basis of the latencies in a word and picture naming VHF task were compared to the brain activity measured in a silent word generation task in fMRI (pars opercularis/BA44 and pars triangularis/BA45). Results confirmed the usefulness of the VHF-tasks as a screening device. None of the left-handed participants with clear right visual field (RVF) advantages in the picture and word naming task showed right hemisphere dominance in the scanner. In contrast, 16/20 participants with a left visual field (LVF) advantage in both word and picture naming turned out to have atypical right brain dominance. Results were less clear for participants who failed to show clear VHF asymmetries (below 20 ms RVF advantage and below 60 ms LVF advantage) or who had inconsistent asymmetries in picture and word naming. These results indicate that the behavioral tasks can mainly provide useful information about the direction of speech dominance when both VHF differences clearly point in the same direction. |
Julie A. Van Dyke; Brian McElree Cue-dependent interference in comprehension Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 247–263, 2011. @article{VanDyke2011, The role of interference as a primary determinant of forgetting in memory has long been accepted; however, its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies, in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging evidence for the role of syntactic and semantic cues as mediators of both proactive interference (PI) and retroactive interference (RI) during comprehension. Consistent with previous work (e.g., Van Dyke & Lewis, 2003), we found that syntactic constraints at the retrieval site are among the cues that drive retrieval in comprehension, and that these constraints effectively limit interference from potential distractors with semantic/pragmatic properties in common with the target constituent. The data are discussed in terms of a cue-overload account, in which interference both arises from and is mediated through a direct-access retrieval mechanism that utilizes a linear, weighted cue-combinatoric scheme. |
Heather Winskel Orthographic and phonological parafoveal processing of consonants, vowels, and tones when reading Thai Journal Article In: Applied Psycholinguistics, vol. 32, no. 4, pp. 739–759, 2011. @article{Winskel2011, Four eye movement experiments investigated whether readers use parafoveal input to gain information about the phonological or orthographic forms of consonants, vowels, and tones in word recognition when reading Thai silently. Target words were presented in sentences preceded by parafoveal previews in which consonant, vowel, or tone information was manipulated. Previews of homophonous consonants (Experiment 1) and concordant vowels (Experiment 2) did not substantially facilitate processing of the target word, whereas the identical previews did. Hence, orthography appears to be playing the prominent role in early word recognition for consonants and vowels. Incorrect tone marker previews (Experiment 3) substantially retarded the subsequent processing of the target word, indicating that lexical tone plays an important role in early word recognition. Vowels in VOP (Experiment 4) did not facilitate processing, which points to vowel position being a significant factor. Primarily, orthographic codes of consonants and vowels (HOP) in conjunction with tone information are assembled from parafoveal input and used for early lexical access. |
Lynsey Wolter; Kristen Skovbroten Gorman; Michael K. Tanenhaus Scalar reference, contrast and discourse: Separating effects of linguistic discourse from availability of the referent Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 299–317, 2011. @article{Wolter2011, Listeners expect that a definite noun phrase with a pre-nominal scalar adjective (e.g., the big ...) will refer to an entity that is part of a set of objects contrasting on the scalar dimension, e.g., size (Sedivy, Tanenhaus, Chambers, & Carlson, 1999). Two visual world experiments demonstrate that uttering a referring expression with a scalar adjective makes all members of the relevant contrast set more salient in the discourse model, facilitating subsequent reference to other members of that contrast set. Moreover, this discourse effect is caused primarily by linguistic mention of a scalar adjective and not by the listener's prior visual or perceptual experience. These experiments demonstrate that language processing is sensitive to which information was introduced by linguistic mention, and that the visual world paradigm can be used to tease apart the separate contributions of visual and linguistic information to reference resolution. |
Bo Yao; Christoph Scheepers Contextual modulation of reading rate for direct versus indirect speech quotations Journal Article In: Cognition, vol. 121, no. 3, pp. 447–453, 2011. @article{Yao2011, In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. |
Eiling Yee; Stacy Huffstetler; Sharon L. Thompson-Schill Function follows form: Activation of shape and function features during object identification Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 348–363, 2011. @article{Yee2011, Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory. |
Li-Hao Yeh; Ana I. Schwartz; Aaron L. Baule The impact of text-structure strategy instruction on the text recall and eye-movement patterns of second language English readers Journal Article In: Reading Psychology, vol. 32, no. 6, pp. 495–519, 2011. @article{Yeh2011, Previous studies have demonstrated the efficacy of the Text Structure Strategy for improving text recall. The strategy emphasizes the identification of text structure for encoding and recalling information. Traditionally, the efficacy of this strategy has been measured through free recall. The present study examined whether recall and eye-movement patterns of second language English readers would benefit from training on the strategy. Participants' free recall and eye-movement patterns were measured before and after training. There was a significant increase in recall at posttest and a change in eye-movement patterns, reflecting additional processing time of phrases and words signaling the text structure. |
Andrea E. Martin; Brian McElree Direct-access retrieval during sentence comprehension: Evidence from Sluicing Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 327–343, 2011. @article{Martin2011, Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb-phrase ellipsis. We present speed-accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables. |
Kazunaga Matsuki; Tracy Chow; Mary Hare; Jeffrey L. Elman; Christoph Scheepers; Ken McRae Event-based plausibility immediately influences on-line language comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 913–934, 2011. @article{Matsuki2011, In some theories of sentence comprehension, linguistically relevant lexical knowledge, such as selectional restrictions, is privileged in terms of the time-course of its access and influence. We examined whether event knowledge computed by combining multiple concepts can rapidly influence language understanding even in the absence of selectional restriction violations. Specifically, we investigated whether instruments can combine with actions to influence comprehension of ensuing patients (as in Rayner, Warren, Juhasz, & Liversedge, 2004; Warren & McConnell, 2007). Instrument-verb-patient triplets were created in a norming study designed to tap directly into event knowledge. In self-paced reading (Experiment 1), participants were faster to read patient nouns, such as hair, when they were typical of the instrument-action pair (Donna used the shampoo to wash vs. the hose to wash). Experiment 2 showed that these results were not due to direct instrument-patient relations. Experiment 3 replicated Experiment 1 using eyetracking, with effects of event typicality observed in first fixation and gaze durations on the patient noun. This research demonstrates that conceptual event-based expectations are computed and used rapidly and dynamically during on-line language comprehension. We discuss relationships among plausibility and predictability, as well as their implications. We conclude that selectional restrictions may be best considered as event-based conceptual knowledge rather than lexical-grammatical knowledge. |
Stefanie E. Kuchinsky; Kathryn Bock; David E. Irwin Reversing the hands of time: Changing the mapping from seeing to saying Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 3, pp. 748–756, 2011. @article{Kuchinsky2011, To describe a scene, speakers must map visual information to a linguistic plan. Eye movements capture features of this linkage in a tendency for speakers to fixate referents just before they are mentioned. The current experiment examined whether and how this pattern changes when speakers create atypical mappings. Eye movements were monitored as participants told the time from analog clocks. Half of the participants did this in the usual manner. For the other participants, the denotations of the clock hands were reversed, making the big hand the hour and the little hand the minute. Eye movements revealed that it was not the visual features or configuration of the hands that determined gaze patterns, but rather top-down control from upcoming referring expressions. Differences in eye-voice spans further suggested a process in which scene elements are relationally structured before a linguistic plan is executed. This provides evidence for structural rather than lexical incrementality in planning and supports a "seeing-for-saying" hypothesis in which the visual system is harnessed to the linguistic demands of an upcoming utterance. |
Victor Kuperman; Julie A. Van Dyke Effects of individual differences in verbal skills on eye-movement patterns during sentence reading Journal Article In: Journal of Memory and Language, vol. 65, no. 1, pp. 42–73, 2011. @article{Kuperman2011, This study is a large-scale exploration of the influence that individual reading skills exert on eye-movement behavior in sentence reading. Seventy-one non-college-bound 16–24-year-old speakers of English completed a battery of 18 verbal and cognitive skill assessments, and read a series of sentences as their eye movements were monitored. Statistical analyses were performed to establish what tests of reading abilities were predictive of eye-movement patterns across this population and how strong the effects were. We found that individual scores in rapid automatized naming and word identification tests (i) were the only participant variables with reliable predictivity throughout the time-course of reading; (ii) elicited effects that superseded in magnitude the effects of established predictors like word length or frequency; and (iii) strongly modulated the influence of word length and frequency on fixation times. We discuss implications of our findings for testing reading ability, as well as for research on eye movements in reading. |
Eric Lambert; Denis Alamargot; Denis Larocque; Gilles Caporossi Dynamics of the spelling process during a copy task: Effects of regularity and frequency Journal Article In: Canadian Journal of Experimental Psychology, vol. 65, no. 3, pp. 141–150, 2011. @article{Lambert2011, This study investigated the time course of spelling, and its influence on graphomotor execution, in a successive word copy task. According to the cascade model, these two processes may be engaged either sequentially or in parallel, depending on the cognitive demands of spelling. In this experiment, adults were asked to copy a series of words varying in frequency and spelling regularity. A combined analysis of eye and pen movements revealed periods where spelling occurred in parallel with graphomotor execution, but concerned different processing units. The extent of this parallel processing depended on the words' orthographic characteristics. Results also highlighted the specificity of word recognition for copying purposes compared with recognition for reading tasks. The results confirm the validity of the cascade model and clarify the nature of the dependence between spelling and graphomotor processes. |
Jiyeon Lee; Cynthia K. Thompson Real-time production of unergative and unaccusative sentences in normal and agrammatic speakers: An eyetracking study Journal Article In: Aphasiology, vol. 25, no. 6-7, pp. 813–825, 2011. @article{Lee2011a, Background: Speakers with agrammatic aphasia have greater difficulty producing unaccusative (float) compared to unergative (bark) verbs (Kegl, 1995; Lee & Thompson, 2004; Thompson, 2003), putatively because the former involve movement of the theme to the subject position from the post-verbal position, and are therefore more complex than the latter (Burzio, 1986; Perlmutter, 1978). However, it is unclear if and how sentence production processes are affected by the linguistic distinction between these two types of verbs in normal and impaired speakers. Aims: This study examined real-time production of sentences with unergative (the black dog is barking) vs unaccusative (the black tube is floating) verbs in healthy young speakers and individuals with agrammatic aphasia, using eyetracking. Methods & Procedures: Participants' eye movements and speech were recorded while they produced a sentence using computer displayed written stimuli (e.g., black, dog, is barking). Outcomes & Results: Both groups of speakers produced numerically fewer unaccusative sentences than unergative sentences. However, the eye movement data revealed significant differences in fixations between the adjective (black) vs the noun (tube) when producing unaccusatives, but not when producing unergatives for both groups. Interestingly, whereas healthy speakers showed this difference during speech, speakers with agrammatism showed this difference prior to speech onset. Conclusions: These findings suggest that the human sentence production system differentially processes unaccusatives vs unergatives. This distinction is preserved in individuals with agrammatism; however, the time course of sentence planning appears to differ from healthy speakers (Lee & Thompson, 2010). |
Xingshan Li; Pingping Liu; Keith Rayner Eye movement guidance in Chinese reading: Is there a preferred viewing location? Journal Article In: Vision Research, vol. 51, pp. 1146–1156, 2011. @article{Li2011a, In this study, we examined eye movement guidance in Chinese reading. We embedded either a 2-character word or a 4-character word in the same sentence frame, and observed the eye movements of Chinese readers when they read these sentences. We found that when all saccades into the target words were considered, readers' eyes tended to land near the beginning of the word. However, we also found that Chinese readers' eyes landed at the center of words when they made only a single fixation on a word, and that they landed at the beginning of a word when they made more than one fixation on a word. However, simulations that we carried out suggest that these findings cannot be taken to unambiguously argue for word-based saccade targeting in Chinese reading. We discuss alternative accounts of eye guidance in Chinese reading and suggest that eye movement target planning for Chinese readers might involve a combination of character-based and word-based targeting contingent on word segmentation processes. |
Marcus L. Johnson; Matthew W. Lowder; Peter C. Gordon The sentence-composition effect: Processing of complex noun phrases versus unusual noun phrases Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 4, pp. 707–724, 2011. @article{Johnson2011, In 2 experiments, the authors used an eye-tracking-while-reading methodology to examine how different configurations of common noun phrases versus unusual noun phrases (NPs) influenced the difference in processing difficulty between sentences containing object- and subject-extracted relative clauses. Results showed that processing difficulty was reduced when the head NP was unusual relative to the embedded NP, as manipulated by lexical frequency. When both NPs were common or both were unusual, results showed strong effects of both commonness and sentence structure, but no interaction. In contrast, when 1 NP was common and the other was unusual, results showed the critical interaction. These results provide evidence for a sentence-composition effect analogous to the list-composition effect that has been well documented in memory research, in which the pattern of recall for common versus unusual items is different, depending on whether items are studied in a pure or mixed list context. This work represents an important step in integrating the list-memory and sentence-processing literatures and provides additional support for the usefulness of studying complex sentence processing from the perspective of memory-based models. |
Barbara J. Juhasz; Rachel N. Berkowitz Effects of morphological families on English compound word recognition: A multitask investigation Journal Article In: Language and Cognitive Processes, vol. 26, no. 4-6, pp. 653–682, 2011. @article{Juhasz2011, Three experiments examined the influence of first lexeme morphological family size on English compound word recognition. Concatenated compound words whose first lexemes were from large morphological families were responded to faster in word naming and lexical decision than compounds from small morphological families. In addition, an eye movement experiment showed that gaze durations were shorter on compounds from large morphological families during sentence reading. This was mainly due to more refixations on compounds from small morphological families. Posthoc analyses and re-analysis of past studies suggested that compounds with a larger number of higher frequency family members (HFFM) are read more slowly than compounds with fewer HFFM. Thus, while morphological family size is generally facilitative, the presence of HFFM has an inhibitory effect on eye movement behaviour. The time-course of these effects is discussed. |
Barbara J. Juhasz; Margaret M. Gullick; Leah W. Shesler The effects of age-of-acquisition on ambiguity resolution: Evidence from eye movements Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–14, 2011. @article{Juhasz2011a, Words that are rated as acquired earlier in life receive shorter fixation durations than later acquired words, even when word frequency is adequately controlled (Juhasz & Rayner, 2003; 2006). Some theories posit that age-of-acquisition (AoA) affects the semantic representation of words (e.g., Steyvers & Tenenbaum, 2005), while others suggest that AoA should have an influence at multiple levels in the mental lexicon (e.g., Ellis & Lambon Ralph, 2000). In past studies, early and late AoA words have differed from each other in orthography, phonology, and meaning, making it difficult to localize the influence of AoA. Two experiments are reported which examined the locus of AoA effects in reading. Both experiments used balanced ambiguous words which have two equally frequent meanings acquired at different times (e.g., pot, tick). In Experiment 1, sentence context supporting either the early- or late-acquired meaning was presented prior to the ambiguous word; in Experiment 2, disambiguating context was presented after the ambiguous word. When prior context disambiguated the ambiguous word, meaning AoA influenced the processing of the target word. However, when disambiguating sentence context followed the ambiguous word, meaning frequency was the more important variable and no effect of meaning AoA was observed. These results, when combined with the past results of Juhasz and Rayner (2003; 2006), suggest that AoA influences access to multiple levels of representation in the mental lexicon. The results also have implications for theories of lexical ambiguity resolution, as they suggest that variables other than meaning frequency and context can influence resolution of noun-noun ambiguities. |
Elsi Kaiser Focusing on pronouns: Consequences of subjecthood, pronominalisation, and contrastive focus Journal Article In: Language and Cognitive Processes, vol. 26, no. 10, pp. 1625–1666, 2011. @article{Kaiser2011, We report two visual-world eye-tracking experiments that investigated the effects of subjecthood, pronominalisation, and contrastive focus on the interpretation of pronouns in subsequent discourse. By probing the effects of these factors on real-time pronoun interpretation, we aim to contribute to our understanding of how topicality-related factors (subjecthood, givenness) interact with contrastive focus effects, and to investigate whether the seemingly mixed results obtained in prior work on topicality and focusing could be related to effects of subjecthood. Our results indicate that structural and semantic prominence (specifically, agentive subjects) influence pronoun interpretation even when separated from information-structural notions, and thus need to be taken into account when investigating topicality and focusing. We discuss how our results allow us to reconcile the distinct findings of prior studies. More generally, this research contributes to our understanding of how the language comprehension system integrates different kinds of information during real-time reference resolution. |
Manizeh Khan; Meredyth Daneman How readers spontaneously interpret man-suffix words: Evidence from eye movements Journal Article In: Journal of Psycholinguistic Research, vol. 40, no. 5, pp. 351–366, 2011. @article{Khan2011, This study investigated whether readers are more likely to assign a male referent to man-suffix terms (e.g., chairman) than to gender-neutral alternatives (e.g., chairperson) during reading, and whether this bias differs as a function of age. Younger and older adults' eye movements were monitored as they read passages containing phrases such as "The chairman/chairperson familiarized herself with…" On-line eye fixation data provided strong evidence that man-suffix words were more likely to evoke the expectation of a male referent in both age groups. Younger readers demonstrated inflated processing times when first encountering herself after chairman relative to chairperson, and they tended to make more regressive fixations to chairman. Older readers did not show the effect when initially encountering herself, but they spent disproportionately longer looking back to chairman and herself. The study provides empirical support for copy-editing policies that mandate the use of explicitly gender-neutral suffix terms in place of man-suffix terms. |
Yi Ting Huang; Peter C. Gordon Distinguishing the time course of lexical and discourse processes through context, coreference, and quantified expressions Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 966–978, 2011. @article{Huang2011, How does prior context influence lexical and discourse-level processing during real-time language comprehension? Experiment 1 examined whether the referential ambiguity introduced by a repeated, anaphoric expression had an immediate or delayed effect on lexical and discourse processing, using an eye-tracking-while-reading task. Eye movements indicated facilitated recognition of repeated expressions, suggesting that prior context can rapidly influence lexical processing. However, context effects at the discourse level affected later processing, appearing in longer regression-path durations 2 words after the anaphor and in greater rereading times of the antecedent expression. Experiments 2 and 3 explored the nature of this delay by examining the role of the preceding context in activating relevant representations. Offline and online interpretations confirmed that relevant referents were activated following the critical context. Nevertheless, their initial unavailability during comprehension suggests a robust temporal division between lexical and discourse-level processing. |
Falk Huettig; Gerry T. M. Altmann Looking at anything that is green when hearing "frog": How object surface colour and stored object colour knowledge influence language-mediated overt attention Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 1, pp. 122–145, 2011. @article{Huettig2011, Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are co-present in the visual environment. |
Falk Huettig; James M. McQueen The nature of the visual environment induces implicit biases during language-mediated visual search Journal Article In: Memory & Cognition, vol. 39, no. 6, pp. 1068–1084, 2011. @article{Huettig2011a, Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search. |
Alex D. Hwang; Hsueh-Cheng Wang; Marc Pomplun Semantic guidance of eye movements in real-world scenes Journal Article In: Vision Research, vol. 51, no. 10, pp. 1192–1205, 2011. @article{Hwang2011, The perception of objects in our visual world is influenced not only by low-level visual features such as shape and color, but also by high-level features such as meaning and semantic relations among objects. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. |
Jukka Hyönä; Raymond Bertram Optimal viewing position effects in reading Finnish Journal Article In: Vision Research, vol. 51, no. 11, pp. 1279–1287, 2011. @article{Hyoenae2011, The present study examined effects of the initial landing position in words on eye behavior during reading of long and short Finnish compound words. The study replicated optimal viewing position (OVP) and inverted optimal viewing position (IOVP) effects previously found in French, German, and English (languages structurally distinct from Finnish), suggesting that the effects generalize across structurally different alphabetic languages. The results are consistent with the view that the landing position effects appear at the prelexical stage of word processing, as landing position effects were not modulated by word frequency. Moreover, the OVP effects are in line with a visuomotor explanation making recourse to visual acuity constraints. |
Albrecht W. Inhoff; Matthew S. Solomon; Ralph Radach; Bradley A. Seymour Temporal dynamics of the eye-voice span and eye movement control during oral reading Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 5, pp. 543–558, 2011. @article{Inhoff2011, The distance between eye movements and articulation during oral reading, commonly referred to as the eye-voice span, has been a classic issue of experimental reading research since Buswell (1921). To examine the influence of the span on eye movement control, synchronised recordings of eye position and speech production were obtained during fluent oral reading. The viewing of a word almost always preceded its articulation, and the interval between the onset of a word's fixation and the onset of its articulation was approximately 500 ms. The identification and articulation of a word were closely coupled, and the fixation-speech interval was regulated through immediate adjustments of word viewing duration, unless the interval was relatively long. In this case, the lag between identification and articulation was often reduced through a regression that moved the eyes back in the text. These results indicate that models of eye movement control during oral reading need to include a mechanism that maintains a close linkage between the identification and articulation of words through continuous oculomotor adjustments. |
Lisa Irmen; Eva Schumann Processing grammatical gender of role nouns: Further evidence from eye movements Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 8, pp. 998–1014, 2011. @article{Irmen2011, Two eye-tracking experiments investigated the effects of masculine versus feminine grammatical gender on the processing of role nouns and on establishing coreference relations. Participants read sentences with the basic structure "My [kinship term] is a [role noun] [prepositional phrase]", such as "My brother is a singer in a band". Role nouns were either masculine or feminine. Kinship terms were lexically male or female and in this way specified referent gender, i.e., the sex of the person referred to. Experiment 1 tested a fully crossed design including items with an incorrect combination of lexically male kinship term and feminine role noun. Experiment 2 tested only correct combinations of grammatical and lexical/referential gender to control for possible effects of the incorrect items of Experiment 1. In early stages of processing, feminine role nouns, but not masculine ones, were fixated longer when grammatical and referential gender were contradictory. In later stages of sentence wrap-up there were longer fixations for sentences with masculine than for those with feminine role nouns. Results of both experiments indicate that, for feminine role nouns, cues to referent gender are integrated immediately, whereas a late integration obtains for masculine forms. |
Nikole D. Patson; Tessa Warren Building complex reference objects from dual sets Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 443–459, 2011. @article{Patson2011, There has been considerable psycholinguistic investigation into the conditions that allow separately introduced individuals to be joined into a plural set and represented as a complex reference object (e.g., Eschenbach et al., 1989; Garrod & Sanford, 1982; Koh & Clifton, 2002; Koh et al., 2008; Moxey, Sanford, Sturt, & Morrow, 2004; Sanford & Lockhart, 1990). The current paper reports three eye-tracking experiments that investigate the less well understood question of what conditions allow pointers to be assigned to the individuals within a previously undifferentiated set, turning it into a complex reference object. The experiments made use of a methodology used in Patson and Ferreira (2009) to distinguish between complex reference objects and undifferentiated sets. Experiments 1 and 2 demonstrated that assigning different properties to the members of an undifferentiated dual set via a conjoined modifier or a comparative modifier transformed it into a complex reference object. Experiment 3 indicated that assigning a property to only one member of an undifferentiated dual set introduced pointers to both members. These results demonstrate that pointers can be established to referents within a plural set without picking them out via anaphors; they set boundaries on the kinds of implicit contrasts between referents that establish pointers; and they illustrate that extremely subtle properties of the semantic and referential context can affect early parsing decisions. |