Fast, Accurate, Reliable Eye Tracking


EyeLink Eye-Tracking Publications Library

All EyeLink Publications

All 10,000+ peer-reviewed EyeLink research publications through 2021 (with some early 2022 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!


2008

Sarah C. Creel; Richard N. Aslin; Michael K. Tanenhaus

Heeding the voice of experience: The role of talker variation in lexical access Journal Article

In: Cognition, vol. 106, no. 2, pp. 633–664, 2008.


@article{Creel2008,
title = {Heeding the voice of experience: The role of talker variation in lexical access},
author = {Sarah C. Creel and Richard N. Aslin and Michael K. Tanenhaus},
doi = {10.1016/j.cognition.2007.03.013},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {106},
number = {2},
pages = {633--664},
abstract = {Two experiments used the head-mounted eye-tracking methodology to examine the time course of lexical activation in the face of a non-phonemic cue, talker variation. We found that lexical competition was attenuated by consistent talker differences between words that would otherwise be lexical competitors. In Experiment 1, some English cohort word-pairs were consistently spoken by a single talker (male couch, male cows), while other word-pairs were spoken by different talkers (male sheep, female sheet). After repeated instances of talker-word pairings, words from different-talker pairs showed smaller proportions of competitor fixations than words from same-talker pairs. In Experiment 2, participants learned to identify black-and-white shapes from novel labels spoken by one of two talkers. All of the 16 novel labels were VCVCV word-forms atypical of, but not phonologically illegal in, English. Again, a word was consistently spoken by one talker, and its cohort or rhyme competitor was consistently spoken either by that same talker (same-talker competitor) or the other talker (different-talker competitor). Targets with different-talker cohorts received greater fixation proportions than targets with same-talker cohorts, while the reverse was true for fixations to cohort competitors; there were fewer erroneous selections of competitor referents for different-talker competitors than same-talker competitors. Overall, these results support a view of the lexicon in which entries contain extra-phonemic information. Extensions of the artificial lexicon paradigm and developmental implications are discussed. \textcopyright{} 2007 Elsevier B.V. All rights reserved.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Michael D. Crossland; Antony B. Morland; Mary P. Feely; Elisabeth Hagen; Gary S. Rubin

The effect of age and fixation instability on retinotopic mapping of primary visual cortex Journal Article

In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3734–3739, 2008.


@article{Crossland2008,
title = {The effect of age and fixation instability on retinotopic mapping of primary visual cortex},
author = {Michael D. Crossland and Antony B. Morland and Mary P. Feely and Elisabeth Hagen and Gary S. Rubin},
doi = {10.1167/iovs.07-1621},
year = {2008},
date = {2008-01-01},
journal = {Investigative Ophthalmology \& Visual Science},
volume = {49},
number = {8},
pages = {3734--3739},
abstract = {PURPOSE: Functional magnetic resonance imaging (fMRI) experiments determining the retinotopic structure of visual cortex have commonly been performed on young adults, who are assumed to be able to maintain steady fixation throughout the trial duration. The authors quantified the effects of age and fixation stability on the quality of retinotopic maps of primary visual cortex. METHODS: With the use of a 3T fMRI scanner, the authors measured cortical activity in six older and six younger normally sighted participants observing an expanding flickering checkerboard stimulus of 30 degrees diameter. The area of flattened primary visual cortex (V1) showing any blood oxygen level-dependent (BOLD) activity to the visual stimulus and the area responding to the central 3.75 degrees of the stimulus (relating to the central ring of our target) were recorded. Fixation stability was measured while participants observed the same stimuli outside the scanner using an infrared gazetracker. RESULTS: There were no age-related changes in the area of V1. However, the proportion of V1 active to our visual stimulus was lower for the older observers than for the younger observers (overall activity: 89.8% of V1 area for older observers, 98.6% for younger observers; P <0.05). This effect was more pronounced for the central 3.75 degrees of the target (older subjects, 26.4%; younger subjects, 40.7%; P <0.02). No significant relationship existed between fixation stability and age or the magnitude of activity in the primary visual cortex. CONCLUSIONS: Although the cortical area remains unchanged, healthy older persons show less BOLD activity in V1 than do younger persons. Normal variations in fixation stability do not have a significant effect on the accuracy of experiments to determine the retinotopic structure of the visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ian Cunnings; Harald Clahsen

The time-course of morphological constraints: A study of plurals inside derived words Journal Article

In: The Mental Lexicon, vol. 3, no. 2, pp. 149–175, 2008.


@article{Cunnings2008,
title = {The time-course of morphological constraints: A study of plurals inside derived words},
author = {Ian Cunnings and Harald Clahsen},
doi = {10.1075/ml.3.2.01cun},
year = {2008},
date = {2008-01-01},
journal = {The Mental Lexicon},
volume = {3},
number = {2},
pages = {149--175},
abstract = {The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints that are responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during reading derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time-course of processing derived word forms indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a subsequent later stage of processing. We argue that these results are consistent with stage-based models of language processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Lillian Chen; Julie E. Boland

Dominance and context effects on activation of alternative homophone meanings Journal Article

In: Memory and Cognition, vol. 36, no. 7, pp. 1306–1323, 2008.


@article{Chen2008,
title = {Dominance and context effects on activation of alternative homophone meanings},
author = {Lillian Chen and Julie E. Boland},
doi = {10.3758/MC.36.7.1306},
year = {2008},
date = {2008-01-01},
journal = {Memory and Cognition},
volume = {36},
number = {7},
pages = {1306--1323},
abstract = {Two eyetracking-during-listening experiments showed frequency and context effects on fixation probability for pictures representing multiple meanings of homophones. Participants heard either an imperative sentence instructing them to look at a homophone referent (Experiment 1) or a declarative sentence that was either neutral or biased toward the homophone's subordinate meaning (Experiment 2). At homophone onset in both experiments, the participants viewed four pictures: (1) a referent of one homophone meaning, (2) a shape competitor for a nonpictured homophone meaning, and (3) two unrelated filler objects. In Experiment 1, meaning dominance affected looks to both the homophone referent and the shape competitor. In Experiment 2, as compared with neutral contexts, subordinate-biased contexts lowered the fixation probability for shape competitors of dominant meanings, but shape competitors still attracted more looks than would be expected by chance. We discuss the consistencies and discrepancies of these findings with the selective access and reordered access theories of lexical ambiguity resolution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Gideon P. Caplovitz; Nora A. Paymer; Peter U. Tse

The drifting edge illusion: A stationary edge abutting an oriented drifting grating appears to move because of the 'other aperture problem' Journal Article

In: Vision Research, vol. 48, no. 22, pp. 2403–2414, 2008.


@article{Caplovitz2008,
title = {The drifting edge illusion: A stationary edge abutting an oriented drifting grating appears to move because of the 'other aperture problem'},
author = {Gideon P. Caplovitz and Nora A. Paymer and Peter U. Tse},
doi = {10.1016/j.visres.2008.07.014},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {22},
pages = {2403--2414},
abstract = {We describe the Drifting Edge Illusion (DEI), in which a stationary edge appears to move when it abuts a drifting grating. Although a single edge is sufficient to perceive DEI, a particularly compelling version of DEI occurs when a drifting grating is viewed through an oriented and stationary aperture. The magnitude of the illusion depends crucially on the orientations of the grating and aperture. Using psychophysics, we describe the relationship between the magnitude of DEI and the relative angle between the grating and aperture. Results are discussed in the context of the roles of occlusion, component-motion, and contour relationships in the interpretation of motion information. In particular, we suggest that the visual system is posed with solving an ambiguity other than the traditionally acknowledged aperture problem of determining the direction of motion of the drifting grating. In this 'second aperture problem' or 'edge problem', a motion signal may belong to either the occluded or occluding contour. That is, the motion along the contour can arise either because the grating is drifting or because the edge is drifting over a stationary grating. DEI appears to result from a misattribution of motion information generated by the drifting grating to the stationary contours of the aperture, as if the edges are interpreted to travel over the grating, although they are in fact stationary.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Maria Nella Carminati; Roger P. G. van Gompel; Christoph Scheepers; Manabu Arai

Syntactic priming in comprehension: The role of argument order and animacy Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 5, pp. 1098–1110, 2008.


@article{Carminati2008,
title = {Syntactic priming in comprehension: The role of argument order and animacy},
author = {Maria Nella Carminati and Roger P. G. van Gompel and Christoph Scheepers and Manabu Arai},
doi = {10.1037/a0012795},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {34},
number = {5},
pages = {1098--1110},
abstract = {Two visual-world eye-movement experiments investigated the nature of syntactic priming during comprehension--specifically, whether the priming effects in ditransitive prepositional object (PO) and double object (DO) structures (e.g., "The wizard will send the poison to the prince/the prince the poison?") are due to anticipation of structural properties following the verb (send) in the target sentence or to anticipation of animacy properties of the first postverbal noun. Shortly following the target verb onset, listeners looked at the recipient more (relative to the theme) following DO than PO primes, indicating that the structure of the prime affected listeners' eye gazes on the target scene. Crucially, this priming effect was the same irrespective of whether the postverbal nouns in the prime sentences did ("The monarch will send the painting to the president") or did not ("The monarch will send the envoy to the president") differ in animacy, suggesting that PO/DO priming in comprehension occurs because structural properties, rather than animacy features, are being primed when people process the ditransitive target verb.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jonathan S. A. Carriere; Daniel Eaton; Michael G. Reynolds; Mike J. Dixon; Daniel Smilek

Grapheme–color synesthesia influences overt visual attention Journal Article

In: Journal of Cognitive Neuroscience, vol. 21, no. 2, pp. 246–258, 2008.


@article{Carriere2008,
title = {Grapheme–color synesthesia influences overt visual attention},
author = {Jonathan S. A. Carriere and Daniel Eaton and Michael G. Reynolds and Mike J. Dixon and Daniel Smilek},
year = {2008},
date = {2008-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {21},
number = {2},
pages = {246--258},
abstract = {For individuals with grapheme–color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme–color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias—a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters—in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Monica S. Castelhano; Alexander Pollatsek; Kyle R. Cave

Typicality aids search for an unspecified target, but only in identification and not in attentional guidance Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 795–801, 2008.


@article{Castelhano2008,
title = {Typicality aids search for an unspecified target, but only in identification and not in attentional guidance},
author = {Monica S. Castelhano and Alexander Pollatsek and Kyle R. Cave},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin \& Review},
volume = {15},
number = {4},
pages = {795--801},
abstract = {Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than to search for atypical objects. The answer was "yes," but only in a qualified sense: There was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time, the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Marc H. E. de Lussanet; Luciano Fadiga; Lars Michels; Rüdiger J. Seitz; Raimund Kleiser; Markus Lappe

Interaction of visual hemifield and body view in biological motion perception Journal Article

In: European Journal of Neuroscience, vol. 27, no. 2, pp. 514–522, 2008.


@article{Lussanet2008,
title = {Interaction of visual hemifield and body view in biological motion perception},
author = {Marc H. E. de Lussanet and Luciano Fadiga and Lars Michels and Rüdiger J. Seitz and Raimund Kleiser and Markus Lappe},
doi = {10.1111/j.1460-9568.2007.06009.x},
year = {2008},
date = {2008-01-01},
journal = {European Journal of Neuroscience},
volume = {27},
number = {2},
pages = {514--522},
abstract = {The brain network for the recognition of biological motion includes visual areas and structures of the mirror-neuron system. The latter respond during action execution as well as during action recognition. As motor and somatosensory areas predominantly represent the contralateral side of the body and visual areas predominantly process stimuli from the contralateral hemifield, we were interested in interactions between visual hemifield and action recognition. In the present study, human participants detected the facing direction of profile views of biological motion stimuli presented in the visual periphery. They recognized a right-facing body view of human motion better in the right visual hemifield than in the left; and a left-facing body view better in the left visual hemifield than in the right. In a subsequent fMRI experiment, performed with a similar task, two cortical areas in the left and right hemispheres were significantly correlated with the behavioural facing effect: primary somatosensory cortex (BA 2) and inferior frontal gyrus (BA 44). These areas were activated specifically when point-light stimuli presented in the contralateral visual hemifield displayed the side view of their contralateral body side. Our results indicate that the hemispheric specialization of one's own body map extends to the visual representation of the bodies of others.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Christopher A. Dickinson; Helene Intraub

Transsaccadic representation of layout: What is the time course of boundary extension? Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 3, pp. 543–555, 2008.


@article{Dickinson2008,
title = {Transsaccadic representation of layout: What is the time course of boundary extension?},
author = {Christopher A. Dickinson and Helene Intraub},
doi = {10.1037/0096-1523.34.3.543},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {34},
number = {3},
pages = {543--555},
abstract = {How rapidly does boundary extension occur? Across experiments, trials included a 3-scene sequence (325 ms/picture), masked interval, and repetition of 1 scene. The repetition was the same view or differed (more close-up or wide angle). Observers rated the repetition as same as, closer than, or more wide angle than the original view on a 5-point scale. Masked intervals were 100, 250, 625, or 1,000 ms in Experiment 1 and 42, 100, or 250 ms in Experiments 2 and 3. Boundary extension occurred in all cases: Identical views were rated as too "close-up," and distractor views elicited the rating asymmetry typical of boundary extension (wider angle distractors were rated as being more similar to the original than were closer up distractors). Most important, boundary extension was evident when only a 42-ms mask separated the original and test views. Experiments 1 and 3 included conditions eliciting a gaze shift prior to the rating test; this did not eliminate boundary extension. Results show that boundary extension is available soon enough and is robust enough to play an on-line role in view integration, perhaps supporting incorporation of views within a larger spatial framework.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack

Brief motion stimuli preferentially activate surround-suppressed neurons in macaque visual area MT Journal Article

In: Current Biology, vol. 18, no. 22, pp. 1–6, 2008.

@article{Churan2008,
title = {Brief motion stimuli preferentially activate surround-suppressed neurons in macaque visual area MT},
author = {Jan Churan and Farhan A. Khawaja and James M. G. Tsui and Christopher C. Pack},
doi = {10.1016/j.cub.2008.10.003},
year = {2008},
date = {2008-01-01},
journal = {Current Biology},
volume = {18},
number = {22},
pages = {1--6},
abstract = {Intuitively one might think that larger objects should be easier to see, and indeed performance on visual tasks generally improves with increasing stimulus size [1,2]. Recently, a remarkable exception to this rule was reported [3]: when a high-contrast, moving stimulus is presented very briefly, motion perception deteriorates as stimulus size increases. This psychophysical surround suppression has been interpreted as a correlate of the neuronal surround suppression that is commonly found in the visual cortex [3-5]. However, many visual cortical neurons lack surround suppression, and so one might expect that the brain would simply use their outputs to discriminate the motion of large stimuli. Indeed previous work has generally found that observers rely on whichever neurons are most informative about the stimulus to perform similar psychophysical tasks [6]. Here we show that the responses of neurons in the middle temporal (MT) area of macaque monkeys provide a simple resolution to this paradox. We find that surround-suppressed MT neurons integrate motion signals relatively quickly, so that by comparison non-suppressed neurons respond poorly to brief stimuli. Thus, psychophysical surround suppression for brief stimuli can be viewed as a consequence of a strategy that weights neuronal responses according to how informative they are about a given stimulus. If this interpretation is correct, then it follows that any psychophysical experiment that uses brief motion stimuli will effectively probe the responses of MT neurons that have strong surround suppression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Meghan Clayards; Michael K. Tanenhaus; Richard N. Aslin; Robert A. Jacobs

Perception of speech reflects optimal use of probabilistic speech cues Journal Article

In: Cognition, vol. 108, no. 3, pp. 804–809, 2008.

@article{Clayards2008,
title = {Perception of speech reflects optimal use of probabilistic speech cues},
author = {Meghan Clayards and Michael K. Tanenhaus and Richard N. Aslin and Robert A. Jacobs},
doi = {10.1016/j.cognition.2008.04.004},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
number = {3},
pages = {804--809},
abstract = {Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Thérèse Collins; Tobias Schicke; Brigitte Röder

Action goal selection and motor planning can be dissociated by tool use Journal Article

In: Cognition, vol. 109, no. 3, pp. 363–371, 2008.

@article{Collins2008,
title = {Action goal selection and motor planning can be dissociated by tool use},
author = {Thérèse Collins and Tobias Schicke and Brigitte Röder},
doi = {10.1016/j.cognition.2008.10.001},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {109},
number = {3},
pages = {363--371},
abstract = {The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

R. Contreras; Rachel Kolster; Henning U. Voss; Jamshid Ghajar; M. Suh; S. Bahar

Eye-target synchronization in mild traumatic brain-injured patients Journal Article

In: Journal of Biological Physics, vol. 34, no. 3-4, pp. 381–392, 2008.

@article{Contreras2008,
title = {Eye-target synchronization in mild traumatic brain-injured patients},
author = {R. Contreras and Rachel Kolster and Henning U. Voss and Jamshid Ghajar and M. Suh and S. Bahar},
doi = {10.1007/s10867-008-9092-1},
year = {2008},
date = {2008-01-01},
journal = {Journal of Biological Physics},
volume = {34},
number = {3-4},
pages = {381--392},
abstract = {Eye-target synchronization is critical for effective smooth pursuit of a moving visual target. We apply the nonlinear dynamical technique of stochastic-phase synchronization to human visual pursuit of a moving target, in both normal and mild traumatic brain-injured (mTBI) patients. We observe significant fatigue effects in all subject populations, in which subjects synchronize better with the target during the first half of the trial than in the second half. The fatigue effect differed, however, between the normal and the mTBI populations and between old and young subpopulations of each group. In some cases, the younger (≤40 years old) normal subjects performed better than mTBI subjects and also better than older (>40 years old) normal subjects. Our results, however, suggest that further studies will be necessary before a standard of "normal" smooth pursuit synchronization can be developed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Inger Montfoort; Josef N. Geest; Harm P. Slijper; Chris I. Zeeuw; Maarten A. Frens

Adaptation of the cervico- and vestibulo-ocular reflex in whiplash injury patients Journal Article

In: Journal of Neurotrauma, vol. 25, pp. 687–693, 2008.

@article{Montfoort2008,
title = {Adaptation of the cervico- and vestibulo-ocular reflex in whiplash injury patients},
author = {Inger Montfoort and Josef N. Geest and Harm P. Slijper and Chris I. Zeeuw and Maarten A. Frens},
doi = {10.1089/neu.2007.0314},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurotrauma},
volume = {25},
pages = {687--693},
abstract = {The aim of this study was to investigate the underlying mechanisms of the increased gains of the cervico-ocular reflex (COR) and the lack of synergy between the COR and the vestibulo-ocular reflex (VOR) that have been previously observed in patients with whiplash-associated disorders (WAD). Eye movements during COR or VOR stimulation were recorded in four different experiments. The effect of restricted neck motion and the relationship between muscle activity and COR gain was examined in healthy controls. The adaptive ability of the COR and the VOR was tested in WAD patients and healthy controls. Reduced neck mobility yielded an increase in COR gain. No correlation between COR gain and muscle activity was observed. Adaptation of both the COR and VOR was observed in healthy controls, but not in WAD patients. The increased COR gain of WAD patients may stem from a reduced neck mobility. The lack of adaptation of the two stabilization reflexes may result in a lack of synergy between them. These abnormalities may underlie several of the symptoms frequently observed in WAD, such as vertigo and dizziness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven

Cue validity effects in response preparation: A pupillometric study Journal Article

In: Brain Research, vol. 1196, pp. 94–102, 2008.

@article{Moresi2008,
title = {Cue validity effects in response preparation: A pupillometric study},
author = {Sofie Moresi and Jos J. Adam and Jons Rijcken and Pascal W. M. Van Gerven},
doi = {10.1016/j.brainres.2007.12.026},
year = {2008},
date = {2008-01-01},
journal = {Brain Research},
volume = {1196},
pages = {94--102},
abstract = {This study examined the effects of cue validity and cue difficulty on response preparation to provide a test of the Grouping Model [Adam, J.J., Hommel, B. and Umiltà, C., 2003. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognit. Psychol. 46(3), 302-58, Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II) automatic and effortful processes in response cuing. Vis. Cogn. 12(8), 1444-1473.]. We used the pupillary response to index the cognitive processing load during and after the preparatory interval (2 s). Twenty-two participants performed the finger-cuing tasks with valid (75%) and invalid (25%) cues. Results showed longer reaction times, more errors, and larger pupil dilations for invalid than valid cues. During the preparation interval, pupil dilation varied systematically with cue difficulty, with easy cues (specifying 2 fingers on 1 hand) showing less pupil dilation than difficult cues (specifying 2 fingers on 2 hands). After the preparation interval, this pattern of differential pupil dilation as a function of cue difficulty reversed for invalid cues, suggesting that cues which incorrectly specified fingers on one hand required more effortful reprogramming operations than cues which incorrectly specified fingers on two hands. These outcomes were consistent with predictions derived from the Grouping Model. Finally, all participants exhibited two distinct pupil dilation strategies: an "early" strategy in which the onset of the main pupil dilation was tied to onset of the cue, and a "late" strategy in which the onset of the main pupil dilation was tied to the onset of the target. Thus, whereas the early pupil dilation strategy showed a strong dilation during the preparation interval, the late pupil strategy showed a strong constriction. Interestingly, only the late onset pupil dilation strategy revealed the above reported sensitivity to cue difficulty, showing for the first time that the well-known pupil's sensitivity to task difficulty can also emerge when the pupil is constricting instead of dilating.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven; Harm Kuipers; Jelle Jolles

Pupil dilation in response preparation Journal Article

In: International Journal of Psychophysiology, vol. 67, no. 2, pp. 124–130, 2008.

@article{Moresi2008a,
title = {Pupil dilation in response preparation},
author = {Sofie Moresi and Jos J. Adam and Jons Rijcken and Pascal W. M. Van Gerven and Harm Kuipers and Jelle Jolles},
doi = {10.1016/j.ijpsycho.2007.10.011},
year = {2008},
date = {2008-01-01},
journal = {International Journal of Psychophysiology},
volume = {67},
number = {2},
pages = {124--130},
abstract = {This study examined changes in pupil size during response preparation in a finger-cuing task. Based on the Grouping Model of finger preparation [Adam, J.J., Hommel, B. and Umiltà, C., 2003b. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognitive Psychology. 46, (3), 302-358.; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II) automatic and effortful processes in response cuing. Visual Cognition. 12, (8), 1444-1473.], it was hypothesized that the selection and preparation of more difficult response sets would be accompanied by larger pupillary dilations. The results supported this prediction, thereby extending the validity of pupil size as a measure of cognitive load to the domain of response preparation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jane L. Morgan; Gus Elswijk; Antje S. Meyer

Extrafoveal processing of objects in a naming task: Evidence from word probe experiments Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 561–565, 2008.

@article{Morgan2008,
title = {Extrafoveal processing of objects in a naming task: Evidence from word probe experiments},
author = {Jane L. Morgan and Gus Elswijk and Antje S. Meyer},
doi = {10.3758/PBR.15.3.561},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {3},
pages = {561--565},
abstract = {In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Linda Mortensen; Antje S. Meyer; Glyn W. Humphreys

Speech planning during multiple-object naming: Effects of ageing Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 61, no. 8, pp. 1217–1238, 2008.

@article{Mortensen2008,
title = {Speech planning during multiple-object naming: Effects of ageing},
author = {Linda Mortensen and Antje S. Meyer and Glyn W. Humphreys},
doi = {10.1080/17470210701467912},
year = {2008},
date = {2008-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {61},
number = {8},
pages = {1217--1238},
abstract = {Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

S. Moshel; Ari Z. Zivotofsky; L. Jin-Rong; Ralf Engbert; Jürgen Kurths; Reinhold Kliegl; Shlomo Havlin

Persistence and phase synchronisation properties of fixational eye movements Journal Article

In: The European Physical Journal Special Topics, vol. 161, pp. 207–223, 2008.

@article{Moshel2008,
title = {Persistence and phase synchronisation properties of fixational eye movements},
author = {S. Moshel and Ari Z. Zivotofsky and L. Jin-Rong and Ralf Engbert and Jürgen Kurths and Reinhold Kliegl and Shlomo Havlin},
doi = {10.1140/epjst/e2008-00762-3},
year = {2008},
date = {2008-01-01},
journal = {The European Physical Journal Special Topics},
volume = {161},
pages = {207--223},
abstract = {When we fixate our gaze on a stable object, our eyes move continuously with extremely small involuntary and autonomic movements, that even we are unaware of during their occurrence. One of the roles of these fixational eye movements is to prevent the adaptation of the visual system to continuous illumination and inhibit fading of the image. These random, small movements are restricted at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation properties between both eyes are related to binocular coordination in order to provide stereopsis. We investigated the roles of different time scale behaviours, especially how they are expressed in the different spatial directions (vertical versus horizontal). We also tested the synchronisation between both eyes. Results show different scaling behaviour between horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour in both axes becomes similar. Our findings suggest that microsaccades enhance the persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye movement components. We found that the vertical-vertical components of right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may be due to the need for continuously moving the eyes in the horizontal plane in order to match the stereoscopic image for different viewing distances.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Brad C. Motter; Diglio A. Simoni

Changes in the functional visual field during search with and without eye movements Journal Article

In: Vision Research, vol. 48, pp. 2382–2393, 2008.

@article{Motter2008,
title = {Changes in the functional visual field during search with and without eye movements},
author = {Brad C. Motter and Diglio A. Simoni},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
pages = {2382--2393},
abstract = {The size of the functional visual field (FVF) is dynamic, changing with the context and attentive demand that each fixation brings as we move our eyes and head to explore the visual scene. Using performance measures of the FVF we show that during search conditions with eye movements, the FVF is small compared to the size of the FVF measured during search without eye movements. In all cases the size of the FVF is constrained by the density of distracting items. During search without eye movements the FVF expands with time; subjects have idiosyncratic spatial biases suggesting covert shifts of attention. For search within the constraints imposed by item density, the rate of item inspection is the same across all search conditions. Array set size effects are not apparent once stimulus density is taken into account, a result that is consistent with a spatial constraint for the FVF based on the cortical separation hypothesis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jason S. McCarley; Christopher Grant

State-trace analysis of the effects of a visual illusion on saccade amplitudes and perceptual judgments Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 1008–1014, 2008.

@article{McCarley2008,
title = {State-trace analysis of the effects of a visual illusion on saccade amplitudes and perceptual judgments},
author = {Jason S. McCarley and Christopher Grant},
doi = {10.3758/PBR.15.5.1008},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {5},
pages = {1008--1014},
abstract = {Visual illusions often appear to have a larger influence on subjective judgments than on visuomotor behavior. Although this effect has been taken as evidence for multiple estimates of stimulus size in the visual brain, dissociations between subjective judgments and visuomotor measures can frequently be reconciled with a single-estimate model. To circumvent this difficulty, we used state-trace analysis in a pair of experiments to examine the effects of the Müller-Lyer illusion on subjective length estimates, voluntary saccade amplitudes, and reflexive saccade amplitudes. All dependent measures were affected by the illusion. However, state-trace analyses revealed nonmonotonic relationships among all three variables, a pattern inconsistent with the possibility of a single underlying estimate of stimulus size.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Bob McMurray; Richard N. Aslin; Michael K. Tanenhaus; Michael J. Spivey; Dana Subik

Gradient sensitivity to within-category variation in words and syllables Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 6, pp. 1609–1631, 2008.

@article{McMurray2008,
title = {Gradient sensitivity to within-category variation in words and syllables},
author = {Bob McMurray and Richard N. Aslin and Michael K. Tanenhaus and Michael J. Spivey and Dana Subik},
doi = {10.1037/a0011747},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {34},
number = {6},
pages = {1609--1631},
abstract = {Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Bob McMurray; Meghan Clayards; Michael K. Tanenhaus; Richard N. Aslin

Tracking the time course of phonetic cue integration during spoken word recognition Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1064–1071, 2008.

@article{McMurray2008a,
title = {Tracking the time course of phonetic cue integration during spoken word recognition},
author = {Bob McMurray and Meghan Clayards and Michael K. Tanenhaus and Richard N. Aslin},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {6},
pages = {1064--1071},
abstract = {Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probability of eye movements to pictures of target and of competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they are available.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David Melcher

Dynamic, object-based remapping of visual features in trans-saccadic perception Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008.

@article{Melcher2008,
title = {Dynamic, object-based remapping of visual features in trans-saccadic perception},
author = {David Melcher},
doi = {10.1167/8.14.2},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--17},
abstract = {Saccadic eye movements can dramatically change the location in which an object is projected onto the retina. One mechanism that might potentially underlie the perception of stable objects, despite the occurrence of saccades, is the "remapping" of receptive fields around the time of saccadic eye movements. Here we examined two possible models of trans-saccadic remapping of visual features: (1) spatiotopic coordinates that remain constant across saccades or (2) an object-based remapping in retinal coordinates. We used form adaptation to test "object" and "space" based predictions for an adapter that changed spatial and/or retinal location due to eye movements, object motion or manual displacement using a computer mouse. The predictability and speed of the object motion was also manipulated. The main finding was that maximum transfer of the form aftereffect in retinal coordinates occurred when there was a saccade and when the object motion was attended and predictable. A small transfer was also found when observers moved the object across the screen using a computer mouse. The overall pattern of results is consistent with the theory of object-based remapping for salient stimuli. Thus, the active updating of the location and features of attended objects may play a role in perceptual stability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Andrea E. Martin; Brian McElree

A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis Journal Article

In: Journal of Memory and Language, vol. 58, no. 3, pp. 879–906, 2008.

@article{Martin2008,
title = {A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis},
author = {Andrea E. Martin and Brian McElree},
doi = {10.1016/j.jml.2007.06.010},
year = {2008},
date = {2008-01-01},
journal = {Journal of Memory and Language},
volume = {58},
number = {3},
pages = {879--906},
abstract = {Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed-accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3-5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Eric Matheron; Qing Yang; Thanh Thuan Lê; Zoï Kapoula

Effects of ocular dominance on the vertical vergence induced by a 2-diopter vertical prism during standing Journal Article

In: Neuroscience Letters, vol. 444, no. 2, pp. 176–180, 2008.

@article{Matheron2008,
title = {Effects of ocular dominance on the vertical vergence induced by a 2-diopter vertical prism during standing},
author = {Eric Matheron and Qing Yang and Thanh Thuan Lê and Zoï Kapoula},
doi = {10.1016/j.neulet.2008.08.025},
year = {2008},
date = {2008-01-01},
journal = {Neuroscience Letters},
volume = {444},
number = {2},
pages = {176--180},
abstract = {This study examined the eye movement responses to vertical disparity induced by a 2-diopter vertical prism base down while in standing position. Vertical vergence movements are known to be small requiring accurate measurement with the head stabilized, and was done with the EyeLink 2. The 2-diopter vertical prism, base down, was inserted in front of either the non-dominant eye (NDE) or dominant eye (DE) at 40 and 200 cm. The results showed that vertical vergence was stronger and excessive relative to the required value (i.e. 1.14°) when the prism was on the NDE for both distances, but more appropriate when the prism was on the DE. The results suggest that sensory disparity process and vertical vergence responses are modulated by eye dominance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Casimir J. H. Ludwig; Adam Ranson; Iain D. Gilchrist

Oculomotor capture by transient events: A comparison of abrupt onsets, offsets, motion, and flicker Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008.

@article{Ludwig2008,
title = {Oculomotor capture by transient events: A comparison of abrupt onsets, offsets, motion, and flicker},
author = {Casimir J. H. Ludwig and Adam Ranson and Iain D. Gilchrist},
doi = {10.1167/8.14.11},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--16},
abstract = {Attentional and oculomotor capture by some salient visual event gives insight into what types of dynamic signals the human orienting system is sensitive to. We examined the sensitivity of the saccadic eye movement system to 4 types of dynamic, but task-irrelevant, visual events: abrupt onset, abrupt offset, motion onset and flicker onset. We varied (1) the primary task (contrast vs. motion discrimination) and (2) the amount of prior knowledge of the location of the dynamic event. Interference from the irrelevant events was quantified using a discrimination threshold metric. When the primary task involved contrast discrimination, all four events disrupted performance approximately equally, including the sudden disappearance of an old object. However, when motion was the task-relevant dimension, abrupt onsets and offsets did not disrupt performance at all, but motion onset had a strong effect. Providing more spatial certainty to observers decreased the amount of direct oculomotor capture but nevertheless impaired performance. We conclude that oculomotor capture is predominantly contingent upon the channel the observer monitors in order to perform the primary visual task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Manon Mulckhuyse; Wieske Van Zoest; Jan Theeuwes

Capture of the eyes by relevant and irrelevant onsets Journal Article

In: Experimental Brain Research, vol. 186, no. 2, pp. 225–235, 2008.

@article{Mulckhuyse2008,
title = {Capture of the eyes by relevant and irrelevant onsets},
author = {Manon Mulckhuyse and Wieske Van Zoest and Jan Theeuwes},
doi = {10.1007/s00221-007-1226-3},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {186},
number = {2},
pages = {225--235},
abstract = {During early visual processing the eyes can be captured by salient visual information in the environment. Whether a salient stimulus captures the eyes in a purely automatic, bottom-up fashion or whether capture is contingent on task demands is still under debate. In the first experiment, we manipulated the relevance of a salient onset distractor. The onset distractor could either be similar or dissimilar to the target. Error saccade latency distributions showed that early in time, oculomotor capture was driven purely bottom-up irrespective of distractor similarity. Later in time, top-down information became available resulting in contingent capture. In the second experiment, we manipulated the saliency information at the target location. A salient onset stimulus could be presented either at the target or at a non-target location. The latency distributions of error and correct saccades had a similar time-course as those observed in the first experiment. Initially, the distributions overlapped but later in time task-relevant information decelerated the oculomotor system. The present findings reveal the interaction between bottom-up and top-down processes in oculomotor behavior. We conclude that the task relevance of a salient event is not crucial for capture of the eyes to occur. Moreover, task-relevant information may integrate with saliency information to initiate saccades, but only later in time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ikuya Murakami; Rumi Hisakata

The effects of eccentricity and retinal illuminance on the illusory motion seen in a stationary luminance gradient Journal Article

In: Vision Research, vol. 48, no. 19, pp. 1940–1948, 2008.

@article{Murakami2008,
title = {The effects of eccentricity and retinal illuminance on the illusory motion seen in a stationary luminance gradient},
author = {Ikuya Murakami and Rumi Hisakata},
doi = {10.1016/j.visres.2008.06.015},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {19},
pages = {1940--1948},
abstract = {Kitaoka recently reported a novel illusion named the Rotating Snakes [Kitaoka, A., & Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision, 15, 261-262], in which a stationary pattern appears to rotate constantly. In the first experiment, we attempted to quantify the anecdote that this illusion is better perceived in the periphery. The stimulus was a ring composed of stepwise luminance patterns and was presented in the left visual field. With increasing eccentricity up to 10-14 deg, the cancellation velocity required to establish perceptual stationarity increased. In the next experiment, we examined the effect of retinal illuminance. Interestingly, the cancellation velocity decreased as retinal illuminance was decreased. We also estimated the human temporal impulse response at some retinal illuminances by using the double-pulse method to confirm that the shape of the impulse response actually changes from biphasic to monophasic, which indicates that the transient processing system has weaker activities at lower illuminances. We conclude that some transient temporal processing system is necessary for the illusion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chie Nakatani; Cees Van Leeuwen

A pragmatic approach to multi-modality and non-normality in fixation duration studies of cognitive processes Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–12, 2008.

@article{Nakatani2008,
title = {A pragmatic approach to multi-modality and non-normality in fixation duration studies of cognitive processes},
author = {Chie Nakatani and Cees Van Leeuwen},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {2},
pages = {1--12},
abstract = {Interpreting eye-fixation durations in terms of cognitive processing load is complicated by the multimodality of their distribution. An important source of multimodality is the distinction between single and multiple fixations to the same object. Based on the distinction, we separated a log-transformed distribution made to an object in non-reading task. We could reasonably conclude that the separated distributions belong to the same, general logistic distribution, which has a finite population mean and variance. This allowed us to use the sample means as dependent variables in a parametric analysis. Six tasks were compared, which required different levels of post-perceptual processing. A no-task control condition was added to test for perceptual processing. Fixation durations differentiated task-specific perceptual, but not post-perceptual processing demands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Harold T. Nefs; J. M. Harris

Induced motion in depth and the effects of vergence eye movements Journal Article

In: Journal of Vision, vol. 8, no. 3, pp. 1–16, 2008.


@article{Nefs2008,
title = {Induced motion in depth and the effects of vergence eye movements},
author = {Harold T. Nefs and J. M. Harris},
doi = {10.1167/8.3.8},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {3},
pages = {1--16},
abstract = {Induced motion is the false impression that physically stationary objects move when in the presence of other objects that really move. In this study, we investigated this motion illusion in the depth dimension. We raised three related questions, as follows: (1) What cues in the stimulus are responsible for this motion illusion in depth? (2) Is the size of this illusion affected by vergence eye movements? And (3) are the effects of eye movements different for motion in depth and for motion in the frontoparallel plane? To answer these questions, we measured the point of subjective stationarity. Observers viewed an inducer target that oscillated in depth and a test target that was located directly above it. The test target moved in phase or out of phase with the inducer, but with a smaller amplitude. Observers had to indicate whether the test target and the inducer target moved in phase or out of phase with one another. They were asked to keep their eyes either on the test target or on the inducer. For motion in depth, created by binocular disparity and retinal size change or by binocular disparity alone, we found that when the eyes followed the inducer, subjective stationarity occurred at approximately 40-45% of the inducer's amplitude. When the eyes were kept fixated on the test target, the bias decreased tenfold to around 4%. When size change was the only cue to motion in depth, there was no illusory motion. When the eyes were kept on an inducer moving in the frontoparallel plane, induced motion was of the same order as for induced motion in depth, namely, approximately 44%. When the induced motion was in the frontoparallel plane, we found that perceived stationarity occurred at approximately 23% of inducer's amplitude when the eyes were kept on the test target.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mark B. Neider; Gregory J. Zelinsky

Exploring set size effects in scenes: Identifying the objects of search Journal Article

In: Visual Cognition, vol. 16, no. 1, pp. 1–10, 2008.


@article{Neider2008,
title = {Exploring set size effects in scenes: Identifying the objects of search},
author = {Mark B. Neider and Gregory J. Zelinsky},
doi = {10.1080/13506280701381691},
year = {2008},
date = {2008-01-01},
journal = {Visual Cognition},
volume = {16},
number = {1},
pages = {1--10},
abstract = {Traditional search paradigms utilize simple displays, allowing a precise determination of set size. However, objects in realistic scenes are largely uncountable, and typically visually and semantically complex. Can traditional conceptions of set size be applied to search in realistic scenes? Observers searched quasirealistic scenes for a tank target hidden among tree distractors varying in number and density. Search efficiency improved as trees were added to the display, a reverse set size effect. Eye movement analyses revealed that observers fixated individual trees when the set size was small, and the open regions between trees when the set size was large. Rather than a set size consisting of objectively countable objects, we interpret these data as evidence for a restricted functional set size consisting of idiosyncratically defined objects of search. Observers exploit low-level perceptual grouping processes and high-level semantic scene constraints to dynamically create objects that are appropriate to a given search task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Amy D. Lykins; Marta Meana; Gregory P. Strauss

Sex differences in visual attention to erotic and non-erotic stimuli Journal Article

In: Archives of Sexual Behavior, vol. 37, no. 2, pp. 219–228, 2008.


@article{Lykins2008,
title = {Sex differences in visual attention to erotic and non-erotic stimuli},
author = {Amy D. Lykins and Marta Meana and Gregory P. Strauss},
doi = {10.1007/s10508-007-9208-x},
year = {2008},
date = {2008-01-01},
journal = {Archives of Sexual Behavior},
volume = {37},
number = {2},
pages = {219--228},
abstract = {It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but evidenced in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Antonio F. Macedo; Michael D. Crossland; Gary S. Rubin

The effect of retinal image slip on peripheral visual acuity Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–11, 2008.


@article{Macedo2008,
title = {The effect of retinal image slip on peripheral visual acuity},
author = {Antonio F. Macedo and Michael D. Crossland and Gary S. Rubin},
doi = {10.1167/8.14.16},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--11},
abstract = {Retinal image slip promoted by fixational eye movements prevents image fading in central vision. However, in the periphery a higher amount of movement is necessary to prevent this fading. We assessed the effect of different levels of retinal image slip in peripheral vision by measuring peripheral visual acuity (VA), with and without crowding, while modulating retinal eccentricity. Gaze position was monitored throughout using an infrared eyetracker. The target was presented for up to 500 msec, either with no retinal image slip, with reduced retinal slip, or with increased retinal image slip. Without crowding, peripheral visual acuity improved with increased retinal image slip compared with the other two conditions. In contrast to the previous result, under crowded conditions, peripheral visual acuity decreased markedly with increased retinal image slip. Therefore, the effects of increased retinal image slip are different for simple (noncrowded) and more complex (crowded) visual tasks. These results provide further evidence for the importance of fixation stability on complex visual tasks when using the peripheral retina.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


James S. Magnuson; Michael K. Tanenhaus; Richard N. Aslin

Immediate effects of form-class constraints on spoken word recognition Journal Article

In: Cognition, vol. 108, no. 3, pp. 866–873, 2008.


@article{Magnuson2008,
title = {Immediate effects of form-class constraints on spoken word recognition},
author = {James S. Magnuson and Michael K. Tanenhaus and Richard N. Aslin},
doi = {10.1016/j.cognition.2008.06.005},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
number = {3},
pages = {866--873},
abstract = {In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


George L. Malcolm; Linda J. Lanyon; Andrew J. B. Fugard; Jason J. S. Barton

Scan patterns during the processing of facial expression versus identity: An exploration of task-driven and stimulus-driven effects Journal Article

In: Journal of Vision, vol. 8, no. 8, pp. 1–9, 2008.


@article{Malcolm2008,
title = {Scan patterns during the processing of facial expression versus identity: An exploration of task-driven and stimulus-driven effects},
author = {George L. Malcolm and Linda J. Lanyon and Andrew J. B. Fugard and Jason J. S. Barton},
doi = {10.1167/8.8.2},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {8},
pages = {1--9},
abstract = {Perceptual studies suggest that processing facial identity emphasizes upper-face information, whereas processing expressions of anger or happiness emphasizes the lower-face. The two goals of the present study were to determine (a) if the distributions of eye fixations reflect these upper/lower-face biases, and (b) whether this bias is task- or stimulus-driven. We presented a target face followed by a probe pair of morphed faces, neither of which was identical to the target. Subjects judged which of the pair was more similar to the target face while eye movements were recorded. In Experiment 1 the probe pair always differed from each other in both identity and expression on each trial. In one block subjects judged which probe face was more similar to the target face in identity, and in a second block subjects judged which probe face was more similar to the target face in expression. In Experiment 2 the two probe faces differed in either expression or identity, but not both. Subjects were not informed which dimension differed, but simply asked to judge which probe face was more similar to the target face. We found that subjects scanned the upper-face more than the lower-face during the identity task but the lower-face more than the upper-face during the expression task in Experiment 1 (task-driven effects), with significantly less variation in bias in Experiment 2 (stimulus-driven effects). We conclude that fixations correlate with regional variations of diagnostic information in different processing tasks, but that these reflect top-down task-driven guidance of information acquisition more than stimulus-driven effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Antje S. Meyer; Marc Ouellet; Christine Häcker

Parallel processing of objects in a naming task Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 982–987, 2008.


@article{Meyer2008,
title = {Parallel processing of objects in a naming task},
author = {Antje S. Meyer and Marc Ouellet and Christine Häcker},
doi = {10.1037/0278-7393.34.4.982},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {34},
number = {4},
pages = {982--987},
abstract = {The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Areh Mikulić; Michael C. Dorris

Temporal and spatial allocation of motor preparation during a mixed-strategy game Journal Article

In: Journal of Neurophysiology, vol. 100, no. 4, pp. 2101–2108, 2008.


@article{Mikulic2008,
title = {Temporal and spatial allocation of motor preparation during a mixed-strategy game},
author = {Areh Mikulić and Michael C. Dorris},
doi = {10.1152/jn.90703.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {4},
pages = {2101--2108},
abstract = {Adopting a mixed response strategy in competitive situations can prevent opponents from exploiting predictable play. What drives stochastic action selection is unclear given that choice patterns suggest that, on average, players are indifferent to available options during mixed-strategy equilibria. To gain insight into this stochastic selection process, we examined how motor preparation was allocated during a mixed-strategy game. If selection processes on each trial reflect a global indifference between options, then there should be no bias in motor preparation (unbiased preparation hypothesis). If, however, differences exist in the desirability of options on each trial then motor preparation should be biased toward the preferred option (biased preparation hypothesis). We tested between these alternatives by examining how saccade preparation was allocated as human subjects competed against an adaptive computer opponent in an oculomotor version of the game "matching pennies." Subjects were free to choose between two visual targets using a saccadic eye movement. Saccade preparation was probed by occasionally flashing a visual distractor at a range of times preceding target presentation. The probability that a distractor would evoke a saccade error, and when it failed to do so, the probability of choosing each of the subsequent targets quantified the temporal and spatial evolution of saccade preparation, respectively. Our results show that saccade preparation became increasingly biased as the time of target presentation approached. Specifically, the spatial locus to which saccade preparation was directed varied from trial to trial, and its time course depended on task timing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


William L. Miller; Vincenzo Maffei; Gianfranco Bosco; Marco Iosa; Myrka Zago; Emiliano Macaluso; Francesco Lacquaniti

Vestibular nuclei and cerebellum put visual gravitational motion in context Journal Article

In: Journal of Neurophysiology, vol. 99, no. 4, pp. 1969–1982, 2008.


@article{Miller2008,
title = {Vestibular nuclei and cerebellum put visual gravitational motion in context},
author = {William L. Miller and Vincenzo Maffei and Gianfranco Bosco and Marco Iosa and Myrka Zago and Emiliano Macaluso and Francesco Lacquaniti},
doi = {10.1152/jn.00889.2007},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {99},
number = {4},
pages = {1969--1982},
abstract = {Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


D. A. Mills; Teresa C. Frohman; Scott L. Davis; A. R. Salter; Samuel M. McClure; I. Beatty; A. Shah; S. Galetta; E. Eggenberger; D. S. Zee; Elliot M. Frohman

Break in binocular fusion during head turning in MS patients with INO Journal Article

In: Neurology, vol. 71, pp. 457–460, 2008.


@article{Mills2008,
title = {Break in binocular fusion during head turning in MS patients with INO},
author = {D. A. Mills and Teresa C. Frohman and Scott L. Davis and A. R. Salter and Samuel M. McClure and I. Beatty and A. Shah and S. Galetta and E. Eggenberger and D. S. Zee and Elliot M. Frohman},
doi = {10.1212/NXI.0000000000000061},
year = {2008},
date = {2008-01-01},
journal = {Neurology},
volume = {71},
pages = {457--460},
abstract = {Internuclear ophthalmoparesis (INO) is the most common eye movement abnormality observed in patients with multiple sclerosis (MS).1 While most MS patients with INO have no or little misalignment in the straight ahead position, significant disconjugacy occurs during horizontal saccades or with horizontal (yaw axis) head turning.2 A break in binocular fusion can produce a loss of stereopsis and depth perception, transient diplopia (perceived as a double image or visual blur), oscillopsia, and disorientation.2 The purpose of this investigation was to confirm the hypothesis that a break in binocular fusion occurs in MS patients with INO during head or body turning, and that the magnitude of disconjugacy will be directly correlated with the severity of this eye movement syndrome.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Don C. Mitchell; Xingjia Shen; Matthew J. Green; Timothy L. Hodgson

Accounting for regressive eye-movements in models of sentence processing: A reappraisal of the Selective Reanalysis hypothesis Journal Article

In: Journal of Memory and Language, vol. 59, no. 3, pp. 266–293, 2008.

@article{Mitchell2008,
title = {Accounting for regressive eye-movements in models of sentence processing: A reappraisal of the Selective Reanalysis hypothesis},
author = {Don C. Mitchell and Xingjia Shen and Matthew J. Green and Timothy L. Hodgson},
doi = {10.1016/j.jml.2008.06.002},
year = {2008},
date = {2008-01-01},
journal = {Journal of Memory and Language},
volume = {59},
number = {3},
pages = {266--293},
abstract = {When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective Reanalysis hypothesis [Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178-210]. Within such frameworks it is assumed that the selection of saccadic landing-sites is linguistically supervised. As an alternative to this proposal, we consider the possibility (dubbed the Time Out hypothesis) that regression control is partly decoupled from linguistic operations and that landing-sites are instead selected on the basis of low-level spatial properties such as their proximity to the point from which the regressive saccade was launched. Two eye-tracking experiments were conducted to compare the explanatory potential of these two accounts. Experiment 1 manipulated the formatting of linguistically identical sentences and showed, contrary to purely linguistic supervision, that the landing site of the first regression from a critical word was reliably influenced by the physical layout of the text. Experiment 2 used a fixed physical format but manipulated the position in the display at which reanalysis-relevant material was located. Here the results showed a highly reliable linguistic influence on the overall distribution of regression landing sites (though with few effects being apparent on the very first regression). These results are interpreted as reflecting mutually exclusive forms of regression control with fixation sequences being influenced both by spatially constrained, partially decoupled supervision systems as well as by some kind of linguistic guidance. 
The findings are discussed in relation to existing computational models of eye-movements in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Peter Janssen; Siddharth Srivastava; Sien Ombelet; Guy A. Orban

Coding of shape and position in macaque lateral intraparietal area Journal Article

In: Journal of Neuroscience, vol. 28, no. 26, pp. 6679–6690, 2008.

@article{Janssen2008,
title = {Coding of shape and position in macaque lateral intraparietal area},
author = {Peter Janssen and Siddharth Srivastava and Sien Ombelet and Guy A. Orban},
doi = {10.1523/JNEUROSCI.0499-08.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neuroscience},
volume = {28},
number = {26},
pages = {6679--6690},
abstract = {The analysis of object shape is critical for both object recognition and grasping. Areas in the intraparietal sulcus of the rhesus monkey are important for the visuomotor transformations underlying actions directed toward objects. The lateral intraparietal (LIP) area has strong anatomical connections with the anterior intraparietal area, which is known to control the shaping of the hand during grasping, and LIP neurons can respond selectively to simple two-dimensional shapes. Here we investigate the shape representation in area LIP of awake rhesus monkeys. Specifically, we determined to what extent LIP neurons are tuned to shape dimensions known to be relevant for grasping and assessed the invariance of their shape preferences with regard to changes in stimulus size and position in the receptive field. Most LIP neurons proved to be significantly tuned to multiple shape dimensions. The population of LIP neurons that were tested showed barely significant size invariance. Position invariance was present in a minority of the neurons tested. Many LIP neurons displayed spurious shape selectivity arising from accidental interactions between the stimulus and the receptive field. We observed pronounced differences in the receptive field profiles determined by presenting two different shapes. Almost all LIP neurons showed spatially selective saccadic activity, but the receptive field for saccades did not always correspond to the receptive field as determined using shapes. Our results demonstrate that a subpopulation of LIP neurons encodes stimulus shape. Furthermore, the shape representation in the dorsal visual stream appears to differ radically from the known representation of shape in the ventral visual stream.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Wolfgang Jaschinski; Stephanie Jainta; Jörg Hoormann

Comparison of shutter glasses and mirror stereoscope for measuring dynamic and static vergence Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–7, 2008.

@article{Jaschinski2008,
title = {Comparison of shutter glasses and mirror stereoscope for measuring dynamic and static vergence},
author = {Wolfgang Jaschinski and Stephanie Jainta and Jörg Hoormann},
doi = {10.16910/jemr.1.2.5},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {2},
pages = {1--7},
abstract = {Vergence eye movement recordings in response to disparity step stimuli require presenting different stimuli to the two eyes. The traditional method is a mirror stereoscope. Shutter glasses are more convenient, but have disadvantages such as limited repetition rate, residual cross talk, and reduced luminance. Therefore, we compared both techniques measuring (1) dynamic disparity step responses for stimuli of 1 and 3 deg and (2) fixation disparity, the static vergence error. Shutter glasses and mirror stereoscope gave very similar dynamic responses with correlations of about 0.95 for the objectively measured vergence velocity and for the response amplitude reached 400 ms after the step stimulus (measured objectively with eye movement recordings and subjectively with dichoptic nonius lines). Both techniques also provided similar amounts of fixation disparity, tested with dichoptic nonius lines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Manon W. Jones; Mateo Obregón; M. Louise Kelly; Holly P. Branigan

Elucidating the component processes involved in dyslexic and non-dyslexic reading fluency: An eye-tracking study Journal Article

In: Cognition, vol. 109, no. 3, pp. 389–407, 2008.

@article{Jones2008,
title = {Elucidating the component processes involved in dyslexic and non-dyslexic reading fluency: An eye-tracking study},
author = {Manon W. Jones and Mateo Obregón and M. Louise Kelly and Holly P. Branigan},
doi = {10.1016/j.cognition.2008.10.005},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {109},
number = {3},
pages = {389--407},
publisher = {Elsevier B.V.},
abstract = {The relationship between rapid automatized naming (RAN) and reading fluency is well documented (see Wolf, M. & Bowers, P.G. (1999). The double-deficit hypothesis for the developmental dyslexias. Journal of Educational Psychology, 91(3), 415-438, for a review), but little is known about which component processes are important in RAN, and why developmental dyslexics show longer latencies on these tasks. Researchers disagree as to whether these delays are caused by impaired phonological processing or whether extra-phonological processes also play a role (e.g., Clarke, P., Hulme, C., & Snowling, M. (2005). Individual differences in RAN and reading: A response timing analysis. Journal of Research in Reading, 28(2), 73-86; Wolf, M., Bowers, P.G., & Biddle, K. (2000). Naming-speed processes, timing, and reading: A conceptual review. Journal of learning disabilities, 33(4), 387-407). We conducted an eye-tracking study that manipulated phonological and visual information (as representative of extra-phonological processes) in RAN. Results from linear mixed (LME) effects analyses showed that both phonological and visual processes influence naming-speed for both dyslexic and non-dyslexic groups, but the influence on dyslexic readers is greater. Moreover, dyslexic readers' difficulties in these domains primarily emerge in a measure that explicitly includes the production phase of naming. This study elucidates processes underpinning RAN performance in non-dyslexic readers and pinpoints areas of difficulty for dyslexic readers. We discuss these findings with reference to phonological and extra-phonological hypotheses of naming-speed deficits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jennifer J. Heisz; David I. Shore

More efficient scanning for familiar faces Journal Article

In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008.

@article{Heisz2008,
title = {More efficient scanning for familiar faces},
author = {Jennifer J. Heisz and David I. Shore},
doi = {10.1167/8.1.9},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {1},
pages = {1--10},
abstract = {The present study reveals changes in eye movement patterns as newly learned faces become more familiar. Observers received multiple exposures to newly learned faces over four consecutive days. Recall tasks were performed on all 4 days, and a recognition task was performed on the fourth day. Eye movement behavior was compared across facial exposure and task type. Overall, the eyes were viewed for longer and more often than any other facial region, regardless of face familiarity. As a face became more familiar, observers made fewer fixations during recall and recognition. With increased exposure, observers sampled more from the eyes and sampled less from the nose, mouth, forehead, chin, and cheek regions. Interestingly, this change in scanning behavior was only observed for recall tasks, but not for recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky

Influences of dwell time and cursor control on the performance in gaze driven typing Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 1, pp. 1–8, 2008.

@article{Helmert2008,
title = {Influences of dwell time and cursor control on the performance in gaze driven typing},
author = {Jens R. Helmert and Sebastian Pannasch and Boris M. Velichkovsky},
doi = {10.16910/jemr.2.4.3},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {1},
pages = {1--8},
abstract = {In gaze controlled computer interfaces the dwell time is often used as selection criterion. But this solution comes along with several problems, especially in the temporal domain: Eye movement studies on scene perception could demonstrate that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch problem. Moreover, the feedback of the actual own eye position has not yet been addressed to systematic studies in the context of usability in gaze based computer interaction. We present research on the usability of a simple eye typing set up. Different dwell time and eye position feedback configurations were tested. Our results indicate that smoothing raw eye position and temporal delays in visual feedback enhance the system's functionality and usability. Best overall performance was obtained with a dwell time of 500 ms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John M. Henderson; Graham L. Pierce

Eye movements during scene viewing: Evidence for mixed control of fixation durations Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 566–573, 2008.

@article{Henderson2008,
title = {Eye movements during scene viewing: Evidence for mixed control of fixation durations},
author = {John M. Henderson and Graham L. Pierce},
doi = {10.3758/PBR.15.3.566},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {3},
pages = {566--573},
abstract = {Recent behavioral and computational research on eye movement control during scene viewing has focused on where the eyes move. However, fixations also differ in their durations, and when the eyes move may be another important indicator of perceptual and cognitive activity. Here we used a scene onset delay paradigm to investigate the degree to which individual fixation durations are under direct moment-to-moment control of the viewer's current visual scene. During saccades just prior to critical fixations, the scene was removed from view so that when the eyes landed, no scene was present. Following a manipulated delay period, the scene was restored to view. We found that one population of fixations was under the direct control of the current scene, increasing in duration as delay increased. A second population of fixations was relatively constant across delay. The pattern of data did not change whether delay duration was random or blocked, suggesting that the effects were not under the strategic control of the viewer. The results support a mixed control model in which the durations of some fixations proceed regardless of scene presence, whereas others are under the direct moment-to-moment control of ongoing scene analysis.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Teresa D. Hernandez; Carmel A. Levitan; Martin S. Banks; Clifton M. Schor

How does saccade adaptation affect visual perception? Journal Article

In: Journal of Vision, vol. 8, no. 8, pp. 1–16, 2008.

@article{Hernandez2008,
title = {How does saccade adaptation affect visual perception?},
author = {Teresa D. Hernandez and Carmel A. Levitan and Martin S. Banks and Clifton M. Schor},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {8},
pages = {1--16},
abstract = {Three signals are used to visually localize targets and stimulate saccades: (1) retinal location signals for intended saccade amplitude, (2) sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, and (3) estimates of eye position from extra-retinal signals. We investigated effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target suggesting that most saccade adaptation was caused by changes in the SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal location signals and 2/3 by changes in the SMT. The sustained hand-pointing task indicated that estimates of eye position adapted inversely with changes of the SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors with a small retinal component in the lengthening condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Lee Hogarth; Anthony Dickinson; Alison Austin; Craig Brown; Theodora Duka

Attention and expectation in human predictive learning: The role of uncertainty Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1658–1668, 2008.

@article{Hogarth2008,
title = {Attention and expectation in human predictive learning: The role of uncertainty},
author = {Lee Hogarth and Anthony Dickinson and Alison Austin and Craig Brown and Theodora Duka},
doi = {10.1080/17470210701643439},
year = {2008},
date = {2008-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {61},
number = {11},
pages = {1658--1668},
abstract = {Three localized, visual pattern stimuli were trained as predictive signals of auditory outcomes. One signal partially predicted an aversive noise in Experiment 1 and a neutral tone in Experiment 2, whereas the other signals consistently predicted either the occurrence or absence of the noise. The expectation of the noise was measured during each signal presentation, and only participants for whom this expectation demonstrated contingency knowledge showed differential attention to the signals. Importantly, when attention was measured by visual fixations, the contingency-aware group attended more to the partially predictive signal than to the consistent predictors in both experiments. This profile of visual attention supports the Pearce and Hall (1980) theory of the role of attention in associative learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Lee Hogarth; Anthony Dickinson; Molly Janowski; Aleksandra Nikitina; Theodora Duka

The role of attentional bias in mediating human drug-seeking behaviour Journal Article

In: Psychopharmacology, vol. 201, no. 1, pp. 29–41, 2008.

@article{Hogarth2008a,
title = {The role of attentional bias in mediating human drug-seeking behaviour},
author = {Lee Hogarth and Anthony Dickinson and Molly Janowski and Aleksandra Nikitina and Theodora Duka},
doi = {10.1007/s00213-008-1244-2},
year = {2008},
date = {2008-01-01},
journal = {Psychopharmacology},
volume = {201},
number = {1},
pages = {29--41},
abstract = {RATIONALE: The attentional bias for drug cues is believed to be a causal cognitive process mediating human drug seeking and relapse. OBJECTIVES, METHODS AND RESULTS: To test this claim, we trained smokers on a tobacco conditioning procedure in which the conditioned stimulus (or S+) acquired parallel control of an attentional bias (measured with an eye tracker), tobacco expectancy and instrumental tobacco-seeking behaviour. Although this correlation between measures may be regarded as consistent with the claim that the attentional bias for the S+ mediated tobacco seeking, when a secondary task was added in the test phase, the attentional bias for the S+ was abolished, yet the control of tobacco expectancy and tobacco seeking remained intact. CONCLUSIONS: This dissociation suggests that the attentional bias for drug cues is not necessary for the control that drug cues exert over drug-seeking behaviour. The question raised by these data is what function does the attentional bias serve if it does not mediate drug seeking?},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Linus Holm; Johan Eriksson; Linus Andersson

Looking as if you know: Systematic object inspection precedes object recognition Journal Article

In: Journal of Vision, vol. 8, no. 4, pp. 1–7, 2008.

@article{Holm2008,
title = {Looking as if you know: Systematic object inspection precedes object recognition},
author = {Linus Holm and Johan Eriksson and Linus Andersson},
doi = {10.1167/8.4.14},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {4},
pages = {1--7},
abstract = {Sometimes we seem to look at the very object we are searching for, without consciously seeing it. How do we select object relevant information before we become aware of the object? We addressed this question in two recognition experiments involving pictures of fragmented objects. In Experiment 1, participants preferred to look at the target object rather than a control region 25 fixations prior to explicit recognition. Furthermore, participants inspected the target as if they had identified it around 9 fixations prior to explicit recognition. In Experiment 2, we investigated the influence of semantic knowledge in guiding object inspection prior to explicit recognition. Consistently, more specific knowledge about target identity made participants scan the fragmented stimulus more efficiently. For instance, non-target regions were rejected faster when participants knew the target object's name. Both experiments showed that participants were looking at the objects as if they knew them before they became aware of their identity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

L. Elliot Hong; Kathleen A. Turano; Hugh B. O'Neill; Lei Hao; Ikwunga Wonodi; Robert P. McMahon; Amie Elliott; Gunvant K. Thaker

Refining the predictive pursuit endophenotype in schizophrenia. Journal Article

In: Biological Psychiatry, vol. 63, no. 5, pp. 458–464, 2008.

@article{Hong2008,
title = {Refining the predictive pursuit endophenotype in schizophrenia.},
author = {L. Elliot Hong and Kathleen A. Turano and Hugh B. O'Neill and Lei Hao and Ikwunga Wonodi and Robert P. McMahon and Amie Elliott and Gunvant K. Thaker},
doi = {10.1016/j.biopsych.2007.06.004},
year = {2008},
date = {2008-01-01},
journal = {Biological Psychiatry},
volume = {63},
number = {5},
pages = {458--464},
abstract = {Background: To utilize fully a schizophrenia endophenotype in gene search and subsequent neurobiological studies, it is critical that the precise underlying physiologic deficit is identified. Abnormality in smooth pursuit eye movements is one of the endophenotypes of schizophrenia. The precise nature of the abnormality is unknown. Previous work has shown a reduced predictive pursuit response to a briefly masked (i.e., invisible) moving object in schizophrenia. However, the overt awareness of target removal can confound the measurement. Methods: This study employed a novel method that covertly stabilized the moving target image onto the fovea. The foveal stabilization was implemented after the target on a monitor had oscillated at least for one cycle and near the change of direction when the eye velocity momentarily reached zero. Thus, the subsequent pursuit eye movements were completely predictive and internally driven. Eye velocity during this foveally stabilized smooth pursuit was compared among schizophrenia patients (n = 45), their unaffected first-degree relatives (n = 42), and healthy comparison subjects (n = 22). Results: Schizophrenia patients and their unaffected relatives performed similarly and both had substantially reduced predictive pursuit acceleration and velocity under the foveally stabilized condition. Conclusions: These findings show that inability to maintain internal representation of the target motion or integration of such information into a predictive response may be the specific brain deficit indexed by the smooth pursuit endophenotype in schizophrenia. Similar performance between patients and unaffected relatives suggests that the refined predictive pursuit measure may index a less complex genetic origin of the eye-tracking deficits in schizophrenia families.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jörg Hoormann; Stephanie Jainta; Wolfgang Jaschinski

The effect of calibration errors on the accuracy of the eye movement recordings Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–7, 2008.

@article{Hoormann2008,
title = {The effect of calibration errors on the accuracy of the eye movement recordings},
author = {Jörg Hoormann and Stephanie Jainta and Wolfgang Jaschinski},
doi = {10.16910/jemr.1.2.3},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {2},
pages = {1--7},
abstract = {For calibrating eye movement recordings, a regression between spatially defined calibration points and corresponding measured raw data is performed. Based on this regression, a confidence interval (CI) of the actually measured eye position can be calculated in order to quantify the measurement error introduced by inaccurate calibration coefficients. For calculating this CI, a standard deviation (SD) - depending on the calibration quality and the design of the calibration procedure - is needed. Examples of binocular recordings with separate monocular calibrations illustrate that the SD is almost independent of the number and spatial separation between the calibration points – even though the latter was expected from theoretical simulation. Our simulations and recordings demonstrate that the SD depends critically on residuals at certain calibration points, thus robust regressions are suggested.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

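The calibration scheme this abstract describes — a least-squares regression from raw tracker output to known calibration positions, with a residual-based confidence interval for any subsequently measured eye position — can be sketched as follows. This is an illustrative sketch only, not the authors' code; the simple linear (degree-1) model, the function names, and the normal-approximation z = 1.96 are assumptions:

```python
import numpy as np

def calibrate(raw, deg_pos):
    """Fit degrees = a + b * raw by least squares; return coefficients
    and the residual standard deviation (the SD the abstract refers to)."""
    n = len(raw)
    b, a = np.polyfit(raw, deg_pos, 1)          # slope, intercept
    resid = deg_pos - (a + b * raw)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual SD, n-2 dof
    return a, b, s

def prediction_interval(raw, deg_pos, x0, z=1.96):
    """Approximate CI for the eye position implied by a new raw sample x0."""
    a, b, s = calibrate(raw, deg_pos)
    n = len(raw)
    xbar = np.mean(raw)
    sxx = np.sum((raw - xbar) ** 2)
    # Standard prediction-error formula for simple linear regression.
    se = s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    y0 = a + b * x0
    return y0 - z * se, y0 + z * se
```

As the abstract notes, large residuals at individual calibration points inflate `s` directly, which is why a robust regression (e.g., down-weighting outlying calibration points) is preferable to plain least squares here.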
Fred H. Hamker; Marc Zirnsak; Markus Lappe

About the influence of post-saccadic mechanisms for visual stability on peri-saccadic compression of object location Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–13, 2008.

@article{Hamker2008,
title = {About the influence of post-saccadic mechanisms for visual stability on peri-saccadic compression of object location},
author = {Fred H. Hamker and Marc Zirnsak and Markus Lappe},
doi = {10.1167/8.14.1},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--13},
abstract = {Peri-saccadic perception experiments have revealed a multitude of mislocalization phenomena. For instance, a briefly flashed stimulus is perceived closer to the saccade target, whereas a displacement of the saccade target goes usually unnoticeable. This latter saccadic suppression of displacement has been explained by a built-in characteristic of the perceptual system: the assumption that during a saccade, the environment remains stable. We explored whether the mislocalization of a briefly flashed stimulus toward the saccade target also grounds in the built-in assumption of a stable environment. If the mislocalization of a peri-saccadically flashed stimulus originates from a post-saccadic alignment process, an additional location marker at the position of the upcoming flash should counteract compression. Alternatively, compression might be the result of peri-saccadic attentional phenomena. In this case, mislocalization should occur even if the position of the flashed stimulus is marked. When subjects were asked about their perceived location, they mislocalized the stimulus toward the saccade target, even though they were fully aware of the correct stimulus location. Thus, our results suggest that the uncertainty about the location of a flashed stimulus is not inherently relevant for compression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Janet Hui-wen Hsiao; Garrison W. Cottrell

Two fixations suffice in face recognition Journal Article

In: Psychological Science, vol. 19, no. 10, pp. 998–1006, 2008.

@article{Hsiao2008,
title = {Two fixations suffice in face recognition},
author = {Janet Hui-wen Hsiao and Garrison W. Cottrell},
doi = {10.2139/ssrn.2836643},
year = {2008},
date = {2008-01-01},
journal = {Psychological Science},
volume = {19},
number = {10},
pages = {998--1006},
abstract = {It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xu Huang; Jin Jing; Xiao-bing Zou; Meng-Long Wang; Xiu-Hong Li; Ai-Hua Lin

Eye movements characteristics of Chinese dyslexic children in picture searching Journal Article

In: Chinese Medical Journal, vol. 121, no. 17, pp. 1617–1621, 2008.

@article{Huang2008,
title = {Eye movements characteristics of Chinese dyslexic children in picture searching},
author = {Xu Huang and Jin Jing and Xiao-bing Zou and Meng-Long Wang and Xiu-Hong Li and Ai-Hua Lin},
doi = {10.1083/jcb.200506199},
year = {2008},
date = {2008-01-01},
journal = {Chinese Medical Journal},
volume = {121},
number = {17},
pages = {1617--1621},
abstract = {Background: Reading Chinese, a kind of ideogram, relies more on visual cognition. The visuospatial cognitive deficit of Chinese dyslexia is an interesting topic that has received much attention. The purpose of the current research was to explore the visuospatial cognitive characteristics of Chinese dyslexic children by studying their eye movements via a picture searching test. Methods: According to the diagnostic criteria defined by ICD-10, twenty-eight dyslexic children (mean age (10.12±1.42) years) were enrolled from the Clinic of Children Behavioral Disorder in the third affiliated hospital of Sun Yat-sen University. And 28 normally reading children (mean age (10.06±1.29) years), 1:1 matched by age, sex, grade and family condition were chosen from an elementary school in Guangzhou as a control group. Four groups of pictures (cock, accident, canyon, meditate) from the Picture Vocabulary Test were chosen as eye movement experiment targets. All the subjects carried out the picture searching task and their eye movement data were recorded by an EyeLink II high-speed eye tracker. The duration time, average fixation duration, average saccade amplitude, fixation counts and saccade counts were compared between the two groups of children. Results: The dyslexic children had longer total fixation duration and average fixation duration (F=7.711, P <0.01; F=4.520, P <0.05), more fixation counts and saccade counts (F=7.498, P <0.01; F=11.040, P <0.01), and a smaller average saccade amplitude (F=29.743, P <0.01) compared with controls. But their performance in the picture vocabulary test was the same as that of the control group. The eye movement indexes were affected by the difficulty of the pictures and words; all eye movement indexes, except saccade amplitude, had a significant difference within groups (P <0.05). Conclusions: Chinese dyslexic children have abnormal eye movements in picture searching, applying slow fixations, more fixations, and small and frequent saccades. Their abnormal eye movement mode reflects the poor ability and strategy of visual information processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Wendy E. Huddleston; Edgar A. DeYoe

The representation of spatial attention in human parietal cortex dynamically modulates with performance Journal Article

In: Cerebral Cortex, vol. 18, no. 6, pp. 1272–1280, 2008.

@article{Huddleston2008,
title = {The representation of spatial attention in human parietal cortex dynamically modulates with performance},
author = {Wendy E. Huddleston and Edgar A. DeYoe},
doi = {10.1093/cercor/bhm158},
year = {2008},
date = {2008-01-01},
journal = {Cerebral Cortex},
volume = {18},
number = {6},
pages = {1272--1280},
abstract = {The control and allocation of attention is an essential, ubiquitous neural process that gates our awareness of objects and events in the environment. Neural representations of the locus of spatial attention have been previously demonstrated in parietal cortex. However, the behavioral relevance of these neural representations is not known. While undergoing functional magnetic resonance imaging, subjects performed a covert spatial attention task that yielded a wide range of performance values. Voxels in parietal cortex selective for attended target location also dynamically modulated, becoming more or less responsive as performance levels changed. Surprisingly, this relationship was not linear. Responses peaked at intermediate performance levels and dropped both when performance was very high and when it was very low. Such dynamic modulation may represent a mechanism for organizing neural control signals according to behavioral task demands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Falk Huettig; Robert J. Hartsuiker

When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production Journal Article

In: Memory and Cognition, vol. 36, no. 2, pp. 341–360, 2008.

@article{Huettig2008,
title = {When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production},
author = {Falk Huettig and Robert J. Hartsuiker},
doi = {10.3758/MC.36.2.341},
year = {2008},
date = {2008-01-01},
journal = {Memory and Cognition},
volume = {36},
number = {2},
pages = {341--360},
abstract = {Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 x 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jesse A. Harris; Liina Pylkkänen; Brian McElree; Steven Frisson

The cost of question concealment: Eye-tracking and MEG evidence Journal Article

In: Brain and Language, vol. 107, no. 1, pp. 44–61, 2008.

@article{Harris2008,
title = {The cost of question concealment: Eye-tracking and MEG evidence},
author = {Jesse A. Harris and Liina Pylkkänen and Brian McElree and Steven Frisson},
doi = {10.1016/j.bandl.2007.09.001},
year = {2008},
date = {2008-01-01},
journal = {Brain and Language},
volume = {107},
number = {1},
pages = {44--61},
abstract = {Although natural language appears to be largely compositional, the meanings of certain expressions cannot be straightforwardly recovered from the meanings of their parts. This study examined the online processing of one such class of expressions: concealed questions, in which the meaning of a complex noun phrase (the proof of the theorem) shifts to a covert question (what the proof of the theorem is) when mandated by a sub-class of question-selecting verbs (e.g., guess). Previous behavioral and magnetoencephalographic (MEG) studies have reported a cost associated with converting an entity denotation to an event. Our study tested whether both types of meaning-shift affect the same computational resources by examining the effects elicited by concealed questions in eye-tracking and MEG. Experiment 1 found evidence from eye-movements that verbs requiring the concealed question interpretation require more processing time than verbs that do not support a shift in meaning. Experiment 2 localized the cost of the concealed question interpretation in the left posterior temporal region, an area distinct from that affected by complement coercion. Experiment 3 presented the critical verbs in isolation and found no posterior temporal effect, confirming that the effect of Experiment 2 reflected sentential, and not lexical-level, processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Benjamin Y. Hayden; Sarah R. Heilbronner; Amrita C. Nair; Michael L. Platt

Cognitive influences on risk-seeking by rhesus macaques Journal Article

In: Judgment and Decision Making, vol. 3, no. 5, pp. 389–395, 2008.

@article{Hayden2008,
title = {Cognitive influences on risk-seeking by rhesus macaques},
author = {Benjamin Y. Hayden and Sarah R. Heilbronner and Amrita C. Nair and Michael L. Platt},
year = {2008},
date = {2008-01-01},
journal = {Judgment and Decision Making},
volume = {3},
number = {5},
pages = {389--395},
abstract = {Humans and other animals are idiosyncratically sensitive to risk, either preferring or avoiding options having the same value but differing in uncertainty. Many explanations for risk sensitivity rely on the non-linear shape of a hypothesized utility curve. Because such models do not place any importance on uncertainty per se, utility curve-based accounts predict indifference between risky and riskless options that offer the same distribution of rewards. Here we show that monkeys strongly prefer uncertain gambles to alternating rewards with the same payoffs, demonstrating that uncertainty itself contributes to the appeal of risky options. Based on prior observations, we hypothesized that the appeal of the risky option is enhanced by the salience of the potential jackpot. To test this, we subtly manipulated payoffs in a second gambling task. We found that monkeys are more sensitive to small changes in the size of the large reward than to equivalent changes in the size of the small reward, indicating that they attend preferentially to the jackpots. Together, these results challenge utility curve-based accounts of risk sensitivity, and suggest that psychological factors, such as outcome salience and uncertainty itself, contribute to risky decision-making.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Benjamin Y. Hayden; Amrita C. Nair; Allison N. McCoy; Michael L. Platt

Posterior cingulate cortex mediates outcome-contingent allocation of behavior Journal Article

In: Neuron, vol. 60, no. 1, pp. 19–25, 2008.

Abstract | Links | BibTeX

@article{Hayden2008a,
title = {Posterior cingulate cortex mediates outcome-contingent allocation of behavior},
author = {Benjamin Y. Hayden and Amrita C. Nair and Allison N. McCoy and Michael L. Platt},
doi = {10.1016/j.neuron.2008.09.012},
year = {2008},
date = {2008-01-01},
journal = {Neuron},
volume = {60},
number = {1},
pages = {19--25},
abstract = {Adaptive decision making requires selecting an action and then monitoring its consequences to improve future decisions. The neuronal mechanisms supporting action evaluation and subsequent behavioral modification, however, remain poorly understood. To investigate the contribution of posterior cingulate cortex (CGp) to these processes, we recorded activity of single neurons in monkeys performing a gambling task in which the reward outcome of each choice strongly influenced subsequent choices. We found that CGp neurons signaled reward outcomes in a nonlinear fashion and that outcome-contingent modulations in firing rate persisted into subsequent trials. Moreover, firing rate on any one trial predicted switching to the alternative option on the next trial. Finally, microstimulation in CGp following risky choices promoted a preference reversal for the safe option on the following trial. Collectively, these results demonstrate that CGp directly contributes to the evaluative processes that support dynamic changes in decision making in volatile environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Amelia R. Hunt; Craig S. Chapman; Alan Kingstone

Taking a long look at action and time perception Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 1, pp. 125–136, 2008.

Abstract | Links | BibTeX

@article{Hunt2008,
title = {Taking a long look at action and time perception},
author = {Amelia R. Hunt and Craig S. Chapman and Alan Kingstone},
doi = {10.1037/0096-1523.34.1.125},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {34},
number = {1},
pages = {125--136},
abstract = {Everyone has probably experienced chronostasis, an illusion of time that can cause a clock's second hand to appear to stand still during an eye movement. Though the illusion was initially thought to reflect a mechanism for preserving perceptual continuity during eye movements, an alternative hypothesis has been advanced that overestimation of time might be a general effect of any action. Contrary to both of these hypotheses, the experiments reported here suggest that distortions of time perception related to an eye movement are not distinct from temporal distortions for other kinds of responses. Moreover, voluntary action is neither necessary nor sufficient for overestimation effects. These results lead to a new interpretation of chronostasis based on the role of attention and memory in time estimation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Albrecht W. Inhoff; Matthew S. Solomon; Bradley A. Seymour; Ralph Radach

Eye position changes during reading fixations are spatially selective Journal Article

In: Vision Research, vol. 48, no. 8, pp. 1027–1039, 2008.

Abstract | Links | BibTeX

@article{Inhoff2008,
title = {Eye position changes during reading fixations are spatially selective},
author = {Albrecht W. Inhoff and Matthew S. Solomon and Bradley A. Seymour and Ralph Radach},
doi = {10.1016/j.visres.2008.01.012},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {8},
pages = {1027--1039},
abstract = {Intra-fixation location changes were measured when one-line sentences written in lower or aLtErNaTiNg case were read. Intra-fixation location changes were common and their size was normally distributed except for a relatively high proportion of fixations without a discernible location change. Location changes that did occur were systematically biased toward the right when alternating case was read. Irrespective of case type, changes of the right eye were biased toward the right at the onset of sentence reading, and this spatial bias decreased as sentence reading progressed from left to right. The left eye showed a relatively stable right-directed bias. These results show that processing demands can pull the two fixated eyes in the same direction and that the response to this pull can differ for the right and left eye.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Albrecht W. Inhoff; Matthew S. Starr; Matthew S. Solomon; Lars Placke

Eye movements during the reading of compound words and the influence of lexeme meaning Journal Article

In: Memory and Cognition, vol. 36, no. 3, pp. 675–687, 2008.

Abstract | Links | BibTeX

@article{Inhoff2008a,
title = {Eye movements during the reading of compound words and the influence of lexeme meaning},
author = {Albrecht W. Inhoff and Matthew S. Starr and Matthew S. Solomon and Lars Placke},
doi = {10.3758/MC.36.3.675},
year = {2008},
date = {2008-01-01},
journal = {Memory and Cognition},
volume = {36},
number = {3},
pages = {675--687},
abstract = {We examined the use of lexeme meaning during the processing of spatially unified bilexemic compound words by manipulating both the location and the word frequency of the lexeme that primarily defined the meaning of a compound (i.e., the dominant lexeme). The semantically dominant and nondominant lexemes occupied either the beginning or the ending compound word location, and the beginning and ending lexemes could be either high- or low-frequency words. Three tasks were used--lexical decision, naming, and sentence reading--all of which focused on the effects of lexeme frequency as a function of lexeme dominance. The results revealed a larger word frequency effect for the dominant lexeme in all three tasks. Eye movements during sentence reading further revealed larger word frequency effects for the dominant lexeme via several oculomotor measures, including the duration of the first fixation on a compound word. These findings favor theoretical conceptions in which the use of lexeme meaning is an integral part of the compound recognition process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Helene Intraub; Christopher A. Dickinson

False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception Journal Article

In: Psychological Science, vol. 19, no. 10, pp. 1007–1014, 2008.

Abstract | Links | BibTeX

@article{Intraub2008,
title = {False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception},
author = {Helene Intraub and Christopher A. Dickinson},
doi = {10.1111/j.1467-9280.2008.02192.x},
year = {2008},
date = {2008-01-01},
journal = {Psychological Science},
volume = {19},
number = {10},
pages = {1007--1014},
abstract = {Errors of commission are thought to be caused by heavy memory loads, confusing information, lengthy retention intervals, or some combination of these factors. We report false memory beyond the boundaries of a view, boundary extension, after less than 1/20th of a second. Photographs of scenes were interrupted by a 42-ms or 250-ms mask, 250 ms into viewing, before reappearing or being replaced with a different view (Experiment 1). Postinterruption photographs that were unchanged were rated as closer up than the original views; when the photographs were changed, the same pair of closer-up and wider-angle views was rated as more similar when the closer view was first, rather than second. Thus, observers remembered preinterruption views with extended boundaries. Results were replicated when the interruption included a saccade (Experiment 2). The brevity of these interruptions has implications for visual scanning; it also challenges the traditional distinction between perception and memory. We offer an alternative conceptualization that shows how source monitoring can explain false memory after an interruption briefer than an eyeblink.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stuart Jackson; Fred Cummins; Nuala Brady

Rapid perceptual switching of a reversible biological figure Journal Article

In: PLoS ONE, vol. 3, no. 12, pp. e3982, 2008.

Abstract | Links | BibTeX

@article{Jackson2008,
title = {Rapid perceptual switching of a reversible biological figure},
author = {Stuart Jackson and Fred Cummins and Nuala Brady},
doi = {10.1371/journal.pone.0003982},
year = {2008},
date = {2008-01-01},
journal = {PLoS ONE},
volume = {3},
number = {12},
pages = {e3982},
abstract = {Certain visual stimuli can give rise to contradictory perceptions. In this paper we examine the temporal dynamics of perceptual reversals experienced with biological motion, comparing these dynamics to those observed with other ambiguous structure from motion (SFM) stimuli. In our first experiment, naïve observers monitored perceptual alternations with an ambiguous rotating walker, a figure that randomly alternates between walking in clockwise (CW) and counter-clockwise (CCW) directions. While the number of reported reversals varied between observers, the observed dynamics (distribution of dominance durations, CW/CCW proportions) were comparable to those experienced with an ambiguous kinetic depth cylinder. In a second experiment, we compared reversal profiles with rotating and standard point-light walkers (i.e. non-rotating). Over multiple test repetitions, three out of four observers experienced consistently shorter mean percept durations with the rotating walker, suggesting that the added rotational component may speed up reversal rates with biomotion. For both stimuli, the drift in alternation rate across trial and across repetition was minimal. In our final experiment, we investigated whether reversals with the rotating walker and a non-biological object with similar global dimensions (rotating cuboid) occur at random phases of the rotation cycle. We found evidence that some observers experience peaks in the distribution of response locations that are relatively stable across sessions. Using control data, we discuss the role of eye movements in the development of these reversal patterns, and the related role of exogenous stimulus characteristics. In summary, we have demonstrated that the temporal dynamics of reversal with biological motion are similar to other forms of ambiguous SFM. We conclude that perceptual switching with biological motion is a robust bistable phenomenon.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

K. -M. Lee; Edward L. Keller

Neural activity in the frontal eye fields modulated by the number of alternatives in target choice Journal Article

In: Journal of Neuroscience, vol. 28, no. 9, pp. 2242–2251, 2008.

Abstract | Links | BibTeX

@article{Lee2008,
title = {Neural activity in the frontal eye fields modulated by the number of alternatives in target choice},
author = {K. -M. Lee and Edward L. Keller},
doi = {10.1523/JNEUROSCI.3596-07.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neuroscience},
volume = {28},
number = {9},
pages = {2242--2251},
abstract = {Selection of identical responses may not use the same neural mechanisms when the number of alternatives (NA) for the selection changes, as suggested by Hick's law. For elucidating the choice mechanisms, frontal eye field (FEF) neurons were monitored during a color-to-location choice saccade task as the number of potential targets was varied. Visual responses to alternative targets decreased as NA increased, whereas perisaccade activities increased with NA. These modulations of FEF activities seem closely related to the choice process because the activity enhancements coincided with the timing of target selection, and the neural modulation was greater as NA increased, features expected of neural correlates for a choice process from the perspective of Hick's law. Our current observations suggest two novel notions of FEF neuronal behavior that have not been reported previously: (1) cells called "phasic visual" that do not discharge in the perisaccade interval in a delayed-saccade paradigm show such activity in a choice response task at the time of the saccade; and (2) the activity in FEF visuomotor cells displays an inverse relationship between perisaccadic activity and the time of saccade triggering, with higher levels of activity leading to longer saccade reaction times. These findings support the area's involvement in sensory-motor translation for target selection through coactivation and competitive interaction of neural populations that code for alternative action sets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
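
Hick's law, invoked in the abstract above, predicts that choice reaction time grows roughly logarithmically with the number of response alternatives: RT = a + b·log2(N + 1). A minimal sketch of that prediction; the coefficients a and b here are illustrative placeholders, not values from this paper:

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted choice reaction time (s) under Hick's law:
    RT = a + b * log2(N + 1). Coefficients are illustrative only."""
    return a + b * math.log2(n_alternatives + 1)

# Predicted RT rises monotonically with the number of alternatives.
predictions = {n: round(hick_rt(n), 3) for n in (1, 2, 4, 8)}
```

For large N, each doubling of the number of alternatives adds close to a constant increment (about b seconds) to the predicted reaction time, which is why the growth is sublinear in N.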

Vaia Lestou; Frank E. Pollick; Zoe Kourtzi

Neural substrates for action understanding at different description levels in the human brain Journal Article

In: Journal of Cognitive Neuroscience, vol. 20, no. 2, pp. 324–341, 2008.

Abstract | Links | BibTeX

@article{Lestou2008,
title = {Neural substrates for action understanding at different description levels in the human brain},
author = {Vaia Lestou and Frank E. Pollick and Zoe Kourtzi},
doi = {10.1162/jocn.2008.20021},
year = {2008},
date = {2008-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {20},
number = {2},
pages = {324--341},
abstract = {Understanding complex movements and abstract action goals is an important skill for our social interactions. Successful social interactions entail understanding of actions at different levels of action description, ranging from detailed movement trajectories that support learning of complex motor skills through imitation to distinct features of actions that allow us to discriminate between action goals and different action styles. Previous studies have implicated premotor, parietal, and superior temporal areas in action understanding. However, the role of these different cortical areas in action understanding at different levels of action description remains largely unknown. We addressed this question using advanced animation and stimulus generation techniques in combination with sensitive functional magnetic resonance imaging adaptation or repetition suppression methods. We tested the neural sensitivity of fronto-parietal and visual areas to differences in the kinematics and goals of actions using kinematic morphs of arm movements. Our findings provide novel evidence for differential involvement of ventral premotor, parietal, and temporal regions in action understanding. We show that the ventral premotor cortex encodes the physical similarity between movement trajectories and action goals that are important for exact copying of actions and the acquisition of complex motor skills. In contrast, parietal regions and the superior temporal sulcus process the perceptual similarity between movements and may support the perception and imitation of abstract action goals and movement styles. Thus, our findings propose that fronto-parietal and visual areas involved in action understanding mediate a cascade of visual-motor processes at different levels of action description from exact movement copies to abstract action goals achieved with different movement styles.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christof Körner; Iain D. Gilchrist

Memory processes in multiple-target visual search Journal Article

In: Psychological Research, vol. 72, no. 1, pp. 99–105, 2008.

Abstract | Links | BibTeX

@article{Koerner2008,
title = {Memory processes in multiple-target visual search},
author = {Christof Körner and Iain D. Gilchrist},
doi = {10.1007/s00426-006-0075-1},
year = {2008},
date = {2008-01-01},
journal = {Psychological Research},
volume = {72},
number = {1},
pages = {99--105},
abstract = {Gibson, Li, Skow, Brown, and Cooke (Psychological Science, 11, 324–327, 2000) had participants carry out a search task in which they were required to detect the presence of one or two targets. In order to successfully perform such a multiple-target visual search task, participants had to remember the location of the first target while searching for the second target. In two experiments we investigated the cost of remembering this target location. In Experiment 1, we compared performance on the Gibson et al. task with performance on a more conventional present–absent search task. The comparison suggests a substantial performance cost as measured by reaction time, number of fixations and slope of the search functions. In Experiment 2, we looked in detail at refixations of distractors, which are a direct measure of attentional deployment. We demonstrated that the cost in this multiple-target visual search task was due to an increased number of refixations on previously visited distractors. Such refixations were present right from the start of the search. This change in search behaviour may be caused by the necessity of having to remember a target: allocating memory for the upcoming target may consume memory capacity that may otherwise be available for the tagging of distractors. These results support the notion of limited capacity memory processes in search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Victor Kuperman; Raymond Bertram; R. Harald Baayen

Morphological dynamics in compound processing Journal Article

In: Language and Cognitive Processes, vol. 23, no. 7-8, pp. 1089–1132, 2008.

Abstract | Links | BibTeX

@article{Kuperman2008,
title = {Morphological dynamics in compound processing},
author = {Victor Kuperman and Raymond Bertram and R. Harald Baayen},
doi = {10.1080/01690960802193688},
year = {2008},
date = {2008-01-01},
journal = {Language and Cognitive Processes},
volume = {23},
number = {7-8},
pages = {1089--1132},
abstract = {This paper explores the time-course of morphological processing of trimorphemic Finnish compounds. We find evidence for the parallel access to full-forms and morphological constituents diagnosed by the early effects of compound frequency, as well as early effects of left constituent frequency and family size. We also observe an interaction between compound frequency and both the left and the right constituent family sizes. Furthermore, our data show that suffixes embedded in the derived left constituent of a compound are efficiently used for establishing the boundary between compounds' constituents. The success of segmentation of a compound is demonstrably modulated by the affixal salience of the embedded suffixes. We discuss implications of these findings for current models of morphological processing and propose a new model that views morphemes, combinations of morphemes and morphological paradigms as probabilistic sources of information that are interactively used in recognition of complex words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jochen Laubrock; Ralf Engbert; Reinhold Kliegl

Fixational eye movements predict the perceived direction of ambiguous apparent motion Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008.

@article{Laubrock2008,
title = {Fixational eye movements predict the perceived direction of ambiguous apparent motion},
author = {Jochen Laubrock and Ralf Engbert and Reinhold Kliegl},
doi = {10.1167/8.14.13},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--17},
abstract = {Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency, precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Keisuke Kawasaki; David L. Sheinberg

Learning to recognize visual objects with microstimulation in inferior temporal cortex Journal Article

In: Journal of Neurophysiology, vol. 100, no. 1, pp. 197–211, 2008.

@article{Kawasaki2008,
title = {Learning to recognize visual objects with microstimulation in inferior temporal cortex},
author = {Keisuke Kawasaki and David L. Sheinberg},
doi = {10.1152/jn.90247.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {1},
pages = {197--211},
abstract = {The malleability of object representations by experience is essential for adaptive behavior. It has been hypothesized that neurons in inferior temporal cortex (IT) in monkeys are pivotal in visual association learning, evidenced by experiments revealing changes in neural selectivity following visual learning, as well as by lesion studies, wherein functional inactivation of IT impairs learning. A critical question remaining to be answered is whether IT neuronal activity is sufficient for learning. To address this question directly, we conducted experiments combining visual classification learning with microstimulation in IT. We assessed the effects of IT microstimulation during learning in cases where the stimulation was exclusively informative, conditionally informative, and informative but not necessary for the classification task. The results show that localized microstimulation in IT can be used to establish visual classification learning, and the same stimulation applied during learning can predictably bias judgments on subsequent recognition. The effect of induced activity can be explained neither by direct stimulation-motor association nor by simple detection of cortical stimulation. We also found that the learning effects are specific to IT stimulation as they are not observed by microstimulation in an adjacent auditory area. Our results add to the evidence that the differential activity in IT during visual association learning is sufficient for establishing new associations. The results suggest that experimentally manipulated activity patterns within IT can be effectively combined with ongoing visually induced activity during the formation of new associations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Edward L. Keller; Kyoung-Min Lee; Se-Woong Park; Jessica A. Hill

Effect of inactivation of the cortical frontal eye field on saccades generated in a choice response paradigm Journal Article

In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2726–2737, 2008.

@article{Keller2008,
title = {Effect of inactivation of the cortical frontal eye field on saccades generated in a choice response paradigm},
author = {Edward L. Keller and Kyoung-Min Lee and Se-Woong Park and Jessica A. Hill},
doi = {10.1152/jn.90673.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {5},
pages = {2726--2737},
abstract = {Previous studies using muscimol inactivations in the frontal eye fields (FEFs) have shown that saccades generated by recall from working memory are eliminated by these lesions, whereas visually guided saccades are relatively spared. In these experiments, we made reversible inactivations in FEFs in alert macaque monkeys and examined the effect on saccades in a choice response task. Our task required monkeys to learn arbitrary pairings between colored stimuli and saccade direction. Following inactivations, the percentage of choice errors increased as a function of the number of alternative (NA) pairings. In contrast, the percentage of dysmetric saccades (saccades that landed in the correct quadrant but were inaccurate) did not vary with NA. Saccade latency increased postlesion but did not increase with NA. We also made simultaneous inactivations in both FEFs. The results following bilateral lesions showed approximately twice as many choice errors. We conclude that the FEFs are involved in the generation of saccades in choice response tasks. The dramatic effect of NA on choice errors, but the lack of an effect of NA on motor errors or response latency, suggests that two types of processing are interrupted by FEF lesions. The first involves the formation of a saccadic intention vector from associative memory inputs, and the second, the execution of the saccade from the intention vector. An alternative interpretation of the first result is that a role of the FEFs may be to suppress incorrect responses. The doubling of choice errors following bilateral FEF lesions suggests that the effect of unilateral lesions is not caused by a general inhibition of the lesioned side by the intact side.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Chantal Kemner; Lizet Ewijk; Herman Engeland; Ignace T. C. Hooge

Brief report: Eye movements during visual search tasks indicate enhanced stimulus discriminability in subjects with PDD Journal Article

In: Journal of Autism and Developmental Disorders, vol. 38, no. 3, pp. 553–558, 2008.

@article{Kemner2008,
title = {Brief report: Eye movements during visual search tasks indicate enhanced stimulus discriminability in subjects with PDD},
author = {Chantal Kemner and Lizet Ewijk and Herman Engeland and Ignace T. C. Hooge},
doi = {10.1007/s10803-007-0406-0},
year = {2008},
date = {2008-01-01},
journal = {Journal of Autism and Developmental Disorders},
volume = {38},
number = {3},
pages = {553--558},
abstract = {Subjects with PDD excel on certain visuo-spatial tasks, among them visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements during visual search tasks in high functioning adult men with PDD and a control group. Subjects with PDD were significantly faster than controls in these tasks, replicating earlier findings in children. Eye movement data showed that subjects with PDD made fewer eye movements than controls. No evidence was found for a different search strategy between the groups. The data indicate an enhanced ability to discriminate between stimulus elements in PDD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Dirk Kerzel; Angélique Gauch; Blandine Ulmann

Local motion inside an object affects pointing less than smooth pursuit Journal Article

In: Experimental Brain Research, vol. 191, no. 2, pp. 187–195, 2008.

@article{Kerzel2008,
title = {Local motion inside an object affects pointing less than smooth pursuit},
author = {Dirk Kerzel and Angélique Gauch and Blandine Ulmann},
doi = {10.1007/s00221-008-1514-6},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {191},
number = {2},
pages = {187--195},
abstract = {During smooth pursuit eye movements, briefly presented objects are mislocalized in the direction of motion. It has been proposed that the localization error is the sum of the pursuit signal and the retinal motion signal in a $\sim$200 ms interval after flash onset. To evaluate contributions of retinal motion signals produced by the entire object (global motion) and elements within the object (local motion), we asked observers to reach to flashed Gabor patches (Gaussian-windowed sine-wave gratings). Global motion was manipulated by varying the duration of a stationary flash, and local motion was manipulated by varying the motion of the sine-wave. Our results confirm that global retinal motion reduces the localization error. The effect of local retinal motion on object localization was far smaller, even though local and global motion had equal effects on eye velocity. Thus, local retinal motion has differential access to manual and oculomotor control circuits. Further, we observed moderate correlations between smooth pursuit gain and localization error.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Dirk Kerzel; David Souto; Nathalie E. Ziegler

Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements Journal Article

In: Vision Research, vol. 48, no. 7, pp. 958–969, 2008.

@article{Kerzel2008a,
title = {Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements},
author = {Dirk Kerzel and David Souto and Nathalie E. Ziegler},
doi = {10.1016/j.visres.2008.01.015},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {7},
pages = {958--969},
abstract = {A number of studies have shown that stationary backgrounds compromise smooth pursuit eye movements. It has been suggested that poor attentional selection of the pursuit target was responsible for reductions of pursuit gain. To quantify the detrimental effects of attention, we instructed observers to either pay attention to background objects or to ignore them. The to-be-attended object was indicated by peripheral or central cues. Strong reductions of pursuit gain occurred when the following conditions were met: (a) the subject paid attention to the object, (b) a salient event was present, for instance the onset of the target or cue, and (c) the attended target produced retinal motion. Removing any of the three conditions resulted in no or far smaller decreases of pursuit gain. Further, decreases in pursuit gain were present with perceptual discrimination and simple manual detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

C. -H. Juan; Neil G. Muggleton; Ovid J. L. Tzeng; D. L. Hung; A. Cowey; Vincent Walsh

Segregation of visual selection and saccades in human frontal eye fields Journal Article

In: Cerebral Cortex, vol. 18, no. 10, pp. 2410–2415, 2008.

@article{Juan2008,
title = {Segregation of visual selection and saccades in human frontal eye fields},
author = {C. -H. Juan and Neil G. Muggleton and Ovid J. L. Tzeng and D. L. Hung and A. Cowey and Vincent Walsh},
doi = {10.1093/cercor/bhn001},
year = {2008},
date = {2008-01-01},
journal = {Cerebral Cortex},
volume = {18},
number = {10},
pages = {2410--2415},
abstract = {The premotor theory of attention suggests that target processing and generation of a saccade to the target are interdependent. Temporally precise transcranial magnetic stimulation (TMS) was delivered over the human frontal eye fields, the area most frequently associated with the premotor theory in association with eye movements, while subjects performed a visually instructed pro-/antisaccade task. Visual analysis and saccade preparation were clearly separated in time, as indicated by 2 distinct time points of TMS delivery that resulted in elevated saccade latencies. These results show that visual analysis and saccade preparation, although frequently enacted together, are dissociable processes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Johanna K. Kaakinen; Jukka Hyönä

Perspective-driven text comprehension Journal Article

In: Applied Cognitive Psychology, vol. 22, pp. 319–334, 2008.

@article{Kaakinen2008,
title = {Perspective-driven text comprehension},
author = {Johanna K. Kaakinen and Jukka Hyönä},
year = {2008},
date = {2008-01-01},
journal = {Applied Cognitive Psychology},
volume = {22},
pages = {319--334},
abstract = {The present article reports results of an eye‐tracking experiment, which examines whether the perspective‐driven text comprehension framework applies to comprehension of narrative text. Sixty‐four participants were instructed to adopt either a burglar's or an interior designer's perspective. A pilot test showed that readers have more overlapping prior knowledge with the burglar‐relevant than with the interior designer‐relevant information of the experimental text. Participants read either a transparent text version where the (ir)relevance of text segments to the perspective was made apparent, or an opaque text version where no direct mention of the perspective was made. After reading, participants wrote a free recall of the text. The results showed that perspective‐related prior knowledge modulates the perspective effects observed in on‐line text processing and that signalling of (ir)relevance helps in encoding relevant information to memory. It is concluded that the proposed framework generalizes to the on‐line comprehension of narrative texts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Roger Kalla; Neil G. Muggleton; Chi-Hung Juan; Alan Cowey; Vincent Walsh

The timing of the involvement of the frontal eye fields and posterior parietal cortex in visual search Journal Article

In: NeuroReport, vol. 19, no. 10, pp. 1069–1073, 2008.

@article{Kalla2008,
title = {The timing of the involvement of the frontal eye fields and posterior parietal cortex in visual search},
author = {Roger Kalla and Neil G. Muggleton and Chi-Hung Juan and Alan Cowey and Vincent Walsh},
doi = {10.1097/WNR.0b013e328304d9c4},
year = {2008},
date = {2008-01-01},
journal = {NeuroReport},
volume = {19},
number = {10},
pages = {1069--1073},
abstract = {The frontal eye fields (FEFs) and posterior parietal cortex (PPC) are important for target detection in conjunction visual search but the relative timings of their contribution have not been compared directly. We addressed this using temporally specific double pulse transcranial magnetic stimulation delivered at different times over FEFs and PPC during performance of a visual search task. Disruption of performance was earlier (0/40 ms) with FEF stimulation than with PPC stimulation (120/160 ms), revealing a clear and substantial temporal dissociation of the involvement of these two areas in conjunction visual search. We discuss these timings with reference to the respective roles of FEF and PPC in the modulation of extrastriate visual areas and selection of responses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Andre Kaminiarz; Bart Krekelberg; Frank Bremmer

Expansion of visual space during optokinetic afternystagmus (OKAN) Journal Article

In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2470–2478, 2008.

@article{Kaminiarz2008,
title = {Expansion of visual space during optokinetic afternystagmus (OKAN)},
author = {Andre Kaminiarz and Bart Krekelberg and Frank Bremmer},
doi = {10.1152/jn.00017.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {99},
number = {5},
pages = {2470--2478},
abstract = {The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Annette Kinder; Martin Rolfs; Reinhold Kliegl

Sequence learning at optimal stimulus – response mapping: Evidence from a serial reaction time task Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 61, no. 2, pp. 203–209, 2008.

@article{Kinder2008,
title = {Sequence learning at optimal stimulus – response mapping: Evidence from a serial reaction time task},
author = {Annette Kinder and Martin Rolfs and Reinhold Kliegl},
doi = {10.1080/17470210701557555},
year = {2008},
date = {2008-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {61},
number = {2},
pages = {203--209},
abstract = {We propose a new version of the serial reaction time (SRT) task in which participants merely looked at the target instead of responding manually. As response locations were identical to target locations, stimulus–response compatibility was maximal in this task. We demonstrated that saccadic response times decreased during training and increased again when a new sequence was presented. It is unlikely that this effect was caused by stimulus–response (S–R) learning because bonds between (visual) stimuli and (oculomotor) responses were already well established before the experiment started. Thus, the finding shows that the building of S–R bonds is not essential for learning in the SRT task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


P. Christiaan Klink; Raymond Van Ee; M. M. Nijs; G. J. Brouwer; A. J. Noest; Richard J. A. Wezel

Early interactions between neuronal adaptation and voluntary control determine perceptual choices in bistable vision Journal Article

In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008.


@article{Klink2008,
title = {Early interactions between neuronal adaptation and voluntary control determine perceptual choices in bistable vision},
author = {P. Christiaan Klink and Raymond Van Ee and M. M. Nijs and G. J. Brouwer and A. J. Noest and Richard J. A. Wezel},
doi = {10.1167/8.5.16},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {5},
pages = {1--18},
abstract = {At the onset of bistable stimuli, the brain needs to choose which of the competing perceptual interpretations will first reach awareness. Stimulus manipulations and cognitive control both influence this choice process, but the underlying mechanisms and interactions remain poorly understood. Using intermittent presentation of bistable visual stimuli, we demonstrate that short interruptions cause perceptual reversals upon the next presentation, whereas longer interstimulus intervals stabilize the percept. Top-down voluntary control biases this process but does not override the timing dependencies. Extending a recently introduced low-level neural model, we demonstrate that percept-choice dynamics in bistable vision can be fully understood with interactions in early neural processing stages. Our model includes adaptive neural processing preceding a rivalry resolution stage with cross-inhibition, adaptation, and an interaction of the adaptation levels with a neural baseline. Most importantly, our findings suggest that top-down attentional control over bistable stimuli interacts with low-level mechanisms at early levels of sensory processing before perceptual conflicts are resolved and perceptual choices about bistable stimuli are made.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Stefan Klöppel; Bogdan Draganski; Charlotte V. Golding; Carlton Chu; Zoltan Nagy; Philip A. Cook; Stephen L. Hicks; Christopher Kennard; Daniel C. Alexander; Geoff J. M. Parker; Sarah J. Tabrizi; Richard S. J. Frackowiak

White matter connections reflect changes in voluntary-guided saccades in pre-symptomatic Huntington's disease Journal Article

In: Brain, vol. 131, no. 1, pp. 196–204, 2008.


@article{Kloeppel2008,
title = {White matter connections reflect changes in voluntary-guided saccades in pre-symptomatic Huntington's disease},
author = {Stefan Klöppel and Bogdan Draganski and Charlotte V. Golding and Carlton Chu and Zoltan Nagy and Philip A. Cook and Stephen L. Hicks and Christopher Kennard and Daniel C. Alexander and Geoff J. M. Parker and Sarah J. Tabrizi and Richard S. J. Frackowiak},
doi = {10.1093/brain/awm275},
year = {2008},
date = {2008-01-01},
journal = {Brain},
volume = {131},
number = {1},
pages = {196--204},
abstract = {Huntington's disease is caused by a known genetic mutation and so potentially can be diagnosed many years before the onset of symptoms. Neuropathological changes have been found in both striatum and frontal cortex in the pre-symptomatic stage. Disruption of cortico-striatal white matter fibre tracts is therefore likely to contribute to the first clinical signs of the disease. We analysed diffusion tensor MR image (DTI) data from 25 pre-symptomatic gene carriers (PSCs) and 20 matched controls using a multivariate support vector machine to identify patterns of changes in fractional anisotropy (FA). In addition, we performed probabilistic fibre tracking to detect changes in 'streamlines' connecting frontal cortex to striatum. We found a pattern of structural brain changes that includes putamen bilaterally as well as anterior parts of the corpus callosum. This pattern was sufficiently specific to enable us to correctly classify 82% of scans as coming from a PSC or control subject. Fibre tracking revealed a reduction of frontal cortico-fugal streamlines reaching the body of the caudate in PSCs compared to controls. In the left hemispheres of PSCs we found a negative correlation between years to estimated disease onset and streamlines from frontal cortex to body of caudate. A large proportion of the fibres to the caudate body originate from the frontal eye fields, which play an important role in the control of voluntary saccades. This type of saccade is specifically impaired in PSCs and is an early clinical sign of motor abnormalities. A correlation analysis in 14 PSCs revealed that subjects with greater impairment of voluntary-guided saccades had fewer fibre tracking streamlines connecting the frontal cortex and caudate body. Our findings suggest a specific patho-physiological basis for these symptoms by indicating selective vulnerability of the associated white matter tracts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Christopher M. Knapp; Irene Gottlob; Rebecca J. McLean; Frank A. Proudlock

Horizontal and vertical look and stare optokinetic nystagmus symmetry in healthy adult volunteers Journal Article

In: Investigative Ophthalmology & Visual Science, vol. 49, no. 2, pp. 581–588, 2008.


@article{Knapp2008,
title = {Horizontal and vertical look and stare optokinetic nystagmus symmetry in healthy adult volunteers},
author = {Christopher M. Knapp and Irene Gottlob and Rebecca J. McLean and Frank A. Proudlock},
doi = {10.1167/iovs.07-0773},
year = {2008},
date = {2008-01-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {49},
number = {2},
pages = {581--588},
abstract = {PURPOSE: Look optokinetic nystagmus (OKN) consists of voluntary tracking of details in a moving visual field, whereas stare OKN is reflexive and consists of shorter slow phases of lower gain. Horizontal OKN is symmetrical in healthy adults, whereas symmetry of vertical OKN is controversial. Horizontal and vertical look and stare OKN symmetry was measured, and the consistency of individual asymmetries and the effect of varying stimulus conditions were investigated.METHODS: Horizontal and vertical look and stare OKN gains were recorded in 15 healthy volunteers (40 degrees /s) using new methods to delineate look and stare OKN. Responses with right and left eye viewing were compared to investigate consistency of individual OKN asymmetry. In a second experiment, the symmetry of stare OKN was measured in nine volunteers varying velocity (20 degrees /s and 40 degrees /s), contrast (50% and 100%), grating contrast profile (square or sine wave), and stimulus shape (full screen or circular vignetted).RESULTS: There was no horizontal or vertical asymmetry in look or stare OKN gain for all volunteers grouped together. However, individual vertical asymmetries were strongly correlated for left and right eye viewing (look: r = 0.77},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John D. Koehn; Elizabeth Roy; Jason J. S. Barton

The "diagonal effect": A systematic error in oblique antisaccades Journal Article

In: Journal of Neurophysiology, vol. 100, no. 2, pp. 587–597, 2008.


@article{Koehn2008,
title = {The "diagonal effect": A systematic error in oblique antisaccades},
author = {John D. Koehn and Elizabeth Roy and Jason J. S. Barton},
doi = {10.1152/jn.90268.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {2},
pages = {587--597},
abstract = {Antisaccades are known to show greater variable error and also a systematic hypometria in their amplitude compared with visually guided prosaccades. In this study, we examined whether their accuracy in direction (as opposed to amplitude) also showed a systematic error. We had human subjects perform prosaccades and antisaccades to goals located at a variety of polar angles. In the first experiment, subjects made prosaccades or antisaccades to one of eight equidistant locations in each block, whereas in the second, they made saccades to one of two equidistant locations per block. In the third, they made antisaccades to one of two locations at different distances but with the same polar angle in each block. Regardless of block design, the results consistently showed a saccadic systematic error, in that oblique antisaccades (but not prosaccades) requiring unequal vertical and horizontal vector components were deviated toward the 45 degrees diagonal meridians. This finding could not be attributed to range effects in either Cartesian or polar coordinates. A perceptual origin of the diagonal effect is suggested by similar systematic errors in other studies of memory-guided manual reaching or perceptual estimation of direction, and may indicate a common spatial bias when there is uncertain information about spatial location.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xingshan Li; Gordon D. Logan

Object-based attention in Chinese readers of Chinese words: Beyond Gestalt principles Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 945–949, 2008.


@article{Li2008,
title = {Object-based attention in Chinese readers of Chinese words: Beyond Gestalt principles},
author = {Xingshan Li and Gordon D. Logan},
doi = {10.3758/PBR.15.5.945},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {5},
pages = {945--949},
abstract = {Most object-based attention studies use objects defined bottom-up by Gestalt principles. In the present study, we defined objects top-down, using Chinese words that were seen as objects by skilled readers of Chinese. Using a spatial cuing paradigm, we found that a target character was detected faster if it was in the same word as the cued character than if it was in a different word. Because there were no bottom-up factors that distinguished the words, these results showed that objects defined by subjects' knowledge--in this case, lexical information--can also constrain the deployment of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg

Adaptive strategies for reading with a forced retinal location Journal Article

In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008.


@article{Lingnau2008,
title = {Adaptive strategies for reading with a forced retinal location},
author = {Angelika Lingnau and Jens Schwarzbach and Dirk Vorberg},
doi = {10.1167/8.5.6},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {5},
pages = {1--18},
abstract = {Forcing normal-sighted participants to use a distinct parafoveal retinal location for reading, we studied which part of the visual field is best suited to take over functions of the fovea during early stages of macular degeneration (MD). A region to the right of fixation led to best reading performance and most natural gaze behavior, whereas reading performance was severely impaired when a region to the left or below fixation had to be used. An analysis of the underlying oculomotor behavior revealed that practice effects were accompanied by a larger number of saccades in text direction and decreased fixation durations, whereas no adjustment of saccade amplitudes was observed. We provide an explanation for the observed performance differences at different retinal locations based on the interplay of attention and eye movements. Our findings have important implications for the development of training methods for MD patients targeted at reading, suggesting that it would be beneficial for MD patients to use a region to the right of their central scotoma.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Valérie Gaveau; Denis Pélisson; Annabelle Blangero; Christian Urquizar; Claude Prablanc; Alain Vighetto; Laure Pisella

Saccade control and eye-hand coordination in optic ataxia Journal Article

In: Neuropsychologia, vol. 46, no. 2, pp. 475–486, 2008.


@article{Gaveau2008,
title = {Saccade control and eye-hand coordination in optic ataxia},
author = {Valérie Gaveau and Denis Pélisson and Annabelle Blangero and Christian Urquizar and Claude Prablanc and Alain Vighetto and Laure Pisella},
doi = {10.1016/j.neuropsychologia.2007.08.028},
year = {2008},
date = {2008-01-01},
journal = {Neuropsychologia},
volume = {46},
number = {2},
pages = {475--486},
abstract = {The aim of this work was to investigate ocular control in patients with optic ataxia (OA). Following a lesion in the posterior parietal cortex (PPC), these patients exhibit a deficit for fast visuo-motor control of reach-to-grasp movements. Here, we assessed the fast visuo-motor control of saccades as well as spontaneous eye-hand coordination in two bilateral OA patients and five neurologically intact controls in an ecological "look and point" paradigm. To test fast saccadic control, trials with unexpected target-jumps synchronised with saccade onset were randomly intermixed with stationary target trials. Results confirmed that control subjects achieved visual capture (foveation) of the displaced targets with the same timing as stationary targets (fast saccadic control) and began their hand movement systematically at the end of the primary saccade. In contrast, the two bilateral OA patients exhibited a delayed visual capture, especially of displaced targets, resulting from an impairment of fast saccadic control. They also exhibited a peculiar eye-hand coordination pattern, spontaneously delaying their hand movement onset until the execution of a final corrective saccade, which allowed target foveation. To test whether this pathological behaviour results from a delay in updating visual target location, we had subjects perform a second experiment in the same control subjects in which the target-jump was synchronised with saccade offset. With less time for target location updating, the control subjects exhibited the same lack of fast saccadic control as the OA patients. We propose that OA corresponds to an impairment of fast updating of target location, therefore affecting both eye and hand movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Katharina Georg; Fred H. Hamker; Markus Lappe

Influence of adaptation state and stimulus luminance on peri-saccadic localization Journal Article

In: Journal of Vision, vol. 8, no. 1, pp. 1–11, 2008.


@article{Georg2008,
title = {Influence of adaptation state and stimulus luminance on peri-saccadic localization},
author = {Katharina Georg and Fred H. Hamker and Markus Lappe},
doi = {10.1167/8.1.15},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {1},
pages = {1--11},
abstract = {Spatial localization of flashed stimuli across saccades shows transient distortions of perceived position: Stimuli appear shifted in saccade direction and compressed towards the saccade target. The strength and spatial pattern of this mislocalization is influenced by contrast, duration, and spatial and temporal arrangement of stimuli and background. Because mislocalization of stimuli on a background depends on contrast, we asked whether mislocalization of stimuli in darkness depends on luminance. Since dark adaptation changes luminance thresholds, we compared mislocalization in dark-adapted and light-adapted states. Peri-saccadic mislocalization was measured with near-threshold stimuli and above-threshold stimuli in dark-adapted and light-adapted subjects. In both adaptation states, near-threshold stimuli gave much larger mislocalization than above-threshold stimuli. Furthermore, when the stimulus was presented near-threshold, the perceived positions of the stimuli clustered closer together. Stimulus luminance that produced strong mislocalization in the light-adapted state produced very little mislocalization in the dark-adapted state because it was now well above threshold. We conclude that the strength of peri-saccadic mislocalization depends on the strength of the stimulus: stimuli with near-threshold luminance, and hence low visibility, are more mis-localized than clearly visible stimuli with high luminance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Thomas Geyer; Hermann J. Müller; Joseph Krummenacher

Expectancies modulate attentional capture by salient color singletons Journal Article

In: Vision Research, vol. 48, no. 11, pp. 1315–1326, 2008.


@article{Geyer2008,
title = {Expectancies modulate attentional capture by salient color singletons},
author = {Thomas Geyer and Hermann J. Müller and Joseph Krummenacher},
doi = {10.1016/j.visres.2008.02.006},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {11},
pages = {1315--1326},
abstract = {In singleton feature search for a form-defined target, the presentation of a task-irrelevant, but salient singleton color distractor is known to interfere with target detection [Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50, 184-193; Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599-606]. The present study was designed to re-examine this effect, by presenting observers with a singleton form target (on each trial) that could be accompanied by a (salient) singleton color distractor, with the proportion of distractor to no-distractor trials systematically varying across blocks of trials. In addition to RTs, eye movements were recorded in order to examine the mechanisms underlying the distractor interference effect. The results showed that singleton distractors did interfere with target detection only when they were presented on a relatively small (but not on a large) proportion of trials. Overall, the findings suggest that cross-dimensional interference is a covert attention effect, arising from the competition of the target with the distractor for attentional selection [Kumada, T., & Humphreys, G. W. (2002). Cross-dimensional interference and cross-trial inhibition. Perception & Psychophysics, 64, 493-503], with the strength of the competition being modulated by observers' (top-down) incentive to suppress the distractor dimension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Richard Godijn; Arthur F. Kramer

Oculomotor capture by surprising onsets Journal Article

In: Visual Cognition, vol. 16, no. 2-3, pp. 279–289, 2008.


@article{Godijn2008b,
title = {Oculomotor capture by surprising onsets},
author = {Richard Godijn and Arthur F. Kramer},
doi = {10.1080/13506280701437295},
year = {2008},
date = {2008-01-01},
journal = {Visual Cognition},
volume = {16},
number = {2-3},
pages = {279--289},
abstract = {The present study examined the effect of surprising onsets on oculomotor behaviour. Participants were required to execute a saccadic eye movement to a colour singleton target. After a series of trials an unexpected onset distractor was abruptly presented on the surprise trial. The presentation of the onset was repeated on subsequent trials. The results showed that the onset captured the eyes for 28% of the participants on the surprise trial, but this percentage decreased after repeated exposure to the onset. Furthermore, saccade latencies to the target were increased when a surprising onset was presented. After repeated exposure to the onset, latencies to the target decreased to the preonset level. The results suggest that when the onset is not part of participants' task set it has a strong effect on oculomotor behaviour. Once the task set has been updated and the onset no longer comes as a surprise its effect on oculomotor behaviour is dramatically reduced.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Adele Diederich; Hans Colonius

Crossmodal interaction in saccadic reaction time: Separating multisensory from warning effects in the time window of integration model Journal Article

In: Experimental Brain Research, vol. 186, no. 1, pp. 1–22, 2008.

Abstract | Links | BibTeX

@article{Diederich2008,
title = {Crossmodal interaction in saccadic reaction time: Separating multisensory from warning effects in the time window of integration model},
author = {Adele Diederich and Hans Colonius},
doi = {10.1007/s00221-007-1197-4},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {186},
number = {1},
pages = {1--22},
abstract = {In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) non-target presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -500 (non-target prior to target) to 0 ms, but the effect was larger for ipsi- than for contralateral presentation within an SOA range from -200 ms to 0. The time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000, 2004) is extended here to separate the effect of a spatially unspecific warning effect of the non-target from a spatially specific and genuine multisensory integration effect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Adele Diederich; Hans Colonius

When a high-intensity "distractor" is better then a low-intensity one: Modeling the effect of an auditory or tactile nontarget stimulus on visual saccadic reaction time Journal Article

In: Brain Research, vol. 1242, pp. 219–230, 2008.

Abstract | Links | BibTeX

@article{Diederich2008a,
title = {When a high-intensity "distractor" is better then a low-intensity one: Modeling the effect of an auditory or tactile nontarget stimulus on visual saccadic reaction time},
author = {Adele Diederich and Hans Colonius},
doi = {10.1016/j.brainres.2008.05.081},
year = {2008},
date = {2008-01-01},
journal = {Brain Research},
volume = {1242},
pages = {219--230},
publisher = {Elsevier B.V.},
abstract = {In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) nontarget presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -250 ms (nontarget prior to target) to 50 ms. This study specifically addressed the effect of varying nontarget intensity. While facilitation effects for auditory nontargets are somewhat more pronounced than for tactile ones, decreasing intensity slightly reduced facilitation for both types of nontargets. The time course of crossmodal mean SRT over SOA and the pattern of facilitation observed here suggest the existence of two distinct underlying mechanisms: (a) a spatially unspecific crossmodal warning triggered by the nontarget being detected early enough before the arrival of the target plus (b) a spatially specific multisensory integration mechanism triggered by the target processing time terminating within the time window of integration. It is shown that the time window of integration (TWIN) model introduced by the authors gives a reasonable quantitative account of the data relating observed SRT to the unobservable probability of integration and crossmodal warning for each SOA value under a high and low intensity level of the nontarget.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Adele Diederich; Hans Colonius; Annette Schomburg

Assessing age-related multisensory enhancement with the time-window-of-integration model Journal Article

In: Neuropsychologia, vol. 46, no. 10, pp. 2556–2562, 2008.

Abstract | Links | BibTeX

@article{Diederich2008b,
title = {Assessing age-related multisensory enhancement with the time-window-of-integration model},
author = {Adele Diederich and Hans Colonius and Annette Schomburg},
doi = {10.1016/j.neuropsychologia.2008.03.026},
year = {2008},
date = {2008-01-01},
journal = {Neuropsychologia},
volume = {46},
number = {10},
pages = {2556--2562},
abstract = {Although from multisensory research a great deal is known about how the different senses interact, there is little knowledge as to the impact of aging on these multisensory processes. In this study, we measured saccadic reaction time (SRT) of aged and young individuals to the onset of a visual target stimulus with and without an accessory auditory stimulus occurring (focused attention task). The response time pattern for both groups was similar: mean SRT to bimodal stimuli was generally shorter than to unimodal stimuli, and mean bimodal SRT was shorter when the auditory accessory was presented ipsilaterally rather than contralaterally to the target. The elderly participants were considerably slower than the younger participants under all conditions but showed a greater multisensory enhancement, that is, they seem to benefit more from bimodal stimulus presentation. In an attempt to weigh the contributions of peripheral sensory processes relative to more central cognitive processes possibly responsible for the difference in the younger and older adults, the time-window-of-integration (TWIN) model for crossmodal interaction in saccadic eye movements developed by the authors was fitted to the data from both groups. The model parameters suggest that (i) there is a slowing of the peripheral sensory processing in the elderly, (ii) as a result of this slowing, the probability of integration is smaller in the elderly even with a wider time-window-of-integration, and (iii) multisensory integration, if it occurs, manifests itself in larger neural enhancement in the elderly; however, because of (ii), on average the integration effect is not large enough to compensate for the peripheral slowing in the elderly.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Gregory J. Digirolamo; Jason S. McCarley; Arthur F. Kramer; Harry J. Griffin

Voluntary and reflexive eye movements to illusory lengths Journal Article

In: Visual Cognition, vol. 16, no. 1, pp. 68–89, 2008.

Abstract | Links | BibTeX

@article{Digirolamo2008,
title = {Voluntary and reflexive eye movements to illusory lengths},
author = {Gregory J. Digirolamo and Jason S. McCarley and Arthur F. Kramer and Harry J. Griffin},
doi = {10.1080/13506280701339160},
year = {2008},
date = {2008-01-01},
journal = {Visual Cognition},
volume = {16},
number = {1},
pages = {68--89},
abstract = {Considerable debate surrounds the extent and manner that motor control is, like perception, susceptible to visual illusions. Using the Brentano version of the Müller-Lyer illusion, we measured the accuracy of voluntary and reflexive eye movements to the endpoints of equal length line segments that appeared different (Experiment 1) and different length line segments that appeared equal (Experiment 3). Voluntary and reflexive saccades were both influenced by the illusion, but the former were more strongly biased and closer to the subjective percept. Experiment 2 demonstrated that these data were the results of the illusion and not centre-of-gravity effects. The representations underlying perception and action interact and this interaction produces biases for actions, particularly voluntary actions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mieke Donk; Wieske van Zoest

Effects of salience are short-lived Journal Article

In: Psychological Science, vol. 19, no. 7, pp. 733–739, 2008.

Abstract | Links | BibTeX

@article{Donk2008,
title = {Effects of salience are short-lived},
author = {Mieke Donk and Wieske van Zoest},
doi = {10.1111/j.1467-9280.2008.02149.x},
year = {2008},
date = {2008-01-01},
journal = {Psychological Science},
volume = {19},
number = {7},
pages = {733--739},
abstract = {A salient event in the visual field tends to attract attention and the eyes. To account for the effects of salience on visual selection, models generally assume that the human visual system continuously holds information concerning the relative salience of objects in the visual field. Here we show that salience in fact drives vision only during the short time interval immediately following the onset of a visual scene. In a saccadic target-selection task, human performance in making an eye movement to the most salient element in a display was accurate when response latencies were short, but was at chance when response latencies were long. In a manual discrimination task, performance in making a judgment of salience was more accurate with brief than with long display durations. These results suggest that salience is represented in the visual system only briefly after a visual image enters the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Denis Drieghe

Foveal processing and word skipping during reading Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 856–860, 2008.

Abstract | Links | BibTeX

@article{Drieghe2008,
title = {Foveal processing and word skipping during reading},
author = {Denis Drieghe},
doi = {10.3758/PBR.15.4.856},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {4},
pages = {856--860},
abstract = {An eyetracking experiment is reported examining the assumption that a word is skipped during sentence reading because parafoveal processing during preceding fixations has reached an advanced level in recognizing that word. Word n was presented with reduced contrast, with case alternation, or normally. Reingold and Rayner (2006) reported that, in comparison to the normal condition, reduced contrast increased viewing times on word n but not on word n+1, whereas case alternation increased viewing times on both words. These patterns were reflected in the fixation times of the present experiment, but a striking dissociation was observed in the skipping of word n+1: The reduced contrast of word n decreased skipping of word n+1, whereas case alternation did not. Apart from the amount of parafoveal processing, the decision to skip word n+1 is also influenced by the ease of processing word n: Difficulties in processing word n lead to a more conservative strategy in the decision to skip word n+1.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jacob Duijnhouwer; Richard J. A. van Wezel; Albert V. van den Berg

The role of motion capture in an illusory transformation of optic flow fields Journal Article

In: Journal of Vision, vol. 8, no. 4, pp. 1–18, 2008.

Abstract | Links | BibTeX

@article{Duijnhouwer2008,
title = {The role of motion capture in an illusory transformation of optic flow fields},
author = {Jacob Duijnhouwer and Richard J. A. van Wezel and Albert V. van den Berg},
doi = {10.1167/8.4.27},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {4},
pages = {1--18},
abstract = {In the optic flow illusion, the focus of an expanding optic flow field appears shifted when uniform flow is transparently superimposed. The shift is in the direction of the uniform flow, or "inducer." Current explanations relate the transformation of the expanding optic flow field to perceptual subtraction of the inducer signal. Alternatively, the shift might result from motion capture acting on the perceived focus position. To test this alternative, we replaced expanding target flow with contracting or rotating flow. Current explanations predict focus shifts in expanding and contracting flows that are opposite but of equal magnitude and parallel to the inducer. In rotary flow, the current explanations predict shifts that are perpendicular to the inducer. In contrast, we report larger shift for expansion than for contraction and a component of shift parallel to the inducer for rotary flow. The magnitude of this novel component of shift depended on the target flow speed, the inducer flow speed, and the presentation duration. These results support the idea that motion capture contributes substantially to the optic flow illusion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Kristie R. Dukewich; Raymond M. Klein; John Christie

The effect of gaze on gaze direction while looking at art Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1141–1147, 2008.

Abstract | Links | BibTeX

@article{Dukewich2008,
title = {The effect of gaze on gaze direction while looking at art},
author = {Kristie R. Dukewich and Raymond M. Klein and John Christie},
doi = {10.3758/PBR.15.6.1141},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {6},
pages = {1141--1147},
abstract = {In highly controlled cuing experiments, conspecific gaze direction has powerful effects on an observer's attention. We explored the generality of this effect by using paintings in which the gaze direction of a key character had been carefully manipulated. Our observers looked at these paintings in one of three instructional states (neutral, social, or spatial) while we monitored their eye movements. Overt orienting was much less influenced by the critical gaze direction than what the cuing literature might suggest: An analysis of the direction of saccades following the first fixation of the critical gaze showed that observers were weakly biased to orient in the direction of the gaze. Over longer periods of viewing, however, this effect disappeared for all but the social condition. This restriction of gaze as an attentional cue to a social context is consistent with the idea that the evolution of gaze direction detection is rooted in social communication. The picture stimuli from this experiment can be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


