Fast, Accurate, Reliable Eye Tracking



EyeLink Eye-Tracking Publications Library

All EyeLink Publications

As of 2022, all 11,000+ peer-reviewed EyeLink research publications (plus some from early 2023) are listed below by year. You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking research grouped by field can be found on the Solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!

11118 entries (page 100 of 112)

2009

Yoni Pertzov; Ehud Zohary; Galia Avidan

Implicitly perceived objects attract gaze during later free viewing Journal Article

In: Journal of Vision, vol. 9, no. 6, pp. 1–12, 2009.


@article{Pertzov2009a,
title = {Implicitly perceived objects attract gaze during later free viewing},
author = {Yoni Pertzov and Ehud Zohary and Galia Avidan},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {6},
pages = {1--12},
abstract = {Everyday life frequently requires searching for objects in the visual scene. Visual search is typically accompanied by a series of eye movements. In an effort to explain subjects' scanning patterns, models of visual search propose that a template of the target is used, to guide gaze (and attention) to locations which exhibit "suspicious" similarity to this template. We show here that the scanning patterns are also clearly influenced by implicit (unrecognized) cues: A backward masked object, presented before the scene display, automatically attracts gaze to its corresponding location in the following inspected image. Interestingly, subliminally observed words describing objects do not have the same effect. This demonstrates that visual search can be unconsciously guided by activated target representations at the perceptual level, but it is much less affected by implicit information at the semantic level. Implications on search models are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jeffrey M. Peterson; Paul Dassonville

Differential latencies sculpt the time course of contextual effects on spatial perception Journal Article

In: Journal of Cognitive Neuroscience, vol. 34, no. 11, pp. 2168–2188, 2009.


@article{Peterson2009,
title = {Differential latencies sculpt the time course of contextual effects on spatial perception},
author = {Jeffrey M. Peterson and Paul Dassonville},
year = {2009},
date = {2009-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {34},
number = {11},
pages = {2168--2188},
abstract = {The ability to judge an object's orientation with respect to gravitational vertical relies on an egocentric reference frame that is maintained using not only vestibular cues but also contextual cues provided in the visual scene. Although much is known about how static contextual cues are incorporated into the egocentric reference frame, it is also important to understand how changes in these cues affect perception, since we move about in a world that is itself dynamic. To explore these temporal factors, we used a variant of the rod-and-frame illusion, in which participants indicated the perceived orientation of a briefly flashed rod (5-msec duration) presented before or after the onset of a tilted frame. The frame was found to bias the perceived orientation of rods presented as much as 185 msec before frame onset. To explain this postdictive effect, we propose a differential latency model, where the latency of the orientation judgment is greater than the latency of the contextual cues' initial impact on the egocentric reference frame. In a subsequent test of this model, we decreased the luminance of the rod, which is known to increase visual afferent delays and slow decision processes. This further slowing of the orientation judgment caused the frame-induced bias to affect the perceived orientation of rods presented even further in advance of the frame. These findings indicate that the brain fails to compensate for a mismatch between the timing of orientation judgments and the incorporation of visual cues into the egocentric reference frame.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tobias Pflugshaupt; Klemens Gutbrod; Pascal Wurtz; Roman Von Wartburg; Thomas Nyffeler; Bianca De Haan; Hans-Otto Karnath; René M. Mueri

About the role of visual field defects in pure alexia Journal Article

In: Brain, vol. 132, no. 7, pp. 1907–1917, 2009.


@article{Pflugshaupt2009,
title = {About the role of visual field defects in pure alexia},
author = {Tobias Pflugshaupt and Klemens Gutbrod and Pascal Wurtz and Roman Von Wartburg and Thomas Nyffeler and Bianca De Haan and Hans-Otto Karnath and René M. Mueri},
doi = {10.1093/brain/awp141},
year = {2009},
date = {2009-01-01},
journal = {Brain},
volume = {132},
number = {7},
pages = {1907--1917},
abstract = {Pure alexia is an acquired reading disorder characterized by a disproportionate prolongation of reading time as a function of word length. Although the vast majority of cases reported in the literature show a right-sided visual defect, little is known about the contribution of this low-level visual impairment to their reading difficulties. The present study was aimed at investigating this issue by comparing eye movement patterns during text reading in six patients with pure alexia with those of six patients with hemianopic dyslexia showing similar right-sided visual field defects. We found that the role of the field defect in the reading difficulties of pure alexics was highly deficit-specific. While the amplitude of rightward saccades during text reading seems largely determined by the restricted visual field, other visuo-motor impairments—particularly the pronounced increases in fixation frequency and viewing time as a function of word length—may have little to do with their visual field defect. In addition, subtracting the lesions of the hemianopic dyslexics from those found in pure alexics revealed the largest group differences in posterior parts of the left fusiform gyrus, occipito-temporal sulcus and inferior temporal gyrus. These regions included the coordinate assigned to the centre of the visual word form area in healthy adults, which provides further evidence for a relation between pure alexia and a damaged visual word form area. Finally, we propose a list of three criteria that may improve the differential diagnosis of pure alexia and allow appropriate therapy recommendations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tobias Pflugshaupt; Roman Wartburg; Pascal Wurtz; Silvia Chaves; Anouk Déruaz; Thomas Nyffeler; Sebastian Arx; Mathias Luethi; Dario Cazzoli; René M. Mueri

Linking physiology with behaviour: Functional specialisation of the visual field is reflected in gaze patterns during visual search Journal Article

In: Vision Research, vol. 49, no. 2, pp. 237–248, 2009.


@article{Pflugshaupt2009a,
title = {Linking physiology with behaviour: Functional specialisation of the visual field is reflected in gaze patterns during visual search},
author = {Tobias Pflugshaupt and Roman Wartburg and Pascal Wurtz and Silvia Chaves and Anouk Déruaz and Thomas Nyffeler and Sebastian Arx and Mathias Luethi and Dario Cazzoli and René M. Mueri},
doi = {10.1016/j.visres.2008.10.021},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {2},
pages = {237--248},
publisher = {Elsevier Ltd},
abstract = {Based on neurophysiological findings and a grid to score binocular visual field function, two hypotheses concerning the spatial distribution of fixations during visual search were tested and confirmed in healthy participants and patients with homonymous visual field defects. Both groups showed significant biases of fixations and viewing time towards the centre of the screen and the upper screen half. Patients displayed a third bias towards the side of their field defect, which represents oculomotor compensation. Moreover, significant correlations between the extent of these three biases and search performance were found. Our findings suggest a new, more dynamic view of how functional specialisation of the visual field influences behaviour.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Elmar H. Pinkhardt; Jan Kassubek; Sigurd Süssmuth; Albert C. Ludolph; Wolfgang Becker; Reinhart Jürgens

Comparison of smooth pursuit eye movement deficits in multiple system atrophy and Parkinson's disease Journal Article

In: Journal of Neurology, vol. 256, no. 9, pp. 1438–1446, 2009.


@article{Pinkhardt2009,
title = {Comparison of smooth pursuit eye movement deficits in multiple system atrophy and Parkinson's disease},
author = {Elmar H. Pinkhardt and Jan Kassubek and Sigurd Süssmuth and Albert C. Ludolph and Wolfgang Becker and Reinhart Jürgens},
doi = {10.1007/s00415-009-5131-5},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurology},
volume = {256},
number = {9},
pages = {1438--1446},
abstract = {Because of the large overlap and quantitative similarity of eye movement alterations in Parkinson's disease (PD) and multiple system atrophy (MSA), a measurement of eye movement is generally not considered helpful for the differential diagnosis. However, in view of the pathophysiological differences between MSA and PD as well as between the cerebellar (MSA-C) and Parkinsonian (MSA-P) subtypes of MSA, we wondered whether a detailed investigation of oculomotor performance would unravel parameters that could help to differentiate between these entities. We recorded eye movements during sinusoidal pursuit tracking by means of video-oculography in 11 cases of MSA-P, 8 cases of MSA-C and 27 cases of PD and compared them to 23 healthy controls (CTL). The gain of the smooth pursuit eye movement (SPEM) component exhibited significant group differences between each of the three subject groups (MSA, PD, controls) but not between MSA-P and MSA-C. The similarity of pursuit impairment in MSA-P and in MSA-C suggests a commencement of cerebellar pathology in MSA-P despite the lack of clinical signs. Otherwise, SPEM gain was of little use for differential diagnosis between MSA and PD because of wide overlap. However, inspection of the saccadic component of pursuit tracking revealed that in MSA saccades typically correct for position errors accumulated during SPEM epochs ("catch-up saccades"), whereas in PD, saccades were often directed toward future target positions ("anticipatory saccades"). The differences in pursuit tracking between PD and MSA were large enough to warrant their use as ancillary diagnostic criteria for the distinction between these disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ming Qian; Mario Aguilar; Karen N. Zachery; Claudio M. Privitera; Stanley A. Klein; Thom Carney; Loren W. Nolte

Decision-level fusion of EEG and pupil features for single-trial visual detection analysis Journal Article

In: IEEE Transactions on Biomedical Engineering, vol. 56, no. 7, pp. 1929–1937, 2009.


@article{Qian2009,
title = {Decision-level fusion of EEG and pupil features for single-trial visual detection analysis},
author = {Ming Qian and Mario Aguilar and Karen N. Zachery and Claudio M. Privitera and Stanley A. Klein and Thom Carney and Loren W. Nolte},
doi = {10.1109/TBME.2009.2016670},
year = {2009},
date = {2009-01-01},
journal = {IEEE Transactions on Biomedical Engineering},
volume = {56},
number = {7},
pages = {1929--1937},
abstract = {Several recent studies have reported success in applying EEG-based signal analysis to achieve accurate single-trial classification of responses to visual target detection. Pupil responses are proposed as a complementary modality that can support improved accuracy of single-trial signal analysis. We develop a pupillary response feature-extraction and -selection procedure that helps to improve the classification performance of a system based only on EEG signal analysis. We apply a two-level linear classifier to obtain cognitive-task-related analysis of EEG and pupil responses. The classification results based on the two modalities are then fused at the decision level. Here, the goal is to support increased classification confidence through the inherent modality complementarities. The fusion results show significant improvement over classification performance based on a single modality.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Alper Açik; Selim Onat; Frank Schumann; Wolfgang Einhäuser; Peter König

Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories Journal Article

In: Vision Research, vol. 49, no. 12, pp. 1541–1553, 2009.


@article{Acik2009,
title = {Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories},
author = {Alper Açik and Selim Onat and Frank Schumann and Wolfgang Einhäuser and Peter König},
doi = {10.1016/j.visres.2009.03.011},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {12},
pages = {1541--1553},
publisher = {Elsevier Ltd},
abstract = {During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied the image category dependence of low-level feature modification effects. Subjects fixated contrast-modified regions often in natural scene images, while smaller but significant effects were observed for urban scenes and faces. Surprisingly, modifications in fractal images did not influence fixations. Further analysis revealed an inverse relationship between modification effects and higher-level, phase-dependent image features. We suggest that high- and mid-level features - such as edges, symmetries, and recursive patterns - guide attention if present. However, if the scene lacks such diagnostic properties, low-level features prevail. We posit a hierarchical framework, which combines aspects of bottom-up and top-down theories and is compatible with our data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Arash Afraz; Patrick Cavanagh

The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations Journal Article

In: Journal of Vision, vol. 9, no. 10, pp. 1–17, 2009.


@article{Afraz2009,
title = {The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations},
author = {Arash Afraz and Patrick Cavanagh},
doi = {10.1167/9.10.10},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {10},
pages = {1--17},
abstract = {In four experiments, we measured the gender-specific face-aftereffect following subject's eye movement, head rotation, or head movement toward the display and following movement of the adapting stimulus itself to a new test location. In all experiments, the face aftereffect was strongest at the retinal position, orientation, and size of the adaptor. There was no advantage for the spatiotopic location in any experiment nor was there an advantage for the location newly occupied by the adapting face after it moved in the final experiment. Nevertheless, the aftereffect showed a broad gradient of transfer across location, orientation and size that, although centered on the retinotopic values of the adapting stimulus, covered ranges far exceeding the tuning bandwidths of neurons in early visual cortices. These results are consistent with a high-level site of adaptation (e.g. FFA) where units of face analysis have modest coverage of visual field, centered in retinotopic coordinates, but relatively broad tolerance for variations in size and orientation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ozgur E. Akman; Richard A. Clement; David S. Broomhead; Sabira K. Mannan; Ian Moorhead; Hugh R. Wilson

Probing bottom-up processing with multistable images Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 3, pp. 1–7, 2009.


@article{Akman2009,
title = {Probing bottom-up processing with multistable images},
author = {Ozgur E. Akman and Richard A. Clement and David S. Broomhead and Sabira K. Mannan and Ian Moorhead and Hugh R. Wilson},
year = {2009},
date = {2009-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {3},
pages = {1--7},
abstract = {The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Paul M. Bays; R. F. G. Catalao; Masud Husain

The precision of visual working memory is set by allocation of a shared resource Journal Article

In: Journal of Vision, vol. 9, no. 10, pp. 7–7, 2009.


@article{Bays2009,
title = {The precision of visual working memory is set by allocation of a shared resource},
author = {Paul M. Bays and R. F. G. Catalao and Masud Husain},
doi = {10.1167/9.10.7},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {10},
pages = {7--7},
abstract = {The mechanisms underlying visual working memory have recently become controversial. One account proposes a small number of memory "slots," each capable of storing a single visual object with fixed precision. A contrary view holds that working memory is a shared resource, with no upper limit on the number of items stored; instead, the more items that are held in memory, the less precisely each can be recalled. Recent findings from a color report task have been taken as crucial new evidence in favor of the slot model. However, while this task has previously been thought of as a simple test of memory for color, here we show that performance also critically depends on memory for location. When errors in memory are considered for both color and location, performance on this task is in fact well explained by the resource model. These results demonstrate that visual working memory consists of a common resource distributed dynamically across the visual scene, with no need to invoke an upper limit on the number of objects represented.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Mark W. Becker; Brian Detweiler-Bedell

Early detection and avoidance of threatening faces during passive viewing Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 62, no. 7, pp. 1257–1264, 2009.


@article{Becker2009,
title = {Early detection and avoidance of threatening faces during passive viewing},
author = {Mark W. Becker and Brian Detweiler-Bedell},
doi = {10.1080/17470210902725753},
year = {2009},
date = {2009-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {62},
number = {7},
pages = {1257--1264},
abstract = {To evaluate whether there is an early attentional bias towards negative stimuli, we tracked participants' eyes while they passively viewed displays composed of four Ekman faces. In Experiment 1 each display consisted of three neutral faces and one face depicting fear or happiness. In half of the trials, all faces were inverted. Although the passive viewing task should have been very sensitive to attentional biases, we found no evidence that overt attention was biased towards fearful faces. Instead, people tended to actively avoid looking at the fearful face. This avoidance was evident very early in scene viewing, suggesting that the threat associated with the faces was evaluated rapidly. Experiment 2 replicated this effect and extended it to angry faces. In sum, our data suggest that negative facial expressions are rapidly analysed and influence visual scanning, but, rather than attract attention, such faces are actively avoided.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stefanie I. Becker; Ulrich Ansorge; Massimo Turatto

Saccades reveal that allocentric coding of the moving object causes mislocalization in the flash-lag effect Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 6, pp. 1313–1324, 2009.

@article{Becker2009a,
title = {Saccades reveal that allocentric coding of the moving object causes mislocalization in the flash-lag effect},
author = {Stefanie I. Becker and Ulrich Ansorge and Massimo Turatto},
doi = {10.3758/APP.71.6.1313},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {6},
pages = {1313--1324},
abstract = {The flash-lag effect is a visual misperception of a position of a flash relative to that of a moving object: Even when both are at the same position, the flash is reported to lag behind the moving object. In the present study, the flash-lag effect was investigated with eye-movement measurements: Subjects were required to saccade to either the flash or the moving object. The results showed that saccades to the flash were precise, whereas saccades to the moving object showed an offset in the direction of motion. A further experiment revealed that this offset in the saccades to the moving object was eliminated when the whole background flashed. This result indicates that saccadic offsets to the moving stimulus critically depend on the spatially distinctive flash in the vicinity of the moving object. The results are incompatible with current theoretical explanations of the flash-lag effect, such as the motion extrapolation account. We propose that allocentric coding of the position of the moving object could account for the flash-lag effect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Artem V. Belopolsky; Jan Theeuwes

When are attention and saccade preparation dissociated? Journal Article

In: Psychological Science, vol. 20, no. 11, pp. 1340–1347, 2009.

@article{Belopolsky2009,
title = {When are attention and saccade preparation dissociated?},
author = {Artem V. Belopolsky and Jan Theeuwes},
year = {2009},
date = {2009-01-01},
journal = {Psychological Science},
volume = {20},
number = {11},
pages = {1340--1347},
abstract = {To understand the mechanisms of visual attention, it is crucial to know the relationship between attention and saccades. Some theories propose a close relationship, whereas others view the attention and saccade systems as completely independent. One possible way to resolve this controversy is to distinguish between the maintenance and shifting of attention. The present study used a novel paradigm that allowed simultaneous measurement of attentional allocation and saccade preparation. Saccades toward the location where attention was maintained were either facilitated or suppressed depending on the probability of making a saccade to that location and the match between the attended location and the saccade location on the previous trial. Shifting attention to another location was always associated with saccade facilitation. The findings provide a new view, demonstrating that the maintenance of attention and shifting of attention differ in their relationship to the oculomotor system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jeroen S. Benjamins; Ignace T. C. Hooge; Jacco C. Elst; Alexander H. Wertheim; Frans A. J. Verstraten

Search time critically depends on irrelevant subset size in visual search Journal Article

In: Vision Research, vol. 49, pp. 398–406, 2009.

@article{Benjamins2009,
title = {Search time critically depends on irrelevant subset size in visual search},
author = {Jeroen S. Benjamins and Ignace T. C. Hooge and Jacco C. Elst and Alexander H. Wertheim and Frans A. J. Verstraten},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
pages = {398--406},
abstract = {In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items can be selected that differs in only one feature from target (a 1F set), while another set of items can be ignored that differs in two features from target (a 2F set). We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage 2F non-targets, that have to be ignored, was expected to result in increasingly faster search, since it decreases the size of 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, the search time was longer compared to the search time in other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. Occurrence of longer search times in displays containing 5% 2F non-targets might be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

R. Bibi; Jay A. Edelman

The influence of motor training on human express saccade production Journal Article

In: Journal of Neurophysiology, vol. 102, no. 6, pp. 3101–3110, 2009.

@article{Bibi2009,
title = {The influence of motor training on human express saccade production},
author = {R. Bibi and Jay A. Edelman},
doi = {10.1152/jn.90710.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {6},
pages = {3101--3110},
abstract = {Express saccadic eye movements are saccades of extremely short latency. In monkey, express saccades have been shown to occur much more frequently when the monkey has been trained to make saccades in a particular direction to targets that appear in predictable locations. Such results suggest that express saccades occur in large number only under highly specific conditions, leading to the view that vector-specific training and motor preparatory processes are required to make an express saccade of a particular magnitude and direction. To evaluate this hypothesis in humans, we trained subjects to make saccades quickly to particular locations and then examined whether the frequency of express saccades depended on training and the number of possible target locations. Training significantly decreased saccade latency and increased express saccade production to both trained and untrained locations. Increasing the number of possible target locations (two vs. eight possible targets) led to only a modest increase of saccade latency. For most subjects, the probability of express saccade occurrence was much higher than that expected if vector-specific movement preparation were necessary for their production. These results suggest that vector-specific motor preparation and vector-specific saccade training are not necessary for express saccade production in humans and that increases in express saccade production are due in part to a facilitation in fixation disengagement or else a general enhancement in the ability of the saccadic system to respond to suddenly appearing visual stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Markus Bindemann; Christoph Scheepers; A. Mike Burton

Viewpoint and center of gravity affect eye movements to human faces Journal Article

In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009.

@article{Bindemann2009,
title = {Viewpoint and center of gravity affect eye movements to human faces},
author = {Markus Bindemann and Christoph Scheepers and A. Mike Burton},
doi = {10.1167/9.2.7},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {2},
pages = {1--16},
abstract = {In everyday life, human faces are encountered in many different views. Despite this fact, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed, by measuring eye movements to frontal, mid-profile and profile faces during a gender categorization (Experiment 1) and a free-viewing task (Experiment 2). In both experiments observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across different face views in the time period immediately after stimulus onset. Subsequent eye fixations focused increasingly on specific facial features. At this stage, the eye regions were targeted predominantly in all face views, and to a lesser extent also the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties, before eye movements are redirected to the specific facial features in which observers take an interest. These findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elina Birmingham; Walter F. Bischof; Alan Kingstone

Get real! Resolving the debate about equivalent social stimuli Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 904–924, 2009.

@article{Birmingham2009,
title = {Get real! Resolving the debate about equivalent social stimuli},
author = {Elina Birmingham and Walter F. Bischof and Alan Kingstone},
doi = {10.1080/13506280902758044},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {904--924},
abstract = {Gaze and arrow studies of spatial orienting have shown that eyes and arrows produce nearly identical effects on shifts of spatial attention. This has led some researchers to suggest that the human attention system considers eyes and arrows as equivalent social stimuli. However, this view does not fit with the general intuition that eyes are unique social stimuli nor does it agree with a large body of work indicating that humans possess a neural system that is preferentially biased to process information regarding human gaze. To shed light on this discrepancy we entertained the idea that the model cueing task may fail to measure some of the ways that eyes are special. Thus rather than measuring the orienting of attention to a location cued by eyes and arrows, we measured the selection of eyes and arrows embedded in complex real-world scenes. The results were unequivocal: People prefer to look at other people and their eyes; they rarely attend to arrows. This outcome was not predicted by visual saliency but it was predicted by the idea that eyes are social stimuli that are prioritized by the attention system. These data, and the paradigm from which they were derived, shed new light on past cueing studies of social attention, and they suggest a new direction for future investigations of social attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elina Birmingham; Walter F. Bischof; Alan Kingstone

Saliency does not account for fixations to eyes within social scenes Journal Article

In: Vision Research, vol. 49, pp. 2992–3000, 2009.

@article{Birmingham2009a,
title = {Saliency does not account for fixations to eyes within social scenes},
author = {Elina Birmingham and Walter F. Bischof and Alan Kingstone},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
pages = {2992--3000},
abstract = {We assessed the role of saliency in driving observers to fixate the eyes in social scenes. Saliency maps (Itti & Koch, 2000) were computed for the scenes from three previous studies. Saliency provided a poor account of the data. The saliency values for the first-fixated locations were extremely low and no greater than what would be expected by chance. In addition, the saliency values for the eye regions were low. Furthermore, whereas saliency was no better at predicting early saccades than late saccades, the average latency to fixate social areas of the scene (e.g., the eyes) was very fast (within 200 ms). Thus, visual saliency does not account for observers' bias to select the eyes within complex social scenes, nor does it account for fixation behavior in general. Instead, it appears that observers' fixations are driven largely by their default interest in social information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christoph Bledowski; Benjamin Rahm; James B. Rowe

What "works" in working memory? Separate systems for selection and updating of critical information Journal Article

In: Journal of Neuroscience, vol. 29, no. 43, pp. 13735–13741, 2009.

@article{Bledowski2009,
title = {What "works" in working memory? Separate systems for selection and updating of critical information},
author = {Christoph Bledowski and Benjamin Rahm and James B. Rowe},
doi = {10.1523/JNEUROSCI.2547-09.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {43},
pages = {13735--13741},
abstract = {Cognition depends critically on working memory, the active representation of a limited number of items over short periods of time. In addition to the maintenance of information during the course of cognitive processing, many tasks require that some of the items in working memory become transiently more important than others. Based on cognitive models of working memory, we hypothesized two complementary essential cognitive operations to achieve this: a selection operation that retrieves the most relevant item, and an updating operation that changes the focus of attention onto it. Using functional magnetic resonance imaging, high-resolution oculometry, and behavioral analysis, we demonstrate that these two operations are functionally and neuroanatomically dissociated. Updating the attentional focus elicited transient activation in the caudal superior frontal sulcus and posterior parietal cortex. In contrast, increasing demands on selection selectively modulated activation in rostral superior frontal sulcus and posterior cingulate/precuneus. We conclude that prioritizing one memory item over others invokes independent mechanisms of mnemonic retrieval and attentional focusing, each with its distinct neuroanatomical basis within frontal and parietal regions. These support the developing understanding of working memory as emerging from the interaction between memory and attentional systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tanya Blekher; Marjorie R. Weaver; Xueya Cai; Siu L. Hui; Jeanine Marshall; Jacqueline Gray Jackson; Joanne Wojcieszek; Robert D. Yee; Tatiana M. Foroud

Test-retest reliability of saccadic measures in subjects at risk for Huntington disease Journal Article

In: Investigative Ophthalmology & Visual Science, vol. 50, no. 12, pp. 5707–5711, 2009.

@article{Blekher2009,
title = {Test-retest reliability of saccadic measures in subjects at risk for Huntington disease},
author = {Tanya Blekher and Marjorie R. Weaver and Xueya Cai and Siu L. Hui and Jeanine Marshall and Jacqueline Gray Jackson and Joanne Wojcieszek and Robert D. Yee and Tatiana M. Foroud},
doi = {10.1167/iovs.09-3538},
year = {2009},
date = {2009-01-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {50},
number = {12},
pages = {5707--5711},
abstract = {PURPOSE Abnormalities in saccades appear to be sensitive and specific biomarkers in the prediagnostic stages of Huntington disease (HD). The goal of this study was to evaluate test-retest reliability of saccadic measures in prediagnostic carriers of the HD gene expansion (PDHD) and normal controls (NC). METHODS The study sample included 9 PDHD and 12 NC who completed two study visits within an approximate 1-month interval. At the first visit, all participants completed a uniform clinical evaluation. A high-resolution, video-based system was used to record eye movements during completion of a battery of visually guided, antisaccade, and memory-guided tasks. Latency, velocity, gain, and percentage of errors were quantified. Test-retest reliability was estimated by calculating the intraclass correlation (ICC) of the saccade measures collected at the first and second visits. In addition, an equality test based on Fisher's z-transformation was used to evaluate the effects of group (PDHD and NC) and the subject's sex on ICC. RESULTS The percentage of errors showed moderate to high reliability in the antisaccade and memory-guided tasks (ICC = 0.64-0.93). The latency of the saccades also demonstrated moderate to high reliability (ICC = 0.55-0.87) across all tasks. The velocity and gain of the saccades showed moderate reliability. The ICC was similar in the PDHD and NC groups. There was no significant effect of sex on the ICC. CONCLUSIONS Good reliability of saccadic latency and percentage of errors in both antisaccade and memory-guided tasks suggests that these measures could serve as biomarkers to evaluate progression in HD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tanya Blekher; Marjorie R. Weaver; Jeanine Marshall; Siu L. Hui; Jacqueline Gray Jackson; Julie C. Stout; Xabier Beristain; Joanne Wojcieszek; Robert D. Yee; Tatiana M. Foroud

Visual scanning and cognitive performance in prediagnostic and early-stage Huntington's disease Journal Article

In: Movement Disorders, vol. 24, no. 4, pp. 533–540, 2009.

@article{Blekher2009a,
title = {Visual scanning and cognitive performance in prediagnostic and early-stage Huntington's disease},
author = {Tanya Blekher and Marjorie R. Weaver and Jeanine Marshall and Siu L. Hui and Jacqueline Gray Jackson and Julie C. Stout and Xabier Beristain and Joanne Wojcieszek and Robert D. Yee and Tatiana M. Foroud},
doi = {10.1002/mds.22329},
year = {2009},
date = {2009-01-01},
journal = {Movement Disorders},
volume = {24},
number = {4},
pages = {533--540},
abstract = {The objective of this study was to evaluate visual scanning strategies in carriers of the Huntington disease (HD) gene expansion and to test whether there is an association between measures of visual scanning and cognitive performance. The study sample included control (NC},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tanya Blekher; Marjorie R. Weaver; Jason Rupp; William C. Nichols; Siu L. Hui; Jacqueline Gray; Robert D. Yee; Joanne Wojcieszek; Tatiana M. Foroud

Multiple step pattern as a biomarker in Parkinson disease Journal Article

In: Parkinsonism and Related Disorders, vol. 15, no. 7, pp. 506–510, 2009.

@article{Blekher2009b,
title = {Multiple step pattern as a biomarker in Parkinson disease},
author = {Tanya Blekher and Marjorie R. Weaver and Jason Rupp and William C. Nichols and Siu L. Hui and Jacqueline Gray and Robert D. Yee and Joanne Wojcieszek and Tatiana M. Foroud},
doi = {10.1016/j.parkreldis.2009.01.002},
year = {2009},
date = {2009-01-01},
journal = {Parkinsonism and Related Disorders},
volume = {15},
number = {7},
pages = {506--510},
publisher = {Elsevier Ltd},
abstract = {Objective: To evaluate quantitative measures of saccades as possible biomarkers in early stages of Parkinson disease (PD) and in a population at-risk for PD. Methods: The study sample (n = 68) included mildly to moderately affected PD patients, their unaffected siblings, and control individuals. All participants completed a clinical evaluation by a movement disorder neurologist. Genotyping of the G2019S mutation in the LRRK2 gene was performed in the PD patients and their unaffected siblings. A high resolution, video-based eye tracking system was employed to record eye positions during a battery of visually guided, anti-saccadic (AS), and two memory-guided (MG) tasks. Saccade measures (latency, velocity, gain, error rate, and multiple step pattern) were quantified. Results: PD patients and a subgroup of their unaffected siblings had an abnormally high incidence of multiple step patterns (MSP) and reduced gain of saccades as compared with controls. The abnormalities were most pronounced in the more challenging version of the MG task. For this task, the MSP measure demonstrated good sensitivity (87%) and excellent specificity (96%) in the ability to discriminate PD patients from controls. PD patients and their siblings also made more errors in the AS task. Conclusions: Abnormalities in eye movement measures appear to be sensitive and specific measures in PD patients as well as a subset of those at-risk for PD. The inclusion of quantitative laboratory testing of saccadic movements may increase the sensitivity of the neurological examination to identify individuals who are at greater risk for PD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jens Bölte; Andrea Böhl; Christian Dobel; Pienie Zwitserlood

Effects of referential ambiguity, time constraints and addressee orientation on the production of morphologically complex words Journal Article

In: European Journal of Cognitive Psychology, vol. 21, no. 8, pp. 1166–1199, 2009.

@article{Boelte2009,
title = {Effects of referential ambiguity, time constraints and addressee orientation on the production of morphologically complex words},
author = {Jens Bölte and Andrea Böhl and Christian Dobel and Pienie Zwitserlood},
doi = {10.1080/09541440902719025},
year = {2009},
date = {2009-01-01},
journal = {European Journal of Cognitive Psychology},
volume = {21},
number = {8},
pages = {1166--1199},
abstract = {In five experiments, participants were asked to describe unambiguously a target picture in a picture-picture paradigm. In the same-category condition, target (e. g., water bucket) and distractor picture (e. g., ice bucket) had identical names when their preferred, morphologically simple, name was used (e. g., bucket). The ensuing lexical ambiguity could be resolved by compound use (e. g., water bucket). Simple names sufficed as means of specification in other conditions, with distractors identical to the target, completely unrelated, or geometric figures. With standard timing parameters, participants produced mainly ambiguous answers in Experiment 1. An increase in available processing time hardly improved unambiguous responding (Experiment 2). A referential communication instruction (Experiment 3) increased the number of compound responses considerably, but morphologically simple answers still prevailed. Unambiguous responses outweighed ambiguous ones in Experiment 4, when timing parameters were further relaxed. Finally, the requirement to name both objects resulted in a nearly perfect ambiguity resolution (Experiment 5). Together, the results showed that speakers overcome lexical ambiguity only when time permits, when an addressee perspective is given and, most importantly, when their own speech overtly signals the ambiguity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Walter R. Boot; Ensar Becic; Arthur F. Kramer

Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search Journal Article

In: Journal of Vision, vol. 9, no. 3, pp. 1–16, 2009.

@article{Boot2009,
title = {Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search},
author = {Walter R. Boot and Ensar Becic and Arthur F. Kramer},
doi = {10.1167/9.3.7},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {3},
pages = {1--16},
abstract = {Previous studies have demonstrated large individual differences in scanning strategy during a dynamic visual search task (E. Becic, A. F. Kramer, & W. R. Boot, 2007; W. R. Boot, A. F. Kramer, E. Becic, D. A. Wiegmann, & T. Kubose, 2006). These differences accounted for substantial variance in performance. Participants who chose to search covertly (without eye movements) excelled, participants who searched overtly (with eye movements) performed poorly. The aim of the current study was to investigate the stability of scanning strategies across different visual search tasks in an attempt to explain why a large percentage of observers might engage in maladaptive strategies. Scanning strategy was assessed for a group of observers across a variety of search tasks without feedback (efficient search, inefficient search, change detection, dynamic search). While scanning strategy was partly determined by task demands, stable individual differences emerged. Participants who searched either overtly or covertly tended to adopt the same strategy regardless of the demands of the search task, even in tasks in which such a strategy was maladaptive. However, when participants were given explicit feedback about their performance during search and performance incentives, strategies across tasks diverged. Thus it appears that observers by default will favor a particular search strategy but can modify this strategy when it is clearly maladaptive to the task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kim Joris Boström; Anne Kathrin Warzecha

Ocular following response to sampled motion Journal Article

In: Vision Research, vol. 49, no. 13, pp. 1693–1701, 2009.

@article{Bostroem2009,
title = {Ocular following response to sampled motion},
author = {Kim Joris Boström and Anne Kathrin Warzecha},
doi = {10.1016/j.visres.2009.04.006},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {13},
pages = {1693--1701},
publisher = {Elsevier Ltd},
abstract = {We investigate the impact of monitor frame rate on the human ocular following response (OFR) and find that the response latency considerably depends on the frame rate in the range of 80-160 Hz, which is far above the flicker fusion limit. From the lowest to the highest frame rate the latency declines by roughly 10 ms. Moreover, the relationship between response latency and stimulus speed is affected by the frame rate, compensating and even inverting the effect at lower frame rates. In contrast to that, the initial response acceleration is not affected by the frame rate and its expected dependence on stimulus speed remains stable. The nature of these phenomena reveals insights into the neural mechanism of low-level motion detection underlying the ocular following response.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christian Boucheny; Georges Pierre Bonneau; Jacques Droulez; Guillaume Thibault; Stephane Ploix

A perceptive evaluation of volume rendering techniques Journal Article

In: ACM Transactions on Applied Perception, vol. 5, no. 4, pp. 1–24, 2009.

@article{Boucheny2009,
title = {A perceptive evaluation of volume rendering techniques},
author = {Christian Boucheny and Georges Pierre Bonneau and Jacques Droulez and Guillaume Thibault and Stephane Ploix},
doi = {10.1145/1462048.1462054},
year = {2009},
date = {2009-01-01},
journal = {ACM Transactions on Applied Perception},
volume = {5},
number = {4},
pages = {1--24},
abstract = {The display of space filling data is still a challenge for the community of visualization. Direct volume rendering (DVR) is one of the most important techniques developed to achieve direct perception of such volumetric data. It is based on semitransparent representations, where the data are accumulated in a depth-dependent order. However, it produces images that may be difficult to understand, and thus several techniques have been proposed so as to improve its effectiveness, using for instance lighting models or simpler representations (e.g., maximum intensity projection). In this article, we present three perceptual studies that examine how DVR meets its goals, in either static or dynamic context. We show that a static representation is highly ambiguous, even in simple cases, but this can be counterbalanced by use of dynamic cues (i.e., motion parallax) provided that the rendering parameters are correctly tuned. In addition, perspective projections are demonstrated to provide relevant information to disambiguate depth perception in dynamic displays.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Gerry T. M. Altmann; Yuki Kamide

Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation Journal Article

In: Cognition, vol. 111, no. 1, pp. 55–71, 2009.

@article{Altmann2009,
title = {Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation},
author = {Gerry T. M. Altmann and Yuki Kamide},
doi = {10.1016/j.cognition.2008.12.005},
year = {2009},
date = {2009-01-01},
journal = {Cognition},
volume = {111},
number = {1},
pages = {55--71},
publisher = {Elsevier B.V.},
abstract = {Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Erhardt Barth; Eleonora Vig; Michael Dorr

Efficient visual coding and the predictability of eye movements on natural movies Journal Article

In: Spatial Vision, vol. 22, no. 5, pp. 397–408, 2009.

@article{Barth2009,
title = {Efficient visual coding and the predictability of eye movements on natural movies},
author = {Erhardt Barth and Eleonora Vig and Michael Dorr},
doi = {10.1163/156856809789476065},
year = {2009},
date = {2009-01-01},
journal = {Spatial Vision},
volume = {22},
number = {5},
pages = {397--408},
abstract = {We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e., how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which is higher than previously reported results. Moreover, predictability is analysed for different representations to obtain indirect evidence for the likelihood of a particular representation. It is shown that the predictability correlates with the local intrinsic dimension in a movie.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sarah Bate; Catherine Haslam; Timothy L. Hodgson

Angry faces are special too: Evidence from the visual scanpath Journal Article

In: Neuropsychology, vol. 23, no. 5, pp. 658–667, 2009.

@article{Bate2009a,
title = {Angry faces are special too: Evidence from the visual scanpath},
author = {Sarah Bate and Catherine Haslam and Timothy L. Hodgson},
doi = {10.1037/a0014518},
year = {2009},
date = {2009-01-01},
journal = {Neuropsychology},
volume = {23},
number = {5},
pages = {658--667},
abstract = {Traditional models of face processing posit independent pathways for the processing of facial identity and facial expression (e.g., Bruce & Young, 1986). However, such models have been questioned by recent reports that suggest positive expressions may facilitate recognition (e.g., Baudouin et al., 2000), although little attention has been paid to the role of negative expressions. The current study used eye movement indicators to examine the influence of emotional expression (angry, happy, neutral) on the recognition of famous and novel faces. In line with previous research, the authors found some evidence that only happy expressions facilitate the processing of famous faces. However, the processing of novel faces was enhanced by the presence of an angry expression. Contrary to previous findings, this paper suggests that angry expressions also have an important role in the recognition process, and that the influence of emotional expression is modulated by face familiarity. The implications of this finding are discussed in relation to (1) current models of face processing, and (2) theories of oculomotor control in the viewing of facial stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sarah Bate; Catherine Haslam; Ashok Jansari; Timothy L. Hodgson

Covert face recognition relies on affective valence in congenital prosopagnosia Journal Article

In: Cognitive Neuropsychology, vol. 26, no. 4, pp. 391–411, 2009.

@article{Bate2009,
title = {Covert face recognition relies on affective valence in congenital prosopagnosia},
author = {Sarah Bate and Catherine Haslam and Ashok Jansari and Timothy L. Hodgson},
doi = {10.1080/02643290903175004},
year = {2009},
date = {2009-01-01},
journal = {Cognitive Neuropsychology},
volume = {26},
number = {4},
pages = {391--411},
abstract = {Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied--nice compared to studied--aggressive faces, and performance for studied--neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tanja C. W. Nijboer; Stefan Van der Stigchel

Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors Journal Article

In: Journal of Vision, vol. 9, no. 6, pp. 1–9, 2009.

@article{Nijboer2009,
title = {Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors},
author = {Tanja C. W. Nijboer and Stefan Van der Stigchel},
doi = {10.1167/9.6.21},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {6},
pages = {1--9},
abstract = {In studies investigating visual attention in synesthesia, the targets usually induce a synesthetic color. To measure to what extent attention is necessary to induce synesthetic color experiences, one needs a task in which the synesthetic color is induced by a task-irrelevant distractor. In the current study, an oculomotor distractor task was used in which an eye movement was to be made to a physically colored target while ignoring a single physically colored or synesthetic distractor. Whereas many erroneous eye movements were made to distractors with an identical hue as the target (i.e., capture), much less interference was found with synesthetic distractors. The interference of synesthetic distractors was comparable with achromatic non-digit distractors. These results suggest that attention and hence overt recognition of the inducing stimulus are essential for the synesthetic color experience to occur.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Satoshi Nishida; Tomohiro Shibata; Kazushi Ikeda

Prediction of human eye movements in facial discrimination tasks Journal Article

In: Artificial Life and Robotics, vol. 14, no. 3, pp. 348–351, 2009.

@article{Nishida2009,
title = {Prediction of human eye movements in facial discrimination tasks},
author = {Satoshi Nishida and Tomohiro Shibata and Kazushi Ikeda},
doi = {10.1007/s10015-009-0679-9},
year = {2009},
date = {2009-01-01},
journal = {Artificial Life and Robotics},
volume = {14},
number = {3},
pages = {348--351},
abstract = {Under natural viewing conditions, human observers selectively allocate their attention to subsets of the visual input. Since overt allocation of attention appears as eye movements, the mechanism of selective attention can be uncovered through computational studies of eye-movement prediction. Since top-down attentional control in a task is expected to modulate eye movements significantly, models that take a bottom-up approach based on low-level local properties are not expected to suffice for prediction. In this study, we introduce two representative models, apply them to a facial discrimination task with morphed face images, and evaluate their performance by comparing them with the human eye-movement data. The results show that they are not good at predicting eye movements in this task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Atsushi Noritake; Bob Uttl; Masahiko Terao; Masayoshi Nagai; Junji Watanabe; Akihiro Yagi

Saccadic compression of rectangle and Kanizsa figures: Now you see it, now you don't Journal Article

In: PLoS ONE, vol. 4, no. 7, pp. e6383, 2009.

@article{Noritake2009,
title = {Saccadic compression of rectangle and Kanizsa figures: Now you see it, now you don't},
author = {Atsushi Noritake and Bob Uttl and Masahiko Terao and Masayoshi Nagai and Junji Watanabe and Akihiro Yagi},
doi = {10.1371/journal.pone.0006383},
year = {2009},
date = {2009-01-01},
journal = {PLoS ONE},
volume = {4},
number = {7},
pages = {e6383},
abstract = {BACKGROUND: Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as unified objects like a rectangle. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the magnitude of horizontal vs. vertical compression for Kanizsa figures (collections of discrete elements unified into single perceptual objects by illusory contours) and control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical length of the stimulus was longer using the two-alternative forced-choice method. Our findings show that large but not small Kanizsa figures are perceived as compressed, that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles. CONCLUSIONS: Our data suggest that compression of Kanizsa figures has been overestimated in previous research due to methodological artifacts, and highlight the importance of studying perceptual phenomena by multiple methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ulrich Nuding; Roger Kalla; Neil G. Muggleton; Ulrich Büttner; Vincent Walsh; Stefan Glasauer

TMS evidence for smooth pursuit gain control by the frontal eye fields Journal Article

In: Cerebral Cortex, vol. 19, no. 5, pp. 1144–1150, 2009.

@article{Nuding2009,
title = {TMS evidence for smooth pursuit gain control by the frontal eye fields},
author = {Ulrich Nuding and Roger Kalla and Neil G. Muggleton and Ulrich Büttner and Vincent Walsh and Stefan Glasauer},
doi = {10.1093/cercor/bhn162},
year = {2009},
date = {2009-01-01},
journal = {Cerebral Cortex},
volume = {19},
number = {5},
pages = {1144--1150},
abstract = {Smooth pursuit eye movements are used to continuously track slowly moving visual objects. A peculiar property of the smooth pursuit system is the nonlinear increase in sensitivity to changes in target motion with increasing pursuit velocities. We investigated the role of the frontal eye fields (FEFs) in this dynamic gain control mechanism by application of transcranial magnetic stimulation. Subjects were required to pursue a slowly moving visual target whose motion consisted of 2 components: a constant velocity component at 4 different velocities (0, 8, 16, and 24 deg/s) and a superimposed high-frequency sinusoidal oscillation (4 Hz, +/-8 deg/s). Magnetic stimulation of the FEFs reduced not only the overall gain of the system, but also the efficacy of the dynamic gain control. We thus provide the first direct evidence that the FEF population is significantly involved in the nonlinear computation necessary for continuously adjusting the feedforward gain of the pursuit system. We discuss this with relation to current models of smooth pursuit.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo

Emotional scene content drives the saccade generation system reflexively Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 305–323, 2009.

@article{Nummenmaa2009,
title = {Emotional scene content drives the saccade generation system reflexively},
author = {Lauri Nummenmaa and Jukka Hyönä and Manuel G. Calvo},
doi = {10.1037/a0013626},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {2},
pages = {305--323},
abstract = {The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster when the cue pointed toward the emotional picture rather than toward the neutral picture. Experiment 2 replicated these findings with a reflexive saccade task, in which abrupt luminosity changes were used as exogenous saccade cues. In Experiment 3, participants performed vertical reflexive saccades that were orthogonal to the emotional-neutral picture locations. Saccade endpoints and trajectories deviated away from the visual field in which the emotional scenes were presented. Experiment 4 showed that computationally modeled visual saliency does not vary as a function of scene content and that inversion abolishes the rapid orienting toward the emotional scenes. Visual confounds cannot thus explain the results. The authors conclude that early saccade target selection and execution processes are automatically influenced by emotional picture content. This reveals processing of meaningful scene content prior to overt attention to the stimulus.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauri Nummenmaa; Jukka Hyönä; Jari K. Hietanen

I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way Journal Article

In: Psychological Science, vol. 20, no. 12, pp. 1454–1458, 2009.

@article{Nummenmaa2009a,
title = {I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way},
author = {Lauri Nummenmaa and Jukka Hyönä and Jari K. Hietanen},
year = {2009},
date = {2009-01-01},
journal = {Psychological Science},
volume = {20},
number = {12},
pages = {1454--1458},
abstract = {This study shows that humans (a) infer other people's movement trajectories from their gaze direction and (b) use this information to guide their own visual scanning of the environment and plan their own movement. In two eye-tracking experiments, participants viewed an animated character walking directly toward them on a street. The character looked constantly to the left or to the right (Experiment 1) or suddenly shifted his gaze from direct to the left or to the right (Experiment 2). Participants had to decide on which side they would skirt the character. They shifted their gaze toward the direction in which the character was not gazing, that is, away from his gaze, and chose to skirt him on that side. Gaze following is not always an obligatory social reflex; social-cognitive evaluations of gaze direction can lead to reversed gaze-following behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Antje Nuthmann; Ralf Engbert

Mindless reading revisited: An analysis based on the SWIFT model of eye-movement control Journal Article

In: Vision Research, vol. 49, no. 3, pp. 322–336, 2009.

@article{Nuthmann2009,
title = {Mindless reading revisited: An analysis based on the SWIFT model of eye-movement control},
author = {Antje Nuthmann and Ralf Engbert},
doi = {10.1016/j.visres.2008.10.022},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {3},
pages = {322--336},
abstract = {In this article, we revisit the mindless reading paradigm from the perspective of computational modeling. In the standard version of the paradigm, participants read sentences in both their normal version as well as the transformed (or mindless) version where each letter is replaced with a z. z-String scanning shares the oculomotor requirements with reading but none of the higher-level lexical and semantic processes. Here we use the z-string scanning task to validate the SWIFT model of saccade generation [Engbert, R., Nuthmann, A., Richter, E., & Kliegl, R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112(4), 777-813] as an example for an advanced theory of eye-movement control in reading. We test the central assumption of spatially distributed processing across an attentional gradient proposed by the SWIFT model. Key experimental results like prolonged average fixation durations in z-string scanning compared to normal reading and the existence of a string-length effect on fixation durations and probabilities were reproduced by the model, which lends support to the model's assumptions on visual processing. Moreover, simulation results for patterns of regressive saccades in z-string scanning confirm SWIFT's concept of activation field dynamics for the selection of saccade targets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Antje Nuthmann; Reinhold Kliegl

An examination of binocular reading fixations based on sentence corpus data Journal Article

In: Journal of Vision, vol. 9, no. 5, pp. 31–31, 2009.

@article{Nuthmann2009a,
title = {An examination of binocular reading fixations based on sentence corpus data},
author = {Antje Nuthmann and Reinhold Kliegl},
doi = {10.1167/9.5.31},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {5},
pages = {31--31},
abstract = {Binocular eye movements of normal adult readers were examined as they read single sentences. Analyses of horizontal and vertical fixation disparities indicated that the most prevalent type of disparate fixation is crossed (i.e., the left eye is located further to the right than the right eye) while the left eye frequently fixates somewhat above the right eye. The Gaussian distribution of the binocular fixation point peaked 2.6 cm in front of the plane of text, reflecting the prevalence of horizontally crossed fixations. Fixation disparity accumulates during the course of successive saccades and fixations within a line of text, but only to an extent that does not compromise single binocular vision. In reading, the version and vergence system interact in a way that is qualitatively similar to what has been observed in simple nonreading tasks. Finally, results presented here render it unlikely that vergence movements in reading aim at realigning the eyes at a given saccade target word.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anna Montagnini; Leonardo Chelazzi

Dynamic interaction between "Go" and "Stop" signals in the saccadic eye movement system: New evidence against the functional independence of the underlying neural mechanisms Journal Article

In: Vision Research, vol. 49, no. 10, pp. 1316–1328, 2009.

@article{Montagnini2009,
title = {Dynamic interaction between "Go" and "Stop" signals in the saccadic eye movement system: New evidence against the functional independence of the underlying neural mechanisms},
author = {Anna Montagnini and Leonardo Chelazzi},
doi = {10.1016/j.visres.2008.07.018},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {10},
pages = {1316--1328},
publisher = {Elsevier Ltd},
abstract = {We investigated human oculomotor behaviour in a Go-NoGo saccadic task in which the saccadic response to a peripheral visual target was to be inhibited in a minority of trials (NoGo trials). Different from classical experimental paradigms on the inhibitory control of intended actions, in our task the inhibitory cue was identical to the saccadic target (used in Go trials) in timing, location and shape-the only difference being its colour. By analysing the latency and the metrics of saccades erroneously executed after a NoGo instruction (NoGo-escapes), we observed a characteristic pattern of performance: first, we observed a decrease in the amplitude of NoGo-escapes with increasing latency; second, we revealed a consistent population of long-latency small saccades opposite in direction to the NoGo cue; finally, we found a strong side-specific inhibitory effect in terms of saccadic reaction times, on trials immediately following a NoGo trial. In addition, we manipulated the readiness to initiate a saccade towards the visual target, by introducing a probability bias in the random sequence of target locations. We found that the capacity to inhibit the impending saccade was improved for the most likely target location, i.e. the condition corresponding to the increased readiness for movement execution. Overall, our results challenge the notion of a central inhibitory mechanism independent from movement preparation. More precisely, they indicate that the two mechanisms (action preparation and action inhibition) interact dynamically, possibly sharing spatially-specific mechanisms, and are similarly affected by particular contextual manipulations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Celia J. A. Morgan; Vyv C. Huddy; Michelle Lipton; H. Valerie Curran; Eileen M. Joyce

Is persistent ketamine use a valid model of the cognitive and oculomotor deficits in schizophrenia? Journal Article

In: Biological Psychiatry, vol. 65, no. 12, pp. 1099–1102, 2009.

@article{Morgan2009,
title = {Is persistent ketamine use a valid model of the cognitive and oculomotor deficits in schizophrenia?},
author = {Celia J. A. Morgan and Vyv C. Huddy and Michelle Lipton and H. Valerie Curran and Eileen M. Joyce},
doi = {10.1016/j.biopsych.2008.10.045},
year = {2009},
date = {2009-01-01},
journal = {Biological Psychiatry},
volume = {65},
number = {12},
pages = {1099--1102},
publisher = {Society of Biological Psychiatry},
abstract = {Background: Acute ketamine has been shown to model features of schizophrenia such as psychotic symptoms, cognitive deficits and smooth pursuit eye movement dysfunction. There have been suggestions that chronic ketamine may also produce an analogue of the disorder. In this study, we investigated the effect of persistent recreational ketamine use on tests of episodic and working memory and on oculomotor tasks of smooth pursuit and pro- and antisaccades. Methods: Twenty ketamine users were compared with 1) 20 first-episode schizophrenia patients, 2) 17 polydrug control subjects who did not use ketamine but were matched to the ketamine users for other drug use, and 3) 20 non-drug-using control subjects. All groups were matched for estimated premorbid IQ. Results: Ketamine users made more antisaccade errors than both control groups but did not differ from patients. Ketamine users performed better than schizophrenia patients on smooth pursuit, antisaccade metrics, and both memory tasks but did not differ from control groups. Conclusions: Problems inhibiting reflexive eye movements may be a consequence of repeated ketamine self-administration. The absence of any other oculomotor or cognitive deficit present in schizophrenia suggests that chronic self-administration of ketamine may not be a good model of these aspects of the disorder.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Camille Morvan; Mark Wexler

The nonlinear structure of motion perception during smooth eye movements Journal Article

In: Journal of Vision, vol. 9, no. 7, pp. 1–13, 2009.

@article{Morvan2009,
title = {The nonlinear structure of motion perception during smooth eye movements},
author = {Camille Morvan and Mark Wexler},
doi = {10.1167/9.7.1},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {7},
pages = {1--13},
abstract = {To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly-by extracting motion relative to a background presumed to be fixed-or through compensation, by correcting retinal motion by information about eye movement. To isolate compensation, we created stimuli in which, while the eye undergoes smooth movement due to inertia, only one object is visible-and the motion of this stimulus is decoupled from that of the eye. Using a wide variety of stimulus speeds and directions, we rule out a linear model of compensation, in which stimulus velocity is estimated as a linear combination of retinal and eye velocities multiplied by a constant gain. In fact, we find that when the stimulus moves in the same direction as the eyes, there is little compensation, but when movement is in the opposite direction, compensation grows in a nonlinear way with speed. We conclude that eye movement is estimated from a combination of extraretinal and retinal signals, the latter based on an assumption of stimulus stationarity. Two simple models, in which the direction of eye movement is computed from the extraretinal signal and the speed from the retinal signal, account well for our results.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1167/9.7.1


Weimin Mou; Xianyun Liu; Timothy P. McNamara

Layout geometry in encoding and retrieval of spatial memory Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 83–93, 2009.


@article{Mou2009,
title = {Layout geometry in encoding and retrieval of spatial memory},
author = {Weimin Mou and Xianyun Liu and Timothy P. McNamara},
doi = {10.1037/0096-1523.35.1.83},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {1},
pages = {83--93},
abstract = {Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative direction (e.g., "Imagine you are standing at X, facing Y, please point to Z"). When the layout had a symmetric axis that was different from participants' viewing direction, the sequence of eye fixations on objects during learning and the preferred directions in pointing judgments were both determined by the direction of the symmetric axis. These results provide further evidence that interobject spatial relations are represented in memory with intrinsic frames of reference.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Manon Mulckhuyse; Stefan Van der Stigchel; Jan Theeuwes

Early and late modulation of saccade deviations by target distractor similarity Journal Article

In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1451–1458, 2009.


@article{Mulckhuyse2009,
title = {Early and late modulation of saccade deviations by target distractor similarity},
author = {Manon Mulckhuyse and Stefan Van der Stigchel and Jan Theeuwes},
doi = {10.1152/jn.00068.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {3},
pages = {1451--1458},
abstract = {In this study, we investigated the time course of oculomotor competition between bottom-up and top-down selection processes using saccade trajectory deviations as a dependent measure. We used a paradigm in which we manipulated saccade latency by offsetting the fixation point at different time points relative to target onset. In experiment 1, observers made a saccade to a filled colored circle while another irrelevant distractor circle was presented. The distractor was either similar (i.e., identical) or dissimilar to the target. Results showed that the strength of saccade deviation was modulated by target distractor similarity for short saccade latencies. To rule out the possibility that the similar distractor affected the saccade trajectory merely because it was identical to the target, the distractor in experiment 2 was a square shape of which only the color was similar or dissimilar to the target. The results showed that deviations for both short and long latencies were modulated by target distractor similarity. When saccade latencies were short, we found less saccade deviation away from a similar than from a dissimilar distractor. When saccade latencies were long, the opposite pattern was found: more saccade deviation away from a similar than from a dissimilar distractor. In contrast to previous findings, our study shows that task-relevant information can already influence the early processes of oculomotor control. We conclude that competition between saccadic goals is subject to two different processes with different time courses: one fast activating process signaling the saliency and task relevance of a location and one slower inhibitory process suppressing that location.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Bettina Olk; Alan Kingstone

A new look at aging and performance in the antisaccade task: The impact of response selection Journal Article

In: European Journal of Cognitive Psychology, vol. 21, no. 2-3, pp. 406–427, 2009.


@article{Olk2009,
title = {A new look at aging and performance in the antisaccade task: The impact of response selection},
author = {Bettina Olk and Alan Kingstone},
doi = {10.1080/09541440802333190},
year = {2009},
date = {2009-01-01},
journal = {European Journal of Cognitive Psychology},
volume = {21},
number = {2-3},
pages = {406--427},
abstract = {Aged adults respond more slowly and less accurately in the antisaccade task, in which a saccade away from a visual stimulus is required. This decreased performance has been attributed to a decline in the ability to inhibit prepotent responses with age. Considering that antisaccades also involve response selection, the present experiment investigated the contribution of inhibition and response selection. Young and aged adults were compared between conditions that required varying percentages of prosaccades, antisaccades, and no-go trials. The comparison between no-go (inhibition of a prosaccade) and antisaccade trials (inhibition of a prosaccade \textit{and} selection of an antisaccade) showed significantly worse performance in the antisaccade task, especially for the older group, suggesting that they failed to select the antisaccade in a situation in which a competing, prepotent response is available. The impact of this response selection failure was underlined by an equivalent ability of both groups to impose inhibition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Ayelet McKyton; Yoni Pertzov; Ehud Zohary

Pattern matching is assessed in retinotopic coordinates Journal Article

In: Journal of Vision, vol. 9, no. 13, pp. 1–10, 2009.


@article{McKyton2009,
title = {Pattern matching is assessed in retinotopic coordinates},
author = {Ayelet McKyton and Yoni Pertzov and Ehud Zohary},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {13},
pages = {1--10},
abstract = {We typically examine scenes by performing multiple saccades to different objects of interest within the image. Therefore, an extra-retinotopic representation, invariant to the changes in the retinal image caused by eye movements, might be useful for high-level visual processing. We investigate here, using a matching task, whether the representation of complex natural images is retinotopic or screen-based. Subjects observed two simultaneously presented images, made a saccadic eye movement to a new fixation point, and viewed a third image. Their task was to judge whether the third image was identical to one of the two earlier images or different. Identical images could appear either in the same retinotopic position, in the same screen position, or in totally different locations. Performance was best when the identical images appeared in the same retinotopic position and worst when they appeared in the opposite hemifield. Counter to commonplace intuition, no advantage was conferred from presenting the identical images in the same screen position. This, together with performance sensitivity for image translation of a few degrees, suggests that image matching, which can often be judged without overall recognition of the scene, is mostly determined by neuronal activity in earlier brain areas containing a strictly retinotopic representation and small receptive fields.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Patricia A. McMullen; Lesley E. MacSween; Charles A. Collin

Behavioral effects of visual field location on processing motion- and luminance-defined form Journal Article

In: Journal of Vision, vol. 9, no. 6, pp. 1–11, 2009.


@article{McMullen2009,
title = {Behavioral effects of visual field location on processing motion- and luminance-defined form},
author = {Patricia A. McMullen and Lesley E. MacSween and Charles A. Collin},
doi = {10.1167/9.6.24},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {6},
pages = {1--11},
abstract = {Traditional theories posit a ventral cortical visual pathway subserving object recognition regardless of the information defining the contour. However, functional magnetic resonance imaging (fMRI) studies have shown dorsal cortical activity during visual processing of static luminance-defined (SL) and motion-defined form (MDF). It is unknown if this activity is supported behaviorally, or if it depends on central or peripheral vision. The present study compared behavioral performance with two types of MDF [one without translational motion (MDF) and another with (TM)] and SL shapes in a shape matching task where shape pairs appeared in the upper or lower visual fields or along the horizontal meridian of central or peripheral vision. MDF matching was superior to the other contour types regardless of location in central vision. Both MDF and TM matching was superior to SL matching for presentations in peripheral vision. Importantly, there was an advantage for MDF and TM matching in the lower peripheral visual field that was not present for SL forms. These results are consistent with previous behavioral findings that show no field advantage for static form processing and a lower field advantage for motion processing. They are also suggestive of more dorsal cortical involvement in the processing of shapes defined by motion than luminance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Bob McMurray; Michael K. Tanenhaus; Richard N. Aslin

Within-category VOT affects recovery from "lexical" garden-paths: Evidence against phoneme-level inhibition Journal Article

In: Journal of Memory and Language, vol. 60, no. 1, pp. 65–91, 2009.


@article{McMurray2009,
title = {Within-category VOT affects recovery from "lexical" garden-paths: Evidence against phoneme-level inhibition},
author = {Bob McMurray and Michael K. Tanenhaus and Richard N. Aslin},
doi = {10.1016/j.jml.2008.07.002},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {60},
number = {1},
pages = {65--91},
publisher = {Elsevier Inc.},
abstract = {Spoken word recognition shows gradient sensitivity to within-category voice onset time (VOT), as predicted by several current models of spoken word recognition, including TRACE (McClelland, J., & Elman, J. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86). It remains unclear, however, whether this sensitivity is short-lived or whether it persists over multiple syllables. VOT continua were synthesized for pairs of words like barricade and parakeet, which differ in the voicing of their initial phoneme, but otherwise overlap for at least four phonemes, creating an opportunity for "lexical garden-paths" when listeners encounter the phonemic information consistent with only one member of the pair. Simulations established that phoneme-level inhibition in TRACE eliminates sensitivity to VOT too rapidly to influence recovery. However, in two Visual World experiments, look-contingent and response-contingent analyses demonstrated effects of word initial VOT on lexical garden-path recovery. These results are inconsistent with inhibition at the phoneme level and support models of spoken word recognition in which sub-phonetic detail is preserved throughout the processing system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eugene McSorley; Alice G. Cruickshank; Laura A. Inman

The development of the spatial extent of oculomotor inhibition Journal Article

In: Brain Research, vol. 1298, pp. 92–98, 2009.


@article{McSorley2009b,
title = {The development of the spatial extent of oculomotor inhibition},
author = {Eugene McSorley and Alice G. Cruickshank and Laura A. Inman},
doi = {10.1016/j.brainres.2009.08.081},
year = {2009},
date = {2009-01-01},
journal = {Brain Research},
volume = {1298},
pages = {92--98},
publisher = {Elsevier B.V.},
abstract = {Inhibition is intimately involved in the ability to select a target for a goal-directed movement. The effect of distracters on the deviation of oculomotor trajectories and landing positions provides evidence of such inhibition. Individual saccade trajectories and landing positions may deviate initially either towards, or away from, a competing distracter-the direction and extent of this deviation depends upon saccade latency and the target to distracter separation. However, the underlying commonality of the sources of oculomotor inhibition has not been investigated. Here we report the relationship between distracter-related deviation of saccade trajectory, landing position and saccade latency. Observers saccaded to a target which could be accompanied by a distracter shown at various distances from very close (10 angular degrees) to far away (120 angular degrees). A fixation-gap paradigm was used to manipulate latency independently of the influence of competing distracters. When distracters were close to the target, saccade trajectory and landing position deviated toward the distracter position, while at greater separations landing position was always accurate but trajectories deviated away from the distracters. Different spatial patterns of deviations across latency were found. This pattern of results is consistent with the metrics of the saccade reflecting coarse pooling of the ongoing activity at the distracter location: saccade trajectory reflects activity at saccade initiation while landing position reveals activity at saccade end.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eugene McSorley; Patrick Haggard; Robin Walker

The spatial and temporal shape of oculomotor inhibition Journal Article

In: Vision Research, vol. 49, no. 6, pp. 608–614, 2009.


@article{McSorley2009,
title = {The spatial and temporal shape of oculomotor inhibition},
author = {Eugene McSorley and Patrick Haggard and Robin Walker},
doi = {10.1016/j.visres.2009.01.015},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {6},
pages = {608--614},
publisher = {Elsevier Ltd},
abstract = {Selecting a stimulus as the target for a goal-directed movement involves inhibiting other competing possible responses. Inhibition has generally proved hard to study behaviorally, because it results in no measurable output. The effect of distractors on the shape of oculomotor and manual trajectories provide evidence of such inhibition. Individual saccades may deviate initially either towards, or away from, a competing distractor - the direction and extent of this deviation depends upon saccade latency, target predictability and the target to distractor separation. The experiment reported here used these effects to show how inhibition of distractor locations develops over time. Distractors could be presented at various distances from unpredictable and predictable targets in two separate experiments. The deviation of saccade trajectories was compared between trials with and without distractors. Inhibition was measured by saccade trajectory deviation. Inhibition was found to increase as the distractor distance from target decreased but was found to increase with saccade latency at all distractor distances (albeit to different peaks). Surprisingly, no differences were found between unpredictable and predictable targets perhaps because our saccade latencies were generally long (∼260-280 ms.). We conclude that oculomotor inhibition of saccades to possible target objects involves the same mechanisms for all distractor distances and target types.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eugene McSorley; Rachel McCloy

Saccadic eye movements as an index of perceptual decision-making Journal Article

In: Experimental Brain Research, vol. 198, no. 4, pp. 513–520, 2009.


@article{McSorley2009a,
title = {Saccadic eye movements as an index of perceptual decision-making},
author = {Eugene McSorley and Rachel McCloy},
doi = {10.1007/s00221-009-1952-9},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {198},
number = {4},
pages = {513--520},
abstract = {One of the most common decisions we make is the one about where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence which supported saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased. This was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation was found to increase with amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Jerome Munuera; Pierre Morel; Jean-Rene Duhamel; Sophie Deneve

Optimal sensorimotor control in eye movement sequences Journal Article

In: Journal of Neuroscience, vol. 29, no. 10, pp. 3026–3035, 2009.


@article{Munuera2009,
title = {Optimal sensorimotor control in eye movement sequences},
author = {Jerome Munuera and Pierre Morel and Jean-Rene Duhamel and Sophie Deneve},
doi = {10.1523/JNEUROSCI.1169-08.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {10},
pages = {3026--3035},
abstract = {Fast and accurate motor behavior requires combining noisy and delayed sensory information with knowledge of self-generated body motion; much evidence indicates that humans do this in a near-optimal manner during arm movements. However, it is unclear whether this principle applies to eye movements. We measured the relative contributions of visual sensory feedback and the motor efference copy (and/or proprioceptive feedback) when humans perform two saccades in rapid succession, the first saccade to a visual target and the second to a memorized target. Unbeknownst to the subject, we introduced an artificial motor error by randomly "jumping" the visual target during the first saccade. The correction of the memory-guided saccade allowed us to measure the relative contributions of visual feedback and efferent copy (and/or proprioceptive feedback) to motor-plan updating. In a control experiment, we extinguished the target during the saccade rather than changing its location to measure the relative contribution of motor noise and target localization error to saccade variability without any visual feedback. The motor noise contribution increased with saccade amplitude, but remained <30% of the total variability. Subjects adjusted the gain of their visual feedback for different saccade amplitudes as a function of its reliability. Even during trials where subjects performed a corrective saccade to compensate for the target-jump, the correction by the visual feedback, while stronger, remained far below 100%. In all conditions, an optimal controller predicted the visual feedback gain well, suggesting that humans combine optimally their efferent copy and sensory feedback when performing eye movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

René M. Müri; D. Cazzoli; Thomas Nyffeler; Tobias Pflugshaupt

Visual exploration pattern in hemineglect Journal Article

In: Psychological Research, vol. 73, no. 2, pp. 147–157, 2009.

@article{Mueri2009,
title = {Visual exploration pattern in hemineglect},
author = {René M. Müri and D. Cazzoli and Thomas Nyffeler and Tobias Pflugshaupt},
doi = {10.1007/s00426-008-0204-0},
year = {2009},
date = {2009-01-01},
journal = {Psychological Research},
volume = {73},
number = {2},
pages = {147--157},
abstract = {The analysis of eye movement parameters in visual neglect such as cumulative fixation duration, saccade amplitude, or the numbers of saccades has been used to probe attention deficits in neglect patients, since the pattern of exploratory eye movements has been taken as a strong index of attention distribution. The current overview of the literature of visual neglect has its emphasis on studies dealing with eye movement and exploration analysis. We present our own results in 15 neglect patients. The free exploration behavior was analyzed in these patients presenting 32 naturalistic color photographs of everyday scenes. Cumulative fixation duration, spatial distribution of fixations in the horizontal and vertical plane, and the number and amplitude of exploratory saccades were analyzed and compared with the results of an age-matched control group. A main result of our study was that in neglect patients, fixation distribution of free exploration of natural scenes is not only influenced by the left-right bias in the horizontal direction but also by the vertical direction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

John Palmer; Cathleen M. Moore

Using a filtering task to measure the spatial extent of selective attention Journal Article

In: Vision Research, vol. 49, no. 10, pp. 1045–1064, 2009.

@article{Palmer2009,
title = {Using a filtering task to measure the spatial extent of selective attention},
author = {John Palmer and Cathleen M. Moore},
doi = {10.1016/j.visres.2008.02.022},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {10},
pages = {1045--1064},
abstract = {The spatial extent of attention was investigated by measuring sensitivity to stimuli at to-be-ignored locations. Observers detected a stimulus at a cued location (target), while ignoring otherwise identical stimuli at nearby locations (foils). Only an attentional cue distinguished target from foil. Several experiments varied the contrast and separation of targets and foils. Two theories of selection were compared: contrast gain and a version of attention switching called an all-or-none mixture model. Results included large effects of separation, rejection of the contrast gain model, and the measurement of the size and profile of the spatial extent of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Daniele Panizza; Gennaro Chierchia; Charles Clifton

On the role of entailment patterns and scalar implicatures in the processing of numerals Journal Article

In: Journal of Memory and Language, vol. 61, no. 4, pp. 503–518, 2009.

@article{Panizza2009,
title = {On the role of entailment patterns and scalar implicatures in the processing of numerals},
author = {Daniele Panizza and Gennaro Chierchia and Charles Clifton},
doi = {10.1016/j.jml.2009.07.005},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {61},
number = {4},
pages = {503--518},
abstract = {There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sebastian Pannasch; Boris M. Velichkovsky

Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1109–1131, 2009.

@article{Pannasch2009,
title = {Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images},
author = {Sebastian Pannasch and Boris M. Velichkovsky},
doi = {10.1080/13506280902764422},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1109--1131},
abstract = {In view of a variety of everyday tasks, it is highly implausible that all visual fixations fulfil the same role. Earlier we demonstrated that a combination of fixation duration and amplitude of related saccades strongly correlates with the probability of correct recognition of objects and events both in static and in dynamic scenes (Velichkovsky, Joos, Helmert, Velichkovsky, Rothert, Kopf, Dornhoefer, see Pannasch, Dornhoefer, Unema, & Velichkovsky, 2001) in relation to amplitudes of the preceding saccade. In Experiment 1, it is shown that retinotopically identical visual events occurring 100 ms after the onset of a fixation have significantly less influence on fixation duration if the amplitude of the previous saccade exceeds the parafoveal range (set on 5° of arc). Experiment 2 demonstrates that this difference diminishes for distractors of obvious biological value such as looming motion patterns. In Experiment 3, we show that saccade amplitudes influence visual but not acoustic or haptic distractor effects. These results suggest an explanation in terms of a shifting balance of at least two modes of visual processing in free viewing of complex visual images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Muriel T. N. Panouillères; Tiffany Weiss; Christian Urquizar; Roméo Salemme; Douglas P. Munoz; Denis Pélisson

Behavioural evidence of separate adaptation mechanisms controlling saccade amplitude lengthening and shortening Journal Article

In: Journal of Neurophysiology, vol. 101, no. 3, pp. 1550–1559, 2009.

@article{Panouilleres2009,
title = {Behavioural evidence of separate adaptation mechanisms controlling saccade amplitude lengthening and shortening},
author = {Muriel T. N. Panouillères and Tiffany Weiss and Christian Urquizar and Roméo Salemme and Douglas P. Munoz and Denis Pélisson},
doi = {10.1152/jn.90988.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {101},
number = {3},
pages = {1550--1559},
abstract = {The accuracy of saccadic eye movements is maintained over the long term by adaptation mechanisms that decrease or increase saccade amplitude. It is still unknown whether these opposite adaptive changes rely on common mechanisms. Here, a double-step target paradigm was used to adaptively decrease (backward second target step) or increase (forward step) the amplitude of reactive saccades in one direction only. To test which sensorimotor transformation stages are subjected to these adaptive changes, we measured their transfer to antisaccades in which sensory and motor vectors are spatially dissociated. In the backward adaptation condition, all subjects showed a significant amplitude decrease for adapted prosaccades and a significant transfer of adaptation to antisaccades performed in the adapted direction, but not to oppositely directed antisaccades elicited by a target jump in the adapted direction. In the forward adaptation condition, only 14 of 19 subjects showed a significant amplitude increase for prosaccades and no significant adaptation transfer to antisaccades was detected in either the adapted or nonadapted direction. These findings suggest that, whereas the level(s) of forward adaptation cannot be resolved, the mechanisms involved in backward adaptation of reactive saccades take place at a sensorimotor level downstream from the vector inversion process of antisaccades and differ markedly from those involved in forward adaptation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sébastien Miellet; Patrick J. O'Donnell; Sara C. Sereno

Parafoveal magnification: Visual acuity does not modulate the perceptual span in reading Journal Article

In: Psychological Science, vol. 20, no. 6, pp. 721–728, 2009.

@article{Miellet2009,
title = {Parafoveal magnification: Visual acuity does not modulate the perceptual span in reading},
author = {Sébastien Miellet and Patrick J. O'Donnell and Sara C. Sereno},
doi = {10.1111/j.1467-9280.2009.02364.x},
year = {2009},
date = {2009-01-01},
journal = {Psychological Science},
volume = {20},
number = {6},
pages = {721--728},
abstract = {Models of eye guidance in reading rely on the concept of the perceptual span—the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm—parafoveal magnification (PM)—that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Holger Mitterer; James M. McQueen

Processing reduced word-forms in speech perception using probabilistic knowledge about speech production Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 244–263, 2009.

@article{Mitterer2009,
title = {Processing reduced word-forms in speech perception using probabilistic knowledge about speech production},
author = {Holger Mitterer and James M. McQueen},
doi = {10.1037/a0012730},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {1},
pages = {244--263},
abstract = {Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Korbinian Moeller; Martin H. Fischer; Hans-Christoph Nuerk; Klaus Willmes

Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 62, no. 2, pp. 323–334, 2009.

@article{Moeller2009a,
title = {Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking},
author = {Korbinian Moeller and Martin H. Fischer and Hans-Christoph Nuerk and Klaus Willmes},
doi = {10.1080/17470210801946740},
year = {2009},
date = {2009-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {62},
number = {2},
pages = {323--334},
abstract = {While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Korbinian Moeller; S. Neuburger; L. Kaufmann; K. Landerl; Hans-Christoph Nuerk

Basic number processing deficits in developmental dyscalculia: Evidence from eye tracking Journal Article

In: Cognitive Development, vol. 24, no. 4, pp. 371–386, 2009.

@article{Moeller2009,
title = {Basic number processing deficits in developmental dyscalculia: Evidence from eye tracking},
author = {Korbinian Moeller and S. Neuburger and L. Kaufmann and K. Landerl and Hans-Christoph Nuerk},
doi = {10.1016/j.cogdev.2009.09.007},
year = {2009},
date = {2009-01-01},
journal = {Cognitive Development},
volume = {24},
number = {4},
pages = {371--386},
abstract = {Recent research suggests that developmental dyscalculia is associated with a subitizing deficit (i.e., the inability to quickly enumerate small sets of up to 3 objects). However, the nature of this deficit has not previously been investigated. In the present study the eye-tracking methodology was employed to clarify whether (a) the subitizing deficit of two boys with dyscalculia resulted from a general slowing in the access to magnitude representation, or (b) children with dyscalculia resort to a back-up counting strategy even for small object sets. In a dot-counting task, a standard problem size effect for the number of fixations required to encode the presented numerosity within the subitizing range was observed. Together with the finding that problem size had no impact on the average fixation duration, this result suggested that children with dyscalculia may indeed have to count, while typically developing controls are able to enumerate the number of dots in parallel, i.e., subitize. Implications for the understanding of developmental dyscalculia are considered.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mackenzie G. Glaholt; Eyal M. Reingold

The time course of gaze bias in visual decision tasks Journal Article

In: Visual Cognition, vol. 17, no. 8, pp. 1228–1243, 2009.

@article{Glaholt2009a,
title = {The time course of gaze bias in visual decision tasks},
author = {Mackenzie G. Glaholt and Eyal M. Reingold},
doi = {10.1080/13506280802362962},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {8},
pages = {1228--1243},
abstract = {In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated robust bias towards the chosen item in either gaze duration, gaze frequency or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold

Predicting preference from fixations Journal Article

In: PsychNology Journal, vol. 7, no. 2, pp. 141–158, 2009.

Abstract | Links | BibTeX

@article{Glaholt2009,
title = {Predicting preference from fixations},
author = {Mackenzie G. Glaholt and Mei-Chun Wu and Eyal M. Reingold},
doi = {10.1016/S0142-9612(01)00166-1},
year = {2009},
date = {2009-01-01},
journal = {PsychNology Journal},
volume = {7},
number = {2},
pages = {141--158},
abstract = {We measured the strength of the association between looking behaviour and preference. Participants selected the most preferred face out of a grid of 8 faces. Fixation times were correlated with selection on a trial-by-trial basis, as well as with explicit preference ratings. Furthermore, by ranking features based on fixation times, we were able to successfully predict participants' preferences for novel feature combinations in a two-alternative forced choice task. In addition, we obtained a similar pattern of findings in a very different stimulus domain: mock company logos. Our results indicated that fixation times can be used to predict selection in large arrays and they might also be employed to estimate preferences for whole stimuli as well as their constituent features.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Diana J. Gorbet; Lauren E. Sergio

The behavioural consequences of dissociating the spatial directions of eye and arm movements Journal Article

In: Brain Research, vol. 1284, pp. 77–88, 2009.

Abstract | Links | BibTeX

@article{Gorbet2009,
title = {The behavioural consequences of dissociating the spatial directions of eye and arm movements},
author = {Diana J. Gorbet and Lauren E. Sergio},
doi = {10.1016/j.brainres.2009.05.057},
year = {2009},
date = {2009-01-01},
journal = {Brain Research},
volume = {1284},
pages = {77--88},
abstract = {Many of our daily movements use visual information to guide our arms toward objects of interest. Typically, these visually guided movements involve first focusing our gaze on the intended target and then reaching toward the direction of our gaze. The literature on eye-hand coordination provides a great deal of evidence that circuitry in the brain exists which can couple eye and arm movements. Moving both of these effectors towards a common spatial direction may be a default setting used by the brain to simplify the planning of movements. We tested this idea in 20 subjects using two experimental tasks. In a "Standard" condition, the eyes and a cursor were guided to the same spatial location by moving the arm (on a touchpad) and the eyes in the same direction. In a "Dissociated" condition, the eye and cursor were again guided to the same spatial location but the arm was required to move in a direction opposite to the eyes to successfully achieve this goal. In this study, we observed that dissociating the directions of eye and arm movement significantly changed the kinematic properties of both effectors including the latency and peak velocity of eye movements and the curvature of hand-path trajectories. Thus, forcing the brain to plan simultaneous eye and arm movements in different directions alters some of the basic (and often stereotyped) characteristics of motor responses. We suggest that interference with the function of a neural network that couples gaze and reach to congruent spatial locations underlies these kinematic alterations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

H. S. Greenwald; David C. Knill

Cue integration outside central fixation: A study of grasping in depth Journal Article

In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009.

Abstract | BibTeX

@article{Greenwald2009,
title = {Cue integration outside central fixation: A study of grasping in depth},
author = {H. S. Greenwald and David C. Knill},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {2},
pages = {1--16},
abstract = {We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected. We measured how subjects integrated these cues to orient their hands when grasping oriented discs at different eccentricities and distances from the horopter. Subjects relied increasingly less on binocular disparity as targets' retinal eccentricity and distance from the horopter increased. The measured cue influences were consistent with what would be predicted from the relative cue reliabilities at the various target locations. Our results showed that relative reliability affects how cues influence motor control and that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stefan Grondelaers; Dirk Speelman; Denis Drieghe; Marc Brysbaert; Dirk Geeraerts

Introducing a new entity into discourse: Comprehension and production evidence for the status of Dutch er "there" as a higher-level expectancy monitor Journal Article

In: Acta Psychologica, vol. 130, no. 2, pp. 1–33, 2009.

Abstract | Links | BibTeX

@article{Grondelaers2009,
title = {Introducing a new entity into discourse: Comprehension and production evidence for the status of Dutch er "there" as a higher-level expectancy monitor},
author = {Stefan Grondelaers and Dirk Speelman and Denis Drieghe and Marc Brysbaert and Dirk Geeraerts},
doi = {10.1016/j.actpsy.2008.11.003},
year = {2009},
date = {2009-01-01},
journal = {Acta Psychologica},
volume = {130},
number = {2},
pages = {1--33},
publisher = {Elsevier B.V.},
abstract = {This paper reports on the ways in which new entities are introduced into discourse. First, we present the evidence in support of a model of indefinite reference processing based on three principles: the listener's ability to make predictive inferences in order to decrease the unexpectedness of upcoming words, the availability to the speaker of grammatical constructions that customize predictive inferences, and the use of "expectancy monitors" to signal and facilitate the introduction of highly unpredictable entities. We provide evidence that one of these expectancy monitors in Dutch is the post-verbal variant of existential er (the equivalent of the unstressed existential "there" in English). In an eye-tracking experiment we demonstrate that the presence of er decreases the processing difficulties caused by low subject expectancy. A corpus-based regression analysis subsequently confirms that the production of er is determined almost exclusively by seven parameters of low subject expectancy. Together, the comprehension and production data suggest that while existential er functions as an expectancy monitor in much the same way as speech disfluencies (hesitations, pauses and filled pauses), er is a higher-level expectancy monitor because it is available in spoken and written discourse and because it is produced more systematically than any disfluency.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Todd M. Herrington; Nicolas Y. Masse; Karim J. Hachmeh; Jackson E. T. Smith; John A. Assad; Erik P. Cook

The effect of microsaccades on the correlation between neural activity and behavior in middle temporal, ventral intraparietal, and lateral intraparietal areas Journal Article

In: Journal of Neuroscience, vol. 29, no. 18, pp. 5793–5805, 2009.

Abstract | Links | BibTeX

@article{Herrington2009,
title = {The effect of microsaccades on the correlation between neural activity and behavior in middle temporal, ventral intraparietal, and lateral intraparietal areas},
author = {Todd M. Herrington and Nicolas Y. Masse and Karim J. Hachmeh and Jackson E. T. Smith and John A. Assad and Erik P. Cook},
doi = {10.1523/JNEUROSCI.4412-08.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {18},
pages = {5793--5805},
abstract = {It is widely reported that the activity of single neurons in visual cortex is correlated with the perceptual decision of the subject. The strength of this correlation has implications for the neuronal populations generating the percepts. Here we asked whether microsaccades, which are small, involuntary eye movements, contribute to the correlation between neural activity and behavior. We analyzed data from three different visual detection experiments, with neural recordings from the middle temporal (MT), lateral intraparietal (LIP), and ventral intraparietal (VIP) areas. All three experiments used random dot motion stimuli, with the animals required to detect a transient or sustained change in the speed or strength of motion. We found that microsaccades suppressed neural activity and inhibited detection of the motion stimulus, contributing to the correlation between neural activity and detection behavior. Microsaccades accounted for as much as 19% of the correlation for area MT, 21% for area LIP, and 17% for VIP. While microsaccades only explain part of the correlation between neural activity and behavior, their effect has implications when considering the neuronal populations underlying perceptual decisions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Susanne Hertel; Andreas Sprenger; Christine Klein; Detlef Kömpf; Christoph Helmchen; Hubert Kimmig

Different saccadic abnormalities in PINK1 mutation carriers and in patients with non-genetic Parkinson's disease Journal Article

In: Journal of Neurology, vol. 256, no. 7, pp. 1192–1194, 2009.

Links | BibTeX

@article{Hertel2009,
title = {Different saccadic abnormalities in PINK1 mutation carriers and in patients with non-genetic Parkinson's disease},
author = {Susanne Hertel and Andreas Sprenger and Christine Klein and Detlef Kömpf and Christoph Helmchen and Hubert Kimmig},
doi = {10.1007/s00415-009-5092-8},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurology},
volume = {256},
number = {7},
pages = {1192--1194},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Richard W. Hertle; Joost Felius; Dongsheng Yang; Matthew Kaufman

Eye muscle surgery for infantile nystagmus syndrome in the first two years of life Journal Article

In: Clinical Ophthalmology, vol. 3, no. 1, pp. 615–624, 2009.

Abstract | Links | BibTeX

@article{Hertle2009,
title = {Eye muscle surgery for infantile nystagmus syndrome in the first two years of life},
author = {Richard W. Hertle and Joost Felius and Dongsheng Yang and Matthew Kaufman},
doi = {10.2147/OPTH.S7541},
year = {2009},
date = {2009-01-01},
journal = {Clinical Ophthalmology},
volume = {3},
number = {1},
pages = {615--624},
abstract = {PURPOSE: To report visual and electrophysiological effects of eye muscle surgery in young patients with infantile nystagmus syndrome (INS). METHODS: Prospective, interventional case cohort of 19 patients aged under 24 months who were operated on for combinations of strabismus, an anomalous head posture, and nystagmus. All patients were followed at least nine months. Outcome measures, part of an institutionally approved study, included Teller acuity, head position, strabismic deviation, and eye movement recordings, from which waveform types and a nystagmus optimal foveation fraction (NOFF) were derived. Computerized parametric and nonparametric statistical analyses of the data were performed on both individual and group data using standard software. RESULTS: Age averaged 17.7 months (13.1-month follow-up). Thirteen (68%) patients had associated optic nerve or retinal disease. 42% had amblyopia, 68% had refractive errors. Group means in binocular Teller acuity (P < 0.05), strabismic deviation (P < 0.05), head posture (P < 0.001), and the NOFF measures (P < 0.01) from eye movement recordings improved in all patients. There was a change in null zone waveforms to more favorable jerk types. There were no reoperations or surgical complications. CONCLUSIONS: Surgery on the extraocular muscles in patients aged less than two years with INS results in improvements in multiple aspects of ocular motor and visual function.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Valerie Higenell; Brian J. White; Joshua R. Hwang; Douglas P. Munoz

Localizing the neural substrate of reflexive covert orienting Journal Article

In: Journal of Eye Movement Research, vol. 6, no. 1, pp. 1–14, 2009.

Abstract | Links | BibTeX

@article{Higenell2009,
title = {Localizing the neural substrate of reflexive covert orienting},
author = {Valerie Higenell and Brian J. White and Joshua R. Hwang and Douglas P. Munoz},
doi = {10.16910/jemr.6.1.1},
year = {2009},
date = {2009-01-01},
journal = {Journal of Eye Movement Research},
volume = {6},
number = {1},
pages = {1--14},
abstract = {The capture of covert spatial attention by salient visual events influences subsequent gaze behavior. A task-irrelevant stimulus (cue) can reduce (attention capture, AC) or prolong (inhibition of return, IOR) saccade reaction time to a subsequent target stimulus, depending on the cue-target delay. Here we investigated the mechanisms that underlie the sensory-based account of AC/IOR by manipulating the visual processing stage where the cue and target interact. In Experiment 1, liquid crystal shutter goggles were used to test whether AC/IOR occur at a monocular versus binocular processing stage (before versus after signals from both eyes converge). In Experiment 2, we tested whether visual orientation-selective mechanisms are critical for AC/IOR by using oriented Gabor stimuli. We found that the magnitude of AC and IOR was not different between monocular and interocular viewing conditions, or between iso- and ortho-oriented cue-target interactions. The results suggest that the visual mechanisms that contribute to AC/IOR arise at an orientation-independent binocular processing stage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jesse Hochstadt

Set-shifting and the on-line processing of relative clauses in Parkinson's disease: Results from a novel eye-tracking method Journal Article

In: Cortex, vol. 45, no. 8, pp. 991–1011, 2009.

Abstract | Links | BibTeX

@article{Hochstadt2009,
title = {Set-shifting and the on-line processing of relative clauses in Parkinson's disease: Results from a novel eye-tracking method},
author = {Jesse Hochstadt},
doi = {10.1016/j.cortex.2009.03.010},
year = {2009},
date = {2009-01-01},
journal = {Cortex},
volume = {45},
number = {8},
pages = {991--1011},
publisher = {Elsevier Srl},
abstract = {Past research indicates that in Parkinson's disease (PD), set-shifting deficits cause impaired comprehension of sentences containing restrictive relative clauses (RCs). Some research also suggests that verbal working memory deficits impair comprehension of long-distance (LD) dependencies in sentences with center-embedded RCs. To test whether these deficits impair comprehension by affecting on-line processing, we tracked patients' eye movements as they matched pictures with sentences with final- or center-embedded RCs (e.g., The queen was kicking the cook who was fat, The queen who was kicking the cook was thin) and active or passive verbs. Decreases in looks to distracters ruled out at the transitive verb (e.g., a cook kicking a queen) and the adjective (a fat queen kicking a thin cook) reflected how effective processing was at those points. Though patients showed greater difficulty comprehending center-embedded and passive sentences, set-shifting errors correlated with comprehension of all sentences. Consistent with this, patients with poorer comprehension exhibited impaired on-line processing of both center-embedded and final RCs (for which comprehension was better due to their grammatical simplicity), and these effects correlated with set-shifting errors. We consider two possible explanations for this apparently general RC-processing deficit. First, because RCs are infrequent, set-shifting may be needed to override the processor's expectations for higher-frequency structures. Second, because restrictive RCs typically refer to information already in the discourse context, set-shifting may be needed to redirect attention from linguistic foreground to background information. Eye-tracking data indicated no difficulty processing LD dependencies; correlations of verbal working memory with comprehension of passive and center-embedded sentences may reflect off-line use of memory. In trials with passive verbs, patients looked toward the verb distracter before even processing the verb. This effect was larger than that previously seen for young participants, suggesting that PD may amplify a normal bias to assume the subject noun is the agent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Timothy L. Hodgson; Benjamin A. Parris; Nicola J. Gregory; Tracey Jarvis

The saccadic Stroop effect: Evidence for involuntary programming of eye movements by linguistic cues Journal Article

In: Vision Research, vol. 49, no. 5, pp. 569–574, 2009.

Abstract | Links | BibTeX

@article{Hodgson2009,
title = {The saccadic Stroop effect: Evidence for involuntary programming of eye movements by linguistic cues},
author = {Timothy L. Hodgson and Benjamin A. Parris and Nicola J. Gregory and Tracey Jarvis},
doi = {10.1016/j.visres.2009.01.001},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {5},
pages = {569--574},
publisher = {Elsevier Ltd},
abstract = {The effect of automatic priming of behaviour by linguistic cues is well established. However, as yet these effects have not been directly demonstrated for eye movement responses. We investigated the effect of linguistic cues on eye movements using a modified version of the Stroop task in which a saccade was made to the location of a peripheral colour patch which matched the "ink" colour of a centrally presented word cue. The words were either colour words ("red", "green", "blue", "yellow") or location words ("up", "down", "left", "right"). As in the original version of the Stroop task the identity of the word could be either congruent or incongruent with the response location. The results showed that oculomotor programming was influenced by word identity, even though the written word provided no task relevant information. Saccade latency was increased on incongruent trials and an increased frequency of error saccades was observed in the direction congruent with the word identity. The results argue against traditional distinctions between reflexive and voluntary programming of saccades and suggest that linguistic cues can also influence eye movement programming in an automatic manner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lee Hogarth; Anthony Dickinson; Theodora Duka

Detection versus sustained attention to drug cues have dissociable roles in mediating drug seeking behavior Journal Article

In: Experimental and Clinical Psychopharmacology, vol. 17, no. 1, pp. 21–30, 2009.

@article{Hogarth2009,
title = {Detection versus sustained attention to drug cues have dissociable roles in mediating drug seeking behavior},
author = {Lee Hogarth and Anthony Dickinson and Theodora Duka},
doi = {10.1080/19368623.2011.605037},
year = {2009},
date = {2009-01-01},
journal = {Experimental and Clinical Psychopharmacology},
volume = {17},
number = {1},
pages = {21--30},
abstract = {It is commonly thought that attentional bias for drug cues plays an important role in motivating human drug-seeking behavior. To assess this claim, two groups of smokers were trained in a discrimination task in which a tobacco-seeking response was rewarded only in the presence of 1 particular stimulus (the S+). The key manipulation was that whereas 1 group could control the duration of S+ presentation, for the second group, this duration was fixed. The results showed that the fixed-duration group acquired a sustained attentional bias to the S+ over training, indexed by greater dwell time and fixation count, which emerged in parallel with the control exerted by the S+ over tobacco-seeking behavior. By contrast, the controllable-duration group acquired no sustained attentional bias for S+ and instead used efficient detection of the S+ to achieve a comparable level of control over tobacco seeking. These data suggest that detection and sustained attention to drug cues have dissociable roles in enabling drug cues to motivate drug-seeking behavior, which has implications for attentional retraining as a treatment for addiction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Andrew Hollingworth; Steven J. Luck

The role of visual working memory in the control of gaze during visual search Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 4, pp. 936–949, 2009.

@article{Hollingworth2009,
title = {The role of visual working memory in the control of gaze during visual search},
author = {Andrew Hollingworth and Steven J. Luck},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {4},
pages = {936--949},
abstract = {We investigated the interactions among visual working memory (VWM), attention, and gaze control in a visual search task that was performed while a color was held in VWM for a concurrent discrimination task. In the search task, participants were required to foveate a cued item within a circular array of colored objects. During the saccade to the target, the array was sometimes rotated so that the eyes landed midway between the target object and an adjacent distractor object, necessitating a second saccade to foveate the target. When the color of the adjacent distractor matched a color being maintained in VWM, execution of this secondary saccade was impaired, indicating that the current contents of VWM bias saccade targeting mechanisms that direct gaze toward target objects during visual search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Robin Hawes

Vision and reality: Relativity in art Journal Article

In: Digital Creativity, vol. 20, no. 3, pp. 177–186, 2009.

@article{Hawes2009,
title = {Vision and reality: Relativity in art},
author = {Robin Hawes},
doi = {10.1080/14626260903083587},
year = {2009},
date = {2009-01-01},
journal = {Digital Creativity},
volume = {20},
number = {3},
pages = {177--186},
abstract = {Artist and researcher, Robin Hawes, presents a recently completed art/science collaboration which examined the processes undertaken by the eye in providing sensory data to the brain and aimed to explore the internally constructive and idiosyncratic aspects of visual perception. With the physiology of the retina providing inconsistent quality of information across our field of view, the project set out to reveal the disparity between the visual information gathered by our eyes and the conscious picture of 'reality' formed in our minds. The paper will map out the psychological, physiological and philosophical basis for the research, as well as presenting images produced by the project. In essence, each time someone contemplates a work of art, the work of art is re-constructed 'internally'. This project set out, in part at least, to make 'visible' this hitherto internal, idiosyncratic, unique and unshared neurological event.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Benjamin Y. Hayden; Jack L. Gallant

Combined effects of spatial and feature-based attention on responses of V4 neurons Journal Article

In: Vision Research, vol. 49, no. 10, pp. 1182–1187, 2009.

@article{Hayden2009,
title = {Combined effects of spatial and feature-based attention on responses of V4 neurons},
author = {Benjamin Y. Hayden and Jack L. Gallant},
doi = {10.1016/j.visres.2008.06.011},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {10},
pages = {1182--1187},
publisher = {Elsevier Ltd},
abstract = {Attention is thought to be controlled by a specialized fronto-parietal network that modulates the responses of neurons in sensory and association cortex. However, the principles by which this network affects the responses of these sensory and association neurons remain unknown. In particular, it remains unclear whether different forms of attention, such as spatial and feature-based attention, independently modulate responses of single neurons. We recorded responses of single V4 neurons in a task that controls both forms of attention independently. We find that the combined effects of spatial and feature-based attention can be described as the sum of independent processes with a small super-additive interaction term. This pattern of effects demonstrates that the spatial and feature-based aspects of the attentional control system can independently affect responses of single neurons. These results are consistent with the idea that spatial and feature-based attention are controlled by distinct neural substrates whose effects combine synergistically to influence responses of visual neurons.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Benjamin Y. Hayden; David V. Smith; Michael L. Platt

Electrophysiological correlates of default-mode processing in macaque posterior cingulate cortex Journal Article

In: Proceedings of the National Academy of Sciences, vol. 106, no. 14, pp. 5948–5953, 2009.

@article{Hayden2009a,
title = {Electrophysiological correlates of default-mode processing in macaque posterior cingulate cortex},
author = {Benjamin Y. Hayden and David V. Smith and Michael L. Platt},
doi = {10.1073/pnas.0812035106},
year = {2009},
date = {2009-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {106},
number = {14},
pages = {5948--5953},
abstract = {During the course of daily activity, our level of engagement with the world varies on a moment-to-moment basis. Although these fluctuations in vigilance have critical consequences for our thoughts and actions, almost nothing is known about the neuronal substrates governing such dynamic variations in task engagement. We investigated the hypothesis that the posterior cingulate cortex (CGp), a region linked to default-mode processing by hemodynamic and metabolic measures, controls such variations. We recorded the activity of single neurons in CGp in 2 macaque monkeys performing simple tasks in which their behavior varied from vigilant to inattentive. We found that firing rates were reliably suppressed during task performance and returned to a higher resting baseline between trials. Importantly, higher firing rates predicted errors and slow behavioral responses, and were also observed during cued rest periods when monkeys were temporarily liberated from exteroceptive vigilance. These patterns of activity were not observed in the lateral intraparietal area, an area linked to the frontoparietal attention network. Our findings provide physiological confirmation that CGp mediates exteroceptive vigilance and are consistent with the idea that CGp is part of the "default network" of brain areas associated with control of task engagement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

L. Elliot Hong; Kathleen A. Turano; Hugh B. O'Neill; Lei Hao; Ikwunga Wonodi; Robert P. McMahon; Gunvant K. Thaker

Is motion perception deficit in schizophrenia a consequence of eye-tracking abnormality? Journal Article

In: Biological Psychiatry, vol. 65, no. 12, pp. 1079–1085, 2009.

@article{Hong2009,
title = {Is motion perception deficit in schizophrenia a consequence of eye-tracking abnormality?},
author = {L. Elliot Hong and Kathleen A. Turano and Hugh B. O'Neill and Lei Hao and Ikwunga Wonodi and Robert P. McMahon and Gunvant K. Thaker},
doi = {10.1016/j.biopsych.2008.10.021},
year = {2009},
date = {2009-01-01},
journal = {Biological Psychiatry},
volume = {65},
number = {12},
pages = {1079--1085},
publisher = {Society of Biological Psychiatry},
abstract = {Background: Studies have shown that schizophrenia patients have motion perception deficit, which was thought to cause eye-tracking abnormality in schizophrenia. However, eye movement closely interacts with motion perception. The known eye-tracking difficulties in schizophrenia patients may interact with their motion perception. Methods: Two speed discrimination experiments were conducted in a within-subject design. In experiment 1, the stimulus duration was 150 msec to minimize the chance of eye-tracking occurrence. In experiment 2, the duration was increased to 300 msec, increasing the possibility of eye movement intrusion. Regular eye-tracking performance was evaluated in a third experiment. Results: At 150 msec, speed discrimination thresholds did not differ between schizophrenia patients (n = 38) and control subjects (n = 33). At 300 msec, patients had significantly higher thresholds than control subjects (p = .03). Furthermore, frequencies of eye tracking during the 300 msec stimulus were significantly correlated with speed discrimination in control subjects (p = .01) but not in patients, suggesting that eye-tracking initiation may benefit control subjects but not patients. The frequency of eye tracking during speed discrimination was not significantly related to regular eye-tracking performance. Conclusions: Speed discrimination, per se, is not impaired in schizophrenia patients. The observed abnormality appears to be a consequence of impairment in generating or integrating the feedback information from eye movements. This study introduces a novel approach to motion perception studies and highlights the importance of concurrently measuring eye movements to understand interactions between these two systems; the results argue for a conceptual revision regarding motion perception abnormality in schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Janet H. Hsiao; Garrison W. Cottrell

Not all visual expertise is holistic, but it may be leftist: The case of Chinese character recognition Journal Article

In: Psychological Science, vol. 20, no. 4, pp. 455–463, 2009.

@article{Hsiao2009,
title = {Not all visual expertise is holistic, but it may be leftist: The case of Chinese character recognition},
author = {Janet H. Hsiao and Garrison W. Cottrell},
year = {2009},
date = {2009-01-01},
journal = {Psychological Science},
volume = {20},
number = {4},
pages = {455--463},
abstract = {We examined whether two purportedly face-specific effects, holistic processing and the left-side bias, can also be observed in expert-level processing of Chinese characters, which are logographic and share many properties with faces. Non-Chinese readers (novices) perceived these characters more holistically than Chinese readers (experts). Chinese readers had a better awareness of the components of characters, which were not clearly separable to novices. This finding suggests that holistic processing is not a marker of general visual expertise; rather, holistic processing depends on the features of the stimuli and the tasks typically performed on them. In contrast, results for the left-side bias were similar to those obtained in studies of face perception. Chinese readers exhibited a left-side bias in the perception of mirror-symmetric characters, whereas novices did not; this effect was also reflected in eye fixations. Thus, the left-side bias may be a marker of visual expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

P. -J. Hsieh; P. U. Tse

Feature mixing rather than feature replacement during perceptual filling-in Journal Article

In: Vision Research, vol. 49, no. 4, pp. 439–450, 2009.

@article{Hsieh2009,
title = {Feature mixing rather than feature replacement during perceptual filling-in},
author = {P. -J. Hsieh and P. U. Tse},
doi = {10.1016/j.visres.2008.12.004},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {4},
pages = {439--450},
abstract = {'Filling-in' occurs when a retinally stabilized object subjectively appears to vanish following perceptual fading of its boundaries. The term 'filling-in' literally means that information about the apparently vanished object is lost and replaced solely by information arising from the surrounding background. However, we find evidence that the mechanism of 'filling-in' can actually involve a process of 'feature mixing' rather than 'feature replacement,' whereby features on either side of a perceptually faded boundary merge. Here we investigate the properties of feature mixing and specify certain conditions under which such mixing occurs. Our results show that, when using visual stimuli composed of spatially alternating stripes containing different luminances or motion signals, and when using the neon-color-spreading paradigm, the filled-in luminance, motion, or color is approximately the area and magnitude weighted average of the background and the foreground luminance, motion, or color, respectively. Together, these results demonstrate that, under at least certain conditions, 'filling-in' may involve a process of feature mixing or feature averaging rather than one of feature replacement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

P. -J. Hsieh; P. U. Tse

Motion fading and the motion aftereffect share a common process of neural adaptation Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 4, pp. 724–733, 2009.

@article{Hsieh2009a,
title = {Motion fading and the motion aftereffect share a common process of neural adaptation},
author = {P. -J. Hsieh and P. U. Tse},
doi = {10.3758/APP.71.4.724},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {4},
pages = {724--733},
abstract = {After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here, we show that this motion fading occurs not only for slowly moving stimuli, but also for stimuli moving at high speed; after prolonged viewing of high-speed stimuli, the stimuli appear to slow down but not to stop. We report psychophysical evidence that the same neural adaptation process likely gives rise to motion fading and to the motion aftereffect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Po-Jang Hsieh; Peter U. Tse

Microsaccade rate varies with subjective visibility during motion-induced blindness Journal Article

In: PLoS ONE, vol. 4, no. 4, pp. e5163, 2009.

@article{Hsieh2009b,
title = {Microsaccade rate varies with subjective visibility during motion-induced blindness},
author = {Po-Jang Hsieh and Peter U. Tse},
doi = {10.1371/journal.pone.0005163},
year = {2009},
date = {2009-01-01},
journal = {PLoS ONE},
volume = {4},
number = {4},
pages = {e5163},
abstract = {Motion-induced blindness (MIB) occurs when a dot embedded in a motion field subjectively vanishes. Here we report the first psychophysical data concerning effects of microsaccade/eyeblink rate upon perceptual switches during MIB. We find that the rate of microsaccades/eyeblink rises before and after perceptual transitions from not seeing to seeing the dot, and decreases before perceptual transitions from seeing it to not seeing it. In addition, event-related fMRI data reveal that, when a dot subjectively reappears during MIB, the blood oxygen-level dependent (BOLD) signal increases in V1v and V2v and decreases in contralateral hMT+. These BOLD signal changes observed upon perceptual state changes in MIB could be driven by the change of perceptual states and/or a confounding factor, such as the microsaccade/eyeblink rate.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yufen Hsieh; Julie E. Boland; Yaxu Zhang; Ming Yan

Limited syntactic parallelism in Chinese ambiguity resolution Journal Article

In: Language and Cognitive Processes, vol. 24, no. 7-8, pp. 1227–1264, 2009.

@article{Hsieh2009c,
title = {Limited syntactic parallelism in Chinese ambiguity resolution},
author = {Yufen Hsieh and Julie E. Boland and Yaxu Zhang and Ming Yan},
doi = {10.1080/01690960802050375},
year = {2009},
date = {2009-01-01},
journal = {Language and Cognitive Processes},
volume = {24},
number = {7-8},
pages = {1227--1264},
abstract = {Using the stop-making-sense paradigm (Boland, Tanenhaus, Garnsey, & Carlson, 1995) and eye-tracking during reading, we examined the processing of the Chinese Verb NP1 de NP2 construction, which is temporarily ambiguous between a complement clause (CC) analysis and a relative clause (RC) analysis. Resolving the ambiguity as the more complex, less preferred CC was costly under some conditions but not under others. We took this as evidence for a limited parallel processor, such as Tabor and Hutchins' (2004) SOPARSE, that maintains multiple syntactic analyses across several words of a sentence when the structures are each supported by the available constraints.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Emmanuel Guzman-Martinez; Parkson Leung; Steven L. Franconeri; Marcia Grabowecky; Satoru Suzuki

Rapid eye-fixation training without eyetracking Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 491–496, 2009.

@article{GuzmanMartinez2009,
title = {Rapid eye-fixation training without eyetracking},
author = {Emmanuel Guzman-Martinez and Parkson Leung and Steven L. Franconeri and Marcia Grabowecky and Satoru Suzuki},
doi = {10.3758/PBR.16.3.491},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {3},
pages = {491--496},
abstract = {Maintenance of stable central eye fixation is crucial for a variety of behavioral, electrophysiological, and neuroimaging experiments. Naive observers in these experiments are not typically accustomed to fixating, either requiring the use of cumbersome and costly eyetracking or producing confounds in results. We devised a flicker display that produced an easily detectable visual phenomenon whenever the eyes moved. A few minutes of training using this display dramatically improved the accuracy of eye fixation while observers performed a demanding spatial attention cuing task. The same amount of training using control displays did not produce significant fixation improvements, and some observers consistently made eye movements to the peripheral attention cue, contaminating the cuing effect. Our results indicate that (1) eye fixation can be rapidly improved in naive observers by providing real-time feedback about eye movements, and (2) our simple flicker technique provides an easy and effective method for providing this feedback.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tuomo Häikiö; Raymond Bertram; Jukka Hyönä; Pekka Niemi

Development of the letter identity span in reading: Evidence from the eye movement moving window paradigm Journal Article

In: Journal of Experimental Child Psychology, vol. 102, no. 2, pp. 167–181, 2009.


@article{Haeikioe2009,
title = {Development of the letter identity span in reading: Evidence from the eye movement moving window paradigm},
author = {Tuomo Häikiö and Raymond Bertram and Jukka Hyönä and Pekka Niemi},
doi = {10.1016/j.jecp.2008.04.002},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Child Psychology},
volume = {102},
number = {2},
pages = {167--181},
publisher = {Elsevier Inc.},
abstract = {By means of the moving window paradigm, we examined how many letters can be identified during a single eye fixation and whether this letter identity span changes as a function of reading skill. The results revealed that 8-year-old Finnish readers identify approximately 5 characters, 10-year-old readers identify approximately 7 characters, and 12-year-old and adult readers identify approximately 9 characters to the right of fixation. Comparison with earlier studies revealed that the letter identity span is smaller than the span for identifying letter features and that it is as wide in Finnish as in English. Furthermore, the letter identity span of faster readers of each age group was larger than that of slower readers, indicating that slower readers, unlike faster readers, allocate most of their processing resources to foveally fixated words. Finally, slower second graders were largely not disrupted by smaller windows, suggesting that their word decoding skill is not yet fully automatized.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Glenda Halliday; Maria Trinidad Herrero; Karen Murphy; Heather McCann; Francisco Ros-Bernal; Carlos Barcia; Hideo Mori; Francisco J. Blesa; Jose A. Obeso

No Lewy pathology in monkeys with over 10 years of severe MPTP Parkinsonism Journal Article

In: Movement Disorders, vol. 24, no. 10, pp. 1519–1545, 2009.


@article{Halliday2009,
title = {No Lewy pathology in monkeys with over 10 years of severe MPTP Parkinsonism},
author = {Glenda Halliday and Maria Trinidad Herrero and Karen Murphy and Heather McCann and Francisco Ros-Bernal and Carlos Barcia and Hideo Mori and Francisco J. Blesa and Jose A. Obeso},
doi = {10.1002/mds.22481},
year = {2009},
date = {2009-01-01},
journal = {Movement Disorders},
volume = {24},
number = {10},
pages = {1519--1545},
abstract = {The recent knowledge that 10 years after transplantation surviving human fetal neurons adopt the histopathology of Parkinson's disease suggests that Lewy body formation takes a decade to achieve. To determine whether similar histopathology occurs in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-primate models over a similar timeframe, the brains of two adult monkeys made parkinsonian in their youth with intermittent injections of MPTP were studied. Despite substantial nigral degeneration and increased $\alpha$-synuclein immunoreactivity within surviving neurons, there was no evidence of Lewy body formation. This suggests that MPTP-induced oxidative stress and inflammation per se are not sufficient for Lewy body formation, or Lewy bodies are human specific.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Derek A. Hamilton; Travis E. Johnson; Edward S. Redhead; Steven P. Verney

Control of rodent and human spatial navigation by room and apparatus cues Journal Article

In: Behavioural Processes, vol. 81, no. 2, pp. 154–169, 2009.


@article{Hamilton2009,
title = {Control of rodent and human spatial navigation by room and apparatus cues},
author = {Derek A. Hamilton and Travis E. Johnson and Edward S. Redhead and Steven P. Verney},
doi = {10.1016/j.beproc.2008.12.003},
year = {2009},
date = {2009-01-01},
journal = {Behavioural Processes},
volume = {81},
number = {2},
pages = {154--169},
abstract = {A growing body of literature indicates that rats prefer to navigate in the direction of a goal in the environment (directional responding) rather than to the precise location of the goal (place navigation). This paper provides a brief review of this literature with an emphasis on recent findings in the Morris water task. Four experiments designed to extend this work to humans in a computerized, virtual Morris water task are also described. Special emphasis is devoted to how directional responding and place navigation are influenced by room and apparatus cues, and how these cues control distinct components of navigation to a goal. Experiments 1 and 2 demonstrate that humans, like rats, perform directional responses when cues from the apparatus are present, while Experiment 3 demonstrates that place navigation predominates when apparatus cues are eliminated. In Experiment 4, an eyetracking system measured gaze location in the virtual environment dynamically as participants navigated from a start point to the goal. Participants primarily looked at room cues during the early segment of each trial, but primarily focused on the apparatus as the trial progressed, suggesting distinct, sequential stimulus functions. Implications for computational modeling of navigation in the Morris water task and related tasks are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John M. Henderson; Myriam Chanceaux; Tim J. Smith

The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements Journal Article

In: Journal of Vision, vol. 9, no. 1, pp. 1–8, 2009.


@article{Henderson2009b,
title = {The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements},
author = {John M. Henderson and Myriam Chanceaux and Tim J. Smith},
doi = {10.1167/9.1.32},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {1},
pages = {1--8},
abstract = {We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John M. Henderson; George L. Malcolm; Charles Schandl

Searching in the dark: Cognitive relevance drives attention in real-world scenes Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 850–856, 2009.


@article{Henderson2009,
title = {Searching in the dark: Cognitive relevance drives attention in real-world scenes},
author = {John M. Henderson and George L. Malcolm and Charles Schandl},
doi = {10.3758/PBR.16.5.850},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {5},
pages = {850--856},
abstract = {We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John M. Henderson; Tim J. Smith

How are eye fixation durations controlled during scene viewing? Further evidence from a scene onset delay paradigm Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1055–1082, 2009.


@article{Henderson2009a,
title = {How are eye fixation durations controlled during scene viewing? Further evidence from a scene onset delay paradigm},
author = {John M. Henderson and Tim J. Smith},
doi = {10.1080/13506280802685552},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1055--1082},
abstract = {Recent research on eye movements during scene viewing has focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. In two scene memorization and one visual search experiments, the scene was removed from view during critical fixations for a predetermined delay, and then restored following the delay. Experiment 1 compared filled (pattern mask) and unfilled (grey field) delays. Experiment 2 compared random to blocked delays. Experiment 3 extended the results to a visual search task. The results demonstrate that fixation durations in scene viewing comprise two fixation populations. One population remains relatively constant across delay, and the second population increases with scene onset delay. The results are consistent with a mixed eye movement control model that incorporates an autonomous control mechanism with process monitoring. The results suggest that a complete gaze control model will have to account for both fixation location and fixation duration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Timothy J. Slattery

Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 6, pp. 1969–1975, 2009.


@article{Slattery2009,
title = {Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements},
author = {Timothy J. Slattery},
doi = {10.1037/a0016894},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {6},
pages = {1969--1975},
abstract = {An eye movement experiment was conducted to investigate whether the processing of a word can be affected by its higher frequency neighbor (HFN). Target words with an HFN (birch) or without one (spruce) were embedded into 2 types of sentence frames: 1 in which the HFN (birth) could fit given the prior sentence context, and 1 in which it could not. The results suggest that words can be misperceived as their HFN, and that top-down information from sentence context strongly modulates this effect. Implications for models of word recognition and eye movements during reading are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Daniel Smilek; Grayden J. F. Solman; Peter Murawski; Jonathan S. A. Carriere

The eyes fixate the optimal viewing position of task-irrelevant words Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 57–61, 2009.


@article{Smilek2009,
title = {The eyes fixate the optimal viewing position of task-irrelevant words},
author = {Daniel Smilek and Grayden J. F. Solman and Peter Murawski and Jonathan S. A. Carriere},
doi = {10.3758/PBR.16.1.57},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {1},
pages = {57--61},
abstract = {We evaluated whether one's eyes tend to fixate the optimal viewing position (OVP) of words even when the words are task irrelevant and should be ignored. Participants completed the standard Stroop task, in which they named the physical color of congruent and incongruent color words without regard to the meanings of the color words. We monitored the horizontal position of the first eye fixation that occurred after the onset of each color word to evaluate whether these fixations would be at the OVP, which is just to the left of word midline. The results showed that (1) the peak of the distribution of eye fixation positions was to the left of the midline of the color words, (2) the majority of the fixations landed on the left side of the color words, and (3) the average leftward displacement of the first fixation from word midline was greater for longer color words. These results suggest that the eyes tend to fixate the OVP of words even when those words are task irrelevant.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Tim J. Smith; John M. Henderson

Facilitation of return during scene viewing Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1083–1108, 2009.


@article{Smith2009,
title = {Facilitation of return during scene viewing},
author = {Tim J. Smith and John M. Henderson},
doi = {10.1080/13506280802678557},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1083--1108},
abstract = {Inhibition of Return (IOR) is a delay in initiating attentional shifts to previously attended locations. It is believed to facilitate attentional exploration of a scene. Computational models of attention have implemented IOR as a simple mechanism for driving attention through a scene. However, evidence for IOR during scene viewing is inconclusive. In this study IOR during scene memorization and in response to sudden onsets at the last (1-back) and penultimate (2-back) fixation location was measured. The results indicate that there is a tendency for saccades to continue the trajectory of the last saccade (Saccadic Momentum), but contrary to the “foraging facilitator” hypothesis of IOR, there is also a distinct population of saccades directed back to the last fixation location, especially in response to onsets. Voluntary return saccades to the 1-back location experience temporal delay but this does not affect their likelihood of occurrence. No localized temporal delay is exhibited at 2-back. These results suggest that IOR exists at the last fixation location during scene memorization but that this temporal delay is overridden by Facilitation of Return. Computational models of attention will fail to capture the pattern of saccadic eye movements during scene viewing unless they model the dynamics of visual encoding and can account for the interaction between Facilitation of Return, Saccadic Momentum, and Inhibition of Return.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


John F. Soechting; John Z. Juveli; Hrishikesh M. Rao

Models for the extrapolation of target motion for manual interception Journal Article

In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1491–1502, 2009.


@article{Soechting2009,
title = {Models for the extrapolation of target motion for manual interception},
author = {John F. Soechting and John Z. Juveli and Hrishikesh M. Rao},
doi = {10.1152/jn.00398.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {3},
pages = {1491--1502},
abstract = {Intercepting a moving target requires a prediction of the target's future motion. This extrapolation could be achieved using sensed parameters of the target motion, e.g., its position and velocity. However, the accuracy of the prediction would be improved if subjects were also able to incorporate the statistical properties of the target's motion, accumulated as they watched the target move. The present experiments were designed to test for this possibility. Subjects intercepted a target moving on the screen of a computer monitor by sliding their extended finger along the monitor's surface. Along any of the six possible target paths, target speed could be governed by one of three possible rules: constant speed, a power law relation between speed and curvature, or the trajectory resulting from a sum of sinusoids. A go signal was given to initiate interception and was always presented when the target had the same speed, irrespective of the law of motion. The dependence of the initial direction of finger motion on the target's law of motion was examined. This direction did not depend on the speed profile of the target, contrary to the hypothesis. However, finger direction could be well predicted by assuming that target location was extrapolated using target velocity and that the amount of extrapolation depended on the distance from the finger to the target. Subsequent analysis showed that the same model of target motion was also used for on-line, visually mediated corrections of finger movement when the motion was initially misdirected.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
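The extrapolation model favored in this abstract — aim at the target's current position plus a velocity-based lead whose size grows with the finger-to-target distance — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function name, the linear gain parameterization, and its value are hypothetical.

```python
import numpy as np

def extrapolated_aim_point(target_pos, target_vel, finger_pos, gain=0.005):
    """Predict an interception aim point by extrapolating the target along
    its velocity, with extrapolation time proportional to the current
    finger-to-target distance (a hypothetical linear parameterization)."""
    target_pos = np.asarray(target_pos, dtype=float)
    target_vel = np.asarray(target_vel, dtype=float)
    finger_pos = np.asarray(finger_pos, dtype=float)
    distance = np.linalg.norm(finger_pos - target_pos)
    lead_time = gain * distance          # seconds of look-ahead, grows with distance
    return target_pos + lead_time * target_vel

# Example: target at the origin moving right at 10 cm/s, finger 100 cm away.
aim = extrapolated_aim_point([0.0, 0.0], [10.0, 0.0], [100.0, 0.0])
```

With the finger far away the predicted aim point leads the target substantially; as the finger closes in, the lead shrinks toward zero, which is the distance dependence the abstract reports.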


Benjamin W. Tatler; Benjamin T. Vincent

The prominence of behavioural biases in eye guidance Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1029–1054, 2009.


@article{Tatler2009,
title = {The prominence of behavioural biases in eye guidance},
author = {Benjamin W. Tatler and Benjamin T. Vincent},
doi = {10.1080/13506280902764539},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1029--1054},
abstract = {When attempting to understand where people look during scene perception, researchers typically focus on the relative contributions of low- and high-level cues. Computational models of the contribution of low-level features to fixation selection, with modifications to incorporate top-down sources of information have been abundant in recent research. However, we are still some way from a model that can explain many of the complexities of eye movement behaviour. Here we show that understanding biases in how we move the eyes can provide powerful new insights into the decision about where to look in complex scenes. A model based solely on these biases and therefore blind to current visual information outperformed popular salience-based approaches. Our data show that incorporating an understanding of oculomotor behavioural biases into models of eye guidance is likely to significantly improve our understanding of where we choose to fixate in natural scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Abtine Tavassoli; Dario L. Ringach

Dynamics of smooth pursuit maintenance Journal Article

In: Journal of Neurophysiology, vol. 102, no. 1, pp. 110–118, 2009.


@article{Tavassoli2009,
title = {Dynamics of smooth pursuit maintenance},
author = {Abtine Tavassoli and Dario L. Ringach},
doi = {10.1152/jn.91320.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {1},
pages = {110--118},
abstract = {Smooth pursuit eye movements allow the approximate stabilization of a moving visual target on the retina. To study the dynamics of smooth pursuit, we measured eye velocity during the visual tracking of a Gabor target moving at a constant velocity plus a noisy perturbation term. The optimal linear filter linking fluctuations in target velocity to evoked fluctuations in eye velocity was computed. These filters predicted eye velocity to novel stimuli in the 0- to 15-Hz band with good accuracy, showing that pursuit maintenance is approximately linear under these conditions. The shape of the filters were indicative of fast dynamics, with pure delays of merely approximately 67 ms, times-to-peak of approximately 115 ms, and effective integration times of approximately 45 ms. The gain of the system, reflected in the amplitude of the filters, was inversely proportional to the size of the velocity fluctuations and independent of the target mean speed. A modest slow-down of the dynamics was observed as the contrast of the target decreased. Finally, the temporal filters recovered during fixation and pursuit were similar in shape, supporting the notion that they might share a common underlying circuitry. These findings show that the visual tracking of moving objects by the human eye includes a reflexive-like pathway with high contrast sensitivity and fast dynamics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
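
The system-identification approach described in this abstract can be illustrated with a minimal sketch (not the authors' code): assuming an approximately white-noise velocity perturbation, the optimal (Wiener) linear filter reduces to the input-output cross-correlation normalized by the stimulus variance. All names and parameter values below are hypothetical, chosen only to echo the delay and integration-time scales reported in the abstract.

```python
import numpy as np

def estimate_linear_filter(stimulus, response, n_lags, dt):
    """Cross-correlation estimate of the impulse response linking
    stimulus-velocity fluctuations to eye-velocity fluctuations.
    For an approximately white stimulus, the optimal linear filter
    reduces to the input-output cross-correlation divided by the
    stimulus variance."""
    s = stimulus - stimulus.mean()
    r = response - response.mean()
    n = len(s)
    h = np.empty(n_lags)
    for lag in range(n_lags):
        # correlate the response at time t with the stimulus at t - lag
        h[lag] = np.dot(r[lag:], s[:n - lag]) / ((n - lag) * s.var())
    return h / dt  # rescale to a continuous-time impulse response

# Synthetic check with a known filter: an exponential decay
# (45 ms integration time) behind a 67 ms pure delay, roughly the
# time scales reported in the abstract.
rng = np.random.default_rng(0)
dt = 0.002                                  # 500 Hz sampling
t = np.arange(0.0, 0.4, dt)
true_h = np.where(t >= 0.067, np.exp(-(t - 0.067) / 0.045), 0.0)
stim = rng.normal(size=20000)               # white-noise velocity perturbation
resp = np.convolve(stim, true_h * dt)[:len(stim)]
est_h = estimate_linear_filter(stim, resp, len(t), dt)
peak_time = est_h.argmax() * dt             # lands just after the pure delay
```

With real data, the synthetic pair above would be replaced by the recorded target- and eye-velocity traces; for a non-white perturbation, the cross-correlation would instead be divided by the stimulus power spectrum (full Wiener deconvolution).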

Alisdair J. G. Taylor; Samuel B. Hutton

The effects of task instructions on pro and antisaccade performance Journal Article

In: Experimental Brain Research, vol. 195, no. 1, pp. 5–14, 2009.

@article{Taylor2009,
title = {The effects of task instructions on pro and antisaccade performance},
author = {Alisdair J. G. Taylor and Samuel B. Hutton},
doi = {10.1007/s00221-009-1750-4},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {195},
number = {1},
pages = {5--14},
abstract = {In the antisaccade task participants are required to overcome the strong tendency to saccade towards a sudden onset target, and instead make a saccade to the mirror image location. The task thus provides a powerful tool with which to study the cognitive processes underlying goal directed behaviour, and has become a widely used index of "disinhibition" in a range of clinical populations. Across two experiments we explored the role of top-down strategic influences on antisaccade performance by varying the instructions that participants received. Instructions to delay making a response resulted in a significant increase in correct antisaccade latencies and reduction in erroneous prosaccades towards the target. Instructions to make antisaccades as quickly as possible resulted in faster correct responses, whereas instructions to be as spatially accurate as possible increased correct antisaccade latencies. Neither of these manipulations resulted in a significant change in error rate. In a second experiment, participants made fewer errors in delayed pro and antisaccade tasks than in a standard antisaccade task. The implications of these results for current models of antisaccade performance, and the interpretation of antisaccade deficits in clinical populations are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hiroyuki Sogo; Yuji Takeda

Effect of spatial inhibition on saccade trajectory depends on location-based mechanisms Journal Article

In: Japanese Psychological Research, vol. 51, no. 1, pp. 35–46, 2009.

@article{Sogo2009,
title = {Effect of spatial inhibition on saccade trajectory depends on location-based mechanisms},
author = {Hiroyuki Sogo and Yuji Takeda},
doi = {10.1111/j.1468-5884.2009.00386.x},
year = {2009},
date = {2009-01-01},
journal = {Japanese Psychological Research},
volume = {51},
number = {1},
pages = {35--46},
abstract = {Saccade trajectory often curves away from a previously attended, inhibited location. A recent study of curved saccades showed that an inhibitory effect prevents ineffective reexamination during serial visual search. The time course of this effect differs from that of a similar inhibitory effect, known as inhibition of return (IOR). In the present study, we examined whether this saccade-related inhibitory effect can operate in an object-based manner (similar to IOR). Using a spatial cueing paradigm, we demonstrated that if a cue is presented on a placeholder that is then shifted from its original location, the saccade trajectory curves away from the original (cued) location (Experiment 1), yet the IOR effect is observed on the cued placeholder (Experiment 2). The inhibitory mechanism that causes curved saccades appears to operate in a location-based manner, whereas the mechanism underlying IOR appears to operate in an object-based manner. We propose that these inhibitory mechanisms work in a complementary fashion to guide eye movements efficiently under conditions of a dynamic visual environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Joo-Hyun Song; Robert M. McPeek

Eye-hand coordination during target selection in a pop-out visual search Journal Article

In: Journal of Neurophysiology, vol. 102, no. 5, pp. 2681–2692, 2009.

@article{Song2009,
title = {Eye-hand coordination during target selection in a pop-out visual search},
author = {Joo-Hyun Song and Robert M. McPeek},
doi = {10.1152/jn.91352.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {5},
pages = {2681--2692},
abstract = {We examined the coordination of saccades and reaches in a visual search task in which monkeys were rewarded for reaching to an odd-colored target among distractors. Eye movements were unconstrained, and monkeys typically made one or more saccades before initiating a reach. Target selection for reaching and saccades was highly correlated with the hand and eyes landing near the same final stimulus both for correct reaches to the target and for incorrect reaches to a distractor. Incorrect reaches showed a bias in target selection: they were directed to the distractor in the same hemifield as the target more often than to other distractors. A similar bias was seen in target selection for the initial saccade in correct reaching trials with multiple saccades. We also examined the temporal coupling of saccades and reaches. In trials with a single saccade, a reaching movement was made after a fairly stereotyped delay. In multiple-saccade trials, a reach to the target could be initiated near or even before the onset of the final target-directed saccade. In these trials, the initial trajectory of the reach was often directed toward the fixated distractor before veering toward the target around the time of the final saccade. In virtually all cases, the eyes arrived at the target before the hand, and remained fixated until reach completion. Overall, these results are consistent with flexible temporal coupling of saccade and reach initiation, but fairly tight coupling of target selection for the two types of action.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David Souto; Dirk Kerzel

Involuntary cueing effects during smooth pursuit: Facilitation and inhibition of return in oculocentric coordinates Journal Article

In: Experimental Brain Research, vol. 192, no. 1, pp. 25–31, 2009.

@article{Souto2009a,
title = {Involuntary cueing effects during smooth pursuit: Facilitation and inhibition of return in oculocentric coordinates},
author = {David Souto and Dirk Kerzel},
doi = {10.1007/s00221-008-1555-x},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {192},
number = {1},
pages = {25--31},
abstract = {Peripheral cues induce facilitation with short cue-target intervals and inhibition of return (IOR) with long cue-target intervals. Modulations of facilitation and IOR by continuous displacements of the eye or the cued stimuli are poorly understood. Previously, the retinal coordinates of the cued location were changed by saccadic or smooth pursuit eye movements during the cue-target interval. In contrast, we probed the relevant coordinates for facilitation and IOR by orthogonally varying object motion (stationary, moving) and eye movement (fixation, smooth pursuit). In the pursuit conditions, cue and target were presented during the ongoing eye movement and observers made a saccade to the target. Importantly, we found facilitation and IOR of similar size during smooth pursuit and fixation. The results suggest that involuntary orienting is possible even when attention has to be allocated to the moving target during smooth pursuit. Comparison of conditions with stabilized and moving objects suggests an oculocentric basis for facilitation as well as inhibition. Facilitation and IOR were reduced with objects that moved on the retina both with smooth pursuit and eye fixation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

David Souto; Dirk Kerzel

Evidence for an attentional component in saccadic inhibition of return Journal Article

In: Experimental Brain Research, vol. 195, no. 4, pp. 531–540, 2009.

@article{Souto2009,
title = {Evidence for an attentional component in saccadic inhibition of return},
author = {David Souto and Dirk Kerzel},
doi = {10.1007/s00221-009-1824-3},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {195},
number = {4},
pages = {531--540},
abstract = {After presentation of a peripheral cue, facilitation at the cued location is followed by inhibition of return (IOR). It has been recently proposed that IOR may originate at different processing stages for manual and ocular responses, with manual IOR resulting from inhibited attentional orienting, and ocular IOR resulting from inhibited motor preparation. Contrary to this interpretation, we found an effect of target contrast on saccadic IOR. The effect of contrast decreased with increasing reaction times (RTs) for saccades, but not for manual key-press responses. This may have masked the effect of contrast on IOR with saccades in previous studies (Hunt and Kingstone in J Exp Psychol Hum Percept Perform 29:1068-1074, 2003) because only mean RTs were considered. We also found that background luminance strongly influenced the effects of gap and target contrast on IOR.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Let's Stay Connected

  • Twitter
  • Facebook
  • Instagram
  • LinkedIn
  • YouTube
Newsletter
Newsletter Archive
Conferences

Contact

info@sr-research.com

Phone: +1-613-271-8686

Toll-Free: +1-866-821-0731

Fax: +1-613-482-4866

Quick Links

Products

Solutions

Support

Legal Information

Legal Notices

Privacy Policy | Accessibility Policy

EyeLink® eye trackers are research devices and may not be used for medical diagnosis or treatment.

Featured Blog

Reading Profiles of Adults with Dyslexia


Copyright © 2023 · SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.