
EyeLink Eye-Tracking Publication Library

All EyeLink Publications

All 10,000+ peer-reviewed EyeLink research publications up to 2021 (with some from early 2022) are listed below by year. You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking research grouped by research area can be found on the Solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!

10162 entries (page 92 of 102)

2008

Cliodhna Quigley; Selim Onat; Sue Harding; Martin Cooke; Peter König

Audio-visual integration during overt visual attention Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 4, 2008.

@article{Quigley2008,
title = {Audio-visual integration during overt visual attention},
author = {Cliodhna Quigley and Selim Onat and Sue Harding and Martin Cooke and Peter König},
doi = {10.16910/jemr.1.2.4},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {2},
pages = {4},
abstract = {How do different sources of information arising from different modalities interact to control where we look? To answer this question with respect to real-world operational conditions we presented natural images and spatially localized sounds in (V)isual, Audiovisual (AV) and (A)uditory conditions and measured subjects' eye-movements. Our results demonstrate that eye-movements in AV conditions are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias is dependent on the probability of a given image region to be fixated (saliency) in the V condition. This indicates that fixation behaviour during the AV conditions is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of unimodal saliencies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ralph Radach; Lynn Huestegge; Ronan G. Reilly

The role of global top-down factors in local eye-movement control in reading Journal Article

In: Psychological Research, vol. 72, no. 6, pp. 675–688, 2008.

@article{Radach2008,
title = {The role of global top-down factors in local eye-movement control in reading},
author = {Ralph Radach and Lynn Huestegge and Ronan G. Reilly},
doi = {10.1007/s00426-008-0173-3},
year = {2008},
date = {2008-01-01},
journal = {Psychological Research},
volume = {72},
number = {6},
pages = {675--688},
abstract = {Although the development of the field of reading has been impressive, there are a number of issues that still require much more attention. One of these concerns the variability of skilled reading within the individual. This paper explores the topic in three ways: (1) it quantifies the extent to which two factors, the specific reading task (comprehension vs. word verification) and the format of reading material (sentence vs. passage), influence the temporal aspects of reading as expressed in word-viewing durations; (2) it examines whether they also affect visuomotor aspects of eye-movement control; and (3) it determines whether they can modulate local lexical processing. The results reveal reading as a dynamic, interactive process involving semi-autonomous modules, with top-down influences clearly evident in the eye-movement record.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Christoph Rasche; Karl R. Gegenfurtner

Orienting during gaze guidance in a letter-identification task Journal Article

In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–10, 2008.

@article{Rasche2008,
title = {Orienting during gaze guidance in a letter-identification task},
author = {Christoph Rasche and Karl R. Gegenfurtner},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {3},
number = {4},
pages = {1--10},
abstract = {The idea of gaze guidance is to lead a viewer's gaze through a visual display in order to facilitate the viewer's search for specific information in a least-obtrusive manner. This study investigates saccadic orienting when a viewer is guided in a fast-paced, low-contrast letter identification task. Despite the task's difficulty and although guiding cues were adjusted to gaze eccentricity, observers preferred attentional over saccadic shifts to obtain a letter identification judgment; and if a saccade was carried out its saccadic constant error was 50%. From those results we derive a number of design recommendations for the process of gaze guidance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Keith Rayner; Brett Miller; Caren M. Rotello

Eye movements when looking at print advertisements: The goal of the viewer matters Journal Article

In: Applied Cognitive Psychology, vol. 22, no. 5, pp. 697–707, 2008.

@article{Rayner2008,
title = {Eye movements when looking at print advertisements: The goal of the viewer matters},
author = {Keith Rayner and Brett Miller and Caren M. Rotello},
doi = {10.1002/acp.1389},
year = {2008},
date = {2008-01-01},
journal = {Applied Cognitive Psychology},
volume = {22},
number = {5},
pages = {697--707},
abstract = {Viewers looked at print advertisements as their eye movements were recorded. Half of them were asked to rate how much they liked each ad (for convenience, we will generally use the term 'ad' from this point on), while the other half were asked to rate the effectiveness of each ad. Previous research indicated that viewers who were asked to consider purchasing products in the ads looked at the text earlier and more often than the picture part of the ad. In contrast, viewers in the present experiment looked at the picture part of the ad earlier and longer than the text. The results indicate quite clearly that the goal of the viewer very much influences where (and for how long) viewers look at different parts of ads, but also indicate that the nature of the ad per se matters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Paul Reeve; James J. Clark; J. Kevin O'Regan

Convergent flash localization near saccades without equivalent "compression" of perceived separation Journal Article

In: Journal of Vision, vol. 8, no. 13, pp. 1–19, 2008.

@article{Reeve2008,
title = {Convergent flash localization near saccades without equivalent "compression" of perceived separation},
author = {Paul Reeve and James J. Clark and J. Kevin O'Regan},
doi = {10.1167/8.13.5},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {13},
pages = {1--19},
abstract = {Visual space is sometimes said to be "compressed" before saccadic eye movements. The most central evidence for this hypothesis is a converging pattern of localization errors on single flashes presented close to saccade time under certain conditions. An intuitive version of the compression hypothesis predicts that the reported distance between simultaneous, spatially separated presaccadic flashes should contract in the same way as their individual locations. In our experiment we tested this prediction by having subjects perform one of two tasks on stimuli made up of two bars simultaneously flashed near saccade time: either localizing one of the bars or judging the separation between the two. Localization judgments showed the previously observed converging pattern over the 50-100 ms before saccades. Contractions in perceived separation between the two bars were not accurately predicted by this pattern: they occurred mainly during saccades and were much weaker than convergence in localization. Different forms of spatial information about flashed stimuli can be differentially modulated before, during, and after saccades. Structural alterations in the perceptual field around saccades may explain these different effects, but alternative hypotheses based on decision making under uncertainty and on the influence of other perisaccadic mechanisms are also consistent with this and other evidence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Erik D. Reichle; Polina M. Vanyukov; Patryk A. Laurent; Tessa Warren

Serial or parallel? Using depth-of-processing to examine attention allocation during reading Journal Article

In: Vision Research, vol. 48, no. 17, pp. 1831–1836, 2008.

@article{Reichle2008,
title = {Serial or parallel? Using depth-of-processing to examine attention allocation during reading},
author = {Erik D. Reichle and Polina M. Vanyukov and Patryk A. Laurent and Tessa Warren},
doi = {10.1016/j.visres.2008.05.007},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {17},
pages = {1831--1836},
abstract = {This paper presents an experiment investigating attention allocation in four tasks requiring varied degrees of lexical processing of 1-4 simultaneously displayed words. Response times and eye movements were only modestly affected by the number of words in an asterisk-detection task but increased markedly with the number of words in letter-detection, rhyme-judgment, and semantic-judgment tasks, suggesting that attention may not be serial for tasks that do not require significant lexical processing (e.g., detecting visual features), but is approximately serial for tasks that do (e.g., retrieving word meanings). The implications of these results for models of readers' eye movements are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Kathleen Pirog Revill; Michael K. Tanenhaus; Richard N. Aslin

Context and spoken word recognition in a novel lexicon Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 5, pp. 1207–1223, 2008.

@article{Revill2008,
title = {Context and spoken word recognition in a novel lexicon},
author = {Kathleen Pirog Revill and Michael K. Tanenhaus and Richard N. Aslin},
doi = {10.1037/a0012796},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {34},
number = {5},
pages = {1207--1223},
abstract = {Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access–selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access–selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Frédéric P. Rey; Thanh Thuan Lê; René Bertin; Zoï Kapoula

Saccades horizontal or vertical at near or at far do not deteriorate postural control Journal Article

In: Auris Nasus Larynx, vol. 35, no. 2, pp. 185–191, 2008.

@article{Rey2008,
title = {Saccades horizontal or vertical at near or at far do not deteriorate postural control},
author = {Frédéric P. Rey and Thanh Thuan Lê and René Bertin and Zoï Kapoula},
doi = {10.1016/j.anl.2007.07.001},
year = {2008},
date = {2008-01-01},
journal = {Auris Nasus Larynx},
volume = {35},
number = {2},
pages = {185--191},
abstract = {Objective: There is a discrepancy about the effect of saccades on postural control: some studies reported a stabilization effect, other studies the opposite. Perturbation of posture by saccades could be related to loss of vision during saccades (saccades suppression) due to high velocity retinal slip. On the other hand, efferent and afferent proprioceptive signals related to saccades can be used for obtaining spatial stability over saccades and maintaining good postural control. In natural conditions saccades can be horizontal, vertical and made at different distance. The present study examines all these parameters to provide a more complete view on the role of saccade on postural control in quiet stance. Methods: Horizontal or vertical saccades of 30° were made at 1 Hz and at two distances, 40 and 200 cm. Eye movements were recorded with video-oculography (EyeLink II). Posturography was recorded with the TechnoConcept platform. The results from "saccade" conditions are compared to "fixation control" condition (at far and near). Results: The video oculography results show that subjects performed the fixation or the saccade task correctly. Execution of saccades (horizontal or vertical at near or at far distance) had no significant effect on the surface of center of pressure (CoP), neither on the standard deviation of the lateral body sway, nor on the variance of speed of the CoP. Moreover, whatever the distance, execution of saccades decreased significantly the standard deviation of the antero-posterior sway. Conclusion: We conclude that saccades, of either the direction and at either the distance, do not deteriorate postural control; rather they could reduce sway. Efferent and proprioceptive oculomotor signals as well as attention could contribute to maintain or improve postural stability while making saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Xiaochuan Pan; Kosuke Sawa; Ichiro Tsuda; Minoru Tsukada; Masamichi Sakagami

Reward prediction based on stimulus categorization in primate lateral prefrontal cortex Journal Article

In: Nature Neuroscience, vol. 11, no. 6, pp. 703–712, 2008.

@article{Pan2008,
title = {Reward prediction based on stimulus categorization in primate lateral prefrontal cortex},
author = {Xiaochuan Pan and Kosuke Sawa and Ichiro Tsuda and Minoru Tsukada and Masamichi Sakagami},
doi = {10.1038/nn.2128},
year = {2008},
date = {2008-01-01},
journal = {Nature Neuroscience},
volume = {11},
number = {6},
pages = {703--712},
abstract = {To adapt to changeable or unfamiliar environments, it is important that animals develop strategies for goal-directed behaviors that meet the new challenges. We used a sequential paired-association task with asymmetric reward schedule to investigate how prefrontal neurons integrate multiple already-acquired associations to predict reward. Two types of reward-related neurons were observed in the lateral prefrontal cortex: one type predicted reward independent of physical properties of visual stimuli and the other encoded the reward value specific to a category of stimuli defined by the task requirements. Neurons of the latter type were able to predict reward on the basis of stimuli that had not yet been associated with reward, provided that another stimulus from the same category was paired with reward. The results suggest that prefrontal neurons can represent reward information on the basis of category and propagate this information to category members that have not been linked directly with any experience of reward.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sebastian Pannasch; Jens R. Helmert; Katharina Roth; Ann-Katrin Herbold; Henrik Walter

Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–19, 2008.

@article{Pannasch2008,
title = {Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions},
author = {Sebastian Pannasch and Jens R. Helmert and Katharina Roth and Ann-Katrin Herbold and Henrik Walter},
doi = {10.16910/jemr.2.2.4},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {2},
pages = {1--19},
abstract = {Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in the durations of fixations and a decrease for saccadic amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance of the two modes of visual information processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alicia Peltsch; Aaron B. Hoffman; I. T. Armstrong; Giovanna Pari; D. P. Munoz

Saccadic impairments in Huntington's disease Journal Article

In: Experimental Brain Research, vol. 186, no. 3, pp. 457–469, 2008.

@article{Peltsch2008,
title = {Saccadic impairments in Huntington's disease},
author = {Alicia Peltsch and Aaron B. Hoffman and I. T. Armstrong and Giovanna Pari and D. P. Munoz},
doi = {10.1007/s00221-007-1248-x},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {186},
number = {3},
pages = {457--469},
abstract = {Huntington's disease (HD), a progressive neurological disorder involving degeneration in basal ganglia structures, leads to abnormal control of saccadic eye movements. We investigated whether saccadic impairments in HD (N = 9) correlated with clinical disease severity to determine the relationship between saccadic control and basal ganglia pathology. HD patients and age/sex-matched controls performed various eye movement tasks that required the execution or suppression of automatic or voluntary saccades. In the "immediate" saccade tasks, subjects were instructed to look either toward (pro-saccade) or away from (anti-saccade) a peripheral stimulus. In the "delayed" saccade tasks (pro-/anti-saccades; delayed memory-guided sequential saccades), subjects were instructed to wait for a central fixation point to disappear before initiating saccades towards or away from a peripheral stimulus that had appeared previously. In all tasks, mean saccadic reaction time was longer and more variable amongst the HD patients. On immediate anti-saccade trials, the occurrence of direction errors (pro-saccades initiated toward stimulus) was higher in the HD patients. In the delayed tasks, timing errors (eye movements made prior to the go signal) were also greater in the HD patients. The increased variability in saccadic reaction times and occurrence of errors (both timing and direction errors) were highly correlated with disease severity, as assessed with the Unified Huntington's Disease Rating Scale, suggesting that saccadic impairments worsen as the disease progresses. Thus, performance on voluntary saccade paradigms provides a sensitive indicator of disease progression in HD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Angélica Pérez Fornos; Jörg Sommerhalder; Alexandre Pittard; Avinoam B. Safran; Marco Pelizzone

Simulation of artificial vision: IV. Visual information required to achieve simple pointing and manipulation tasks Journal Article

In: Vision Research, vol. 48, no. 16, pp. 1705–1718, 2008.

@article{PerezFornos2008,
title = {Simulation of artificial vision: IV. Visual information required to achieve simple pointing and manipulation tasks},
author = {Angélica Pérez Fornos and Jörg Sommerhalder and Alexandre Pittard and Avinoam B. Safran and Marco Pelizzone},
doi = {10.1016/j.visres.2008.04.027},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {16},
pages = {1705--1718},
abstract = {Retinal prostheses attempt to restore some amount of vision to totally blind patients. Vision evoked this way will be however severely constrained because of several factors (e.g., size of the implanted device, number of stimulating contacts, etc.). We used simulations of artificial vision to study how such restrictions of the amount of visual information provided would affect performance on simple pointing and manipulation tasks. Five normal subjects participated in the study. Two tasks were used: pointing on random targets (LEDs task) and arranging wooden chips according to a given model (CHIPs task). Both tasks had to be completed while the amount of visual information was limited by reducing the resolution (number of pixels) and modifying the size of the effective field of view. All images were projected on a 10° × 7° viewing area, stabilised at a given position on the retina. In central vision, the time required to accomplish the tasks remained systematically slower than with normal vision. Accuracy was close to normal at high image resolutions and decreased at 500 pixels or below, depending on the field of view used. Subjects adapted quite rapidly (in less than 15 sessions) to performing both tasks in eccentric vision (15° in the lower visual field), achieving after adaptation performances close to those observed in central vision. These results demonstrate that, if vision is restricted to a small visual area stabilised on the retina (as would be the case in a retinal prosthesis), the perception of several hundreds of retinotopically arranged phosphenes is still needed to restore accurate but slow performance on pointing and manipulation tasks. Considering that present prototypes afford less than 100 stimulation contacts and that our simulations represent the most favourable visual input conditions that the user might experience, further development is required to achieve optimal rehabilitation prospects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Matthew S. Peterson; Melissa R. Beck; Jason H. Wong

Were you paying attention to where you looked? The role of executive working memory in visual search Journal Article

In: Psychonomic Bulletin & Review, vol. 15, no. 2, pp. 372–377, 2008.

@article{Peterson2008,
title = {Were you paying attention to where you looked? The role of executive working memory in visual search},
author = {Matthew S. Peterson and Melissa R. Beck and Jason H. Wong},
doi = {10.3758/PBR.15.2.372},
year = {2008},
date = {2008-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {15},
number = {2},
pages = {372--377},
abstract = {Recent evidence has indicated that performing a working memory task that loads executive working memory leads to less efficient visual search (Han & Kim, 2004). We explored the role that executive functioning plays in visual search by examining the pattern of eye movements while participants performed a search task with or without a secondary executive working memory task. Results indicate that executive functioning plays two roles in visual search: the identification of objects and the control of the disengagement of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tobias Pflugshaupt; Thomas Nyffeler; Roman Von Wartburg; Christian W. Hess; René M. Müri

Loss of exploratory vertical saccades after unilateral frontal eye field damage Journal Article

In: Journal of Neurology, Neurosurgery and Psychiatry, vol. 79, no. 4, pp. 474–477, 2008.

@article{Pflugshaupt2008,
title = {Loss of exploratory vertical saccades after unilateral frontal eye field damage},
author = {Tobias Pflugshaupt and Thomas Nyffeler and Roman Von Wartburg and Christian W. Hess and René M. Müri},
doi = {10.1136/jnnp.2007.132290},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurology, Neurosurgery and Psychiatry},
volume = {79},
number = {4},
pages = {474--477},
abstract = {Despite their relevance for locomotion and social interaction in everyday situations, little is known about the cortical control of vertical saccades in humans. Results from microstimulation studies indicate that both frontal eye fields (FEFs) contribute to these eye movements. Here, we present a patient with a damaged right FEF, who hardly made vertical saccades during visual exploration. This finding suggests that, for the cortical control of exploratory vertical saccades, integrity of both FEFs is indeed important.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

M. Niwa; J. Ditterich

Perceptual decisions between multiple directions of visual motion Journal Article

In: Journal of Neuroscience, vol. 28, no. 17, pp. 4435–4445, 2008.

@article{Niwa2008,
title = {Perceptual decisions between multiple directions of visual motion},
author = {M. Niwa and J. Ditterich},
doi = {10.1523/JNEUROSCI.5564-07.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neuroscience},
volume = {28},
number = {17},
pages = {4435--4445},
abstract = {Previous studies and models of perceptual decision making have largely focused on binary choices. However, we often have to choose from multiple alternatives. To study the neural mechanisms underlying multialternative decision making, we have asked human subjects to make perceptual decisions between multiple possible directions of visual motion. Using a multicomponent version of the random-dot stimulus, we were able to control experimentally how much sensory evidence we wanted to provide for each of the possible alternatives. We demonstrate that this task provides a rich quantitative dataset for multialternative decision making, spanning a wide range of accuracy levels and mean response times. We further present a computational model that can explain the structure of our behavioral dataset. It is based on the idea of a race between multiple integrators to a decision threshold. Each of these integrators accumulates net sensory evidence for a particular choice, provided by linear combinations of the activities of decision-relevant pools of sensory neurons.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Lauri Nummenmaa; Jussi Hirvonen; Riitta Parkkola; Jari K. Hietanen

Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy Journal Article

In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008.

@article{Nummenmaa2008,
title = {Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy},
author = {Lauri Nummenmaa and Jussi Hirvonen and Riitta Parkkola and Jari K. Hietanen},
doi = {10.1016/j.neuroimage.2008.08.014},
year = {2008},
date = {2008-01-01},
journal = {NeuroImage},
volume = {43},
number = {3},
pages = {571--580},
publisher = {Elsevier Inc.},
abstract = {Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Thomas Nyffeler; Dario Cazzoli; Pascal Wurtz; Mathias Lüthi; Roman Von Wartburg; Silvia Chaves; Anouk Déruaz; Christian W. Hess; René M. Müri

Neglect-like visual exploration behaviour after theta burst transcranial magnetic stimulation of the right posterior parietal cortex Journal Article

In: European Journal of Neuroscience, vol. 27, no. 7, pp. 1809–1813, 2008.

@article{Nyffeler2008,
title = {Neglect-like visual exploration behaviour after theta burst transcranial magnetic stimulation of the right posterior parietal cortex},
author = {Thomas Nyffeler and Dario Cazzoli and Pascal Wurtz and Mathias Lüthi and Roman Von Wartburg and Silvia Chaves and Anouk Déruaz and Christian W. Hess and René M. Müri},
doi = {10.1111/j.1460-9568.2008.06154.x},
year = {2008},
date = {2008-01-01},
journal = {European Journal of Neuroscience},
volume = {27},
number = {7},
pages = {1809--1813},
abstract = {The right posterior parietal cortex (PPC) is critically involved in visual exploration behaviour, and damage to this area may lead to neglect of the left hemispace. We investigated whether neglect-like visual exploration behaviour could be induced in healthy subjects using theta burst repetitive transcranial magnetic stimulation (rTMS). To this end, one continuous train of theta burst rTMS was applied over the right PPC in 12 healthy subjects prior to a visual exploration task where colour photographs of real-life scenes were presented on a computer screen. In a control experiment, stimulation was also applied over the vertex. Eye movements were measured, and the distribution of visual fixations in the left and right halves of the screen was analysed. In comparison to the performance of 28 control subjects without stimulation, theta burst rTMS over the right PPC, but not the vertex, significantly decreased cumulative fixation duration in the left screen-half and significantly increased cumulative fixation duration in the right screen-half for a time period of 30 min. These results suggest that theta burst rTMS is a reliable method of inducing transient neglect-like visual exploration behaviour.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Matthew H. Phillips; Jay A. Edelman

The dependence of visual scanning performance on search direction and difficulty Journal Article

In: Vision Research, vol. 48, no. 21, pp. 2184–2192, 2008.

@article{Phillips2008,
title = {The dependence of visual scanning performance on search direction and difficulty},
author = {Matthew H. Phillips and Jay A. Edelman},
doi = {10.1016/j.visres.2008.06.025},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {21},
pages = {2184--2192},
abstract = {Phillips and Edelman [Phillips, M. H., & Edelman, J. A. (2008). The dependence of visual scanning performance on saccade, fixation, and perceptual metrics. Vision Research, 48(7), 926-936] presented evidence that performance variability in a visual scanning task depends on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflects perceptual span. Here, we extend these results by showing that even for extremely difficult searches trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences derive again from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Elmar H. Pinkhardt; Reinhart Jürgens; Wolfgang Becker; Federica Valdarno; Albert C. Ludolph; Jan Kassubek

Differential diagnostic value of eye movement recording in PSP-parkinsonism, Richardson's syndrome, and idiopathic Parkinson's disease Journal Article

In: Journal of Neurology, vol. 255, no. 12, pp. 1916–1925, 2008.

@article{Pinkhardt2008,
title = {Differential diagnostic value of eye movement recording in PSP-parkinsonism, Richardson's syndrome, and idiopathic Parkinson's disease},
author = {Elmar H. Pinkhardt and Reinhart Jürgens and Wolfgang Becker and Federica Valdarno and Albert C. Ludolph and Jan Kassubek},
doi = {10.1007/s00415-009-0027-y},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurology},
volume = {255},
number = {12},
pages = {1916--1925},
abstract = {Vertical gaze palsy is a highly relevant clinical sign in parkinsonian syndromes. As the eponymous sign of progressive supranuclear palsy (PSP), it is one of the core features in the diagnosis of this disease. Recent studies have suggested a further differentiation of PSP in Richardson's syndrome (RS) and PSP-parkinsonism (PSPP). The aim of this study was to search for oculomotor abnormalities in the PSP-P subset of a sample of PSP patients and to compare these findings with those of (i) RS patients, (ii) patients with idiopathic Parkinson's disease (IPD), and (iii) a control group. Twelve cases of RS, 5 cases of PSP-P, and 27 cases of IPD were examined by use of video-oculography (VOG) and compared to 23 healthy normal controls. Both groups of PSP patients (RS, PSP-P) had significantly slower saccades than either IPD patients or controls, whereas no differences in saccadic eye peak velocity were found between the two PSP groups or in the comparison of IPD with controls. RS and PSP-P were also similar to each other with regard to smooth pursuit eye movements (SPEM), with both groups having significantly lower gain than controls (except for downward pursuit); however, SPEM gain exhibited no consistent difference between PSP and IPD. A correlation between eye movement data and clinical data (Hoehn & Yahr scale or disease duration) could not be observed. As PSP-P patients were still in an early stage of the disease when a differentiation from IPD is difficult on clinical grounds, the clear-cut separation between PSP-P and IPD obtained by measuring saccade velocity suggests that VOG could contribute to the early differentiation between these patient groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alexander Pollatsek; Timothy J. Slattery; Barbara J. Juhasz

The processing of novel and lexicalised prefixed words in reading Journal Article

In: Language and Cognitive Processes, vol. 23, no. 7-8, pp. 1133–1158, 2008.

Abstract | Links | BibTeX

@article{Pollatsek2008,
title = {The processing of novel and lexicalised prefixed words in reading},
author = {Alexander Pollatsek and Timothy J. Slattery and Barbara J. Juhasz},
doi = {10.1080/01690960801945484},
year = {2008},
date = {2008-01-01},
journal = {Language and Cognitive Processes},
volume = {23},
number = {7-8},
pages = {1133--1158},
abstract = {Two experiments compared how relatively long novel prefixed words (e.g., overfarm) and existing prefixed words were processed in reading. The use of novel prefixed words allows one to examine the roles of whole-word access and decompositional processing in the processing of non-novel prefixed words. The two experiments found that, although there was a large cost to novelty (e.g., gaze durations were about 100 ms longer for novel prefixed words), the effect of the frequency of the root morpheme on fixation measures was about the same for novel and non-novel prefixed words for most measures. This finding rules out a ("horse-race") dual-route model of processing for existing prefixed words in which the whole-word and decompositional route are parallel and independent, as such a model would predict a substantially larger root frequency effect for novel words (where whole-word processes do not exist). The most likely model to explain the processing of prefixed words is a parallel interactive one.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/01690960801945484

Hans P. Op De Beeck; Jennifer A. Deutsch; Wim Vanduffel; Nancy Kanwisher; James J. DiCarlo

A stable topography of selectivity for unfamiliar shape classes in monkey inferior temporal cortex Journal Article

In: Cerebral Cortex, vol. 18, no. 7, pp. 1676–1694, 2008.

Abstract | Links | BibTeX

@article{OpDeBeeck2008,
title = {A stable topography of selectivity for unfamiliar shape classes in monkey inferior temporal cortex},
author = {Hans P. Op De Beeck and Jennifer A. Deutsch and Wim Vanduffel and Nancy Kanwisher and James J. DiCarlo},
doi = {10.1093/cercor/bhm196},
year = {2008},
date = {2008-01-01},
journal = {Cerebral Cortex},
volume = {18},
number = {7},
pages = {1676--1694},
abstract = {The inferior temporal (IT) cortex in monkeys plays a central role in visual object recognition and learning. Previous studies have observed patches in IT cortex with strong selectivity for highly familiar object classes (e.g., faces), but the principles behind this functional organization are largely unknown due to the many properties that distinguish different object classes. To unconfound shape from meaning and memory, we scanned monkeys with functional magnetic resonance imaging while they viewed classes of initially novel objects. Our data revealed a topography of selectivity for these novel object classes across IT cortex. We found that this selectivity topography was highly reproducible and remarkably stable across a 3-month interval during which monkeys were extensively trained to discriminate among exemplars within one of the object classes. Furthermore, this selectivity topography was largely unaffected by changes in behavioral task and object retinal position, both of which preserve shape. In contrast, it was strongly influenced by changes in object shape. The topography was partially related to, but not explained by, the previously described pattern of face selectivity. Together, these results suggest that IT cortex contains a large-scale map of shape that is largely independent of meaning, familiarity, and behavioral task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/cercor/bhm196

Jorge Otero-Millan; Xoana G. Troncoso; Stephen L. Macknik; Ignacio Serrano-Pedraza; Susana Martinez-Conde

Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008.

Abstract | Links | BibTeX

@article{OteroMillan2008,
title = {Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator},
author = {Jorge Otero-Millan and Xoana G. Troncoso and Stephen L. Macknik and Ignacio Serrano-Pedraza and Susana Martinez-Conde},
doi = {10.1167/9.8.447},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--18},
abstract = {Microsaccades are known to occur during prolonged visual fixation, but it is a matter of controversy whether they also happen during free-viewing. Here we set out to determine: 1) whether microsaccades occur during free visual exploration and visual search, 2) whether microsaccade dynamics vary as a function of visual stimulation and viewing task, and 3) whether saccades and microsaccades share characteristics that might argue in favor of a common saccade-microsaccade oculomotor generator. Human subjects viewed naturalistic stimuli while performing various viewing tasks, including visual exploration, visual search, and prolonged visual fixation. Their eye movements were simultaneously recorded with high precision. Our results show that microsaccades are produced during the fixation periods that occur during visual exploration and visual search. Microsaccade dynamics during free-viewing moreover varied as a function of visual stimulation and viewing task, with increasingly demanding tasks resulting in increased microsaccade production. Moreover, saccades and microsaccades had comparable spatiotemporal characteristics, including the presence of equivalent refractory periods between all pair-wise combinations of saccades and microsaccades. Thus our results indicate a microsaccade-saccade continuum and support the hypothesis of a common oculomotor generator for saccades and microsaccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/9.8.447

Manabu Shikauchi; Shin Ishii; Tomohiro Shibata

Prediction of aperiodic target sequences by saccades Journal Article

In: Behavioural Brain Research, vol. 189, no. 2, pp. 325–331, 2008.

Abstract | Links | BibTeX

@article{Shikauchi2008,
title = {Prediction of aperiodic target sequences by saccades},
author = {Manabu Shikauchi and Shin Ishii and Tomohiro Shibata},
doi = {10.1016/j.bbr.2008.01.019},
year = {2008},
date = {2008-01-01},
journal = {Behavioural Brain Research},
volume = {189},
number = {2},
pages = {325--331},
abstract = {Through recording of saccadic eye movements, we investigated whether humans can achieve prediction of aperiodic target sequences which cannot be predicted based solely on memorizing short-length patterns of the target sequence. We proposed a novel experimental paradigm in which Auto-Regressive (AR) processes are used to generate aperiodic target sequences. If subjects can fully utilize the knowledge on the AR dynamics that have generated the target sequence, optimal prediction can be made. As a control task, a completely unpredictable (random) target sequence was generated by shuffling the AR sequences. Behavioral analysis suggested that the prediction of the next target position in the AR sequence was significantly more successful than that by the random guess or the optimal guess for the random sequence. Although their performances were not optimal, learning of the AR dynamics was observed for first-order AR sequences, suggesting that the subjects attempted to predict the next target position based on partially identified AR dynamics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.bbr.2008.01.019

Mariano Sigman; Jérôme Sackur; Antoine Del Cul; Stanislas Dehaene

Illusory displacement due to object substitution near the consciousness threshold Journal Article

In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008.

Abstract | Links | BibTeX

@article{Sigman2008,
title = {Illusory displacement due to object substitution near the consciousness threshold},
author = {Mariano Sigman and Jérôme Sackur and Antoine Del Cul and Stanislas Dehaene},
doi = {10.1167/8.1.13},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {1},
pages = {1--10},
abstract = {A briefly presented target shape can be made invisible by the subsequent presentation of a mask that replaces the target. While varying the target-mask interval in order to investigate perception near the consciousness threshold, we discovered a novel visual illusion. At some intervals, the target is clearly visible, but its location is misperceived. By manipulating the mask's size and target's position, we demonstrate that the perceived target location is always displaced to the boundary of a virtual surface defined by the mask contours. Thus, mutual exclusion of surfaces appears as a cause of masking.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/8.1.13

Michael A. Silver; Amitai Shenhav; Mark D'Esposito

Cholinergic enhancement reduces spatial spread of visual responses in human early visual cortex Journal Article

In: Neuron, vol. 60, no. 5, pp. 904–914, 2008.

Abstract | Links | BibTeX

@article{Silver2008,
title = {Cholinergic enhancement reduces spatial spread of visual responses in human early visual cortex},
author = {Michael A. Silver and Amitai Shenhav and Mark D'Esposito},
doi = {10.1016/j.neuron.2008.09.038},
year = {2008},
date = {2008-01-01},
journal = {Neuron},
volume = {60},
number = {5},
pages = {904--914},
abstract = {Animal studies have shown that acetylcholine decreases excitatory receptive field size and spread of excitation in early visual cortex. These effects are thought to be due to facilitation of thalamocortical synaptic transmission and/or suppression of intracortical connections. We have used functional magnetic resonance imaging (fMRI) to measure the spatial spread of responses to visual stimulation in human early visual cortex. The cholinesterase inhibitor donepezil was administered to normal healthy human subjects to increase synaptic levels of acetylcholine in the brain. Cholinergic enhancement with donepezil decreased the spatial spread of excitatory fMRI responses in visual cortex, consistent with a role of acetylcholine in reducing excitatory receptive field size of cortical neurons. Donepezil also reduced response amplitude in visual cortex, but the cholinergic effects on spatial spread were not a direct result of reduced amplitude. These findings demonstrate that acetylcholine regulates spatial integration in human visual cortex.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuron.2008.09.038

Tim J. Smith; John M. Henderson

Edit Blindness: The relationship between attention and global change blindness in dynamic scenes Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–17, 2008.

Abstract | BibTeX

@article{Smith2008,
title = {Edit Blindness: The relationship between attention and global change blindness in dynamic scenes},
author = {Tim J. Smith and John M. Henderson},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {2},
pages = {1--17},
abstract = {Although we experience the visual world as a continuous, richly detailed space we often fail to notice large and significant changes. Such change blindness has been demonstrated for local object changes and changes to the visual form of whole images, however it is assumed that total changes from one image to another would be easily detected. Film editing presents such total changes several times a minute yet we rarely seem to be aware of them, a phenomenon we refer to here as edit blindness. This phenomenon has never been empirically demonstrated even though film editors believe they have at their disposal techniques that induce edit blindness, the Continuity Editing Rules. In the present study we tested the relationship between Continuity Editing Rules and edit blindness by instructing participants to detect edits while watching excerpts from feature films. Eye movements were recorded during the task. The results indicate that edits constructed according to the Continuity Editing Rules result in greater edit blindness than edits not adhering to the rules. A quarter of edits joining two viewpoints of the same scene were undetected and this increased to a third when the edit coincided with a sudden onset of motion. Some cuts may be missed due to suppression of the cut transients by coinciding with eyeblinks or saccadic eye movements but the majority seem to be due to inattentional blindness as viewers attend to the depicted narrative. In conclusion, this study presents the first empirical evidence of edit blindness and its relationship to natural attentional behaviour during dynamic scene viewing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

J. F. Soechting; Martha Flanders

Extrapolation of visual motion for manual interception Journal Article

In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2956–2967, 2008.

Abstract | Links | BibTeX

@article{Soechting2008,
title = {Extrapolation of visual motion for manual interception},
author = {J. F. Soechting and Martha Flanders},
doi = {10.1152/jn.90308.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {99},
number = {6},
pages = {2956--2967},
abstract = {A frequent goal of hand movement is to touch a moving target or to make contact with a stationary object that is in motion relative to the moving head and body. This process requires a prediction of the target's motion, since the initial direction of the hand movement anticipates target motion. This experiment was designed to define the visual motion parameters that are incorporated in this prediction of target motion. On seeing a go signal (a change in target color), human subjects slid the right index finger along a touch-sensitive computer monitor to intercept a target moving along an unseen circular or oval path. The analysis focused on the initial direction of the interception movement, which was found to be influenced by the time required to intercept the target and the target's distance from the finger's starting location. Initial direction also depended on the curvature of the target's trajectory in a manner that suggested that this parameter was underestimated during the process of extrapolation. The pattern of smooth pursuit eye movements suggests that the extrapolation of visual target motion was based on local motion cues around the time of the onset of hand movement, rather than on a cognitive synthesis of the target's pattern of motion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1152/jn.90308.2008

Alexandra Soliman; Gillian A. O'Driscoll; Jens Pruessner; Anne Lise V. Holahan; Isabelle Boileau; Danny Gagnon; Alain Dagher

Stress-induced dopamine release in humans at risk of psychosis: A [11C]raclopride PET study Journal Article

In: Neuropsychopharmacology, vol. 33, no. 8, pp. 2033–2041, 2008.

Abstract | Links | BibTeX

@article{Soliman2008,
title = {Stress-induced dopamine release in humans at risk of psychosis: A [11C]raclopride PET study},
author = {Alexandra Soliman and Gillian A. O'Driscoll and Jens Pruessner and Anne Lise V. Holahan and Isabelle Boileau and Danny Gagnon and Alain Dagher},
doi = {10.1038/sj.npp.1301597},
year = {2008},
date = {2008-01-01},
journal = {Neuropsychopharmacology},
volume = {33},
number = {8},
pages = {2033--2041},
abstract = {Drugs that increase dopamine levels in the brain can cause psychotic symptoms in healthy individuals and worsen them in schizophrenic patients. Psychological stress also increases dopamine release and is thought to play a role in susceptibility to psychotic illness. We hypothesized that healthy individuals at elevated risk of developing psychosis would show greater striatal dopamine release than controls in response to stress. Using positron emission tomography and [(11)C]raclopride, we measured changes in synaptic dopamine concentrations in 10 controls and 16 psychometric schizotypes; 9 with perceptual aberrations (PerAb, ie positive schizotypy) and 7 with physical anhedonia (PhysAn, ie negative schizotypy). [(11)C]Raclopride binding potential was measured during a psychological stress task and a sensory-motor control. All three groups showed significant increases in self-reported stress and cortisol levels between the stress and control conditions. However, only the PhysAn group showed significant stress-induced dopamine release. Dopamine release in the entire sample was significantly negatively correlated with smooth pursuit gain, an endophenotype linked to frontal lobe function. Our findings suggest the presence of abnormalities in the dopamine response to stress in negative symptom schizotypy, and provide indirect evidence of a link to frontal function.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/sj.npp.1301597

Leah Roberts; Marianne Gullberg; Peter Indefrey

Online pronoun resolution in L2 discourse: L1 influence and general learner effects Journal Article

In: Studies in Second Language Acquisition, vol. 30, pp. 333–357, 2008.

Abstract | BibTeX

@article{Roberts2008,
title = {Online pronoun resolution in L2 discourse: L1 influence and general learner effects},
author = {Leah Roberts and Marianne Gullberg and Peter Indefrey},
year = {2008},
date = {2008-01-01},
journal = {Studies in Second Language Acquisition},
volume = {30},
pages = {333--357},
abstract = {This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited a L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anne Roefs; Anita Jansen; Sofie Moresi; Paul Willems; Sarah Grootel; Anouk Borgh

Looking good: BMI, attractiveness bias and visual attention. Journal Article

In: Appetite, vol. 51, pp. 552–555, 2008.

Abstract | BibTeX

@article{Roefs2008,
title = {Looking good: BMI, attractiveness bias and visual attention.},
author = {Anne Roefs and Anita Jansen and Sofie Moresi and Paul Willems and Sarah Grootel and Anouk Borgh},
year = {2008},
date = {2008-01-01},
journal = {Appetite},
volume = {51},
pages = {552--555},
abstract = {The aim of this study was to study attentional bias when viewing one's own and a control body, and to relate this bias to body-weight and attractiveness ratings. Participants were 51 normal-weight female students with an unrestrained eating style. They were successively shown pictures of their own and a control body for 30s each, while their eye movements (overt attention) were being measured. Afterwards, participants were asked to identify the most attractive and most unattractive body part of both their own and a control body. The results show that with increasing BMI and where an individual has given a relatively low rating of attractiveness to their own body, participants attended relatively more to their self-identified most unattractive body part and the control body's most attractive body part. This increasingly negative bias in visual attention for bodies may maintain and/or exacerbate body dissatisfaction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ardi Roelofs

Tracing attention and the activation flow in spoken word planning using eye movements Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 2, pp. 353–368, 2008.

Abstract | Links | BibTeX

@article{Roelofs2008,
title = {Tracing attention and the activation flow in spoken word planning using eye movements},
author = {Ardi Roelofs},
doi = {10.1037/0278-7393.34.2.353},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {34},
number = {2},
pages = {353--368},
abstract = {The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The distractor pictures affected the latencies of gaze shifting and vocal naming. The magnitude of the phonological effects increased linearly with latency, excluding lapses of attention as the cause of the effects. In Experiment 2, no distractor effects were obtained when both pictures were named. When pictures with superimposed distractor words were named or the words were read in Experiment 3, the words influenced the latencies of gaze shifting and picture naming, but the pictures yielded no such latency effects in word reading. The picture-word asymmetry was obtained even with equivalent reading and naming latencies. The picture-picture effects suggest that activation spreads continuously from concepts to phonological forms, whereas the picture-word asymmetry indicates that the amount of activation is limited and task dependent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/0278-7393.34.2.353

Ardi Roelofs

Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 6, pp. 1580–1598, 2008.

Abstract | Links | BibTeX

@article{Roelofs2008a,
title = {Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning},
author = {Ardi Roelofs},
doi = {10.1037/a0012476},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {34},
number = {6},
pages = {1580--1598},
abstract = {Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment 1), or superimposed onto, the pictures (Experiments 2 and 3); or they responded to tones (Experiment 4). Pictures and arrows/tones were presented at stimulus onset asynchronies of 0, 300, and 1,000 ms. Earlier research showed that vocal responding hampers auditory perception, which predicts earlier shifts of attention to the tones than to the arrows. Word planning yielded dual-task interference. Phonological preparation reduced the latencies of picture naming and gaze shifting. The preparation benefit was propagated into the latencies of the manual responses to the arrows but not to the tones. The malleability of the interference supports the attentional control account. This conclusion was corroborated by computer simulations showing that an extension of WEAVER++ (A. Roelofs, 2003) with assumptions about the attentional control of tasks quantitatively accounts for the latencies of vocal responding, gaze shifting, and manual responding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/a0012476

Martin Rolfs; Reinhold Kliegl; Ralf Engbert

Toward a model of microsaccade generation: The case of microsaccadic inhibition Journal Article

In: Journal of Vision, vol. 8, no. 11, pp. 1–23, 2008.

Abstract | Links | BibTeX

@article{Rolfs2008a,
title = {Toward a model of microsaccade generation: The case of microsaccadic inhibition},
author = {Martin Rolfs and Reinhold Kliegl and Ralf Engbert},
doi = {10.1167/8.11.5},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {11},
pages = {1--23},
abstract = {Microsaccades are one component of the small eye movements that constitute fixation. Their implementation in the oculomotor system is unknown. To better understand the physiological and mechanistic processes underlying microsaccade generation, we studied microsaccadic inhibition, a transient drop of microsaccade rate, in response to irrelevant visual and auditory stimuli. Quantitative descriptions of the time course and strength of inhibition revealed a strong dependence of microsaccadic inhibition on stimulus characteristics. In Experiment 1, microsaccadic inhibition occurred sooner after auditory than after visual stimuli and after luminance-contrast than after color-contrast visual stimuli. Moreover, microsaccade amplitude strongly decreased during microsaccadic inhibition. In Experiment 2, the latency of microsaccadic inhibition increased with decreasing luminance contrast. We develop a conceptual model of microsaccade generation in which microsaccades result from fixation-related activity in a motor map coding for both fixation and saccades. In this map, fixation is represented at the central site. Saccades are generated by activity in the periphery, their amplitude increasing with eccentricity. The activity at the central, fixation-related site of the map predicts the rate of microsaccades as well as their amplitude and direction distributions. This model represents a framework for understanding the dynamics of microsaccade behavior in a broad range of tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/8.11.5

Martin Rolfs; Jochen Laubrock; Reinhold Kliegl

Microsaccade-induced prolongation of saccadic latencies depends on microsaccade amplitude Journal Article

In: Journal of Eye Movement Research, vol. 1, no. 3, pp. 1–8, 2008.

Abstract | Links | BibTeX

@article{Rolfs2008,
title = {Microsaccade-induced prolongation of saccadic latencies depends on microsaccade amplitude},
author = {Martin Rolfs and Jochen Laubrock and Reinhold Kliegl},
doi = {10.16910/jemr.1.3.1},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {1},
number = {3},
pages = {1--8},
abstract = {Fixations consist of small movements including microsaccades, i.e., rapid flicks in eye position that replace the retinal image by up to 1 degree of visual angle. Recently, we showed in a delayed-saccade task (1) that the rate of microsaccades decreased in the course of saccade preparation and (2) that microsaccades occurring around the time of a go signal were associated with prolonged saccade latencies (Rolfs et al., 2006). A re-analysis of the same data set revealed a strong dependence of these findings on microsaccade amplitude. First, microsaccade amplitude dropped to a minimum just before the generation of a saccade. Second, the delay of response saccades was a function of microsaccade amplitude: Microsaccades with larger amplitudes were followed by longer response latencies. These findings were predicted by a recently proposed model that attributes microsaccade generation to fixation-related activity in a saccadic motor map that is in competition with the generation of large saccades (Rolfs et al., 2008). We propose, therefore, that microsaccade statistics provide a behavioral correlate of fixation-related activity in the oculomotor system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.16910/jemr.1.3.1

N. N. J. Rommelse; Stefan Van der Stigchel; J. Witlox; C. J. A. Geldof; J. -B. Deijen; Jan Theeuwes; Jaap Oosterlaan; J. A. Sergeant

Deficits in visuo-spatial working memory, inhibition and oculomotor control in boys with ADHD and their non-affected brothers Journal Article

In: Journal of Neural Transmission, vol. 115, no. 2, pp. 249–260, 2008.

Abstract | Links | BibTeX

@article{Rommelse2008,
title = {Deficits in visuo-spatial working memory, inhibition and oculomotor control in boys with ADHD and their non-affected brothers},
author = {N. N. J. Rommelse and Stefan Van der Stigchel and J. Witlox and C. J. A. Geldof and J. -B. Deijen and Jan Theeuwes and Jaap Oosterlaan and J. A. Sergeant},
doi = {10.1007/s00702-007-0865-7},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neural Transmission},
volume = {115},
number = {2},
pages = {249--260},
abstract = {Few studies have assessed visuo-spatial working memory and inhibition in attention-deficit/hyperactivity disorder (ADHD) by recording saccades and consequently little additional knowledge has been gathered on oculomotor functioning in ADHD. Moreover, this is the first study to report the performance of non-affected siblings of children with ADHD, which may shed light on the familiality of deficits. A total of 14 boys with ADHD, 18 non-affected brothers, and 15 control boys aged 7-14 years, were administered a memory-guided saccade task with delays of three and seven seconds. Familial deficits were found in accuracy of visuo-spatial working memory, percentage of anticipatory saccades, and tendency to overshoot saccades relative to controls. These findings suggest memory-guided saccade deficits may relate to a familial predisposition for ADHD.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00702-007-0865-7

Gianluca U. Sorrento; Denise Y. P. Henriques

Reference frame conversions for repeated arm movements Journal Article

In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2968–2984, 2008.

Abstract | Links | BibTeX

@article{Sorrento2008,
title = {Reference frame conversions for repeated arm movements},
author = {Gianluca U. Sorrento and Denise Y. P. Henriques},
doi = {10.1152/jn.90225.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {99},
number = {6},
pages = {2968--2984},
abstract = {The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1152/jn.90225.2008

Jan L. Souman; Tom C. A. Freeman

Motion perception during sinusoidal smooth pursuit eye movements: Signal latencies and non-linearities Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–14, 2008.

Abstract | Links | BibTeX

@article{Souman2008,
title = {Motion perception during sinusoidal smooth pursuit eye movements: Signal latencies and non-linearities},
author = {Jan L. Souman and Tom C. A. Freeman},
doi = {10.1167/8.14.10},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--14},
abstract = {Smooth pursuit eye movements add motion to the retinal image. To compensate, the visual system can combine estimates of pursuit velocity and retinal motion to recover motion with respect to the head. Little attention has been paid to the temporal characteristics of this compensation process. Here, we describe how the latency difference between the eye movement signal and the retinal signal can be measured for motion perception during sinusoidal pursuit. In two experiments, observers compared the peak velocity of a motion stimulus presented in pursuit and fixation intervals. Both the pursuit target and the motion stimulus moved with a sinusoidal profile. The phase and amplitude of the motion stimulus were varied systematically in different conditions, along with the amplitude of pursuit. The latency difference between the eye movement signal and the retinal signal was measured by fitting the standard linear model and a non-linear variant to the observed velocity matches. We found that the eye movement signal lagged the retinal signal by a small amount. The non-linear model fitted the velocity matches better than the linear one and this difference increased with pursuit amplitude. The results support previous claims that the visual system estimates eye movement velocity and retinal velocity in a non-linear fashion and that the latency difference between the two signals is small.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Smooth pursuit eye movements add motion to the retinal image. To compensate, the visual system can combine estimates of pursuit velocity and retinal motion to recover motion with respect to the head. Little attention has been paid to the temporal characteristics of this compensation process. Here, we describe how the latency difference between the eye movement signal and the retinal signal can be measured for motion perception during sinusoidal pursuit. In two experiments, observers compared the peak velocity of a motion stimulus presented in pursuit and fixation intervals. Both the pursuit target and the motion stimulus moved with a sinusoidal profile. The phase and amplitude of the motion stimulus were varied systematically in different conditions, along with the amplitude of pursuit. The latency difference between the eye movement signal and the retinal signal was measured by fitting the standard linear model and a non-linear variant to the observed velocity matches. We found that the eye movement signal lagged the retinal signal by a small amount. The non-linear model fitted the velocity matches better than the linear one and this difference increased with pursuit amplitude. The results support previous claims that the visual system estimates eye movement velocity and retinal velocity in a non-linear fashion and that the latency difference between the two signals is small.

  • doi:10.1167/8.14.10

David Souto; Dirk Kerzel

Dynamics of attention during the initiation of smooth pursuit eye movements Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 3:1–16, 2008.

Abstract | Links | BibTeX

@article{Souto2008,
title = {Dynamics of attention during the initiation of smooth pursuit eye movements},
author = {David Souto and Dirk Kerzel},
doi = {10.1167/8.14.3},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {3:1--16},
abstract = {Many studies indicate that saccades are necessarily preceded by a shift of attention to the target location. There is no direct evidence for the same coupling during smooth pursuit. If smooth pursuit and attention were coupled, pursuit onset should be delayed whenever attention is focused on a stationary, non-target location. To test this hypothesis, observers were instructed to shift their attention to a peripheral location according to a location cue (Experiments 1 and 2) or a symbolic cue (Experiment 3) around the time of smooth pursuit initiation. Attending to static targets had only negligible effects on smooth pursuit latencies and the early open-loop response but lowered pursuit velocity substantially about the onset of closed-loop pursuit. Around this time, eye velocity reflected the competition between the to-be-tracked and to-be-attended object motion, entailing a reduction of eye velocity by 50% compared to the single task condition. The precise time course of attentional modulation of smooth pursuit initiation was at odds with the idea that an attention shift must precede any voluntary eye movement. Finally, the initial catch-up saccades were strongly delayed with attention diverted from the pursuit target. Implications for models of target selection for pursuit and saccades are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Many studies indicate that saccades are necessarily preceded by a shift of attention to the target location. There is no direct evidence for the same coupling during smooth pursuit. If smooth pursuit and attention were coupled, pursuit onset should be delayed whenever attention is focused on a stationary, non-target location. To test this hypothesis, observers were instructed to shift their attention to a peripheral location according to a location cue (Experiments 1 and 2) or a symbolic cue (Experiment 3) around the time of smooth pursuit initiation. Attending to static targets had only negligible effects on smooth pursuit latencies and the early open-loop response but lowered pursuit velocity substantially about the onset of closed-loop pursuit. Around this time, eye velocity reflected the competition between the to-be-tracked and to-be-attended object motion, entailing a reduction of eye velocity by 50% compared to the single task condition. The precise time course of attentional modulation of smooth pursuit initiation was at odds with the idea that an attention shift must precede any voluntary eye movement. Finally, the initial catch-up saccades were strongly delayed with attention diverted from the pursuit target. Implications for models of target selection for pursuit and saccades are discussed.

  • doi:10.1167/8.14.3

Miriam Spering; Anna Montagnini; Karl R. Gegenfurtner

Competition between color and luminance for target selection in smooth pursuit and saccadic eye movements Journal Article

In: Journal of Vision, vol. 8, no. 15, pp. 1–19, 2008.

Abstract | BibTeX

@article{Spering2008,
title = {Competition between color and luminance for target selection in smooth pursuit and saccadic eye movements},
author = {Miriam Spering and Anna Montagnini and Karl R. Gegenfurtner},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {15},
pages = {1--19},
abstract = {Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.

Rike Steenken; Hans Colonius; Adele Diederich; Stefan Rach

Visual-auditory interaction in saccadic reaction time: Effects of auditory masker level Journal Article

In: Brain Research, vol. 1220, pp. 150–156, 2008.

Abstract | Links | BibTeX

@article{Steenken2008,
title = {Visual-auditory interaction in saccadic reaction time: Effects of auditory masker level},
author = {Rike Steenken and Hans Colonius and Adele Diederich and Stefan Rach},
doi = {10.1016/j.brainres.2007.08.034},
year = {2008},
date = {2008-01-01},
journal = {Brain Research},
volume = {1220},
pages = {150--156},
abstract = {Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs.

  • doi:10.1016/j.brainres.2007.08.034

Rike Steenken; Adele Diederich; Hans Colonius

Time course of auditory masker effects: Tapping the locus of audiovisual integration? Journal Article

In: Neuroscience Letters, vol. 435, no. 1, pp. 78–83, 2008.

Abstract | Links | BibTeX

@article{Steenken2008a,
title = {Time course of auditory masker effects: Tapping the locus of audiovisual integration?},
author = {Rike Steenken and Adele Diederich and Hans Colonius},
doi = {10.1016/j.neulet.2008.02.017},
year = {2008},
date = {2008-01-01},
journal = {Neuroscience Letters},
volume = {435},
number = {1},
pages = {78--83},
abstract = {In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.

  • doi:10.1016/j.neulet.2008.02.017

Timo Stein; Ignacio Vallines; Werner X. Schneider

Primary visual cortex reflects behavioral performance in the attentional blink Journal Article

In: NeuroReport, vol. 19, no. 13, pp. 1277–1281, 2008.

Abstract | Links | BibTeX

@article{Stein2008,
title = {Primary visual cortex reflects behavioral performance in the attentional blink},
author = {Timo Stein and Ignacio Vallines and Werner X. Schneider},
doi = {10.1097/WNR.0b013e32830bab02},
year = {2008},
date = {2008-01-01},
journal = {NeuroReport},
volume = {19},
number = {13},
pages = {1277--1281},
abstract = {When two masked targets are presented in a rapid sequence, attentional limitations are reflected in reduced identification accuracy for the second target (T2). We used functional magnetic resonance imaging to disentangle the distinct neural substrates of T2 processing during this attentional blink phenomenon. Spatially separating the two targets allows the retinotopic localization of the different stimuli's encoding sites in primary visual cortex (V1) and thus enables activation elicited by each target to be differentially measured in V1. The encoding location of the second target mirrored T2 identification accuracy in a retinotopically specific manner. These results are the first evidence for effects of behavioral performance on hemodynamic responses in V1 under conditions of the attentional blink.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When two masked targets are presented in a rapid sequence, attentional limitations are reflected in reduced identification accuracy for the second target (T2). We used functional magnetic resonance imaging to disentangle the distinct neural substrates of T2 processing during this attentional blink phenomenon. Spatially separating the two targets allows the retinotopic localization of the different stimuli's encoding sites in primary visual cortex (V1) and thus enables activation elicited by each target to be differentially measured in V1. The encoding location of the second target mirrored T2 identification accuracy in a retinotopically specific manner. These results are the first evidence for effects of behavioral performance on hemodynamic responses in V1 under conditions of the attentional blink.

  • doi:10.1097/WNR.0b013e32830bab02

Paul Sauleau; Pierre Pollak; Paul Krack; Jean Hubert Courjon; Alain Vighetto; Alim Louis Benabid; Denis Pélisson; Caroline Tilikete

Subthalamic stimulation improves orienting gaze movements in Parkinson's disease Journal Article

In: Clinical Neurophysiology, vol. 119, no. 8, pp. 1857–1863, 2008.

Abstract | Links | BibTeX

@article{Sauleau2008,
title = {Subthalamic stimulation improves orienting gaze movements in Parkinson's disease},
author = {Paul Sauleau and Pierre Pollak and Paul Krack and Jean Hubert Courjon and Alain Vighetto and Alim Louis Benabid and Denis Pélisson and Caroline Tilikete},
doi = {10.1016/j.clinph.2008.04.013},
year = {2008},
date = {2008-01-01},
journal = {Clinical Neurophysiology},
volume = {119},
number = {8},
pages = {1857--1863},
abstract = {Objective: To determine the effect of subthalamic stimulation on visually triggered eye and head movements in patients with Parkinson's disease (PD). Methods: We compared the gain and latency of visually triggered eye and head movements in 12 patients bilaterally implanted into the subthalamic nucleus (STN) for severe PD and six age-matched control subjects. Visually triggered movements of eye (head restrained), and of eye and head (head unrestrained) were recorded in the absence of dopaminergic medication. Bilateral stimulation was turned OFF and then turned ON with voltage and contact used in chronic setting. The latency was determined from the beginning of initial horizontal eye movements relative to the target onset, and the gain was defined as the ratio of the amplitude of the initial movement to the amplitude of the target movement. Results: Without stimulation, the initiation of the head movement was significantly delayed in patients and the gain of head movement was reduced. Our patients also presented significantly prolonged latencies and hypometry of visually triggered saccades in the head-fixed condition and of gaze in head-free condition. Bilateral STN stimulation with therapeutic parameters improved performance of orienting gaze, eye and head movements towards the controls' level. Conclusions: These results demonstrate that visually triggered saccades and orienting eye-head movements are impaired in the advanced stage of PD. In addition, subthalamic stimulation enhances amplitude and shortens latency of these movements. Significance: These results are likely explained by alteration of the information processed by the superior colliculus (SC), a pivotal visuomotor structure involved in both voluntary and reflexive saccades. Improvement of movements with stimulation of the STN may be related to its positive input either on the STN-Substantia Nigra-SC pathway or on the parietal cortex-SC pathway.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Objective: To determine the effect of subthalamic stimulation on visually triggered eye and head movements in patients with Parkinson's disease (PD). Methods: We compared the gain and latency of visually triggered eye and head movements in 12 patients bilaterally implanted into the subthalamic nucleus (STN) for severe PD and six age-matched control subjects. Visually triggered movements of eye (head restrained), and of eye and head (head unrestrained) were recorded in the absence of dopaminergic medication. Bilateral stimulation was turned OFF and then turned ON with voltage and contact used in chronic setting. The latency was determined from the beginning of initial horizontal eye movements relative to the target onset, and the gain was defined as the ratio of the amplitude of the initial movement to the amplitude of the target movement. Results: Without stimulation, the initiation of the head movement was significantly delayed in patients and the gain of head movement was reduced. Our patients also presented significantly prolonged latencies and hypometry of visually triggered saccades in the head-fixed condition and of gaze in head-free condition. Bilateral STN stimulation with therapeutic parameters improved performance of orienting gaze, eye and head movements towards the controls' level. Conclusions: These results demonstrate that visually triggered saccades and orienting eye-head movements are impaired in the advanced stage of PD. In addition, subthalamic stimulation enhances amplitude and shortens latency of these movements. Significance: These results are likely explained by alteration of the information processed by the superior colliculus (SC), a pivotal visuomotor structure involved in both voluntary and reflexive saccades. Improvement of movements with stimulation of the STN may be related to its positive input either on the STN-Substantia Nigra-SC pathway or on the parietal cortex-SC pathway.

  • doi:10.1016/j.clinph.2008.04.013

Christoph Scheepers; Frank Keller; Mirella Lapata

Evidence for serial coercion: A time course analysis using the visual-world paradigm Journal Article

In: Cognitive Psychology, vol. 56, no. 1, pp. 1–29, 2008.

Abstract | Links | BibTeX

@article{Scheepers2008,
title = {Evidence for serial coercion: A time course analysis using the visual-world paradigm},
author = {Christoph Scheepers and Frank Keller and Mirella Lapata},
doi = {10.1016/j.cogpsych.2006.10.001},
year = {2008},
date = {2008-01-01},
journal = {Cognitive Psychology},
volume = {56},
number = {1},
pages = {1--29},
abstract = {Metonymic verbs like start or enjoy often occur with artifact-denoting complements (e.g., The artist started the picture) although semantically they require event-denoting complements (e.g., The artist started painting the picture). In case of artifact-denoting objects, the complement is assumed to be type shifted (or coerced) into an event to conform to the verb's semantic restrictions. Psycholinguistic research has provided evidence for this kind of enriched composition: readers experience processing difficulty when faced with metonymic constructions compared to non-metonymic controls. However, slower reading times for metonymic constructions could also be due to competition between multiple interpretations that are being entertained in parallel whenever a metonymic verb is encountered. Using the visual-world paradigm, we devised an experiment which enabled us to determine the time course of metonymic interpretation in relation to non-metonymic controls. The experiment provided evidence in favor of a non-competitive, serial coercion process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Metonymic verbs like start or enjoy often occur with artifact-denoting complements (e.g., The artist started the picture) although semantically they require event-denoting complements (e.g., The artist started painting the picture). In case of artifact-denoting objects, the complement is assumed to be type shifted (or coerced) into an event to conform to the verb's semantic restrictions. Psycholinguistic research has provided evidence for this kind of enriched composition: readers experience processing difficulty when faced with metonymic constructions compared to non-metonymic controls. However, slower reading times for metonymic constructions could also be due to competition between multiple interpretations that are being entertained in parallel whenever a metonymic verb is encountered. Using the visual-world paradigm, we devised an experiment which enabled us to determine the time course of metonymic interpretation in relation to non-metonymic controls. The experiment provided evidence in favor of a non-competitive, serial coercion process.

  • doi:10.1016/j.cogpsych.2006.10.001

Anne-Catherine Scherlen; Jean-Baptiste Bernard; Aurélie Calabrèse; Eric Castet

Page mode reading with simulated scotomas: Oculo-motor patterns Journal Article

In: Vision Research, vol. 48, no. 18, pp. 1870–1878, 2008.

Abstract | BibTeX

@article{Scherlen2008,
title = {Page mode reading with simulated scotomas: Oculo-motor patterns},
author = {Anne-Catherine Scherlen and Jean-Baptiste Bernard and Aurélie Calabrèse and Eric Castet},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {18},
pages = {1870--1878},
abstract = {This study investigated the relationship between reading speed and oculo-motor parameters when normally sighted observers had to read single sentences with an artificial macular scotoma. Using multiple regression analysis, our main result shows that two significant predictors, number of saccades per sentence followed by average fixation duration, account for 94% of reading speed variance: reading speed decreases when number of saccades and fixation duration increase. The number of letters per forward saccade (L/FS), which was measured directly in contrast to previous studies, is not a significant predictor. The results suggest that, independently of the size of saccades, some or all portions of a sentence are temporally integrated across an increasing number of fixations as reading speed is reduced.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

This study investigated the relationship between reading speed and oculo-motor parameters when normally sighted observers had to read single sentences with an artificial macular scotoma. Using multiple regression analysis, our main result shows that two significant predictors, number of saccades per sentence followed by average fixation duration, account for 94% of reading speed variance: reading speed decreases when number of saccades and fixation duration increase. The number of letters per forward saccade (L/FS), which was measured directly in contrast to previous studies, is not a significant predictor. The results suggest that, independently of the size of saccades, some or all portions of a sentence are temporally integrated across an increasing number of fixations as reading speed is reduced.

Laura Schmalzl; Romina Palermo; Melissa J. Green; Ruth Brunsdon; Max Coltheart

Training of familiar face recognition and visual scan paths for faces in a child with congenital prosopagnosia Journal Article

In: Cognitive Neuropsychology, vol. 25, no. 5, pp. 704–729, 2008.

Abstract | Links | BibTeX

@article{Schmalzl2008,
title = {Training of familiar face recognition and visual scan paths for faces in a child with congenital prosopagnosia},
author = {Laura Schmalzl and Romina Palermo and Melissa J. Green and Ruth Brunsdon and Max Coltheart},
doi = {10.1080/02643290802299350},
year = {2008},
date = {2008-01-01},
journal = {Cognitive Neuropsychology},
volume = {25},
number = {5},
pages = {704--729},
abstract = {In the current report we describe a successful training study aimed at improving recognition of a set of familiar face photographs in K., a 4-year-old girl with congenital prosopagnosia (CP). A detailed assessment of K.'s face-processing skills showed a deficit in structural encoding, most pronounced in the processing of facial features within the face. In addition, eye movement recordings revealed that K.'s scan paths for faces were characterized by a large percentage of fixations directed to areas outside the internal core features (i.e., eyes, nose, and mouth), in particular by poor attendance to the eye region. Following multiple baseline assessments, training focused on teaching K. to reliably recognize a set of familiar face photographs by directing visual attention to specific characteristics of the internal features of each face. The training significantly improved K.'s ability to recognize the target faces, with her performance being flawless immediately after training as well as at a follow-up assessment 1 month later. In addition, eye movement recordings following training showed a significant change in K.'s scan paths, with a significant increase in the percentage of fixations directed to the internal features, particularly the eye region. Encouragingly, not only was the change in scan paths observed for the set of familiar trained faces, but it generalized to a set of faces that was not presented during training. In addition to documenting significant training effects, our study raises the intriguing question of whether abnormal scan paths for faces may be a common factor underlying face recognition impairments in childhood CP, an issue that has not been explored so far.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In the current report we describe a successful training study aimed at improving recognition of a set of familiar face photographs in K., a 4-year-old girl with congenital prosopagnosia (CP). A detailed assessment of K.'s face-processing skills showed a deficit in structural encoding, most pronounced in the processing of facial features within the face. In addition, eye movement recordings revealed that K.'s scan paths for faces were characterized by a large percentage of fixations directed to areas outside the internal core features (i.e., eyes, nose, and mouth), in particular by poor attendance to the eye region. Following multiple baseline assessments, training focused on teaching K. to reliably recognize a set of familiar face photographs by directing visual attention to specific characteristics of the internal features of each face. The training significantly improved K.'s ability to recognize the target faces, with her performance being flawless immediately after training as well as at a follow-up assessment 1 month later. In addition, eye movement recordings following training showed a significant change in K.'s scan paths, with a significant increase in the percentage of fixations directed to the internal features, particularly the eye region. Encouragingly, not only was the change in scan paths observed for the set of familiar trained faces, but it generalized to a set of faces that was not presented during training. In addition to documenting significant training effects, our study raises the intriguing question of whether abnormal scan paths for faces may be a common factor underlying face recognition impairments in childhood CP, an issue that has not been explored so far.

  • doi:10.1080/02643290802299350

Michael Schneider; Angela Heine; Verena Thaler; Joke Torbeyns; Bert De Smedt; Lieven Verschaffel; Arthur M. Jacobs; Elsbeth Stern

A validation of eye movements as a measure of elementary school children's developing number sense Journal Article

In: Cognitive Development, vol. 23, no. 3, pp. 409–422, 2008.

Abstract | Links | BibTeX

@article{Schneider2008,
title = {A validation of eye movements as a measure of elementary school children's developing number sense},
author = {Michael Schneider and Angela Heine and Verena Thaler and Joke Torbeyns and Bert De Smedt and Lieven Verschaffel and Arthur M. Jacobs and Elsbeth Stern},
doi = {10.1016/j.cogdev.2008.07.002},
year = {2008},
date = {2008-01-01},
journal = {Cognitive Development},
volume = {23},
number = {3},
pages = {409--422},
abstract = {The number line estimation task captures central aspects of children's developing number sense, that is, their intuitions for numbers and their interrelations. Previous research used children's answer patterns and verbal reports as evidence of how they solve this task. In the present study we investigated to what extent eye movements recorded during task solution reflect children's use of the number line. By means of a cross-sectional design with 66 children from Grades 1, 2, and 3, we show that eye-tracking data (a) reflect grade-related increase in estimation competence, (b) are correlated with the accuracy of manual answers, (c) relate, in Grade 2, to children's addition competence, (d) are systematically distributed over the number line, and (e) replicate previous findings concerning children's use of counting strategies and orientation-point strategies. These findings demonstrate the validity and utility of eye-tracking data for investigating children's developing number sense and estimation competence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The number line estimation task captures central aspects of children's developing number sense, that is, their intuitions for numbers and their interrelations. Previous research used children's answer patterns and verbal reports as evidence of how they solve this task. In the present study we investigated to what extent eye movements recorded during task solution reflect children's use of the number line. By means of a cross-sectional design with 66 children from Grades 1, 2, and 3, we show that eye-tracking data (a) reflect grade-related increase in estimation competence, (b) are correlated with the accuracy of manual answers, (c) relate, in Grade 2, to children's addition competence, (d) are systematically distributed over the number line, and (e) replicate previous findings concerning children's use of counting strategies and orientation-point strategies. These findings demonstrate the validity and utility of eye-tracking data for investigating children's developing number sense and estimation competence.

  • doi:10.1016/j.cogdev.2008.07.002

Werner X. Schneider; Ellen Matthias; Melissa L. -H. Võ

Transsaccadic scene memory revisited: A 'Theory of Visual Attention (TVA)' based approach to recognition memory and confidence for objects in naturalistic scenes Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–13, 2008.

Abstract | BibTeX

@article{Schneider2008a,
title = {Transsaccadic scene memory revisited: A 'Theory of Visual Attention (TVA)' based approach to recognition memory and confidence for objects in naturalistic scenes},
author = {Werner X. Schneider and Ellen Matthias and Melissa L. -H. Võ},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {2},
pages = {1--13},
abstract = {The study presented here introduces a new approach to the investigation of transsaccadic memory for objects in naturalistic scenes. Participants were tested with a whole-report task from which — based on the theory of visual attention (TVA) — processing efficiency parameters were derived, namely visual short-term memory storage capacity and visual processing speed. By combining these processing efficiency parameters with transsaccadic memory data from a previous study, we were able to take a closer look at the contribution of visual short-term memory capacity and processing speed to the establishment of visual long-term memory representations during scene viewing. Results indicate that especially the VSTM storage capacity plays a major role in the generation of transsaccadic visual representations of naturalistic scenes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The study presented here introduces a new approach to the investigation of transsaccadic memory for objects in naturalistic scenes. Participants were tested with a whole-report task from which — based on the theory of visual attention (TVA) — processing efficiency parameters were derived, namely visual short-term memory storage capacity and visual processing speed. By combining these processing efficiency parameters with transsaccadic memory data from a previous study, we were able to take a closer look at the contribution of visual short-term memory capacity and processing speed to the establishment of visual long-term memory representations during scene viewing. Results indicate that especially the VSTM storage capacity plays a major role in the generation of transsaccadic visual representations of naturalistic scenes.

Alexander C. Schütz; Doris I. Braun; Dirk Kerzel; Karl R. Gegenfurtner

Improved visual sensitivity during smooth pursuit eye movements Journal Article

In: Nature Neuroscience, vol. 11, no. 10, pp. 1211–1216, 2008.

Abstract | Links | BibTeX

@article{Schuetz2008,
title = {Improved visual sensitivity during smooth pursuit eye movements},
author = {Alexander C. Schütz and Doris I. Braun and Dirk Kerzel and Karl R. Gegenfurtner},
doi = {10.1038/nn.2194},
year = {2008},
date = {2008-01-01},
journal = {Nature Neuroscience},
volume = {11},
number = {10},
pages = {1211--1216},
abstract = {When we view the world around us, we constantly move our eyes. This brings objects of interest into the fovea and keeps them there, but visual sensitivity has been shown to deteriorate while the eyes are moving. Here we show that human sensitivity for some visual stimuli is improved during smooth pursuit eye movements. Detection thresholds for briefly flashed, colored stimuli were 16% lower during pursuit than during fixation. Similarly, detection thresholds for luminance-defined stimuli of high spatial frequency were lowered. These findings suggest that the pursuit-induced sensitivity increase may have its neuronal origin in the parvocellular retino-thalamic system. This implies that the visual system not only uses feedback connections to improve processing for locations and objects being attended to, but that a whole processing subsystem can be boosted. During pursuit, facilitation of the parvocellular system may reduce motion blur for stationary objects and increase sensitivity to speed changes of the tracked object.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When we view the world around us, we constantly move our eyes. This brings objects of interest into the fovea and keeps them there, but visual sensitivity has been shown to deteriorate while the eyes are moving. Here we show that human sensitivity for some visual stimuli is improved during smooth pursuit eye movements. Detection thresholds for briefly flashed, colored stimuli were 16% lower during pursuit than during fixation. Similarly, detection thresholds for luminance-defined stimuli of high spatial frequency were lowered. These findings suggest that the pursuit-induced sensitivity increase may have its neuronal origin in the parvocellular retino-thalamic system. This implies that the visual system not only uses feedback connections to improve processing for locations and objects being attended to, but that a whole processing subsystem can be boosted. During pursuit, facilitation of the parvocellular system may reduce motion blur for stationary objects and increase sensitivity to speed changes of the tracked object.

  • doi:10.1038/nn.2194

Tamara A. Russell; Melissa J. Green; Ian Simpson; Max Coltheart

Remediation of facial emotion perception in schizophrenia: Concomitant changes in visual attention Journal Article

In: Schizophrenia Research, vol. 103, no. 1-3, pp. 248–256, 2008.

Abstract | Links | BibTeX

@article{Russell2008,
title = {Remediation of facial emotion perception in schizophrenia: Concomitant changes in visual attention},
author = {Tamara A. Russell and Melissa J. Green and Ian Simpson and Max Coltheart},
doi = {10.1016/j.schres.2008.04.033},
year = {2008},
date = {2008-01-01},
journal = {Schizophrenia Research},
volume = {103},
number = {1-3},
pages = {248--256},
abstract = {The study examined changes in visual attention in schizophrenia following training with a social-cognitive remediation package designed to improve facial emotion recognition (the Micro-Expression Training Tool; METT). Forty out-patients with schizophrenia were randomly allocated to active training (METT; n = 26), or repeated exposure (RE; n = 14); all completed an emotion recognition task with concurrent eye movement recording. Emotion recognition accuracy was significantly improved in the METT group, and this effect was maintained after one week. Immediately following training, the METT group directed more eye movements within feature areas of faces (i.e., eyes, nose, mouth) compared to the RE group. The number of fixations directed to feature areas of faces was positively associated with emotion recognition accuracy prior to training. After one week, the differences between METT and RE groups in viewing feature areas of faces were reduced to trends. However, within group analyses of the METT group revealed significantly increased number of fixations to, and dwell time within, feature areas following training which were maintained after one week. These results provide the first evidence that improvements in emotion recognition following METT training are associated with changes in visual attention to the feature areas of emotional faces. These findings support the contribution of visual attention abnormalities to emotion recognition impairment in schizophrenia, and suggest that one mechanism for improving emotion recognition involves re-directing visual attention to relevant features of emotional faces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The study examined changes in visual attention in schizophrenia following training with a social-cognitive remediation package designed to improve facial emotion recognition (the Micro-Expression Training Tool; METT). Forty out-patients with schizophrenia were randomly allocated to active training (METT; n = 26), or repeated exposure (RE; n = 14); all completed an emotion recognition task with concurrent eye movement recording. Emotion recognition accuracy was significantly improved in the METT group, and this effect was maintained after one week. Immediately following training, the METT group directed more eye movements within feature areas of faces (i.e., eyes, nose, mouth) compared to the RE group. The number of fixations directed to feature areas of faces was positively associated with emotion recognition accuracy prior to training. After one week, the differences between METT and RE groups in viewing feature areas of faces were reduced to trends. However, within group analyses of the METT group revealed significantly increased number of fixations to, and dwell time within, feature areas following training which were maintained after one week. These results provide the first evidence that improvements in emotion recognition following METT training are associated with changes in visual attention to the feature areas of emotional faces. These findings support the contribution of visual attention abnormalities to emotion recognition impairment in schizophrenia, and suggest that one mechanism for improving emotion recognition involves re-directing visual attention to relevant features of emotional faces.

  • doi:10.1016/j.schres.2008.04.033

Stan Van Pelt; W. Pieter Medendorp

Updating target distance across eye movements in depth Journal Article

In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2281–2290, 2008.

Abstract | Links | BibTeX

@article{VanPelt2008,
title = {Updating target distance across eye movements in depth},
author = {Stan Van Pelt and W. Pieter Medendorp},
doi = {10.1152/jn.01281.2007},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {99},
number = {5},
pages = {2281--2290},
abstract = {We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain was to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain was to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain was to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain was to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.

  • doi:10.1152/jn.01281.2007

Wieske Van Zoest; Mieke Donk

Goal-driven modulation as a function of time in saccadic target selection Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008.

Abstract | Links | BibTeX

@article{VanZoest2008,
title = {Goal-driven modulation as a function of time in saccadic target selection},
author = {Wieske Van Zoest and Mieke Donk},
doi = {10.1080/17470210701595555},
year = {2008},
date = {2008-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {61},
number = {10},
pages = {1553--1572},
abstract = {Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature.

  • doi:10.1080/17470210701595555

Wieske Van Zoest; Stefan Van der Stigchel; Jason J. S. Barton

Distractor effects on saccade trajectories: A comparison of prosaccades, antisaccades, and memory-guided saccades Journal Article

In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008.

Abstract | Links | BibTeX

@article{VanZoest2008a,
title = {Distractor effects on saccade trajectories: A comparison of prosaccades, antisaccades, and memory-guided saccades},
author = {Wieske Van Zoest and Stefan Van der Stigchel and Jason J. S. Barton},
doi = {10.1007/s00221-007-1243-2},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {186},
number = {3},
pages = {431--442},
abstract = {The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.

  • doi:10.1007/s00221-007-1243-2

André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe

Input monitoring and response selection as components of executive control in pro-saccades and anti-saccades Journal Article

In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008.

Abstract | Links | BibTeX

@article{Vandierendonck2008,
title = {Input monitoring and response selection as components of executive control in pro-saccades and anti-saccades},
author = {André Vandierendonck and Maud Deschuyteneer and Ann Depoorter and Denis Drieghe},
doi = {10.1007/s00426-006-0078-y},
year = {2008},
date = {2008-01-01},
journal = {Psychological Research},
volume = {72},
number = {1},
pages = {1--11},
abstract = {Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control.

  • doi:10.1007/s00426-006-0078-y


Suiping Wang; Hsuan-Chih Chen; Jinmian Yang; Lei Mo

Immediacy of integration in discourse comprehension: Evidence from Chinese readers' eye movements Journal Article

In: Language and Cognitive Processes, vol. 23, no. 2, pp. 241–257, 2008.

Abstract | Links | BibTeX

@article{Wang2008a,
title = {Immediacy of integration in discourse comprehension: Evidence from Chinese readers' eye movements},
author = {Suiping Wang and Hsuan-Chih Chen and Jinmian Yang and Lei Mo},
doi = {10.1080/01690960701437061},
year = {2008},
date = {2008-01-01},
journal = {Language and Cognitive Processes},
volume = {23},
number = {2},
pages = {241--257},
abstract = {An eye-movement study was conducted to examine whether Chinese readers immediately activate and integrate related background information during discourse comprehension. Participants were asked to read short passages, each containing a critical word that fitted well within the local context but was inconsistent or neutral with background information from the early part of the passage. This manipulation of textual consistency produced reliable effects on both first-pass reading fixations in the target region and second-pass reading times in the pre-target and target regions. These results indicate that integration processes start very rapidly in reading text in a writing system with properties that encourage delayed processing, suggesting that immediate processing is likely a universal principle in discourse comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


An eye-movement study was conducted to examine whether Chinese readers immediately activate and integrate related background information during discourse comprehension. Participants were asked to read short passages, each containing a critical word that fitted well within the local context but was inconsistent or neutral with background information from the early part of the passage. This manipulation of textual consistency produced reliable effects on both first-pass reading fixations in the target region and second-pass reading times in the pre-target and target regions. These results indicate that integration processes start very rapidly in reading text in a writing system with properties that encourage delayed processing, suggesting that immediate processing is likely a universal principle in discourse comprehension.


  • doi:10.1080/01690960701437061


Z. I. Wang; Louis F. Dell'Osso

Tenotomy procedure alleviates the "slow to see" phenomenon in infantile nystagmus syndrome: Model prediction and patient data Journal Article

In: Vision Research, vol. 48, no. 12, pp. 1409–1419, 2008.

Abstract | Links | BibTeX

@article{Wang2008,
title = {Tenotomy procedure alleviates the "slow to see" phenomenon in infantile nystagmus syndrome: Model prediction and patient data},
author = {Z. I. Wang and Louis F. Dell'Osso},
doi = {10.1016/j.visres.2008.03.007},
year = {2008},
date = {2008-01-01},
journal = {Vision Research},
volume = {48},
number = {12},
pages = {1409--1419},
abstract = {Our purpose was to perform a systematic study of the post-four-muscle-tenotomy procedure changes in target acquisition time by comparing predictions from the behavioral ocular motor system (OMS) model and data from infantile nystagmus syndrome (INS) patients. We studied five INS patients who underwent only tenotomy at the enthesis and reattachment at the original insertion of each (previously unoperated) horizontal rectus muscle for their INS treatment. We measured their pre- and post-tenotomy target acquisition changes using data from infrared reflection and high-speed digital video. Three key aspects were calculated and analyzed: the saccadic latency (Ls), the time to target acquisition after the target jump (Lt) and the normalized stimulus time within the cycle. Analyses were performed in MATLAB environment (The MathWorks, Natick, MA) using OMLAB software (OMtools, available from http://www.omlab.org). Model simulations were performed in MATLAB Simulink environment. The model simulation suggested an Lt reduction due to an overall foveation-quality improvement. Consistent with that prediction, improvement in Lt, ranging from ∼200 ms to ∼500 ms (average ∼ 280 ms), was documented in all five patients post-tenotomy. The Lt improvement was not a result of a reduced Ls. INS patients acquired step-target stimuli faster post-tenotomy. This target acquisition improvement may be due to the elevated foveation quality resulting in less inherent variation in the input to the OMS. A refined behavioral OMS model, with "fast" and "slow" motor neuron pathways and a more physiological plant, successfully predicted this improved visual behavior and again demonstrated its utility in guiding ocular motor research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Our purpose was to perform a systematic study of the post-four-muscle-tenotomy procedure changes in target acquisition time by comparing predictions from the behavioral ocular motor system (OMS) model and data from infantile nystagmus syndrome (INS) patients. We studied five INS patients who underwent only tenotomy at the enthesis and reattachment at the original insertion of each (previously unoperated) horizontal rectus muscle for their INS treatment. We measured their pre- and post-tenotomy target acquisition changes using data from infrared reflection and high-speed digital video. Three key aspects were calculated and analyzed: the saccadic latency (Ls), the time to target acquisition after the target jump (Lt) and the normalized stimulus time within the cycle. Analyses were performed in MATLAB environment (The MathWorks, Natick, MA) using OMLAB software (OMtools, available from http://www.omlab.org). Model simulations were performed in MATLAB Simulink environment. The model simulation suggested an Lt reduction due to an overall foveation-quality improvement. Consistent with that prediction, improvement in Lt, ranging from ∼200 ms to ∼500 ms (average ∼ 280 ms), was documented in all five patients post-tenotomy. The Lt improvement was not a result of a reduced Ls. INS patients acquired step-target stimuli faster post-tenotomy. This target acquisition improvement may be due to the elevated foveation quality resulting in less inherent variation in the input to the OMS. A refined behavioral OMS model, with "fast" and "slow" motor neuron pathways and a more physiological plant, successfully predicted this improved visual behavior and again demonstrated its utility in guiding ocular motor research.


  • doi:10.1016/j.visres.2008.03.007
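
The latencies Ls and Lt in the abstract above are intervals measured from the target step to, respectively, the first saccade and the moment gaze first reaches the new target position. The authors worked in the MATLAB/OMtools environment; the Python sketch below is only a minimal illustration of how such intervals can be read off a sampled eye-position trace, and the velocity threshold, acquisition window, and variable names are assumptions for illustration.

import numpy as np

def saccade_latency(eye_pos, fs, step_idx, vel_thresh=30.0):
    """Latency (ms) from a target step to the first saccade.

    eye_pos    : 1-D array of horizontal eye position (deg).
    fs         : sampling rate (Hz).
    step_idx   : sample index at which the target jumped.
    vel_thresh : velocity criterion for saccade detection (deg/s).
    """
    vel = np.gradient(eye_pos) * fs               # eye velocity in deg/s
    above = np.abs(vel[step_idx:]) > vel_thresh   # samples exceeding threshold
    onsets = np.flatnonzero(above)
    if onsets.size == 0:
        return None                               # no saccade detected
    return onsets[0] / fs * 1000.0                # ms after the step

def acquisition_time(eye_pos, fs, step_idx, target, window_deg=1.0):
    """Time (ms) from the target step until gaze first comes within
    `window_deg` of the new target position."""
    err = np.abs(eye_pos[step_idx:] - target)
    inside = np.flatnonzero(err < window_deg)
    return None if inside.size == 0 else inside[0] / fs * 1000.0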


Tessa Warren; Kerry McConnell; Keith Rayner

Effects of context on eye movements when reading about possible and impossible events Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 1001–1010, 2008.

Abstract | Links | BibTeX

@article{Warren2008,
title = {Effects of context on eye movements when reading about possible and impossible events},
author = {Tessa Warren and Kerry McConnell and Keith Rayner},
doi = {10.1037/0278-7393.34.4.1001},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {34},
number = {4},
pages = {1001--1010},
abstract = {Plausibility violations resulting in impossible scenarios lead to earlier and longer lasting eye movement disruption than violations resulting in highly unlikely scenarios (K. Rayner, T. Warren, B. J. Juhasz, & S. P. Liversedge, 2004; T. Warren & K. McConnell, 2007). This could reflect either differences in the timing of availability of different kinds of information (e.g., selectional restrictions, world knowledge, and context) or differences in their relative power to guide semantic interpretation. The authors investigated eye movements to possible and impossible events in real-world and fantasy contexts to determine when contextual information influences detection of impossibility cued by a semantic mismatch between a verb and an argument. Gaze durations on a target word were longer to impossible events independent of context. However, a measure of the time elapsed from first fixating the target word to moving past it showed disruption only in the real-world context. These results suggest that contextual information did not eliminate initial disruption but moderated it quickly thereafter.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Plausibility violations resulting in impossible scenarios lead to earlier and longer lasting eye movement disruption than violations resulting in highly unlikely scenarios (K. Rayner, T. Warren, B. J. Juhasz, & S. P. Liversedge, 2004; T. Warren & K. McConnell, 2007). This could reflect either differences in the timing of availability of different kinds of information (e.g., selectional restrictions, world knowledge, and context) or differences in their relative power to guide semantic interpretation. The authors investigated eye movements to possible and impossible events in real-world and fantasy contexts to determine when contextual information influences detection of impossibility cued by a semantic mismatch between a verb and an argument. Gaze durations on a target word were longer to impossible events independent of context. However, a measure of the time elapsed from first fixating the target word to moving past it showed disruption only in the real-world context. These results suggest that contextual information did not eliminate initial disruption but moderated it quickly thereafter.


  • doi:10.1037/0278-7393.34.4.1001


Geoffrey Underwood; Emma Templeman; Laura Lamming; Tom Foulsham

Is attention necessary for object identification? Evidence from eye movements during the inspection of real-world scenes Journal Article

In: Consciousness and Cognition, vol. 17, no. 1, pp. 159–170, 2008.

Abstract | Links | BibTeX

@article{Underwood2008,
title = {Is attention necessary for object identification? Evidence from eye movements during the inspection of real-world scenes},
author = {Geoffrey Underwood and Emma Templeman and Laura Lamming and Tom Foulsham},
doi = {10.1016/j.concog.2006.11.008},
year = {2008},
date = {2008-01-01},
journal = {Consciousness and Cognition},
volume = {17},
number = {1},
pages = {159--170},
abstract = {Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.


  • doi:10.1016/j.concog.2006.11.008


Seppo Vainio; Jukka Hyönä; Anneli Pajunen

Processing modifier-head agreement in reading: Evidence for a delayed effect of agreement Journal Article

In: Memory and Cognition, vol. 36, no. 2, pp. 329–340, 2008.

Abstract | Links | BibTeX

@article{Vainio2008,
title = {Processing modifier-head agreement in reading: Evidence for a delayed effect of agreement},
author = {Seppo Vainio and Jukka Hyönä and Anneli Pajunen},
doi = {10.3758/MC.36.2.329},
year = {2008},
date = {2008-01-01},
journal = {Memory and Cognition},
volume = {36},
number = {2},
pages = {329--340},
abstract = {The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access.


  • doi:10.3758/MC.36.2.329


Matteo Valsecchi; Sven Saage; Brian J. White; Karl R. Gegenfurtner

Advantage in reading lexical bundles is reduced in non-native speakers Journal Article

In: Journal of Eye Movement Research, vol. 6, no. 5:2, pp. 1–15, 2008.

Abstract | Links | BibTeX

@article{Valsecchi2008,
title = {Advantage in reading lexical bundles is reduced in non-native speakers},
author = {Matteo Valsecchi and Sven Saage and Brian J. White and Karl R. Gegenfurtner},
doi = {10.16910/jemr.6.5.2},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {6},
number = {5:2},
pages = {1--15},
abstract = {Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers using corpus analysis for the identification of lexical bundles and eye-tracking to measure the reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers which suggests that native-like processing of lexical bundles comes only late in the acquisition process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers using corpus analysis for the identification of lexical bundles and eye-tracking to measure the reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers which suggests that native-like processing of lexical bundles comes only late in the acquisition process.


  • doi:10.16910/jemr.6.5.2
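
The lexical bundles in the study above were identified from corpus frequency counts of four-word sequences (4-grams). A toy sketch of that counting step is given below; the miniature corpus and the idea of applying a frequency cutoff are illustrative assumptions, not the authors' materials or software.

from collections import Counter

def four_gram_counts(sentences):
    """Count 4-grams (four-word sequences) over a tokenised corpus.

    sentences : iterable of lists of lower-cased word tokens.
    """
    counts = Counter()
    for tokens in sentences:
        for i in range(len(tokens) - 3):
            counts[tuple(tokens[i:i + 4])] += 1
    return counts

# Toy corpus; a real study would use millions of words and apply a
# frequency cutoff before calling a 4-gram a "lexical bundle".
corpus = [
    "as a result of the change".split(),
    "as a result of this work".split(),
]
counts = four_gram_counts(corpus)
print(counts[("as", "a", "result", "of")])   # prints 2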


Ronald Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink

Perceptual dependencies in information visualization assessed by complex visual search Journal Article

In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008.

Abstract | Links | BibTeX

@article{Berg2008,
title = {Perceptual dependencies in information visualization assessed by complex visual search},
author = {Ronald Berg and Frans W. Cornelissen and Jos B. T. M. Roerdink},
doi = {10.1145/1278760.1278763},
year = {2008},
date = {2008-01-01},
journal = {ACM Transactions on Applied Perception},
volume = {4},
number = {4},
pages = {1--21},
abstract = {A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks.


  • doi:10.1145/1278760.1278763


Menno Van Der Schoot; Alain L. Vasbinder; Tako M. Horsley; Ernest C. D. M. Van Lieshout

The role of two reading strategies in text comprehension: An eye fixation study in primary school children Journal Article

In: Journal of Research in Reading, vol. 31, no. 2, pp. 203–223, 2008.

Abstract | Links | BibTeX

@article{VanDerSchoot2008,
title = {The role of two reading strategies in text comprehension: An eye fixation study in primary school children},
author = {Menno Van Der Schoot and Alain L. Vasbinder and Tako M. Horsley and Ernest C. D. M. Van Lieshout},
doi = {10.1111/j.1467-9817.2007.00354.x},
year = {2008},
date = {2008-01-01},
journal = {Journal of Research in Reading},
volume = {31},
number = {2},
pages = {203--223},
abstract = {This study examined whether 10–12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the question to what extent use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


This study examined whether 10–12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the question to what extent use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected.


  • doi:10.1111/j.1467-9817.2007.00354.x


Stefan Van der Stigchel; Jan Theeuwes

Differences in distractor-induced deviation between horizontal and vertical saccade trajectories Journal Article

In: NeuroReport, vol. 19, no. 2, pp. 251–254, 2008.

Abstract | Links | BibTeX

@article{VanderStigchel2008,
title = {Differences in distractor-induced deviation between horizontal and vertical saccade trajectories},
author = {Stefan Van der Stigchel and Jan Theeuwes},
doi = {10.1097/WNR.0b013e3282f49b3f},
year = {2008},
date = {2008-01-01},
journal = {NeuroReport},
volume = {19},
number = {2},
pages = {251--254},
abstract = {The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements which suggests that for these movements more inhibition is applied than for horizontal movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements which suggests that for these movements more inhibition is applied than for horizontal movements.


  • doi:10.1097/WNR.0b013e3282f49b3f


Stefan Van Der Stigchel; Wieske Van Zoest; Jan Theeuwes; Jason J. S. Barton

The influence of "blind" distractors on eye movement trajectories in visual hemifield defects Journal Article

In: Journal of Cognitive Neuroscience, vol. 20, no. 11, pp. 2025–2036, 2008.

Abstract | Links | BibTeX

@article{VanDerStigchel2008b,
title = {The influence of "blind" distractors on eye movement trajectories in visual hemifield defects},
author = {Stefan Van Der Stigchel and Wieske Van Zoest and Jan Theeuwes and Jason J. S. Barton},
doi = {10.1162/jocn.2008.20145},
year = {2008},
date = {2008-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {20},
number = {11},
pages = {2025--2036},
abstract = {There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient.


  • doi:10.1162/jocn.2008.20145


Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde

Corner salience varies linearly with corner angle during flicker-augmented contrast: A general principle of corner perception based on Vasarely's artworks Journal Article

In: Spatial Vision, vol. 22, pp. 335–348, 2008.

Abstract | BibTeX

@article{Troncoso2008a,
title = {Corner salience varies linearly with corner angle during flicker-augmented contrast: A general principle of corner perception based on Vasarely's artworks},
author = {Xoana G. Troncoso and Stephen L. Macknik and Susana Martinez-Conde},
year = {2008},
date = {2008-01-01},
journal = {Spatial Vision},
volume = {22},
pages = {335--348},
abstract = {When corners are embedded in a luminance gradient, their perceived salience varies linearly with corner angle (Troncoso et al., 2005). Here we hypothesize that this relationship may hold true for all corners, not just corner gradients. To test this hypothesis, we developed a novel variant of the flicker-augmented contrast illusion (Anstis and Ho, 1998) that employs solid (non-gradient) corners of varying angles to modify perceived brightness. We flickered solid corners from dark to light grey (50% luminance over time) against a black or a white background. With this new stimulus, subjects compared the apparent brightness of corners, which did not vary in actual luminance, to non-illusory stimuli that varied in actual luminance. We found that the apparent brightness of corners was linearly related to the sharpness of corner angle. Thus this relationship is not solely an effect of corners embedded in gradients, but may be a general principle of corner perception. These findings may have important repercussions for brain mechanisms underlying the early visual processing of shape and brightness. A large fraction of Vasarely's art showcases the perceptual salience of corners, curvature and terminators. Several of these artworks and their implications for visual processing are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


When corners are embedded in a luminance gradient, their perceived salience varies linearly with corner angle (Troncoso et al., 2005). Here we hypothesize that this relationship may hold true for all corners, not just corner gradients. To test this hypothesis, we developed a novel variant of the flicker-augmented contrast illusion (Anstis and Ho, 1998) that employs solid (non-gradient) corners of varying angles to modify perceived brightness. We flickered solid corners from dark to light grey (50% luminance over time) against a black or a white background. With this new stimulus, subjects compared the apparent brightness of corners, which did not vary in actual luminance, to non-illusory stimuli that varied in actual luminance. We found that the apparent brightness of corners was linearly related to the sharpness of corner angle. Thus this relationship is not solely an effect of corners embedded in gradients, but may be a general principle of corner perception. These findings may have important repercussions for brain mechanisms underlying the early visual processing of shape and brightness. A large fraction of Vasarely's art showcases the perceptual salience of corners, curvature and terminators. Several of these artworks and their implications for visual processing are discussed.


Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde

Microsaccades counteract perceptual filling-in Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008.

Abstract | BibTeX

@article{Troncoso2008,
title = {Microsaccades counteract perceptual filling-in},
author = {Xoana G. Troncoso and Stephen L. Macknik and Susana Martinez-Conde},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--9},
abstract = {Artificial scotomas positioned within peripheral dynamic noise fade perceptually during visual fixation (that is, the surrounding dynamic noise appears to fill-in the scotoma). Because the scotomas' edges are continuously refreshed by the dynamic noise background, this filling-in effect cannot be explained by low-level adaptation mechanisms (such as those that may underlie classical Troxler fading). We recently showed that microsaccades counteract Troxler fading and drive first-order visibility during fixation (S. Martinez-Conde, S. L. Macknik, X. G. Troncoso, & T. A. Dyar, 2006). Here we set out to determine whether microsaccades may counteract the perceptual filling-in of artificial scotomas and thus drive second-order visibility. If so, microsaccades may not only counteract low-level adaptation but also play a role in higher perceptual processes. We asked subjects to indicate, via button press/release, whether an artificial scotoma presented on a dynamic noise background was visible or invisible at any given time. The subjects' eye movements were simultaneously measured with a high precision video system. We found that increases in microsaccade production counteracted the perception of filling-in, driving the visibility of the artificial scotoma. Conversely, decreased microsaccades allowed perceptual filling-in to take place. Our results show that microsaccades do not solely overcome low-level adaptation mechanisms but they also contribute to maintaining second-order visibility during fixation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Artificial scotomas positioned within peripheral dynamic noise fade perceptually during visual fixation (that is, the surrounding dynamic noise appears to fill-in the scotoma). Because the scotomas' edges are continuously refreshed by the dynamic noise background, this filling-in effect cannot be explained by low-level adaptation mechanisms (such as those that may underlie classical Troxler fading). We recently showed that microsaccades counteract Troxler fading and drive first-order visibility during fixation (S. Martinez-Conde, S. L. Macknik, X. G. Troncoso, & T. A. Dyar, 2006). Here we set out to determine whether microsaccades may counteract the perceptual filling-in of artificial scotomas and thus drive second-order visibility. If so, microsaccades may not only counteract low-level adaptation but also play a role in higher perceptual processes. We asked subjects to indicate, via button press/release, whether an artificial scotoma presented on a dynamic noise background was visible or invisible at any given time. The subjects' eye movements were simultaneously measured with a high precision video system. We found that increases in microsaccade production counteracted the perception of filling-in, driving the visibility of the artificial scotoma. Conversely, decreased microsaccades allowed perceptual filling-in to take place. Our results show that microsaccades do not solely overcome low-level adaptation mechanisms but they also contribute to maintaining second-order visibility during fixation.


Xoana G. Troncoso; Stephen L. Macknik; Jorge Otero-Millan; Susana Martinez-Conde

Microsaccades drive illusory motion in the Enigma illusion Journal Article

In: Proceedings of the National Academy of Sciences, vol. 105, no. 41, pp. 16033–16038, 2008.

Abstract | Links | BibTeX

@article{Troncoso2008b,
title = {Microsaccades drive illusory motion in the Enigma illusion},
author = {Xoana G. Troncoso and Stephen L. Macknik and Jorge Otero-Millan and Susana Martinez-Conde},
doi = {10.1073/pnas.0709389105},
year = {2008},
date = {2008-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {105},
number = {41},
pages = {16033--16038},
abstract = {Visual images consisting of repetitive patterns can elicit striking illusory motion percepts. For almost 200 years, artists, psychologists, and neuroscientists have debated whether this type of illusion originates in the eye or in the brain. For more than a decade, the controversy has centered on the powerful illusory motion perceived in the painting Enigma, created by op-artist Isia Leviant. However, no previous study has directly correlated the Enigma illusion to any specific physiological mechanism, and so the debate rages on. Here, we show that microsaccades, a type of miniature eye movement produced during visual fixation, can drive illusory motion in Enigma. We asked subjects to indicate when illusory motion sped up or slowed down during the observation of Enigma while we simultaneously recorded their eye movements with high precision. Before "faster" motion periods, the rate of microsaccades increased. Before "slower/no" motion periods, the rate of microsaccades decreased. These results reveal a direct link between microsaccade production and the perception of illusory motion in Enigma and rule out the hypothesis that the origin of the illusion is purely cortical.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Visual images consisting of repetitive patterns can elicit striking illusory motion percepts. For almost 200 years, artists, psychologists, and neuroscientists have debated whether this type of illusion originates in the eye or in the brain. For more than a decade, the controversy has centered on the powerful illusory motion perceived in the painting Enigma, created by op-artist Isia Leviant. However, no previous study has directly correlated the Enigma illusion to any specific physiological mechanism, and so the debate rages on. Here, we show that microsaccades, a type of miniature eye movement produced during visual fixation, can drive illusory motion in Enigma. We asked subjects to indicate when illusory motion sped up or slowed down during the observation of Enigma while we simultaneously recorded their eye movements with high precision. Before "faster" motion periods, the rate of microsaccades increased. Before "slower/no" motion periods, the rate of microsaccades decreased. These results reveal a direct link between microsaccade production and the perception of illusory motion in Enigma and rule out the hypothesis that the origin of the illusion is purely cortical.


  • doi:10.1073/pnas.0709389105
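
The key analysis above relates the rate of microsaccades in a window preceding each perceptual report to the reported change in illusory motion. The sketch below shows a generic event-locked rate computation of that kind; the half-second window and the toy numbers are assumptions for illustration, not the published analysis.

import numpy as np

def rate_before_events(microsaccade_times, event_times, window=0.5):
    """Mean microsaccade rate (per second) in the `window` seconds
    preceding each perceptual report.

    microsaccade_times : sorted array of microsaccade onset times (s).
    event_times        : times of button presses / releases (s).
    """
    ms = np.asarray(microsaccade_times)
    rates = []
    for t in event_times:
        n = np.count_nonzero((ms >= t - window) & (ms < t))
        rates.append(n / window)
    return np.mean(rates) if rates else float("nan")

# Example: compare rates before "faster" vs "slower" motion reports.
ms_times = [0.1, 0.35, 0.42, 1.8, 2.9, 3.1]
faster_reports = [0.5, 3.2]
slower_reports = [2.0]
print(rate_before_events(ms_times, faster_reports))  # higher rate
print(rate_before_events(ms_times, slower_reports))  # lower rate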


Yuan-Chi Tseng; Chiang-Shan Ray Li

The effects of response readiness and error monitoring on saccade countermanding Journal Article

In: The Open Psychology Journal, vol. 1, no. 1, pp. 18–25, 2008.

Abstract | Links | BibTeX

@article{Tseng2008,
title = {The effects of response readiness and error monitoring on saccade countermanding},
author = {Yuan-Chi Tseng and Chiang-Shan Ray Li},
doi = {10.2174/1874350100801010018},
year = {2008},
date = {2008-01-01},
journal = {The Open Psychology Journal},
volume = {1},
number = {1},
pages = {18--25},
abstract = {The stop-signal task (SST) and anti-saccade tasks are both widely used to explore cognitive inhibitory control. Our previous work on a manual SST showed that subjects' readiness to respond to the go signal and the extent to which subjects monitor their errors need to be considered in order to attribute impaired performance to deficits in response inhibition. Here we examine whether these same task-related variables similarly influence oculomotor SST and anti-saccade performance. Thirty-six and sixty healthy, adult subjects participated in an oculomotor SST and anti-saccade task, respectively, in which the fore-period (FP) of the imperative stimulus varied randomly from trial to trial. We computed an FP effect to index response readiness to the imperative stimulus and a post-error slowing (PES) effect to index error monitoring. Contrary to what we had anticipated, other than a weak but negative association between the FP effect and anti-saccade errors, these behavioral variables did not correlate with SST or anti-saccade performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


The stop-signal task (SST) and anti-saccade tasks are both widely used to explore cognitive inhibitory control. Our previous work on a manual SST showed that subjects' readiness to respond to the go signal and the extent to which subjects monitor their errors need to be considered in order to attribute impaired performance to deficits in response inhibition. Here we examine whether these same task-related variables similarly influence oculomotor SST and anti-saccade performance. Thirty-six and sixty healthy, adult subjects participated in an oculomotor SST and anti-saccade task, respectively, in which the fore-period (FP) of the imperative stimulus varied randomly from trial to trial. We computed an FP effect to index response readiness to the imperative stimulus and a post-error slowing (PES) effect to index error monitoring. Contrary to what we had anticipated, other than a weak but negative association between the FP effect and anti-saccade errors, these behavioral variables did not correlate with SST or anti-saccade performance.


  • doi:10.2174/1874350100801010018
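
The fore-period (FP) effect and post-error slowing (PES) indices above are simple contrasts of latencies across trial types. The sketch below shows one straightforward way to compute them from trial-level data; the median split on fore-period and the column layout are assumptions for illustration rather than the authors' exact definitions.

import numpy as np

def fp_effect(latencies, foreperiods):
    """Fore-period effect: mean latency after short fore-periods minus
    mean latency after long fore-periods (median split)."""
    lat = np.asarray(latencies, dtype=float)
    fp = np.asarray(foreperiods, dtype=float)
    median_fp = np.median(fp)
    return lat[fp <= median_fp].mean() - lat[fp > median_fp].mean()

def post_error_slowing(latencies, correct):
    """Post-error slowing: mean latency on trials following an error
    minus mean latency on trials following a correct response."""
    lat = np.asarray(latencies, dtype=float)
    corr = np.asarray(correct, dtype=bool)
    post_error = lat[1:][~corr[:-1]]     # trials preceded by an error
    post_correct = lat[1:][corr[:-1]]    # trials preceded by a correct trial
    return post_error.mean() - post_correct.mean()

# Toy data: four trials, an error on trial 2 slows trial 3.
lat = [250, 260, 310, 255]
correct = [True, False, True, True]
print(post_error_slowing(lat, correct))   # positive value = slowing after errors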


Brian Sullivan; Jelena Jovancevic-Misic; Mary Hayhoe; Gwen Sterns

Use of multiple preferred retinal loci in Stargardt's disease during natural tasks: A case study Journal Article

In: Ophthalmic and Physiological Optics, vol. 28, no. 2, pp. 168–177, 2008.

Abstract | Links | BibTeX

@article{Sullivan2008,
title = {Use of multiple preferred retinal loci in Stargardt's disease during natural tasks: A case study},
author = {Brian Sullivan and Jelena Jovancevic-Misic and Mary Hayhoe and Gwen Sterns},
doi = {10.1111/j.1475-1313.2008.00546.x},
year = {2008},
date = {2008-01-01},
journal = {Ophthalmic and Physiological Optics},
volume = {28},
number = {2},
pages = {168--177},
abstract = {Individuals with central visual field loss often use a preferred retinal locus (PRL) to compensate for their deficit. We present a case study examining the eye movements of a subject with Stargardt's disease causing bilateral central scotomas, while performing a set of natural tasks including: making a sandwich; building a model; reaching and grasping; and catching a ball. In general, the subject preferred to use PRLs in the lower left visual field. However, there was considerable variation in the location and extent of the PRLs used. Our results demonstrate that a well-defined PRL is not necessary to adequately perform this set of tasks and that many sites in the peripheral retina may be viable for PRLs, contingent on task and stimulus constraints.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Individuals with central visual field loss often use a preferred retinal locus (PRL) to compensate for their deficit. We present a case study examining the eye movements of a subject with Stargardt's disease causing bilateral central scotomas, while performing a set of natural tasks including: making a sandwich; building a model; reaching and grasping; and catching a ball. In general, the subject preferred to use PRLs in the lower left visual field. However, there was considerable variation in the location and extent of the PRLs used. Our results demonstrate that a well-defined PRL is not necessary to adequately perform this set of tasks and that many sites in the peripheral retina may be viable for PRLs, contingent on task and stimulus constraints.


  • doi:10.1111/j.1475-1313.2008.00546.x


Joshua M. Susskind; Daniel H. Lee; Andrée Cusi; Roman Feiman; Wojtek Grabski; Adam K. Anderson

Expressing fear enhances sensory acquisition Journal Article

In: Nature Neuroscience, vol. 11, no. 7, pp. 843–850, 2008.

Abstract | Links | BibTeX

@article{Susskind2008,
title = {Expressing fear enhances sensory acquisition},
author = {Joshua M. Susskind and Daniel H. Lee and Andrée Cusi and Roman Feiman and Wojtek Grabski and Adam K. Anderson},
doi = {10.1038/nn.2138},
year = {2008},
date = {2008-01-01},
journal = {Nature Neuroscience},
volume = {11},
number = {7},
pages = {843--850},
abstract = {It has been proposed that facial expression production originates in sensory regulation. Here we demonstrate that facial expressions of fear are configured to enhance sensory acquisition. A statistical model of expression appearance revealed that fear and disgust expressions have opposite shape and surface reflectance features. We hypothesized that this reflects a fundamental antagonism serving to augment versus diminish sensory exposure. In keeping with this hypothesis, when subjects posed expressions of fear, they had a subjectively larger visual field, faster eye movements during target localization and an increase in nasal volume and air velocity during inspiration. The opposite pattern was found for disgust. Fear may therefore work to enhance perception, whereas disgust dampens it. These convergent results provide support for the Darwinian hypothesis that facial expressions are not arbitrary configurations for social communication, but rather, expressions may have originated in altering the sensory interface with the physical world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


It has been proposed that facial expression production originates in sensory regulation. Here we demonstrate that facial expressions of fear are configured to enhance sensory acquisition. A statistical model of expression appearance revealed that fear and disgust expressions have opposite shape and surface reflectance features. We hypothesized that this reflects a fundamental antagonism serving to augment versus diminish sensory exposure. In keeping with this hypothesis, when subjects posed expressions of fear, they had a subjectively larger visual field, faster eye movements during target localization and an increase in nasal volume and air velocity during inspiration. The opposite pattern was found for disgust. Fear may therefore work to enhance perception, whereas disgust dampens it. These convergent results provide support for the Darwinian hypothesis that facial expressions are not arbitrary configurations for social communication, but rather, expressions may have originated in altering the sensory interface with the physical world.


  • doi:10.1038/nn.2138


Giovanni Taibbi; Zhong I. Wang; Louis F. Dell'Osso

Infantile nystagmus syndrome: Broadening the high-foveation-quality field with contact lenses Journal Article

In: Ophthalmology, vol. 2, no. 3, pp. 585–589, 2008.

Abstract | BibTeX

@article{Taibbi2008,
title = {Infantile nystagmus syndrome: Broadening the high-foveation-quality field with contact lenses},
author = {Giovanni Taibbi and Zhong I. Wang and Louis F. Dell'Osso},
year = {2008},
date = {2008-01-01},
journal = {Ophthalmology},
volume = {2},
number = {3},
pages = {585--589},
abstract = {We investigated the effects of contact lenses in broadening and improving the high-foveation-quality field in a subject with infantile nystagmus syndrome (INS). A high-speed, digitized video system was used for the eye-movement recording. The subject was asked to fixate a far target at different horizontal gaze angles with contact lenses inserted. Data from the subject while fixating at far without refractive correction and at near (at a convergence angle of 60 PD), were used for comparison. The eXpanded Nystagmus Acuity Function (NAFX) was used to evaluate the foveation quality at each gaze angle. Contact lenses broadened the high-foveation-quality range of gaze angles in this subject. The broadening was comparable to that achieved during 60 PD of convergence although the NAFX values were lower. Contact lenses allowed the subject to see “more” (he had a wider range of high-foveation-quality gaze angles) and “better” (he had improved foveation at each gaze angle). Instead of being contraindicated by INS, contact lenses emerge as a potentially important therapeutic option. Contact lenses employ afferent feedback via the ophthalmic division of the V cranial nerve to damp INS slow phases over a broadened range of gaze angles. This supports the proprioceptive hypothesis of INS improvement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


We investigated the effects of contact lenses in broadening and improving the high-foveation-quality field in a subject with infantile nystagmus syndrome (INS). A high-speed, digitized video system was used for the eye-movement recording. The subject was asked to fixate a far target at different horizontal gaze angles with contact lenses inserted. Data from the subject while fixating at far without refractive correction and at near (at a convergence angle of 60 PD), were used for comparison. The eXpanded Nystagmus Acuity Function (NAFX) was used to evaluate the foveation quality at each gaze angle. Contact lenses broadened the high-foveation-quality range of gaze angles in this subject. The broadening was comparable to that achieved during 60 PD of convergence although the NAFX values were lower. Contact lenses allowed the subject to see “more” (he had a wider range of high-foveation-quality gaze angles) and “better” (he had improved foveation at each gaze angle). Instead of being contraindicated by INS, contact lenses emerge as a potentially important therapeutic option. Contact lenses employ afferent feedback via the ophthalmic division of the V cranial nerve to damp INS slow phases over a broadened range of gaze angles. This supports the proprioceptive hypothesis of INS improvement.


Kohske Takahashi; Katsumi Watanabe

Persisting effect of prior experience of change blindness Journal Article

In: Perception, vol. 37, no. 2, pp. 324–327, 2008.

Abstract | Links | BibTeX

@article{Takahashi2008,
title = {Persisting effect of prior experience of change blindness},
author = {Kohske Takahashi and Katsumi Watanabe},
doi = {10.1068/p5906},
year = {2008},
date = {2008-01-01},
journal = {Perception},
volume = {37},
number = {2},
pages = {324--327},
abstract = {Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so.


  • doi:10.1068/p5906


Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Thomas Eggert; Zoï Kapoula

How the brain obeys Hering's law: A TMS study of the posterior parietal cortex Journal Article

In: Investigative Ophthalmology & Visual Science, vol. 49, no. 1, pp. 230–237, 2008.

Abstract | Links | BibTeX

@article{Vernet2008,
title = {How the brain obeys Hering's law: A TMS study of the posterior parietal cortex},
author = {Marine Vernet and Qing Yang and Gintautas Daunys and Christophe Orssaud and Thomas Eggert and Zoï Kapoula},
doi = {10.1167/iovs.07-0854},
year = {2008},
date = {2008-01-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {49},
number = {1},
pages = {230--237},
abstract = {PURPOSE: Human ocular saccades are not perfectly yoked; the origin of this disconjugacy (muscular versus central) remains controversial. The purpose of this study was to test a cortical influence on the binocular coordination of saccades. METHODS: The authors used a gap paradigm to elicit vertical or horizontal saccades of 10 degrees , randomly interleaved; transcranial magnetic stimulation (TMS) was applied on the posterior parietal cortex (PPC) 100 ms after the target onset. RESULTS: TMS of the left or right PPC increased (i) the misalignment of the eyes during the presaccadic fixation period; (ii) the size difference between the saccades of the eyes, called disconjugacy; the increase of disconjugacy was significant for rightward and downward saccades after TMS of the right PPC and for downward saccades after TMS of the left PPC. CONCLUSIONS: The authors conclude that the PPC is actively involved in maintaining eye alignment during fixation and in the control of binocular coordination of saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/iovs.07-0854

Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula

TMS of the posterior parietal cortex delays the latency of unpredictable saccades but not when they are combined with predictable divergence Journal Article

In: Brain Research Bulletin, vol. 76, no. 1-2, pp. 50–56, 2008.

@article{Vernet2008a,
title = {TMS of the posterior parietal cortex delays the latency of unpredictable saccades but not when they are combined with predictable divergence},
author = {Marine Vernet and Qing Yang and Gintautas Daunys and Christophe Orssaud and Zoï Kapoula},
doi = {10.1016/j.brainresbull.2007.11.007},
year = {2008},
date = {2008-01-01},
journal = {Brain Research Bulletin},
volume = {76},
number = {1-2},
pages = {50--56},
abstract = {This study tests the influence of transcranial magnetic stimulation (TMS) of the posterior parietal cortex (PPC) on the initiation of horizontal and vertical saccades, alone or combined with a predictable divergence. A gap paradigm was used; TMS was applied 100 ms after target onset. TMS of the left PPC increased the latency of unpredictable rightward saccades, while TMS of the right PPC increased the latency of unpredictable downward saccades. Yet, when unpredictable saccades were combined with predictable divergence, neither component was affected. We suggest that in the latter case, the initiation of both components was taken in charge by another area, e.g. frontal. Thus, even when one component was predictable, a common mechanism controls the initiation of both components. The results confirm that TMS only modifies the latency when the cortical area stimulated is involved in the triggering of the eye movement.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.brainresbull.2007.11.007

Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula

Divergence influences triggering of both vertical and horizontal saccades Journal Article

In: Optometry and Vision Science, vol. 85, no. 3, pp. 187–195, 2008.

@article{Vernet2008b,
title = {Divergence influences triggering of both vertical and horizontal saccades},
author = {Marine Vernet and Qing Yang and Gintautas Daunys and Christophe Orssaud and Zoï Kapoula},
doi = {10.1097/OPX.0b013e3181647196},
year = {2008},
date = {2008-01-01},
journal = {Optometry and Vision Science},
volume = {85},
number = {3},
pages = {187--195},
abstract = {Purpose. In real life, divergence is frequently combined with vertical saccades. The purpose of this study was to examine the initiation of vertical and horizontal saccades, pure or combined with divergence. Methods. We used a gap paradigm to elicit vertical or horizontal saccades (10 degrees), pure or combined with a predictable divergence (10 degrees). Eye movements from 12 subjects were recorded with EyeLink II. Results. The major results were (i) when combined with divergence, the latency of horizontal saccades increased but not the latency of vertical saccades; (ii) for both vertical and horizontal saccades, a tight correlation between the latency of saccade and divergence was found; (iii) when the divergence was anticipated, the saccade was delayed. Conclusion. We conclude that the initiation of both components of combined movements is interdependent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1097/OPX.0b013e3181647196

Julius Verrel; Harold Bekkering; Bert Steenbergen

Eye-hand coordination during manual object transport with the affected and less affected hand in adolescents with hemiparetic cerebral palsy Journal Article

In: Experimental Brain Research, vol. 187, no. 1, pp. 107–116, 2008.

@article{Verrel2008,
title = {Eye-hand coordination during manual object transport with the affected and less affected hand in adolescents with hemiparetic cerebral palsy},
author = {Julius Verrel and Harold Bekkering and Bert Steenbergen},
doi = {10.1007/s00221-008-1287-y},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {187},
number = {1},
pages = {107--116},
abstract = {In the present study we investigated eye-hand coordination in adolescents with hemiparetic cerebral palsy (CP) and neurologically healthy controls. Using an object prehension and transport task, we addressed two hypotheses, motivated by the question whether early brain damage and the ensuing limitations of motor activity lead to general and/or effector-specific effects in visuomotor control of manual actions. We hypothesized that individuals with hemiparetic CP would more closely visually monitor actions with their affected hand, compared to both their less affected hand and to control participants without a sensorimotor impairment. A second, more speculative hypothesis was that, in relation to previously established deficits in prospective action control in individuals with hemiparetic CP, gaze patterns might be less anticipatory in general, also during actions performed with the less affected hand. Analysis of the gaze and hand movement data revealed the increased visual monitoring of participants with CP when using their affected hand at the beginning as well as during object transport. In contrast, no general deficit in anticipatory gaze control in the participants with hemiparetic CP could be observed. Collectively, these findings are the first to directly show that individuals with hemiparetic CP adapt eye-hand coordination to the specific constraints of the moving limb, presumably to compensate for sensorimotor deficits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-008-1287-y

Christian Vorstius; Ralph Radach; Alan R. Lang; Christina J. Riccardi

Specific visuomotor deficits due to alcohol intoxication: Evidence from the pro- and antisaccade paradigms Journal Article

In: Psychopharmacology, vol. 196, no. 2, pp. 201–210, 2008.

@article{Vorstius2008,
title = {Specific visuomotor deficits due to alcohol intoxication: Evidence from the pro- and antisaccade paradigms},
author = {Christian Vorstius and Ralph Radach and Alan R. Lang and Christina J. Riccardi},
doi = {10.1007/s00213-007-0954-1},
year = {2008},
date = {2008-01-01},
journal = {Psychopharmacology},
volume = {196},
number = {2},
pages = {201--210},
abstract = {RATIONALE: Alcohol affects a variety of human behaviors, including visual perception and motor control. Although recent research has begun to explore mechanisms that mediate these changes, their exact nature is still not well understood. OBJECTIVES: The present study used two basic oculomotor tasks to examine the effect of alcohol on different levels of visual processing within the same individuals. A theoretical framework is offered to integrate findings across multiple levels of oculomotor control. MATERIALS AND METHODS: Twenty-four healthy participants were asked to perform eye movements in reflexive (pro-) and voluntary (anti-) saccade tasks. In one of two counterbalanced sessions, performance was measured after alcohol administration (mean BrAC=69 mg%); the other served as a within-subjects no-alcohol comparison condition. RESULTS: Error rates were not influenced by alcohol intoxication in either task. However, there were significant effects of alcohol on saccade latency and peak velocity in both tasks. Critically, a specific alcohol-induced impairment (hypermetria) in saccade amplitudes was observed exclusively in the anti-saccade task. CONCLUSIONS: The saccade latency data strongly suggest that alcohol intoxication impairs temporal aspects of saccade generation, irrespective of the level of processing triggering the saccade. The absence of effects on anti-saccade errors calls for further research into the notion of alcohol-induced impairment of the ability to inhibit prepotent responses. Furthermore, the specific impairment of saccade amplitude in the anti-saccade task under alcohol suggests that higher level processes involved in the spatial remapping of target location in the absence of a visually specified saccade goal are specifically affected by alcohol intoxication.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00213-007-0954-1

Robin Walker; Eugene McSorley

The influence of distractors on saccade target selection: Saccade trajectory effects Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 3, pp. 1–13, 2008.

@article{Walker2008,
title = {The influence of distractors on saccade target selection: Saccade trajectory effects},
author = {Robin Walker and Eugene McSorley},
doi = {10.16910/jemr.2.3.7},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {3},
pages = {1--13},
abstract = {It has long been known that the path (trajectory) taken by the eye to land on a target is rarely straight (Yarbus, 1967). Furthermore, the magnitude and direction of this natural tendency for curvature can be modulated by the presence of a competing distractor stimulus presented along with the saccade target. The distractor-related modulation of saccade trajectories provides a subtle measure of the underlying competitive processes involved in saccade target selection. Here we review some of our own studies into the effects distractors have on saccade trajectories, which can be regarded as a way of probing the competitive balance between target and distractor salience.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.16910/jemr.2.3.7

Benjamin W. Tatler; Benjamin T. Vincent

Systematic tendencies in scene viewing Journal Article

In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–18, 2008.

@article{Tatler2008,
title = {Systematic tendencies in scene viewing},
author = {Benjamin W. Tatler and Benjamin T. Vincent},
doi = {10.16910/jemr.2.2.5},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {2},
number = {2},
pages = {1--18},
abstract = {While many current models of scene perception debate the relative roles of low- and high-level factors in eye guidance, systematic tendencies in how the eyes move may be informative. We consider how each saccade and fixation is influenced by that which preceded or followed it, during free inspection of images of natural scenes. We find evidence to suggest periods of localized scanning separated by 'global' relocations to new regions of the scene. We also find evidence to support the existence of small amplitude 'corrective' saccades in natural image viewing. Our data reveal statistical dependencies between successive eye movements, which may be informative in furthering our understanding of eye guidance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.16910/jemr.2.2.5

Benjamin W. Tatler; Nicholas J. Wade; Kathrin Kaulard

Examining art: Dissociating pattern and perceptual influences on oculomotor behaviour Journal Article

In: Spatial Vision, vol. 21, no. 1, pp. 165–184, 2008.

@article{Tatler2008a,
title = {Examining art: Dissociating pattern and perceptual influences on oculomotor behaviour},
author = {Benjamin W. Tatler and Nicholas J. Wade and Kathrin Kaulard},
year = {2008},
date = {2008-01-01},
journal = {Spatial Vision},
volume = {21},
number = {1},
pages = {165--184},
abstract = {When observing art the viewer's understanding results from the interplay between the marks made on the surface by the artist and the viewer's perception and knowledge of it. Here we use a novel set of stimuli to dissociate the influences of the marks on the surface and the viewer's perceptual experience upon the manner in which the viewer inspects art. Our stimuli provide the opportunity to study situations in which (1) the same visual stimulus can give rise to two different perceptual experiences in the viewer, and (2) the visual stimuli differ but give rise to the same perceptual experience in the viewer. We find that oculomotor behaviour changes when the perceptual experience changes. Oculomotor behaviour also differs when the viewer's perceptual experience is the same but the visual stimulus is different. The methodology used and insights gained from this study offer a first step toward an experimental exploration of the relative influences of the artist's creation and viewer's perception when viewing art and also toward a better understanding of the principles of composition in portraiture.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

T. Teichert; Steffen Klingenhoefer; T. Wachtler; Frank Bremmer

Depth perception during saccades Journal Article

In: Journal of Vision, vol. 8, no. 14, pp. 1–13, 2008.

@article{Teichert2008,
title = {Depth perception during saccades},
author = {T. Teichert and Steffen Klingenhoefer and T. Wachtler and Frank Bremmer},
doi = {10.1167/8.14.27},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {14},
pages = {1--13},
abstract = {A number of studies have investigated the localization of briefly flashed targets during saccades to understand how the brain perceptually compensates for changes in gaze direction. Typical version saccades, i.e., saccades between two points of the horopter, are not only associated with changes in gaze direction, but also with large transient changes of ocular vergence. These transient changes in vergence have to be compensated for just as changes in gaze direction. We investigated depth judgments of perisaccadically flashed stimuli relative to continuously present references and report several novel findings. First, disparity thresholds increased around saccade onset. Second, for horizontal saccades, depth judgments were prone to systematic errors: Stimuli flashed around saccade onset were perceived in a closer depth plane than persistently shown references with the same retinal disparity. Briefly before and after this period, flashed stimuli tended to be perceived in a farther depth plane. Third, depth judgments for upward and downward saccades differed substantially: For upward, but not for downward saccades we observed the same pattern of mislocalization as for horizontal saccades. Finally, unlike localization in the fronto-parallel plane, depth judgments did not critically depend on the presence of visual references. Current models fail to account for the observed pattern of mislocalization in depth.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/8.14.27

Masahiko Terao; Junji Watanabe; Akihiro Yagi; Shin'ya Nishida

Reduction of stimulus visibility compresses apparent time intervals Journal Article

In: Nature Neuroscience, vol. 11, no. 5, pp. 541–542, 2008.

@article{Terao2008,
title = {Reduction of stimulus visibility compresses apparent time intervals},
author = {Masahiko Terao and Junji Watanabe and Akihiro Yagi and Shin'ya Nishida},
doi = {10.1038/nn.2111},
year = {2008},
date = {2008-01-01},
journal = {Nature Neuroscience},
volume = {11},
number = {5},
pages = {541--542},
abstract = {The neural mechanisms underlying visual estimation of subsecond durations remain unknown, but perisaccadic underestimation of interflash intervals may provide a clue as to the nature of these mechanisms. Here we found that simply reducing the flash visibility, particularly the visibility of transient signals, induced similar time underestimation by human observers. Our results suggest that weak transient responses fail to trigger the proper detection of temporal asynchrony, leading to increased perception of simultaneity and apparent time compression.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/nn.2111

Marco Thiel; M. Carmen Romano; Jürgen Kurths; Martin Rolfs; Reinhold Kliegl

Generating surrogates from recurrences Journal Article

In: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 366, pp. 545–557, 2008.

@article{Thiel2008,
title = {Generating surrogates from recurrences},
author = {Marco Thiel and M. Carmen Romano and Jürgen Kurths and Martin Rolfs and Reinhold Kliegl},
doi = {10.1098/rsta.2007.2109},
year = {2008},
date = {2008-01-01},
journal = {Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences},
volume = {366},
pages = {545--557},
abstract = {In this paper, we present an approach to recover the dynamics from recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as the linear-like surrogates, this technique produces surrogates which correspond to an independent copy of the underlying system, i.e. they induce a trajectory of the underlying system visiting the attractor in a different way. We show that these surrogates are well suited to test for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result indicates that there might be only one centre in the brain that produces the fixational movements in both eyes or a close link between the two centres.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1098/rsta.2007.2109

P. D. Thiem; Jessica A. Hill; K. -M. Lee; Edward L. Keller

Behavioral properties of saccades generated as a choice response Journal Article

In: Experimental Brain Research, vol. 186, no. 3, pp. 355–364, 2008.

@article{Thiem2008,
title = {Behavioral properties of saccades generated as a choice response},
author = {P. D. Thiem and Jessica A. Hill and K. -M. Lee and Edward L. Keller},
doi = {10.1007/s00221-007-1239-y},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {186},
number = {3},
pages = {355--364},
abstract = {The behavior characterizing choice response decision-making was studied in monkeys to provide background information for ongoing neurophysiological studies of the neural mechanisms underlying saccadic choice decisions. Animals were trained to associate a specific color from a set of colored visual stimuli with a specific spatial location. The visual stimuli (colored disks) appeared briefly at equal eccentricity from a central fixation position and then were masked by gray disks. The correct target association was subsequently cued by the appearance of a colored stimulus at the fixation point. The animal indicated its choice by saccading to the remembered location of the eccentric stimulus, which had matched the color of the cue. The number of alternative associations (NA) varied from 1 to 4 and remained fixed within a block of trials. After the training period, performance (percent correct responses) declined modestly as NA increased (on average 96, 93 or 84% correct for 1, 2 or 4 NA, respectively). Response latency increased logarithmically as a function of NA, thus obeying Hick's law. The spatial extent of the learned association between color and location was investigated by rotating the array of colored stimuli that had remained fixed during the learning phase to various different angles. Error rates in choice saccades increased gradually as a function of the amount of rotation. The learned association biased the direction of the saccadic response toward the quadrant associated with the cue, but saccade direction was always toward one of the actual visual stimuli. This suggests that the learned associations between stimuli and responses were not spatially exact, but instead the association between color and location was distributed with declining strength from the trained locations. These results demonstrate that the saccade system in monkeys also displays the characteristic dependence on NA in choice response latencies, while more basic features of the eye movements are invariant from those in other tasks. The findings also provide behavioral evidence that spatially distributed regions are established for the sensory-to-motor associations during training which are later utilized for choice decisions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-007-1239-y

Shery Thomas; Frank A. Proudlock; Nagini Sarvananthan; Eryl O. Roberts; Musarat Awan; Rebecca J. McLean; Mylvaganam Surendran; A. S. Anil Kumar; Shegufta J. Farooq; Christopher Degg; Richard P. Gale; Robert D. Reinecke; Geoffrey Woodruff; Andrea Langmann; Susanne Lindner; Sunila Jain; Patrick Tarpey; F. Lucy Raymond; Irene Gottlob

Phenotypical characteristics of idiopathic infantile nystagmus with and without mutations in FRMD7 Journal Article

In: Brain, vol. 131, no. 5, pp. 1259–1267, 2008.

@article{Thomas2008,
title = {Phenotypical characteristics of idiopathic infantile nystagmus with and without mutations in FRMD7},
author = {Shery Thomas and Frank A. Proudlock and Nagini Sarvananthan and Eryl O. Roberts and Musarat Awan and Rebecca J. McLean and Mylvaganam Surendran and A. S. Anil Kumar and Shegufta J. Farooq and Christopher Degg and Richard P. Gale and Robert D. Reinecke and Geoffrey Woodruff and Andrea Langmann and Susanne Lindner and Sunila Jain and Patrick Tarpey and F. Lucy Raymond and Irene Gottlob},
doi = {10.1093/brain/awn046},
year = {2008},
date = {2008-01-01},
journal = {Brain},
volume = {131},
number = {5},
pages = {1259--1267},
abstract = {Idiopathic infantile nystagmus (IIN) consists of involuntary oscillations of the eyes. The familial form is most commonly X-linked. We recently found mutations in a novel gene FRMD7 (Xq26.2), which provided an opportunity to investigate a genetically defined and homogeneous group of patients with nystagmus. We compared clinical features and eye movement recordings of 90 subjects with mutation in the gene (FRMD7 group) to 48 subjects without mutations but with clinical IIN (non-FRMD7 group). Fifty-eight female obligate carriers of the mutation were also investigated. The median visual acuity (VA) was 0.2 logMAR (Snellen equivalent 6/9) in both groups and most patients had good stereopsis. The prevalence of strabismus was also similar (FRMD7: 7.8%, non-FRMD7: 10%). The presence of anomalous head posture (AHP) was significantly higher in the non-FRMD7 group (P < 0.0001). The amplitude of nystagmus was more strongly dependent on the direction of gaze in the FRMD7 group being lower at primary position (P < 0.0001), compared to non-FRMD7 group (P = 0.83). Pendular nystagmus waveforms were also more frequent in the FRMD7 group (P = 0.003). Fifty-three percent of the obligate female carriers of an FRMD7 mutation were clinically affected. The VA's in affected females were slightly better compared to affected males (P = 0.014). Subnormal optokinetic responses were found in a subgroup of obligate unaffected carriers, which may be interpreted as a sub-clinical manifestation. FRMD7 is a major cause of X-linked IIN. Most clinical and eye movement characteristics were similar in the FRMD7 group and non-FRMD7 group with most patients having good VA and stereopsis and low incidence of strabismus. Fewer patients in the FRMD7 group had AHPs, their amplitude of nystagmus being lower in primary position. Our findings are helpful in the clinical identification of IIN and genetic counselling of nystagmus patients.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/brain/awn046

Aidan A. Thompson; Denise Y. P. Henriques

Updating visual memory across eye movements for ocular and arm motor control Journal Article

In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2507–2514, 2008.

@article{Thompson2008,
title = {Updating visual memory across eye movements for ocular and arm motor control},
author = {Aidan A. Thompson and Denise Y. P. Henriques},
doi = {10.1152/jn.90599.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {5},
pages = {2507--2514},
abstract = {Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1152/jn.90599.2008

Delphine Dahan; Sarah J. Drucker; Rebecca A. Scarborough

Talker adaptation in speech perception: Adjusting the signal or the representations? Journal Article

In: Cognition, vol. 108, no. 3, pp. 710–718, 2008.

@article{Dahan2008,
title = {Talker adaptation in speech perception: Adjusting the signal or the representations?},
author = {Delphine Dahan and Sarah J. Drucker and Rebecca A. Scarborough},
doi = {10.1016/j.cognition.2008.06.003},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
number = {3},
pages = {710--718},
abstract = {Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /æ/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bæ], was mitigated by the reduced probability for the input to correspond to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cognition.2008.06.003

Stephen V. David; Benjamin Y. Hayden; James A. Mazer; Jack L. Gallant

Attention to stimulus features shifts spectral tuning of V4 neurons during natural vision Journal Article

In: Neuron, vol. 59, no. 3, pp. 509–521, 2008.

@article{David2008,
title = {Attention to stimulus features shifts spectral tuning of V4 neurons during natural vision},
author = {Stephen V. David and Benjamin Y. Hayden and James A. Mazer and Jack L. Gallant},
doi = {10.1016/j.neuron.2008.07.001},
year = {2008},
date = {2008-01-01},
journal = {Neuron},
volume = {59},
number = {3},
pages = {509--521},
abstract = {Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuron.2008.07.001

Scott L. Davis; Teresa C. Frohman; C. J. Crandall; M. J. Brown; D. A. Mills; Phillip D. Kramer; O. Stuve; Elliot M. Frohman

Modeling Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis Journal Article

In: Neurology, vol. 70, pp. 1098–1106, 2008.

@article{Davis2008,
title = {Modeling Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis},
author = {Scott L. Davis and Teresa C. Frohman and C. J. Crandall and M. J. Brown and D. A. Mills and Phillip D. Kramer and O. Stuve and Elliot M. Frohman},
doi = {10.1212/01.wnl.0000291009.69226.4d},
year = {2008},
date = {2008-01-01},
journal = {Neurology},
volume = {70},
pages = {1098--1106},
abstract = {Objective: The goal of this investigation was to demonstrate that internuclear ophthalmoparesis (INO) can be utilized to model the effects of body temperature-induced changes on the fidelity of axonal conduction in multiple sclerosis (Uhthoff's phenomenon). Methods: Ocular motor function was measured using infrared oculography at 10-minute intervals in patients with multiple sclerosis (MS) with INO (MS-INO; n=8), patients with MS without INO (MS-CON; n=8), and matched healthy controls (CON; n=8) at normothermic baseline, during whole-body heating (increase in core temperature 0.8°C as measured by an ingestible temperature probe and transabdominal telemetry), and after whole-body cooling. The versional disconjugacy index (velocity-VDI), the ratio of abducting/adducting eye movements for velocity, was calculated to assess changes in interocular disconjugacy. The first pass amplitude (FPA), the position of the adducting eye when the abducting eye achieves a centrifugal fixation target, was also computed. Results: Velocity-VDI and FPA in MS-INO patients was elevated (p<0.001) following whole body heating with respect to baseline measures, confirming a compromise in axonal electrical impulse transmission properties. Velocity-VDI and FPA in MS-INO patients was then restored to baseline values following whole-body cooling, confirming the reversible and stereotyped nature of this characteristic feature of demyelination. Conclusions: We have developed a neurophysiologic model for objectively understanding temperature-related reversible changes in axonal conduction in multiple sclerosis. Our observations corroborate the hypothesis that changes in core body temperature (heating and cooling) are associated with stereotypic decay and restoration in axonal conduction mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1212/01.wnl.0000291009.69226.4d

Denise D. J. Grave; Constanze Hesse; Anne-Marie Brouwer; Volker H. Franz

Fixation locations when grasping partly occluded objects Journal Article

In: Journal of Vision, vol. 8, no. 7, pp. 1–11, 2008.

@article{Grave2008,
title = {Fixation locations when grasping partly occluded objects},
author = {Denise D. J. Grave and Constanze Hesse and Anne-Marie Brouwer and Volker H. Franz},
doi = {10.1167/8.7.5},
year = {2008},
date = {2008-01-01},
journal = {Journal of Vision},
volume = {8},
number = {7},
pages = {1--11},
abstract = {When grasping an object, subjects tend to look at the contact positions of the digits (A. M. Brouwer, V. H. Franz, D. Kerzel, & K. R. Gegenfurtner, 2005; R. S. Johansson, G. Westling, A. Bäckström, & J. R. Flanagan, 2001). However, these contact positions are not always visible due to occlusion. Subjects might look at occluded parts to determine the location of the contact positions based on extrapolated information. On the other hand, subjects might avoid looking at occluded parts since no object information can be gathered there. To find out where subjects fixate when grasping occluded objects, we let them grasp flat shapes with the index finger and thumb at predefined contact positions. Either the contact position of the thumb or the finger or both was occluded. In a control condition, a part of the object that does not involve the contact positions was occluded. The results showed that subjects did look at occluded object parts, suggesting that they used extrapolated object information for grasping. Additionally, they preferred to look in the direction of the index finger. When the contact position of the index finger was occluded, this tendency was inhibited. Thus, an occluder does not prevent fixations on occluded object parts, but it does affect fixation locations especially in conditions where the preferred fixation location is occluded.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/8.7.5

Sarah Brown-Schmidt; Christine Gunlogson; Michael K. Tanenhaus

Addressees distinguish shared from private information when interpreting questions during interactive conversation Journal Article

In: Cognition, vol. 107, no. 3, pp. 1122–1134, 2008.

@article{BrownSchmidt2008,
title = {Addressees distinguish shared from private information when interpreting questions during interactive conversation},
author = {Sarah Brown-Schmidt and Christine Gunlogson and Michael K. Tanenhaus},
doi = {10.1016/j.cognition.2007.11.005},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {107},
number = {3},
pages = {1122--1134},
abstract = {Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as What's above the cow with shoes? Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this tendency, and quickly directed attention toward private entities when interpreting these questions. A second experiment replicated the interpretation findings in a more constrained setting. These results add to previous evidence that the common ground influences initial language processes, and suggests that the strength and polarity of common ground effects may depend on contributions of sentence type as well as the interactivity of the situation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as What's above the cow with shoes? Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this tendency, and quickly directed attention toward private entities when interpreting these questions. A second experiment replicated the interpretation findings in a more constrained setting. These results add to previous evidence that the common ground influences initial language processes, and suggests that the strength and polarity of common ground effects may depend on contributions of sentence type as well as the interactivity of the situation.

  • doi:10.1016/j.cognition.2007.11.005

Julie N. Buchan; Martin Paré; Kevin G. Munhall

The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception Journal Article

In: Brain Research, vol. 1242, pp. 162–171, 2008.

@article{Buchan2008,
title = {The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception},
author = {Julie N. Buchan and Martin Paré and Kevin G. Munhall},
doi = {10.1016/j.brainres.2008.06.083},
year = {2008},
date = {2008-01-01},
journal = {Brain Research},
volume = {1242},
pages = {162--171},
abstract = {During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  • doi:10.1016/j.brainres.2008.06.083

Antimo Buonocore; Robert D. McIntosh

Saccadic inhibition underlies the remote distractor effect Journal Article

In: Experimental Brain Research, vol. 191, no. 1, pp. 117–122, 2008.

@article{Buonocore2008,
title = {Saccadic inhibition underlies the remote distractor effect},
author = {Antimo Buonocore and Robert D. McIntosh},
doi = {10.1007/s00221-008-1558-7},
year = {2008},
date = {2008-01-01},
journal = {Experimental Brain Research},
volume = {191},
number = {1},
pages = {117--122},
abstract = {The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90-100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90-100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect.

  • doi:10.1007/s00221-008-1558-7

Manuel G. Calvo; Pedro Avero

Affective priming of emotional pictures in parafoveal vision: Left visual field advantage Journal Article

In: Cognitive, Affective and Behavioral Neuroscience, vol. 8, no. 1, pp. 41–53, 2008.

@article{Calvo2008,
title = {Affective priming of emotional pictures in parafoveal vision: Left visual field advantage},
author = {Manuel G. Calvo and Pedro Avero},
doi = {10.3758/CABN.8.1.41},
year = {2008},
date = {2008-01-01},
journal = {Cognitive, Affective and Behavioral Neuroscience},
volume = {8},
number = {1},
pages = {41--53},
abstract = {This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved. Copyright 2008 Psychonomic Society, Inc.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved. Copyright 2008 Psychonomic Society, Inc.

  • doi:10.3758/CABN.8.1.41

Manuel G. Calvo; Michael W. Eysenck

Affective significance enhances covert attention: Roles of anxiety and word familiarity Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1669–1686, 2008.

@article{Calvo2008a,
title = {Affective significance enhances covert attention: Roles of anxiety and word familiarity},
author = {Manuel G. Calvo and Michael W. Eysenck},
doi = {10.1080/17470210701743700},
year = {2008},
date = {2008-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {61},
number = {11},
pages = {1669--1686},
abstract = {To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words.

  • doi:10.1080/17470210701743700

Manuel G. Calvo; Lauri Nummenmaa

Detection of emotional faces: Salient physical features guide effective visual search Journal Article

In: Journal of Experimental Psychology: General, vol. 137, no. 3, pp. 471–494, 2008.

@article{Calvo2008b,
title = {Detection of emotional faces: Salient physical features guide effective visual search},
author = {Manuel G. Calvo and Lauri Nummenmaa},
doi = {10.1037/a0012771},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {137},
number = {3},
pages = {471--494},
abstract = {In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection.

  • doi:10.1037/a0012771

Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero

Visual search of emotional faces: Eye-movement assessment of component processes Journal Article

In: Experimental Psychology, vol. 55, no. 6, pp. 359–370, 2008.

@article{Calvo2008c,
title = {Visual search of emotional faces: Eye-movement assessment of component processes},
author = {Manuel G. Calvo and Lauri Nummenmaa and Pedro Avero},
doi = {10.1027/1618-3169.55.6.359},
year = {2008},
date = {2008-01-01},
journal = {Experimental Psychology},
volume = {55},
number = {6},
pages = {359--370},
abstract = {In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional

  • doi:10.1027/1618-3169.55.6.359

Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä

Emotional scenes in peripheral vision: Selective orienting and gist processing, but not content identification Journal Article

In: Emotion, vol. 8, no. 1, pp. 68–80, 2008.

@article{Calvo2008d,
title = {Emotional scenes in peripheral vision: Selective orienting and gist processing, but not content identification},
author = {Manuel G. Calvo and Lauri Nummenmaa and Jukka Hyönä},
doi = {10.1037/1528-3542.8.1.68},
year = {2008},
date = {2008-01-01},
journal = {Emotion},
volume = {8},
number = {1},
pages = {68--80},
abstract = {Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or--in the absence of overt attention--causes false alarms for related probes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or--in the absence of overt attention--causes false alarms for related probes.

  • doi:10.1037/1528-3542.8.1.68

David D. Cox; Alexander M. Papanastassiou; Daniel Oreper; Benjamin B. Andken; James J. DiCarlo

High-resolution three-dimensional microelectrode brain mapping using stereo microfocal x-ray imaging Journal Article

In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2966–2976, 2008.

@article{Cox2008,
title = {High-resolution three-dimensional microelectrode brain mapping using stereo microfocal x-ray imaging},
author = {David D. Cox and Alexander M. Papanastassiou and Daniel Oreper and Benjamin B. Andken and James J. DiCarlo},
doi = {10.1152/jn.90672.2008},
year = {2008},
date = {2008-01-01},
journal = {Journal of Neurophysiology},
volume = {100},
number = {5},
pages = {2966--2976},
abstract = {Much of our knowledge of brain function has been gleaned from studies using microelectrodes to characterize the response properties of individual neurons in vivo. However, because it is difficult to accurately determine the location of a microelectrode tip within the brain, it is impossible to systematically map the fine three-dimensional spatial organization of many brain areas, especially in deep structures. Here, we present a practical method based on digital stereo microfocal X-ray imaging that makes it possible to estimate the three-dimensional position of each and every microelectrode recording site in "real time" during experimental sessions. We determined the system's ex vivo localization accuracy to be better than 50 microm, and we show how we have used this method to coregister hundreds of deep-brain microelectrode recordings in monkeys to a common frame of reference with median error of <150 microm. We further show how we can coregister those sites with magnetic resonance images (MRIs), allowing for comparison with anatomy, and laying the groundwork for more detailed electrophysiology/functional MRI comparison. Minimally, this method allows one to marry the single-cell specificity of microelectrode recording with the spatial mapping abilities of imaging techniques; furthermore, it has the potential of yielding fundamentally new kinds of high-resolution maps of brain function.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Much of our knowledge of brain function has been gleaned from studies using microelectrodes to characterize the response properties of individual neurons in vivo. However, because it is difficult to accurately determine the location of a microelectrode tip within the brain, it is impossible to systematically map the fine three-dimensional spatial organization of many brain areas, especially in deep structures. Here, we present a practical method based on digital stereo microfocal X-ray imaging that makes it possible to estimate the three-dimensional position of each and every microelectrode recording site in "real time" during experimental sessions. We determined the system's ex vivo localization accuracy to be better than 50 microm, and we show how we have used this method to coregister hundreds of deep-brain microelectrode recordings in monkeys to a common frame of reference with median error of <150 microm. We further show how we can coregister those sites with magnetic resonance images (MRIs), allowing for comparison with anatomy, and laying the groundwork for more detailed electrophysiology/functional MRI comparison. Minimally, this method allows one to marry the single-cell specificity of microelectrode recording with the spatial mapping abilities of imaging techniques; furthermore, it has the potential of yielding fundamentally new kinds of high-resolution maps of brain function.

  • doi:10.1152/jn.90672.2008

Matthew T. Crawford; John J. Skowronski; Chris Stiff; Ute Leonards

Seeing, but not thinking: Limiting the spread of spontaneous trait transference II Journal Article

In: Journal of Experimental Social Psychology, vol. 44, no. 3, pp. 840–847, 2008.

@article{Crawford2008,
title = {Seeing, but not thinking: Limiting the spread of spontaneous trait transference II},
author = {Matthew T. Crawford and John J. Skowronski and Chris Stiff and Ute Leonards},
doi = {10.1016/j.jesp.2007.08.001},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Social Psychology},
volume = {44},
number = {3},
pages = {840--847},
abstract = {When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants.

  • doi:10.1016/j.jesp.2007.08.001

Let's stay in touch

  • Twitter
  • Facebook
  • Instagram
  • LinkedIn
  • YouTube
Newsletter
Newsletter Archive
Conferences

Contact

info@sr-research.com

Phone: +1-613-271-8686

Toll-free: +1-866-821-0731

Fax: +1-613-482-4866

Quick Links

Products

Solutions

Support Forum

Legal Information

Legal Notice

Privacy Policy | Accessibility Policy

EyeLink® eye trackers are research devices and cannot be used for medical diagnosis or treatment.

Featured Blog

Reading Profiles of Adults with Dyslexia


Copyright © 2023 · SR Research Ltd. All Rights Reserved. EyeLink is a registered trademark of SR Research Ltd.