EyeLink Cognitive Publications 

All EyeLink cognitive and perception research publications through 2022 (plus some early 2023 articles) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we have missed any EyeLink cognitive or perception articles, please email us!

2010

Melissa L. -H. Võ; Werner X. Schneider

A glimpse is not a glimpse: Differential processing of flashed scene previews leads to differential target search benefits Journal Article

In: Visual Cognition, vol. 18, no. 2, pp. 171–200, 2010.

What information can we extract from an initial glimpse of a scene and how do people differ in the way they process visual information? In Experiment 1, participants searched 3-D-rendered images of naturalistic scenes for embedded target objects through a gaze-contingent window. A briefly flashed scene preview (identical, background, objects, or control) preceded each search scene. We found that search performance varied as a function of the participants' reported ability to distinguish between previews. Experiment 2 further investigated the source of individual differences using a whole-report task. Data were analysed following the "Theory of Visual Attention" approach, which allows the assessment of visual processing efficiency parameters. Results from both experiments indicate that during the first glimpse of a scene global processing of visual information predominates and that individual differences in initial scene processing and subsequent eye movement behaviour are based on individual differences in visual perceptual processing speed.

  • doi:10.1080/13506280802547901
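
The gaze-contingent window technique used in this study is easy to sketch. Below is a minimal, hypothetical illustration in Python (not the authors' code): it assumes PsychoPy is available, uses a placeholder stimulus file 'scene.png', and substitutes the mouse position for the gaze sample a real eye tracker would supply.

from psychopy import core, event, visual

# Minimal gaze-contingent window demo; mouse position stands in for gaze.
win = visual.Window(size=(1024, 768), units='pix', color='grey')
scene = visual.ImageStim(win, image='scene.png')  # placeholder search scene
window = visual.Aperture(win, size=200)           # circular viewing window
mouse = event.Mouse(win=win)

clock = core.Clock()
while clock.getTime() < 10.0:                     # one 10-second demo trial
    window.pos = mouse.getPos()                   # re-centre the window on "gaze"
    scene.draw()                                  # drawing is clipped to the window
    win.flip()

win.close()
core.quit()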

Melissa L. -H. Võ; Jan Zwickel; Werner X. Schneider

Has someone moved my plate? The immediate and persistent effects of object location changes on gaze allocation during natural scene viewing Journal Article

In: Attention, Perception, and Psychophysics, vol. 72, no. 5, pp. 1251–1255, 2010.

In this study, we investigated the immediate and persisting effects of object location changes on gaze control during scene viewing. Participants repeatedly inspected a randomized set of naturalistic scenes for later questioning. On the seventh presentation, an object was shown at a new location, whereas the change was reversed for all subsequent presentations of the scene. We tested whether deviations from stored scene representations would modify eye movements to the changed regions and whether these effects would persist. We found that changed objects were looked at longer and more often, regardless of change reportability. These effects were most pronounced immediately after the change occurred and quickly leveled off once a scene remained unchanged. However, participants continued to perform short validation checks to changed scene regions, which implies a persistent modulation of eye movement control beyond the occurrence of object location changes.

  • doi:10.3758/APP.72.5.1251

Noriko Yamagishi; Stephen J. Anderson; Mitsuo Kawato

The observant mind: Self-awareness of attentional status Journal Article

In: Proceedings of the Royal Society B: Biological Sciences, vol. 277, no. 1699, pp. 3421–3426, 2010.

Visual perception is dependent not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation--by any amount required--until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms to 12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model.

  • doi:10.1098/rspb.2010.0891
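
The reciprocal-latency result above reflects a linear rise-to-threshold (LATER-style) account: if a decision signal rises at a normally distributed rate r toward a fixed threshold theta, latency is T = theta / r, so 1/T is Gaussian. The following minimal Python sketch uses synthetic data with arbitrary parameter values (not estimates from the paper) to illustrate the transform.

import numpy as np
from scipy import stats

# Rise-to-threshold simulation: rate r ~ Normal(mu, sigma), latency T = theta / r.
rng = np.random.default_rng(0)
mu, sigma, theta = 3.0, 0.6, 1.0        # arbitrary demo parameters
rates = rng.normal(mu, sigma, size=5000)
rates = rates[rates > 0]                # keep trials where the signal actually rises
latencies = theta / rates               # seconds

# On a reciprocal scale the latencies should look Gaussian.
reciprocal = 1.0 / latencies
w, p = stats.shapiro(reciprocal[:500])  # Shapiro-Wilk normality check on a subsample
print(f"reciprocal latency: mean={reciprocal.mean():.2f} 1/s, sd={reciprocal.std():.2f} 1/s")
print(f"Shapiro-Wilk p = {p:.3f}")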

Jan Zwickel; Melissa L. -H. Võ

How the presence of persons biases eye movements Journal Article

In: Psychonomic Bulletin & Review, vol. 17, no. 2, pp. 257–262, 2010.

We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent.

  • doi:10.3758/PBR.17.2.257

Holly Bridge; Stephen L. Hicks; Jingyi Xie; Thomas W. Okell; Sabira K. Mannan; Iona Alexander; Alan Cowey; Christopher Kennard

Visual activation of extra-striate cortex in the absence of V1 activation Journal Article

In: Neuropsychologia, vol. 48, no. 14, pp. 4148–4154, 2010.

When the primary visual cortex (V1) is damaged, there are a number of alternative pathways that can carry visual information from the eyes to extrastriate visual areas. Damage to the visual cortex from trauma or infarct is often unilateral, extensive and includes gray matter and white matter tracts, which can disrupt other routes to residual visual function. We report an unusual young patient, SBR, who has bilateral damage to the gray matter of V1, sparing the adjacent white matter and surrounding visual areas. Using functional magnetic resonance imaging (fMRI), we show that area MT+/V5 is activated bilaterally to visual stimulation, while no significant activity could be measured in V1. Additionally, the white matter tracts between the lateral geniculate nucleus (LGN) and V1 appear to show some degeneration, while the tracts between LGN and MT+/V5 do not differ from controls. Furthermore, the bilateral nature of the damage suggests that residual visual capacity does not result from strengthened interhemispheric connections. The very specific lesion in SBR suggests that the ipsilateral connection between LGN and MT+/V5 may be important for residual visual function in the presence of damage to V1.

  • doi:10.1016/j.neuropsychologia.2010.10.022

James R. Brockmole; Melissa L. -H. Võ

Semantic memory for contextual regularities within and across scene categories: Evidence from eye movements Journal Article

In: Attention, Perception, and Psychophysics, vol. 72, no. 7, pp. 1803–1813, 2010.

When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

  • doi:10.3758/APP.72.7.1803

Simona Buetti; Dirk Kerzel

Effects of saccades and response type on the Simon effect: If you look at the stimulus, the Simon effect may be gone Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2172–2189, 2010.

The Simon effect has most often been investigated with key-press responses and eye fixation. In the present study, we asked how the type of eye movement and the type of manual response affect response selection in a Simon task. We investigated three eye movement instructions (spontaneous, saccade, and fixation) while participants performed goal-directed (i.e., reaching) or symbolic (i.e., finger-lift) responses. Initially, no oculomotor constraints were imposed, and a Simon effect was present for both response types. Next, eye movements were constrained. Participants had to either make a saccade toward the stimulus or maintain gaze fixed in the screen centre. While a congruency effect was always observed in reaching responses, it disappeared in finger-lift responses. We suggest that the redirection of saccades from the stimulus to the correct response location in noncorresponding trials contributes to the Simon effect. Because of eye-hand coupling, this occurred in a mandatory manner with reaching responses but not with finger-lift responses. Thus, the Simon effect with key-presses disappears when participants do what they typically do--look at the stimulus.

  • doi:10.1080/17470211003802434

Patrick A. Byrne; David C. Cappadocia; J. Douglas Crawford

Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating Journal Article

In: Vision Research, vol. 50, no. 24, pp. 2661–2670, 2010.

Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduce RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action.

  • doi:10.1016/j.visres.2010.08.038

Torsten Betz

Investigating task-dependent top-down effects on overt visual attention Journal Article

In: Journal of Vision, vol. 10, no. 3, pp. 1–14, 2010.

Different tasks can induce different viewing behavior, yet it is still an open question how, or whether at all, high-level task information interacts with the bottom-up processing of stimulus-related information. Two possible causal routes are considered in this paper. Firstly, the weak top-down hypothesis, according to which top-down effects are mediated by changes of feature weights in the bottom-up system. Secondly, the strong top-down hypothesis, which proposes that top-down information acts independently of the bottom-up process. To clarify the influences of these different routes, viewing behavior was recorded on web pages for three different tasks: free viewing, content awareness, and information search. The data reveal significant task-dependent differences in viewing behavior that are accompanied by minor changes in feature-fixation correlations. Extensive computational modeling shows that these small but significant changes are insufficient to explain the observed differences in viewing behavior. Collectively, the results show that task-dependent differences in the current setting are not mediated by a reweighting of features in the bottom-up hierarchy, ruling out the weak top-down hypothesis. Consequently, the strong top-down hypothesis is the most viable explanation for the observed data.

  • doi:10.1167/10.3.15

Markus Bindemann

Scene and screen center bias early eye movements in scene viewing Journal Article

In: Vision Research, vol. 50, no. 23, pp. 2577–2587, 2010.

In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn towards the center of the screen. Observers searched visual scenes in a person detection task, while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases to the scene and the screen center. The scene effect was modified by person content but is not specific to person detection tasks, while the screen bias cannot be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but also demonstrate a bias to the screen center that forms a potential artifact in visual perception experiments.

  • doi:10.1016/j.visres.2010.08.016
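
The competing scene-centre and screen-centre biases can be quantified by comparing each initial fixation's distance to the two reference points. This minimal Python sketch uses fabricated fixation coordinates on a hypothetical 1024 x 768 display (placeholder numbers, not data from the study).

import numpy as np

# Synthetic first-fixation positions, with the scene offset 150 px to the
# right of the screen centre, as in an offset-scene condition.
rng = np.random.default_rng(1)
screen_centre = np.array([512.0, 384.0])
scene_centre = screen_centre + np.array([150.0, 0.0])
fixations = rng.normal(loc=(590.0, 384.0), scale=60.0, size=(200, 2))

dist_screen = np.linalg.norm(fixations - screen_centre, axis=1).mean()
dist_scene = np.linalg.norm(fixations - scene_centre, axis=1).mean()
print(f"mean distance to screen centre: {dist_screen:.1f} px")
print(f"mean distance to scene centre:  {dist_scene:.1f} px")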

Markus Bindemann; Christoph Scheepers; Heather J. Ferguson; A. Mike Burton

Face, body, and center of gravity mediate person detection in natural scenes Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1477–1485, 2010.

Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene, and only then to fixate on a person. When a person's face was rendered invisible in scenes, bodies were detected as quickly as faces without bodies, indicating that both are equally useful for person detection. Detection was optimized when face and body could be seen, but observers preferentially fixated faces, reinforcing the notion of a prominent role for the face in social perception. These findings have implications for claims of attention capture by faces in that they demonstrate a mediating influence of body cues and general scanning principles in natural scenes.

  • doi:10.1037/a0019057

Walter R. Boot; James R. Brockmole

Irrelevant features at fixation modulate saccadic latency and direction in visual search Journal Article

In: Visual Cognition, vol. 18, no. 4, pp. 481–491, 2010.

Do irrelevant visual features at fixation influence saccadic latency and direction? In a novel search paradigm, we found that when the feature of an irrelevant item at fixation matched the feature defining the target, oculomotor disengagement was delayed, and when it matched a salient distractor, more eye movements were directed to that distractor. Latency effects were short-lived; direction effects persisted for up to 200 ms. We replicated latency results and demonstrated facilitated eye movements to the target when the fixated item matched the target colour. Irrelevant features of fixated items influence saccadic latency and direction and may be important considerations in predicting search behaviour.

  • doi:10.1136/jmedgenet-2011-100306

Kim Joris Boström; Anne Kathrin Warzecha

Open-loop speed discrimination performance of ocular following response and perception Journal Article

In: Vision Research, vol. 50, no. 9, pp. 870–882, 2010.

So far, it remains largely unresolved to what extent neuronal noise affects behavioral responses. Here, we investigate where in the human visual motion pathway the noise originates that limits the performance of the entire system. In particular, we ask whether perception and eye movements are limited by a common noise source, or whether processing stages after the separation into different streams limit their performance. We use the ocular following response of human subjects and a simultaneously performed psychophysical paradigm to directly compare the perceptual and oculomotor systems with respect to their speed discrimination ability. Our results show that in the open-loop condition the perceptual system is superior to the oculomotor system and that the responses of both systems are not correlated. Two alternative conclusions can be drawn from these findings. Either the perceptual and oculomotor pathways are effectively separate, or the amount of post-sensory (motor) noise is not negligible in comparison to the amount of sensory noise. In view of well-established experimental findings and due to plausibility considerations, we favor the latter conclusion.

  • doi:10.1016/j.visres.2010.02.010

Sarah Bate; Catherine Haslam; Timothy L. Hodgson; Ashok Jansari; Nicola J. Gregory; Janice Kay

Positive and negative emotion enhances the processing of famous faces in a semantic judgment task Journal Article

In: Neuropsychology, vol. 24, no. 1, pp. 84–89, 2010.

Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences latter stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing.

  • doi:10.1037/a0017202

Oliver Baumann; Jason B. Mattingley

Scaling of neural responses to visual and auditory motion in the human cerebellum Journal Article

In: Journal of Neuroscience, vol. 30, no. 12, pp. 4489–4495, 2010.

The human cerebellum contains approximately half of all the neurons within the cerebrum, yet most experimental work in human neuroscience over the last century has focused exclusively on the structure and functions of the forebrain. The cerebellum has an undisputed role in a range of motor functions (Thach et al., 1992), but its potential contributions to sensory and cognitive processes are widely debated (Stoodley and Schmahmann, 2009). Here we used functional magnetic resonance imaging to test the hypothesis that the human cerebellum is involved in the acquisition of auditory and visual sensory data. We monitored neural activity within the cerebellum while participants engaged in a task that required them to discriminate the direction of a visual or auditory motion signal in noise. We identified a distinct set of cerebellar regions that were differentially activated for visual stimuli (vermal lobule VI and right-hemispheric lobule X) and auditory stimuli (right-hemispheric lobules VIIIA and VIIIB and hemispheric lobule VI bilaterally). In addition, we identified a region in left crus I in which activity correlated significantly with increases in the perceptual demands of the task (i.e., with decreasing signal strength), for both auditory and visual stimuli. Our results support suggestions of a role for the cerebellum in the processing of auditory and visual motion and suggest that parts of cerebellar cortex are concerned with tracking movements of objects around the animal, rather than with controlling movements of the animal itself (Paulin, 1993).

  • doi:10.1523/JNEUROSCI.5661-09.2010

Paul M. Bays; V. Singh-Curry; N. Gorgoraptis; Jon Driver; Masud Husain

Integration of goal- and stimulus-related visual signals revealed by damage to human parietal cortex Journal Article

In: Journal of Neuroscience, vol. 30, no. 17, pp. 5968–5978, 2010.

Where we look is determined both by our current intentions and by the tendency of visually salient items to "catch our eye." After damage to parietal cortex, the normal process of directing attention is often profoundly impaired. Here, we tracked parietal patients' eye movements during visual search to separately map impairments in goal-directed orienting to targets versus stimulus-driven gaze shifts to salient but task-irrelevant probes. Deficits in these two distinct types of attentional selection are shown to be identical in both magnitude and spatial distribution, consistent with damage to a "priority map" that integrates goal- and stimulus-related signals to select visual targets. When goal-relevant and visually salient items compete for attention, the outcome depends on a biased competition in which the priority of contralesional targets is undervalued. On the basis of these findings, we further demonstrate that parietal patients' spatial bias (neglect) in goal-directed visual exploration can be corrected and even reversed by systematically manipulating the spatial distribution of stimulus salience in the visual array.

  • doi:10.1523/JNEUROSCI.0997-10.2010
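
The "priority map" account, in which goal- and stimulus-related signals are integrated before a target is selected, lends itself to a toy illustration. The Python sketch below is a deliberately crude caricature, not the authors' model: the map size, weights, and lesion factor are arbitrary assumptions.

import numpy as np

# Toy priority map: sum bottom-up salience and top-down goal relevance,
# then select the location with the highest combined priority.
rng = np.random.default_rng(2)
salience = rng.random((32, 32))          # stimulus-driven signal
goal = np.zeros((32, 32))
goal[20, 8] = 1.0                        # task-relevant target on the left side

priority = 0.5 * salience + 0.5 * goal   # integrated priority map

# Model a right-parietal lesion as undervaluing left (contralesional) priority.
lesioned = priority.copy()
lesioned[:, :16] *= 0.4

print("intact selection:  ", np.unravel_index(priority.argmax(), priority.shape))
print("lesioned selection:", np.unravel_index(lesioned.argmax(), lesioned.shape))

Raising the salience of left-side items in the lesioned map shifts selection back toward the target, mirroring the salience-manipulation remediation described in the abstract.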

Melissa R. Beck; Maura C. Lohrenz; J. Gregory Trafton

Measuring search efficiency in complex visual search tasks: Global and local clutter Journal Article

In: Journal of Experimental Psychology: Applied, vol. 16, no. 3, pp. 238–250, 2010.

Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts.

  • doi:10.1037/a0019633
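
The study depends on a quantitative clutter measure; a far cruder stand-in conveys the idea. The Python sketch below computes edge density, one simple clutter proxy, and is an illustrative assumption rather than the measure used in the paper.

import numpy as np

def edge_density(img, thresh=0.1):
    """Crude clutter proxy: fraction of pixels with a strong local gradient.

    img: 2-D grayscale array scaled to [0, 1].
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    return float((magnitude > thresh).mean())

# Global clutter: the proxy over the whole chart. Local clutter: the proxy
# over a window cropped around the target (coordinates are placeholders).
chart = np.random.default_rng(3).random((768, 1024))
print(f"global clutter proxy: {edge_density(chart):.3f}")
print(f"local clutter proxy:  {edge_density(chart[300:364, 500:564]):.3f}")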

Stefanie I. Becker

The role of target-distractor relationships in guiding attention and the eyes in visual search Journal Article

In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 247–265, 2010.

Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target differs from the irrelevant distractors (e.g., larger, redder, darker). Guidance by the relational properties of the target governed intertrial priming effects and capture by irrelevant distractors. First, intertrial switch costs occurred only upon reversals of the coarse relationship between target and nontargets, but they did not occur when the target and nontarget features changed such that the relation remained the same. Second, irrelevant distractors captured most strongly when they differed in the correct direction from all other items--despite the fact that they were less similar to the target. This suggests that priming and contingent capture, which have previously been regarded as prime evidence for feature-based selection, are really due to a relational selection mechanism. Here I propose a new relational vector account of guidance, which holds promise to synthesize a wide range of different findings that have previously been attributed to different mechanisms of visual search.

  • doi:10.1037/a0018808
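
The contrast between feature-specific and relational guidance can be made concrete with a toy selection rule over one feature dimension. A sketch under invented "redness" scores: a feature template picks the item closest to a stored target value, whereas a relational template picks whichever item is, say, reddest relative to the rest of the display, regardless of its absolute value:

# Illustrative 'redness' scores for the items in one search display.
items = [0.30, 0.35, 0.80, 0.40]  # item 2 is the reddest

def feature_template(items, target_value):
    # Tuned to a specific feature value: select the closest match.
    return min(range(len(items)), key=lambda i: abs(items[i] - target_value))

def relational_template(items):
    # Tuned to a relation ('redder than everything else'): select the
    # item that maximises the feature, whatever its exact value.
    return max(range(len(items)), key=lambda i: items[i])

print(feature_template(items, target_value=0.45))  # -> 3 (closest to 0.45)
print(relational_template(items))                  # -> 2 (the reddest item)

On the relational account, the second rule predicts capture by a distractor that exaggerates the target-nontarget relation even when its feature value is less target-similar.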

Stefanie I. Becker

Oculomotor capture by colour singletons depends on intertrial priming Journal Article

In: Vision Research, vol. 50, no. 21, pp. 2116–2126, 2010.

In visual search, an irrelevant colour singleton captures attention when the colour of the distractor changes across trials (e.g., from red to green), but not when the colour remains constant (Becker, 2007). The present study shows that intertrial changes of the distractor colour also modulate oculomotor capture: an irrelevant colour singleton distractor was only selected more frequently than the inconspicuous nontargets (1) when its features had switched (compared to the previous trial), or (2) when the distractor had been presented at the same position as the target on the previous trial. These results throw doubt on the notion that colour distractors capture attention and the eyes because of their high feature contrast, which is available at an earlier point in time than information about specific feature values. Instead, attention and eye movements are apparently controlled by a system that operates on feature-specific information, and gauges the informativity of nominally irrelevant features.

  • doi:10.1016/j.visres.2010.08.001
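
The key intertrial analysis here conditions oculomotor capture on whether the distractor colour repeated or switched from the preceding trial. A hedged sketch, assuming a per-trial record of distractor colour and whether the first saccade landed on the distractor; the field names and data are invented for illustration:

# Each trial: distractor colour and whether the first saccade went to it.
trials = [
    {"colour": "red", "captured": False},
    {"colour": "red", "captured": False},
    {"colour": "green", "captured": True},   # colour switched
    {"colour": "green", "captured": False},
    {"colour": "red", "captured": True},     # colour switched
]

def rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else float("nan")

repeat, switch = [], []
for prev, cur in zip(trials, trials[1:]):
    (repeat if prev["colour"] == cur["colour"] else switch).append(cur["captured"])

print(f"capture after repeat: {rate(repeat):.2f}")
print(f"capture after switch: {rate(switch):.2f}")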

Stefanie I. Becker

Testing a postselectional account of across-dimension switch costs Journal Article

In: Psychonomic Bulletin & Review, vol. 17, no. 6, pp. 853–861, 2010.

In visual search for a pop-out target, responses are faster when the target dimension from the previous trial is repeated than when it changes. Currently, it is unclear whether these across-dimension switch costs originate from processes that guide attention to the target or from later processes (e.g., target identification or response selection). The present study tested two critical predictions of a response-selection account of across-dimension switch costs--namely, (1) that switch costs should occur even when visual attention is guided by a completely different feature and (2) that changing the target dimension should affect the speed of responding, but not the speed of eye movements to the target. The results supported both predictions, indicating that changes of the target dimension do not affect early processes that guide attention to the target but, rather, affect later processes, which commence after the target has been selected.

  • doi:10.3758/PBR.17.6.853

Stefanie I. Becker; Charles L. Folk; Roger W. Remington

The role of relational information in contingent capture Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 6, pp. 1460–1476, 2010.

On the contingent capture account, top-down attentional control settings restrict involuntary attentional capture to items that match the features of the search target. Attention capture is involuntary, but contingent on goals and intentions. The observation that only target-similar items can capture attention has usually been taken to show that the content of the attentional control settings consists of specific feature values. In contrast, the present study demonstrates that the top-down target template can include information about the relationship between the target and nontarget features (e.g., redder, darker, larger). Several spatial cuing experiments show that a singleton cue that is less similar to the target but that shares the same relational property that distinguishes targets from nontargets can capture attention to the same extent as cues that are similar to the target. Moreover, less similar cues can even capture attention more than cues that are identical to the target when they are relationally better than identical cues. The implications for current theories of attentional capture and attentional guidance are discussed.

  • doi:10.1037/a0020370

A. J. Austin; Theodora Duka

Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning Journal Article

In: Behavioural Brain Research, vol. 213, no. 1, pp. 19–26, 2010.

Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value of either monetary reward (appetitive conditioning; 10p or 50p) or a blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of the four conditions matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of expectancy of the outcome, while attention was measured with an eye tracker. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greatest for the partial predictor over the other two predictors. The progression of learning was the same for both groups. These findings suggest that in aversive conditioning attention is driven by the predictive salience of the stimulus while in appetitive conditioning attention is error-driven, when the emotional value of the outcome is comparable.

  • doi:10.1016/j.bbr.2010.04.019
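
Dwell time, the attention measure here, is typically the summed duration of all fixations falling inside an area of interest (AOI) around each stimulus. A minimal sketch, assuming fixations as (x, y, duration) tuples and rectangular AOIs; the coordinates and values are illustrative only:

# Fixations: (x, y, duration_ms). AOIs: name -> (x_min, y_min, x_max, y_max).
fixations = [(210, 300, 250), (230, 310, 180), (610, 295, 400)]
aois = {"full_predictor": (150, 250, 350, 350),
        "partial_predictor": (550, 250, 750, 350)}

def dwell_times(fixations, aois):
    totals = {name: 0 for name in aois}
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
    return totals

print(dwell_times(fixations, aois))
# -> {'full_predictor': 430, 'partial_predictor': 400}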

Jeremy B. Badler; Philippe Lefèvre; Marcus Missal

Causality attribution biases oculomotor responses Journal Article

In: Journal of Neuroscience, vol. 30, no. 31, pp. 10517–10525, 2010.

When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements.

  • doi:10.1523/JNEUROSCI.1733-10.2010
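
Pursuit latency, one of the dependent measures reported, is commonly estimated as the first time after target-motion onset at which smooth eye velocity exceeds a fixed criterion. A rough threshold-crossing sketch; the sampling rate, criterion, and synthetic velocity trace are assumptions, not values from the study:

import numpy as np

fs = 1000                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)    # 500 ms of samples after motion onset
# Synthetic eye-velocity trace (deg/s): flat, then ramping from 120 ms.
eye_velocity = np.where(t < 0.120, 0.0, (t - 0.120) * 80.0)

threshold = 2.0                  # deg/s criterion (assumed)
first_above = np.argmax(eye_velocity > threshold)  # index of first crossing
print(f"pursuit latency: {1000 * t[first_above]:.0f} ms")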

Daniel H. Baker; Erich W. Graf

Extrinsic factors in the perception of bistable motion stimuli Journal Article

In: Vision Research, vol. 50, no. 13, pp. 1257–1265, 2010.

When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because (i) more saccades were directionally congruent with the currently reported percept than expected by chance, and (ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades.

  • doi:10.1016/j.visres.2010.04.016
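
Percept dominance in bistable displays of this kind is usually summarised from the continuous report stream: each dominance duration is the time between successive percept switches, accumulated per percept. A small sketch over invented timestamped reports:

# (time_s, percept) report events from one tracking session; illustrative.
reports = [(0.0, "coherent"), (3.2, "transparent"), (5.1, "coherent"),
           (9.8, "transparent"), (12.0, "coherent")]

durations = {"coherent": [], "transparent": []}
for (t0, percept), (t1, _next) in zip(reports, reports[1:]):
    durations[percept].append(t1 - t0)

for percept, spans in durations.items():
    mean = sum(spans) / len(spans)
    print(f"{percept}: mean dominance {mean:.2f} s over {len(spans)} episodes")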

Mathias Abegg; Hyung Lee; Jason J. S. Barton

Systematic diagonal and vertical errors in antisaccades and memory-guided saccades Journal Article

In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2010.

Studies of memory-guided saccades in monkeys show an upward bias, while studies of antisaccades in humans show a diagonal effect, a deviation of endpoints toward the 45° diagonal. To determine if these two different spatial biases are specific to different types of saccades, we studied prosaccades, antisaccades and memory-guided saccades in humans. The diagonal effect occurred not with prosaccades but with antisaccades and memory-guided saccades with long intervals, consistent with hypotheses that it originates in computations of goal location under conditions of uncertainty. There was a small upward bias for memory-guided saccades but not prosaccades or antisaccades. Thus this bias is not a general effect of target uncertainty but a property specific to memory-guided saccades.

  • doi:10.16910/jemr.3.3.5
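
The diagonal effect is a directional endpoint bias, so a natural summary statistic is the signed angular difference between each saccade endpoint and the goal direction, signed so that positive values point toward the 45° diagonal. A geometry-only sketch with invented endpoint coordinates:

import math

def direction_deg(x, y):
    return math.degrees(math.atan2(y, x))

# Goal direction 30 deg above horizontal; endpoints drift toward 45 deg.
goal_angle = 30.0
endpoints = [(0.86, 0.55), (0.82, 0.60), (0.88, 0.52)]  # invented

for ex, ey in endpoints:
    error = direction_deg(ex, ey) - goal_angle
    # For goals below the diagonal, positive error = pulled toward 45 deg.
    print(f"endpoint error: {error:+.1f} deg")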

Mathias Abegg; Amadeo R. Rodriguez; Hyung Lee; Jason J. S. Barton

'Alternate-goal bias' in antisaccades and the influence of expectation Journal Article

In: Experimental Brain Research, vol. 203, no. 3, pp. 553–562, 2010.

Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'.

  • doi:10.1007/s00221-010-2259-6

Naotoshi Abekawa; Hiroaki Gomi

Spatial coincidence of intentional actions modulates an implicit visuomotor control Journal Article

In: Journal of Neurophysiology, vol. 103, no. 5, pp. 2717–2727, 2010.

We investigated a visuomotor mechanism contributing to reach correction: the manual following response (MFR), which is a quick response to background visual motion that frequently occurs as a reafference when the body moves. Although several visual specificities of the MFR have been elucidated, the functional and computational mechanisms of its motor coordination remain unclear mainly because it involves complex relationships among gaze, reaching target, and visual stimuli. To directly explore how these factors interact in the MFR, we assessed the impact of spatial coincidences among gaze, arm reaching, and visual motion on the MFR. When gaze location was displaced from the reaching target with an identical visual motion kept on the retina, the amplitude of the MFR significantly decreased as displacement increased. A factorial manipulation of gaze, reaching-target, and visual motion locations showed that the response decrease is due to the spatial separation between gaze and reaching target but is not due to the spatial separation between visual motion and reaching target. Additionally, elimination of visual motion around the fovea attenuated the MFR. The effects of these spatial coincidences on the MFR are completely different from their effects on the perceptual mislocalization of targets caused by visual motion. Furthermore, we found clear differences between the modulation sensitivities of the MFR and the ocular following response to spatial mismatch between gaze and reaching locations. These results suggest that the MFR modulation observed in our experiment is not due to changes in visual interaction between target and visual motion or to modulation of motion sensitivity in early visual processing. Instead the motor command of the MFR appears to be modulated by the spatial relationship between gaze and reaching.

  • doi:10.1152/jn.91133.2008

Alper Açık; Adjmal Sarwary; Rafael Schultze-Kraft; Selim Onat; Peter König

Developmental changes in natural viewing behavior: Bottom-up and top-down differences between children, young adults and older adults Journal Article

In: Frontiers in Psychology, vol. 1, pp. 207, 2010.

Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as local image feature - color, luminance contrast etc. - guided viewing, might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. Fixation discrimination performance of local feature values dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and elderly regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.

  • doi:10.3389/fpsyg.2010.00207
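
"Fixation discrimination performance of local feature values" asks how well a feature such as luminance contrast separates fixated from control locations, typically scored as an ROC area (AUC). A compact sketch using the pairwise-comparison form of the AUC; the feature samples are invented:

def auc(fixated, control):
    # Probability that a random fixated value exceeds a random control
    # value (ties count half): the area under the ROC curve.
    wins = sum((f > c) + 0.5 * (f == c) for f in fixated for c in control)
    return wins / (len(fixated) * len(control))

# Local feature values (e.g., luminance contrast); numbers invented.
fixated = [0.42, 0.55, 0.61, 0.38, 0.70]
control = [0.30, 0.44, 0.25, 0.51, 0.36]

print(f"AUC = {auc(fixated, control):.2f}")  # 0.5 = chance; higher = bottom-up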

Ava-Ann Allman; Chawki Benkelfat; France Durand; Igor Sibon; Alain Dagher; Marco Leyton; Glen B. Baker; Gillian A. O'Driscoll

Effect of D-amphetamine on inhibition and motor planning as a function of baseline performance Journal Article

In: Psychopharmacology, vol. 211, no. 4, pp. 423–33, 2010.

RATIONALE: Baseline performance has been reported to predict dopamine (DA) effects on working memory, following an inverted-U pattern. This pattern may hold true for other executive functions that are DA-sensitive. OBJECTIVES: The objective of this study is to investigate the effect of D-amphetamine, an indirect DA agonist, on two other putatively DA-sensitive executive functions, inhibition and motor planning, as a function of baseline performance. METHODS: Participants with no prior stimulant exposure participated in a double-blind crossover study of a single dose of 0.3 mg/kg, p.o. of D-amphetamine and placebo. Participants were divided into high and low groups, based on their performance on the antisaccade and predictive saccade tasks on the baseline day. Executive functions, mood states, heart rate and blood pressure were assessed before (T0) and after drug administration, at 1.5 (T1), 2.5 (T2) and 3.5 h (T3) post-drug. RESULTS: Antisaccade errors decreased with D-amphetamine irrespective of baseline performance (p = 0.025). For antisaccade latency, participants who generated short-latency antisaccades at baseline had longer latencies on D-amphetamine than placebo, while those with long-latency antisaccades at baseline had shorter latencies on D-amphetamine than placebo (drug x group

  • doi:10.1007/s00213-010-1912-x

Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard

Overlapping functional anatomy for working memory and visual search Journal Article

In: Experimental Brain Research, vol. 200, no. 1, pp. 91–107, 2010.

Recent behavioural findings using dual-task paradigms demonstrate the importance of both spatial and non-spatial working memory processes in inefficient visual search (Anderson et al. in Exp Psychol 55:301-312, 2008). Here, using functional magnetic resonance imaging (fMRI), we sought to determine whether brain areas recruited during visual search are also involved in working memory. Using visually matched spatial and non-spatial working memory tasks, we confirmed previous behavioural findings that show significant dual-task interference effects occur when inefficient visual search is performed concurrently with either working memory task. Furthermore, we find considerable overlap in the cortical network activated by inefficient search and both working memory tasks. Our findings suggest that the interference effects observed behaviourally may have arisen from competition for cortical processes subserved by these overlapping regions. Drawing on previous findings (Anderson et al. in Exp Brain Res 180:289-302, 2007), we propose that the most likely anatomical locus for these interference effects is the inferior and middle frontal cortex of the right hemisphere. These areas are associated with attentional selection from memory as well as manipulation of information in memory, and we propose that the visual search and working memory tasks used here compete for common processing resources underlying these mechanisms.

  • doi:10.1007/s00221-009-2000-5

Eamon Caddigan; Alejandro Lleras

Saccadic repulsion in pop-out search: How a target's dodgy history can push the eyes away from it Journal Article

In: Journal of Vision, vol. 10, no. 14, pp. 1–9, 2010.

Previous studies have shown that even in the context of fairly easy selection tasks, as is the case in a pop-out task, selection of the pop-out stimulus can be sped up (in terms of eye movements) when the target-defining feature repeats across trials. Here, we show that selection of a pop-out target can actually be delayed (in terms of saccadic latencies) and made less accurate (in terms of saccade accuracy) when the target-defining feature has recently been associated with distractor status. This effect was observed even though participants' task was to fixate color oddballs (when present) and simply press a button when their eyes reached the target to advance to the next trial. Importantly, the inter-trial effect was also observed in response time (time to advance to the next trial). In contrast, this response time effect was completely eliminated in a second experiment when eye movements were eliminated from the task. That is, when participants still had to press a button to advance to the next trial when an oddball target was present in the display (an oddball detection task experiment). This pattern of results closely links the "need for selection" in a task to the presence of an inter-trial bias of attention (and eye movements) in pop-out search.

  • doi:10.1167/10.14.9

Roberto Caldara; Xinyue Zhou; Sébastien Miellet

Putting culture under the 'Spotlight' reveals universal information use for face recognition Journal Article

In: PLoS ONE, vol. 5, no. 3, pp. e9708, 2010.

Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the nose region more, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction. So the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used Spotlights with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers' fixations. Strikingly, in constrained Spotlight conditions (2° and 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both eyes and mouth was simultaneously available when fixating the nose (8°), as expected EA observers shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.

  • doi:10.1371/journal.pone.0009708
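
The Spotlight manipulation can be approximated offline as a gaze-centred Gaussian aperture that attenuates image content away from fixation. A numpy sketch under assumed image dimensions and aperture width; in the actual experiments the aperture tracked gaze in real time:

import numpy as np

def spotlight(image, gaze_xy, sigma_px):
    # Blend a greyscale image toward uniform grey outside a Gaussian
    # aperture centred on the current gaze position.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    mask = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma_px ** 2))
    grey = image.mean()
    return mask * image + (1 - mask) * grey

face = np.random.rand(240, 240)   # stand-in for a greyscale face image
masked = spotlight(face, gaze_xy=(120, 100), sigma_px=40)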

Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero

Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing Journal Article

In: Visual Cognition, vol. 18, no. 9, pp. 1274–1297, 2010.

Happy, surprised, disgusted, angry, sad, fearful, and neutral facial expressions were presented extrafoveally (2.5° away from fixation) for 150 ms, followed by a probe word for recognition (Experiment 1) or a probe scene for affective valence evaluation (Experiment 2). Eye movements were recorded and gaze-contingent masking prevented foveal viewing of the faces. Results showed that (a) happy expressions were recognized faster than others in the absence of fixations on the faces, (b) the same pattern emerged when the faces were presented upright or upside-down, (c) happy prime faces facilitated the affective evaluation of emotionally congruent probe scenes, and (d) such priming effects occurred at 750 but not at 250 ms prime-probe stimulus-onset asynchrony. This reveals an advantage in the recognition of happy faces outside of overt visual attention, and suggests that this recognition advantage relies initially on featural processing and involves processing of positive affect at a later stage.

  • doi:10.1080/13506285.2010.481867

Linda E. Campbell; Kathryn L. McCabe; Kate Leadbeater; Ulrich Schall; Carmel M. Loughland; Dominique Rich

Visual scanning of faces in 22q11.2 deletion syndrome: Attention to the mouth or the eyes? Journal Article

In: Psychiatry Research, vol. 177, no. 1-2, pp. 211–215, 2010.

Previous research demonstrates that people with 22q11.2 deletion syndrome (22q11DS) have social and interpersonal skill deficits. However, the basis of this deficit is unknown. This study examined, for the first time, how people with 22q11DS process emotional face stimuli using visual scanpath technology. The visual scanpaths of 17 adolescents and age/gender matched healthy controls were recorded while they viewed face images depicting one of seven basic emotions (happy, sad, surprised, angry, fear, disgust and neutral). Recognition accuracy was measured concurrently. People with 22q11DS differed significantly from controls, displaying visual scanpath patterns that were characterised by fewer fixations and a shorter scanpath length. The 22q11DS group also spent significantly more time gazing at the mouth region and significantly less time looking at eye regions of the faces. Recognition accuracy was correspondingly impaired, with 22q11DS subjects displaying particular deficits for fear and disgust. These findings suggest that 22q11DS is associated with a maladaptive visual information processing strategy that may underlie affect recognition accuracy and social functioning deficits in this group.

  • doi:10.1016/j.psychres.2009.06.007
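
The two scanpath measures reported, fixation count and scanpath length, reduce to counting fixations and summing Euclidean distances between consecutive fixation positions. A brief sketch over assumed (x, y) pixel coordinates:

import math

# Fixation positions for one face trial; coordinates are illustrative.
fixations = [(320, 210), (335, 220), (310, 380), (330, 390)]

n_fixations = len(fixations)
scanpath_length = sum(math.dist(p, q) for p, q in zip(fixations, fixations[1:]))
print(f"{n_fixations} fixations, scanpath length {scanpath_length:.0f} px")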

Elena Carbone; Werner X. Schneider

The control of stimulus-driven saccades is subject not to central, but to visual attention limitations Journal Article

In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2168–2175, 2010.

In three experiments, we investigated whether the control of reflexive saccades is subject to central attention limitations. In a dual-task procedure, Task 1 required either unspeeded reporting or ignoring of briefly presented masked stimuli, whereas Task 2 required a speeded saccade toward a visual target. The stimulus onset asynchrony (SOA) between the two tasks was varied. In Experiments 1 and 2, the Task 1 stimulus was one or three letters, and we asked how saccade target selection is influenced by the number of items. We found (1) longer saccade latencies at short than at long SOAs in the report condition, (2) a substantially larger latency increase for three letters than for one letter, and (3) a latency difference between SOAs in the ignore condition. Broadly, these results match the central interference theory. However, in Experiment 3, an auditory stimulus was used as the Task 1 stimulus, to test whether the interference effects in Experiments 1 and 2 were due to visual instead of central interference. Although there was a small saccade latency increase from short to long SOAs, this difference did not increase from the ignore to the report condition. To explain visual interference effects between letter encoding and stimulus-driven saccade control, we propose an extended theory of visual attention.

  • doi:10.3758/BF03196692

Ana B. Chica; Raymond M. Klein; Robert D. Rafal; Joseph B. Hopfinger

Endogenous saccade preparation does not produce inhibition of return: Failure to replicate Rafal, Calabresi, Brennan, & Sciolto (1989) Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 5, pp. 1193–1206, 2010.

Inhibition of Return (IOR, slower reaction times to previously cued or inspected locations) is observed both when eye movements are prohibited, and when the eyes move to the peripheral location and back to the centre before the target appears. It has been postulated that both effects are generated by a common mechanism, the activation of the oculomotor system. In strong support of this claim, IOR is not observed when attention is oriented endogenously and covertly, but it has been observed when eye movements are endogenously prepared, even when not executed. Here, we aimed to replicate and extend the finding that endogenous saccade preparation produces IOR. In five experiments using different paradigms, IOR was not observed when participants endogenously prepared an eye movement. These results lead us to conclude that endogenous saccade preparation is not sufficient to produce IOR.

  • doi:10.1037/a0019951

Ana B. Chica; Tracy L. Taylor; Juan Lupiáñez; Raymond M. Klein

Two mechanisms underlying inhibition of return Journal Article

In: Experimental Brain Research, vol. 201, no. 1, pp. 25–35, 2010.

Inhibition of return (IOR) refers to slower reaction times to targets presented at previously stimulated or inspected locations. Taylor and Klein (J Exp Psychol Hum Percept Perform 26(5):1639-1656, 2000) showed that IOR can affect either attentional/perceptual or motor processes, depending on whether the oculomotor system is in a quiescent or in an activated state, respectively. If the motoric flavour of IOR is truly non-perceptual and non-attentional, no IOR should be observed when the responses to targets are not based on spatial information. In the present experiments, we demonstrated that when the eyes moved to the peripheral cue and back to centre before the target appeared (to generate the motoric flavour), IOR was observed in detection tasks, for which the spatial location is an integral feature of the onset that is reported, but not in colour discrimination tasks, for which the outcome of a non-spatial perceptual discrimination is reported. When eye movements were prevented, both tasks showed robust IOR. We, therefore, conclude that the motoric flavour of IOR, elicited by oculomotor activation, does not affect attention or perceptual processing.

  • doi:10.1007/s00221-009-2004-1

Hyung Lee; Mathias Abegg; Amadeo Rodriguez; John D. Koehn; Jason J. S. Barton

Why do humans make antisaccade errors? Journal Article

In: Experimental Brain Research, vol. 201, no. 1, pp. 65–73, 2010.

Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the participant has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation.

  • doi:10.1007/s00221-009-2008-x

Xingshan Li; Gordon D. Logan; N. Jane Zbrodoff

Where do we look when we count? The role of eye movements in enumeration Journal Article

In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 409–426, 2010.

Two experiments addressed the coupling between eye movements and the cognitive processes underlying enumeration. Experiment 1 compared eye movements in a counting task with those in a “look” task, in which subjects were told to look at each dot in a pattern once and only once. Experiment 2 presented the same dot patterns to every subject twice, to measure the consistency with which dots were fixated between and within subjects. In both experiments, the number of fixations increased linearly with the number of objects to be enumerated, consistent with tight coupling between eye movements and enumeration. However, analyses of fixation locations showed that subjects tended to look at dots in dense, central regions of the display and tended not to look at dots in sparse, peripheral regions of the display, suggesting a looser coupling between eye movements and enumeration. Thus, the eyes do not mirror the enumeration process very directly.

  • doi:10.3758/APP.72.2.409

Hanneke Liesker; Eli Brenner; Jeroen B. J. Smeets

Eye-hand coupling is not the cause of manual return movements when searching Journal Article

In: Experimental Brain Research, vol. 201, no. 2, pp. 221–227, 2010.

When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control.

  • doi:10.1007/s00221-009-2032-x

Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg

(Un)-coupling gaze and attention outside central vision Journal Article

In: Journal of Vision, vol. 10, no. 11, pp. 1–13, 2010.

In normal vision, shifts of attention and gaze are tightly coupled. Here we ask if this coupling affects performance also when central vision is not available. To this aim, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") either in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze have to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform.

  • doi:10.1167/10.11.13

Chia-Lun Liu; Hui-Yan Chiau; Philip Tseng; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan

Antisaccade cost is modulated by contextual experience of location probability Journal Article

In: Journal of Neurophysiology, vol. 103, no. 3, pp. 1438–1447, 2010.

It is well known that pro- and antisaccades may deploy different cognitive processes. However, the specific reason why antisaccades have longer latencies than prosaccades is still under debate. In three experiments, we studied the factors contributing to the antisaccade cost by taking attentional orienting and target location probabilities into account. In experiment 1, using a new antisaccade paradigm, we directly tested Olk and Kingstone's hypothesis, which attributes longer antisaccade latency to the time it takes to reorient from the visual target to the opposite saccadic target. By eliminating the reorienting component in our paradigm, we found no significant difference between the latencies of the two saccade types. In experiment 2, we varied the proportion of prosaccades made to certain locations and found that latencies in the high location-probability (75%) condition were faster than those in the low location-probability condition. Moreover, antisaccade latencies were significantly longer when location probability was high. This pattern can be explained by the notion of competing pathways for pro- and antisaccades in findings of others. In experiment 3, we further explored the degrees of modulation of location probability by decreasing the magnitude of high probability from 75 to 65%. We again observed a pattern similar to that seen in experiment 2 but with smaller modulation effects. Together, these experiments indicate that the reorienting process is a critical factor in producing the antisaccade cost. Furthermore, the antisaccade cost can be modulated by probabilistic contextual information such as location probabilities.

  • doi:10.1152/jn.00815.2009

Gang Luo; Tyler W. Garaas; Marc Pomplun; Eli Peli

Inconsistency between peri-saccadic mislocalization and compression: evidence for separate "what" and "where" visual systems Journal Article

In: Journal of Vision, vol. 10, no. 12, pp. 1–8, 2010.

The view of two separate "what" and "where" visual systems is supported by compelling neurophysiological evidence. However, very little direct psychophysical evidence has been presented to suggest that the two functions can be separated in neurologically intact persons. Using a peri-saccadic perception paradigm in which bars of different lengths were flashed around saccade onset, we directly measured the perceived object size (a "what" attribute) and location (a "where" attribute). We found that the perceived object location shifted toward the saccade target to show strongly compressed localization, whereas the perceived object size was not compressed accordingly. This dissociation indicates that the perceived size is not determined by spatial localization of the object boundary, providing direct psychophysical evidence to support that "what" and "where" attributes of objects are indeed processed separately.

  • doi:10.1167/10.12.32

B. Machner; C. Klein; Andreas Sprenger; P. Baumbach; P. P. Pramstaller; Christoph Helmchen; Wolfgang Heide

Eye movement disorders are different in Parkin-linked and idiopathic early-onset PD Journal Article

In: Neurology, vol. 75, pp. 125–128, 2010.

OBJECTIVES: Parkin gene mutations are the most common cause of early-onset parkinsonism. Patients with Parkin mutations may be clinically indistinguishable from patients with idiopathic early-onset Parkinson disease (EOPD) without Parkin mutations. Eye movement disorders have been shown to differentiate parkinsonian syndromes, but have never been systematically studied in Parkin mutation carriers. METHODS: Eye movements were recorded in symptomatic (n = 9) and asymptomatic Parkin mutation carriers (n = 13), patients with idiopathic EOPD (n = 14), and age-matched control subjects (n = 27) during established oculomotor tasks. RESULTS: Both patients with EOPD and symptomatic Parkin mutation carriers showed hypometric prosaccades toward visual stimuli, as well as deficits in suppressing reflexive saccades toward unintended targets (antisaccade task). When directing gaze toward memorized target positions, patients with EOPD exhibited hypometric saccades, whereas symptomatic Parkin mutation carriers showed normal saccades. In contrast to patients with EOPD, the symptomatic Parkin mutation carriers showed impaired tracking of a moving target (reduced smooth pursuit gain). The asymptomatic Parkin mutation carriers did not differ from healthy control subjects in any of the tasks. CONCLUSIONS: Although clinically similarly affected, symptomatic Parkin mutation carriers and patients with idiopathic EOPD differed in several oculomotor tasks. This finding may point to distinct anatomic structures underlying either condition: dysfunctions of cortical areas involved in smooth pursuit (V5, frontal eye field) in Parkin-linked parkinsonism vs greater impairment of basal ganglia circuits in idiopathic Parkinson disease.

  • doi:10.1212/WNL.0b013e3181e7ca6d

Vincenzo Maffei; Emiliano Macaluso; Iole Indovina; Guy A. Orban; Francesco Lacquaniti

Processing of targets in smooth or apparent motion along the vertical in the human brain: An fMRI study Journal Article

In: Journal of Neurophysiology, vol. 103, no. 1, pp. 360–370, 2010.

Neural substrates for processing constant speed visual motion have been extensively studied. Less is known about the brain activity patterns when the target speed changes continuously, for instance under the influence of gravity. Using functional MRI (fMRI), here we compared brain responses to accelerating/decelerating targets with the responses to constant speed targets. The target could move along the vertical under gravity (1g), under reversed gravity (-1g), or at constant speed (0g). In the first experiment, subjects observed targets moving in smooth motion and responded to a GO signal delivered at a random time after target arrival. As expected, we found that the timing of the motor responses did not depend significantly on the specific motion law. Therefore brain activity in the contrast between different motion laws was not related to motor timing responses. Average BOLD signals were significantly greater for 1g targets than either 0g or -1g targets in a distributed network including bilateral insulae, left lingual gyrus, and brain stem. Moreover, in these regions, the mean activity decreased monotonically from 1g to 0g and to -1g. In the second experiment, subjects intercepted 1g, 0g, and -1g targets either in smooth motion (RM) or in long-range apparent motion (LAM). We found that the sites in the right insula and left lingual gyrus, which were selectively engaged by 1g targets in the first experiment, were also significantly more active during 1g trials than during -1g trials both in RM and LAM. The activity in 0g trials was again intermediate between that in 1g trials and that in -1g trials. Therefore in these regions the global activity modulation with the law of vertical motion appears to hold for both RM and LAM. Instead, a region in the inferior parietal lobule showed a preference for visual gravitational motion only in LAM but not RM.

  • doi:10.1152/jn.00892.2009

Femke Maij; Eli Brenner; Hyung-Chul O. Li; Frans W. Cornelissen; Jeroen B. J. Smeets

The use of the saccade target as a visual reference when localizing flashes during saccades Journal Article

In: Journal of Vision, vol. 10, no. 4, pp. 1–9, 2010.

Flashes presented around the time of a saccade are often mislocalized. Such mislocalization is influenced by various factors. Here, we evaluate the role of the saccade target as a landmark when localizing flashes. The experiment was performed in a normally illuminated room to provide ample other visual references. Subjects were instructed to follow a randomly jumping target with their eyes. We flashed a black dot on the screen around the time of saccade onset. The subjects were asked to localize the black dot by touching the appropriate location on the screen. In a first experiment, the saccade target was displaced during the saccade. In a second experiment, it disappeared at different moments. Both manipulations affected the mislocalization. We conclude that our subjects' judgments are partly based on the flashed dot's position relative to the saccade target.

  • doi:10.1167/10.4.7

George L. Malcolm; John M. Henderson

Combining top-down processes to guide eye movements during real-world scene search Journal Article

In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010.

Eye movements can be guided by various types of information in real-world scenes. Here we investigated how the visual system combines multiple types of top-down information to facilitate search. We manipulated independently the specificity of the search target template and the usefulness of contextual constraint in an object search task. An eye tracker was used to segment search time into three behaviorally defined epochs so that influences on specific search processes could be identified. The results support previous studies indicating that the availability of either a specific target template or scene context facilitates search. The results also show that target template and contextual constraints combine additively in facilitating search. The results extend recent eye guidance models by suggesting the manner in which our visual system utilizes multiple types of top-down information.

  • doi:10.1167/10.2.4

Sabira K. Mannan; Christopher Kennard; Daniela Potter; Yi Pan; David Soto

Early oculomotor capture by new onsets driven by the contents of working memory Journal Article

In: Vision Research, vol. 50, no. 16, pp. 1590–1597, 2010.

Oculomotor capture can occur automatically in a bottom-up way through the sudden appearance of a new object or in a top-down fashion when a stimulus in the array matches the contents of working memory. However, it is not clear whether or not working memory processing can influence the early stages of oculomotor capture by abrupt onsets. Here we present clear evidence for an early modulation driven by stimulus matches to the contents of working memory in the colour dimension. Interestingly, verbal as well as visual information in working memory influenced the direction of the fastest saccades made in search, saccadic latencies and the curvature of the scan paths made to the search target. This pattern of results arose even though the contents of working memory were detrimental for search, demonstrating an early, automatic top-down mediation of oculomotor onset capture by the contents of working memory.

  • doi:10.1016/j.visres.2010.05.015

Sebastiaan Mathôt; Jan Theeuwes

Gradual remapping results in early retinotopic and late spatiotopic inhibition of return Journal Article

In: Psychological Science, vol. 21, no. 12, pp. 1793–1798, 2010.

Here we report that immediately following the execution of an eye movement, oculomotor inhibition of return resides in retinotopic (eye-centered) coordinates. At longer postsaccadic intervals, inhibition resides in spatiotopic (world-centered) coordinates. These results are explained in terms of perisaccadic remapping. In the interval surrounding an eye movement, information is remapped within retinotopic maps to compensate for the retinal displacement. Because remapping is not an instantaneous process, a fast, but gradual, transfer of inhibition of return from retinotopic to spatiotopic coordinates can be observed in the postsaccadic interval. The observation that visual stability is preserved in inhibition of return is consistent with its function as a "foraging facilitator," which requires locations to be inhibited across multiple eye movements. The current results support the idea that the visual system is retinotopically organized and that the appearance of a spatiotopic organization is due to remapping of visual information to compensate for eye movements.

  • doi:10.1177/0956797610388813

Sebastiaan Mathôt; Jan Theeuwes

Evidence for the predictive remapping of visual attention Journal Article

In: Experimental Brain Research, vol. 200, no. 1, pp. 117–122, 2010.

When attending an object in visual space, perception of the object remains stable despite frequent eye movements. It is assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Remapping is predictive when it starts before the actual eye movement. Until now, most evidence for predictive remapping has been obtained in single cell studies involving monkeys. Here, we report that predictive remapping affects visual attention prior to an eye movement. Immediately following a saccade, we show that attention has partly shifted with the saccade (Experiment 1). Importantly, we show that remapping is predictive and affects the locus of attention prior to saccade execution (Experiments 2 and 3): before the saccade was executed, there was attentional facilitation at the location which, after the saccade, would retinotopically match the attended location.

  • doi:10.1007/s00221-009-2055-3

Ellen Matthias; Peter Bublak; Hermann J. Muller; Werner X. Schneider; Joseph Krummenacher; Kathrin Finke

The influence of alertness on spatial and nonspatial components of visual attention Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, pp. 38–56, 2010.

Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus onset asynchronies in two different whole-report paradigms based on Bundesen's (1990) theory of visual attention, which permits spatial and nonspatial components of selective attention to be assessed independently. The results revealed the level of alertness to affect both the spatial distribution of attentional weighting and processing speed, but not visual short-term memory capacity, with the effect on processing speed preceding that on the spatial distribution of attentional weighting. This pattern indicates that the level of alertness influences both spatial and nonspatial component mechanisms of visual attention and that these two effects develop independently of each other; moreover, it suggests that intrinsic and phasic alertness effects involve the same processing route, on which spatial and nonspatial mechanisms are mediated by independent processing systems that are activated, due to increased alertness, in temporal succession.

  • doi:10.1037/a0017602

Anna Ma-Wyatt; Martin Stritzke; Julia Trommershäuser

Eye-hand coordination while pointing rapidly under risk Journal Article

In: Experimental Brain Research, vol. 203, no. 1, pp. 131–145, 2010.

Humans make rapid, goal-directed movements to interact with their environment. Saccadic eye movements usually accompany rapid hand movements, suggesting neural coupling, although it remains unclear what determines the strength of the coupling. Here, we present evidence that humans can alter eye-hand coordination in response to risk associated with endpoint variability. We used a paradigm in which human participants were forced to point rapidly under risk and were penalized or rewarded depending on the hand movement outcome. A separate reward schedule was employed for relative saccadic endpoint position. Participants received a monetary reward proportional to points won. We present a model that defines optimality of eye-hand coordination for this task depending on where the hand lands relative to the eye. A comparison of the results and model predictions showed that participants could optimize performance to maximize gain in some conditions, but not others. Participants produced near-optimal results when no feedback was given about relative saccade location and when negative feedback was provided for large distances between the saccade and hand. Participants were sub-optimal when given negative feedback for saccades very close to the hand endpoint. Our results suggest that eye-hand coordination is flexible when pointing rapidly under risk, but final eye position remains correlated with finger location.

  • doi:10.1007/s00221-010-2218-2

Vidhya Navalpakkam; Christof Koch; Antonio Rangel; Pietro Perona

Optimal reward harvesting in complex perceptual environments Journal Article

In: Proceedings of the National Academy of Sciences, vol. 107, no. 11, pp. 5232–5237, 2010.

The ability to choose rapidly among multiple targets embedded in a complex perceptual environment is key to survival. Targets may differ in their reward value as well as in their low-level perceptual properties (e.g., visual saliency). Previous studies investigated separately the impact of either value or saliency on choice; thus, it is not known how the brain combines these two variables during decision making. We addressed this question with three experiments in which human subjects attempted to maximize their monetary earnings by rapidly choosing items from a brief display. Each display contained several worthless items (distractors) as well as two targets, whose value and saliency were varied systematically. We compared the behavioral data with the predictions of three computational models assuming that (i) subjects seek the most valuable item in the display, (ii) subjects seek the most easily detectable item, and (iii) subjects behave as an ideal Bayesian observer who combines both factors to maximize the expected reward within each trial. Regardless of the type of motor response used to express the choices, we find that decisions are influenced by both value and feature-contrast in a way that is consistent with the ideal Bayesian observer, even when the targets' feature-contrast is varied unpredictably between trials. This suggests that individuals are able to harvest rewards optimally and dynamically under time pressure while seeking multiple targets embedded in perceptual clutter.

  • doi:10.1073/pnas.0911972107
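
The three decision rules compared in this abstract lend themselves to a compact illustration. The following Python sketch is illustrative only: the payoffs, detection probabilities, and function names are invented, and it is not the paper's actual model. The point is that an ideal observer weights each target's value by the probability of actually localizing it.

# Illustrative sketch (not the paper's model): three rules for choosing
# between two targets that differ in value and in detectability.

def expected_reward(value, p_detect):
    # Expected payoff of aiming for a target: its value times the
    # probability of actually localizing it in the cluttered display.
    return value * p_detect

def choose(values, p_detects, rule):
    if rule == "max_value":       # (i) seek the most valuable item
        return max(range(2), key=lambda i: values[i])
    if rule == "max_saliency":    # (ii) seek the most detectable item
        return max(range(2), key=lambda i: p_detects[i])
    if rule == "ideal":           # (iii) maximize expected reward
        return max(range(2), key=lambda i: expected_reward(values[i], p_detects[i]))

values = (10, 2)         # hypothetical payoffs (cents)
p_detects = (0.15, 0.9)  # hypothetical detectabilities (saliency-driven)

for rule in ("max_value", "max_saliency", "ideal"):
    i = choose(values, p_detects, rule)
    print(rule, "-> target", i, "expected reward:", expected_reward(values[i], p_detects[i]))

With these invented numbers, the value-only rule forfeits expected reward by chasing the valuable but rarely found target; the paper's claim is that human choices track the combined quantity.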

Mark B. Neider; Xin Chen; Christopher A. Dickinson; Susan E. Brennan; Gregory J. Zelinsky

Coordinating spatial referencing using shared gaze Journal Article

In: Psychonomic Bulletin & Review, vol. 17, no. 5, pp. 718–724, 2010.

To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information.

  • doi:10.3758/PBR.17.5.718

Dylan Nieman; Bhavin R. Sheth; Shinsuke Shimojo

Perceiving a discontinuity in motion Journal Article

In: Journal of Vision, vol. 10, no. 6, pp. 1–23, 2010.

Studies have shown that the position of a target stimulus is misperceived owing to ongoing motion. Although static forces (fixation, landmarks) affect perceived position, motion remains the overwhelming force driving estimates of position. Motion endpoint estimates biased in the direction of motion are perceptual signatures of motion's dominant role in localization. We sought conditions in which static forces exert the predominant influence over perceived position: stimulus displays for which target position is perceived backward relative to motion. We used a target that moved diagonally with constant speed, abruptly turned 90 degrees and continued at constant speed; observers localized the discontinuity. This yielded a previously undescribed effect, "turn-point shift," the tendency of observers to estimate the position of orthogonal direction change backward relative to subsequent motion direction. Display and mislocalization direction differ from past studies. Static forces (foveal attraction, repulsion by subsequently occupied spatial positions) were found to be responsible. Delayed turn-point estimates, reconstructed from probing the entire trajectory, shifted the horizontal coordinate forward in the direction of motion. This implies more than one percept of turn-point position. As various estimates of turn-point position arise at different times, under different task demands, the perceptual system does not necessarily resolve conflicts between them.

  • doi:10.1167/10.6.9

Tanja C. W. Nijboer; Anneloes Vree; Chris Dijkerman; Stefan Van der Stigchel

Prism adaptation influences perception but not attention: Evidence from antisaccades Journal Article

In: NeuroReport, vol. 21, no. 5, pp. 386–389, 2010.

Prism adaptation has been shown to successfully alleviate symptoms of hemispatial neglect, yet the underlying mechanism is still poorly understood. In this study, the antisaccade task was used to measure the effects of prism adaptation on spatial attention in healthy participants. Results indicated that prism adaptation did not influence the saccade latencies or antisaccade errors, both strong measures of attentional deployment, despite a successful prism adaptation procedure. In contrast to visual attention, prism adaptation evoked a perceptual bias in visual space as measured by the landmark task. We conclude that prism adaptation has a differential influence on visual attention and visual perception in healthy participants as measured by the tasks used.

  • doi:10.1097/WNR.0b013e328337f95f

Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo

Semantic recognition precedes affective evaluation of visual scenes Journal Article

In: Journal of Experimental Psychology: General, vol. 139, no. 2, pp. 222–246, 2010.

We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation.

  • doi:10.1037/a0018858

Stéphanie M. Morand; Marie-Hélène Grosbras; Roberto Caldara; Monika Harvey

Looking away from faces: Influence of high-level visual processes on saccade programming Journal Article

In: Journal of Vision, vol. 10, no. 3, pp. 1–10, 2010.

Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  • doi:10.1167/10.3.16

Adam P. Morris; Charles C. Liu; Simon J. Cropper; Jason D. Forte; Bart Krekelberg; Jason B. Mattingley

Summation of visual motion across eye movements reflects a nonspatial decision mechanism Journal Article

In: Journal of Neuroscience, vol. 30, no. 29, pp. 9821–9830, 2010.

Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., “spatiotopic” receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.

  • doi:10.1523/JNEUROSCI.1705-10.2010
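
The "probabilistic improvement" invoked in this abstract is ordinary probability summation. A minimal sketch, assuming two independent opportunities to register the signal (the numbers are invented and this is not the authors' decision model):

# Two independent chances to register a motion signal raise detection
# performance even without any sensory integration.

def p_two_signals(p_single):
    # Probability of catching at least one of two independent signals.
    return 1.0 - (1.0 - p_single) ** 2

for p in (0.5, 0.6, 0.7):
    print(f"one signal: {p:.2f} -> two signals: {p_two_signals(p):.2f}")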

Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Robert Soussignan; Stéphanie Dubal; Roland Jouvent; Antoine Pelissolo

Gaze avoidance in social phobia: Objective measure and correlates Journal Article

In: Behaviour Research and Therapy, vol. 48, pp. 147–151, 2010.

Gaze aversion could be a central component of the pathophysiology of social phobia, and the emotions of the people interacting with a person with social phobia seem to modulate this aversion. Using an eye tracker, we tested gaze aversion in subjects with social phobia and in control subjects viewing emotional faces of men and women. Twenty-six subjects with DSM-IV social phobia were recruited; twenty-four age- and sex-matched healthy subjects constituted the control group. We measured the number of fixations and the dwell time in the eyes area of the pictures. The main findings are a significantly lower number of fixations and shorter dwell time in patients with social phobia, both overall and for each of the six basic emotions independent of gender, and a significant correlation between the severity of the phobia and the degree of gaze avoidance. However, no difference in gaze avoidance according to subject/picture gender matching was observed. These findings confirm and extend previous results and suggest that eye avoidance is a robust marker of social phobia, which could be used as a behavioral phenotype for brain imaging studies of this disorder.

  • doi:10.1016/j.brat.2009.09.012
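
The two dependent measures described here, number of fixations and dwell time in the eyes area, reduce to simple area-of-interest (AOI) bookkeeping. A minimal sketch with invented coordinates and durations (not the authors' analysis pipeline):

# Count fixations and sum dwell time inside a rectangular "eyes" AOI.
# Each fixation is (x, y, duration_ms); the AOI is (x_min, y_min, x_max, y_max).

fixations = [(312, 198, 240), (505, 210, 180), (330, 460, 300)]
eyes_aoi = (250, 150, 550, 260)  # hypothetical pixel coordinates

def in_aoi(x, y, aoi):
    x_min, y_min, x_max, y_max = aoi
    return x_min <= x <= x_max and y_min <= y <= y_max

hits = [f for f in fixations if in_aoi(f[0], f[1], eyes_aoi)]
print("fixations in eyes AOI:", len(hits))
print("dwell time (ms):", sum(d for _, _, d in hits))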

Sven Mucke; Velitchko Manahilov; Niall C. Strang; Dirk Seidel; Lyle S. Gray; Uma Shahani

Investigating the mechanisms that may underlie the reduction in contrast sensitivity during dynamic accommodation Journal Article

In: Journal of Vision, vol. 10, no. 5, pp. 1–14, 2010.

Head and eye movements, together with ocular accommodation, enable us to explore our visual environment. The stability of this environment is maintained during saccadic and vergence eye movements due to reduced contrast sensitivity to low spatial frequency information. Our recent work has revealed a new type of selective reduction of contrast sensitivity to high spatial frequency patterns during the fast phase of dynamic accommodation responses compared with steady-state accommodation. Here we report data showing a strong correlation between the reduction in contrast sensitivity during dynamic accommodation and the velocity of accommodation responses elicited by ramp changes in accommodative demand. The results were accounted for by a contrast gain control model of a cortical mechanism for contrast detection during dynamic ocular accommodation. Sensitivity, however, was not altered during attempted accommodation responses in the absence of crystalline-lens changes due to cycloplegia. These findings suggest that contrast sensitivity reduction during dynamic accommodation may be a consequence of cortical inhibition driven by proprioceptive-like signals originating within the ciliary muscle, rather than by corollary discharge signals elicited simultaneously with the motor command to the ciliary muscle.

  • doi:10.1167/10.5.5
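
The contrast gain control account can be illustrated with a standard divisive response function (a generic Naka-Rushton form with an added inhibitory term; this is not the authors' fitted model and every parameter value is invented):

# Divisive contrast gain control: an inhibitory signal (e.g., driven by
# ciliary-muscle proprioception during dynamic accommodation) raises the
# semisaturation constant, lowering sensitivity at a given contrast.

def response(contrast, r_max=1.0, n=2.0, c50=0.2, inhibition=0.0):
    c_half = c50 * (1.0 + inhibition)
    return r_max * contrast**n / (contrast**n + c_half**n)

for c in (0.05, 0.1, 0.2, 0.4):
    print(f"contrast {c:.2f}: steady {response(c):.3f}, "
          f"during accommodation {response(c, inhibition=1.0):.3f}")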

Manon Mulckhuyse; Jan Theeuwes

Unconscious cueing effects in saccadic eye movements - Facilitation and inhibition in temporal and nasal hemifield Journal Article

In: Vision Research, vol. 50, no. 6, pp. 606–613, 2010.

The current study investigated whether subliminal spatial cues can affect the oculomotor system. In addition, we performed the experiment under monocular viewing conditions. By limiting participants to monocular viewing conditions, we can examine behavioral temporal-nasal hemifield asymmetries. These behavioral asymmetries may arise from an anatomical asymmetry in the retinotectal pathway. The results show that even though our spatial cues were not consciously perceived they did affect the oculomotor system: relative to the neutral condition, saccade latencies to the validly cued location were shorter and saccade latencies to the invalidly cued location were longer. Although we did not observe an overall inhibition of return effect, there was a reliable effect of hemifield on IOR for those observers who showed an overall IOR effect. More specifically, consistent with the notion that processing via the retinotectal pathway is stronger in the temporal hemifield than in the nasal hemifield we found an IOR effect for cues presented in the temporal hemifield but not for cues presented in the nasal hemifield. We conclude that unconsciously processed spatial cues can affect the oculomotor system. In addition, the observed behavioral temporal-nasal hemifield asymmetry is consistent with retinotectal mediation.

  • doi:10.1016/j.visres.2010.01.005

Sébastien Miellet; Xinyue Zhou; Lingnan He; Helen Rodger; Roberto Caldara

Investigating cultural diversity for extrafoveal information use in visual scenes Journal Article

In: Journal of Vision, vol. 10, no. 6, pp. 1–18, 2010.

Culture shapes how people gather information from the visual world. We recently showed that Western observers focus on the eyes region during face recognition, whereas Eastern observers fixate predominantly the center of faces, suggesting a more effective use of extrafoveal information for Easterners compared to Westerners. However, the cultural variation in eye movements during scene perception is a highly debated topic. Additionally, the extent to which those perceptual differences across observers from different cultures rely on modulations of extrafoveal information use remains to be clarified. We used a gaze-contingent technique designed to dynamically mask central vision, the Blindspot, during a visual search task of animals in natural scenes. We parametrically controlled the Blindspots and target animal sizes (0°, 2°, 5°, or 8°). We processed eye-tracking data using an unbiased data-driven approach based on fixation maps and we introduced novel spatiotemporal analyses in order to finely characterize the dynamics of scene exploration. Both groups of observers, Eastern and Western, showed comparable animal identification performance, which decreased as a function of the Blindspot sizes. Importantly, dynamic analysis of the exploration pathways revealed identical oculomotor strategies for both groups of observers during animal search in scenes. Culture does not impact extrafoveal information use during the ecologically valid visual search of animals in natural scenes.

  • doi:10.1167/10.6.21
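
The "Blindspot" technique amounts to redrawing, on every video frame, an opaque disc centred on the current gaze sample. The control-flow sketch below uses stand-in stubs throughout; none of the function names are real tracker or graphics API calls:

# Gaze-contingent central mask: poll gaze, draw scene, draw disc, repeat.

MASK_RADIUS_DEG = 2.0  # one of the parametrically varied sizes (0/2/5/8 deg)
N_FRAMES = 3           # stand-in for "until the observer responds"

def get_gaze_sample(frame):
    # Stub: a real version would poll the eye tracker's latest sample.
    return (512 + 10 * frame, 384)

def render_frame(gaze_xy):
    # Stub: a real version would draw the scene, then an opaque disc at
    # gaze, then flip the display.
    print(f"draw scene; draw {MASK_RADIUS_DEG} deg disc at {gaze_xy}")

for frame in range(N_FRAMES):
    render_frame(get_gaze_sample(frame))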

Milica Milosavljevic; Jonathan Malmaud; Alexander Huth; Christof Koch; Antonio Rangel

The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure Journal Article

In: Judgment and Decision Making, vol. 5, no. 6, pp. 437–449, 2010.

An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.

  • doi:10.2139/ssrn.1901533
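
A bare-bones drift diffusion simulation makes the two highlighted parameters tangible. This is a deliberately simplified sketch, not the seven-parameter model fitted in the paper, and all values are invented:

# Evidence drifts toward the higher-valued option plus Gaussian noise
# until it crosses a barrier; a lower barrier mimics time pressure.
import random

def ddm_trial(v_left, v_right, barrier=1.0, drift_scale=0.1,
              noise_sd=0.5, dt=0.01):
    evidence, steps = 0.0, 0
    drift = drift_scale * (v_left - v_right)
    while abs(evidence) < barrier:
        evidence += drift * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
        steps += 1
    return ("left" if evidence > 0 else "right"), steps

random.seed(1)
for barrier in (1.0, 0.5):  # low vs. high time pressure
    trials = [ddm_trial(3.0, 1.0, barrier=barrier) for _ in range(1000)]
    accuracy = sum(choice == "left" for choice, _ in trials) / len(trials)
    mean_rt = sum(steps for _, steps in trials) / len(trials)
    print(f"barrier {barrier}: accuracy {accuracy:.2f}, mean RT {mean_rt:.0f} steps")

Lowering the barrier shortens the simulated reaction times at the cost of accuracy, the qualitative speed-accuracy trade-off the model uses to capture time pressure.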

Anish R. Mitra; Mathias Abegg; Jayalakshmi Viswanathan; Jason J. S. Barton

Line bisection in simulated homonymous hemianopia Journal Article

In: Neuropsychologia, vol. 48, no. 6, pp. 1742–1749, 2010.

Hemianopic patients make a systematic error in line bisection, showing a contra-lesional bias towards their blind side, which is the opposite of that in hemineglect patients. This error has been attributed variously to the visual field defect, to long-term strategic adaptation, or to independent effects of damage to extrastriate cortex. To determine if hemianopic bisection error can occur without the latter two factors, we studied line bisection in healthy subjects with simulated homonymous hemianopia using a gaze-contingent display, with different line lengths, and with or without markers at both ends of the lines. Simulated homonymous hemianopia did induce a contra-lesional bisection error and this was associated with increased fixations towards the blind field. This error was found with end-marked lines and was greater with very long lines. In a second experiment we showed that eccentric fixation alone produces a similar bisection error and eliminates the effect of line-end markers. We conclude that a homonymous hemianopic field defect alone is sufficient to induce both a contra-lesional line bisection error and previously described alterations in fixation distribution, and does not require long-term adaptation or extrastriate damage.

  • doi:10.1016/j.neuropsychologia.2010.02.023

Joy J. Geng; Nicholas E. DiQuattro

Attentional capture by a perceptually salient non-target facilitates target processing through inhibition and rapid rejection Journal Article

In: Journal of Vision, vol. 10, no. 6, pp. 1–12, 2010.

Perceptually salient distractors typically interfere with target processing in visual search situations. Here we demonstrate that a perceptually salient distractor that captures attention can nevertheless facilitate task performance if the observer knows that it cannot be the target. Eye-position data indicate that facilitation is achieved by two strategies: inhibition when the first saccade was directed to the target, and rapid rejection when the first saccade was captured by the salient distractor. Both mechanisms relied on the distractor being perceptually salient and not just perceptually different. The results demonstrate how bottom-up attentional capture can play a critical role in constraining top-down attentional selection at multiple stages of processing throughout a single trial.

  • doi:10.1167/10.6.5

Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold

Evidence for top-down control of eye movements during visual decision making Journal Article

In: Journal of Vision, vol. 10, no. 5, pp. 1–10, 2010.

Participants' eye movements were monitored while they viewed displays containing 6 exemplars from one of several categories of everyday items (belts, sunglasses, shirts, shoes), with a column of 3 items presented on the left and another column of 3 items presented on the right side of the display. Participants were either required to choose which of the two sets of 3 items was the most expensive (2-AFC) or which of the 6 items was the most expensive (6-AFC). Importantly, the stimulus display, and the relevant stimulus dimension, were held constant across conditions. Consistent with the hypothesis of top-down control of eye movements during visual decision making, we documented greater selectivity in the processing of stimulus information in the 6-AFC than the 2-AFC decision. In addition, strong spatial biases in looking behavior were demonstrated, but these biases were largely insensitive to the instructional manipulation, and did not substantially influence participants' choices.

  • doi:10.1167/10.5.15

Robert D. Gordon; Sarah D. Vollmer

Episodic representation of diagnostic and nondiagnostic object colour Journal Article

In: Visual Cognition, vol. 18, no. 5, pp. 728–750, 2010.

In three experiments, we investigated transsaccadic object file representations. In each experiment, participants moved their eyes from a central fixation cross to a saccade target located between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials in which the target identity matched one of the preview objects, its color either matched or did not match the previewed object color. The results indicated that color changes disrupt perceptual continuity, but only for the class of objects for which color is diagnostic of object identity. When the color is not integral to identifying an object (for example, when the object is a letter or an object without a characteristic color), object continuity is preserved regardless of changes to the object's color. These results suggest that object features that are important for defining the object are incorporated into its episodic representation. Furthermore, the results are consistent with previous work showing that the quality of a feature's representation determines its importance in preserving continuity.

  • doi:10.1080/13506280903004190

Harold H. Greene; Alexander Pollatsek; Kathleen M. Masserang; Yen Ju Lee; Keith Rayner

Directional processing within the perceptual span during visual target localization Journal Article

In: Vision Research, vol. 50, no. 13, pp. 1274–1282, 2010.

In order to understand how processing occurs within the effective field of vision (i.e. perceptual span) during visual target localization, a gaze-contingent moving mask procedure was used to disrupt parafoveal information pickup along the vertical and the horizontal visual fields. When the mask was present within the horizontal visual field, there was a relative increase in saccade probability along the nearby vertical field, but not along the opposite horizontal field. When the mask was present either above or below fixation, saccades downwards were reduced in magnitude. This pattern of data suggests that parafoveal information selection (indexed by probability of saccade direction) and the extent of spatial parafoveal processing in a given direction (indexed by saccade amplitude) may be controlled by somewhat different mechanisms.

  • doi:10.1016/j.visres.2010.04.012

Martin Groen; Jan Noyes

Solving problems: How can guidance concerning task-relevancy be provided? Journal Article

In: Computers in Human Behavior, vol. 26, no. 6, pp. 1318–1326, 2010.

The analysis of eye movements of people working on problem solving tasks has enabled a more thorough understanding than would have been possible with a traditional analysis of cognitive behavior. Recent studies report that influencing 'where we look' can affect task performance. However, some of the studies reporting these results have two shortcomings: first, it is unclear whether the reported effects are the result of 'attention guidance' or an effect of highlighting display elements alone; second, the highlighted display elements were selected using subjective methods, which could have introduced bias. In the study reported here, two experiments attempt to address these shortcomings. Experiment 1 investigates the relative contribution of each display element to successful task realization with an objective method, namely signal detection analysis. Experiment 2 examines whether any performance effects of highlighting are due to foregrounding intrinsically task-relevant aspects or are a result of the act of highlighting in itself. Results show that the chosen objective method is effective and that highlighting the display element thus identified improves task performance significantly. These findings are not an effect of the highlighting per se and thus indicate that the highlighted element conveys task-relevant information. They improve on previous results, as the objective selection and analysis methods reduce potential bias and provide a more reliable input to the design and provision of computer-based problem-solving support.

  • doi:10.1016/j.chb.2010.04.004
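
One way to cast the "objective analysis method" mentioned in this abstract, assuming the standard equal-variance signal detection model: treat a display element attended on solved trials as a hit and attended on unsolved trials as a false alarm, then rank elements by d'. The element names, rates, and this hit/false-alarm mapping are all invented for illustration:

# Rank display elements by d' = z(hit rate) - z(false-alarm rate).
from statistics import NormalDist

z = NormalDist().inv_cdf

def d_prime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

elements = {"graph": (0.82, 0.30), "legend": (0.55, 0.45), "toolbar": (0.40, 0.38)}
for name, (h, fa) in sorted(elements.items(), key=lambda kv: -d_prime(*kv[1])):
    print(f"{name}: d' = {d_prime(h, fa):+.2f}")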

Nathalie Guyader; Jennifer Malsert; Christian Marendaz

Having to identify a target reduces latencies in prosaccades but not in antisaccades Journal Article

In: Psychological Research, vol. 74, no. 1, pp. 12–20, 2010.

In a seminal study, Trottier and Pratt (2005) showed that saccadic latencies were dramatically reduced when subjects were instructed not simply to look at a peripheral target (reflexive saccade) but to identify some of its properties. According to the authors, the shortening of saccadic reaction times may arise from a top-down disinhibition of the superior colliculus (SC), potentially mediated by the direct pathway connecting frontal/prefrontal cortex structures to the SC. Using a "cue paradigm" (a cue preceded the appearance of the target), the present study tests whether the task instruction (Identify vs. Glance) also reduces the latencies of antisaccades (AS), which involve prefrontal structures. We show that instruction reduces latencies for prosaccades but not for AS. An AS requires two processes: the inhibition of a reflexive saccade and the generation of a voluntary saccade. To separate these processes and to better understand the task effect, we also test the effect of the task instruction on voluntary saccades alone. The effect still exists but is much weaker than for reflexive saccades. The instruction effect depends closely on task demands on executive resources.

  • doi:10.1007/s00426-008-0218-7

Norbert Hagemann; Jörg Schorer; R. Canal-Bruland; Simone Lotz; Bernd Strauss

Visual perception in fencing: Do the eye movements of fencers represent their information pickup? Journal Article

In: Attention, Perception, and Psychophysics, vol. 72, no. 8, pp. 2204–2214, 2010.

@article{Hagemann2010,
title = {Visual perception in fencing: Do the eye movements of fencers represent their information pickup?},
author = {Norbert Hagemann and Jörg Schorer and R. Canal-Bruland and Simone Lotz and Bernd Strauss},
doi = {10.3758/APP.72.8.2204},
year = {2010},
date = {2010-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {72},
number = {8},
pages = {2204--2214},
abstract = {The present study examined whether results of athletes' eye movements while they observe fencing attacks reflect their actual information pickup by comparing these results with others gained with temporal and spatial occlusion and cuing techniques. Fifteen top-ranking expert fencers, 15 advanced fencers, and 32 sport students predicted the target region of 405 fencing attacks on a computer monitor. Results of eye movement recordings showed a stronger foveal fixation on the opponent's trunk and weapon in the two fencer groups. Top-ranking expert fencers fixated particularly on the upper trunk. This matched their performance decrements in the spatial occlusion condition. However, when the upper trunk was occluded, participants also shifted eye movements to neighboring body regions. Adding cues to the video material had no positive effects on prediction performance. We conclude that gaze behavior does not necessarily represent information pickup, but that studies applying the spatial occlusion paradigm should also register eye movements to avoid underestimating the information contributed by occluded regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/APP.72.8.2204

Jessica K. Hall; Samuel B. Hutton; Michael J. Morgan

Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Journal Article

In: Cognition and Emotion, vol. 24, no. 4, pp. 629–637, 2010.

@article{Hall2010,
title = {Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition?},
author = {Jessica K. Hall and Samuel B. Hutton and Michael J. Morgan},
doi = {10.1080/02699930902906882},
year = {2010},
date = {2010-01-01},
journal = {Cognition and Emotion},
volume = {24},
number = {4},
pages = {629--637},
abstract = {Previous meta-analyses support a female advantage in decoding non-verbal emotion (Hall, 1978, 1984), yet the mechanisms underlying this advantage are not understood. The present study examined whether the female advantage is related to greater female attention to the eyes. Eye-tracking techniques were used to measure attention to the eyes in 19 males and 20 females during a facial expression recognition task. Women were faster and more accurate in their expression recognition compared with men, and women looked more at the eyes than men. Positive relationships were observed between dwell time and number of fixations to the eyes and both accuracy of facial expression recognition and speed of facial expression recognition. These results support the hypothesis that the female advantage in facial expression recognition is related to greater female attention to the eyes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/02699930902906882

S. N. Hamid; B. Stankiewicz; Mary Hayhoe

Gaze patterns in navigation: Encoding information in large-scale environments Journal Article

In: Journal of Vision, vol. 10, no. 12, pp. 1–11, 2010.

@article{Hamid2010,
title = {Gaze patterns in navigation: Encoding information in large-scale environments},
author = {S. N. Hamid and B. Stankiewicz and Mary Hayhoe},
doi = {10.1167/10.12.28},
year = {2010},
date = {2010-01-01},
journal = {Journal of Vision},
volume = {10},
number = {12},
pages = {1--11},
abstract = {We investigated the role of gaze in encoding of object landmarks in navigation. Gaze behavior was measured while participants learnt to navigate in a virtual large-scale environment in order to understand the sampling strategies subjects use to select visual information during navigation. The results showed a consistent sampling pattern. Participants preferentially directed gaze at a subset of the available object landmarks with a preference for object landmarks at the end of hallways and T-junctions. In a subsequent test of knowledge of the environment, we removed landmarks depending on how frequently they had been viewed. Removal of infrequently viewed landmarks had little effect on performance, whereas removal of the most viewed landmarks impaired performance substantially. Thus, gaze location during learning reveals the information that is selectively encoded, and landmarks at choice points are selected in preference to less informative landmarks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/10.12.28

Kevin Fleming; Carole L. Bandy; Matthew O. Kimble

Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors Journal Article

In: Social Neuroscience, vol. 5, no. 2, pp. 201–220, 2010.

@article{Fleming2010,
title = {Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors},
author = {Kevin Fleming and Carole L. Bandy and Matthew O. Kimble},
doi = {10.1080/17470910903268931},
year = {2010},
date = {2010-01-01},
journal = {Social Neuroscience},
volume = {5},
number = {2},
pages = {201--220},
abstract = {The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/17470910903268931

Tom Foulsham; Joey T. Cheng; Jessica L. Tracy; Joseph Henrich; Alan Kingstone

Gaze allocation in a dynamic situation: Effects of social status and speaking Journal Article

In: Cognition, vol. 117, no. 3, pp. 319–331, 2010.

@article{Foulsham2010a,
title = {Gaze allocation in a dynamic situation: Effects of social status and speaking},
author = {Tom Foulsham and Joey T. Cheng and Jessica L. Tracy and Joseph Henrich and Alan Kingstone},
doi = {10.1016/j.cognition.2010.09.003},
year = {2010},
date = {2010-01-01},
journal = {Cognition},
volume = {117},
number = {3},
pages = {319--331},
abstract = {Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in particular at their eyes and faces. The social status of the people in the clips had been rated by their peers in the group task, and this status hierarchy strongly predicted where eye-tracker participants looked: high-status individuals were gazed at much more often, and for longer, than low-status individuals, even over short, 20-s videos. Fixation was temporally coupled to the person who was talking at any one time, but this did not account for the effect of social status on attention. These results are consistent with a gaze system that is attuned to the presence of other individuals, to their social status within a group, and to the information most useful for social interaction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cognition.2010.09.003

Tom Foulsham; Alan Kingstone

Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features Journal Article

In: Vision Research, vol. 50, no. 8, pp. 779–795, 2010.

@article{Foulsham2010,
title = {Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features},
author = {Tom Foulsham and Alan Kingstone},
doi = {10.1016/j.visres.2010.01.019},
year = {2010},
date = {2010-01-01},
journal = {Vision Research},
volume = {50},
number = {8},
pages = {779--795},
abstract = {The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2010.01.019
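
A minimal sketch (our own illustration, not the authors' analysis code) of how such direction biases can be tallied from fixation coordinates: compute the angle of each saccade vector and classify it as horizontal or vertical.

import numpy as np

def direction_bias(fix_x, fix_y):
    # Saccade vectors between successive fixations
    dx, dy = np.diff(fix_x), np.diff(fix_y)
    angles = np.degrees(np.arctan2(dy, dx)) % 360.0
    # Horizontal = within 45 deg of rightward (0 deg) or leftward (180 deg) motion
    horizontal = (angles <= 45) | (angles >= 315) | ((angles >= 135) & (angles <= 225))
    return horizontal.mean(), 1.0 - horizontal.mean()

x = np.array([512.0, 700.0, 690.0, 300.0, 310.0])  # invented fixation x (pixels)
y = np.array([384.0, 380.0, 200.0, 210.0, 600.0])  # invented fixation y (pixels)
print(direction_bias(x, y))  # (proportion horizontal, proportion vertical)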

Alessio Fracasso; Alfonso Caramazza; David Melcher

Continuous perception of motion and shape across saccadic eye movements Journal Article

In: Journal of Vision, vol. 10, no. 13, pp. 1–17, 2010.

@article{Fracasso2010,
title = {Continuous perception of motion and shape across saccadic eye movements},
author = {Alessio Fracasso and Alfonso Caramazza and David Melcher},
doi = {10.1167/10.13.14},
year = {2010},
date = {2010-01-01},
journal = {Journal of Vision},
volume = {10},
number = {13},
pages = {1--17},
abstract = {Although our naïve experience of visual perception is that it is smooth and coherent, the actual input from the retina involves brief and discrete fixations separated by saccadic eye movements. This raises the question of whether our impression of stable and continuous vision is merely an illusion. To test this, we examined whether motion perception can "bridge" a saccade in a two-frame apparent motion display in which the two frames were separated by a saccade. We found that transformational apparent motion, in which an object is seen to change shape and even move in three dimensions during the motion trajectory, continues across saccades. Moreover, participants preferred an interpretation of motion in spatial, rather than retinal, coordinates. The strength of the motion percept depended on the temporal delay between the two motion frames and was sufficient to give rise to a motion-from-shape aftereffect, even when the motion was defined by a second-order shape cue ("phantom transformational apparent motion"). These findings suggest that motion and shape information are integrated across saccades into a single, coherent percept of a moving object.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/10.13.14

Tom C. A. Freeman; Rebecca A. Champion; Paul A. Warren

A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement Journal Article

In: Current Biology, vol. 20, no. 8, pp. 757–762, 2010.

@article{Freeman2010,
title = {A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement},
author = {Tom C. A. Freeman and Rebecca A. Champion and Paul A. Warren},
doi = {10.1016/j.cub.2010.02.059},
year = {2010},
date = {2010-01-01},
journal = {Current Biology},
volume = {20},
number = {8},
pages = {757--762},
publisher = {Elsevier Ltd},
abstract = {During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.cub.2010.02.059
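
The shrinkage at the heart of such an account can be written down with the standard Gaussian observer model used in this literature (a generic sketch, not necessarily the authors' exact formulation). With a measured speed $v_m$ of uncertainty $\sigma_m$ and a zero-centered prior of width $\sigma_p$, the posterior mean is

\hat{v} = \frac{\sigma_p^{2}}{\sigma_p^{2} + \sigma_m^{2}}\, v_m

so as measurement noise $\sigma_m$ grows, as it does for pursued stimuli, the estimate $\hat{v}$ shrinks further toward zero, producing the underestimated eye speed that the illusions reveal.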

Amanda L. Gamble; Ronald M. Rapee

The time-course of attention to emotional faces in social phobia Journal Article

In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 41, no. 1, pp. 39–44, 2010.

@article{Gamble2010,
title = {The time-course of attention to emotional faces in social phobia},
author = {Amanda L. Gamble and Ronald M. Rapee},
doi = {10.1016/j.jbtep.2009.08.008},
year = {2010},
date = {2010-01-01},
journal = {Journal of Behavior Therapy and Experimental Psychiatry},
volume = {41},
number = {1},
pages = {39--44},
publisher = {Elsevier Ltd},
abstract = {This study investigated the time-course of attentional bias in socially phobic (SP) and non-phobic (NP) adults. Participants viewed angry and happy faces paired with neutral faces (i.e., face-face pairs) and angry, happy and neutral faces paired with household objects (i.e., face-object pairs) for 5000 ms. Eye movement (EM) was measured throughout to assess biases in early and sustained attention. Attentional bias occurred only for face-face pairs. SP adults were vigilant for angry faces relative to neutral faces in the first 500 ms of the 5000 ms exposure, relative to NP adults. SP adults were also vigilant for happy faces over 500 ms, although there were no group-based differences in attention to happy-neutral face pairs. There were no group differences in attention to faces throughout the remainder of the exposure. Results suggest that social phobia is characterised by early vigilance for social cues with no bias in subsequent processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.jbtep.2009.08.008

Thérèse Collins; Tobias Heed; Karine Doré-Mazars; Brigitte Röder

Presaccadic attention interferes with feature detection Journal Article

In: Experimental Brain Research, vol. 201, no. 1, pp. 111–117, 2010.

@article{Collins2010b,
title = {Presaccadic attention interferes with feature detection},
author = {Thérèse Collins and Tobias Heed and Karine Doré-Mazars and Brigitte Röder},
doi = {10.1007/s00221-009-2003-2},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {201},
number = {1},
pages = {111--117},
abstract = {Preparing a saccadic eye movement to a particular spatial location enhances the perception of visual targets at this location and decreases perception of nearby targets prior to movement onset. This effect has been termed the orientation of pre-saccadic attention. Here, we investigated whether pre-saccadic attention influenced the detection of a simple visual feature, a process that has been hypothesized to occur without the need for attention. Participants prepared a saccade to a cued location and detected the occurrence of a "pop-out" feature embedded in distracters at the same or different location. The results show that preparing a saccade to a given location decreased detection of features at non-aimed-for locations, suggesting that the selection of a location as the next saccade endpoint influences sensitivity to basic visual features across the visual field.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-009-2003-2

Kenny R. Coventry; Dermot Lynott; Angelo Cangelosi; Lynn Monrouxe; Dan Joyce; Daniel C. Richardson

Spatial language, visual attention, and perceptual simulation Journal Article

In: Brain and Language, vol. 112, no. 3, pp. 202–213, 2010.

@article{Coventry2010,
title = {Spatial language, visual attention, and perceptual simulation},
author = {Kenny R. Coventry and Dermot Lynott and Angelo Cangelosi and Lynn Monrouxe and Dan Joyce and Daniel C. Richardson},
doi = {10.1016/j.bandl.2009.06.001},
year = {2010},
date = {2010-01-01},
journal = {Brain and Language},
volume = {112},
number = {3},
pages = {202--213},
abstract = {Spatial language descriptions, such as The bottle is over the glass, direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on people's judgements and parsing of a visual scene. The results underscore previous claims regarding the importance of object function in spatial language, but also show how spatial language differentially directs attention during examination of a visual scene. We discuss implications for existing models of spatial language, with associated brain mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.bandl.2009.06.001

Christopher D. Cowper-Smith; Esther Y. Y. Lau; Carl A. Helmick; Gail A. Eskes; David A. Westwood

Neural coding of movement direction in the healthy human brain Journal Article

In: PLoS ONE, vol. 5, no. 10, pp. e13330, 2010.

@article{CowperSmith2010,
title = {Neural coding of movement direction in the healthy human brain},
author = {Christopher D. Cowper-Smith and Esther Y. Y. Lau and Carl A. Helmick and Gail A. Eskes and David A. Westwood},
doi = {10.1371/journal.pone.0013330},
year = {2010},
date = {2010-01-01},
journal = {PLoS ONE},
volume = {5},
number = {10},
pages = {e13330},
abstract = {Neurophysiological studies in monkeys show that activity of neurons in primary motor cortex (M1), pre-motor cortex (PMC), and cerebellum varies systematically with the direction of reaching movements. These neurons exhibit preferred direction tuning, where the level of neural activity is highest when movements are made in the preferred direction (PD), and gets progressively lower as movements are made at increasing degrees of offset from the PD. Using a functional magnetic resonance imaging adaptation (fMRI-A) paradigm, we show that PD coding does exist in regions of the human motor system that are homologous to those observed in non-human primates. Consistent with predictions of the PD model, we show adaptation (i.e., a lower level) of the blood oxygen level dependent (BOLD) time-course signal in M1, PMC, SMA, and cerebellum when consecutive wrist movements were made in the same direction (0 degrees offset) relative to movements offset by 90 degrees or 180 degrees. The BOLD signal in dorsolateral prefrontal cortex adapted equally in all movement offset conditions, mitigating against the possibility that the present results are the consequence of differential task complexity or attention to action in each movement offset condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0013330

Kirsten A. Dalrymple; Walter F. Bischof; David Cameron; Jason J. S. Barton; Alan Kingstone

Simulating simultanagnosia: Spatially constricted vision mimics local capture and the global processing deficit Journal Article

In: Experimental Brain Research, vol. 202, no. 2, pp. 445–455, 2010.

@article{Dalrymple2010,
title = {Simulating simultanagnosia: Spatially constricted vision mimics local capture and the global processing deficit},
author = {Kirsten A. Dalrymple and Walter F. Bischof and David Cameron and Jason J. S. Barton and Alan Kingstone},
doi = {10.1007/s00221-009-2152-3},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {202},
number = {2},
pages = {445--455},
abstract = {Patients with simultanagnosia, which is a component of Bálint syndrome, have a restricted spatial window of visual attention and cannot see more than one object at a time. As a result, these patients see the world in a piecemeal fashion, seeing the local components of objects or scenes at the expense of the global picture. To directly test the relationship between the restriction of the attentional window in simultanagnosia and patients' difficulty with global-level processing, we used a gaze-contingent display to create a literal restriction of vision for healthy participants while they performed a global/local identification task. Participants in this viewing condition were instructed to identify the global and local aspects of hierarchical letter stimuli of different sizes and densities. They performed well at the local identification task, and their patterns of inaccuracies for the global level task were highly similar to the pattern of inaccuracies typically seen with simultanagnosic patients. This suggests that a restricted spatial area of visual processing, combined with normal limits to visual processing, can lead to difficulties with global-level perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-009-2152-3
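
For readers new to the technique, a minimal sketch of the masking step behind a gaze-contingent display, under our own simplifying assumptions (grayscale frame, circular aperture; not the authors' code): on each frame, everything outside an aperture centred on the latest gaze sample is blanked.

import numpy as np

def gaze_contingent_window(frame, gaze_x, gaze_y, radius):
    # frame: 2-D grayscale image array; gaze coordinates in pixels
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_x) ** 2 + (ys - gaze_y) ** 2 <= radius ** 2
    out = np.zeros_like(frame)   # occluded background
    out[inside] = frame[inside]  # reveal only the region around gaze
    return out

stimulus = np.random.rand(768, 1024)  # stand-in for a hierarchical letter display
masked = gaze_contingent_window(stimulus, gaze_x=512, gaze_y=384, radius=60)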

Rong-Fuh Day

Examining the validity of the Needleman-Wunsch algorithm in identifying decision strategy with eye-movement data Journal Article

In: Decision Support Systems, vol. 49, no. 4, pp. 396–403, 2010.

@article{Day2010,
title = {Examining the validity of the Needleman-Wunsch algorithm in identifying decision strategy with eye-movement data},
author = {Rong-Fuh Day},
doi = {10.1016/j.dss.2010.05.001},
year = {2010},
date = {2010-01-01},
journal = {Decision Support Systems},
volume = {49},
number = {4},
pages = {396--403},
publisher = {Elsevier B.V.},
abstract = {A new generation of eye trackers offers a promising alternative approach to tracing decision processes, beyond the popular computerized-information-board approach. In order to exploit eye-movement data, this study examined the validity of the Needleman-Wunsch algorithm (NWA) for characterizing the decision process, and proposed an NWA-based classification method to predict which typical strategy an empirical search behavior belongs to. An eye-tracking experiment was conducted. Our results showed that the resemblance score computed by the NWA conformed to the assumption that a pair of information search behaviors based on the same strategy should have the closest resemblance. Moreover, the overall prediction accuracy (hit ratio) of our NWA-based classification method in identifying underlying strategies reached 88%, significantly higher than chance. On the whole, the combination of eye-fixation data and our NWA-based classification method is well suited to the task. © 2010 Elsevier B.V. All rights reserved.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.dss.2010.05.001
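
To make the method concrete, a minimal sketch of Needleman-Wunsch alignment applied to fixation sequences, with each inspected area of interest coded as one letter; the scoring parameters and example sequences are illustrative, not the paper's.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment score between two AOI-coded fixation strings
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # (mis)match
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    return score[-1][-1]

# Two hypothetical scanpaths; a higher score means closer resemblance
print(needleman_wunsch("ABCDA", "ABDDA"))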

Denise D. J. Grave; Nicola Bruno

The effect of the Müller-Lyer illusion on saccades is modulated by spatial predictability and saccadic latency Journal Article

In: Experimental Brain Research, vol. 203, no. 4, pp. 671–679, 2010.

@article{Grave2010,
title = {The effect of the Müller-Lyer illusion on saccades is modulated by spatial predictability and saccadic latency},
author = {Denise D. J. Grave and Nicola Bruno},
doi = {10.1007/s00221-010-2275-6},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {203},
number = {4},
pages = {671--679},
abstract = {Studies investigating the effect of visual illusions on saccadic eye movements have provided a wide variety of results. In this study, we test three factors that might explain this variability: the spatial predictability of the stimulus, the duration of the stimulus and the latency of the saccades. Participants made a saccade from one end of a Müller-Lyer figure to the other end. By changing the spatial predictability of the stimulus, we find that the illusion has a clear effect on saccades (16%) when the stimulus is at a highly predictable location. Even stronger effects of the illusion are found when the stimulus location becomes more unpredictable (19-23%). Conversely, manipulating the duration of the stimulus fails to reveal a clear difference in illusion effect. Finally, by computing the illusion effect for different saccadic latencies, we find a maximum illusion effect (about 30%) for very short latencies, which decreases by 7% with every 100 ms latency increase. We conclude that the spatial predictability of the stimulus and saccadic latency influence the effect of the Müller-Lyer illusion on saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-010-2275-6
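
Read as a linear trend, the reported numbers amount to roughly (our paraphrase of the result, not a model fitted in the paper)

\text{illusion effect}(L) \approx 30\% - 7\% \times \frac{L - L_{0}}{100\ \text{ms}}

where $L$ is saccadic latency and $L_{0}$ the shortest latencies observed.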

Kurt Debono; Alexander C. Schütz; Miriam Spering; Karl R. Gegenfurtner

Receptive fields for smooth pursuit eye movements and motion perception Journal Article

In: Vision Research, vol. 50, no. 24, pp. 2729–2739, 2010.

@article{Debono2010,
title = {Receptive fields for smooth pursuit eye movements and motion perception},
author = {Kurt Debono and Alexander C. Schütz and Miriam Spering and Karl R. Gegenfurtner},
doi = {10.1016/j.visres.2010.09.034},
year = {2010},
date = {2010-01-01},
journal = {Vision Research},
volume = {50},
number = {24},
pages = {2729--2739},
abstract = {Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2010.09.034
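
To relate the two tuning figures quoted above, the standard Gaussian conversion between full width at half height and standard deviation is

\text{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma

so the 26 angular degrees of direction bandwidth correspond to a directional $\sigma$ of roughly 11 degrees, while the spatial extent is already quoted as a standard deviation (8 degrees of visual angle).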

Adriana M. Degani; Alessander Danna-Dos-Santos; Thomas Robert; Mark L. Latash

Kinematic synergies during saccades involving whole-body rotation: A study based on the uncontrolled manifold hypothesis Journal Article

In: Human Movement Science, vol. 29, no. 2, pp. 243–258, 2010.

@article{Degani2010,
title = {Kinematic synergies during saccades involving whole-body rotation: A study based on the uncontrolled manifold hypothesis},
author = {Adriana M. Degani and Alessander Danna-Dos-Santos and Thomas Robert and Mark L. Latash},
doi = {10.1016/j.humov.2010.02.003},
year = {2010},
date = {2010-01-01},
journal = {Human Movement Science},
volume = {29},
number = {2},
pages = {243--258},
abstract = {We used the framework of the uncontrolled manifold hypothesis to study the coordination of body segments and eye movements in standing persons during the task of shifting the gaze to a target positioned behind the body. The task was performed at a comfortable speed and at a fast speed. Multi-segment and head-eye synergies were quantified as co-varied changes in elemental variables (body segment rotations and eye rotation) that stabilized (i.e., reduced the across-trials variability of) head rotation in space and gaze trajectory. Head position in space was stabilized by co-varied rotations of body segments prior to the action, during its later stages, and after its completion. The synergy index showed a drop that started prior to action initiation (anticipatory synergy adjustment) and continued during the phase of quick head rotation. Gaze direction was stabilized only at movement completion and immediately after the saccade at movement initiation under the "fast" instruction. The study documents for the first time anticipatory synergy adjustments during whole-body actions. It shows multi-joint synergies stabilizing head trajectory in space. In contrast, there was no synergy between head and eye rotations during saccades that would achieve a relatively invariant gaze trajectory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.humov.2010.02.003
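
For orientation, the synergy index commonly used in uncontrolled-manifold work (a standard formulation that may differ in detail from this paper's) compares variance within the UCM to variance orthogonal to it, each normalized by its dimensionality:

\Delta V = \frac{V_{\mathrm{UCM}}/d_{\mathrm{UCM}} - V_{\mathrm{ORT}}/d_{\mathrm{ORT}}}{V_{\mathrm{TOT}}/d_{\mathrm{TOT}}}

Positive values indicate that most inter-trial variance is confined to the manifold that leaves the performance variable (here, head rotation in space or gaze trajectory) unchanged.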

Steve Dipaola; Caitlin Riebe; James T. Enns

Rembrandt's textural agency: A shared perspective in visual art and science Journal Article

In: Leonardo, vol. 43, no. 2, pp. 145–151, 2010.

@article{Dipaola2010,
title = {Rembrandt's textural agency: A shared perspective in visual art and science},
author = {Steve Dipaola and Caitlin Riebe and James T. Enns},
doi = {10.1162/leon.2010.43.2.145},
year = {2010},
date = {2010-01-01},
journal = {Leonardo},
volume = {43},
number = {2},
pages = {145--151},
abstract = {This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques — novel to the early modern period — in order to engage and direct the gaze of the observer. Though these methods were not based on scientific evidence at the time, we show that they nonetheless are consistent with a contemporary understanding of human vision. Here we propose that artists in the late ‘early modern' period developed the technique of textural agency — involving selective variation in image detail — to guide the observer's eye and thereby influence the viewing experience. The paper begins by establishing the well-known use of textural agency among modern portrait artists, before considering the possibility that Rembrandt developed these techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the argument full circle, with the presentation of laboratory evidence that Rembrandt's techniques indeed guide the modern viewer's eye in the way we propose.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1162/leon.2010.43.2.145

Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson

Watching the hourglass: Eye tracking reveals men's appreciation of the female form Journal Article

In: Human Nature, vol. 21, no. 4, pp. 355–370, 2010.

@article{Dixson2010,
title = {Watching the hourglass: Eye tracking reveals men's appreciation of the female form},
author = {Barnaby J. Dixson and Gina M. Grimshaw and Wayne L. Linklater and Alan F. Dixson},
doi = {10.1007/s12110-010-9100-6},
year = {2010},
date = {2010-01-01},
journal = {Human Nature},
volume = {21},
number = {4},
pages = {355--370},
abstract = {Eye-tracking techniques were used to measure men's attention to back-posed and front-posed images of women varying in waist-to-hip ratio (WHR). Irrespective of body pose, men rated images with a 0.7 WHR as most attractive. For back-posed images, initial visual fixations (occurring within 200 milliseconds of commencement of the eye-tracking session) most frequently involved the midriff. Numbers of fixations and dwell times throughout each of the five-second viewing sessions were greatest for the midriff and buttocks. By contrast, visual attention to front-posed images (first fixations, numbers of fixations, and dwell times) mainly involved the breasts, with attention shifting more to the midriff of images with a higher WHR. This report is the first to compare men's eye-tracking responses to back-posed and front-posed images of the female body. Results show the importance of the female midriff and of WHR upon men's attractiveness judgments, especially when viewing back-posed images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s12110-010-9100-6

Mieke Donk; Leroy Soesman

Salience is only briefly represented: Evidence from probe-detection performance Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 2, pp. 286–302, 2010.

@article{Donk2010,
title = {Salience is only briefly represented: Evidence from probe-detection performance},
author = {Mieke Donk and Leroy Soesman},
doi = {10.1037/a0017605},
year = {2010},
date = {2010-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {36},
number = {2},
pages = {286--302},
abstract = {Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the presentation of a singleton display and a probe display. The results demonstrate that salience consistently affected probe reaction time at the shortest SOA. The effect of salience disappeared as SOA increased. These results suggest that contrary to the assumption of major theories on visual selection, salience is transiently represented in our visual system allowing the effects of salience on attentional selection to be only short-lived.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/a0017605
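
The central claim is a time-course: the salience effect on probe RT is largest at the shortest SOA and vanishes as SOA grows. A minimal Python sketch of that analysis follows; the effect sizes are invented, and the exponential-decay form is our assumption, not the authors' model.

# Salience effect (probe RT at a non-salient minus a salient location)
# per SOA, fitted with an exponential decay. All values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([50, 150, 300, 600, 1000])
salience_effect_ms = np.array([22.0, 14.0, 7.0, 2.0, 1.0])

def decay(t, a, tau):
    return a * np.exp(-t / tau)

(a, tau), _ = curve_fit(decay, soa_ms, salience_effect_ms, p0=(20.0, 200.0))
print(f"initial effect ~ {a:.1f} ms, decay constant ~ {tau:.0f} ms")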

Michael Dorr; T. Martinetz; Karl R. Gegenfurtner; E. Barth

Variability of eye movements when viewing dynamic natural scenes Journal Article

In: Journal of Vision, vol. 10, no. 10, pp. 1–17, 2010.

Abstract | Links | BibTeX

@article{Dorr2010,
title = {Variability of eye movements when viewing dynamic natural scenes},
author = {Michael Dorr and T. Martinetz and Karl R. Gegenfurtner and E. Barth},
doi = {10.1167/10.10.28},
year = {2010},
date = {2010-01-01},
journal = {Journal of Vision},
volume = {10},
number = {10},
pages = {1--17},
abstract = {How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/10.10.28
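
A minimal Python sketch of a Normalized Scanpath Saliency (NSS) computation of the kind the study extends to the temporal domain: build a fixation-density map from one group of observers, z-normalize it, and sample it at a test observer's fixations. The display size, smoothing width, and random data are assumptions; a temporal extension would repeat this per time window.

# NSS: z-normalized fixation-density map sampled at test fixations.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 720, 1280   # assumed display resolution in pixels
SIGMA = 25         # assumed Gaussian smoothing width in pixels

def nss(group_fixations, test_fixations):
    """Both arguments are arrays of (x, y) pixel positions."""
    density = np.zeros((H, W))
    for x, y in group_fixations:
        density[int(y), int(x)] += 1
    density = gaussian_filter(density, SIGMA)
    density = (density - density.mean()) / density.std()  # z-normalize
    return np.mean([density[int(y), int(x)] for x, y in test_fixations])

rng = np.random.default_rng(0)
group = rng.uniform([0, 0], [W - 1, H - 1], size=(500, 2))
test = rng.uniform([0, 0], [W - 1, H - 1], size=(50, 2))
print(f"NSS for unrelated random gaze: {nss(group, test):.2f}")  # near 0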

Jacob Duijnhouwer; Bart Krekelberg; Albert V. Berg; Richard J. A. Wezel

Temporal integration of focus position signal during compensation for pursuit in optic flow. Journal Article

In: Journal of Vision, vol. 10, no. 14, pp. 1–15, 2010.

Abstract | Links | BibTeX

@article{Duijnhouwer2010,
title = {Temporal integration of focus position signal during compensation for pursuit in optic flow.},
author = {Jacob Duijnhouwer and Bart Krekelberg and Albert V. Berg and Richard J. A. Wezel},
doi = {10.1167/10.14.14},
year = {2010},
date = {2010-01-01},
journal = {Journal of Vision},
volume = {10},
number = {14},
pages = {1--15},
abstract = {Observer translation results in optic flow that specifies heading. Concurrent smooth pursuit causes distortion of the retinal flow pattern for which the visual system compensates. The distortion and its perceptual compensation are usually modeled in terms of instantaneous velocities. However, apart from adding a velocity to the flow field, pursuit also incrementally changes the direction of gaze. The effect of gaze displacement on optic flow perception has received little attention. Here we separated the effects of velocity and gaze displacement by measuring the perceived two-dimensional focus position of rotating flow patterns during pursuit. Such stimuli are useful in the current context because the two effects work in orthogonal directions. As expected, the instantaneous pursuit velocity shifted the perceived focus orthogonally to the pursuit direction. Additionally, the focus was mislocalized in the direction of the pursuit. Experiments that manipulated the presentation duration, flow speed, and uncertainty of the focus location supported the idea that the latter component of mislocalization resulted from temporal integration of the retinal trajectory of the focus. Finally, a comparison of the shift magnitudes obtained in conditions with and without pursuit (but with similar retinal stimulation) suggested that the compensation for both effects uses extraretinal information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/10.14.14
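
As a worked illustration of why pursuit shifts the perceived focus of a rotating flow orthogonally to the pursuit direction, the standard vector-field account can be written out. The notation below is ours, and the temporal-integration component the paper identifies acts on top of this instantaneous shift.

% Retinal flow of a rotation about x_0 at angular speed omega, plus the
% uniform retinal slip -v_p produced by pursuit at eye velocity v_p:
\[
  \mathbf{u}(\mathbf{x}) = \omega\,\hat{\mathbf{z}}\times(\mathbf{x}-\mathbf{x}_0) - \mathbf{v}_p .
\]
% The flow singularity (the perceived focus) lies where u = 0:
\[
  \mathbf{x}_{\mathrm{focus}} = \mathbf{x}_0 - \frac{\hat{\mathbf{z}}\times\mathbf{v}_p}{\omega},
  \qquad
  \left\lVert \mathbf{x}_{\mathrm{focus}}-\mathbf{x}_0 \right\rVert = \frac{\lVert\mathbf{v}_p\rVert}{\omega},
\]
% a shift of magnitude |v_p|/omega, orthogonal to the pursuit direction.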

Wolfgang Einhäuser; Christof Koch; Olivia Carter

Pupil dilation betrays the timing of decisions Journal Article

In: Frontiers in Human Neuroscience, vol. 4, pp. 18, 2010.

Abstract | Links | BibTeX

@article{Einhaeuser2010,
title = {Pupil dilation betrays the timing of decisions},
author = {Wolfgang Einhäuser and Christof Koch and Olivia Carter},
doi = {10.3389/fnhum.2010.00018},
year = {2010},
date = {2010-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {4},
pages = {18},
abstract = {The notion of "mind-reading" by carefully observing another individual's physiological responses has recently become commonplace in popular culture, particularly in the context of brain imaging. The question remains, however, whether outwardly accessible physiological signals indeed betray a decision before a person voluntarily reports it. In one experiment we asked observers to push a button at any time during a 10-s period ("immediate overt response"). In a series of three additional experiments observers were asked to select one number from five sequentially presented digits but concealed their decision until the trial's end ("covert choice"). In these experiments observers either had to choose the digit themselves under conditions of reward and no reward, or were instructed which digit to select via an external cue provided at the time of the digit presentation. In all cases pupil dilation alone predicted the choice (timing of button response or chosen digit, respectively). Consideration of the average pupil-dilation responses, across all experiments, showed that this prediction of timing was distinct from a general arousal or reward-anticipation response. Furthermore, the pupil dilation appeared to reflect the post-decisional consolidation of the selected outcome rather than the pre-decisional cognitive appraisal component of the decision. Given the tight link between pupil dilation and norepinephrine levels during constant illumination, our results have implications beyond the tantalizing mind-reading speculations. These findings suggest that similar noradrenergic mechanisms may underlie the consolidation of both overt and covert decisions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fnhum.2010.00018
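
A minimal Python sketch of the timing-based readout the abstract implies: predict the chosen digit from when the pupil trace peaks, assuming a fixed dilation lag after the covert choice. The sampling rate, the lag, and the synthetic trace are illustrative assumptions, not the paper's analysis.

# Predict the chosen digit as the one whose onset best precedes the
# pupil-dilation peak by an assumed lag. All values are invented.
import numpy as np

FS = 60        # assumed sampling rate, Hz
LAG_S = 1.0    # assumed lag between covert choice and dilation peak, s

def predict_choice(pupil_trace, digit_onsets_s):
    peak_s = np.argmax(pupil_trace) / FS
    implied_choice_s = peak_s - LAG_S
    onsets = np.asarray(digit_onsets_s)
    return int(np.argmin(np.abs(onsets - implied_choice_s)))

# Example: 10-s trial, digits shown at 1..5 s, pupil peaking near 4 s
t = np.arange(0, 10, 1 / FS)
trace = np.exp(-((t - 4.0) ** 2) / 0.5)   # synthetic pupil response
print("predicted digit index:", predict_choice(trace, [1, 2, 3, 4, 5]))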

Nick C. Ellis; Nuria Sagarra

Learned attention effects in L2 temporal reference: The first hour and the next eight semesters Journal Article

In: Language Learning, vol. 60, pp. 85–108, 2010.

Abstract | Links | BibTeX

@article{Ellis2010,
title = {Learned attention effects in L2 temporal reference: The first hour and the next eight semesters},
author = {Nick C. Ellis and Nuria Sagarra},
doi = {10.1111/j.1467-9922.2010.00602.x},
year = {2010},
date = {2010-01-01},
journal = {Language Learning},
volume = {60},
pages = {85--108},
abstract = {This article relates adults' difficulty acquiring foreign languages to the associative learning phenomena of cue salience, cue complexity, and the blocking of later experienced cues by earlier learned ones. It examines short- and long-term learned attention effects in adult acquisition of lexical (adverbs) and morphological cues (verbal inflections) for temporal reference in Latin (1 hr of controlled laboratory learning) and Spanish (three to eight semesters of classroom learning). Our experiments indicate that early adult learning is characterized by a general tendency to focus on lexical cues because of their physical salience in the input and their psychological salience resulting from their simplicity of form-function mapping and from learners' prior first language knowledge. Later, attention to verbal morphology is modulated by cue complexity and language experience: Acquisition is better in cases of cues of lesser complexity, speakers of morphologically rich native languages, and longer periods of study. Finally, instructional practices that emphasize morphological cues by means either of preexposure or typographical enhancement increase attention to inflections, thus blocking reliance on adverbial cues.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/j.1467-9922.2010.00602.x
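
The blocking mechanism the article appeals to is conventionally captured by the Rescorla-Wagner rule. The following is a standard textbook simulation, not the authors' model: pre-training an "adverb" cue leaves little prediction error for an "inflection" cue added later.

# Rescorla-Wagner blocking sketch. LR and LAMBDA are assumed constants.
LR, LAMBDA = 0.2, 1.0

v = {"adverb": 0.0, "inflection": 0.0}   # associative strengths

for _ in range(50):   # phase 1: adverb alone -> outcome
    v["adverb"] += LR * (LAMBDA - v["adverb"])

for _ in range(50):   # phase 2: adverb + inflection -> same outcome
    error = LAMBDA - (v["adverb"] + v["inflection"])
    v["adverb"] += LR * error
    v["inflection"] += LR * error

print(v)   # inflection stays near 0: it was blocked by the adverb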

David R. Evens; Casimir J. H. Ludwig

Dual-task costs and benefits in anti-saccade performance Journal Article

In: Experimental Brain Research, vol. 205, pp. 545–557, 2010.

Abstract | Links | BibTeX

@article{Evens2010,
title = {Dual-task costs and benefits in anti-saccade performance},
author = {David R. Evens and Casimir J. H. Ludwig},
doi = {10.1007/s00221-010-2393-1},
year = {2010},
date = {2010-01-01},
journal = {Experimental Brain Research},
volume = {205},
pages = {545--557},
abstract = {It has been reported that anti-saccade performance is facilitated by diverting attention through a secondary task (Kristjánsson et al. in Nat Neurosci 4:1037–1042, 2001). This finding supports the idea that the withdrawal of resources that would be taken up by the erroneous movement plan makes it easier to overcome the tendency to look towards the imperative stimulus. We first report an attempt to replicate this finding. Four observers were extensively tested in an anti-saccade paradigm. The luminance of the fixation point or peripheral target was briefly increased or decreased. In the dual-task condition observers signalled the direction of the luminance change. In the single-task condition the discrimination stimulus was presented, but could be ignored as it required no response. We found an overall dual-task cost in anti-saccade latency, although some facilitation was observed in the accuracy. The discrepancy between the two studies was attributed to performance in the single-task condition. For latency facilitation to occur, performance should not be affected by the discrimination stimulus when it is task-irrelevant. We show that naive, untrained observers could not ignore this irrelevant visual event. If it occurred before the imperative movement signal, the event acted as a warning signal, speeding up anti-saccade generation. If it occurred after the imperative movement stimulus, it acted as a remote distractor and interfered with the generation of the correct movement. Under normal circumstances, these basic oculomotor effects operate in both single- and dual-task conditions. An overall dual-task cost rides on top of this latency modulation. This overall cost is best accounted for by an increase in the response criterion for saccade generation in the more demanding dual-task condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00221-010-2393-1
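
The authors attribute the overall dual-task cost to a raised response criterion. A LATER-style rise-to-threshold sketch (a common idealization, used here as an assumption rather than the authors' fitted model) shows how raising the criterion lengthens latencies across the board.

# Latency = criterion / accumulation rate, with rate drawn per trial.
import numpy as np

rng = np.random.default_rng(1)

def latencies(criterion, n=10000, mu=3.0, sigma=1.0):
    rate = rng.normal(mu, sigma, n)
    rate = rate[rate > 0]    # discard non-rising trials
    return criterion / rate

single = latencies(criterion=1.0)
dual = latencies(criterion=1.3)   # assumed higher criterion under dual task
print(f"median latency: single {np.median(single):.3f}, dual {np.median(dual):.3f}")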

Elizabeth R. Schotter; Raymond W. Berry; Craig R. M. McKenzie; Keith Rayner

Gaze bias: Selective encoding and liking effects Journal Article

In: Visual Cognition, vol. 18, no. 8, pp. 1113–1132, 2010.

Abstract | Links | BibTeX

@article{Schotter2010,
title = {Gaze bias: Selective encoding and liking effects},
author = {Elizabeth R. Schotter and Raymond W. Berry and Craig R. M. McKenzie and Keith Rayner},
doi = {10.1080/13506281003668900},
year = {2010},
date = {2010-01-01},
journal = {Visual Cognition},
volume = {18},
number = {8},
pages = {1113--1132},
abstract = {People look longer at things that they choose than things they do not choose. How much of this tendency—the gaze bias effect—is due to a liking effect compared to the information encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: Like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, when the preference to look at the liked item and the goal to identify the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/13506281003668900
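
A minimal Python sketch of a first-dwell gaze-bias measure like the one analyzed here: how often the first dwell lands on the eventually chosen item, and how long it lasts relative to dwells on the other item. The trial records are invented.

# Each record: (chosen_item, first_dwell_item, first_dwell_duration_ms).
trials = [
    ("left", "left", 420), ("right", "right", 390),
    ("left", "right", 250), ("right", "right", 510),
]

on_chosen = [dur for chosen, dwelt, dur in trials if chosen == dwelt]
on_other = [dur for chosen, dwelt, dur in trials if chosen != dwelt]

print("first dwell on chosen item:", len(on_chosen) / len(trials))
print("mean first-dwell duration, chosen vs other:",
      sum(on_chosen) / len(on_chosen), "vs", sum(on_other) / len(on_other))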

Christopher R. Sears; Charmaine L. Thomas; Jessica M. Lehuquet; Jeremy C. S. Johnson

Attentional biases in dysphoria: An eye-tracking study of the allocation and disengagement of attention Journal Article

In: Cognition and Emotion, vol. 24, no. 8, pp. 1349–1368, 2010.

Abstract | Links | BibTeX

@article{Sears2010,
title = {Attentional biases in dysphoria: An eye-tracking study of the allocation and disengagement of attention},
author = {Christopher R. Sears and Charmaine L. Thomas and Jessica M. Lehuquet and Jeremy C. S. Johnson},
doi = {10.1080/02699930903399319},
year = {2010},
date = {2010-01-01},
journal = {Cognition and Emotion},
volume = {24},
number = {8},
pages = {1349--1368},
abstract = {This study looked for evidence of biases in the allocation and disengagement of attention in dysphoric individuals. Participants studied images for a recognition memory test while their eye fixations were tracked and recorded. Four image types were presented (depression-related, anxiety-related, positive, neutral) in each of two study conditions. For the simultaneous study condition, four images (one of each type) were presented simultaneously for 10 seconds, and the number of fixations and the total fixation time to each image was measured, similar to the procedure used by Eizenman et al. (2003) and Kellough, Beevers, Ellis, and Wells (2008). For the sequential study condition, four images (one of each type) were presented consecutively, each for 4 seconds; to measure disengagement of attention an endogenous cuing procedure was used (Posner, 1980). Dysphoric individuals spent significantly less time attending to positive images than non-dysphoric individuals, but there were no group differences in attention to depression-related images. There was also no evidence of a dysphoria-related bias in initial shifts of attention. With respect to the disengagement of attention, dysphoric individuals were slower to disengage their attention from depression-related images. The recognition memory data showed that dysphoric individuals had poorer memory for emotional images, but there was no evidence of a conventional mood-congruent memory bias. Differences in the attentional and memory biases observed in depressed and dysphoric individuals are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/02699930903399319
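
A minimal Python sketch of the disengagement measure in the sequential condition: the cost of shifting attention away from a fixated image type is mean RT on invalid-cue trials minus valid-cue trials, per image category. The records are invented and the trial structure is simplified.

# Each record: (image_type, cue_validity, rt_ms); all values hypothetical.
from statistics import mean

trials = [
    ("depression", "valid", 310), ("depression", "invalid", 395),
    ("positive", "valid", 305), ("positive", "invalid", 340),
    ("depression", "valid", 320), ("depression", "invalid", 410),
]

def disengagement_cost(image_type):
    valid = [rt for t, v, rt in trials if t == image_type and v == "valid"]
    invalid = [rt for t, v, rt in trials if t == image_type and v == "invalid"]
    return mean(invalid) - mean(valid)

for t in ("depression", "positive"):
    print(t, f"{disengagement_cost(t):.0f} ms slower to disengage")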

Victor Sander; Brian Soper; Stefan Everling

Nonhuman primate event-related potentials associated with pro- and anti-saccades Journal Article

In: NeuroImage, vol. 49, no. 2, pp. 1650–1658, 2010.

Abstract | Links | BibTeX

@article{Sander2010,
title = {Nonhuman primate event-related potentials associated with pro- and anti-saccades},
author = {Victor Sander and Brian Soper and Stefan Everling},
doi = {10.1016/j.neuroimage.2009.09.038},
year = {2010},
date = {2010-01-01},
journal = {NeuroImage},
volume = {49},
number = {2},
pages = {1650--1658},
abstract = {Non-invasive event-related potential (ERP) recordings have become a popular technique to study neural activity associated with saccades in humans. To date, it is not known whether nonhuman primates exhibit similar saccade-related ERPs. Here, we recorded ERPs associated with the performance of randomly interleaved pro- and anti-saccades in macaque monkeys. Stimulus-aligned ERPs showed a short-latency visual component with more negative P2 and N2 peak amplitudes on anti- than on pro-saccade trials. Saccade-aligned ERPs showed a larger presaccadic negativity on anti- than pro-saccade trials, and a presaccadic positivity on pro-saccade trials, which was attenuated or absent on anti-saccade trials. This was followed by a sharp negative spike potential immediately prior to the movement. Overall, these findings demonstrate that macaque monkeys, like humans, exhibit task-related differences of visual ERPs associated with pro- and anti-saccades and furthermore share a presaccadic positivity as well as a spike potential prior to these tasks. We suggest that the presaccadic positivity on pro-saccade trials is generated by a source in the contralateral frontal eye fields and that the more negative voltage on anti-saccade trials is the result of additional sources of opposite polarity in neighboring frontal areas.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2009.09.038
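
A minimal Python sketch of the two alignments described: cut a continuous EEG channel into epochs around event times (stimulus onsets or saccade onsets) and average them per condition. The sampling rate, window, and data are synthetic assumptions.

# Epoch-and-average ERP computation on one channel.
import numpy as np

FS = 1000   # assumed sampling rate, Hz

def erp(eeg, event_samples, pre_ms=200, post_ms=600):
    pre, post = int(pre_ms * FS / 1000), int(post_ms * FS / 1000)
    epochs = [eeg[s - pre:s + post] for s in event_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    return np.mean(epochs, axis=0)   # average across trials

eeg = np.random.default_rng(2).normal(size=FS * 60)   # 60 s of fake data
pro_onsets = np.arange(1000, 59000, 2000)             # hypothetical events
anti_onsets = pro_onsets + 1000
print(erp(eeg, pro_onsets).shape, erp(eeg, anti_onsets).shape)   # (800,) each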

Daniel R. Saunders; David K. Williamson; Nikolaus F. Troje

Gaze patterns during perception of direction and gender from biological motion Journal Article

In: Journal of Vision, vol. 10, no. 11, pp. 1–10, 2010.

Abstract | Links | BibTeX

@article{Saunders2010,
title = {Gaze patterns during perception of direction and gender from biological motion},
author = {Daniel R. Saunders and David K. Williamson and Nikolaus F. Troje},
doi = {10.1167/10.11.9},
year = {2010},
date = {2010-01-01},
journal = {Journal of Vision},
volume = {10},
number = {11},
pages = {1--10},
abstract = {Humans can perceive many properties of a creature in motion from the movement of the major joints alone. However it is likely that some regions of the body are more informative than others, dependent on the task. We recorded eye movements while participants performed two tasks with point-light walkers: determining the direction of walking, or determining the walker's gender. To vary task difficulty, walkers were displayed from different view angles and with different degrees of expressed gender. The effects on eye movement were evaluated by generating fixation maps, and by analyzing the number of fixations in regions of interest representing the shoulders, pelvis, and feet. In both tasks participants frequently fixated the pelvis region, but there were relatively more fixations at the shoulders in the gender task, and more fixations at the feet in the direction task. Increasing direction task difficulty increased the focus on the foot region. An individual's task performance could not be predicted by their distribution of fixations. However by showing where observers seek information, the study supports previous findings that the feet play an important part in the perception of walking direction, and that the shoulders and hips are particularly important for the perception of gender.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/10.11.9
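
A minimal Python sketch of the region-of-interest comparison reported here: fixation counts per body region, tallied for each task and expressed as proportions. The counts are invented and merely mirror the qualitative pattern the abstract describes.

# Hypothetical fixation counts per region and task.
counts = {
    "direction": {"shoulders": 40, "pelvis": 120, "feet": 90},
    "gender": {"shoulders": 95, "pelvis": 130, "feet": 25},
}

for task, regions in counts.items():
    total = sum(regions.values())
    print(task, {r: f"{n / total:.0%}" for r, n in regions.items()})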

6509 entries « ‹ 58 of 66 › »
