EyeLink Eye-Tracking Publications Library

All EyeLink Publications

All 10,000+ peer-reviewed EyeLink research publications through 2021 (with some early 2022 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
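
Because the records below are standard BibTeX, a keyword search like the one the library page offers can also be reproduced offline. The following Python sketch is illustrative only: it assumes the entries have been saved to a local file, and the eyelink_2009.bib filename and search_bib helper are hypothetical names, not something the site provides.

import re

# Minimal sketch: offline keyword search over exported BibTeX records.
# Assumptions (not part of the EyeLink site): the records live in a local
# file "eyelink_2009.bib", every record is an @article entry, and each
# field sits on one line with a brace-delimited value, as in the listing.
def search_bib(path, keyword):
    """Return titles of entries whose title or abstract mentions keyword."""
    text = open(path, encoding="utf-8").read()
    hits = []
    for record in text.split("@article{")[1:]:  # one chunk per record
        title = re.search(r"^title\s*=\s*\{(.+?)\},?$", record, re.M)
        abstract = re.search(r"^abstract\s*=\s*\{(.+?)\},?$", record, re.M)
        haystack = " ".join(m.group(1) for m in (title, abstract) if m)
        if keyword.lower() in haystack.lower():
            hits.append(title.group(1) if title else "(untitled)")
    return hits

if __name__ == "__main__":
    # e.g. list the 2009 entries below that mention saccades
    for t in search_bib("eyelink_2009.bib", "saccade"):
        print(t)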

2009

Julie A. Brefczynski-Lewis; Ritobrato Datta; James W. Lewis; Edgar A. DeYoe

The topography of visuospatial attention as revealed by a novel visual field mapping technique Journal Article

In: Journal of Cognitive Neuroscience, vol. 21, no. 7, pp. 1447–1460, 2009.

@article{BrefczynskiLewis2009,
title = {The topography of visuospatial attention as revealed by a novel visual field mapping technique},
author = {Julie A. Brefczynski-Lewis and Ritobrato Datta and James W. Lewis and Edgar A. DeYoe},
doi = {10.1162/jocn.2009.21005},
year = {2009},
date = {2009-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {21},
number = {7},
pages = {1447--1460},
abstract = {Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style."},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Eli Brenner; Jeroen B. J. Smeets

Sources of variability in interceptive movements Journal Article

In: Experimental Brain Research, vol. 195, no. 1, pp. 117–133, 2009.

@article{Brenner2009,
title = {Sources of variability in interceptive movements},
author = {Eli Brenner and Jeroen B. J. Smeets},
doi = {10.1007/s00221-009-1757-x},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {195},
number = {1},
pages = {117--133},
abstract = {In order to successfully intercept a moving target one must be at the right place at the right time. But simply being there is seldom enough. One usually needs to make contact in a certain manner, for instance to hit the target in a certain direction. How this is best achieved depends on the exact task, but to get an idea of what factors may limit performance we asked people to hit a moving virtual disk through a virtual goal, and analysed the spatial and temporal variability in the way in which they did so. We estimated that for our task the standard deviations in timing and spatial accuracy are about 20 ms and 5 mm. Additional variability arises from individual movements being planned slightly differently and being adjusted during execution. We argue that the way that our subjects moved was precisely tailored to the task demands, and that the movement accuracy is not only limited by the muscles and their activation, but also, and probably even mainly, by the resolution of visual perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Leonard A. Breslow; J. Gregory Trafton; Raj M. Ratwani

A perceptual process approach to selecting color scales for complex visualizations Journal Article

In: Journal of Experimental Psychology: Applied, vol. 15, no. 1, pp. 25–34, 2009.

@article{Breslow2009,
title = {A perceptual process approach to selecting color scales for complex visualizations},
author = {Leonard A. Breslow and J. Gregory Trafton and Raj M. Ratwani},
doi = {10.1037/a0015085},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Applied},
volume = {15},
number = {1},
pages = {25--34},
abstract = {Previous research has shown that multicolored scales are superior to ordered brightness scales for supporting identification tasks on complex visualizations (categorization, absolute numeric value judgments, etc.), whereas ordered brightness scales are superior for relative comparison tasks (greater/less). We examined the processes by which such tasks are performed. By studying eye movements and by comparing performance on scales of different sizes, we argued that (a) people perform identification tasks by conducting a serial visual search of the legend, whose speed is sensitive to the number of scale colors and the discriminability of the colors; and (b) people perform relative comparison tasks using different processes for multicolored versus brightness scales. With multicolored scales, they perform a parallel search of the legend, whose speed is relatively insensitive to the size of the scale, whereas with brightness scales, people usually directly compare the target colors in the visualization, while making little reference to the legend. Performance of comparisons was relatively robust against increases in scale size, whereas performance of identifications deteriorated markedly, especially with brightness scales, once scale sizes reached 10 colors or more.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

James R. Brockmole; Walter R. Boot

Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 808–815, 2009.

@article{Brockmole2009,
title = {Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation},
author = {James R. Brockmole and Walter R. Boot},
doi = {10.1037/a0013707},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {3},
pages = {808--815},
abstract = {Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Anne-Marie M. Brouwer; Volker H. Franz; Karl R. Gegenfurtner

Differences in fixations between grasping and viewing objects Journal Article

In: Journal of Vision, vol. 9, no. 1, pp. 1–24, 2009.

@article{Brouwer2009,
title = {Differences in fixations between grasping and viewing objects},
author = {Anne-Marie M. Brouwer and Volker H. Franz and Karl R. Gegenfurtner},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {1},
pages = {1--24},
abstract = {Where exactly do people look when they grasp an object? An object is usually contacted at two locations, whereas the gaze can only be at one location at a time. We investigated participants' fixation locations when they grasp objects with the contact positions of both index finger and thumb being visible and compared these to fixation locations when they only viewed the objects. Participants grasped with the index finger at the top and the thumb at the bottom of a flat shape. The main difference between grasping and viewing was that after a saccade roughly directed to the object's center of gravity, participants saccaded more upward and more into the direction of a region that was difficult to contact during grasping. A control experiment indicated that it was not the upper part of the shape that attracted fixation, while the results were consistent with an attraction by the index finger. Participants did not try to fixate both contact locations. Fixations were closer to the object's center of gravity in the viewing than in the grasping task. In conclusion, participants adapt their eye movements to the need of the task, such as acquiring information about regions with high required contact precision in grasping, even with small (graspable) objects. We suggest that in grasping, the main function of fixations is to acquire visual feedback of the approaching digits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sarah Brown-Schmidt

Partner-specific interpretation of maintained referential precedents during interactive dialog Journal Article

In: Journal of Memory and Language, vol. 61, no. 2, pp. 171–190, 2009.

@article{BrownSchmidt2009,
title = {Partner-specific interpretation of maintained referential precedents during interactive dialog},
author = {Sarah Brown-Schmidt},
doi = {10.1016/j.jml.2009.04.003},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {61},
number = {2},
pages = {171--190},
publisher = {Elsevier Inc.},
abstract = {In dialog settings, conversational partners converge on similar names for referents. These lexically entrained terms [Garrod, S., & Anderson, A. (1987). Saying what you mean in dialog: A study in conceptual and semantic co-ordination. Cognition, 27, 181-218] are part of the common ground between the particular individuals who established the entrained term [Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482-1493], and are thought to be encoded in memory with a partner-specific cue. Thus far, analyses of the time-course of interpretation suggest that partner-specific information may not constrain the initial interpretation of referring expressions [Barr, D. J., & Keysar, B. (2002). Anchoring comprehension in linguistic precedents. Journal of Memory and Language, 46, 391-418; Kronmüller, E., & Barr, D. J. (2007). Perspective-free pragmatics: Broken precedents and the recovery-from-preemption hypothesis. Journal of Memory and Language, 56, 436-455]. However, these studies used non-interactive paradigms, which may limit the use of partner-specific representations. This article presents the results of three eye-tracking experiments. Experiment 1a used an interactive conversation methodology in which the experimenter and participant jointly established entrained terms for various images. On critical trials, the same experimenter, or a new experimenter described a critical image using an entrained term, or a new term. The results demonstrated an early, on-line partner-specific effect for interpretation of entrained terms, as well as preliminary evidence for an early, partner-specific effect for new terms. Experiment 1b used a non-interactive paradigm in which participants completed the same task by listening to image descriptions recorded during Experiment 1a; the results showed that partner-specific effects were eliminated. Experiment 2 replicated the partner-specific findings of Experiment 1a with an interactive paradigm and scenes that contained previously unmentioned images. The results suggest that partner-specific interpretation is most likely to occur in interactive dialog settings; the number of critical trials and stimulus characteristics may also play a role. The results are consistent with a large body of work demonstrating that the language processing system uses a rich source of contextual and pragmatic representations to guide on-line processing decisions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sarah Brown-Schmidt

The role of executive function in perspective taking during online language comprehension Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 893–900, 2009.

@article{BrownSchmidt2009a,
title = {The role of executive function in perspective taking during online language comprehension},
author = {Sarah Brown-Schmidt},
doi = {10.3758/PBR.16.5.893},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {5},
pages = {893--900},
abstract = {During conversation, interlocutors build on the set of shared beliefs known as common ground. Although there is general agreement that interlocutors maintain representations of common ground, there is no consensus regarding whether common-ground representations constrain initial language interpretation processes. Here, I propose that executive functioning--specifically, failures in inhibition control--can account for some occasional insensitivities to common-ground information. The present article presents the results of an experiment that demonstrates that individual differences in inhibition control determine the degree to which addressees successfully inhibit perspective-inappropriate interpretations of temporary referential ambiguities in their partner's speech. Whether mentioned information was grounded or not also played a role, suggesting that addressees may show sensitivity to common ground only when it is established collaboratively. The results suggest that, in conversation, perspective information routinely guides online language processing and that occasional insensitivities to perspective can be attributed partly to difficulties in inhibiting perspective-inappropriate interpretations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Gerry T. M. Altmann; Yuki Kamide

Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation Journal Article

In: Cognition, vol. 111, no. 1, pp. 55–71, 2009.

@article{Altmann2009,
title = {Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation},
author = {Gerry T. M. Altmann and Yuki Kamide},
doi = {10.1016/j.cognition.2008.12.005},
year = {2009},
date = {2009-01-01},
journal = {Cognition},
volume = {111},
number = {1},
pages = {55--71},
publisher = {Elsevier B.V.},
abstract = {Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

John Palmer; Cathleen M. Moore

Using a filtering task to measure the spatial extent of selective attention Journal Article

In: Vision Research, vol. 49, no. 10, pp. 1045–1064, 2009.

@article{Palmer2009,
title = {Using a filtering task to measure the spatial extent of selective attention},
author = {John Palmer and Cathleen M. Moore},
doi = {10.1016/j.visres.2008.02.022},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {10},
pages = {1045--1064},
abstract = {The spatial extent of attention was investigated by measuring sensitivity to stimuli at to-be-ignored locations. Observers detected a stimulus at a cued location (target), while ignoring otherwise identical stimuli at nearby locations (foils). Only an attentional cue distinguished target from foil. Several experiments varied the contrast and separation of targets and foils. Two theories of selection were compared: contrast gain and a version of attention switching called an all-or-none mixture model. Results included large effects of separation, rejection of the contrast gain model, and the measurement of the size and profile of the spatial extent of attention.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Daniele Panizza; Gennaro Chierchia; Charles Clifton

On the role of entailment patterns and scalar implicatures in the processing of numerals Journal Article

In: Journal of Memory and Language, vol. 61, no. 4, pp. 503–518, 2009.

@article{Panizza2009,
title = {On the role of entailment patterns and scalar implicatures in the processing of numerals},
author = {Daniele Panizza and Gennaro Chierchia and Charles Clifton},
doi = {10.1016/j.jml.2009.07.005},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {61},
number = {4},
pages = {503--518},
abstract = {There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Sebastian Pannasch; Boris M. Velichkovsky

Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1109–1131, 2009.

@article{Pannasch2009,
title = {Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images},
author = {Sebastian Pannasch and Boris M. Velichkovsky},
doi = {10.1080/13506280902764422},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1109--1131},
abstract = {In view of a variety of everyday tasks, it is highly implausible that all visual fixations fulfil the same role. Earlier we demonstrated that a combination of fixation duration and amplitude of related saccades strongly correlates with the probability of correct recognition of objects and events both in static and in dynamic scenes (Velichkovsky, Joos, Helmert, Velichkovsky, Rothert, Kopf, Dornhoefer, see Pannasch, Dornhoefer, Unema, & Velichkovsky, 2001) in relation to amplitudes of the preceding saccade. In Experiment 1, it is shown that retinotopically identical visual events occurring 100 ms after the onset of a fixation have significantly less influence on fixation duration if the amplitude of the previous saccade exceeds the parafoveal range (set on 5° of arc). Experiment 2 demonstrates that this difference diminishes for distractors of obvious biological value such as looming motion patterns. In Experiment 3, we show that saccade amplitudes influence visual but not acoustic or haptic distractor effects. These results suggest an explanation in terms of a shifting balance of at least two modes of visual processing in free viewing of complex visual images.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Muriel T. N. Panouillères; Tiffany Weiss; Christian Urquizar; Roméo Salemme; Douglas P. Munoz; Denis Pélisson

Behavioural evidence of separate adaptation mechanisms controlling saccade amplitude lengthening and shortening Journal Article

In: Journal of Neurophysiology, vol. 101, no. 3, pp. 1550–1559, 2009.

@article{Panouilleres2009,
title = {Behavioural evidence of separate adaptation mechanisms controlling saccade amplitude lengthening and shortening},
author = {Muriel T. N. Panouillères and Tiffany Weiss and Christian Urquizar and Roméo Salemme and Douglas P. Munoz and Denis Pélisson},
doi = {10.1152/jn.90988.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {101},
number = {3},
pages = {1550--1559},
abstract = {The accuracy of saccadic eye movements is maintained over the long term by adaptation mechanisms that decrease or increase saccade amplitude. It is still unknown whether these opposite adaptive changes rely on common mechanisms. Here, a double-step target paradigm was used to adaptively decrease (backward second target step) or increase (forward step) the amplitude of reactive saccades in one direction only. To test which sensorimotor transformation stages are subjected to these adaptive changes, we measured their transfer to antisaccades in which sensory and motor vectors are spatially dissociated. In the backward adaptation condition, all subjects showed a significant amplitude decrease for adapted prosaccades and a significant transfer of adaptation to antisaccades performed in the adapted direction, but not to oppositely directed antisaccades elicited by a target jump in the adapted direction. In the forward adaptation condition, only 14 of 19 subjects showed a significant amplitude increase for prosaccades and no significant adaptation transfer to antisaccades was detected in either the adapted or nonadapted direction. These findings suggest that, whereas the level(s) of forward adaptation cannot be resolved, the mechanisms involved in backward adaptation of reactive saccades take place at a sensorimotor level downstream from the vector inversion process of antisaccades and differ markedly from those involved in forward adaptation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Nikole D. Patson; Fernanda Ferreira

Conceptual plural information is used to guide early parsing decisions: Evidence from garden-path sentences with reciprocal verbs Journal Article

In: Journal of Memory and Language, vol. 60, no. 4, pp. 464–486, 2009.

@article{Patson2009,
title = {Conceptual plural information is used to guide early parsing decisions: Evidence from garden-path sentences with reciprocal verbs},
author = {Nikole D. Patson and Fernanda Ferreira},
doi = {10.1016/j.jml.2009.02.003},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {60},
number = {4},
pages = {464--486},
publisher = {Elsevier Inc.},
abstract = {In three eyetracking studies, we investigated the role of conceptual plurality in initial parsing decisions in temporarily ambiguous sentences with reciprocal verbs (e.g., While the lovers kissed the baby played alone). We varied the subject of the first clause using three types of plural noun phrases: conjoined noun phrases (the bride and the groom), plural definite descriptions (the lovers), and numerically quantified noun phrases (the two lovers). We found no evidence for garden-path effects when the subject was conjoined [Ferreira, F., & McClure, K. K. (1997). Parsing of garden-path sentences with reciprocal verbs. Language and Cognitive Processes, 12, 273-306], but traditional garden-path effects were found with the other plural noun phrases. In addition, we tested plural anaphors that had a plural antecedent present in the discourse. We found that when the antecedent was conjoined, garden-path effects were absent compared to cases in which the antecedent was a plural definite description. Our results indicate that the parser is sensitive to the conceptual representation of a plural constituent. In particular, it appears that a Complex Reference Object [Moxey, L. M., Sanford, A. J., Sturt, P., & Morrow, L. I. (2004). Constraints on the formation of plural reference objects: The influence of role, conjunction, and type of description. Journal of Memory and Language, 51, 346-364] automatically activates a reciprocal reading of a reciprocal verb.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

John M. Pearson; Benjamin Y. Hayden; Sridhar Raghavachari; Michael L. Platt

Neurons in posterior cingulate cortex signal exploratory decisions in a dynamic multioption choice task Journal Article

In: Current Biology, vol. 19, no. 18, pp. 1532–1537, 2009.

@article{Pearson2009,
title = {Neurons in posterior cingulate cortex signal exploratory decisions in a dynamic multioption choice task},
author = {John M. Pearson and Benjamin Y. Hayden and Sridhar Raghavachari and Michael L. Platt},
doi = {10.1016/j.cub.2009.07.048},
year = {2009},
date = {2009-01-01},
journal = {Current Biology},
volume = {19},
number = {18},
pages = {1532--1537},
publisher = {Elsevier Ltd},
abstract = {In dynamic environments, adaptive behavior requires striking a balance between harvesting currently available rewards (exploitation) and gathering information about alternative options (exploration) [1-4]. Such strategic decisions should incorporate not only recent reward history, but also opportunity costs and environmental statistics. Previous neuroimaging [5-8] and neurophysiological [9-13] studies have implicated orbitofrontal cortex, anterior cingulate cortex, and ventral striatum in distinguishing between bouts of exploration and exploitation. Nonetheless, the neuronal mechanisms that underlie strategy selection remain poorly understood. We hypothesized that posterior cingulate cortex (CGp), an area linking reward processing, attention [14], memory [15, 16], and motor control systems [17], mediates the integration of variables such as reward [18], uncertainty [19], and target location [20] that underlie this dynamic balance. Here we show that CGp neurons distinguish between exploratory and exploitative decisions made by monkeys in a dynamic foraging task. Moreover, firing rates of these neurons predict in graded fashion the strategy most likely to be selected on upcoming trials. This encoding is distinct from switching between targets and is independent of the absolute magnitudes of rewards. These observations implicate CGp in the integration of individual outcomes across decision making and the modification of strategy in dynamic environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Manuel Perea; Joana Acha

Space information is important for reading Journal Article

In: Vision Research, vol. 49, no. 15, pp. 1994–2000, 2009.

@article{Perea2009,
title = {Space information is important for reading},
author = {Manuel Perea and Joana Acha},
doi = {10.1016/j.visres.2009.05.009},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {15},
pages = {1994--2000},
abstract = {Reading a text without spaces in an alphabetic language causes disruption at the levels of word identification and eye movement control. In the present experiment, we examined how word discriminability affects the pattern of eye movements when reading unspaced text in an alphabetic language. More specifically, we designed an experiment in which participants read three types of sentences: normally written sentences, regular unspaced sentences, and alternating-bold unspaced sentences. Although there was a reading cost in the unspaced sentences relative to the normally written sentences, this cost was much smaller in alternating-bold unspaced sentences than in regular unspaced sentences.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Manuel Perea; Joana Acha; Manuel Carreiras

Eye movements when reading text messaging (txt msgng) Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 62, no. 8, pp. 1560–1567, 2009.

@article{Perea2009a,
title = {Eye movements when reading text messaging (txt msgng)},
author = {Manuel Perea and Joana Acha and Manuel Carreiras},
doi = {10.1080/17470210902783653},
year = {2009},
date = {2009-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {62},
number = {8},
pages = {1560--1567},
abstract = {The growing popularity of mobile-phone technology has led to changes in the way people--particularly younger people--communicate. A clear example of this is the advent of Short Message Service (SMS) language, which includes orthographic abbreviations (e.g., omitting vowels, as in wk, week) and phonetic respelling (e.g., using u instead of you). In the present study, we examined the pattern of eye movements during reading of SMS sentences (e.g., my hols wr gr8), relative to normally written sentences, in a sample of skilled "texters". SMS sentences were created by using (mostly) orthographic or phonological abbreviations. Results showed that there is a reading cost--both at a local level and at a global level--for individuals who are highly expert in SMS language. Furthermore, phonological abbreviations resulted in a greater cost than orthographic abbreviations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Yoni Pertzov; Galia Avidan; Ehud Zohary

Accumulation of visual information across multiple fixations Journal Article

In: Journal of Vision, vol. 9, no. 10, pp. 1–12, 2009.

@article{Pertzov2009,
title = {Accumulation of visual information across multiple fixations},
author = {Yoni Pertzov and Galia Avidan and Ehud Zohary},
doi = {10.1167/9.10.2},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {10},
pages = {1--12},
abstract = {Humans often redirect their gaze to the same objects within a scene, even without being consciously aware of it. Here, we investigated what type of visual information is accumulated across recurrent fixations on the same object. On each trial, subjects viewed an array comprised of several objects and were subsequently asked to report on various visual aspects of a randomly chosen target object from that array. Memory performance decreased as more fixations were directed to other objects, following the last fixation on the target object (i.e. post-target fixations). In contrast, performance was enhanced with increasing number of fixations on the target object. However, since the number of post-target fixations and the number of target fixations are usually anti-correlated, memory gain may simply reflect fewer post-target fixations, rather than true accumulation of information. To rule this out, we conducted a second experiment, in which the stimulus disappeared immediately after performing a predefined number of target fixations. Additional fixations on the target object resulted in improved memory performance even under these strict conditions. We conclude that, under the present conditions, various aspects of memory monotonically improve with repeated sampling of the same object.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Close

Humans often redirect their gaze to the same objects within a scene, even without being consciously aware of it. Here, we investigated what type of visual information is accumulated across recurrent fixations on the same object. On each trial, subjects viewed an array comprised of several objects and were subsequently asked to report on various visual aspects of a randomly chosen target object from that array. Memory performance decreased as more fixations were directed to other objects, following the last fixation on the target object (i.e. post-target fixations). In contrast, performance was enhanced with increasing number of fixations on the target object. However, since the number of post-target fixations and the number of target fixations are usually anti-correlated, memory gain may simply reflect fewer post-target fixations, rather than true accumulation of information. To rule this out, we conducted a second experiment, in which the stimulus disappeared immediately after performing a predefined number of target fixations. Additional fixations on the target object resulted in improved memory performance even under these strict conditions. We conclude that, under the present conditions, various aspects of memory monotonically improve with repeated sampling of the same object.

  • doi:10.1167/9.10.2

Yoni Pertzov; Ehud Zohary; Galia Avidan

Implicitly perceived objects attract gaze during later free viewing Journal Article

In: Journal of Vision, vol. 9, no. 6, pp. 1–12, 2009.

Everyday life frequently requires searching for objects in the visual scene. Visual search is typically accompanied by a series of eye movements. In an effort to explain subjects' scanning patterns, models of visual search propose that a template of the target is used to guide gaze (and attention) to locations which exhibit "suspicious" similarity to this template. We show here that the scanning patterns are also clearly influenced by implicit (unrecognized) cues: A backward masked object, presented before the scene display, automatically attracts gaze to its corresponding location in the following inspected image. Interestingly, subliminally observed words describing objects do not have the same effect. This demonstrates that visual search can be unconsciously guided by activated target representations at the perceptual level, but it is much less affected by implicit information at the semantic level. Implications for search models are discussed.


Tobias Pflugshaupt; Klemens Gutbrod; Pascal Wurtz; Roman Von Wartburg; Thomas Nyffeler; Bianca De Haan; Hans-Otto Karnath; René M. Mueri

About the role of visual field defects in pure alexia Journal Article

In: Brain, vol. 132, no. 7, pp. 1907–1917, 2009.

Pure alexia is an acquired reading disorder characterized by a disproportionate prolongation of reading time as a function of word length. Although the vast majority of cases reported in the literature show a right-sided visual defect, little is known about the contribution of this low-level visual impairment to their reading difficulties. The present study was aimed at investigating this issue by comparing eye movement patterns during text reading in six patients with pure alexia with those of six patients with hemianopic dyslexia showing similar right-sided visual field defects. We found that the role of the field defect in the reading difficulties of pure alexics was highly deficit-specific. While the amplitude of rightward saccades during text reading seems largely determined by the restricted visual field, other visuo-motor impairments—particularly the pronounced increases in fixation frequency and viewing time as a function of word length—may have little to do with their visual field defect. In addition, subtracting the lesions of the hemianopic dyslexics from those found in pure alexics revealed the largest group differences in posterior parts of the left fusiform gyrus, occipito-temporal sulcus and inferior temporal gyrus. These regions included the coordinate assigned to the centre of the visual word form area in healthy adults, which provides further evidence for a relation between pure alexia and a damaged visual word form area. Finally, we propose a list of three criteria that may improve the differential diagnosis of pure alexia and allow appropriate therapy recommendations.

  • doi:10.1093/brain/awp141

Tobias Pflugshaupt; Roman Wartburg; Pascal Wurtz; Silvia Chaves; Anouk Déruaz; Thomas Nyffeler; Sebastian Arx; Mathias Luethi; Dario Cazzoli; René M. Mueri

Linking physiology with behaviour: Functional specialisation of the visual field is reflected in gaze patterns during visual search Journal Article

In: Vision Research, vol. 49, no. 2, pp. 237–248, 2009.

Based on neurophysiological findings and a grid to score binocular visual field function, two hypotheses concerning the spatial distribution of fixations during visual search were tested and confirmed in healthy participants and patients with homonymous visual field defects. Both groups showed significant biases of fixations and viewing time towards the centre of the screen and the upper screen half. Patients displayed a third bias towards the side of their field defect, which represents oculomotor compensation. Moreover, significant correlations between the extent of these three biases and search performance were found. Our findings suggest a new, more dynamic view of how functional specialisation of the visual field influences behaviour.

  • doi:10.1016/j.visres.2008.10.021

Keith Rayner; Monica S. Castelhano; Jinmian Yang

Eye movements when looking at unusual/weird scenes: Are there cultural differences? Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 1, pp. 254–259, 2009.

Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how quickly eye movements are drawn to highly unusual aspects of a scene. American and Chinese viewers examined photographic scenes while performing a preference rating task. For each scene, participants were presented with either a normal or an unusual/weird version. Even though there were differences between the normal and weird versions of the scenes, there was no evidence of any cultural differences while viewing either scene type. The present study, along with other recent reports, raises doubts about the notion that cultural differences can influence oculomotor control in scene perception.

  • doi:10.1037/a0013508

Keith Rayner; Monica S. Castelhano; Jinmian Yang

Eye movements and the perceptual span in older and younger readers Journal Article

In: Psychology and Aging, vol. 24, no. 3, pp. 755–760, 2009.

The size of the perceptual span (or the span of effective vision) in older readers was examined with the moving window paradigm (G. W. McConkie & K. Rayner, 1975). Two experiments demonstrated that older readers have a smaller and more symmetric span than that of younger readers. These 2 characteristics (smaller and more symmetric span) of older readers may be a consequence of their less efficient processing of nonfoveal information, which results in a riskier reading strategy.

  • doi:10.1037/a0014300
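
The moving window paradigm referenced in this abstract is a gaze-contingent display: only text within a fixed window around the current fixation is shown normally, while letters outside it are masked. A minimal sketch of the display logic (our illustration, not the authors' code; masking letters with x while preserving spaces and punctuation is an assumption):

    def moving_window(text: str, fixation_index: int, half_width: int) -> str:
        """Render one fixation's display: characters within half_width of the
        fixated character stay visible; all other letters are masked with 'x'.
        Spaces and punctuation are preserved, so word boundaries survive."""
        return "".join(
            ch if abs(i - fixation_index) <= half_width or not ch.isalpha() else "x"
            for i, ch in enumerate(text)
        )

    # A 9-character window centred on the 13th character:
    print(moving_window("The cat sat on the old mat", 12, 4))

Shrinking half_width until reading slows down is how such studies estimate the span of effective vision.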

Keith Rayner; Tim J. Smith; George L. Malcolm; John M. Henderson

Eye movements and visual encoding during scene perception Journal Article

In: Psychological Science, vol. 20, no. 1, pp. 6–10, 2009.

The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that in order to normally process a scene, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need only to view the words in the text for 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways.


Bob Rehder; Robert M. Colner; Aaron B. Hoffman

Feature inference learning and eyetracking Journal Article

In: Journal of Memory and Language, vol. 60, no. 3, pp. 393–419, 2009.

Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of diagnostic information. We tracked learners' eye movements and found in Experiment 1 that inference learners indeed fixated features that were unnecessary for inferring the missing feature, behavior consistent with acquiring the categories' internal structure. However, Experiments 3 and 4 showed that fixations were generally limited to features that needed to be predicted on future trials. We conclude that inference learning induces both supervised and unsupervised learning of category-to-feature associations rather than a general motivation to learn the internal structure of categories.

  • doi:10.1016/j.jml.2008.12.001

Michael G. Reynolds; John D. Eastwood; Marita Partanen; Alexandra Frischen; Daniel Smilek

Monitoring eye movements while searching for affective faces Journal Article

In: Visual Cognition, vol. 17, no. 3, pp. 318–333, 2009.

A single experiment is reported in which we provide a novel analysis of eye movements during visual search to disentangle the contributions of unattended guidance and focal target processing to visual search performance. This technique is used to examine the controversial claim that unattended affective faces can guide attention during search. Results indicated that facial expression influences how efficiently the target was fixated for the first time as a function of set size. However, affective faces did not influence how efficiently the target was identified as a function of set size after it was first fixated. These findings suggest that, in the present context, facial expression can influence search before the target is attended and that the present measures are able to distinguish between the guidance of attention by targets and the processing of targets within the focus of attention.

  • doi:10.1080/13506280701623704
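
The decomposition described above separates overall search time into a guidance stage (how quickly gaze first reaches the target as set size grows) and a verification stage (how long identification takes after the target is first fixated). A hypothetical sketch of that analysis, with invented per-trial values:

    import numpy as np

    # Hypothetical per-trial records: set size, time of first target
    # fixation (ms), and manual response time (ms).
    trials = [(6, 410, 630), (6, 395, 640), (12, 520, 760),
              (12, 540, 745), (18, 655, 880), (18, 630, 905)]

    set_size = np.array([n for n, _, _ in trials], dtype=float)
    first_fix = np.array([f for _, f, _ in trials], dtype=float)
    rt = np.array([r for _, _, r in trials], dtype=float)

    # Guidance: how time-to-first-target-fixation grows with set size.
    guidance_slope = np.polyfit(set_size, first_fix, 1)[0]
    # Verification: how the post-fixation interval grows with set size.
    verification_slope = np.polyfit(set_size, rt - first_fix, 1)[0]

    print(f"guidance: {guidance_slope:.1f} ms/item; "
          f"verification: {verification_slope:.1f} ms/item")

A guidance effect without a verification effect is the pattern the abstract reports for affective faces.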

Paola Ricciardelli; Elena Betta; Sonia Pruner; Massimo Turatto

Is there a direct link between gaze perception and joint attention behaviours? Effects of gaze contrast polarity on oculomotor behaviour Journal Article

In: Experimental Brain Research, vol. 194, no. 3, pp. 347–357, 2009.

Previous studies have found that attention is oriented in the direction of other people's gaze suggesting that gaze perception is related to the mechanisms of joint attention. However, the role of the perception of gaze direction on joint attention has been challenged. We investigated the effects of disrupting gaze perception on the orienting of observers' attention, in particular, whether orienting to gaze direction is affected by the disruptive effect of negative contrast polarity on gaze perception. A dynamic distracting gaze was presented to observers performing an endogenous saccadic task. Gaze perception was manipulated by reversing the contrast polarity between the sclera and the iris. With positive display polarity, eye movement recordings showed shorter saccadic latencies when the direction of the instructed saccade matched the direction of the distracting gaze, and a substantial number of erroneous saccades towards the direction of the perceived gaze when the latter did not match the instruction. Crucially, such effects were not found when gaze contrast polarity was reversed and gaze perception was impaired. These results extend previous studies by demonstrating the existence of a direct link between joint attention and the perception of gaze direction, and show how orienting of attention to other people's gaze can be suppressed.

  • doi:10.1007/s00221-009-1706-8

Elmar H. Pinkhardt; Jan Kassubek; Sigurd Süssmuth; Albert C. Ludolph; Wolfgang Becker; Reinhart Jürgens

Comparison of smooth pursuit eye movement deficits in multiple system atrophy and Parkinson's disease Journal Article

In: Journal of Neurology, vol. 256, no. 9, pp. 1438–1446, 2009.

Because of the large overlap and quantitative similarity of eye movement alterations in Parkinson's disease (PD) and multiple system atrophy (MSA), a measurement of eye movement is generally not considered helpful for the differential diagnosis. However, in view of the pathophysiological differences between MSA and PD as well as between the cerebellar (MSA-C) and Parkinsonian (MSA-P) subtypes of MSA, we wondered whether a detailed investigation of oculomotor performance would unravel parameters that could help to differentiate between these entities. We recorded eye movements during sinusoidal pursuit tracking by means of video-oculography in 11 cases of MSA-P, 8 cases of MSA-C and 27 cases of PD and compared them to 23 healthy controls (CTL). The gain of the smooth pursuit eye movement (SPEM) component exhibited significant group differences between each of the three subject groups (MSA, PD, controls) but not between MSA-P and MSA-C. The similarity of pursuit impairment in MSA-P and in MSA-C suggests a commencement of cerebellar pathology in MSA-P despite the lack of clinical signs. Otherwise, SPEM gain was of little use for differential diagnosis between MSA and PD because of wide overlap. However, inspection of the saccadic component of pursuit tracking revealed that in MSA saccades typically correct for position errors accumulated during SPEM epochs ("catch-up saccades"), whereas in PD, saccades were often directed toward future target positions ("anticipatory saccades"). The differences in pursuit tracking between PD and MSA were large enough to warrant their use as ancillary diagnostic criteria for the distinction between these disorders.

  • doi:10.1007/s00415-009-5131-5
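
Smooth pursuit gain, the measure this study is built on, is the ratio of smooth (desaccaded) eye velocity to target velocity. A minimal sketch with simulated signals; the sampling rate, saccade threshold, and regression-based gain estimate are our assumptions, not the paper's exact pipeline:

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 250.0                                  # assumed sampling rate (Hz)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    freq, amp = 0.25, 15.0                      # sinusoidal target: 0.25 Hz, +/-15 deg
    target_vel = 2 * np.pi * freq * amp * np.cos(2 * np.pi * freq * t)
    eye_vel = 0.8 * target_vel + rng.normal(0.0, 1.0, t.size)  # imperfect pursuit

    # Crude desaccading: drop samples faster than a saccade criterion.
    smooth = np.abs(eye_vel) < 60.0             # deg/s, assumed threshold

    # Gain = least-squares slope of eye velocity on target velocity.
    gain = (eye_vel[smooth] @ target_vel[smooth]) / (target_vel[smooth] @ target_vel[smooth])
    print(f"pursuit gain ~ {gain:.2f}")         # ~0.8 by construction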

Bettina Olk; Alan Kingstone

A new look at aging and performance in the antisaccade task: The impact of response selection Journal Article

In: European Journal of Cognitive Psychology, vol. 21, no. 2-3, pp. 406–427, 2009.

Aged adults respond more slowly and less accurately in the antisaccade task, in which a saccade away from a visual stimulus is required. This decreased performance has been attributed to a decline in the ability to inhibit prepotent responses with age. Considering that antisaccades also involve response selection, the present experiment investigated the contribution of inhibition and response selection. Young and aged adults were compared between conditions that required varying percentages of prosaccades, antisaccades, and no-go trials. The comparison between no-go (inhibition of a prosaccade) and antisaccade trials (inhibition of a prosaccade and selection of an antisaccade) showed significantly worse performance in the antisaccade task, especially for the older group, suggesting that they failed to select the antisaccade in a situation in which a competing, prepotent response is available. The impact of this response selection failure was underlined by an equivalent ability of both groups to impose inhibition.

  • doi:10.1080/09541440802333190

Tanja C. W. Nijboer; Stefan Van der Stigchel

Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors Journal Article

In: Journal of Vision, vol. 9, no. 6, pp. 1–9, 2009.

In studies investigating visual attention in synesthesia, the targets usually induce a synesthetic color. To measure to what extent attention is necessary to induce synesthetic color experiences, one needs a task in which the synesthetic color is induced by a task-irrelevant distractor. In the current study, an oculomotor distractor task was used in which an eye movement was to be made to a physically colored target while ignoring a single physically colored or synesthetic distractor. Whereas many erroneous eye movements were made to distractors with an identical hue as the target (i.e., capture), much less interference was found with synesthetic distractors. The interference of synesthetic distractors was comparable with achromatic non-digit distractors. These results suggest that attention and hence overt recognition of the inducing stimulus are essential for the synesthetic color experience to occur.

  • doi:10.1167/9.6.21

Satoshi Nishida; Tomohiro Shibata; Kazushi Ikeda

Prediction of human eye movements in facial discrimination tasks Journal Article

In: Artificial Life and Robotics, vol. 14, no. 3, pp. 348–351, 2009.

Under natural viewing conditions, human observers selectively allocate their attention to subsets of the visual input. Since overt allocation of attention appears as eye movements, the mechanism of selective attention can be uncovered through computational studies of eye-movement prediction. Since top-down attentional control in a task is expected to modulate eye movements significantly, models that take a bottom-up approach based on low-level local properties are unlikely to suffice for prediction. In this study, we introduce two representative models, apply them to a facial discrimination task with morphed face images, and evaluate their performance by comparing them with human eye-movement data. The results show that neither model predicts eye movements well in this task.

  • doi:10.1007/s10015-009-0679-9
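
A standard way to score the model-versus-human comparison described above is an ROC-style analysis: how often the model's saliency map assigns higher values to fixated than to arbitrary locations. A sketch under that assumption (the map and fixations below are invented, with a random map standing in for a real model):

    import numpy as np

    rng = np.random.default_rng(0)
    saliency = rng.random((48, 64))              # stand-in for a model's saliency map
    fixations = [(10, 20), (30, 40), (25, 50)]   # hypothetical (row, col) fixations

    # AUC: probability that a fixated location outscores a random location.
    fix_vals = np.array([saliency[r, c] for r, c in fixations])
    all_vals = saliency.ravel()
    auc = np.mean([(all_vals < v).mean() + 0.5 * (all_vals == v).mean()
                   for v in fix_vals])
    print(f"AUC = {auc:.2f}")                    # 0.5 = chance for a random map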

Atsushi Noritake; Bob Uttl; Masahiko Terao; Masayoshi Nagai; Junji Watanabe; Akihiro Yagi

Saccadic compression of rectangle and Kanizsa figures: Now you see it, now you don't Journal Article

In: PLoS ONE, vol. 4, no. 7, pp. e6383, 2009.

BACKGROUND: Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as unified objects like a rectangle. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the magnitude of horizontal vs. vertical compression for Kanizsa figures (collections of discrete elements unified into single perceptual objects by illusory contours) and control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical extent of the stimulus was longer, using the two-alternative forced-choice method. Our findings show that large but not small Kanizsa figures are perceived as compressed, that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles. CONCLUSIONS: Our data suggest that compression of Kanizsa figures has been overestimated in previous research due to methodological artifacts, and highlight the importance of studying perceptual phenomena by multiple methods.

  • doi:10.1371/journal.pone.0006383
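
In the two-alternative forced-choice procedure above, perceived compression is read off the point of subjective equality (PSE): the aspect ratio at which "horizontal longer" responses cross 50%. A generic sketch of that fit; the data points are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented data: physical horizontal/vertical ratio vs. proportion of
    # "horizontal looks longer" responses for stimuli flashed near a saccade.
    ratio = np.array([0.85, 0.92, 1.00, 1.08, 1.15])
    p_horizontal = np.array([0.08, 0.25, 0.46, 0.80, 0.95])

    def logistic(x, pse, slope):
        return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

    (pse, slope), _ = curve_fit(logistic, ratio, p_horizontal, p0=[1.0, 20.0])
    # PSE > 1 means the horizontal extent must be physically longer to look
    # equal, i.e., perceived horizontal compression.
    print(f"PSE = {pse:.3f}")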

Ulrich Nuding; Roger Kalla; Neil G. Muggleton; Ulrich Büttner; Vincent Walsh; Stefan Glasauer

TMS evidence for smooth pursuit gain control by the frontal eye fields Journal Article

In: Cerebral Cortex, vol. 19, no. 5, pp. 1144–1150, 2009.

Smooth pursuit eye movements are used to continuously track slowly moving visual objects. A peculiar property of the smooth pursuit system is the nonlinear increase in sensitivity to changes in target motion with increasing pursuit velocities. We investigated the role of the frontal eye fields (FEFs) in this dynamic gain control mechanism by application of transcranial magnetic stimulation. Subjects were required to pursue a slowly moving visual target whose motion consisted of 2 components: a constant velocity component at 4 different velocities (0, 8, 16, and 24 deg/s) and a superimposed high-frequency sinusoidal oscillation (4 Hz, +/-8 deg/s). Magnetic stimulation of the FEFs reduced not only the overall gain of the system, but also the efficacy of the dynamic gain control. We thus provide the first direct evidence that the FEF population is significantly involved in the nonlinear computation necessary for continuously adjusting the feedforward gain of the pursuit system. We discuss this in relation to current models of smooth pursuit.

  • doi:10.1093/cercor/bhn162
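
The target motion above is a constant-velocity ramp plus a 4 Hz velocity oscillation, so "dynamic gain" can be estimated as the ratio of eye to target oscillation amplitude at exactly that frequency. A toy sketch using a single-frequency Fourier (lock-in) projection; the signals are simulated, not the paper's data:

    import numpy as np

    fs, f_osc = 500.0, 4.0                     # sampling rate; oscillation frequency (Hz)
    t = np.arange(0.0, 4.0, 1.0 / fs)
    target_vel = 16.0 + 8.0 * np.sin(2 * np.pi * f_osc * t)     # 16 deg/s + 4 Hz, +/-8 deg/s
    eye_vel = 16.0 + 3.2 * np.sin(2 * np.pi * f_osc * t - 0.6)  # attenuated, delayed pursuit

    def amplitude_at(signal, freq):
        """Amplitude of the component at `freq` via a Fourier projection."""
        return 2.0 * np.abs(np.mean(signal * np.exp(-2j * np.pi * freq * t)))

    gain_4hz = amplitude_at(eye_vel, f_osc) / amplitude_at(target_vel, f_osc)
    print(f"dynamic gain at 4 Hz ~ {gain_4hz:.2f}")             # 0.40 by construction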

Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo

Emotional scene content drives the saccade generation system reflexively Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 305–323, 2009.

The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster when the cue pointed toward the emotional picture rather than toward the neutral picture. Experiment 2 replicated these findings with a reflexive saccade task, in which abrupt luminosity changes were used as exogenous saccade cues. In Experiment 3, participants performed vertical reflexive saccades that were orthogonal to the emotional-neutral picture locations. Saccade endpoints and trajectories deviated away from the visual field in which the emotional scenes were presented. Experiment 4 showed that computationally modeled visual saliency does not vary as a function of scene content and that inversion abolishes the rapid orienting toward the emotional scenes. Visual confounds cannot thus explain the results. The authors conclude that early saccade target selection and execution processes are automatically influenced by emotional picture content. This reveals processing of meaningful scene content prior to overt attention to the stimulus.

  • doi:10.1037/a0013626

Lauri Nummenmaa; Jukka Hyönä; Jari K. Hietanen

I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way Journal Article

In: Psychological Science, vol. 20, no. 12, pp. 1454–1458, 2009.

This study shows that humans (a) infer other people's movement trajectories from their gaze direction and (b) use this information to guide their own visual scanning of the environment and plan their own movement. In two eye-tracking experiments, participants viewed an animated character walking directly toward them on a street. The character looked constantly to the left or to the right (Experiment 1) or suddenly shifted his gaze from direct to the left or to the right (Experiment 2). Participants had to decide on which side they would skirt the character. They shifted their gaze toward the direction in which the character was not gazing, that is, away from his gaze, and chose to skirt him on that side. Gaze following is not always an obligatory social reflex; social-cognitive evaluations of gaze direction can lead to reversed gaze-following behavior.


Antje Nuthmann; Ralf Engbert

Mindless reading revisited: An analysis based on the SWIFT model of eye-movement control Journal Article

In: Vision Research, vol. 49, no. 3, pp. 322–336, 2009.

In this article, we revisit the mindless reading paradigm from the perspective of computational modeling. In the standard version of the paradigm, participants read sentences in both their normal version as well as the transformed (or mindless) version where each letter is replaced with a z. z-String scanning shares the oculomotor requirements with reading but none of the higher-level lexical and semantic processes. Here we use the z-string scanning task to validate the SWIFT model of saccade generation [Engbert, R., Nuthmann, A., Richter, E., & Kliegl, R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112(4), 777-813] as an example for an advanced theory of eye-movement control in reading. We test the central assumption of spatially distributed processing across an attentional gradient proposed by the SWIFT model. Key experimental results like prolonged average fixation durations in z-string scanning compared to normal reading and the existence of a string-length effect on fixation durations and probabilities were reproduced by the model, which lends support to the model's assumptions on visual processing. Moreover, simulation results for patterns of regressive saccades in z-string scanning confirm SWIFT's concept of activation field dynamics for the selection of saccade targets.

  • doi:10.1016/j.visres.2008.10.022
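
The z-string transformation at the heart of this paradigm is easy to reproduce: every letter becomes z while spacing and punctuation, and hence word shape, are retained (keeping capitals as Z is our assumption):

    def z_transform(sentence: str) -> str:
        """Replace each letter with z (Z if uppercase), preserving spaces
        and punctuation so that word boundaries survive."""
        return "".join(
            ("Z" if ch.isupper() else "z") if ch.isalpha() else ch
            for ch in sentence
        )

    print(z_transform("The quick brown fox jumps."))
    # -> Zzz zzzzz zzzzz zzz zzzzz.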

Antje Nuthmann; Reinhold Kliegl

An examination of binocular reading fixations based on sentence corpus data Journal Article

In: Journal of Vision, vol. 9, no. 5, pp. 31–31, 2009.

Binocular eye movements of normal adult readers were examined as they read single sentences. Analyses of horizontal and vertical fixation disparities indicated that the most prevalent type of disparate fixation is crossed (i.e., the left eye is located further to the right than the right eye) while the left eye frequently fixates somewhat above the right eye. The Gaussian distribution of the binocular fixation point peaked 2.6 cm in front of the plane of text, reflecting the prevalence of horizontally crossed fixations. Fixation disparity accumulates during the course of successive saccades and fixations within a line of text, but only to an extent that does not compromise single binocular vision. In reading, the version and vergence system interact in a way that is qualitatively similar to what has been observed in simple nonreading tasks. Finally, results presented here render it unlikely that vergence movements in reading aim at realigning the eyes at a given saccade target word.

  • doi:10.1167/9.5.31
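
The disparity measures analysed above reduce to simple differences between the two eyes' landing positions: with x increasing to the right, a fixation is horizontally crossed when the left eye lands to the right of the right eye. A small sketch; the sample values are hypothetical, in character positions:

    # Horizontal gaze position of each eye during one fixation
    # (hypothetical, in character positions along the line of text).
    left_x, right_x = 14.6, 13.4

    disparity = left_x - right_x     # > 0: crossed (left eye further right)
    kind = "crossed" if disparity > 0 else "uncrossed" if disparity < 0 else "aligned"
    print(f"horizontal disparity = {disparity:+.1f} characters ({kind})")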

Ming Qian; Mario Aguilar; Karen N. Zachery; Claudio M. Privitera; Stanley A. Klein; Thom Carney; Loren W. Nolte

Decision-level fusion of EEG and pupil features for single-trial visual detection analysis Journal Article

In: IEEE Transactions on Biomedical Engineering, vol. 56, no. 7, pp. 1929–1937, 2009.

Several recent studies have reported success in applying EEG-based signal analysis to achieve accurate single-trial classification of responses to visual target detection. Pupil responses are proposed as a complementary modality that can support improved accuracy of single-trial signal analysis. We develop a pupillary response feature-extraction and -selection procedure that helps to improve the classification performance of a system based only on EEG signal analysis. We apply a two-level linear classifier to obtain cognitive-task-related analysis of EEG and pupil responses. The classification results based on the two modalities are then fused at the decision level. Here, the goal is to support increased classification confidence through the inherent modality complementarities. The fusion results show significant improvement over classification performance based on a single modality.

  • doi:10.1109/TBME.2009.2016670
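
Decision-level fusion, as used above, combines each modality's classifier output rather than pooling raw features. A minimal sketch of one common recipe, a reliability-weighted sum of per-modality scores; the weights and threshold are illustrative assumptions, not the paper's parameters:

    def fuse_decision(eeg_score: float, pupil_score: float,
                      w_eeg: float = 0.7, w_pupil: float = 0.3) -> int:
        """Fuse two single-trial detector outputs (e.g., log-likelihood
        ratios); the weights encode each modality's reliability."""
        fused = w_eeg * eeg_score + w_pupil * pupil_score
        return int(fused > 0.0)      # 1 = "target present", 0 = "absent"

    # A trial where weak EEG evidence is rescued by a clear pupil response:
    print(fuse_decision(eeg_score=-0.2, pupil_score=1.5))   # -> 1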

Carrick C. Williams; Alexander Pollatsek; Kyle R. Cave; Michael J. Stroud

More than just finding color: Strategy in global visual search is shaped by learned target probabilities Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 688–699, 2009.

Abstract | Links | BibTeX

@article{Williams2009,
title = {More than just finding color: Strategy in global visual search is shaped by learned target probabilities},
author = {Carrick C. Williams and Alexander Pollatsek and Kyle R. Cave and Michael J. Stroud},
doi = {10.1037/a0013900},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {3},
pages = {688--699},
abstract = {In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue T) was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search through clusters, the use of color was determined by the probability that the target would appear in a cluster of a certain color type: When the target was equally likely to be in any cluster containing the target color, fixations were directed to those clusters approximately equally, but when targets were more likely to appear in clusters with more target-color items, those clusters were likely to be fixated sooner. (The target probabilities guided search without explicit instruction.) Once fixated, the time spent within a cluster depended on the number of target-color elements, consistent with a search of only those elements. Thus, between-cluster search was influenced by global target probabilities signaled by amount of color or color ratios, whereas within-cluster search was directly driven by presence of the target color.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/a0013900
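
For illustration, the cluster-level target probabilities that the authors argue guide between-cluster search reduce to simple arithmetic over per-cluster counts of target-color items. A small Python sketch with hypothetical counts:

# Target probability per 9-item cluster under the two regimes above.
counts = [1, 3, 5, 0]           # hypothetical target-color items per cluster

has_color = [c > 0 for c in counts]
p_uniform = [1 / sum(has_color) if h else 0.0 for h in has_color]  # equal across clusters with the color
p_weighted = [c / sum(counts) for c in counts]                     # proportional to item count

print(p_uniform)    # predicts roughly equal fixation priority
print(p_weighted)   # predicts priority ordered by count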

Heather Winskel

Reading in Thai: the case of misaligned vowels Journal Article

In: Reading and Writing, vol. 22, no. 1, pp. 1–24, 2009.

Abstract | Links | BibTeX

@article{Winskel2009,
title = {Reading in Thai: the case of misaligned vowels},
author = {Heather Winskel},
doi = {10.1007/s11145-007-9100-z},
year = {2009},
date = {2009-01-01},
journal = {Reading and Writing},
volume = {22},
number = {1},
pages = {1--24},
abstract = {Thai has its own distinctive alphabetic script with syllabic characteristics, as it has implicit vowels for some consonants. Consonants are written in a linear order, but vowels can be written non-linearly above, below or to either side of the consonant. Of particular interest to the current study is that vowels can precede the consonant in writing but follow it in speech, so a mismatch between the spoken and written sequence occurs. To investigate whether there is a processing cost associated with this discrepancy, and its implications for the grain size used when reading Thai, the eye movements of adults reading sentences containing words with and without misaligned vowels were recorded using the EyeLink II tracking system. Twenty-four university students read 50 pairs of misaligned- and aligned-vowel words matched for length and frequency and embedded in the same sentence frames. In addition, rapid naming data were collected from forty adults. Data from forty children aged 6;6-8;6 reading and spelling comparable words were also collected and analysed for errors. Results revealed a processing cost for the more severely misaligned words, where the vowel operates across the syllable, and give support for a syllabic rather than phonemic level of segmentation for reading and spelling in Thai adults and children.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s11145-007-9100-z

Heather Winskel; Ralph Radach; Sudaporn Luksaneeyanawin

Eye movements when reading spaced and unspaced Thai and English: A comparison of Thai-English bilinguals and English monolinguals Journal Article

In: Journal of Memory and Language, vol. 61, no. 3, pp. 339–351, 2009.

Abstract | Links | BibTeX

@article{Winskel2009a,
title = {Eye movements when reading spaced and unspaced Thai and English: A comparison of Thai-English bilinguals and English monolinguals},
author = {Heather Winskel and Ralph Radach and Sudaporn Luksaneeyanawin},
doi = {10.1016/j.jml.2009.07.002},
year = {2009},
date = {2009-01-01},
journal = {Journal of Memory and Language},
volume = {61},
number = {3},
pages = {339--351},
abstract = {The study investigated the eye movements of Thai-English bilinguals when reading both Thai and English with and without interword spaces, in comparison with English monolinguals. Thai is an alphabetic orthography without interword spaces. Participants read sentences with high and low frequency target words embedded in same sentence frames with and without interword spaces. Interword spaces had a selective effect on reading in Thai, as they facilitated word recognition, but did not affect eye guidance and lexical segmentation. Initial saccade landing positions were similar in spaced and unspaced text. As expected, removal of spaces severely disrupted reading in English, as reflected by the eye movement measures, in both bilinguals and monolinguals. Here, initial landing positions were significantly nearer the beginning of the target words when reading unspaced rather than spaced text. Effects were more accentuated in the bilinguals. In sum, results from reading in Thai give qualified support for a facilitatory function of interword spaces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.jml.2009.07.002

Dagmar A. Wismeijer; Casper J. Erkelens

The effect of changing size on vergence is mediated by changing disparity Journal Article

In: Journal of Vision, vol. 9, no. 13, pp. 12:1–10, 2009.

Abstract | Links | BibTeX

@article{Wismeijer2009,
title = {The effect of changing size on vergence is mediated by changing disparity},
author = {Dagmar A. Wismeijer and Casper J. Erkelens},
doi = {10.1167/9.13.12},
year = {2009},
date = {2009-01-01},
journal = {Journal of Vision},
volume = {9},
number = {13},
pages = {12 1--10},
abstract = {In this study, we investigated the effect of changing size on vergence. Erkelens and Regan (1986) proposed that this cue to motion in depth affects vergence in a similar way as it affects perception. The measured effect on vergence was small and we wondered why the vergence system would use changing size as an additional cue to changing disparity. To elucidate the effect of changing size on vergence, we used an annulus carrying both changing size and changing disparity signals to motion in depth. The cues were either congruent or signaled a different depth. The results showed that vergence was affected by changing size, but in the opposite way to how perception was affected. These results were incongruent with those reported by Erkelens and Regan (1986). We therefore additionally measured the effects on vergence of the individual parameters associated with changing size, i.e., stimulus area, retinal eccentricity, and luminance. Stimulus (retinal) eccentricity was inversely related to vergence gain. Luminance, on the other hand, had a smaller but positive relation to vergence gain. Thus, changing size affected the disparity signal twofold: it changed the retinal location of the disparity signal and it changed the strength of the disparity signal (a luminance change). These effects of changing size on disparity can explain both our results (the change in retinal location of the disparity signal) and those of Erkelens and Regan (1986; the change in luminance). We thus conclude that changing size did not in itself contribute to vergence; rather, its effect on vergence was mediated by disparity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/9.13.12

Luke Woloszyn; David L. Sheinberg

Neural dynamics in inferior temporal cortex during a visual working memory task Journal Article

In: Journal of Neuroscience, vol. 29, no. 17, pp. 5494–5507, 2009.

Abstract | Links | BibTeX

@article{Woloszyn2009,
title = {Neural dynamics in inferior temporal cortex during a visual working memory task},
author = {Luke Woloszyn and David L. Sheinberg},
doi = {10.1523/JNEUROSCI.5785-08.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {17},
pages = {5494--5507},
abstract = {Intelligent organisms are capable of tracking objects even when they temporarily disappear from sight, a cognitive capacity commonly referred to as visual working memory (VWM). The neural basis of VWM has been the subject of significant scientific debate, with recent work focusing on the relative roles of posterior visual areas, such as the inferior temporal cortex (ITC), and the prefrontal cortex. Here we reexamined the contribution of ITC to VWM by recording from highly selective individual ITC neurons as monkeys engaged in multiple versions of an occlusion-based memory task. As expected, we found strong evidence for a role of ITC in stimulus encoding. We also found that almost half of these selective cells showed stimulus-selective delay period modulation, with a small but significant fraction exhibiting differential responses even in the presence of simultaneously visible interfering information. When we combined the informational content of multiple neurons, we found that the accuracy with which we could decode memory content increased drastically. The memory epoch analyses suggest that behaviorally relevant visual memories were reinstated in ITC. Furthermore, we observed a population-wide enhancement of neuronal response to a match stimulus compared with the same stimulus presented as a nonmatch. The single-cell enhancement preceded any match effects identified in the local field potential, leading us to speculate that enhancement is the result of neural processing local to ITC. Moreover, match enhancement was only later followed by the more commonly observed match suppression. Altogether, the data support the hypothesis that, when a stimulus is held in memory, ITC neurons are actively biased in favor of task-relevant visual representations and that this bias can immediately impact subsequent recognition events.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/JNEUROSCI.5785-08.2009
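
For illustration, a minimal sketch of the kind of population decoding described above, assuming scikit-learn and synthetic spike counts in place of recorded ITC delay-period data:

# Decode stimulus identity from single neurons vs. the combined population.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons, n_stim = 200, 30, 2
stim = rng.integers(0, n_stim, n_trials)
tuning = rng.normal(0, 0.5, (n_stim, n_neurons))     # per-neuron selectivity
counts = rng.poisson(np.exp(1.0 + tuning[stim]))     # trial x neuron spike counts

clf = LogisticRegression(max_iter=1000)
single = [cross_val_score(clf, counts[:, [i]], stim, cv=5).mean()
          for i in range(n_neurons)]
population = cross_val_score(clf, counts, stim, cv=5).mean()
print(f"best single neuron: {max(single):.2f}, population: {population:.2f}")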

Eiling Yee; Eve Overton; Sharon L. Thompson-Schill

Looking for meaning: Eye movements are sensitive to overlapping semantic features, not association Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 869–874, 2009.

Abstract | Links | BibTeX

@article{Yee2009,
title = {Looking for meaning: Eye movements are sensitive to overlapping semantic features, not association},
author = {Eiling Yee and Eve Overton and Sharon L. Thompson-Schill},
doi = {10.3758/PBR.16.5.869},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {5},
pages = {869--874},
abstract = {Theories of semantic memory differ in the extent to which relationships among concepts are captured via associative or via semantic relatedness. We examined the contributions of these two factors, using a visual world paradigm in which participants selected the named object from a four-picture display. We controlled for semantic relatedness while manipulating associative strength by using the visual world paradigm's analogue to presenting asymmetrically associated pairs in either their forward or backward associative direction (e.g., ham-eggs vs. eggs-ham). Semantically related objects were preferentially fixated regardless of the direction of presentation (and the effect size was unchanged by presentation direction). However, when pairs were associated but not semantically related (e.g., iceberg-lettuce), associated objects were not preferentially fixated in either direction. These findings lend support to theories in which semantic memory is organized according to semantic relatedness (e.g., distributed models) and suggest that association by itself has little effect on this organization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/PBR.16.5.869

Miao-Hsuan Yen; Ralph Radach; Ovid J. L. Tzeng; Daisy L. Hung; Jie-Li Tsai

Early parafoveal processing in reading Chinese sentences Journal Article

In: Acta Psychologica, vol. 131, no. 1, pp. 24–33, 2009.

Abstract | Links | BibTeX

@article{Yen2009,
title = {Early parafoveal processing in reading Chinese sentences},
author = {Miao-Hsuan Yen and Ralph Radach and Ovid J. L. Tzeng and Daisy L. Hung and Jie-Li Tsai},
doi = {10.1016/j.actpsy.2009.02.005},
year = {2009},
date = {2009-01-01},
journal = {Acta Psychologica},
volume = {131},
number = {1},
pages = {24--33},
publisher = {Elsevier B.V.},
abstract = {The possibility that during Chinese reading information is extracted at the beginning of the current fixation was examined in this study. Twenty-four participants read for comprehension while their eye movements were being recorded. A pretarget-target two-character word pair was embedded in each sentence and target word visibility was manipulated in two time intervals (initial 140 ms or after 140 ms) during pretarget viewing. Substantial beginning- and end-of-fixation preview effects were observed together with beginning-of-fixation effects on the pretarget. Apparently parafoveal information at least at the character level can be extracted relatively early during ongoing fixations. Results are highly relevant for ongoing debates on spatially distributed linguistic processing and address fundamental questions about how the human mind solves the task of reading within the constraints of different writing systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.actpsy.2009.02.005
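
For illustration, the timing manipulation above boils down to gating the parafoveal target's visibility against the clock of the current fixation. A schematic Python sketch (condition names are hypothetical; a real experiment would drive this from the tracker's online fixation events):

def target_visible(t_since_fixation_onset_ms: float, condition: str) -> bool:
    # Gate parafoveal target visibility relative to fixation onset.
    if condition == "masked_early":   # hidden during the initial 140 ms
        return t_since_fixation_onset_ms >= 140
    if condition == "masked_late":    # hidden from 140 ms onward
        return t_since_fixation_onset_ms < 140
    return True                       # control: always visible

for t in (0, 100, 140, 250):
    print(t, target_visible(t, "masked_early"), target_visible(t, "masked_late"))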

Weilei Yi; Dana Ballard

Recognizing behavior in hand-eye coordination patterns Journal Article

In: International Journal of Humanoid Robotics, vol. 06, no. 03, pp. 337–359, 2009.

Abstract | Links | BibTeX

@article{Yi2009,
title = {Recognizing behavior in hand-eye coordination patterns},
author = {Weilei Yi and Dana Ballard},
doi = {10.1142/S0219843609001863},
year = {2009},
date = {2009-01-01},
journal = {International Journal of Humanoid Robotics},
volume = {06},
number = {03},
pages = {337--359},
abstract = {Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1142/S0219843609001863
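
For illustration, a stripped-down hidden Markov model forward pass can stand in for the paper's dynamic Bayes network: given a stream of coded gaze/hand observations, it tracks the most likely current task step. The step labels, observation codes, and probabilities below are hypothetical:

import numpy as np

states = ["reach_bread", "spread", "assemble"]   # hypothetical task steps
A = np.array([[0.8, 0.2, 0.0],                   # step-to-step transition probabilities
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
B = np.array([[0.7, 0.2, 0.1],                   # P(observation code | step)
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([1.0, 0.0, 0.0])                   # start in the first step

obs = [0, 0, 1, 1, 2]                            # coded gaze/hand events
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]                # forward recursion (filtering)
    alpha /= alpha.sum()
    print(states[int(np.argmax(alpha))])         # most likely current step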

Gregory J. Zelinsky; Joseph Schmidt

An effect of referential scene constraint on search implies scene segmentation Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1004–1028, 2009.

Abstract | Links | BibTeX

@article{Zelinsky2009,
title = {An effect of referential scene constraint on search implies scene segmentation},
author = {Gregory J. Zelinsky and Joseph Schmidt},
doi = {10.1080/13506280902764315},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1004--1028},
abstract = {Subjects searched aerial images for a UFO target, which appeared hovering over one of five scene regions: Water, fields, foliage, roads, or buildings. Prior to search scene onset, subjects were either told the scene region where the target could be found (specified condition) or not (unspecified condition). Search times were faster and fewer eye movements were needed to acquire targets when the target region was specified. Subjects also distributed their fixations disproportionately in this region and tended to fixate the cued region sooner. We interpret these patterns as evidence for the use of referential scene constraints to partially confine search to a specified scene region. Importantly, this constraint cannot be due to learned associations between the scene and its regions, as these spatial relationships were unpredictable. These findings require the modification of existing theories of scene constraint to include segmentation processes that can rapidly bias search to cued regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/13506280902764315

Peng Zhou; Liqun Gao

Scope processing in Chinese Journal Article

In: Journal of Psycholinguistic Research, vol. 38, no. 1, pp. 11–24, 2009.

Abstract | Links | BibTeX

@article{Zhou2009,
title = {Scope processing in Chinese},
author = {Peng Zhou and Liqun Gao},
doi = {10.1007/s10936-008-9079-x},
year = {2009},
date = {2009-01-01},
journal = {Journal of Psycholinguistic Research},
volume = {38},
number = {1},
pages = {11--24},
abstract = {The standard view maintains that quantifier scope interpretation results from an interaction between different modules: the syntax, the semantics as well as the pragmatics. Thus, by examining the mechanism of quantifier scope interpretation, we will certainly gain some insight into how these different modules interact with one another. To investigate this, two experiments, an offline judgment task and an eye-tracking experiment, were conducted to examine the interpretation of doubly quantified sentences in Chinese, like Mei-ge qiangdao dou qiang-le yi-ge yinhang (Every robber robbed a bank). According to current literature, doubly quantified sentences in Chinese like the above are unambiguous, which can only be interpreted as "for every robber x, there is a bank y, such that x robbed y" (surface scope reading), contrary to their ambiguous English counterparts, which also allow the interpretation that "there is a bank y, such that for every robber x, x robbed y" (inverse scope reading). Specifically, three questions were examined, that is, (i) What is the initial reading of doubly quantified sentences in Chinese? (ii) Whether inverse scope interpretation can be available if appropriate contexts are provided? (iii) What are the processing time courses engaged in quantifier scope interpretation? The results showed that (i) Initially, the language processor computes the surface scope representation and the inverse scope representation in parallel; thus, doubly quantified sentences in Chinese are ambiguous; (ii) The discourse information is not employed in initial processing of relative scope; it serves to evaluate the two representations in reanalysis; (iii) The lexical information of verbs affects their scope-taking patterns. We suggest that these findings provide evidence for the Modular Model, one of the major contenders in the literature on sentence processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s10936-008-9079-x

Eckart Zimmermann; Markus Lappe

Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades Journal Article

In: Journal of Neuroscience, vol. 29, no. 35, pp. 11055–11064, 2009.

Abstract | Links | BibTeX

@article{Zimmermann2009,
title = {Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades},
author = {Eckart Zimmermann and Markus Lappe},
doi = {10.1523/JNEUROSCI.1604-09.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {35},
pages = {11055--11064},
abstract = {When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1523/JNEUROSCI.1604-09.2009

Ming Yan; Eike M. Richter; Hua Shu; Reinhold Kliegl

Readers of Chinese extract semantic information from parafoveal words Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 561–566, 2009.

Abstract | Links | BibTeX

@article{Yan2009a,
title = {Readers of Chinese extract semantic information from parafoveal words},
author = {Ming Yan and Eike M. Richter and Hua Shu and Reinhold Kliegl},
doi = {10.3758/PBR.16.3.561},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {3},
pages = {561--566},
abstract = {Evidence for semantic preview benefit (PB) from parafoveal words has been elusive for reading alphabetic scripts such as English. Here we report semantic PB for noncompound characters in Chinese reading with the boundary paradigm. In addition, PBs for orthographic relatedness and, as a numeric trend, for phonological relatedness were obtained. Results are in agreement with other research suggesting that the Chinese writing system is based on a closer association between graphic form and meaning than is alphabetic script. We discuss implications for notions of serial attention shifts and parallel distributed processing of words during reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/PBR.16.3.561
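
For illustration, the boundary paradigm used above comes down to a single display-change rule: show the preview while gaze is left of an invisible boundary and swap in the target on the first sample that crosses it, so the change happens during the saccade. A schematic Python sketch (in practice the swap is implemented inside the display loop of the eye-tracking software):

def boundary_trial(gaze_x_samples, boundary_x, preview, target):
    # Yield what is displayed at each gaze sample of one trial.
    shown, swapped = preview, False
    for x in gaze_x_samples:
        if not swapped and x > boundary_x:   # gaze has crossed the boundary
            shown, swapped = target, True    # display change during the saccade
        yield shown

samples = [100, 180, 240, 310, 420, 430]     # hypothetical gaze x positions (px)
print(list(boundary_trial(samples, boundary_x=300, preview="preview", target="target")))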

Hyejin Yang; Xin Chen; Gregory J. Zelinsky

A new look at novelty effects: Guiding search away from old distractors Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 3, pp. 554–564, 2009.

Abstract | Links | BibTeX

@article{Yang2009d,
title = {A new look at novelty effects: Guiding search away from old distractors},
author = {Hyejin Yang and Xin Chen and Gregory J. Zelinsky},
doi = {10.3758/APP.71.3.554},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {3},
pages = {554--564},
abstract = {We examined whether search is guided to novel distractors. In Experiment 1, subjects searched for a target among one new and a variable number of old distractors. Search displays in Experiment 2 consisted of an equal number of new, old, and familiar distractors (the latter repeated occasionally). We found that eye movements were preferentially directed to a new distractor on target-absent trials and that subjects tended to immediately fixate a new distractor after leaving the target on target-present trials. In both cases, first fixations on old distractors were consistently less frequent than could be explained by chance. We interpret these patterns as evidence for negative guidance: Subjects learn the visual features associated with the set of old distractors and then guide their search away from these features, ultimately resulting in the preferential fixation of novel distractors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/APP.71.3.554
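
For illustration, the chance baseline behind "less frequent than could be explained by chance" is simple: with one new and k old distractors and unguided selection, every item is equally likely to be fixated first. Display sizes below are hypothetical:

# Chance probability that the first fixation lands on an old distractor.
for k_old in (5, 11, 17):
    n_items = k_old + 1                   # k old distractors plus one new
    print(f"{n_items} items: P(first fixation on old) = {k_old / n_items:.2f}")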

Hyejin Yang; Gregory J. Zelinsky

Visual search is guided to categorically defined targets Journal Article

In: Vision Research, vol. 49, no. 16, pp. 2095–2103, 2009.

Abstract | BibTeX

@article{Yang2009c,
title = {Visual search is guided to categorically defined targets},
author = {Hyejin Yang and Gregory J. Zelinsky},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {16},
pages = {2095--2103},
abstract = {To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
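
For illustration, one simple reading of "a categorical model composed of features common to the target class" is a class-average template that ranks display items by similarity. A Python sketch with random stand-in features (not the visual features the authors propose):

import numpy as np

rng = np.random.default_rng(2)
bear_exemplars = rng.normal(0.5, 0.1, (20, 64))   # hypothetical exemplar feature vectors
template = bear_exemplars.mean(axis=0)            # features common to the class

display_items = rng.normal(0.0, 0.3, (6, 64))
display_items[3] += template                      # item 3 resembles the category

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

priority = [cosine(item, template) for item in display_items]
print(int(np.argmax(priority)))                   # guidance goes to item 3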

Jinmian Yang; Suiping Wang; Hsuan-Chih Chen; Keith Rayner

The time course of semantic and syntactic processing in Chinese sentence comprehension: Evidence from eye movements Journal Article

In: Memory and Cognition, vol. 37, no. 8, pp. 1164–1176, 2009.

Abstract | Links | BibTeX

@article{Yang2009,
title = {The time course of semantic and syntactic processing in Chinese sentence comprehension: Evidence from eye movements},
author = {Jinmian Yang and Suiping Wang and Hsuan-Chih Chen and Keith Rayner},
doi = {10.3758/MC.37.8.1164},
year = {2009},
date = {2009-01-01},
journal = {Memory and Cognition},
volume = {37},
number = {8},
pages = {1164--1176},
abstract = {In the present study, we examined the time course of semantic and syntactic processing when Chinese is read. Readers' eye movements were monitored, and the relation between a single-character critical word and the sentence context was manipulated such that three kinds of sentences were developed: (1) congruent, (2) those with a semantic violation, and (3) those with both a semantic and a syntactic violation. The eye movement data showed that the first-pass reading times were significantly longer for the target region in the two violation conditions than in the congruent condition. Moreover, the semantic+syntactic violation caused more severe disruption than did the pure semantic violation, as reflected by longer first-pass reading times for the target region and by longer go-past times for the target region and posttarget region in the former than in the latter condition. These results suggest that the effects of, at least, a semantic violation can be detected immediately by Chinese readers and that the processing of syntactic and semantic information is distinct in both first-pass and second-pass reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/MC.37.8.1164

Jinmian Yang; Suiping Wang; Yimin Xu; Keith Rayner

Do Chinese readers obtain preview benefit from word n + 2? Evidence from eye movements Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 4, pp. 1192–1204, 2009.

Abstract | Links | BibTeX

@article{Yang2009e,
title = {Do Chinese readers obtain preview benefit from word n + 2? Evidence from eye movements},
author = {Jinmian Yang and Suiping Wang and Yimin Xu and Keith Rayner},
doi = {10.1037/a0013554},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {4},
pages = {1192--1204},
abstract = {The boundary paradigm (K. Rayner, 1975) was used to determine the extent to which Chinese readers obtain information from the right of fixation during reading. As characters are the basic visual unit in written Chinese, they were used as targets in Experiment 1 to examine whether readers obtain preview information from character n + 1 and character n + 2. The results from Experiment 1 suggest they do. In Experiment 2, 2-character target words were used to determine whether readers obtain preview information from word n + 2 as well as word n + 1. Robust preview effects were obtained for word n + 1. There was also evidence from gaze duration (but not first fixation duration), suggesting preview effects for word n + 2. Moreover, there was evidence for parafoveal-on-foveal effects in Chinese reading in both experiments. Implications of these results for models of eye movement control are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/a0013554

Shun Yang; Yu-chi Tai; Hannu Laukkanen; James E. Sheedy

Effects of ocular transverse chromatic aberration on near foveal letter recognition Journal Article

In: Vision Research, vol. 49, no. 23, pp. 2881–2890, 2009.

Abstract | Links | BibTeX

@article{Yang2009a,
title = {Effects of ocular transverse chromatic aberration on near foveal letter recognition},
author = {Shun Yang and Yu-chi Tai and Hannu Laukkanen and James E. Sheedy},
doi = {10.1016/j.visres.2009.09.005},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {23},
pages = {2881--2890},
publisher = {Elsevier Ltd},
abstract = {Transverse chromatic aberration (TCA) smears retinal images of peripheral stimuli. In reading, text information is extracted from both foveal and near fovea, where TCA magnitude is relatively small and variable. The present study investigated whether TCA significantly affects near foveal letter identification. Subjects were briefly presented a string of five letters centered one degree of visual angle to the left or right of fixation. They indicated whether the middle letter was the same as a comparison letter subsequently presented. Letter strings were rendered with a reddish fringe on the left edge of each letter and a bluish fringe on the right edge, consistent with expected left periphery TCA, or with the opposite fringe consistent with expected right periphery TCA. Effect of the color fringing on letter recognition was measured by comparing the response accuracy for fringed and non-fringed stimuli. Effects of lateral interference were examined by manipulating inter-letter spacing and similarity of neighboring letters. Results demonstrated significantly improved response accuracy with the color fringe opposite to the expected TCA, but decreased accuracy when consistent with it. Narrower letter spacing exacerbated the effect of the color fringe, whereas letter similarity did not. Our results suggest that TCA significantly reduces the ability to recognize letters in the near fovea by impeding recognition of individual letters and by enhancing lateral interference between letters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2009.09.005
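
For illustration, color fringes like those used above can be approximated by offsetting the red and blue channels of a dark-on-white glyph in opposite directions, yielding a reddish left edge and a bluish right edge. A crude Python stand-in for proper TCA simulation:

import numpy as np

glyph = np.full((9, 9), 255, dtype=np.uint8)   # white background
glyph[2:7, 3:6] = 0                            # dark "letter" stand-in

rgb = np.stack([np.roll(glyph, 1, axis=1),     # red channel shifted right
                glyph,                         # green channel unshifted
                np.roll(glyph, -1, axis=1)],   # blue channel shifted left
               axis=-1)

print(rgb[4, 3], rgb[4, 5])  # left edge [255 0 0] (reddish), right edge [0 0 255] (bluish)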

Shun-Nan Yang

Effects of gaze-contingent text changes on fixation duration in reading Journal Article

In: Vision Research, vol. 49, no. 23, pp. 2843–2855, 2009.

Abstract | Links | BibTeX

@article{Yang2009b,
title = {Effects of gaze-contingent text changes on fixation duration in reading},
author = {Shun-Nan Yang},
doi = {10.1016/j.visres.2009.08.023},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {23},
pages = {2843--2855},
abstract = {In reading, a text change during an eye fixation can increase the duration of that fixation. This increased fixation duration could result from disrupted text processing or from the effect of perceiving the brief visual change (a visual transient). The present study was designed to test those two hypotheses. Subjects read multiple-line text while their eye movements were monitored. During randomly selected saccades, the text was masked with an alternate page, which was then replaced with a second alternate page, 75 or 150 ms after the onset of the subsequent (critical) fixation. The effects of the initial masking page, the text change during fixation, and the content of the second page on the likelihood of saccade initiation during the critical fixation were measured. Results showed that a text change during fixation resulted in similar bilateral (forward and regressive) saccade suppression regardless of the nature of the first and second pages, or the timing of the text change. This result likely reflects the effect of a low-level visual transient caused by the text change. In addition, there was a delay effect reflecting the content of the initial masking page. How the suppression dissipated after the text change depended on the nature of the first and second pages. These effects are attributed to high-level text processing. The present results suggest that, in reading, visual and cognitive processes can both disrupt saccade initiation. The combination of processing difficulty and visually induced saccade suppression is responsible for the change in fixation duration when a gaze-contingent display change is utilized. Therefore, it is prudent to consider both factors when interpreting the effect of text change on eye movement patterns.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.visres.2009.08.023


Kiyomi Yatabe; Martin J. Pickering; Scott A. McDonald

Lexical processing during saccades in text comprehension Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 62–66, 2009.

Abstract | Links | BibTeX

@article{Yatabe2009,
title = {Lexical processing during saccades in text comprehension},
author = {Kiyomi Yatabe and Martin J. Pickering and Scott A. McDonald},
doi = {10.3758/PBR.16.1.62},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {1},
pages = {62--66},
abstract = {We asked whether people process words during saccades when reading sentences. Irwin (1998) demonstrated that such processing occurs when words are presented in isolation. In our experiment, participants read part of a sentence ending in a high- or low-frequency target word and then made a long (40 degrees) or short (10 degrees) saccade to the rest of the sentence. We found a frequency effect on the target word and the first word after the saccade, but the effect was greater for short than for long saccades. Readers therefore performed more lexical processing during long saccades than during short ones. Hence, lexical processing takes place during saccades in text comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.3758/PBR.16.1.62


Karli K. Watson; Jason H. Ghodasra; Michael L. Platt

Serotonin transporter genotype modulates social reward and punishment in rhesus macaques Journal Article

In: PLoS ONE, vol. 4, no. 1, pp. e4156, 2009.

Abstract | Links | BibTeX

@article{Watson2009a,
title = {Serotonin transporter genotype modulates social reward and punishment in rhesus macaques},
author = {Karli K. Watson and Jason H. Ghodasra and Michael L. Platt},
doi = {10.1371/journal.pone.0004156},
year = {2009},
date = {2009-01-01},
journal = {PLoS ONE},
volume = {4},
number = {1},
pages = {e4156},
abstract = {BACKGROUND: Serotonin signaling influences social behavior in both human and nonhuman primates. In humans, variation upstream of the promoter region of the serotonin transporter gene (5-HTTLPR) has recently been shown to influence both behavioral measures of social anxiety and amygdala response to social threats. Here we show that length polymorphisms in 5-HTTLPR predict social reward and punishment in rhesus macaques, a species in which 5-HTTLPR variation is analogous to that of humans. METHODOLOGY/PRINCIPAL FINDINGS: In contrast to monkeys with two copies of the long allele (L/L), monkeys with one copy of the short allele of this gene (S/L) spent less time gazing at face than non-face images, less time looking in the eye region of faces, and had larger pupil diameters when gazing at photos of high- versus low-status male macaques. Moreover, in a novel primed gambling task, presentation of photos of high status male macaques promoted risk-aversion in S/L monkeys but promoted risk-seeking in L/L monkeys. Finally, as measured by a "pay-per-view" task, S/L monkeys required juice payment to view photos of high status males, whereas L/L monkeys sacrificed fluid to see the same photos. CONCLUSIONS/SIGNIFICANCE: These data indicate that genetic variation in serotonin function contributes to social reward and punishment in rhesus macaques, and thus shapes social behavior in humans and rhesus macaques alike.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1371/journal.pone.0004156


Tamara L. Watson; Bart Krekelberg

The relationship between saccadic suppression and perceptual stability Journal Article

In: Current Biology, vol. 19, no. 12, pp. 1040–1043, 2009.

Abstract | Links | BibTeX

@article{Watson2009,
title = {The relationship between saccadic suppression and perceptual stability},
author = {Tamara L. Watson and Bart Krekelberg},
doi = {10.1016/j.cub.2009.04.052},
year = {2009},
date = {2009-01-01},
journal = {Current Biology},
volume = {19},
number = {12},
pages = {1040--1043},
publisher = {Elsevier Ltd},
abstract = {Introspection makes it clear that we do not see the visual motion generated by our saccadic eye movements. We refer to the lack of awareness of the motion across the retina that is generated by a saccade as saccadic omission [1]: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression, the reduction in visual sensitivity before and during a saccade (see Ross et al. [2] and Wurtz [3] for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression removes perisaccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4, 5]. Our results show that a stimulus that undergoes saccadic omission can nevertheless generate a shape contrast illusion. This illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to be generated at a later stage of visual processing [6]. This shows that perceptual stability is attained without removing stimuli from processing and suggests a conceptually new view of perceptual stability in which perisaccadic stimuli are processed by the early visual system, but these signals are prevented from reaching awareness at a later stage of processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1016/j.cub.2009.04.052


Andrew E. Welchman; Julie M. Harris; Eli Brenner

Extra-retinal signals support the estimation of 3D motion Journal Article

In: Vision Research, vol. 49, no. 7, pp. 782–789, 2009.

Abstract | Links | BibTeX

@article{Welchman2009,
title = {Extra-retinal signals support the estimation of 3D motion},
author = {Andrew E. Welchman and Julie M. Harris and Eli Brenner},
doi = {10.1016/j.visres.2009.02.014},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {7},
pages = {782--789},
publisher = {Elsevier Ltd},
abstract = {In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support such perceptual judgments. Here, we re-evaluate this conclusion, studying motion judgments based on retinal slip and extra-retinal signals. We find that (1) each cue can be sufficient, and (2) retinal and extra-retinal signals are combined when estimating motion-in-depth. This challenges the accepted view that observers are essentially blind to eye vergence changes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1016/j.visres.2009.02.014


Åsa Wengelin; Mark Torrance; Kenneth Holmqvist; Sol Simpson; David Galbraith; Victoria Johansson; Roger Johansson

Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production Journal Article

In: Behavior Research Methods, vol. 41, no. 2, pp. 337–351, 2009.

Abstract | Links | BibTeX

@article{Wengelin2009,
title = {Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production},
author = {Åsa Wengelin and Mark Torrance and Kenneth Holmqvist and Sol Simpson and David Galbraith and Victoria Johansson and Roger Johansson},
doi = {10.3758/BRM.41.2.337},
year = {2009},
date = {2009-01-01},
journal = {Behavior Research Methods},
volume = {41},
number = {2},
pages = {337--351},
abstract = {Writers typically spend a certain proportion of time looking back over the text that they have written. This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog+ TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog+ TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.3758/BRM.41.2.337
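
For readers who want to experiment with this kind of analysis, the sketch below shows the basic alignment step shared by such systems: merging independently timestamped keystroke and fixation logs and flagging fixations that fall inside long typing pauses, a crude proxy for the look-back episodes the article identifies with richer heuristics. It is a minimal illustration, not the ScriptLog+ TimeLine or EyeWrite code; the record layouts and the 1000 ms pause threshold are assumptions.

# Minimal sketch (not the authors' code): align two timestamped event streams
# and flag fixations that occur while typing has paused.
from dataclasses import dataclass

@dataclass
class Keystroke:
    t_ms: int      # timestamp of the key press
    char: str

@dataclass
class Fixation:
    t_ms: int      # fixation onset
    dur_ms: int    # fixation duration
    x: float       # horizontal gaze position (pixels)
    y: float       # vertical gaze position (pixels)

def lookback_fixations(keys, fixations, pause_ms=1000):
    """Return fixations whose onset falls inside an inter-key pause
    longer than pause_ms (an assumed threshold, not from the paper)."""
    key_times = sorted(k.t_ms for k in keys)
    episodes = []
    for fix in fixations:
        # find the typing gap that contains this fixation onset
        prev = max((t for t in key_times if t <= fix.t_ms), default=None)
        nxt = min((t for t in key_times if t > fix.t_ms), default=None)
        if prev is not None and nxt is not None and nxt - prev > pause_ms:
            episodes.append(fix)
    return episodes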


Gregory L. West; Timothy N. Welsh; Jay Pratt

Saccadic trajectories receive online correction: Evidence for a feedback-based system of oculomotor control Journal Article

In: Journal of Motor Behavior, vol. 41, no. 2, pp. 117–126, 2009.

Abstract | Links | BibTeX

@article{West2009,
title = {Saccadic trajectories receive online correction: Evidence for a feedback-based system of oculomotor control},
author = {Gregory L. West and Timothy N. Welsh and Jay Pratt},
doi = {10.3200/JMBR.41.2.117-127},
year = {2009},
date = {2009-01-01},
journal = {Journal of Motor Behavior},
volume = {41},
number = {2},
pages = {117--126},
abstract = {Although a considerable amount of research has investigated the planning and production of saccadic eye movements, it remains unclear whether (a) central planning processes prior to movement onset largely determine these eye movements or (b) they receive online correction during the actual trajectory. To investigate this issue, the authors measured the spatial position of the eye at specific kinematic markers during saccadic movements (i.e., peak acceleration, peak velocity, peak deceleration, saccade endpoint). In 2 experiments, the authors examined saccades ranging in amplitude from 4 to 20 degrees and computed the variability profiles (SD) of eye position at each kinematic marker and the proportion of explained variance (R2) between each kinematic marker and the saccade endpoint. In Experiment 1, the authors examined differences in the kinematic signature of saccadic online control between eye movements made in gap or overlap conditions. In Experiment 2, the authors examined the online control of saccades made from stored target information after delays of 500, 1,500, and 3,500 ms. Findings evince a robust and consistent feedback-based system of online oculomotor control during saccadic eye movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.3200/JMBR.41.2.117-127
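
The paper's two summary statistics are easy to reproduce on synthetic data. The sketch below computes the spatial variability (SD) of eye position at each kinematic marker and the proportion of endpoint variance each marker explains (R^2); the simulated positions and noise levels are illustrative assumptions, not the study's data.

# Minimal sketch of the two summary statistics on simulated trials.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
endpoint = 10 + rng.normal(0, 1.0, n_trials)            # deg, saccade endpoints
markers = {
    "peak_acceleration": 0.20 * endpoint + rng.normal(0, 0.5, n_trials),
    "peak_velocity":     0.50 * endpoint + rng.normal(0, 0.4, n_trials),
    "peak_deceleration": 0.85 * endpoint + rng.normal(0, 0.2, n_trials),
}

for name, pos in markers.items():
    r2 = np.corrcoef(pos, endpoint)[0, 1] ** 2          # explained variance
    print(f"{name}: SD = {pos.std(ddof=1):.2f} deg, R^2 = {r2:.2f}")

# Under online feedback control, R^2 should rise toward 1 for markers that
# occur later in the movement, which is the signature the authors report.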


Carolin Wienrich; Uta Heße; Gisela Müller-Plath

Eye movements and attention in visual feature search with graded target-distractor-similarity Journal Article

In: Journal of Eye Movement Research, vol. 3, no. 1, pp. 1–19, 2009.

Abstract | Links | BibTeX

@article{Wienrich2009,
title = {Eye movements and attention in visual feature search with graded target-distractor-similarity},
author = {Carolin Wienrich and Uta Heße and Gisela Müller-Plath},
doi = {10.16910/jemr.3.1.4},
year = {2009},
date = {2009-01-01},
journal = {Journal of Eye Movement Research},
volume = {3},
number = {1},
pages = {1--19},
abstract = {We conducted a visual feature search experiment in which we varied the target-distractor-similarity in four steps, the number of items (4, 6, and 8), and the presence of the target. In addition to classical search parameters like error rate and reaction time (RT), we analyzed saccade amplitudes, fixation durations, and the portion of reinspections (recurred fixation on an item with at least one different item fixated in between) and refixations (recurred fixation on an item without a different item fixated in between) per trial. When target-distractor-similarity was increased, more errors and longer RTs were observed, accompanied by shorter saccade amplitudes, longer fixation durations, and more reinspections/refixations. An increasing set size resulted in longer saccade amplitudes and shorter fixation durations. Finally, in target-absent trials we observed more reinspections than refixations, whereas in target-present trials refixations were more frequent than reinspections. The results on saccade amplitude and fixation duration support saliency-based search theories that assume an attentional focus variable in size according to task demands and a variable attentional dwell time. Reinspections and refixations seem to be rather a sign of incomplete perceptual processing of items than being due to memory failure.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.16910/jemr.3.1.4


Jan Zwickel; Hermann J. Müller

Eye movements as a means to evaluate and improve robots Journal Article

In: International Journal of Social Robotics, vol. 1, no. 4, pp. 357–366, 2009.

Abstract | Links | BibTeX

@article{Zwickel2009,
title = {Eye movements as a means to evaluate and improve robots},
author = {Jan Zwickel and Hermann J. Müller},
doi = {10.1007/s12369-009-0033-3},
year = {2009},
date = {2009-01-01},
journal = {International Journal of Social Robotics},
volume = {1},
number = {4},
pages = {357--366},
abstract = {With an increase in their capabilities, robots start to play a role in everyday settings. This necessitates a step from a robot-centered (i.e., teaching humans to adapt to robots) to a more human-centered approach (where robots integrate naturally into human activities). Achieving this will increase the effectiveness of robot usage (e.g., shortening the time required for learning), reduce errors, and increase user acceptance. Robotic camera control will play an important role for a more natural and easier-to-interpret behavior, owing to the central importance of gaze in human communication. This study is intended to provide a first step towards improving camera control by a better understanding of human gaze behavior in social situations. To this end, we registered the eye movements of humans watching different types of movies. In all movies, the same two triangles moved around in a self-propelled fashion. However, crucially, some of the movies elicited the attribution of mental states to the triangles, while others did not. This permitted us to directly distinguish eye movement patterns relating to the attribution of mental states in (perceived) social situations, from the patterns in non-social situations. We argue that a better understanding of what characterizes human gaze patterns in social situations will help shape robotic behavior, make it more natural for humans to communicate with robots, and establish joint attention (to certain objects) between humans and robots. In addition, a better understanding of human gaze in social situations will provide a measure for evaluating whether robots are perceived as social agents rather than non-intentional machines. This could help decide which behaviors a robot should display in order to be perceived as a social interaction partner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1007/s12369-009-0033-3


Michael Rohs; Robert Schleicher; Johannes Schöning; Georg Essl; Anja Naumann; Antonio Krüger

Impact of item density on the utility of visual context in magic lens interactions Journal Article

In: Personal and Ubiquitous Computing, vol. 13, no. 8, pp. 633–646, 2009.

Abstract | Links | BibTeX

@article{Rohs2009,
title = {Impact of item density on the utility of visual context in magic lens interactions},
author = {Michael Rohs and Robert Schleicher and Johannes Schöning and Georg Essl and Anja Naumann and Antonio Krüger},
doi = {10.1007/s00779-009-0247-2},
year = {2009},
date = {2009-01-01},
journal = {Personal and Ubiquitous Computing},
volume = {13},
number = {8},
pages = {633--646},
abstract = {This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1007/s00779-009-0247-2


M. Carmen Romano; Marco Thiel; Jürgen Kurths; Konstantin Mergenthaler; Ralf Engbert

Hypothesis test for synchronization: Twin surrogates revisited Journal Article

In: Chaos, vol. 19, no. 1, pp. 1–14, 2009.

Abstract | Links | BibTeX

@article{Romano2009,
title = {Hypothesis test for synchronization: Twin surrogates revisited},
author = {M. Carmen Romano and Marco Thiel and Jürgen Kurths and Konstantin Mergenthaler and Ralf Engbert},
doi = {10.1063/1.3072784},
year = {2009},
date = {2009-01-01},
journal = {Chaos},
volume = {19},
number = {1},
pages = {1--14},
abstract = {The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating either that the centers in the brain stem generating fixational eye movements are closely linked or, alternatively, that there is only one center controlling both eyes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1063/1.3072784
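
The twin-surrogate construction itself is compact enough to sketch. The code below is a minimal illustration rather than the authors' implementation: it builds a recurrence matrix for a one-dimensional series (the paper works with embedded trajectories), identifies twins as points with identical recurrence columns, and generates a surrogate by randomly jumping between twins before stepping forward.

# Minimal sketch of the twin-surrogate idea (not the authors' code).
import numpy as np

def twin_surrogate(x, eps, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x, dtype=float)
    n = len(x)
    # recurrence matrix: R[i, j] = 1 if states i and j are closer than eps
    d = np.abs(x[:, None] - x[None, :])          # 1-D series for brevity
    R = (d < eps).astype(int)
    # twins share an identical recurrence column
    cols = {}
    for i in range(n):
        cols.setdefault(R[:, i].tobytes(), []).append(i)
    twins = {i: group for group in cols.values() for i in group}
    # walk the series, randomly switching to a twin before stepping forward;
    # this preserves the recurrence structure while randomizing the path
    s, i = [], int(rng.integers(n - 1))
    for _ in range(n):
        s.append(x[i])
        i = int(rng.choice(twins[i])) + 1        # jump to a twin, then advance
        if i >= n:
            i = int(rng.integers(n - 1))         # restart if we run off the end
    return np.array(s)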


Jessica Rosenberg; Kathrin Pusch; Rainer Dietrich; Christian Cajochen

The tick-tock of language: Is language processing sensitive to circadian rhythmicity and elevated sleep pressure? Journal Article

In: Chronobiology International, vol. 26, no. 5, pp. 974–991, 2009.

Abstract | Links | BibTeX

@article{Rosenberg2009,
title = {The tick-tock of language: Is language processing sensitive to circadian rhythmicity and elevated sleep pressure?},
author = {Jessica Rosenberg and Kathrin Pusch and Rainer Dietrich and Christian Cajochen},
doi = {10.1080/07420520903044471},
year = {2009},
date = {2009-01-01},
journal = {Chronobiology International},
volume = {26},
number = {5},
pages = {974--991},
abstract = {The master circadian pacemaker emits signals that trigger organ-specific oscillators and, therefore, constitutes a basic biological process that enables organisms to anticipate daily environmental changes by adjusting behavior, physiology, and gene regulation. Although circadian rhythms are well characterized on a physiological level, little is known about circadian modulations of higher cognitive functions. Thus, we investigated circadian repercussions on language performance at the level of minimal syntactic processing by means of German noun phrases in ten young healthy men under the unmasking conditions of a 40 h constant-routine protocol. Language performance for both congruent and incongruent noun phrases displayed a clear diurnal rhythm with a peak performance decrement during the biological night. The nadirs, however, differed such that worst syntactic processing of incongruent noun phrases occurred 3 h earlier (07:00 h) than that of congruent noun phrases (10:00 h). Our results indicate that language performance displays an internally generated circadian rhythmicity with optimal time for parsing language between 3 to 6 h after the habitual wake time, which usually corresponds to 10:00-13:00 h. These results may have important ramifications for establishing optimal times for shiftwork changes or testing linguistically impaired people.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1080/07420520903044471


David Souto; Dirk Kerzel

Evidence for an attentional component in saccadic inhibition of return Journal Article

In: Experimental Brain Research, vol. 195, no. 4, pp. 531–540, 2009.

Abstract | Links | BibTeX

@article{Souto2009,
title = {Evidence for an attentional component in saccadic inhibition of return},
author = {David Souto and Dirk Kerzel},
doi = {10.1007/s00221-009-1824-3},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {195},
number = {4},
pages = {531--540},
abstract = {After presentation of a peripheral cue, facilitation at the cued location is followed by inhibition of return (IOR). It has been recently proposed that IOR may originate at different processing stages for manual and ocular responses, with manual IOR resulting from inhibited attentional orienting, and ocular IOR resulting from inhibited motor preparation. Contrary to this interpretation, we found an effect of target contrast on saccadic IOR. The effect of contrast decreased with increasing reaction times (RTs) for saccades, but not for manual key-press responses. This may have masked the effect of contrast on IOR with saccades in previous studies (Hunt and Kingstone in J Exp Psychol Hum Percept Perform 29:1068-1074, 2003) because only mean RTs were considered. We also found that background luminance strongly influenced the effects of gap and target contrast on IOR.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1007/s00221-009-1824-3


David Souto; Dirk Kerzel

Involuntary cueing effects during smooth pursuit: Facilitation and inhibition of return in oculocentric coordinates Journal Article

In: Experimental Brain Research, vol. 192, no. 1, pp. 25–31, 2009.

Abstract | Links | BibTeX

@article{Souto2009a,
title = {Involuntary cueing effects during smooth pursuit: Facilitation and inhibition of return in oculocentric coordinates},
author = {David Souto and Dirk Kerzel},
doi = {10.1007/s00221-008-1555-x},
year = {2009},
date = {2009-01-01},
journal = {Experimental Brain Research},
volume = {192},
number = {1},
pages = {25--31},
abstract = {Peripheral cues induce facilitation with short cue-target intervals and inhibition of return (IOR) with long cue-target intervals. Modulations of facilitation and IOR by continuous displacements of the eye or the cued stimuli are poorly understood. Previously, the retinal coordinates of the cued location were changed by saccadic or smooth pursuit eye movements during the cue-target interval. In contrast, we probed the relevant coordinates for facilitation and IOR by orthogonally varying object motion (stationary, moving) and eye movement (fixation, smooth pursuit). In the pursuit conditions, cue and target were presented during the ongoing eye movement and observers made a saccade to the target. Importantly, we found facilitation and IOR of similar size during smooth pursuit and fixation. The results suggest that involuntary orienting is possible even when attention has to be allocated to the moving target during smooth pursuit. Comparison of conditions with stabilized and moving objects suggests an oculocentric basis for facilitation as well as inhibition. Facilitation and IOR were reduced with objects that moved on the retina both with smooth pursuit and eye fixation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1007/s00221-008-1555-x


Oleg Špakov; Päivi Majaranta

Scrollable keyboards for casual eye typing Journal Article

In: PsychNology Journal, vol. 7, no. 2, pp. 159–173, 2009.

Abstract | Links | BibTeX

@article{Spakov2009,
title = {Scrollable keyboards for casual eye typing},
author = {Oleg Špakov and Päivi Majaranta},
doi = {10.1017/CBO9781107415324.004},
year = {2009},
date = {2009-01-01},
journal = {PsychNology Journal},
volume = {7},
number = {2},
pages = {159--173},
abstract = {In eye typing, a full on-screen keyboard often takes a lot of space because the inaccuracy in eye tracking requires big keys. We propose “scrollable keyboards” where one or more rows are hidden to save space. Results from an experiment with 8 expert participants show that typing speed was reduced by 51.4% for a 1-row keyboard and 25.3% for a 2-row keyboard compared to a full (3-row) QWERTY. By optimizing the keyboard layout according to letter-to-letter probabilities we were able to reduce the scroll button usage, which further increased the typing speed from 7.26 wpm (QWERTY) to 8.86 wpm (optimized layout) on the 1-row keyboard, and from 11.17 wpm to 12.18 wpm on the 2-row keyboard, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1017/CBO9781107415324.004
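
The layout optimization rests on letter-to-letter probabilities, which are straightforward to estimate. The sketch below, assuming a toy corpus and a made-up scoring rule rather than the authors' procedure, estimates bigram probabilities and scores a candidate visible row by the fraction of transitions whose next letter would force a scroll.

# Minimal sketch: estimate bigram probabilities from a corpus and score a
# one-row layout by how often the next letter falls outside the visible row.
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog " * 100
pairs = Counter(zip(corpus, corpus[1:]))
total = sum(pairs.values())
bigram_p = {pair: c / total for pair, c in pairs.items()}

def scroll_rate(visible_row):
    """Fraction of letter transitions whose next letter is not on the
    visible row (each such transition would require a scroll)."""
    row = set(visible_row)
    return sum(p for (a, b), p in bigram_p.items() if b not in row)

print(f"{scroll_rate('etaoinshr'):.2f}")   # frequent letters together: fewer scrolls
print(f"{scroll_rate('zqxjkvbpy'):.2f}")   # rare letters together: most transitions leave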


Siddharth Srivastava; Guy A. Orban; Patrick A. De Maziere; Peter Janssen

A distinct representation of three-dimensional shape in macaque anterior intraparietal area: Fast, metric, and coarse Journal Article

In: Journal of Neuroscience, vol. 29, no. 34, pp. 10613–10626, 2009.

Abstract | Links | BibTeX

@article{Srivastava2009,
title = {A distinct representation of three-dimensional shape in macaque anterior intraparietal area: Fast, metric, and coarse},
author = {Siddharth Srivastava and Guy A. Orban and Patrick A. De Maziere and Peter Janssen},
doi = {10.1523/JNEUROSCI.6016-08.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {34},
pages = {10613--10626},
abstract = {Differences in the horizontal positions of retinal images—binocular disparity—provide important cues for three-dimensional object recognition and manipulation. We investigated the neural coding of three-dimensional shape defined by disparity in anterior intraparietal (AIP) area. Robust selectivity for disparity-defined slanted and curved surfaces was observed in a high proportion of AIP neurons, emerging at relatively short latencies. The large majority of AIP neurons preserved their three-dimensional shape preference over different positions in depth, a hallmark of higher-order disparity selectivity. Yet both stimulus type (concave–convex) and position in depth could be reliably decoded from the AIP responses. The neural coding of three-dimensional shape was based on first-order (slanted surfaces) and second-order (curved surfaces) disparity selectivity. Many AIP neurons tolerated the presence of disparity discontinuities in the stimulus, but the population of AIP neurons provided reliable information on the degree of curvedness of the stimulus. Finally, AIP neurons preserved their three-dimensional shape preference over different positions in the frontoparallel plane. Thus, AIP neurons extract or have access to three-dimensional object information defined by binocular disparity, consistent with previous functional magnetic resonance imaging data. Unlike the known representation of three-dimensional shape in inferior temporal cortex, the neural representation in AIP appears to emphasize object parameters required for the planning of grasping movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1523/JNEUROSCI.6016-08.2009


Christian Starzynski; Ralf Engbert

Noise-enhanced target discrimination under the influence of fixational eye movements and external noise Journal Article

In: Chaos, vol. 19, no. 1, pp. 1–7, 2009.

Abstract | Links | BibTeX

@article{Starzynski2009,
title = {Noise-enhanced target discrimination under the influence of fixational eye movements and external noise},
author = {Christian Starzynski and Ralf Engbert},
doi = {10.1063/1.3098950},
year = {2009},
date = {2009-01-01},
journal = {Chaos},
volume = {19},
number = {1},
pages = {1--7},
abstract = {Active motor processes are present in many sensory systems to enhance perception. In the human visual system, miniature eye movements are produced involuntarily and unconsciously when we fixate a stationary target. These fixational eye movements represent self-generated noise which serves important perceptual functions. Here we investigate fixational eye movements under the influence of external noise. In a two-choice discrimination task, the target stimulus performed a random walk with varying noise intensity. We observe noise-enhanced discrimination of the target stimulus characterized by a U-shaped curve of manual response times as a function of the diffusion constant of the stimulus. Based on the experiments, we develop a stochastic information-accumulator model for stimulus discrimination in a noisy environment. Our results provide a new explanation for the constructive role of fixational eye movements in visual perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1063/1.3098950
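
As a loose illustration of the modeling idea, the sketch below simulates an information accumulator whose evidence quality depends on a stimulus performing a random walk with diffusion constant D. The gain function (retinal motion counteracts perceptual fading; large excursions take the target away from fixation) and all parameters are invented for illustration; the paper's model is constrained by the experimental response times.

# Minimal sketch: a two-choice information accumulator driven by a stimulus
# on a random walk. Moderate noise can speed responses, mirroring the
# U-shaped RT curve over D that the paper reports.
import numpy as np

def simulate_rt(D, drift=0.2, threshold=15.0, dt=1.0, max_t=2000, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    evidence, pos, t = 0.0, 0.0, 0.0
    while evidence < threshold and t < max_t:
        step = rng.normal(0, np.sqrt(2 * D * dt))   # stimulus random walk
        pos += step
        # retinal motion counteracts fading (helps), but large excursions
        # move the target away from fixation (hurts) -- illustrative rule
        gain = (abs(step) / (abs(step) + 0.5)) * np.exp(-abs(pos) / 10.0)
        evidence += drift * gain * dt + rng.normal(0, 0.1)
        t += dt
    return t

rts = [np.mean([simulate_rt(D) for _ in range(200)]) for D in (0.0, 0.5, 2.0, 8.0)]
print(rts)   # a dip at intermediate D would mirror the reported U-shape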


Adrian Staub; Margaret Grant; Charles Clifton; Keith Rayner

Phonological typicality does not influence fixation durations in normal reading Journal Article

In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 3, pp. 806–814, 2009.

Abstract | Links | BibTeX

@article{Staub2009,
title = {Phonological typicality does not influence fixation durations in normal reading},
author = {Adrian Staub and Margaret Grant and Charles Clifton and Keith Rayner},
doi = {10.1037/a0015123},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {35},
number = {3},
pages = {806--814},
abstract = {Using a word-by-word self-paced reading paradigm, T. A. Farmer, M. H. Christiansen, and P. Monaghan (2006) reported faster reading times for words that are phonologically typical for their syntactic category (i.e., noun or verb) than for words that are phonologically atypical. This result has been taken to suggest that language users are sensitive to subtle relationships between sound and syntactic function and that they make rapid use of this information in comprehension. The present article reports attempts to replicate this result using both eyetracking during normal reading (Experiment 1) and word-by-word self-paced reading (Experiment 2). No hint of a phonological typicality effect emerged on any reading-time measure in Experiment 1, nor did Experiment 2 replicate Farmer et al.'s finding from self-paced reading. Indeed, the differences between condition means were not consistently in the predicted direction, as phonologically atypical verbs were read more quickly than phonologically typical verbs, on most measures. Implications for research on visual word recognition are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1037/a0015123


J. Stephen Higgins; David E. Irwin; Ranxiao Frances Wang; Laura E. Thomas

Visual direction constancy across eyeblinks Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 7, pp. 1607–1617, 2009.

Abstract | Links | BibTeX

@article{StephenHiggins2009,
title = {Visual direction constancy across eyeblinks},
author = {J. Stephen Higgins and David E. Irwin and Ranxiao Frances Wang and Laura E. Thomas},
doi = {10.3758/APP.71.7.1607},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {7},
pages = {1607--1617},
abstract = {When a visual target is displaced during a saccade, the perception of its displacement is suppressed. Its movement can usually only be detected if the displacement is quite large. This suppression can be eliminated by introducing a short blank period after the saccade and before the target reappears in a new location. This has been termed the blanking effect and has been attributed to the use of otherwise ignored extraretinal information. We examined whether similar effects occur with eyeblinks and other visual distractions. We found that suppression of displacement perception can also occur due to a blink (both immediately prior to the blink and during the blink), and that introducing a blank period after a blink reduces the displacement suppression in much the same way as after a saccade. The blanking effect does not occur when other visual distractions are used. This provides further support for the conclusion that the blanking effect arises from extraretinal signals about eye position.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.3758/APP.71.7.1607


Timothy J. Slattery

Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements Journal Article

In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 6, pp. 1969–1975, 2009.

Abstract | Links | BibTeX

@article{Slattery2009,
title = {Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements},
author = {Timothy J. Slattery},
doi = {10.1037/a0016894},
year = {2009},
date = {2009-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {35},
number = {6},
pages = {1969--1975},
abstract = {An eye movement experiment was conducted to investigate whether the processing of a word can be affected by its higher frequency neighbor (HFN). Target words with an HFN (birch) or without one (spruce) were embedded into 2 types of sentence frames: 1 in which the HFN (birth) could fit given the prior sentence context, and 1 in which it could not. The results suggest that words can be misperceived as their HFN, and that top-down information from sentence context strongly modulates this effect. Implications for models of word recognition and eye movements during reading are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


  • doi:10.1037/a0016894

Close

Daniel Smilek; Grayden J. F. Solman; Peter Murawski; Jonathan S. A. Carriere

The eyes fixate the optimal viewing position of task-irrelevant words Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 57–61, 2009.

@article{Smilek2009,
title = {The eyes fixate the optimal viewing position of task-irrelevant words},
author = {Daniel Smilek and Grayden J. F. Solman and Peter Murawski and Jonathan S. A. Carriere},
doi = {10.3758/PBR.16.1.57},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {1},
pages = {57--61},
abstract = {We evaluated whether one's eyes tend to fixate the optimal viewing position (OVP) of words even when the words are task irrelevant and should be ignored. Participants completed the standard Stroop task, in which they named the physical color of congruent and incongruent color words without regard to the meanings of the color words. We monitored the horizontal position of the first eye fixation that occurred after the onset of each color word to evaluate whether these fixations would be at the OVP, which is just to the left of word midline. The results showed that (1) the peak of the distribution of eye fixation positions was to the left of the midline of the color words, (2) the majority of the fixations landed on the left side of the color words, and (3) the average leftward displacement of the first fixation from word midline was greater for longer color words. These results suggest that the eyes tend to fixate the OVP of words even when those words are task irrelevant.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Tim J. Smith; John M. Henderson

Facilitation of return during scene viewing Journal Article

In: Visual Cognition, vol. 17, no. 6-7, pp. 1083–1108, 2009.

@article{Smith2009,
title = {Facilitation of return during scene viewing},
author = {Tim J. Smith and John M. Henderson},
doi = {10.1080/13506280802678557},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {6-7},
pages = {1083--1108},
abstract = {Inhibition of Return (IOR) is a delay in initiating attentional shifts to previously attended locations. It is believed to facilitate attentional exploration of a scene. Computational models of attention have implemented IOR as a simple mechanism for driving attention through a scene. However, evidence for IOR during scene viewing is inconclusive. In this study IOR during scene memorization and in response to sudden onsets at the last (1-back) and penultimate (2-back) fixation location was measured. The results indicate that there is a tendency for saccades to continue the trajectory of the last saccade (Saccadic Momentum), but contrary to the “foraging facilitator” hypothesis of IOR, there is also a distinct population of saccades directed back to the last fixation location, especially in response to onsets. Voluntary return saccades to the 1-back location experience temporal delay but this does not affect their likelihood of occurrence. No localized temporal delay is exhibited at 2-back. These results suggest that IOR exists at the last fixation location during scene memorization but that this temporal delay is overridden by Facilitation of Return. Computational models of attention will fail to capture the pattern of saccadic eye movements during scene viewing unless they model the dynamics of visual encoding and can account for the interaction between Facilitation of Return, Saccadic Momentum, and Inhibition of Return.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

John F. Soechting; John Z. Juveli; Hrishikesh M. Rao

Models for the extrapolation of target motion for manual interception Journal Article

In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1491–1502, 2009.

@article{Soechting2009,
title = {Models for the extrapolation of target motion for manual interception},
author = {John F. Soechting and John Z. Juveli and Hrishikesh M. Rao},
doi = {10.1152/jn.00398.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {3},
pages = {1491--1502},
abstract = {Intercepting a moving target requires a prediction of the target's future motion. This extrapolation could be achieved using sensed parameters of the target motion, e.g., its position and velocity. However, the accuracy of the prediction would be improved if subjects were also able to incorporate the statistical properties of the target's motion, accumulated as they watched the target move. The present experiments were designed to test for this possibility. Subjects intercepted a target moving on the screen of a computer monitor by sliding their extended finger along the monitor's surface. Along any of the six possible target paths, target speed could be governed by one of three possible rules: constant speed, a power law relation between speed and curvature, or the trajectory resulting from a sum of sinusoids. A go signal was given to initiate interception and was always presented when the target had the same speed, irrespective of the law of motion. The dependence of the initial direction of finger motion on the target's law of motion was examined. This direction did not depend on the speed profile of the target, contrary to the hypothesis. However, finger direction could be well predicted by assuming that target location was extrapolated using target velocity and that the amount of extrapolation depended on the distance from the finger to the target. Subsequent analysis showed that the same model of target motion was also used for on-line, visually mediated corrections of finger movement when the motion was initially misdirected.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Hiroyuki Sogo; Yuji Takeda

Effect of spatial inhibition on saccade trajectory depends on location-based mechanisms Journal Article

In: Japanese Psychological Research, vol. 51, no. 1, pp. 35–46, 2009.

@article{Sogo2009,
title = {Effect of spatial inhibition on saccade trajectory depends on location-based mechanisms},
author = {Hiroyuki Sogo and Yuji Takeda},
doi = {10.1111/j.1468-5884.2009.00386.x},
year = {2009},
date = {2009-01-01},
journal = {Japanese Psychological Research},
volume = {51},
number = {1},
pages = {35--46},
abstract = {Saccade trajectory often curves away from a previously attended, inhibited location. A recent study of curved saccades showed that an inhibitory effect prevents ineffective reexamination during serial visual search. The time course of this effect differs from that of a similar inhibitory effect, known as inhibition of return (IOR). In the present study, we examined whether this saccade-related inhibitory effect can operate in an object-based manner (similar to IOR). Using a spatial cueing paradigm, we demonstrated that if a cue is presented on a placeholder that is then shifted from its original location, the saccade trajectory curves away from the original (cued) location (Experiment 1), yet the IOR effect is observed on the cued placeholder (Experiment 2). The inhibitory mechanism that causes curved saccades appears to operate in a location-based manner, whereas the mechanism underlying IOR appears to operate in an object-based manner. We propose that these inhibitory mechanisms work in a complementary fashion to guide eye movements efficiently under conditions of a dynamic visual environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Joo-Hyun Song; Robert M. McPeek

Eye-hand coordination during target selection in a pop-out visual search Journal Article

In: Journal of Neurophysiology, vol. 102, no. 5, pp. 2681–2692, 2009.

@article{Song2009,
title = {Eye-hand coordination during target selection in a pop-out visual search},
author = {Joo-Hyun Song and Robert M. McPeek},
doi = {10.1152/jn.91352.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {102},
number = {5},
pages = {2681--2692},
abstract = {We examined the coordination of saccades and reaches in a visual search task in which monkeys were rewarded for reaching to an odd-colored target among distractors. Eye movements were unconstrained, and monkeys typically made one or more saccades before initiating a reach. Target selection for reaching and saccades was highly correlated with the hand and eyes landing near the same final stimulus both for correct reaches to the target and for incorrect reaches to a distractor. Incorrect reaches showed a bias in target selection: they were directed to the distractor in the same hemifield as the target more often than to other distractors. A similar bias was seen in target selection for the initial saccade in correct reaching trials with multiple saccades. We also examined the temporal coupling of saccades and reaches. In trials with a single saccade, a reaching movement was made after a fairly stereotyped delay. In multiple-saccade trials, a reach to the target could be initiated near or even before the onset of the final target-directed saccade. In these trials, the initial trajectory of the reach was often directed toward the fixated distractor before veering toward the target around the time of the final saccade. In virtually all cases, the eyes arrived at the target before the hand, and remained fixated until reach completion. Overall, these results are consistent with flexible temporal coupling of saccade and reach initiation, but fairly tight coupling of target selection for the two types of action.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Franziska Schrammel; Sebastian Pannasch; Sven-Thomas Graupner; Andreas Mojzisch; Boris M. Velichkovsky

Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience Journal Article

In: Psychophysiology, vol. 46, no. 5, pp. 922–931, 2009.

@article{Schrammel2009,
title = {Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience},
author = {Franziska Schrammel and Sebastian Pannasch and Sven-Thomas Graupner and Andreas Mojzisch and Boris M. Velichkovsky},
doi = {10.1111/j.1469-8986.2009.00831.x},
year = {2009},
date = {2009-01-01},
journal = {Psychophysiology},
volume = {46},
number = {5},
pages = {922--931},
abstract = {The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expression in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner

Chromatic contrast sensitivity during optokinetic nystagmus, visually enhanced vestibulo-ocular reflex, and smooth pursuit eye movements Journal Article

In: Journal of Neurophysiology, vol. 101, no. 5, pp. 2317–2327, 2009.

@article{Schuetz2009,
title = {Chromatic contrast sensitivity during optokinetic nystagmus, visually enhanced vestibulo-ocular reflex, and smooth pursuit eye movements},
author = {Alexander C. Schütz and Doris I. Braun and Karl R. Gegenfurtner},
doi = {10.1152/jn.91248.2008},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neurophysiology},
volume = {101},
number = {5},
pages = {2317--2327},
abstract = {Recently we showed that sensitivity for chromatic- and high-spatial frequency luminance stimuli is enhanced during smooth-pursuit eye movements (SPEMs). Here we investigated whether this enhancement is a general property of slow eye movements. Besides SPEM there are two other classes of eye movements that operate in a similar range of eye velocities: the optokinetic nystagmus (OKN) is a reflexive pattern of alternating fast and slow eye movements elicited by wide-field visual motion, and the vestibulo-ocular reflex (VOR) stabilizes the gaze during head movements. In a natural environment all three classes of eye movements act synergistically to allow clear central vision during self- and object motion. To test whether the same improvement of chromatic sensitivity occurs during all of these eye movements, we measured human detection performance of chromatic and luminance line stimuli during OKN and contrast sensitivity during VOR and SPEM at comparable velocities. For comparison, performance in the same tasks was tested during fixation. During the slow phase of OKN we found an enhancement of chromatic detection rate similar to that during SPEM, whereas no enhancement was observable during VOR. This result indicates similarities between slow-phase OKN and SPEM, which are distinct from VOR.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner

Object recognition during foveating eye movements Journal Article

In: Vision Research, vol. 49, no. 18, pp. 2241–2253, 2009.

@article{Schuetz2009a,
title = {Object recognition during foveating eye movements},
author = {Alexander C. Schütz and Doris I. Braun and Karl R. Gegenfurtner},
doi = {10.1016/j.visres.2009.05.022},
year = {2009},
date = {2009-01-01},
journal = {Vision Research},
volume = {49},
number = {18},
pages = {2241--2253},
publisher = {Elsevier Ltd},
abstract = {We studied how saccadic and smooth pursuit eye movements affect the recognition of briefly presented letters appearing within the eye movement target. First we compared the recognition performance during steady-state pursuit and during fixation. Single letters were presented for seven different durations ranging from 10 to 400 ms and four contrast levels ranging from 5% to 40%. For both types of eye movements the recognition rates increased with duration and contrast, but they were on average 11% lower during pursuit. In daily life humans use a combination of saccadic and smooth pursuit eye movements to foveate a peripheral moving object. To investigate this more natural situation, we presented a peripheral target that was either stationary or moving horizontally, above or below the fixation spot. Participants were asked to saccade to the target and to keep it foveated. The letters were presented at different times relative to the first target directed saccade. As would be expected from retinal masking and motion blur during saccades, the discrimination performance increased with increasing post-saccadic delay. If the target moved and the saccade was followed by pursuit, letter recognition performance was on average 16% lower than if the target was stationary and the saccade was followed by fixation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner

Improved visual sensitivity during smooth pursuit eye movements: Temporal and spatial characteristics Journal Article

In: Visual Neuroscience, vol. 26, no. 3, pp. 329–340, 2009.

@article{Schuetz2009b,
title = {Improved visual sensitivity during smooth pursuit eye movements: Temporal and spatial characteristics},
author = {Alexander C. Schütz and Doris I. Braun and Karl R. Gegenfurtner},
doi = {10.1017/S0952523809990083},
year = {2009},
date = {2009-01-01},
journal = {Visual Neuroscience},
volume = {26},
number = {3},
pages = {329--340},
abstract = {Recently, we showed that contrast sensitivity for color and high–spatial frequency luminance stimuli is enhanced during smooth pursuit eye movements (Schütz et al., 2008). In this study, we investigated the enhancement over a wide range of temporal and spatial frequencies. In Experiment 1, we measured the temporal impulse response function (TIRF) for colored stimuli. The TIRF for pursuit and fixation differed mostly with respect to the gain but not with respect to the natural temporal frequency. Hence, the sensitivity enhancement seems to be rather independent of the temporal frequency of the stimuli. In Experiment 2, we measured the spatial contrast sensitivity function for luminance-defined Gabor patches with spatial frequencies ranging from 0.2 to 7 cpd. We found a sensitivity improvement during pursuit for spatial frequencies above 2–3 cpd. Between 0.5 and 3 cpd, sensitivity was impaired by smooth pursuit eye movements, but no consistent difference was observed below 0.5 cpd. The results of both experiments are consistent with an increased contrast gain of the parvocellular retinogeniculate pathway.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Clive R. Rosenthal; Emma E. Roche-Kelly; Masud Husain; Christopher Kennard

Response-dependent contributions of human primary motor cortex and angular gyrus to manual and perceptual sequence learning Journal Article

In: Journal of Neuroscience, vol. 29, no. 48, pp. 15115–15125, 2009.

@article{Rosenthal2009,
title = {Response-dependent contributions of human primary motor cortex and angular gyrus to manual and perceptual sequence learning},
author = {Clive R. Rosenthal and Emma E. Roche-Kelly and Masud Husain and Christopher Kennard},
doi = {10.1523/JNEUROSCI.2603-09.2009},
year = {2009},
date = {2009-01-01},
journal = {Journal of Neuroscience},
volume = {29},
number = {48},
pages = {15115--15125},
abstract = {Motor sequence learning on the serial reaction time task involves the integration of response-, stimulus-, and effector-based information. Human primary motor cortex (M1) and the inferior parietal lobule (IPL) have been identified with supporting the learning of effector-dependent and -independent information, respectively. Current neurocognitive data are, however, exclusively based on learning complex sequence information via perceptual-motor responses. Here, we investigated the effects of continuous theta-burst transcranial magnetic stimulation (cTBS)-induced disruption of M1 and the angular gyrus (AG) of the IPL on learning a probabilistic sequence via sequential perceptual-motor responses (experiment 1) or covert orienting of visuospatial attention (experiment 2). Functional effects on manual sequence learning were evident during 75% of training trials in the cTBS M1 condition, whereas cTBS over the AG resulted in interference confined to a midpoint during the training phase. Posttraining direct (declarative) tests of sequence knowledge revealed that cTBS over M1 modulated the availability of newly acquired sequence knowledge, whereby sequence knowledge was implicit in the cTBS M1 condition but was available to conscious awareness in the cTBS AG and control conditions. In contrast, perceptual sequence learning was abolished in the perceptual cTBS AG condition, whereas learning was intact and available to conscious awareness in the cTBS M1 and control conditions. These results show that the right AG had a critical role in perceptual sequence learning, whereas M1 had a causal role in developing experience-dependent functional attributes relevant to conscious knowledge on manual but not perceptual sequence learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

T. Roth; Alexander N. Sokolov; A. Messias; P. Roth; M. Weller; Susanne Trauzettel-Klosinski

Comparing explorative saccade and flicker training in hemianopia: A randomized controlled study Journal Article

In: Neurology, vol. 72, pp. 324–331, 2009.

@article{Roth2009,
title = {Comparing explorative saccade and flicker training in hemianopia: A randomized controlled study},
author = {T. Roth and Alexander N. Sokolov and A. Messias and P. Roth and M. Weller and Susanne Trauzettel-Klosinski},
year = {2009},
date = {2009-01-01},
journal = {Neurology},
volume = {72},
pages = {324--331},
abstract = {Objective: Patients with homonymous hemianopia are disabled on everyday exploratory activities. We examined whether explorative saccade training (EST), compared with flicker-stimulation training (FT), would selectively improve saccadic behavior on the patients' blind side and benefit performance on natural exploratory tasks. Methods: Twenty-eight hemianopic patients were randomly assigned to distinct groups performing for 6 weeks either EST (a digit-search task) or FT (blind-hemifield stimulation by flickering letters). Outcome variables (response times [RTs] during natural search, number of fixations during natural scene exploration, fixation stability, visual fields, and quality-of-life scores) were collected before, directly after, and 6 weeks after training. Results: EST yielded a reduced (post/pre, 47%) digit-search RT for the blind side. Natural search RT decreased (post/pre, 23%) on the blind side but not on the seeing side. After FT, both sides' RT remained unchanged. Only with EST did the number of fixations during natural scene exploration increase toward the blind and decrease on the seeing side (follow-up/pre difference, 238%). Even with the target located on the seeing side, after EST more fixations occurred toward the blind side. The EST group showed decreased (post/pre, 43%) fixation stability and increased (post/pre, 482%) asymmetry of fixations toward the blind side. Visual field size remained constant after both treatments. EST patients reported improvements in social domain. Conclusions: Explorative saccade training selectively improves saccadic behavior, natural search, and scene exploration on the blind side. Flicker-stimulation training does not improve saccadic behavior or visual fields. The findings show substantial benefits of compensatory exploration training, including subjective improvements in mastering daily-life activities, in a randomized controlled trial.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Annie Roy-Charland; Jean Saint-Aubin; Michael A. Lawrence; Raymond M. Klein

Solving the chicken-and-egg problem of letter detection and fixation duration in reading Journal Article

In: Attention, Perception, and Psychophysics, vol. 71, no. 7, pp. 1553–1562, 2009.

@article{RoyCharland2009,
title = {Solving the chicken-and-egg problem of letter detection and fixation duration in reading},
author = {Annie Roy-Charland and Jean Saint-Aubin and Michael A. Lawrence and Raymond M. Klein},
doi = {10.3758/APP.71.7.1553},
year = {2009},
date = {2009-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {71},
number = {7},
pages = {1553--1562},
abstract = {When asked to detect target letters while reading a text, participants miss more letters in frequent function words than in less frequent content words. According to the truncation assumption that characterizes most models of this effect, misses occur when word-processing time is shorter than letter-processing time. Fixation durations for detections and omissions were compared with fixation durations from a baseline condition when participants were searching for a target letter embedded in different words. Although, as predicted by truncation, fixation durations were longer for detections than for omissions, fixation durations for detections were also longer than those for the same words in the baseline condition, demonstrating that longer fixation durations when targets are detected are more likely to be due to demands associated with producing a detection response than to truncation. Also, contrary to predictions from the truncation assumption, the standard deviation of fixation durations for detections was larger than that from the baseline condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Gary S. Rubin; Mary P. Feely

The role of eye movements during reading in patients with age-related macular degeneration (AMD) Journal Article

In: Neuro-Ophthalmology, vol. 33, no. 3, pp. 120–126, 2009.

@article{Rubin2009,
title = {The role of eye movements during reading in patients with age-related macular degeneration (AMD)},
author = {Gary S. Rubin and Mary P. Feely},
doi = {10.1080/01658100902998732},
year = {2009},
date = {2009-01-01},
journal = {Neuro-Ophthalmology},
volume = {33},
number = {3},
pages = {120--126},
abstract = {AMD patients often have particular difficulty reading, even when the text is magnified to compensate for reduced visual acuity. This study explores whether reading performance can be explained by eye movement factors. Forty patients with advanced AMD were tested with a high-speed video eye tracker to evaluate fixation stability and saccadic eye movements. Reading speed was measured for standardized texts viewed at the critical print size. Visual acuity and contrast sensitivity were unrelated to reading speed, but fixation stability, proportion of regressive saccades and size of forward saccades were all significantly associated with reading performance, accounting for 74% of the variance. The implications of these findings for low-vision training programmes are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jennifer D. Ryan; Christina Villate

Building visual representations: The binding of relative spatial relations across time Journal Article

In: Visual Cognition, vol. 17, no. 1-2, pp. 254–272, 2009.

@article{Ryan2009,
title = {Building visual representations: The binding of relative spatial relations across time},
author = {Jennifer D. Ryan and Christina Villate},
doi = {10.1080/13506280802336362},
year = {2009},
date = {2009-01-01},
journal = {Visual Cognition},
volume = {17},
number = {1-2},
pages = {254--272},
abstract = {In this study, the construction of, and subsequent access to, representations regarding the relative spatial and temporal relations among sequentially presented objects was examined using eye movement monitoring. Participants were presented with a series of single objects. Subsequently, a test display revealed all three objects simultaneously and participants judged whether the relative relations were maintained. Eye movements revealed the binding of relations across study images; eye movements transitioned between the location of the presented object and the locations that were previously occupied by objects in prior study images. For the test displays, changes in the relative relations were accurately detected. Eye movements distinguished intact displays from those in which the relations had been altered. Order of fixations to objects in test images mimicked the temporal order in which objects had been studied, but disruption of temporal order was observed for manipulated images. The present findings suggest that memory representations regarding the visual world include information about the relative spatial and temporal relations among objects. Eye movements may be the conduit by which information is integrated into a lasting representation, and by which current information is compared to stored representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Ladislao Salmerón; Thierry Baccino; José J. Cañas; Rafael I. Madrid; Inmaculada Fajardo

Do graphical overviews facilitate or hinder comprehension in hypertext? Journal Article

In: Computers and Education, vol. 53, no. 4, pp. 1308–1319, 2009.

@article{Salmeron2009,
title = {Do graphical overviews facilitate or hinder comprehension in hypertext?},
author = {Ladislao Salmerón and Thierry Baccino and José J. Cañas and Rafael I. Madrid and Inmaculada Fajardo},
doi = {10.1016/j.compedu.2009.06.013},
year = {2009},
date = {2009-01-01},
journal = {Computers and Education},
volume = {53},
number = {4},
pages = {1308--1319},
publisher = {Elsevier Ltd},
abstract = {Educational hypertexts usually include graphical overviews, conveying the structure of the text schematically with the aim of fostering comprehension. Despite the claims about their relevance, there is currently no consensus on the impact that hypertext overviews have on the reader's comprehension. In the present paper we have explored how hypertext overviews might affect comprehension with regard to (a) the time at which students read the overview and (b) the hypertext difficulty. The results from two eye-tracking studies revealed that reading a graphical overview at the beginning of the hypertext is related to an improvement in the participant's comprehension of quite difficult hypertexts, whereas reading an overview at the end of the hypertext is linked to a decrease in the student's comprehension of easier hypertexts. These findings are interpreted in light of the Assimilation Theory and the Active Processing model. Finally, the key educational and hypertext design implications of the results are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Stephen V. Shepherd; Jeffrey T. Klein; Robert O. Deaner; Michael L. Platt

Mirroring of attention by neurons in macaque parietal cortex Journal Article

In: Proceedings of the National Academy of Sciences, vol. 106, no. 23, pp. 9489–9494, 2009.

@article{Shepherd2009,
title = {Mirroring of attention by neurons in macaque parietal cortex},
author = {Stephen V. Shepherd and Jeffrey T. Klein and Robert O. Deaner and Michael L. Platt},
doi = {10.1073/pnas.0900419106},
year = {2009},
date = {2009-01-01},
journal = {Proceedings of the National Academy of Sciences},
volume = {106},
number = {23},
pages = {9489--9494},
abstract = {Macaques, like humans, rapidly orient their attention in the direction other individuals are looking. Both cortical and subcortical pathways have been proposed as neural mediators of social gaze following, but neither pathway has been characterized electrophysiologically in behaving animals. To address this gap, we recorded the activity of single neurons in the lateral intraparietal area (LIP) of rhesus macaques to determine whether and how this area might contribute to gaze following. A subset of LIP neurons mirrored observed attention by firing both when the subject looked in the preferred direction of the neuron, and when observed monkeys looked in the preferred direction of the neuron, despite the irrelevance of the monkey images to the task. Importantly, the timing of these modulations matched the time course of gaze-following behavior. A second population of neurons was suppressed by social gaze cues, possibly subserving task demands by maintaining fixation on the observed face. These observations suggest that LIP contributes to sharing of observed attention and link mirror representations in parietal cortex to a well studied imitative behavior.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Heather Sheridan; Eyal M. Reingold; Meredyth Daneman

Using puns to study contextual influences on lexical ambiguity resolution: Evidence from eye movements Journal Article

In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 875–881, 2009.

@article{Sheridan2009,
title = {Using puns to study contextual influences on lexical ambiguity resolution: Evidence from eye movements},
author = {Heather Sheridan and Eyal M. Reingold and Meredyth Daneman},
doi = {10.3758/PBR.16.5.875},
year = {2009},
date = {2009-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {16},
number = {5},
pages = {875--881},
abstract = {Participants' eye movements were monitored while they read sentences containing biased homographs in either a single-meaning context condition that instantiated the subordinate meaning of the homograph without ruling out the dominant meaning (e.g., "The man with a toothache had a crown made by the best dentist in town") or a dual-meaning pun context condition that supported both the subordinate and dominant meanings (e.g., "The king with a toothache had a crown made by the best dentist in town"). In both of these conditions, the homographs were followed by disambiguating material that supported the subordinate meaning and ruled out the dominant meaning. Fixation times on the homograph were longer in the single-meaning condition than in the dual-meaning condition, whereas the reverse pattern was demonstrated for fixation times on the disambiguating region; these effects were observed as early as first-fixation duration. The findings strongly support the reordered access model of lexical ambiguity resolution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Naoyuki Sato; Yoko Yamaguchi

A computational predictor of human episodic memory based on a theta phase precession network Journal Article

In: PLoS ONE, vol. 4, no. 10, pp. e7536, 2009.

@article{Sato2009,
title = {A computational predictor of human episodic memory based on a theta phase precession network},
author = {Naoyuki Sato and Yoko Yamaguchi},
doi = {10.1371/journal.pone.0007536},
year = {2009},
date = {2009-01-01},
journal = {PLoS ONE},
volume = {4},
number = {10},
pages = {e7536},
abstract = {In the rodent hippocampus, a phase precession phenomenon of place cell firing with the local field potential (LFP) theta is called "theta phase precession" and is considered to contribute to memory formation with spike time dependent plasticity (STDP). On the other hand, in the primate hippocampus, the existence of theta phase precession is unclear. Our computational studies have demonstrated that theta phase precession dynamics could contribute to primate-hippocampal dependent memory formation, such as object-place association memory. In this paper, we evaluate human theta phase precession by using a theory-experiment combined analysis. Human memory recall of object-place associations was analyzed by an individual hippocampal network simulated by theta phase precession dynamics of human eye movement and EEG data during memory encoding. It was found that the computational recall of the resultant network is significantly correlated with human memory recall performance, while other computational predictors without theta phase precession are not significantly correlated with subsequent memory recall. Moreover the correlation is larger than the correlation between human recall and traditional experimental predictors. These results indicate that theta phase precession dynamics are necessary for the better prediction of human recall performance with eye movement and EEG data. In this analysis, theta phase precession dynamics appear useful for the extraction of memory-dependent components from the spatio-temporal pattern of eye movement and EEG data as an associative network. Theta phase precession may be a common neural dynamic between rodents and humans for the formation of environmental memories.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

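For readers who want to connect this entry to the underlying learning rule, the sketch below shows a generic pairwise spike-timing-dependent plasticity (STDP) update in Python. It is a minimal illustration under textbook assumptions (exponential STDP windows; the parameters a_plus, a_minus, and tau are hypothetical), not the theta phase precession network described by Sato and Yamaguchi.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pairwise STDP: potentiate when the presynaptic spike precedes the
    # postsynaptic spike, depress otherwise. Times are in milliseconds;
    # parameter values are illustrative, not taken from the paper.
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)   # pre before post: potentiation
    else:
        w -= a_minus * np.exp(dt / tau)   # post before pre: depression
    return float(np.clip(w, 0.0, 1.0))    # keep the weight bounded

# Example: a presynaptic spike 5 ms before a postsynaptic spike
# strengthens the synapse slightly.
print(stdp_update(0.5, t_pre=100.0, t_post=105.0))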

Joseph Schmidt; Gregory J. Zelinsky

Search guidance is proportional to the categorical specificity of a target cue Journal Article

In: Quarterly Journal of Experimental Psychology, vol. 62, no. 10, pp. 1904–1914, 2009.


@article{Schmidt2009,
title = {Search guidance is proportional to the categorical specificity of a target cue},
author = {Joseph Schmidt and Gregory J. Zelinsky},
doi = {10.1080/17470210902853530},
year = {2009},
date = {2009-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {62},
number = {10},
pages = {1904--1914},
abstract = {Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


2008

Yasuhiro Seya; Hidetoshi Nakayasu; Patrick Patterson

Visual search of trained and untrained drivers in a driving simulator Journal Article

In: Japanese Psychological Research, vol. 50, no. 4, pp. 242–252, 2008.


@article{Seya2008,
title = {Visual search of trained and untrained drivers in a driving simulator},
author = {Yasuhiro Seya and Hidetoshi Nakayasu and Patrick Patterson},
doi = {10.1111/j.1468-5884.2008.00380.x},
year = {2008},
date = {2008-11-01},
journal = {Japanese Psychological Research},
volume = {50},
number = {4},
pages = {242--252},
abstract = {To investigate the effects of driving experience on visual search during driving, we measured eye movements during driving tasks in a driving simulator. We evaluated trained and untrained drivers on selected road section types (for example, intersections and straight roads). Participants in the trained group had received driving training in the simulator before the experiment, whereas those in the untrained group had not. During the experiment, participants were instructed to drive safely in the simulator. Scan path analyses showed that eye positions were less variable in the trained group than in the untrained group. Total eye-movement distances were shorter, and fixation durations were longer, in the trained group than in the untrained group. These results suggest that trained drivers may perceive relevant information efficiently with few eye movements, drawing on anticipation skills and a useful field of view that may have developed through their driving training in the simulator.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

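The group differences reported here rest on two simple scanpath measures, total eye-movement distance and fixation duration, which can be computed directly from a fixation sequence. A minimal Python sketch follows, assuming fixations arrive as (x, y, duration) tuples in pixels and milliseconds; this data format is an assumption for illustration, not the authors' pipeline.

import math

def scanpath_metrics(fixations):
    # fixations: list of (x, y, duration_ms) tuples in temporal order.
    # Returns total inter-fixation distance (pixels) and mean fixation
    # duration (ms).
    total_dist = sum(
        math.dist(fixations[i][:2], fixations[i + 1][:2])
        for i in range(len(fixations) - 1)
    )
    mean_duration = sum(f[2] for f in fixations) / len(fixations)
    return total_dist, mean_duration

# Example: three fixations; shorter total distance and longer durations
# would pattern with the trained group in this study.
print(scanpath_metrics([(100, 200, 250), (400, 210, 310), (420, 500, 180)]))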

S. M. Emrich; J. D. N. Ruppel; N. Al-Aidroos; J. Pratt; S. Ferber

Out with the old: Inhibition of old items in a preview search is limited Journal Article

In: Perception and Psychophysics, vol. 70, no. 8, pp. 1552–1557, 2008.


@article{EMRICH2008,
title = {Out with the old: Inhibition of old items in a preview search is limited},
author = {S. M. Emrich and J. D. N. Ruppel and N. Al-Aidroos and J. Pratt and S. Ferber},
doi = {10.3758/PP.70.8.1552},
year = {2008},
date = {2008-11-01},
journal = {Perception and Psychophysics},
volume = {70},
number = {8},
pages = {1552--1557},
abstract = {If some of the distractors in a visual search task are previewed prior to the presentation of the remaining distractors and the target, search time is reduced relative to when all of the items are displayed simultaneously. Here, we tested whether the ability to preferentially search new items during such a preview search is limited. We confirmed previous studies: The proportion of fixations on old items was significantly less than chance. However, the probability of fixating old locations was negatively affected by increasing the number of previewed distractors, suggesting that inhibition is limited to a small number of old items. Furthermore, the ability to inhibit old locations was limited to the first four fixations, indicating that by the fifth fixation, the resources required to sustain inhibition had been depleted. Together, these findings suggest that inhibition of old items in a preview search is a top-down mediated process dependent on capacity-limited cognitive resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

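The chance baseline used in this study is worth spelling out: if fixations were distributed without regard to item age, the expected proportion of fixations on old items would equal the proportion of old items in the display. A minimal Python sketch of that comparison (function and variable names are hypothetical):

def old_item_fixation_bias(n_old, n_new, fixated_old_flags):
    # fixated_old_flags: one boolean per fixation, True if the fixation
    # landed on a previewed (old) item. Returns the observed proportion
    # of old-item fixations and the chance level implied by the display.
    observed = sum(fixated_old_flags) / len(fixated_old_flags)
    chance = n_old / (n_old + n_new)
    return observed, chance

# Example: 8 old and 8 new items; 2 of 10 fixations land on old items.
obs, chance = old_item_fixation_bias(8, 8, [True] * 2 + [False] * 8)
print(obs, chance)  # 0.2 observed vs. 0.5 chance: old items are inhibited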

R. Godijn; A. F. Kramer

The effect of attentional demands on the antisaccade cost Journal Article

In: Perception and Psychophysics, vol. 70, no. 5, pp. 795–806, 2008.


@article{GODIJN2008,
title = {The effect of attentional demands on the antisaccade cost},
author = {R. Godijn and A. F. Kramer},
doi = {10.3758/PP.70.5.795},
year = {2008},
date = {2008-07-01},
journal = {Perception and Psychophysics},
volume = {70},
number = {5},
pages = {795--806},
abstract = {In the present study, we examined the effect of attentional demands on the antisaccade cost (the latency difference between antisaccades and prosaccades). Participants performed a visual search for a target digit and were required to execute a saccade toward (prosaccade) or away from (antisaccade) the target. The results of Experiment 1 revealed that the antisaccade cost was greater when the target was premasked (i.e., presented through the removal of line segments) than when it appeared as an onset. Furthermore, in premasked target conditions, the antisaccade cost was increased by the presentation of onset distractors. The results of Experiment 2 revealed that the antisaccade cost was greater in a difficult search task (a numeral 2 among 5s) than in an easy one (a 2 among 7s). The findings provide evidence that attentional demands increase the antisaccade cost. We propose that the attentional demands of the search task interfere with the attentional control required to select the antisaccade goal.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

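Since the antisaccade cost is defined here as the latency difference between antisaccades and prosaccades, it reduces to a subtraction of mean latencies per condition. A minimal Python sketch with hypothetical latency values:

from statistics import mean

def antisaccade_cost(anti_latencies_ms, pro_latencies_ms):
    # Latency difference (ms) between antisaccade and prosaccade trials.
    return mean(anti_latencies_ms) - mean(pro_latencies_ms)

# Hypothetical latencies (ms); the study's claim is that this cost grows
# as the attentional demands of the search task increase.
print(antisaccade_cost([310, 295, 330], [240, 255, 250]))  # about 63 ms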

J. Pratt; B. Neggers

Inhibition of return in single and dual tasks: Examining saccadic, keypress, and pointing responses Journal Article

In: Perception and Psychophysics, vol. 70, no. 2, pp. 257–265, 2008.


@article{PRATT2008,
title = {Inhibition of return in single and dual tasks: Examining saccadic, keypress, and pointing responses},
author = {J. Pratt and B. Neggers},
doi = {10.3758/PP.70.2.257},
year = {2008},
date = {2008-02-01},
journal = {Perception and Psychophysics},
volume = {70},
number = {2},
pages = {257--265},
abstract = {Two experiments are reported in which inhibition of return (IOR) was examined with single-response tasks (either manual responses alone or saccadic responses alone) and dual-response tasks (simultaneous manual and saccadic responses). The first experiment, using guided limb movements that require considerable spatial information, showed more IOR for saccades than for pointing responses. In addition, saccadic IOR was reduced with concurrent pointing movements, but manual IOR was not affected by concurrent saccades. Importantly, the arm movements had not yet begun at the time of saccade initiation, indicating that the influence on saccadic IOR is due to arm-movement preparation. In the second experiment, using localization keypress responses that required only minimal spatial information, greater IOR was again found for saccadic than for manual responses, but no effect of concurrent movements was found. These findings add further support for a dissociation between oculomotor and skeletal-motor IOR. Moreover, the results show that the preparation of manual responses tends to mediate saccadic behavior, but only when the manual responses require high levels of spatial accuracy, and that the superior colliculus is the likely neural substrate integrating IOR for eye and arm movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Archana Pradeep; Shery Thomas; Eryl O. Roberts; Frank A. Proudlock; Irene Gottlob

Reduction of congenital nystagmus in a patient after smoking cannabis Journal Article

In: Strabismus, vol. 16, no. 1, pp. 29–32, 2008.


@article{Pradeep2008,
title = {Reduction of congenital nystagmus in a patient after smoking cannabis},
author = {Archana Pradeep and Shery Thomas and Eryl O. Roberts and Frank A. Proudlock and Irene Gottlob},
doi = {10.1080/09273970701821063},
year = {2008},
date = {2008-01-01},
journal = {Strabismus},
volume = {16},
number = {1},
pages = {29--32},
abstract = {INTRODUCTION: Smoking cannabis has been described to reduce acquired pendular nystagmus in multiple sclerosis, but its effect on congenital nystagmus is not known. PURPOSE: To report the effect of smoking cannabis in a case of congenital nystagmus. METHODS: A 19-year-old male with congenital horizontal nystagmus presented to the clinic after smoking 10 mg of cannabis. He claimed that the main reason for smoking cannabis was to improve his vision. At the next clinic appointment, he had not smoked cannabis for 3 weeks. A full ophthalmologic examination and eye movement recordings were performed at each visit. RESULTS: Visual acuity improved by 3 logMAR lines in the left eye and by 2 logMAR lines in the right eye after smoking cannabis. Nystagmus intensities were reduced by 30% in primary position and by 44%, 11%, 10%, and 40% at 20-degree eccentricity to the right, left, elevation, and depression, respectively, after smoking cannabis. CONCLUSION: Cannabis may be beneficial in the treatment of congenital idiopathic nystagmus (CIN). Further research to clarify the safety and efficacy of cannabis in patients with CIN, administered for example by capsule or spray, would be important.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

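Nystagmus intensity is conventionally quantified as amplitude multiplied by frequency, so the percentage reductions reported above follow from a simple before/after comparison. A minimal Python sketch (the recording values below are hypothetical, not the patient's data):

def intensity(amplitude_deg, frequency_hz):
    # Conventional nystagmus intensity: amplitude (deg) x frequency (Hz).
    return amplitude_deg * frequency_hz

def percent_reduction(before, after):
    return 100.0 * (before - after) / before

# Hypothetical primary-position recordings before and after smoking.
before = intensity(4.0, 3.0)   # 12.0 deg*Hz
after = intensity(3.5, 2.4)    # 8.4 deg*Hz
print(percent_reduction(before, after))  # 30.0 (% reduction)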

Heinz-Werner Priess; Sabine Born; Ulrich Ansorge

Inhibition of return after color singletons Journal Article

In: Journal of Eye Movement Research, vol. 5, no. 5, pp. 1–12, 2008.


@article{Priess2008,
title = {Inhibition of return after color singletons},
author = {Heinz-Werner Priess and Sabine Born and Ulrich Ansorge},
year = {2008},
date = {2008-01-01},
journal = {Journal of Eye Movement Research},
volume = {5},
number = {5},
pages = {1--12},
abstract = {Inhibition of return (IOR) is the faster selection of hitherto unattended than of previously attended positions. Some previous studies failed to find evidence for IOR after attention capture by color singletons; others, however, did report IOR effects after color singletons. The current study examines the role of cue relevance in obtaining IOR effects. Using a potentially more sensitive measure, saccadic IOR, we tested for and found IOR after relevant color singleton cues that required an attention shift (Experiment 1). In contrast, irrelevant color singletons failed to produce reliable IOR effects in Experiment 2. Experiment 2 also rules out an alternative explanation of our IOR findings in terms of masking. We discuss our results in light of current theories of IOR.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

