All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2012
Parampal Grewal; Jayalakshmi Viswanathan; Jason J. S. Barton; Linda J. Lanyon Line bisection under an attentional gradient induced by simulated neglect in healthy subjects Journal Article In: Neuropsychologia, vol. 50, no. 6, pp. 1190–1201, 2012. Whether an attentional gradient favouring the ipsilesional side is responsible for the line bisection errors in visual neglect is uncertain. We explored this by using a conjunction-search task on the right side of a computer screen to bias attention while healthy subjects performed line bisection. The first experiment used a probe detection task to confirm that the conjunction-search task created a rightward attentional gradient, as manifest in response times, detection rates, and fixation patterns. In the second experiment subjects performed line bisection with or without a simultaneous conjunction-search task. Fixation patterns in the latter condition were biased rightwards as in visual neglect, and bisection also showed a rightward bias, though modest. A third experiment using the probe detection task again showed that the attentional gradient induced by the conjunction-search task was reduced when subjects also performed line bisection, perhaps explaining the modest effects on bisection bias. Finally, an experiment with briefly viewed pre-bisected lines produced similar results, showing that the small size of the bisection bias was not due to an unlimited view allowing deployment of attentional resources to counteract the conjunction-search task's attentional gradient. These results show that an attentional gradient induced in healthy subjects can produce visual neglect-like visual scanning and a rightward shift of perceived line midpoint, but the modest size of this shift points to limitations of this physiological model in simulating the pathologic effects of visual neglect.
Marc Grosjean; Gerhard Rinkenauer; Stephanie Jainta Where do the eyes really go in the hollow-face illusion? Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e44706, 2012. The hollow-face illusion refers to the finding that people typically perceive a concave (hollow) mask as being convex, despite the presence of binocular disparity cues that indicate the contrary. Unlike other illusions of depth, recent research has suggested that the eyes tend to converge at perceived, rather than actual, depths. However, technical and methodological limitations prevented one from knowing whether disparity cues may still have influenced vergence. In the current study, we presented participants with virtual normal or hollow masks and asked them to fixate the tip of the face's nose until they had indicated whether they perceived it as pointing towards or away from them. The results showed that the direction of vergence was indeed determined by perceived depth, although vergence responses were both somewhat delayed and of smaller amplitude (by a factor of about 0.5) for concave than convex masks. These findings demonstrate how perceived depth can override disparity cues when it comes to vergence, albeit not entirely.
Shaobo Guan; Yu Liu; Ruobing Xia; Mingsha Zhang Covert attention regulates saccadic reaction time by routing between different visual-oculomotor pathways Journal Article In: Journal of Neurophysiology, vol. 107, no. 6, pp. 1748–1755, 2012. Covert attention modulates saccadic performance, e.g., the abrupt onset of a task-irrelevant visual stimulus grabs attention as measured by a decrease in saccadic reaction time (SRT). The attentional advantage bestowed by the task-irrelevant stimulus is short-lived: SRT is actually longer ~200 ms after the onset of a stimulus than it is when no stimulus appears, a phenomenon known as inhibition of return. The mechanism by which attention modulates saccadic reaction is not well understood. Here, we propose two possible mechanisms: by selective routing of the visuomotor signal through different pathways (routing hypothesis) or by general modulation of the speed of visuomotor transformation (shifting hypothesis). To test them, we designed a cue gap paradigm in which a 100-ms gap was introduced between the fixation point disappearance and the target appearance to the conventional cued visual reaction time paradigm. The cue manipulated the location of covert attention, and the gap interval resulted in a bimodal distribution of SRT, with an early mode (express saccade) and a late mode (regular saccade). The routing hypothesis predicts changes in the proportion of express saccades vs. regular saccades, whereas the shifting hypothesis predicts a shift of SRT distribution. The addition of the cue had no effect on mean reaction time of express and regular saccades, but it changed the relative proportion of the two modes. These results demonstrate that the covert attention modification of the mean SRT is largely attributed to selective routing between visuomotor pathways rather than general modulation of the speed of visuomotor transformation.
Katherine Guérard; Jean Saint-Aubin; Marie Poirier Assessing the influence of letter position in reading normal and transposed texts using a letter detection task Journal Article In: Canadian Journal of Experimental Psychology, vol. 66, no. 4, pp. 227–238, 2012. During word recognition, some letters appear to play a more important role than others. Although some studies have suggested that the first and last letters of a word have a privileged status, there is no consensus with regards to the importance of the different letter positions when reading connected text. In the current experiments, we used a simple letter search task to examine the impact of letter position on word identification in connected text using a classic paper and pencil procedure (Experiment 1) and an eye movement monitoring procedure (Experiment 2). In Experiments 3 and 4, a condition with transposed letters was included. Our results show that the first letter of a word is detected more easily than the other letters, and transposing letters in a word revealed the importance of the final letter. It is concluded that both the initial and final letters play a special role in word identification during reading but that the underlying processes might differ.
Scott A. Guerin; Clifford A. Robbins; Adrian W. Gilmore; Daniel L. Schacter Retrieval failure contributes to gist-based false recognition Journal Article In: Journal of Memory and Language, vol. 66, no. 1, pp. 68–78, 2012. People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval.
Maria J. S. Guerreiro; Jos J. Adam; Pascal W. M. Van Gerven Automatic selective attention as a function of sensory modality in aging Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 67, no. 2, pp. 194–202, 2012. Objectives. It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975–1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Methods. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left–right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. Results. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. Discussion. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention.
A. Guillaume Saccadic inhibition is accompanied by large and complex amplitude modulations when induced by visual backward masking Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–20, 2012. Saccadic inhibition refers to the strong temporary decrease in saccadic initiation observed when a visual distractor appears shortly after the onset of a saccadic target. Here, to gain a better understanding of this phenomenon, we assessed whether saccade amplitude changes could accompany these modulations of latency distributions. As previous studies on the saccadic system using visual backward masking (a protocol in which the mask appears shortly after the target) showed latency increases and amplitude changes, we suspected that this could be a condition in which amplitude changes would accompany saccadic inhibition. We show here that visual backward masking produces a strong saccadic inhibition. In addition, this saccadic inhibition was accompanied by large and complex amplitude changes: a first phase of gain decrease occurred before the saccadic inhibition; when saccades reappeared after the inhibition, they were accurate before rapidly entering into a second phase of gain decrease. We observed changes in saccade kinematics that were consistent with the possibility of saccades being interrupted during these two phases of gain decrease. These results show that the onset of a large stimulus shortly after a first one induces the previously reported saccadic inhibition, but also induces a complex pattern of amplitude changes resulting from a dual amplitude perturbation mechanism with fast and slow components.
Fei Guo; Tim J. Preston; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Feature-independent neural coding of target detection during search of natural scenes Journal Article In: Journal of Neuroscience, vol. 32, no. 28, pp. 9499–9510, 2012. Visual search requires humans to detect a great variety of target objects in scenes cluttered by other objects or the natural environment. It is unknown whether there is a general purpose neural detection mechanism in the brain that codes the presence of a wide variety of categories of objects embedded in natural scenes. We provide evidence for a feature-independent coding mechanism for detecting behaviorally relevant targets in natural scenes in the dorsal frontoparietal network. Pattern classifiers using single-trial fMRI responses in the dorsal frontoparietal network reliably predicted the presence of 368 different target objects and also the observer's choices. Other vision-related areas such as the primary visual cortex, lateral occipital complex, the parahippocampal, and the fusiform gyri did not predict target presence, while high-level association areas related to general purpose decision making, including the dorsolateral prefrontal cortex and anterior cingulate, did. Activity in the intraparietal sulcus, a main area in the dorsal frontoparietal network, correlated with observers' decision confidence and with the task difficulty of individual images. These results cannot be explained by physical differences across images or eye movements. Thus, the dorsal frontoparietal network detects behaviorally relevant targets in natural scenes independent of their defining visual features and may be the human analog of the priority map in monkey lateral intraparietal cortex.
Rashmi Gupta; Jane E. Raymond Emotional distraction unbalances visual processing Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 184–189, 2012. Brain mechanisms used to control nonemotional aspects of cognition may be distinct from those regulating responses to emotional stimuli, with activity of the latter being detrimental to the former. Previous studies have shown that suppression of irrelevant emotional stimuli produces a largely right-lateralized pattern of frontal brain activation, thus predicting that emotional stimuli may invoke temporary, lateralized costs to performance on nonemotional cognitive tasks. To test this, we briefly (85 ms) presented a central, irrelevant, expressive (angry, happy, sad, or fearful) or neutral face 100 ms prior to a letter search task. The presentation of emotional versus neutral faces slowed subsequent search for targets appearing in the left, but not the right, hemifield, supporting the notion of a right-lateralized, emotional response mechanism that competes for control with nonemotional cognitive processes. Presentation of neutral, scrambled, or inverted neutral faces produced no such laterality effects on visual search response times.
Josselin Gautier; O. Le Meur A time-dependent saliency model combining center and depth biases for 2D and 3D viewing conditions Journal Article In: Cognitive Computation, vol. 4, no. 2, pp. 141–156, 2012. The role of the binocular disparity in the deployment of visual attention is examined in this paper. To address this point, we compared eye tracking data recorded while observers viewed natural images in 2D and 3D conditions. The influence of disparity on saliency, center and depth biases is first studied. Results show that visual exploration is affected by the introduction of the binocular disparity. In particular, participants tend to look first at closer areas in 3D condition and then direct their gaze to more widespread locations. Besides this behavioral analysis, we assess the extent to which state-of-the-art models of bottom-up visual attention predict where observers looked in both viewing conditions. To improve their ability to predict salient regions, low-level features as well as higher-level foreground/background cues are examined. Results indicate that, following the initial centering response, the foreground feature plays an active role in both the early and middle stages of attention deployment. Importantly, this influence is more pronounced in stereoscopic conditions. It supports the notion of a quasi-instantaneous bottom-up saliency modulated by higher figure/ground processing. Beyond depth information itself, the foreground cue might constitute an early process of “selection for action”. Finally, we propose a time-dependent computational model to predict saliency on still pictures. The proposed approach combines low-level visual features, center and depth biases. It outperforms state-of-the-art models of bottom-up attention.
Stephani Foraker; Gregory L. Murphy Polysemy in sentence comprehension: Effects of meaning dominance Journal Article In: Journal of Memory and Language, vol. 67, no. 4, pp. 407–425, 2012. Words like church are polysemous, having two related senses (a building and an organization). Three experiments investigated how polysemous senses are represented and processed during sentence comprehension. On one view, readers retrieve an underspecified, core meaning, which is later specified more fully with contextual information. On another view, readers retrieve one or more specific senses. In a reading task, context that was neutral or biased towards a particular sense preceded a polysemous word. Disambiguating material consistent with only one sense followed, in a second sentence (Experiment 1) or the same sentence (Experiments 2 and 3). Reading the disambiguating material was faster when it was consistent with that context, and dominant senses were committed to more strongly than subordinate senses. Critically, following neutral context, the continuation was read more quickly when it selected the dominant sense, and the degree of sense dominance partially explained the reading time advantage. Similarity of the senses also affected reading times. Across experiments, we found that sense selection may not be completed immediately following a polysemous word but is completed at a sentence boundary. Overall, the results suggest that readers select an individual sense when reading a polysemous word, rather than a core meaning.
Tom Foulsham; Richard Dewhurst; Marcus Nyström; Halszka Jarodzka; Roger Johansson; Geoffrey Underwood; Kenneth Holmqvist Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach Journal Article In: Journal of Eye Movement Research, vol. 5, no. 3, pp. 1–14, 2012. Complex stimuli and tasks elicit particular eye movement sequences. Previous research has focused on comparing between these scanpaths, particularly in memory and imagery research where it has been proposed that observers reproduce their eye movements when recognizing or imagining a stimulus. However, it is not clear whether scanpath similarity is related to memory performance and which particular aspects of the eye movements recur. We therefore compared eye movements in a picture memory task, using a recently proposed comparison method, MultiMatch, which quantifies scanpath similarity across multiple dimensions including shape and fixation duration. Scanpaths were more similar when the same participant's eye movements were compared from two viewings of the same image than between different images or different participants viewing the same image. In addition, fixation durations were similar within a participant and this similarity was associated with memory performance.
Steven L. Franconeri; Jason M. Scimeca; Jessica C. Roth; Sarah A. Helseth; Lauren E. Kahn Flexible visual processing of spatial relationships Journal Article In: Cognition, vol. 122, no. 2, pp. 210–227, 2012. Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity.
Steven Frisson; Mary Wakefield Psychological essentialist reasoning and perspective taking during reading: A donkey is not a zebra, but a plate can be a clock Journal Article In: Memory & Cognition, vol. 40, no. 2, pp. 297–310, 2012. In an eyetracking study, we examined whether readers use psychological essentialist reasoning and perspective taking online. Stories were presented in which an animal or an artifact was transformed into another animal (e.g., a donkey into a zebra) or artifact (e.g., a plate into a clock). According to psychological essentialism, the essence of the animal did not change in these stories, while the transformed artifact would be thought to have changed categories. We found evidence that readers use this kind of reasoning online: When reference was made to the transformed animal, the nontransformed term ("donkey") was preferred, but the opposite held for the transformed artifact ("clock" was read faster than "plate"). The immediacy of the effect suggests that this kind of reasoning is employed automatically. Perspective taking was examined within the same stories by the introduction of a novel story character. This character, who was naïve about the transformation, commented on the transformed animal or artifact. If the reader were to take this character's perspective immediately and exclusively for reference solving, then only the transformed term ("zebra" or "clock") would be felicitous. However, the results suggested that while this character's perspective could be taken into account, it seems difficult to completely discard one's own perspective at the same time.
Nathan Faivre; Vincent Berthet; Sid Kouider Nonconscious influences from emotional faces: A comparison of visual crowding, masking, and continuous flash suppression Journal Article In: Frontiers in Psychology, vol. 3, pp. 129, 2012. In the study of nonconscious processing, different methods have been used in order to render stimuli invisible. While their properties are well described, the level at which they disrupt nonconscious processing remains unclear. Yet, such accurate estimation of the depth of nonconscious processes is crucial for a clear differentiation between conscious and nonconscious cognition. Here, we compared the processing of facial expressions rendered invisible through gaze-contingent crowding (GCC), masking, and continuous flash suppression (CFS), three techniques relying on different properties of the visual system. We found that both pictures and videos of happy faces suppressed from awareness by GCC were processed such as to bias subsequent preference judgments. The same stimuli manipulated with visual masking and CFS did not significantly bias preference judgments, although they were processed such as to elicit perceptual priming. A significant difference in preference bias was found between GCC and CFS, but not between GCC and masking. These results provide new insights regarding the nonconscious impact of emotional features, and highlight the need for rigorous comparisons between the different methods employed to prevent perceptual awareness.
Nathan Faivre; Sylvain Charron; Paul Roux; Stephane Lehericy; Sid Kouider Nonconscious emotional processing involves distinct neural pathways for pictures and videos Journal Article In: Neuropsychologia, vol. 50, pp. 3736–3744, 2012. Facial expressions are known to impact observers' behavior, even when they are not consciously identifiable. Relying on visual crowding, a perceptual phenomenon whereby peripheral faces become undiscriminable, we show that participants exposed to happy vs. neutral crowded faces rated the pleasantness of subsequent neutral targets according to the facial expression's valence. Using functional magnetic resonance imaging (fMRI) along with psychophysiological interaction analysis, we investigated the neural determinants of this nonconscious preference bias, either induced by static (i.e., pictures) or dynamic (i.e., videos) facial expressions. We found that while static expressions activated primarily the ventral visual pathway (including task-related functional connectivity between the fusiform face area and the amygdala), dynamic expressions triggered the dorsal visual pathway (i.e., posterior parietal cortex) and the substantia innominata, a structure that is contiguous with the dorsal amygdala. As temporal cues are known to improve the processing of visible facial expressions, the absence of ventral activation we observed with crowded videos questions the capacity to integrate facial features and facial motions without awareness. Nevertheless, both static and dynamic facial expressions activated the hippocampus and the orbitofrontal cortex, suggesting that nonconscious preference judgments may arise from the evaluation of emotional context and the computation of aesthetic evaluation.
Claudia Felser; Ian Cunnings Processing reflexives in a second language: The timing of structural and discourse-level constraints Journal Article In: Applied Psycholinguistics, vol. 33, no. 3, pp. 571–603, 2012. We report the results from two eye-movement monitoring experiments examining the processing of reflexive pronouns by proficient German-speaking learners of second language (L2) English. Our results show that the nonnative speakers initially tried to link English argument reflexives to a discourse-prominent but structurally inaccessible antecedent, thereby violating binding condition A. Our native speaker controls, in contrast, showed evidence of applying condition A immediately during processing. Together, our findings show that L2 learners' initial focusing on a structurally inaccessible antecedent cannot be due to first language influence and is also independent of whether the inaccessible antecedent c-commands the reflexive. This suggests that unlike native speakers, nonnative speakers of English initially attempt to interpret reflexives through discourse-based coreference assignment rather than syntactic binding.
Claudia Felser; Ian Cunnings; Claire Batterham; Harald Clahsen The timing of island effects in nonnative sentence processing Journal Article In: Studies in Second Language Acquisition, vol. 34, no. 1, pp. 67–98, 2012. Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension.
Gary Feng Is there a common control mechanism for anti-saccades and reading eye movements? Evidence from distributional analyses Journal Article In: Vision Research, vol. 57, pp. 35–50, 2012. In the saccadic literature, the voluntary control of eye movement involves inhibiting automatic saccadic plans. In contrast, the dominant view in reading is that linguistic processes trigger saccade planning. The present study explores the possibility of a common control mechanism, in which cognitively driven responses compete to inhibit automatic, perceptually driven saccade plans. A probabilistic model is developed to account for empirical distributions of saccadic response time in anti-saccade tasks (Studies 1 and 2) and fixation duration in reading and reading-like tasks (Studies 3 and 4). In all cases the distributions can be decomposed into a perceptually based component and a component sensitive to cognitive demands. Parametric similarities among the models strongly suggest a shared cognitive control mechanism between reading and other voluntary saccadic tasks.
Heather J. Ferguson Eye movements reveal rapid concurrent access to factual and counterfactual interpretations of the world Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 5, pp. 939–961, 2012. Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.
2011
Frederic Benmussa; Charles Aissani; A. -L. Paradis; Jean Lorenceau Coupled dynamics of bistable distant motion displays Journal Article In: Journal of Vision, vol. 11, no. 8, pp. 14–14, 2011. This study explores the extent to which a display changing periodically in perceptual interpretation through smooth periodic physical changes (an inducer) is able to elicit perceptual switches in an intrinsically bistable distant probe display. Four experiments are designed to examine the coupling strength and bistable dynamics with displays of varying degree of ambiguity, similarity, and symmetry (in motion characteristics) as a function of their locations in visual space. The results show that periodic fluctuations of a remote inducer influence a bistable probe and regulate its dynamics through coupling. Coupling strength mainly depends on the relative locations of the probe display and the contextual inducer in the visual field, with stronger coupling when both displays are symmetrical around the vertical meridian and weaker coupling otherwise. Smaller effects of common fate and symmetry are also found. Altogether, the results suggest that long-range interhemispheric connections, presumably involving the corpus callosum, are able to synchronize perceptual transitions across the vertical meridian. If true, bistable dynamics may provide a behavioral method to probe interhemispheric connectivity in behaving humans. Consequences of these findings for studies using stimuli symmetrical around the vertical meridian are evaluated.
Stefanie I. Becker Determinants of dwell time in visual search: Similarity or perceptual difficulty? Journal Article In: PLoS ONE, vol. 6, no. 3, pp. e17740, 2011. @article{Becker2011, The present study examined the factors that determine the dwell times in a visual search task, that is, the duration the gaze remains fixated on an object. It has been suggested that an item's similarity to the search target should be an important determiner of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study decoupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than target-dissimilar stimuli across all target conditions and regardless of the line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control. |
Stefanie I. Becker; Gernot Horstmann; Roger W. Remington Perceptual grouping, not emotion, accounts for search asymmetries with schematic faces Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 6, pp. 1739–1757, 2011. @article{Becker2011a, Several different explanations have been proposed to account for the search asymmetry (SA) for angry schematic faces (i.e., the fact that an angry face target among friendly faces can be found faster than vice versa). The present study critically tested the perceptual grouping account, which holds (a) that the SA is not due to emotional factors but to perceptual differences that render angry faces more salient than friendly faces, and (b) that the SA is mainly attributable to differences in distractor grouping, with angry faces being more difficult to group than friendly faces. In visual search for angry and friendly faces, the number of distractors visible during each fixation was systematically manipulated using the gaze-contingent window technique. The results showed that the SA emerged only when multiple distractors were visible during a fixation, supporting the grouping account. To distinguish between emotional and perceptual factors in the SA, we altered the perceptual properties of the faces (dented-chin face) so that the friendly face became more salient. In line with the perceptual account, the SA was reversed for these faces, showing faster search for a friendly face target. These results indicate that the SA reflects feature-level perceptual grouping, not emotional valence. |
Artem V. Belopolsky; Christel Devue; Jan Theeuwes Angry faces hold the eyes Journal Article In: Visual Cognition, vol. 19, no. 1, pp. 27–36, 2011. @article{Belopolsky2011a, Efficient processing of complex social and biological stimuli associated with threat is crucial for survival. Previous studies have suggested that threatening stimuli such as angry faces not only capture visual attention, but also delay the disengagement of attention from their location. However, in the previous studies disengagement of attention was measured indirectly and was inferred on the basis of delayed manual responses. The present study employed a novel paradigm that allows direct examination of the delayed disengagement hypothesis by measuring the time it takes to disengage the eyes from threatening stimuli. The results showed that participants were indeed slower to make an eye movement away from an angry face presented at fixation than from either a neutral or a happy face. This finding provides converging support that the delay in disengagement of attention is an important component of processing threatening information. |
Artem V. Belopolsky; Jan Theeuwes Selection within visual memory representations activates the oculomotor system Journal Article In: Neuropsychologia, vol. 49, no. 6, pp. 1605–1610, 2011. @article{Belopolsky2011, Humans tend to create and maintain internal representations of the environment that help guide actions during everyday activities. Previous studies have shown that the oculomotor system is involved in coding and maintenance of locations in visual-spatial working memory. In these studies, selection of the relevant location for maintenance in working memory took place on the screen (selecting the location of a dot presented on the screen). The present study extended these findings by showing that the oculomotor system also codes selection of a location from an internal memory representation. Participants first memorized two locations and, after a retention interval, selected one location for further maintenance. The results show that saccade trajectories deviated away from the ultimately remembered location. Furthermore, selection of the location from the memorized representation produced sustained oculomotor preparation towards it. The results show that the oculomotor system is very flexible and plays an active role in coding and maintaining information selected within internal memory representations. |
Boaz M. Ben-David; Craig G. Chambers; Meredyth Daneman; M. Kathleen Pichora-Fuller; Eyal M. Reingold; Bruce A. Schneider Effects of aging and noise on real-time spoken word recognition: Evidence from eye movements Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 54, pp. 243–262, 2011. @article{BenDavid2011, PURPOSE: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. METHOD: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted objects, for example, "Look at the candle." Eye movements captured listeners' ability to differentiate the target noun (candle) from a similar-sounding phonological competitor (e.g., candy or sandal). Manipulations included the presence/absence of noise, the type of phonological overlap in target-competitor pairs, and the number of syllables. RESULTS: Having controlled for age-related differences in word recognition accuracy (by tailoring noise levels), similar online processing profiles were found for younger and older adults when targets were discriminated from competitors that shared onset sounds. Age-related differences were found when target words were differentiated from rhyming competitors and were more extensive in noise. CONCLUSIONS: Real-time spoken word recognition processes appear similar for younger and older adults in most conditions; however, age-related differences may be found in the discrimination of rhyming words (especially in noise), even when there are no age differences in word recognition accuracy. These results highlight the utility of eye movement methodologies for studying speech processing across the life span. |
Nick Berggren; Samuel B. Hutton; Nazanin Derakshan The effects of self-report cognitive failures and cognitive load on antisaccade performance Journal Article In: Frontiers in Psychology, vol. 2, pp. 280, 2011. @article{Berggren2011, Individuals reporting high levels of distractibility in everyday life show impaired performance in standard laboratory tasks measuring selective attention and inhibitory processes. Similarly, increasing cognitive load leads to more errors/distraction in a variety of cognitive tasks. How these two factors interact is currently unclear; highly distractible individuals may be affected more when their cognitive resources are taxed, or load may linearly affect performance for all individuals. We investigated the relationship between self-reported levels of cognitive failures (CF) in daily life and performance in the antisaccade task, a widely used tool for examining attentional control. Levels of concurrent cognitive demand were manipulated using a secondary auditory discrimination task. We found that both levels of self-reported CF and task load increased antisaccade latencies while having no effect on prosaccade eye movements. However, individuals who rated themselves as suffering few daily-life distractions showed a load cost comparable to that of those who experience many. These findings suggest that the likelihood of distraction is governed by the addition of both internal susceptibility and the external current load placed on working memory. |
Raymond Bertram; Victor Kuperman; R. Harald Baayen; Jukka Hyönä The hyphen as a segmentation cue in triconstituent compound processing: It's getting better all the time Journal Article In: Scandinavian Journal of Psychology, vol. 52, no. 6, pp. 530–544, 2011. @article{Bertram2011, Inserting a hyphen in Dutch and Finnish compounds is most often illegal given spelling conventions. However, the current two eye movement experiments on triconstituent Dutch compounds like voetbalbond "footballassociation" (Experiment 1) and triconstituent Finnish compounds like lentokenttätaksi "airporttaxi" (Experiment 2) show that inserting a hyphen at constituent boundaries does not have to be detrimental to compound processing. In fact, when hyphens were inserted at the major constituent boundary (voetbal-bond "football-association"; lentokenttä-taksi "airport-taxi"), processing of the first part (voetbal "football"; lentokenttä "airport") turned out to be faster when it was followed by a hyphen than when it was legally concatenated. Inserting a hyphen caused a delay in later eye movement measures, which is probably due to the illegality of inserting hyphens in normally concatenated compounds. However, in both Dutch and Finnish we found a learning effect over the course of the experiment, such that by the end of the experiments hyphenated compounds were read faster than at the beginning. By the end of the experiment, compounds with a hyphen at the major constituent boundary were actually processed equally fast as (Dutch) or even faster than (Finnish) their concatenated counterparts. In contrast, hyphenation at the minor constituent boundary (voet-balbond "foot-ballassociation"; lento-kenttätaksi "air-porttaxi") was detrimental to compound processing speed throughout the experiment. The results imply that the hyphen may be an efficient segmentation cue and that spelling illegalities can be overcome easily, as long as they make sense. |
Nicola C. Anderson; Evan F. Risko; Alan Kingstone Exploiting human sensitivity to gaze for tracking the eyes Journal Article In: Behavior Research Methods, vol. 43, pp. 843–852, 2011. @article{Anderson2011, Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown webcamera recordings of a person's eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search). |
Richard Andersson; Fernanda Ferreira; John M. Henderson I see what you're saying: The integration of complex speech and scenes during language comprehension Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 208–216, 2011. @article{Andersson2011, The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load. |
Bernhard Angele; Keith Rayner Parafoveal processing of word n + 2 during reading: Do the preceding words matter? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1210–1220, 2011. @article{Angele2011, We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment. |
Jens K. Apel; Gavin F. Revie; Angelo Cangelosi; Rob Ellis; Jeremy Goslin; Martin H. Fischer Attention deployment during memorizing and executing complex instructions Journal Article In: Experimental Brain Research, vol. 214, no. 2, pp. 249–259, 2011. @article{Apel2011, We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects while still listening to further instructions. This rehearsal behavior broke down after four instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution. |
Keith S. Apfelbaum; Sheila E. Blumstein; Bob McMurray Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 141–149, 2011. @article{Apfelbaum2011, Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words. |
Colas N. Authié; Daniel R. Mestre Optokinetic nystagmus is elicited by curvilinear optic flow during high speed curve driving Journal Article In: Vision Research, vol. 51, no. 16, pp. 1791–1800, 2011. @article{Authie2011, When analyzing gaze behavior during curve driving, it is commonly accepted that gaze is mostly located in the vicinity of the tangent point, the point at which gaze direction is tangent to the inside edge of the curve. This approach neglects the fact that the tangent point is actually motionless only in the limit case when the trajectory precisely follows the curve's geometry. In this study, we measured gaze behavior during curve driving, with the general hypothesis that gaze is not static when exposed to a global optical flow due to self-motion. In order to study spatio-temporal aspects of gaze during curve driving, we used a driving simulator coupled to a gaze recording system. Ten participants drove seven runs on a track composed of eight curves of various radii (50, 100, 200 and 500 m), with each radius appearing in both right and left directions. Results showed that average gaze position was, as previously described, located in the vicinity of the tangent point. However, analysis also revealed the presence of a systematic optokinetic nystagmus (OKN) around the tangent point position. The OKN slow phase direction does not match the local optic flow direction, while slow phase speed is about half of the local speed. Higher directional gains are observed when averaging the entire optical flow projected on the simulation display, whereas the best speed gain is obtained for a 2° optic flow area centered on the instantaneous gaze location. The present study confirms that the tangent point is a privileged feature in the dynamic visual scene during curve driving, and underlines a contribution of the global optical flow to gaze behavior during active self-motion. |
Sheena K. Au-Yeung; Valerie Benson; Monica S. Castelhano; Keith Rayner Eye movement sequences during simple versus complex information processing of scenes in autism spectrum disorder Journal Article In: Autism Research and Treatment, vol. 2011, pp. 1–7, 2011. @article{AuYeung2011, Minshew and Goldstein (1998) postulated that autism spectrum disorder (ASD) is a disorder of complex information processing. The current study was designed to investigate this hypothesis. Participants with and without ASD completed two scene perception tasks: a simple “spot the difference” task, where they had to say which one of a pair of pictures had a detail missing, and a complex “which one's weird” task, where they had to decide which one of a pair of pictures looks “weird”. Participants with ASD did not differ from typically developing (TD) participants in their ability to accurately identify the target picture in both tasks. However, analysis of the eye movement sequences showed that participants with ASD viewed scenes differently from normal controls exclusively for the complex task. This difference in eye movement patterns, and the method used to examine different patterns, adds to the knowledge base regarding eye movements and ASD. Our results are in accordance with Minshew and Goldstein's theory that complex, but not simple, information processing is impaired in ASD. |
Narcisse P. Bichot; Matthew T. Heard; Robert Desimone Stimulation of the nucleus accumbens as behavioral reward in awake behaving monkeys Journal Article In: Journal of Neuroscience Methods, vol. 199, no. 2, pp. 265–272, 2011. @article{Bichot2011, It has been known that monkeys will repeatedly press a bar for electrical stimulation in several different brain structures. We explored the possibility of using electrical stimulation in one such structure, the nucleus accumbens, as a substitute for liquid reward in animals performing a complex task, namely visual search. The animals had full access to water in the cage at all times on days when stimulation was used to motivate them. Electrical stimulation was delivered bilaterally at mirror locations in and around the accumbens, and the animals' motivation to work for electrical stimulation was quantified by the number of trials they performed correctly per unit of time. Acute mapping revealed that stimulation over a large area successfully supported behavioral performance during the task. Performance improved with increasing currents until it reached an asymptotic, theoretically maximal level. Moreover, stimulation with chronically implanted electrodes showed that an animal's motivation to work for electrical stimulation was at least equivalent to, and often better than, when it worked for liquid reward while on water control. These results suggest that electrical stimulation in the accumbens is a viable method of reward in complex tasks. Because this method of reward does not necessitate control over water or food intake, it may offer an alternative to the traditional liquid or food rewards in monkeys, depending on the goals and requirements of the particular research project. |
Elina Birmingham; Moran Cerf; Ralph Adolphs Comparing social attention in autism and amygdala lesions: Effects of stimulus and task condition Journal Article In: Social Neuroscience, vol. 6, no. 5-6, pp. 420–435, 2011. @article{Birmingham2011, The amygdala plays a critical role in orienting gaze and attention to socially salient stimuli. Previous work has demonstrated that SM, a patient with rare bilateral amygdala lesions, fails to fixate and make use of information from the eyes in faces. Amygdala dysfunction has also been implicated as a contributing factor in autism spectrum disorders (ASD), consistent with some reports of reduced eye fixations in ASD. Yet, detailed comparisons between ASD and patients with amygdala lesions have not been undertaken. Here we carried out such a comparison, using eye tracking with complex social scenes that contained faces. We presented participants with three task conditions. In the Neutral task, participants had to determine what kind of room the scene took place in. In the Describe task, participants described the scene. In the Social Attention task, participants inferred where people in the scene were directing their attention. SM spent less time looking at the eyes and much more time looking at the mouths than control subjects, consistent with earlier findings. There was also a trend for the ASD group to spend less time on the eyes, although this depended on the particular image and task. Whereas controls and SM looked more at the eyes when the task required social attention, the ASD group did not. This pattern of impairments suggests that SM looks less at the eyes because of a failure in stimulus-driven attention to social features, whereas individuals with ASD look less at the eyes because they are generally insensitive to socially relevant information and fail to modulate attention as a function of task demands. We conclude that the source of the social attention impairment in ASD may arise upstream from the amygdala, rather than in the amygdala itself. |
Hazel I. Blythe; Tuomo Häikiö; Raymond Bertram; Simon P. Liversedge; Jukka Hyönä Reading disappearing text: Why do children refixate words? Journal Article In: Vision Research, vol. 51, no. 1, pp. 84–92, 2011. @article{Blythe2011, We compared Finnish adults' and children's eye movements on long (8-letter) and short (4-letter) target words embedded in sentences, presented either normally or as disappearing text. When reading disappearing text, where refixations did not provide new information, the 8- to 9-year-old children made fewer refixations but more regressions back to long words compared to when reading normal text. This difference was not observed in the adults or 10- to 11-year-old children. We conclude that the younger children required a second visual sample on the long words, and they adapted their eye movement behaviour when reading disappearing text accordingly. |
Mathias Abegg; Dara S. Manoach; Jason J. S. Barton Knowing the future: Partial foreknowledge effects on the programming of prosaccades and antisaccades Journal Article In: Vision Research, vol. 51, no. 1, pp. 215–221, 2011. @article{Abegg2011, Foreknowledge about the demands of an upcoming trial may be exploited to optimize behavioural responses. In the current study we systematically investigated the benefits of partial foreknowledge - that is, when some but not all aspects of a future trial are known in advance. For this we used an ocular motor paradigm with horizontal prosaccades and antisaccades. Predictable sequences were used to create three partial foreknowledge conditions: one with foreknowledge about the stimulus location only, one with foreknowledge about the task set only, and one with foreknowledge about the direction of the required response only. These were contrasted with a condition of no-foreknowledge and a condition of complete foreknowledge about all three parameters. The results showed that the three types of foreknowledge affected saccadic efficiency differently. While foreknowledge about stimulus-location had no effect on efficiency, task foreknowledge had some effect and response-foreknowledge was as effective as complete foreknowledge. Foreknowledge effects on switch costs followed a similar pattern in general, but were not specific for switching of the trial attribute for which foreknowledge was available. We conclude that partial foreknowledge has a differential effect on efficiency, most consistent with preparatory activation of a motor schema in advance of the stimulus, with consequent benefits for both switched and repeated trials. |
David J. Acunzo; John M. Henderson No emotional "Pop-out" effect in natural scene viewing Journal Article In: Emotion, vol. 11, no. 5, pp. 1134–1143, 2011. @article{Acunzo2011, It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided. |
Carlos Aguilar; Eric Castet Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies Journal Article In: Vision Research, vol. 51, no. 9, pp. 997–1012, 2011. @article{Aguilar2011, Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays. |
Mehrnoosh Ahmadi; Mitra Judi; Anahita Khorrami; Javad Mahmoudi-Gharaei; Mehdi Tehrani-Doost Initial orientation of attention towards emotional faces in children with attention deficit hyperactivity disorder Journal Article In: Iranian Journal of Psychiatry, vol. 6, no. 3, pp. 87–91, 2011. @article{Ahmadi2011, OBJECTIVE: Early recognition of negative emotions is considered to be of vital importance. It seems that children with attention deficit hyperactivity disorder have some difficulties recognizing facial emotional expressions, especially negative ones. This study investigated the preference of children with attention deficit hyperactivity disorder for negative (angry, sad) facial expressions compared to normal children. METHOD: Participants were 35 drug-naive boys with ADHD, aged between 6 and 11 years, and 31 matched healthy children. Visual orientation data were recorded while participants viewed face pairs (negative-neutral pairs) shown for 3000 ms. The number of first fixations made to each expression was considered as an index of initial orientation. RESULTS: Group comparisons revealed no difference between the attention deficit hyperactivity disorder group and their matched healthy counterparts in initial orientation of attention. A tendency towards negative emotions was found within the normal group, while no difference was observed between initial allocation of attention toward negative and neutral expressions in children with ADHD. CONCLUSION: Children with attention deficit hyperactivity disorder do not have a significant preference for negative facial expressions. In contrast, normal children have a significant preference for negative facial emotions rather than neutral faces. |
Brian Bartek; Richard L. Lewis; Shravan Vasishth; Mason R. Smith In search of on-line locality effects in sentence comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1178–1198, 2011. @article{Bartek2011, Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing. |
Vanessa Baudiffier; David Caplan; Daniel Gaonac'h; David Chesnet The effect of noun animacy on the processing of unambiguous sentences: Evidence from French relative clauses Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1896–1905, 2011. @article{Baudiffier2011, Two experiments, one using self-paced reading and one using eye tracking, investigated the influence of noun animacy on the processing of subject relative (SR) clauses, object relative (OR) clauses, and object relative clauses with stylistic inversion (OR-SI) in French. Each sentence type was presented in two versions: either with an animate relative clause (RC) subject and an inanimate object (AS/IO), or with an inanimate RC subject and an animate object (IS/AO). There was an interaction between the RC structure and noun animacy. The advantage of SR sentences over OR and OR-SI sentences disappeared in AS/IO sentences. The interaction between animacy and structure occurred in self-paced reading times and in total fixation times on the RCs, but not in first-pass reading times. The results are consistent with a late interaction between animacy and structural processing during parsing and provide data relevant to several models of parsing. |
Sarah J. Bayless; Missy Glover; Margot J. Taylor; Roxane J. Itier Is it in the eyes? Dissociating the role of emotion and perceptual features of emotionally expressive faces in modulating orienting to eye gaze Journal Article In: Visual Cognition, vol. 19, no. 4, pp. 483–510, 2011. @article{Bayless2011, This study investigated the role of the eye region of emotional facial expressions in modulating gaze orienting effects. Eye widening is characteristic of fearful and surprised expressions and may significantly increase the salience of perceived gaze direction. This perceptual bias rather than the emotional valence of certain expressions may drive enhanced gaze orienting effects. In a series of three experiments involving low anxiety participants, different emotional expressions were tested using a gaze-cueing paradigm. Fearful and surprised expressions enhanced the gaze orienting effect compared with happy or angry expressions. Presenting only the eye regions as cueing stimuli eliminated this effect whereas inversion globally reduced it. Both inversion and the use of eyes only attenuated the emotional valence of stimuli without affecting the perceptual salience of the eyes. The findings thus suggest that low-level stimulus features alone are not sufficient to drive gaze orienting modulations by emotion. Rather, they interact with the emotional valence of the expression that appears critical. The study supports the view that rapid processing of fearful and surprised emotional expressions can potentiate orienting to another person's averted gaze in non-anxious people. |
Paul M. Bays; Emma Y. Wu; Masud Husain Storage and binding of object features in visual working memory Journal Article In: Neuropsychologia, vol. 49, pp. 1622–1631, 2011. @article{Bays2011, An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory. A key prediction of such a model is that the absolute error in recalling multiple features of the same object will be correlated, because features belonging to an attended object are all stored, bound together. Here, we tested participants' ability to reproduce from memory both the color and orientation of an object indicated by a location cue. We observed strong independence of errors between feature dimensions even for large memory arrays (6 items), inconsistent with an upper limit on the number of objects held in memory. Examining the pattern of responses in each dimension revealed a gaussian distribution of error centered on the target value that increased in width under higher memory loads. For large arrays, a subset of responses were not centered on the target but instead predominantly corresponded to mistakenly reproducing one of the other features held in memory. These misreporting responses again occurred independently in each feature dimension, consistent with ‘misbinding' due to errors in maintaining the binding information that assigns features to objects. The results support a shared-resource model of working memory, in which increasing memory load incrementally degrades storage of visual information, reducing the fidelity with which both object features and feature bindings are maintained. |
Genna M. Bebko; Steven L. Franconeri; Kevin N. Ochsner; Joan Y. Chiao Look before you regulate: Differential perceptual strategies underlying expressive suppression and cognitive reappraisal Journal Article In: Emotion, vol. 11, no. 4, pp. 732–742, 2011. @article{Bebko2011, Successful emotion regulation is important for maintaining psychological well-being. Although it is known that emotion regulation strategies, such as cognitive reappraisal and expressive suppression, may have divergent consequences for emotional responses, the cognitive processes underlying these differences remain unclear. Here we used eye-tracking to investigate the role of attentional deployment in emotion regulation success. We hypothesized that differences in the deployment of attention to emotional areas of complex visual scenes may be a contributing factor to the differential effects of these two strategies on emotional experience. Eye-movements, pupil size, and self-reported negative emotional experience were measured while healthy young adult participants viewed negative IAPS images and regulated their emotional responses using either cognitive reappraisal or expressive suppression. Consistent with prior work, reappraisers reported feeling significantly less negative than suppressers when regulating emotion as compared to a baseline condition. Across both groups, participants looked away from emotional areas during emotion regulation, an effect that was more pronounced for suppressers. Critically, irrespective of emotion regulation strategy, participants who looked toward emotional areas of a complex visual scene were more likely to experience emotion regulation success. Taken together, these results demonstrate that attentional deployment varies across emotion regulation strategies and that successful emotion regulation depends on the extent to which people look toward emotional content in complex visual scenes. |
Robert G. Alexander; Gregory J. Zelinsky Visual similarity effects in categorical search Journal Article In: Journal of Vision, vol. 11, no. 8, pp. 1–15, 2011. @article{Alexander2011, We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. |
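The computer vision model in Experiment 3 of Alexander and Zelinsky (2011) ranked objects by color, texture, and shape similarity. As an illustration of the color component only, a histogram-intersection similarity between two RGB images can be sketched as follows; this implementation (function name, bin count) is our own assumption, not the authors' model:

```python
import numpy as np

def color_similarity(img_a, img_b, bins=8):
    """Histogram-intersection similarity between two RGB images (values 0-255).

    Each image is reduced to a joint 3-D color histogram; the similarity is
    the overlap of the two normalized histograms, a value in [0, 1] where
    1 means identical color distributions.
    """
    ha, _ = np.histogramdd(img_a.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    hb, _ = np.histogramdd(img_b.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    ha, hb = ha / ha.sum(), hb / hb.sum()   # normalize to probability mass
    return float(np.minimum(ha, hb).sum())  # histogram intersection
```

Ranking distractors by such a score against target-category exemplars, then binning into high/medium/low similarity, mirrors the display construction described above.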
Gerry T. M. Altmann Language can mediate eye movement control within 100 ms, regardless of whether there is anything to move the eyes to Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 190–200, 2011. @article{Altmann2011, The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed; with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man…' or 'the girl…', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be-executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word. |
George J. Andersen; Rui Ni; Zheng Bian; Julie Kang Limits of spatial attention in three-dimensional space and dual-task driving performance Journal Article In: Accident Analysis and Prevention, vol. 43, no. 1, pp. 381–390, 2011. @article{Andersen2011, The present study examined the limits of spatial attention while performing two driving-relevant tasks that varied in depth. The first task was to maintain a fixed headway distance behind a lead vehicle that varied its speed. The second task was to detect a light-change target in an array of lights located above the roadway. In Experiment 1 the light detection task required drivers to encode color and location. The results indicated that reaction time to detect a light-change target increased and accuracy decreased as a function of the horizontal location of the light-change target and as a function of the distance from the driver. In a second experiment the light-change task was changed to a singleton search (detect the onset of a yellow light) and the workload of the car-following task was systematically varied. The results of Experiment 2 indicated that RT increased as a function of task workload, the 2D position of the light-change target, and the distance of the light-change target. A multiple regression analysis indicated that the effect of distance on light detection performance was not due to changes in the projected size of the light target. In Experiment 3 we found that the distance effect in detecting a light change could not be explained by the location of eye fixations. The results demonstrate that when drivers attend to a roadway scene, attention is limited in three-dimensional space. These results have important implications for developing tests for assessing crash risk among drivers as well as the design of in-vehicle technologies such as head-up displays. |
Snigdha Banerjee; Adam C. Snyder; Sophie Molholm; John J. Foxe Oscillatory alpha-band mechanisms and the deployment of spatial attention to anticipated auditory and visual target locations: Supramodal or sensory-specific control mechanisms? Journal Article In: Journal of Neuroscience, vol. 31, no. 27, pp. 9923–9932, 2011. @article{Banerjee2011, Oscillatory alpha-band activity (8-15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear whether similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Furthermore, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task and whether it had the same underlying generator configuration as in a purely visuospatial task. If common to both sensory systems, this would provide strong support for "supramodal" attention theory. Alternately, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory spatial task. Crucially, there were clear differences in scalp topographies of this alpha activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. 
The data support an "interactivity account," whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention. |
Peter J. Etchells; Christopher P. Benton; Casimir J. H. Ludwig; Iain D. Gilchrist Testing a simplified method for measuring velocity integration in saccades using a manipulation of target contrast Journal Article In: Frontiers in Psychology, vol. 2, pp. 115, 2011. @article{Etchells2011, A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low-contrast than in the high-contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs. |
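The per-trial analysis described above rests on expressing each saccade endpoint as a weighted combination of the landing positions predicted from the pre- and post-step target velocities. A minimal sketch of that weighting logic (the function name and framing are ours, not the authors' code):

```python
def velocity_weight(endpoint, endpoint_pre, endpoint_post):
    """Recover w in: endpoint = w * endpoint_pre + (1 - w) * endpoint_post.

    endpoint_pre / endpoint_post are the landing positions (in deg) the
    saccade would have had if programmed entirely from the pre-step or
    post-step target velocity. w near 1 means the pre-step velocity
    dominated; w near 0 means the post-step velocity did.
    """
    return (endpoint - endpoint_post) / (endpoint_pre - endpoint_post)
```

Plotting w against the interval between the velocity step and saccade onset then traces out the temporal velocity integration window whose peak latency the study compares across contrast conditions.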
William S. Evans; David Caplan; Gloria Waters Effects of concurrent arithmetical and syntactic complexity on self-paced reaction times and eye fixations Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1203–1211, 2011. @article{Evans2011, Two dual-task experiments (replications of Experiments 1 and 2 in Fedorenko, Gibson, & Rohde, Journal of Memory and Language, 56, 246–269, 2007) were conducted to determine whether syntactic and arithmetical operations share working memory resources. Subjects read object- or subject-extracted relative clause sentences phrase by phrase in a self-paced task while simultaneously adding or subtracting numbers. Experiment 2 measured eye fixations as well as self-paced reaction times. In both experiments, there were main effects of syntax and of mathematical operation on self-paced reading times, but no interaction of the two. In the Experiment 2 eye-tracking results, there were main effects of syntax on first-pass reading time and total reading time and an interaction between syntax and math in total reading time on the noun phrase within the relative clause. The findings point to differences in the ways individuals process sentences under these dual-task conditions, as compared with viewing sentences during "normal" reading conditions, and do not support the view that arithmetical and syntactic integration operations share a working memory system. |
Nathan Faivre; Sid Kouider Increased sensory evidence reverses nonconscious priming during crowding Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–13, 2011. @article{Faivre2011, Sensory adaptation reflects the fact that the responsiveness of a perceptual system changes after the processing of a specific stimulus. Two manifestations of this property have been used in order to infer the mechanisms underlying vision: priming, in which the processing of a target is facilitated by prior exposure to a related adaptor, and habituation, in which this processing is hurt by overexposure to an adaptor. In the present study, we investigated the link between priming and habituation by measuring how sensory evidence (short vs. long adaptor exposure) and perceptual awareness (discriminable vs. undiscriminable adaptor stimulus) affect the adaptive response to a related target. Relying on gaze-contingent crowding, we independently manipulated adaptor discriminability and adaptor duration and inferred sensory adaptation from reaction times on the discrimination of a subsequent oriented target. When adaptor orientation was undiscriminable, we found that increasing its duration reversed priming into habituation. When adaptor orientation was discriminable, priming effects were larger after short exposure, but increasing adaptor duration led to a decrease of priming instead of a reversal into habituation. We discuss our results as reflecting changes in the temporal dynamics of angular orientation processing, depending on the mechanisms associated with perceptual awareness and attentional amplification. |
Nathan Faivre; Sid Kouider Multi-feature objects elicit nonconscious priming despite crowding Journal Article In: Journal of Vision, vol. 11, no. 3, pp. 1–10, 2011. @article{Faivre2011a, The conscious representation we build from the visual environment appears jumbled in the periphery, reflecting a phenomenon known as crowding. Yet, it remains possible that object-level representations (i.e., resulting from the binding of the stimulus' different features) are preserved even if they are not consciously accessible. With a paradigm involving gaze-contingent substitution, which allows us to ensure the constant absence of peripheral stimulus discrimination, we show that, despite their jumbled appearance, multi-feature crowded objects, such as faces and directional symbols, are encoded in a nonconscious manner and can influence subsequent behavior. Furthermore, we show that the encoding of complex crowded contents is modulated by attention in the absence of consciousness. These results, in addition to bringing new insights concerning the fate of crowded information, illustrate the potential of the Gaze-Contingent Crowding (GCC) approach for probing nonconscious cognition. |
Joost Felius; Valeria L. N. Fu; Eileen E. Birch; Richard W. Hertle; Reed M. Jost; Vidhya Subramanian Quantifying nystagmus in infants and young children: Relation between foveation and visual acuity deficit Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 12, pp. 8724–8731, 2011. @article{Felius2011, PURPOSE. Nystagmus eye movement data from infants and young children are often not suitable for advanced quantitative analysis. A method was developed to capture useful information from noisy data and validate the technique by showing meaningful relationships with visual functioning. METHODS. Horizontal eye movements from patients (age 5 months–8 years) with idiopathic infantile nystagmus syndrome (INS) were used to develop a quantitative outcome measure that allowed for head and body movement during the recording. The validity of this outcome was assessed by evaluating its relation to visual acuity deficit in 130 subjects, its relation to actual fixation as assessed under simultaneous fundus imaging, its correlation with the established expanded nystagmus acuity function (NAFX), and its test–retest variability. RESULTS. The nystagmus optimal fixation function (NOFF) was defined as the logit transform of the fraction of data points meeting position and velocity criteria within a moving window. A decreasing exponential relationship was found between visual acuity deficit and the NOFF, yielding a 0.75 logMAR deficit for the poorest NOFF and diminishing deficits with improving foveation. As much as 96% of the points identified as foveation events fell within 0.25° of the actual target. Good correlation (r = 0.96) was found between NOFF and NAFX. Test–retest variability was 0.49 logit units. CONCLUSIONS. The NOFF is a feasible method to quantify noisy nystagmus eye movement data. Its validation makes it a promising outcome measure for the progression and treatment of nystagmus during early childhood. |
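The NOFF described above is the logit of the fraction of samples meeting position and velocity criteria within a moving window. A NOFF-like measure can be sketched as follows; the tolerance values, window length, and moving-median baseline here are our illustrative assumptions, not the published criteria:

```python
import numpy as np

def noff(pos, fs=500.0, pos_tol=0.5, vel_tol=4.0, win_s=1.0, eps=1e-3):
    """NOFF-like foveation measure (illustrative sketch).

    pos : 1-D array of horizontal eye position in degrees.
    A sample counts as a foveation point when it lies within `pos_tol` deg
    of a moving-median baseline (tolerating slow head/body drift) while eye
    velocity stays below `vel_tol` deg/s. Returns the logit of the fraction
    of such samples, clipped away from 0 and 1.
    """
    pos = np.asarray(pos, dtype=float)
    vel = np.gradient(pos) * fs                       # velocity in deg/s
    n = max(1, int(win_s * fs))
    baseline = np.array([np.median(pos[max(0, i - n // 2): i + n // 2 + 1])
                         for i in range(len(pos))])   # moving-median baseline
    ok = (np.abs(pos - baseline) < pos_tol) & (np.abs(vel) < vel_tol)
    f = np.clip(ok.mean(), eps, 1 - eps)              # avoid logit(0) / logit(1)
    return float(np.log(f / (1 - f)))                 # logit transform
```

A quiet fixation trace yields a high NOFF, while a large oscillatory (nystagmus-like) trace yields a low one, matching the measure's intended direction.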
Katherine Guérard; Jean Saint-Aubin; Pierre Boucher; Sébastien Tremblay The role of awareness in anticipation and recall performance in the Hebb repetition paradigm: Implications for sequence learning Journal Article In: Memory & Cognition, vol. 39, no. 6, pp. 1012–1022, 2011. @article{Guerard2011, Sequence learning has notably been studied using the Hebb repetition paradigm (Hebb, 1961) and the serial reaction time (SRT) task (Nissen & Bullemer, Cognitive Psychology 19:1-32, 1987). These two paradigms produce robust learning effects but differ with regard to the role of awareness: Awareness does not affect learning a repeated sequence in the Hebb repetition paradigm, as is evidenced by recall performance, whereas in the SRT task, awareness helps to anticipate the location of the next stimulus. In this study, we examined the role of awareness in anticipation and recall performance, using the Hebb repetition paradigm. Eye movements were monitored during a spatial reconstruction task where participants had to memorize sequences of dot locations. One sequence was repeated every four trials. Results showed that recall performance for the repeated sequence improved across repetitions for all participants but that anticipation increased only for participants aware of the repetition. |
Maria J. S. Guerreiro; Pascal W. M. Van Gerven Now you see it, now you don't: Evidence for age-dependent and age-independent cross-modal distraction Journal Article In: Psychology and Aging, vol. 26, no. 2, pp. 415–426, 2011. @article{Guerreiro2011, Age-related deficits in selective attention have often been demonstrated in the visual modality and, to a lesser extent, in the auditory modality. In contrast, a mounting body of evidence has suggested that cross-modal selective attention is intact in aging, especially in visual tasks that require ignoring the auditory modality. Our goal in this study was to investigate age-related differences in the ability to ignore cross-modal auditory and visual distraction and to assess the role of cognitive control demands thereby. In a set of two experiments, 30 young (mean age = 23.3 years) and 30 older adults (mean age = 67.7 years) performed a visual and an auditory n-back task (0 ≤ n ≤ 2), with and without cross-modal distraction. The results show an asymmetry in cross-modal distraction as a function of sensory modality and age: Whereas auditory distraction did not disrupt performance on the visual task in either age group, visual distraction disrupted performance on the auditory task in both age groups. Most important, however, visual distraction was disproportionately larger in older adults. These results suggest that age-related distraction is modality dependent, such that suppression of cross-modal auditory distraction is preserved and suppression of cross-modal visual distraction is impaired in aging. |
Mackenzie G. Glaholt; Eyal M. Reingold Eye movement monitoring as a process tracing methodology in decision making research Journal Article In: Journal of Neuroscience, Psychology, and Economics, vol. 4, no. 2, pp. 125–146, 2011. @article{Glaholt2011, Over the past half century, research on human decision making has expanded from a purely behaviorist approach that focuses on decision outcomes, to include a more cognitive approach that focuses on the decision processes that occur prior to the response. This newer approach, known as process tracing, has employed various methods, such as verbal protocols, information search displays, and eye movement monitoring, to identify and track psychological events that occur prior to the response (such as cognitive states, stages, or processes). In the present article, we review empirical studies that have employed eye movement monitoring as a process tracing method in decision making research, and we examine the potential of eye movement monitoring as a process tracing methodology. We also present an experiment that further illustrates the experimental manipulations and analysis techniques that are possible with modern eye tracking technology. In this experiment, a gaze-contingent display was used to manipulate stimulus exposure during decision making, which allowed us to test a specific hypothesis about the role of eye movements in preference decisions (the Gaze Cascade model; Shimojo, Simion, Shimojo, & Scheier, 2003). The results of the experiment did not confirm the predictions of the Gaze Cascade model, but instead support the idea that eye movements in these decisions reflect the screening and evaluation of decision alternatives. 
In summary, we argue that eye movement monitoring is a valuable tool for capturing decision makers' information search behaviors, and that modern eye tracking technology is highly compatible with other process tracing methods such as retrospective verbal protocols and neuroimaging techniques, and hence it is poised to be an integral part of the next wave of decision research. |
Davis M. Glasser; James M. G. Tsui; Christopher C. Pack; Duje Tadin Perceptual and neural consequences of rapid motion adaptation Journal Article In: Proceedings of the National Academy of Sciences, vol. 108, no. 45, pp. E1080–E1088, 2011. @article{Glasser2011, Nervous systems adapt to the prevailing sensory environment, and the consequences of this adaptation can be observed in the responses of single neurons and in perception. Given the variety of timescales underlying events in the natural world, determining the temporal characteristics of adaptation is important to understanding how perception adjusts to its sensory environment. Previous work has shown that neural adaptation can occur on a timescale of milliseconds, but perceptual adaptation has generally been studied over relatively long timescales, typically on the order of seconds. This disparity raises important questions. Can perceptual adaptation be observed at brief, functionally relevant timescales? And if so, how do its properties relate to the rapid adaptation seen in cortical neurons? We address these questions in the context of visual motion processing, a perceptual modality characterized by rapid temporal dynamics. We demonstrate objectively that 25 ms of motion adaptation is sufficient to generate a motion aftereffect, an illusory sensation of movement experienced when a moving stimulus is replaced by a stationary pattern. This rapid adaptation occurs regardless of whether the adapting motion is perceived. In neurophysiological recordings from the middle temporal area of primate visual cortex, we find that brief motion adaptation evokes direction-selective responses to subsequently presented stationary stimuli. A simple model shows that these neural responses can explain the consequences of rapid perceptual adaptation. 
Overall, we show that the motion aftereffect is not merely an intriguing perceptual illusion, but rather a reflection of rapid neural and perceptual processes that can occur essentially every time we experience motion. |
Tamar H. Gollan; Timothy J. Slattery; Diane Goldenberg; Eva Van Assche; Wouter Duyck; Keith Rayner Frequency drives lexical access in reading but not in speaking: The frequency-lag hypothesis Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 2, pp. 186–209, 2011. @article{Gollan2011, To contrast mechanisms of lexical access in production versus comprehension we compared the effects of word frequency (high, low), context (none, low constraint, high constraint), and level of English proficiency (monolingual, Spanish-English bilingual, Dutch-English bilingual) on picture naming, lexical decision, and eye fixation times. Semantic constraint effects were larger in production than in reading. Frequency effects were larger in production than in reading without constraining context but larger in reading than in production with constraining context. Bilingual disadvantages were modulated by frequency in production but not in eye fixation times, were not smaller in low-constraint contexts, and were reduced by high-constraint contexts only in production and only at the lowest level of English proficiency. These results challenge existing accounts of bilingual disadvantages and reveal fundamentally different processes during lexical access across modalities, entailing a primarily semantically driven search in production but a frequency-driven search in comprehension. The apparently more interactive process in production than comprehension could simply reflect a greater number of frequency-sensitive processing stages in production. |
N. Gorgoraptis; R. F. G. Catalao; Paul M. Bays; Masud Husain Dynamic updating of working memory resources for visual objects Journal Article In: Journal of Neuroscience, vol. 31, no. 23, pp. 8502–8511, 2011. @article{Gorgoraptis2011, Recent neurophysiological and imaging studies have investigated how neural representations underlying working memory (WM) are dynamically updated for objects presented sequentially. Although such studies implicate information encoded in oscillatory activity across distributed brain networks, interpretation of findings depends crucially on the underlying conceptual model of how memory resources are distributed. Here, we quantify the fidelity of human memory for sequences of colored stimuli of different orientation. The precision with which each orientation was recalled declined with increases in total memory load, but also depended on when in the sequence it appeared. When one item was prioritized, its recall was enhanced, but with corresponding decrements in precision for other objects. Comparison with the same number of items presented simultaneously revealed an additional performance cost for sequential display that could not be explained by temporal decay. Memory precision was lower for sequential compared with simultaneous presentation, even when each item in the sequence was presented at a different location. Importantly, stochastic modeling established this cost for sequential display was due to misbinding object features (color and orientation). These results support the view that WM resources can be dynamically and flexibly updated as new items have to be stored, but redistribution of resources with the addition of new items is associated with misbinding object features, providing important constraints and a framework for interpreting neural data. |
Dan J. Graham; Robert W. Jeffery Location, location, location: Eye-tracking evidence that consumers preferentially view prominently positioned nutrition information Journal Article In: Journal of the American Dietetic Association, vol. 111, no. 11, pp. 1704–1711, 2011. @article{Graham2011, Background: Nutrition Facts labels can keep consumers better informed about their diets' nutritional composition, however, consumers currently do not understand these labels well or use them often. Thus, modifying existing labels may benefit public health. Objective: The present study tracked the visual attention of individuals making simulated food-purchasing decisions to assess Nutrition Facts label viewing. Primary research questions were how self-reported viewing of Nutrition Facts labels and their components relates to measured viewing and whether locations of labels and specific label components relate to viewing. Design: The study involved a simulated grocery shopping exercise conducted on a computer equipped with an eye-tracking camera. A post-task survey assessed self-reported nutrition information viewing, health behaviors, and demographics. Subjects/setting: Individuals 18 years old and older and capable of reading English words on a computer (n=203) completed the 1-hour protocol at the University of Minnesota during Spring 2010. Statistical analyses: Primary analyses included χ², analysis of variance, and t tests comparing self-reported and measured viewing of label components in different presentation configurations. Results: Self-reported viewing of Nutrition Facts label components was higher than objectively measured viewing. Label components at the top of the label were viewed more than those at the bottom, and labels positioned in the center of the screen were viewed more than those located on the sides. Conclusions: Nutrition Facts label position within a viewing area and position of specific components on a label relate to viewing. Eye tracking is a valuable technology for evaluating consumers' attention to nutrition information, informing nutrition labeling policy (eg, front-of-pack labels), and designing labels that best support healthy dietary decisions. |
Sven-Thomas Graupner; Sebastian Pannasch; Boris M. Velichkovsky In: International Journal of Psychophysiology, vol. 80, no. 1, pp. 54–62, 2011. @article{Graupner2011, Attention, visual information processing, and oculomotor control are integrated functions of closely related brain mechanisms. Recently, it was shown that the processing of visual distractors appearing during a fixation is modulated by the amplitude of its preceding saccade (Pannasch & Velichkovsky, 2009). So far, this was demonstrated only at the behavioral level in terms of saccadic inhibition. The present study investigated distractor-related brain activity with cortical eye fixation-related potentials (EFRPs). Moreover, the following saccade was included as an additional classification criterion. Eye movements and EFRPs were recorded during free visual exploration of paintings. During some of the fixations, a visual distractor was shown as an annulus around the fixation position, 100 ms after the fixation onset. The saccadic context of a fixation was classified by its preceding and following saccade amplitudes with the cut-off criterion set to 4° of visual angle. The prolongation of fixation duration induced by distractors was largest for fixations preceded and followed by short saccades. EFRP data revealed a difference in distractor-related P2 amplitude between the saccadic context conditions, following the same trend as in eye movements. Furthermore, influences of the following saccade amplitude on the latency of the saccadic inhibition and on the N1 amplitude were found. The EFRP results cannot be explained by the influence of saccades per se since this bias was removed by subtracting the baseline from the distractor EFRP. Rather, the data suggest that saccadic context indicates differences in how information is processed within single visual fixations. |
Angélica Pérez Fornos; Jörg Sommerhalder; Marco Pelizzone Reading with a simulated 60-channel implant Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 57, 2011. @article{Fornos2011, First generation retinal prostheses containing 50-60 electrodes are currently in clinical trials. The purpose of this study was to evaluate the theoretical upper limit (best possible) reading performance attainable with a state-of-the-art 60-channel retinal implant and to find the optimum viewing conditions for the task. Four normal volunteers performed full-page text reading tasks with a low-resolution, 60-pixel viewing window that was stabilized in the central visual field. Two parameters were systematically varied: (1) spatial resolution (image magnification) and (2) the orientation of the rectangular viewing window. Performance was measured in terms of reading accuracy (% of correctly read words) and reading rates (words/min). Maximum reading performances were reached at spatial resolutions between 3.6 and 6 pixels/char. Performance declined outside this range for all subjects. In optimum viewing conditions (4.5 pixels/char), subjects achieved almost perfect reading accuracy and mean reading rates of 26 words/min for the vertical viewing window and of 34 words/min for the horizontal viewing window. These results suggest that, theoretically, some reading abilities can be restored with current state-of-the-art retinal implant prototypes if "image magnification" is within an "optimum range." Future retinal implants providing higher pixel resolutions, thus allowing for a wider visual span, might allow faster reading rates. |
Tom Foulsham; Rana Alan; Alan Kingstone Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 7, pp. 2008–2025, 2011. @article{Foulsham2011b, Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes. |
Tom Foulsham; Jason J. S. Barton; Alan Kingstone; Richard Dewhurst; Geoffrey Underwood Modeling eye movements in visual agnosia with a saliency map approach: Bottom-up guidance or top-down strategy? Journal Article In: Neural Networks, vol. 24, no. 6, pp. 665–677, 2011. @article{Foulsham2011, Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantifies the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of our human visual system, play a vital role in eye movement control, whether we know what we are looking at or not. |
Tom Foulsham; Robert Teszka; Alan Kingstone Saccade control in natural images is shaped by the information visible at fixation: Evidence from asymmetric gaze-contingent windows Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 1, pp. 266–283, 2011. @article{Foulsham2011c, When people view images, their saccades are predominantly horizontal and show a positively skewed distribution of amplitudes. How are these patterns affected by the information close to fixation and the features in the periphery? We recorded saccades while observers encoded a set of scenes with a gaze-contingent window at fixation: Features inside a rectangular (Experiment 1) or elliptical (Experiment 2) window were intact; peripheral background was masked completely or blurred. When the window was asymmetric, with more information preserved either horizontally or vertically, saccades tended to follow the information within the window, rather than exploring unseen regions, which runs counter to the idea that saccades function to maximize information gain on each fixation. Window shape also affected fixation and amplitude distributions, but horizontal windows had less of an impact. The findings suggest that saccades follow the features currently being processed and that normal vision samples these features from a horizontally elongated region. |
Tom Foulsham; Geoffrey Underwood If visual saliency predicts search, then why? Evidence from normal and gaze-contingent search tasks in natural scenes Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 48–63, 2011. @article{Foulsham2011a, The Itti and Koch (Vision Research 40: 1489–1506, 2000) saliency map model has inspired a wealth of research testing the claim that bottom-up saliency determines the placement of eye fixations in natural scenes. Although saliency seems to correlate with (although not necessarily cause) fixation in free-viewing or encoding tasks, it has been suggested that visual saliency can be overridden in a search task, with saccades being planned on the basis of target features, rather than being captured by saliency. Here, we find that target regions of a scene that are salient according to this model are found quicker than control regions (Experiment 1). However, this does not seem to be altered by filtering features in the periphery using a gaze-contingent display (Experiment 2), and a deeper analysis of the eye movements made suggests that the saliency effect is instead due to the meaning of the scene regions. Experiment 3 supports this interpretation, showing that scene inversion reduces the saliency effect. These results suggest that saliency effects on search may have nothing to do with bottom-up saccade guidance. |
Tom Foulsham; Esther Walker; Alan Kingstone The where, what and when of gaze allocation in the lab and the natural environment Journal Article In: Vision Research, vol. 51, no. 17, pp. 1920–1931, 2011. @article{Foulsham2011d, How do people distribute their visual attention in the natural environment? We and our colleagues have usually addressed this question by showing pictures, photographs or videos of natural scenes under controlled conditions and recording participants' eye movements as they view them. In the present study, we investigated whether people distribute their gaze in the same way when they are immersed and moving in the world compared to when they view video clips taken from the perspective of a walker. Participants wore a mobile eye tracker while walking to buy a coffee, a trip that required a short walk outdoors through the university campus. They subsequently watched first-person videos of the walk in the lab. Our results focused on where people directed their eyes and their head, what objects were gazed at and when attention-grabbing items were selected. Eye movements were more centralised in the real world, and locations around the horizon were selected with head movements. Other pedestrians, the path, and objects in the distance were looked at often in both the lab and the real world. However, there were some subtle differences in how and when these items were selected. For example, pedestrians close to the walker were fixated more often when viewed on video than in the real world. These results provide a crucial test of the relationship between real behaviour and eye movements measured in the lab. |
Jeremy Freeman; G. J. Brouwer; David J. Heeger; Elisha P. Merriam Orientation decoding depends on maps, not columns Journal Article In: Journal of Neuroscience, vol. 31, no. 13, pp. 4792–4804, 2011. @article{Freeman2011a, The representation of orientation in primary visual cortex (V1) has been examined at a fine spatial scale corresponding to the columnar architecture. We present functional magnetic resonance imaging (fMRI) measurements providing evidence for a topographic map of orientation preference in human V1 at a much coarser scale, in register with the angular-position component of the retinotopic map of V1. This coarse-scale orientation map provides a parsimonious explanation for why multivariate pattern analysis methods succeed in decoding stimulus orientation from fMRI measurements, challenging the widely held assumption that decoding results reflect sampling of spatial irregularities in the fine-scale columnar architecture. Decoding stimulus attributes and cognitive states from fMRI measurements has proven useful for a number of applications, but our results demonstrate that the interpretation cannot assume decoding reflects or exploits columnar organization. |
Jeremy Freeman; Eero P. Simoncelli Metamers of the ventral stream Journal Article In: Nature Neuroscience, vol. 14, no. 9, pp. 1195–1204, 2011. @article{Freeman2011, The human capacity to recognize complex visual patterns emerges in a sequence of brain areas known as the ventral stream, beginning with primary visual cortex (V1). We developed a population model for mid-ventral processing, in which nonlinear combinations of V1 responses are averaged in receptive fields that grow with eccentricity. To test the model, we generated novel forms of visual metamers, stimuli that differ physically but look the same. We developed a behavioral protocol that uses metameric stimuli to estimate the receptive field sizes in which the model features are represented. Because receptive field sizes change along the ventral stream, our behavioral results can identify the visual area corresponding to the representation. Measurements in human observers implicate visual area V2, providing a new functional account of neurons in this area. The model also explains deficits of peripheral vision known as crowding, and provides a quantitative framework for assessing the capabilities and limitations of everyday vision. |
Hans Peter Frey; Kerstin Wirz; Verena Willenbockel; Torsten Betz; Cornell Schreiber; Tom Troscianko; Peter König Beyond correlation: Do color features influence attention in rainforest? Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 36, 2011. @article{Frey2011a, Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red-green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red-green color-contrast. The effects of blue-yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red-green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red-green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion. |
Jared Frey; Dario L. Ringach Binocular eye movements evoked by self-induced motion parallax Journal Article In: Journal of Neuroscience, vol. 31, no. 47, pp. 17069–17073, 2011. @article{Frey2011, Perception often triggers actions, but actions may sometimes be necessary to evoke percepts. This is most evident in the recovery of depth by self-induced motion parallax. Here we show that depth information derived from one's movement through a stationary environment evokes binocular eye movements consistent with the perception of three-dimensional shape. Human subjects stood in front of a display and viewed a simulated random-dot sphere presented monocularly or binocularly. Eye movements were recorded by a head-mounted eye tracker, while head movements were monitored by a motion capture system. The display was continuously updated to simulate the perspective projection of a stationary, transparent random dot sphere viewed from the subject's vantage point. Observers were asked to keep their gaze on a red target dot on the surface of the sphere as they moved relative to the display. The movement of the target dot simulated jumps in depth between the front and back surfaces of the sphere along the line of sight. We found the subjects' eyes converged and diverged concomitantly with changes in the perceived depth of the target. Surprisingly, even under binocular viewing conditions, when binocular disparity signals conflict with depth information from motion parallax, transient vergence responses were observed. These results provide the first demonstration that self-induced motion parallax is sufficient to drive vergence eye movements under both monocular and binocular viewing conditions. |
Teresa C. Frohman; Scott L. Davis; Elliot M. Frohman Modeling the mechanisms of Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis Journal Article In: Annals of the New York Academy of Sciences, vol. 1233, no. 1, pp. 313–319, 2011. @article{Frohman2011, Internuclear ophthalmoparesis (INO) is the most common saccadic eye movement disorder observed in patients with multiple sclerosis (MS). It is characterized by slowing of the adducting eye during horizontal saccades, and most commonly results from a demyelinating lesion in the medial longitudinal fasciculus (MLF) within the midline tegmentum of the pons (ventral to the fourth ventricle) or midbrain (ventral to the cerebral aqueduct). Recent research has demonstrated that adduction velocity in MS-related INO, as measured by infrared eye movement recording techniques, is further reduced by a systematic increase in core body temperature (utilizing tube-lined water infusion suits in conjunction with an ingestible temperature probe and transabdominal telemetry) and reversed to baseline with active cooling. These results suggest that INO may represent a model syndrome by which we can carefully study Uhthoff's phenomenon and objectively test therapeutic agents for its prevention. |
Isabella Fuchs; Ulrich Ansorge; Christoph Redies; Helmut Leder Salience in paintings: Bottom-up influences on eye fixations Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 25–36, 2011. @article{Fuchs2011, In the current study, we investigated whether visual salience attracts attention in a bottom-up manner. We presented abstract and depictive paintings as well as photographs to naïve participants in free-viewing (Experiment 1) and target-search (Experiment 2) tasks. Image salience was computed in terms of local feature contrasts in color, luminance, and orientation. Based on the theories of stimulus-driven salience effects on attention and fixations, we expected salience effects in all conditions and a characteristic short-lived temporal profile of the salience-driven effect on fixations. Our results confirmed the predictions. Results are discussed in terms of their potential implications. |
Shai Gabay; Yoni Pertzov; Avishai Henik Orienting of attention, pupil size, and the norepinephrine system Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 1, pp. 123–129, 2011. @article{Gabay2011, This research examined a novel suggestion regarding the involvement of the locus coeruleus–norepinephrine (LC–NE) system in orienting reflexive (exogenous) attention. A common procedure for studying exogenous orienting of attention is Posner's cuing task. Importantly, one can manipulate the required level of target processing by changing task requirements, which, in turn, can elicit a different time course of inhibition of return (IOR). An easy task (responding to target location) produces earlier onset IOR, whereas a demanding task (responding to target identity) produces later onset IOR. Aston-Jones and Cohen (Annual Review of Neuroscience, 28, 403–450, 2005) presented a theory suggesting two different modes of LC activity: tonic and phasic. Accordingly, we suggest that in the more demanding task, the LC–NE system is activated in phasic mode, and in the easier task, it is activated in tonic mode. This, in turn, influences the appearance of IOR. We examined this suggestion by measuring participants' pupil size, which has been demonstrated to correlate with the LC–NE system, while they performed cuing tasks. We found a response-locked phasic dilation of the pupil in the discrimination task, as compared with the localization task, which may reflect different firing modes of the LC–NE system during the two tasks. We also demonstrated a correlation between pupil size at the time of cue presentation and magnitude of IOR. |
Benjamin Gagl; Stefan Hawelka; Florian Hutzler Systematic influence of gaze position on pupil size measurement: Analysis and correction Journal Article In: Behavior Research Methods, vol. 43, no. 4, pp. 1171–1181, 2011. @article{Gagl2011, Cognitive effort is reflected in pupil dilation, but the assessment of pupil size is potentially susceptible to changes in gaze position. This study exemplarily used sentence reading as a stand-in for paradigms that assess pupil size in tasks during which changes in gaze position are unavoidable. The influence of gaze position on pupil size was first investigated by an artificial eye model with a fixed pupil size. Despite its fixed pupil size, the systematic measurements of the artificial eye model revealed substantial gaze-position-dependent changes in the measured pupil size. We evaluated two functions and showed that they can accurately capture and correct the gaze-dependent measurement error of pupil size recorded during a sentence-reading and an effortless z-string-scanning task. Implications for previous studies are discussed, and recommendations for future studies are provided. |
Xiao Gao; Quanchuan Wang; Todd Jackson; Guang Zhao; Yi Liang; Hong Chen Biases in orienting and maintenance of attention among weight dissatisfied women: An eye-movement study Journal Article In: Behaviour Research and Therapy, vol. 49, no. 4, pp. 252–259, 2011. @article{Gao2011, Despite evidence indicating fatness and thinness information are processed differently among weight-preoccupied and eating disordered individuals, the exact nature of these attentional biases is not clear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance and disengagement of gaze) in relation to body-related stimuli among 20 weight dissatisfied (WD) and 20 weight satisfied young women. Eye movements were recorded while participants completed a dot-probe task that featured fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixation on fatness words but shorter first fixation on thinness words, and shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias towards fatness words among the WD women. In sum, results indicated WD women show initial orienting, speeded detection and initial maintenance biases towards fat body words in addition to a speeded detection–avoidance pattern of biases in relation to thin body words. In sum, results highlight the importance of the utility of EM-tracking as a means of identifying subtle attentional biases among weight dissatisfied women drawn from a non-clinical setting and the need to assess attentional biases as a dynamic process. |
Tyler W. Garaas; Marc Pomplun Distorted object perception following whole-field adaptation of saccadic eye movements Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–11, 2011. @article{Garaas2011, The adaptation of an observer's saccadic eye movements to artificial post-saccadic visual error can lead to perceptual mislocalization of individual, transient visual stimuli. In this study, we demonstrate that simultaneous saccadic adaptation to a consistent error pattern across a large number of saccade vectors is accompanied by corresponding spatial distortions in the perception of persistent objects. To induce this adaptation, we artificially introduced several post-saccadic error patterns, which led to a systematic distortion in participants' oculomotor space and a corresponding distortion in their perception of the relative dimensions of a cross-figure. The results indicate a tight coupling between the oculomotor and visual-perceptual spaces that is not limited to misperception of individual visual locations but also affects metrics in the visual-perceptual space. This coupling suggests that our visual perception is continuously recalibrated by the post-saccadic error signal. |
Peggy Gerardin; Valérie Gaveau; Denis Pélisson; Claude Prablanc Integration of visual information for saccade production Journal Article In: Human Movement Science, vol. 30, no. 6, pp. 1009–1021, 2011. @article{Gerardin2011, To foveate a visual target, subjects usually execute a primary hypometric saccade (S1) bringing the target in perifoveal vision, followed by a corrective saccade (S2) or by more than one S2. It is still debated to what extent these S2 are pre-programmed or dependent only on post-saccadic retinal error. To answer this question, we used a visually-triggered saccade task in which target position and target visibility were manipulated. In one-third of the trials, the target was slightly displaced at S1 onset (so-called double step paradigm) and was maintained until the end of S1, until the start of the first S2 or until the end of the trial. Experiments took place in two visual environments: in the dark and in a dimly lit room with a visible random square background. The results showed that S2 were less accurate for shortest target durations. The duration of post-saccadic visual integration thus appears as the main factor responsible for corrective saccade accuracy. We also found that the visual context modulates primary saccade accuracy, especially for the most hypometric subjects. These findings suggest that the saccadic system is sensitive to the visual properties of the environment and uses different strategies to maintain final gaze accuracy. |
Ian C. Fiebelkorn; John J. Foxe; John S. Butler; Manuel R. Mercier; Adam C. Snyder; Sophie Molholm Ready, set, reset: Stimulus-locked periodicity in behavioral performance demonstrates the consequences of cross-sensory phase reset Journal Article In: Journal of Neuroscience, vol. 31, no. 27, pp. 9971–9981, 2011. @article{Fiebelkorn2011, The simultaneous presentation of a stimulus in one sensory modality often enhances target detection in another sensory modality, but the neural mechanisms that govern these effects are still under investigation. Here, we test a hypothesis proposed in the neurophysiological literature: that auditory facilitation of visual-target detection operates through cross-sensory phase reset of ongoing neural oscillations (Lakatos et al., 2009). To date, measurement limitations have prevented this potentially powerful neural mechanism from being directly linked with its predicted behavioral consequences. The present experiment uses a psychophysical approach in humans to demonstrate, for the first time, stimulus-locked periodicity in visual-target detection, following a temporally informative sound. Our data further demonstrate that periodicity in behavioral performance is strongly influenced by the probability of audiovisual co-occurrence. We argue that fluctuations in visual-target detection result from cross-sensory phase reset, both at the moment it occurs and persisting for seconds thereafter. The precise frequency at which this periodicity operates remains to be determined through a method that allows for a higher sampling rate. |
Katja Fiehler; Immo Schütz; Denise Y. P. Henriques Gaze-centered spatial updating of reach targets across different memory delays Journal Article In: Vision Research, vol. 51, no. 8, pp. 890–897, 2011. @article{Fiehler2011, Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we want to test whether reach targets are updated relative to gaze following different time delays. Reaching endpoints systematically varied as a function of gaze relative to target irrespective of whether the action was executed immediately or after a delay of 5 s, 8 s or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present. |
Ruth Filik; Emma Barber Inner speech during silent reading reflects the reader's regional accent Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e25782, 2011. @article{Filik2011, While reading silently, we often have the subjective experience of inner speech. However, there is currently little evidence regarding whether this inner voice resembles our own voice while we are speaking out loud. To investigate this issue, we compared reading behaviour of Northern and Southern English participants who have differing pronunciations for words like 'glass', in which the vowel duration is short in a Northern accent and long in a Southern accent. Participants' eye movements were monitored while they silently read limericks in which the end words of the first two lines (e.g., glass/class) would be pronounced differently by Northern and Southern participants. The final word of the limerick (e.g., mass/sparse) then either did or did not rhyme, depending on the reader's accent. Results showed disruption to eye movement behaviour when the final word did not rhyme, determined by the reader's accent, suggesting that inner speech resembles our own voice. |
C. D. Fiorillo Transient activation of midbrain dopamine neurons by reward risk Journal Article In: Neuroscience, vol. 197, pp. 162–171, 2011. @article{Fiorillo2011, Dopamine neurons of the ventral midbrain are activated transiently following stimuli that predict future reward. This response has been shown to signal the expected value of future reward, and there is strong evidence that it drives positive reinforcement of stimuli and actions associated with reward in accord with reinforcement learning models. Behavior is also influenced by reward uncertainty, or risk, but it is not known whether the transient response of dopamine neurons is sensitive to reward risk. To investigate this, monkeys were trained to associate distinct visual stimuli with certain or uncertain volumes of juice of nearly the same expected value. In a choice task, monkeys preferred the stimulus predicting an uncertain (risky) reward outcome. In a Pavlovian task, in which the neuronal responses to each stimulus could be measured in isolation, it was found that dopamine neurons were more strongly activated by the stimulus associated with reward risk. Given extensive evidence that dopamine drives reinforcement, these results strongly suggest that dopamine neurons can reinforce risk-seeking behavior (gambling), at least under certain conditions. Risk-seeking behavior has the virtue of promoting exploration and learning, and these results support the hypothesis that dopamine neurons represent the value of exploration. |
Gemma Fitzsimmons; Denis Drieghe The influence of number of syllables on word skipping during reading Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 4, pp. 736–741, 2011. @article{Fitzsimmons2011, In an eye-tracking experiment, participants read sentences containing a monosyllabic (e.g., grain) or a disyllabic (e.g., cargo) five-letter word. Monosyllabic target words were skipped more often than disyllabic target words, indicating that syllabic structure was extracted from the parafovea early enough to influence the decision of saccade target selection. Fixation times on the target word when it was fixated did not show an influence of number of syllables, demonstrating that number of syllables differentially impacts skipping rates and fixation durations during reading. |
Heather Flowe An exploration of visual behaviour in eyewitness identification tests Journal Article In: Applied Cognitive Psychology, vol. 25, no. 2, pp. 244–254, 2011. @article{Flowe2011, The contribution of internal (eyes, nose and mouth) and external (hair-line, cheek and jaw-line) features across eyewitness identification tests was examined using eye tracking. In Experiment 1, participants studied faces and were tested with lineups, either simultaneous (test faces presented in an array) or sequential (test faces presented one at a time). In Experiment 2, the recognition of previously studied faces was tested in a showup (a suspect face alone was presented). Results indicated that foils were analysed for a shorter period of time in the simultaneous compared to the sequential condition, whereas a positively identified face was analysed for a comparable period of time across lineup procedures. In simultaneous lineups and showups, a greater proportion of time was spent analysing internal features of the test faces compared to sequential lineups. Different decision processes across eyewitness identification tests are inferred based on the results. |
Heather Flowe; Garrison W. Cottrell An examination of simultaneous lineup identification decision processes using eye tracking Journal Article In: Applied Cognitive Psychology, vol. 25, pp. 443–451, 2011. @article{Flowe2011a, Decision processes in simultaneous lineups (an array of faces in which a ‘suspect' face is displayed along with foil faces) were examined using eye tracking to capture the length and number of times that individual faces were visually analysed. The similarity of the lineup target face relative to the study face was manipulated, and face dwell times on the first visit and on return visits to the individual lineup faces were measured. On first visits, positively identified faces were examined for a longer duration compared to faces that were not identified. When no face was identified from the lineup, the suspect was visited for a longer duration compared to a foil face. On return visits, incorrectly identified faces were examined for a longer duration and visited more often compared to correctly identified faces. The results indicate that lineup decisions can be predicted by face dwell time and the number of visits made to faces. |
Stacey E. Parrott; Brian R. Levinthal; Steven L. Franconeri Complex attentional control settings Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 12, pp. 2297–2304, 2011. @article{Parrott2011, The visual system prioritizes information through a variety of mechanisms, including “attentional control settings” that specify features (e.g., colour) that are relevant to current goals. Recent work shows that these control settings may be more complex than previously thought, such that participants can monitor for independent features at different locations (Adamo, Pun, Pratt, & Ferber, 2008). However, this result leaves unclear whether these control settings affect early attentional selection or later target processing. We dissociated between these possibilities in two ways. In Experiment 1, participants were asked to determine whether a target object, which was preceded by an uninformative cue, matched one of two target templates (e.g., a blue vertical object or a green horizontal object). Participants monitored for independent features in the same location, but in different objects, which should reduce the effectiveness of the control setting if it is due to early attentional selection, but not if it is due to later target processing. In Experiment 2, we removed the ability of the cue to prime the target identity, which makes the opposite prediction. Together, the results suggest that complex attentional control settings primarily affect later target identity processing, and not early attentional selection. |
Nikole D. Patson; Tessa Warren Building complex reference objects from dual sets Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 443–459, 2011. @article{Patson2011, There has been considerable psycholinguistic investigation into the conditions that allow separately introduced individuals to be joined into a plural set and represented as a complex reference object (e.g., Eschenbach et al., 1989; Garrod & Sanford, 1982; Koh & Clifton, 2002; Koh et al., 2008; Moxey, Sanford, Sturt, & Morrow, 2004; Sanford & Lockhart, 1990). The current paper reports three eye-tracking experiments that investigate the less-well understood question of what conditions allow pointers to be assigned to the individuals within a previously undifferentiated set, turning it into a complex reference object. The experiments made use of a methodology used in Patson and Ferreira (2009) to distinguish between complex reference objects and undifferentiated sets. Experiments 1 and 2 demonstrated that assigning different properties to the members of an undifferentiated dual set via a conjoined modifier or a comparative modifier transformed it into a complex reference object. Experiment 3 indicated that assigning a property to only one member of an undifferentiated dual set introduced pointers to both members. These results demonstrate that pointers can be established to referents within a plural set without picking them out via anaphors; they set boundaries on the kinds of implicit contrasts between referents that establish pointers; and they illustrate that extremely subtle properties of the semantic and referential context can affect early parsing decisions. |
Olufunmilola Ogun; Jayalakshmi Viswanathan; Jason J. S. Barton The effect of central (macular) sparing on contralateral line bisection bias: A study with virtual hemianopia Journal Article In: Neuropsychologia, vol. 49, no. 12, pp. 3377–3382, 2011. @article{Ogun2011, Hemianopic patients show a contralesional bisection bias, but it is unclear whether this is a consequence of their field loss or related to extrastriate damage. One observation cited against the former is that hemianopic bisection bias does not vary with the degree of central (macular) sparing; however, it is unclear to what extent central sparing should affect this bias. Our goal was to determine the effect of central sparing on line bisection biases from field loss alone, with two approaches. First, we studied 12 healthy subjects viewing lines under conditions of virtual hemianopia, created by a gaze-contingent technique. Second, we calculated the effect predicted by a visuospatial model of the effect of central magnification on line representations in the visual system. Our results first replicated the contralateral line bisection bias with hemianopia, confirming that this can be generated by visual hemifield loss in the absence of extrastriate damage. Central sparing had only a modest effect on hemianopic bisection bias, with only slightly less bias with 10° compared to 2° of central sparing. In accordance with these empiric data, computing the center of mass for line representations in our model showed only a shallow decline in bisection bias as central sparing increased from 0 to 10°. We conclude that contralateral bisection bias only decreases slightly with central sparing, and that the absence of a statistically significant effect of central sparing in patients cannot be taken as evidence against a visual origin of contralateral hemianopic line bisection bias. |
Sven Ohl; Stephan A. Brandt; Reinhold Kliegl Secondary (micro-)saccades: The influence of primary saccade end point and target eccentricity on the process of postsaccadic fixation Journal Article In: Vision Research, vol. 51, no. 23-24, pp. 2340–2347, 2011. @article{Ohl2011, We examine how the size of saccadic under-/overshoot and target eccentricity influence the latency, amplitude and orientation of secondary (micro-)saccades. In our experiment, a target appeared at an eccentricity of either 6° or 14° of visual angle. Subjects were instructed to direct their gaze as quickly as possible to the target and hold fixation at the new location until the end of the trial. Typically, increasing saccadic error is associated with faster and larger secondary saccades. We show that secondary saccades at distant in contrast to close targets have, within a specific error range, a shorter latency and larger amplitude, and more often follow the direction of the primary saccade. Finally, we demonstrate that an undershooting primary saccade is followed almost exclusively by secondary saccades in the same direction, while overshooting primary saccades are followed by secondary saccades in both directions. This supports the notion that under- and overshooting imply different consequences for postsaccadic oculomotor processing. Results are discussed using a model, introduced by Rolfs, Kliegl, and Engbert (2008), to account for the generation of microsaccades. We argue that the dynamic interplay of target eccentricity and the magnitude of the saccadic under-/overshoot can be explained by a different strength of activation in the two hemispheres of the saccadic motor map in this model. |
Anna Oleksiak; P. Christiaan Klink; Albert Postma; Ineke J. M. Ham; Martin J. Lankheet; Richard J. A. Wezel Spatial summation in macaque parietal area 7a follows a winner-take-all rule Journal Article In: Journal of Neurophysiology, vol. 105, no. 3, pp. 1150–1158, 2011. @article{Oleksiak2011, While neurons in posterior parietal cortex have been found to signal the presence of a salient stimulus among multiple items in a display, spatial summation within their receptive field in the absence of an attentional bias has never been investigated. This information, however, is indispensable when one investigates the mechanisms of spatial attention and competition between multiple visual objects. To examine the spatial summation rule in parietal area 7a neurons, we trained rhesus monkeys to fixate on a central cross while two identical stimuli were briefly displayed in a neuron's receptive field. The response to a pair of dots was compared with the responses to the same dots when they were presented individually. The scaling and power parameters of a generalized summation algorithm varied greatly, both across neurons and across combinations of stimulus locations. However, the averaged response of the recorded population of 7a neurons was consistent with a winner-take-all rule for spatial summation. A control experiment where a monkey covertly attended to both stimuli simultaneously suggests that attention introduces additional competition by facilitating the less optimal stimulus. Thus an averaging stage is introduced between ∼ 200 and 300 ms of the response to a pair of stimuli. In short, the summation algorithm over the population of area 7a neurons carries the signature of a winner-take-all operation, with spatial attention possibly influencing the temporal dynamics of stimulus competition, that is the moment that the "winner" takes "victory" over the "loser" stimulus. |
Bettina Olk; Yu Jin Effects of aging on switching the response direction of pro- and antisaccades Journal Article In: Experimental Brain Research, vol. 208, no. 1, pp. 139–150, 2011. @article{Olk2011, The present study investigated effects of task switching between pro- and antisaccades and switching the direction of these saccades (response switching) on performance of younger and older adults. Participants performed single-task blocks, in which only pro- or only antisaccades had to be made, as well as mixed-task blocks, in which pro- and antisaccades were required. Analysis of specific task switch effects in the mixed-task blocks showed switch costs for error rates for prosaccades for both groups, suggesting that antisaccade task rules persisted and affected the following prosaccade. The comparison between single- and mixed-task blocks showed that mixing costs were either equal or smaller for older than younger participants, indicating that the older participants were well able to keep task sets in working memory. The most prominent age difference observed for response switching was that, for the older but not the younger group, task switching and response switching interacted, resulting in fewer errors when two consecutive antisaccades were made in the same direction. This finding is best explained by a facilitation of these consecutive antisaccades. The present study clearly demonstrated the impact of response switching and a difference between age groups, underlining the importance of considering this factor when investigating pro- and antisaccades, especially antisaccades, and when investigating task switching and aging. |
Samantha C. Otero; Brendan S. Weekes; Samuel B. Hutton Pupil size changes during recognition memory Journal Article In: Psychophysiology, vol. 48, no. 10, pp. 1346–1353, 2011. @article{Otero2011, Pupils dilate to a greater extent when participants view old compared to new items during recognition memory tests. We report three experiments investigating the cognitive processes associated with this pupil old/new effect. Using a remember/know procedure, we found that the effect occurred for old items that were both remembered and known at recognition, although it was attenuated for known compared to remembered items. In Experiment 2, the pupil old/new effect was observed when items were presented acoustically, suggesting the effect does not depend on low-level visual processes. The pupil old/new effect was also greater for items encoded under deep compared to shallow orienting instructions, suggesting it may reflect the strength of the underlying memory trace. Finally, the pupil old/new effect was also found when participants falsely recognized items as being old. We propose that pupils respond to a strength-of-memory signal and suggest that pupillometry provides a useful technique for exploring the underlying mechanisms of recognition memory. |
Jorge Otero-Millan; Stephen L. Macknik; Apollo Robbins; Susana Martinez-Conde Stronger misdirection in curved than in straight motion Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 133, 2011. @article{OteroMillan2011, Illusions developed by magicians are a rich and largely untapped source of insight into perception and cognition. Here we show that curved motion, as employed by the magician in a classic sleight of hand trick, generates stronger misdirection than rectilinear motion, and that this difference can be explained by the differential engagement of the smooth pursuit and the saccadic oculomotor systems. This research exemplifies how the magician's intuitive understanding of the spectator's mindset can surpass that of the cognitive scientist in specific instances, and that observation-based behavioral insights developed by magicians are worthy of quantitative investigation in the neuroscience laboratory. |
Jorge Otero-Millan; Alessandro Serra; R. John Leigh; Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde Distinctive features of saccadic intrusions and microsaccades in progressive supranuclear palsy Journal Article In: Journal of Neuroscience, vol. 31, no. 12, pp. 4379–4387, 2011. @article{OteroMillan2011a, The eyes do not stay perfectly still during attempted fixation; fixational eye movements and saccadic intrusions (SIs) continuously change the position of gaze. The most common type of SI, square-wave jerks (SWJs), consists of saccade pairs that appear purely horizontal on clinical inspection: the first saccade moves the eye away from the fixation target, and after a short interval, the second saccade brings it back toward the target. SWJs are prevalent in certain neurological disorders, including progressive supranuclear palsy (PSP). Here, we developed an objective method to identify SWJs. We found that SWJs are more frequent, larger, and more markedly horizontal in PSP patients than in healthy human subjects. Furthermore, the loss of a vertical component in fixational saccades and SWJs was the eye movement feature that best distinguished PSP patients from controls. We moreover determined that, in PSP patients and controls, the larger the saccade the more likely it was part of a SWJ. Furthermore, saccades produced by PSP patients had equivalent properties whether they were part of a SWJ or not, suggesting that normal fixational saccades (microsaccades) are rare in PSP. We propose that fixational saccades and SIs are generated by the same neural circuit and that, both in PSP patients and in controls, SWJs result from a coupling mechanism that generates a second corrective saccade shortly after a large fixation saccade. Because of brainstem and/or cerebellum impairment, fixational saccades in PSP are abnormally large and thus more likely to trigger a corrective saccade, giving rise to SWJs. |
Elmar H. Pinkhardt; Jan Kassubek Ocular motor abnormalities in Parkinsonian syndromes Journal Article In: Parkinsonism and Related Disorders, vol. 17, no. 4, pp. 223–230, 2011. @article{Pinkhardt2011, Oculomotor abnormalities can be observed in all Parkinsonian syndromes (PS). Nevertheless, due to the considerable overlap of oculomotor pathology in Parkinsonism, oculomotor changes are not generally considered to contribute substantially to the differential diagnosis of PS. Here we review the characteristics of oculomotor disturbances in the major PS, we provide a survey of the current concepts of the underlying neural physiology of oculomotor control and a summary of the major recording techniques for eye movements. The main focus of this review is to outline the subtle differences between apparently similar oculomotor alterations in Parkinson's disease (PD) and atypical neurodegenerative PS that can contribute to the early differential diagnosis of these entities. |
L. Pisella; N. Alahyane; A. Blangero; F. Thery; S. Blanc; Denis Pelisson Right-hemispheric dominance for visual remapping in humans Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 366, pp. 572–585, 2011. @article{Pisella2011, We review evidence showing a right-hemispheric dominance for visuo-spatial processing and representation in humans. Accordingly, visual disorganization symptoms (intuitively related to remapping impairments) are observed in both neglect and constructional apraxia. More specifically, we review findings from the intervening saccade paradigm in humans, and present additional original data, which suggest a specific role of the asymmetrical network at the temporo-parietal junction (TPJ) in the right hemisphere in visual remapping: following damage to the right dorsal posterior parietal cortex (PPC) as well as part of the corpus callosum connecting the PPC to the frontal lobes, patient OK in a double-step saccadic task exhibited an impairment when the second saccade had to be directed rightward. This singular and lateralized deficit cannot result solely from the patient's cortical lesion and, therefore, we propose that it is due to his callosal lesion that may specifically interrupt the interhemispheric transfer of information necessary to execute accurate rightward saccades towards a remapped target location. This suggests a specialized right-hemispheric network for visuo-spatial remapping that subsequently transfers target location information to downstream planning regions, which are symmetrically organized. |
Alexander Pollatsek; Raymond Bertram; Jukka Hyönä Processing novel and lexicalised Finnish compound words Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 7, pp. 795–810, 2011. @article{Pollatsek2011, Participants read sentences in which novel and lexicalized two-constituent compound words appeared while their eye movements were measured. The frequency of the first constituent of the compounds was also varied factorially and the frequency of the lexicalized compounds was equated over the two conditions. The sentence frames prior to the target word were matched across conditions. Both lexicality and first constituent frequency had large and significant effects on gaze durations on the target word; moreover the constituent frequency effect was significantly larger for the novel words. These results indicate that first constituent frequency has an effect in two stages: in the initial encoding of the compound and in the construction of meaning for the novel compound. The difference between this pattern of results and those for English prefixed words (Pollatsek, Slattery, & Juhasz, 2008) is apparently due to differences in the construction of meaning stage. A general model of the relationship of the processing of polymorphemic words to how they are fixated is presented. |