EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2012 |
A. Caglar Tas; Michael D. Dodd; Andrew Hollingworth The role of surface feature and spatiotemporal continuity in object-based inhibition of return Journal Article In: Visual Cognition, vol. 20, no. 1, pp. 29–47, 2012. @article{Tas2012, The contribution of surface feature continuity to object-based inhibition of return (IOR) was tested in three experiments. Participants executed a saccade to a previously fixated or unfixated colored disk after the object had moved to a new location. Object-based IOR was observed as lengthened saccade latency to a previously fixated object. The consistency of surface feature (color) and spatiotemporal information was manipulated to examine the feature used to define the persisting objects to which inhibition is assigned. If the two objects traded colors during motion, object-based IOR was reliably reduced (Experiment 2), suggesting a role for surface feature properties in defining the objects of object-based IOR. However, if the two objects changed to new colors during motion, object-based IOR was preserved (Experiment 1), and color consistency was not sufficient to support object continuity across a salient spatiotemporal discontinuity (Experiment 3). These results suggest that surface feature consistency plays a significant role in defining object persistence for the purpose of IOR, although surface features may be weighted less strongly than spatiotemporal features in this domain. |
Shuichiro Taya; David Windridge; Magda Osman Looking to score: The dissociation of goal influence on eye movement and meta-attentional allocation in a complex dynamic natural scene Journal Article In: PLoS ONE, vol. 7, no. 6, pp. e39060, 2012. @article{Taya2012, Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers' beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g., court lines) if they performed the goal-specific task. However, we did not find an effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers' beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior. |
Jan Theeuwes; Artem V. Belopolsky Reward grabs the eye: Oculomotor capture by rewarding stimuli Journal Article In: Vision Research, vol. 74, pp. 80–85, 2012. @article{Theeuwes2012, It is well known that salient yet task-irrelevant stimuli may capture our eyes independent of our goals and intentions. The present study shows that a task-irrelevant stimulus previously associated with high monetary reward captures the eyes much more strongly than that very same stimulus when previously associated with low monetary reward. We conclude that reward changes the salience of a stimulus such that a stimulus associated with high reward becomes more pertinent and therefore captures the eyes above and beyond its physical salience. Because the stimulus captures the eyes and disrupts goal-directed behavior, we argue that this effect is automatic and not driven by strategic, top-down control. |
Aidan A. Thompson; Christopher V. Glover; Denise Y. P. Henriques Allocentrically implied target locations are updated in an eye-centred reference frame Journal Article In: Neuroscience Letters, vol. 514, no. 2, pp. 214–218, 2012. @article{Thompson2012, When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. |
Rebecca M. Todd; Deborah Talmi; Taylor W. Schmitz; Josh Susskind; Adam K. Anderson Psychophysical and neural evidence for emotion-enhanced perceptual vividness Journal Article In: Journal of Neuroscience, vol. 32, no. 33, pp. 11201–11212, 2012. @article{Todd2012, Highly emotional events are associated with vivid 'flashbulb' memories. Here we examine whether the flashbulb metaphor characterizes a previously unknown emotion-enhanced vividness (EEV) during initial perceptual experience. Using a magnitude estimation procedure, human observers estimated the relative magnitude of visual noise overlaid on scenes. After controlling for computational metrics of objective visual salience, emotional salience was associated with decreased noise, or heightened perceptual vividness, demonstrating EEV, which predicted later memory vividness. ERPs revealed a posterior P2 component at ~200 ms that was associated with both increased emotional salience and decreased objective noise levels, consistent with EEV. BOLD response in the lateral occipital complex (LOC), insula, and amygdala predicted online EEV. The LOC and insula represented complementary influences on EEV, with the amygdala statistically mediating both. These findings indicate that the metaphorical vivid light surrounding emotional memories is embodied directly in perceptual cortices during initial experience, supported by cortico-limbic interactions. |
Jianliang Tong; Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor The effect of perceptual grouping on perisaccadic spatial distortions Journal Article In: Journal of Vision, vol. 12, no. 10, pp. 1–16, 2012. @article{Tong2012, Perisaccadic spatial distortion (PSD) occurs when a target is flashed immediately before the onset of a saccade and it appears displaced in the direction of the saccade. In previous studies, the magnitude of PSD of a single target was affected by multiple experimental parameters, such as the target's luminance and its position relative to the central fixation target. Here we describe a contextual effect in which the magnitude of the PSD for a target was influenced by the synchronous presentation of another target: PSD for simultaneously presented targets was more uniform than when each was presented individually. Perisaccadic compression was ruled out as a causal factor, and the results suggest that both low- and high-level perceptual grouping mechanisms may account for the change in PSD magnitude. We speculate that perceptual grouping could play a key role in preserving shape constancy during saccadic eye movements. |
Jason Satel; Zhiguo Wang Investigating a two causes theory of inhibition of return Journal Article In: Experimental Brain Research, vol. 223, no. 4, pp. 469–478, 2012. @article{Satel2012, It has recently been demonstrated that there are independent sensory and motor mechanisms underlying inhibition of return (IOR) when measured with oculomotor responses (Wang et al. in Exp Brain Res 218:441-453, 2012). However, these results are seemingly in conflict with previous empirical results which led to the proposal that there are two mutually exclusive flavors of IOR (Taylor and Klein in J Exp Psychol Hum Percept Perform 26:1639-1656, 2000). The observed differences in empirical results across these studies, and the theoretical frameworks that were proposed based on the results, are likely due to differences in the experimental designs. The current experiments establish that the existence of additive sensory and motor contributions to IOR does not depend on target type, repeated spatiotopic stimulation, attentional control settings, or a temporal gap between fixation offset and cue onset, when measured with saccadic responses. Furthermore, our experiments show that the motor mechanism proposed by Wang et al. (Exp Brain Res 218:441-453, 2012) is likely restricted to the oculomotor system, since the additivity effect does not carry over into the manual response modality. |
Elisa Scheller; Christian Büchel; Matthias Gamer Diagnostic features of emotional expressions are processed preferentially Journal Article In: PLoS ONE, vol. 7, no. 7, pp. e41792, 2012. @article{Scheller2012, Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. 
Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. |
Lisette J. Schmidt; Artem V. Belopolsky; Jan Theeuwes The presence of threat affects saccade trajectories Journal Article In: Visual Cognition, vol. 20, no. 3, pp. 284–299, 2012. @article{Schmidt2012, In everyday life, fast identification and processing of threat-related stimuli is of critical importance for survival. Previous studies suggested that spatial attention is automatically allocated to threatening stimuli, such as angry faces. However, in the previous studies the threatening stimuli were not completely irrelevant for the task. In the present study we used saccadic curvature to investigate whether attention is automatically allocated to threatening emotional information. Participants had to make an endogenous saccade up or down while an irrelevant face paired with an object was present in the periphery. The eyes curved away more from the angry faces than from either neutral or happy faces. This effect was not observed when the faces were inverted, excluding the possible role of low-level differences. Since the angry faces were completely irrelevant to the task, the results suggest that attention is automatically allocated to the threatening stimuli, which generates activity in the oculomotor system, and biases behaviour. |
Dana Schneider; Andrew P. Bayliss; Stefanie I. Becker; Paul E. Dux Eye movements reveal sustained implicit processing of others' mental states Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 3, pp. 433–438, 2012. @article{Schneider2012, The ability to attribute mental states to others is crucial for social competency. To assess mentalizing abilities, in false-belief tasks participants attempt to identify an actor's belief about an object's location as opposed to the object's actual location. Passing this test on explicit measures is typically achieved by 4 years of age, but recent eye movement studies reveal registration of others' beliefs by 7 to 15 months. Consequently, a 2-path mentalizing system has been proposed, consisting of a late developing, cognitively demanding component and an early developing, implicit/automatic component. To date, investigations on the implicit system have been based on single-trial experiments only or have not examined how it operates across time. In addition, no study has examined the extent to which participants are conscious of the belief states of others during these tasks. Thus, the existence of a distinct implicit mentalizing system is yet to be demonstrated definitively. Here we show that adults engaged in a primary unrelated task display eye movement patterns consistent with mental state attributions across a sustained temporal period. Debriefing supported the hypothesis that this mentalizing was implicit. It appears there indeed exists a distinct implicit mental state attribution system. |
Dana Schneider; Rebecca Lam; Andrew P. Bayliss; Paul E. Dux Cognitive load disrupts implicit theory-of-mind processing Journal Article In: Psychological Science, vol. 23, no. 8, pp. 842–847, 2012. @article{Schneider2012a, Eye movements in Sally-Anne false-belief tasks appear to reflect the ability to implicitly monitor the mental states of other individuals (theory of mind, or ToM). It has recently been proposed that an early-developing, efficient, and automatically operating ToM system subserves this ability. Surprisingly absent from the literature, however, is an empirical test of the influence of domain-general executive processing resources on this implicit ToM system. In the study reported here, a dual-task method was employed to investigate the impact of executive load on eye movements in an implicit Sally-Anne false-belief task. Under no-load conditions, adult participants displayed eye movement behavior consistent with implicit belief processing, whereas evidence for belief processing was absent for participants under cognitive load. These findings indicate that the cognitive system responsible for implicitly tracking beliefs draws at least minimally on executive processing resources. Thus, even the most low-level processing of beliefs appears to reflect a capacity-limited operation. |
Elisa Schneider; Masaki Maruyama; Stanislas Dehaene; Mariano Sigman Eye gaze reveals a fast, parallel extraction of the syntax of arithmetic formulas Journal Article In: Cognition, vol. 125, no. 3, pp. 475–490, 2012. @article{Schneider2012b, Mathematics shares with language an essential reliance on the human capacity for recursion, permitting the generation of an infinite range of embedded expressions from a finite set of symbols. We studied the role of syntax in arithmetic thinking, a neglected component of numerical cognition, by examining eye movement sequences during the calculation of arithmetic expressions. Specifically, we investigated whether, similar to language, an expression has to be scanned sequentially while the nested syntactic structure is being computed or, alternatively, whether this structure can be extracted quickly and in parallel. Our data provide evidence for the latter: fixation sequences were stereotypically organized in clusters that reflected a fast identification of syntactic embeddings. A syntactically relevant pattern of eye movements was observed even when syntax was defined by implicit procedural rules (precedence of multiplication over addition) rather than explicit parentheses. While the total number of fixations was determined by syntax, the duration of each fixation varied with the complexity of the arithmetic operation at each step. These findings provide strong evidence for a syntactic organization for arithmetic thinking, paving the way for further comparative analysis of differences and coincidences in the instantiation of recursion in language and mathematics. |
Casey A. Schofield; Ashley L. Johnson; Albrecht W. Inhoff; Meredith E. Coles Social anxiety and difficulty disengaging threat: Evidence from eye-tracking Journal Article In: Cognition and Emotion, vol. 26, no. 2, pp. 300–311, 2012. @article{Schofield2012, Theoretical models of social phobia propose that biased attention contributes to the maintenance of symptoms; however, these theoretical models make opposing predictions. Specifically, whereas Rapee and Heimberg (1997) suggested the biases are characterised by hypervigilance to threat cues and difficulty disengaging attention from threat, Clark and Wells (1995) suggested that threat cues are largely avoided. Previous research has been limited by the almost exclusive reliance on behavioural response times to experimental tasks to provide an index of attentional biases. The current study evaluated the relationship between the time-course of attention and symptoms of social anxiety and depression. Forty-two young adults completed a dot-probe task with emotional faces while eye-movement data were collected. The results revealed that increased social anxiety was associated with attention to emotional (rather than neutral) faces over time as well as difficulty disengaging attention from angry expressions; some evidence was found for a relationship between heightened depressive symptoms and increased attention to fear faces. |
Elizabeth R. Schotter; Cainen Gerety; Keith Rayner Heuristics and criterion setting during selective encoding in visual decision making: Evidence from eye movements Journal Article In: Visual Cognition, vol. 20, no. 9, pp. 1110–1129, 2012. @article{Schotter2012, When making a decision, people spend longer looking at the option they ultimately choose compared to other options - termed the gaze bias effect - even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, online during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgements about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented in the same colour content (e.g., both in colour or both in black-and-white) or whether they differed in colour content, and the extent to which colour content was a reliable cue to relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the colour content cue was not reliable during the first encounter with the images, but there was no modulation of the gaze bias effect in the remaining time on the trial. These data suggest people do selectively encode decision-relevant information online. |
Kilian G. Seeber; Dirk Kerzel Cognitive load in simultaneous interpreting: Model meets data Journal Article In: International Journal of Bilingualism, vol. 16, no. 2, pp. 228–242, 2012. @article{Seeber2012, Seeber (2011) recently introduced a series of analytical cognitive load models, providing a detailed illustration of conjectured cognitive resource allocation during simultaneous interpreting. In this article, the authors set out to compare these models with data gathered in an experiment using task-evoked pupillary responses to measure online cognitive load during simultaneous interpreting when embedded in single-sentence context and discourse context. Verb-final and verb-initial constructions were analysed in terms of the load they cause to an inherently capacity-limited system when interpreted simultaneously into a verb-initial language like English. The results show larger pupil dilation with verb-final than with verb-initial constructions, suggesting higher cognitive load with asymmetrical structures. A tendency for reduced cognitive load in the discourse context compared to the sentence context was also found. These data support the models' prediction of an increase in cognitive load towards (and beyond) the end of verb-final constructions. |
Michael J. Seiler; Poornima Madhavan; Molly Liechty Toward an understanding of real estate homebuyer internet search behavior: An application of ocular tracking technology Journal Article In: Journal of Real Estate Research, vol. 34, no. 2, pp. 211–241, 2012. @article{Seiler2012, This paper examines the eye movements of potential homebuyers searching for houses on the Internet. Total dwell time (looking at the photo), fixation duration (time spent at each focal point), and saccade amplitude (average distance between focal points) significantly explain someone's opinion of a house. The sections that are viewed first are the photo of the house, the description section, distantly followed by the real estate agent's remarks. The findings indicate that charm pricing, where agents list properties at slightly less than round numbers, works in opposition to its intended effect. Given that people dwell significantly longer on the first house they view, and since charm pricing typically causes a property to appear towards the end of a search when sorted by price from low to high, is charm pricing an effective marketing strategy? |
Yasuhiro Seya; Katsumi Watanabe The minimal time required to process visual information in visual search tasks measured by using gaze-contingent visual masking Journal Article In: Perception, vol. 41, no. 7, pp. 819–830, 2012. @article{Seya2012, To estimate the minimal time required to process visual information (i.e., "effective acquisition time") during a visual search task, we used a gaze-contingent visual masking method. In the experiment, an opaque mask that restricted the central vision was presented at the current gaze position. We manipulated the temporal delay from a gaze shift to mask movement. Participants were asked to search for a target letter (T) among distractor letters (Ls) as quickly as possible under various delays. The results showed that the reaction times and search rate decreased when the delay was increased. When the delay was longer than 50 ms, the reaction times and search rate reached a plateau. These results indicate that the effective acquisition time during the visual search task used in the study is equal to or less than 50 ms. The present study indicates that the gaze-contingent visual masking method is useful for revealing the effective acquisition time. |
Madeleine E. Sharp; Jayalakshmi Viswanathan; Linda J. Lanyon; Jason J. S. Barton Sensitivity and bias in decision-making under risk: Evaluating the perception of reward, its probability and value Journal Article In: PLoS ONE, vol. 7, no. 4, pp. e33460, 2012. @article{Sharp2012, BACKGROUND: There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. OBJECTIVE: We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. DESIGN/METHODS: Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. RESULTS: Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. CONCLUSIONS: This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. |
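The expected-value comparison and prospect-theory account summarized in the abstract above can be illustrated with a short numerical sketch. This is an illustration only: the two prospects, the value-function exponent, and the probability-weighting parameter below are hypothetical choices (using the standard Tversky–Kahneman functional forms), not the study's stimuli or fitted estimates.

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function (hypothetical gamma)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.5):
    """Concave subjective value of a monetary gain (hypothetical alpha)."""
    return x**alpha

def prospect_utility(amount, prob):
    return weight(prob) * value(amount)

# Hypothetical prospects: safer = 90% chance of $20; riskier = 40% chance of $50.
safe, risky = (20, 0.90), (50, 0.40)

ev_safe = safe[0] * safe[1]    # expected value of the safer prospect (18.0)
ev_risky = risky[0] * risky[1] # expected value of the riskier prospect (20.0)
ev_diff_pct = 100 * (ev_risky - ev_safe) / ev_safe

u_safe = prospect_utility(*safe)
u_risky = prospect_utility(*risky)

# With a sufficiently concave value function and non-linear probability
# weighting, the safer prospect is subjectively preferred even though the
# riskier one has the higher expected value -- a "risk premium" of the kind
# the abstract describes.
print(f"EV difference: {ev_diff_pct:.1f}% in favour of the riskier prospect")
print(f"Subjective utilities: safe={u_safe:.2f}, risky={u_risky:.2f}")
```

Varying the expected-value gap of the riskier prospect until the subjective utilities are equal gives a simple model of the risk premium measured in the paper.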
Heather Sheridan; Eyal M. Reingold Perceptual specificity effects in rereading: Evidence from eye movements Journal Article In: Journal of Memory and Language, vol. 67, no. 2, pp. 255–269, 2012. @article{Sheridan2012c, The present experiments examined perceptual specificity effects using a rereading paradigm. Eye movements were monitored while participants read the same target word twice, in two different low-constraint sentence frames. The congruency of perceptual processing was manipulated by either presenting the target word in the same distortion typography (i.e., font) during the first and second presentations (i.e., the congruent condition), or changing the distortion typography of the word across the two presentations (i.e., the incongruent condition). Fixation times for the second presentation of the target word were shorter for the congruent condition compared to the incongruent condition, and did not differ across the incongruent condition and an additional baseline condition that employed a normal (i.e., non-distorted) typography during the first presentation and a distortion typography during the second presentation. In Experiment 1, we employed both unusual and subtle distortion typographies, and we demonstrated that the typography congruency effect (i.e., the congruent < incongruent difference) was significant for low frequency but not for high frequency target words. In Experiment 2, the congruency effect persisted across a 1-week lag between the first and second presentations of the target words. Overall, the present demonstration of the long-term retention of superficial perceptual details (i.e., typography) supports the existence of perceptually specific memory representations. |
Zhuanghua Shi; Romi Nijhawan Motion extrapolation in the central fovea Journal Article In: PLoS ONE, vol. 7, no. 3, pp. e33651, 2012. @article{Shi2012, Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency was not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea. |
Kerry Shields; Paul E. Engelhardt; Magdalena Ietswaart Processing emotion information from both the face and body: An eye-movement study Journal Article In: Cognition and Emotion, vol. 26, no. 4, pp. 699–709, 2012. @article{Shields2012, This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body. |
Shui-I Shih; Katie L. Meadmore; Simon P. Liversedge Using eye movement measures to investigate effects of age on memory for objects in a scene Journal Article In: Memory, vol. 20, no. 6, pp. 629–637, 2012. @article{Shih2012, We examined whether there were age-related differences in eye movements during intentional encoding of a photographed scene that might account for age-related differences in memory of objects in the scene. Younger and older adults exhibited similar scan path patterns, and visited each region of interest in the scene with similar frequency and duration. Despite the similarity in viewing, there were fundamental differences in the viewing–memory relationship. Although overall recognition was poorer in the older than younger adults, there was no age effect on recognition probability for objects visited only once. More importantly, re-visits to objects brought gain in recognition probability for the younger adults, but not for the older adults. These results suggest that the age-related differences in object recognition performance are in part due to inefficient integration of information from working memory to longer-term memory. |
Masanori Shimono; Hiroaki Mano; Kazuhisa Niki The brain structural hub of interhemispheric information integration for visual motion perception Journal Article In: Cerebral Cortex, vol. 22, no. 2, pp. 337–344, 2012. @article{Shimono2012, We investigated the key anatomical structures mediating interhemispheric integration during the perception of apparent motion across the retinal midline. Previous studies of commissurotomized patients suggest that subcortical structures mediate interhemispheric transmission but the specific regions involved remain unclear. Here, we exploit interindividual variations in the propensity of normal subjects to perceive horizontal motion, in relation to vertical motion. We characterize these differences psychophysically using a Dynamic Dot Quartet (an ambiguous stimulus that induces illusory motion). We then tested for correlations between a tendency to perceive horizontal motion and fractional anisotropy (FA) (from structural diffusion tensor imaging), over subjects. FA is an indirect measure of the orientation and integrity of white matter tracts. Subjects who found it easy to perceive horizontal motion showed significantly higher FA values in the pulvinar. Furthermore, fiber tracking from an independently identified (subject-specific) visual motion area converged on the pulvinar nucleus. These results suggest that the pulvinar is an anatomical hub and may play a central role in interhemispheric integration. |
Steven S. Shimozaki; Wade A. Schoonveld; Miguel P. Eckstein A unified Bayesian observer analysis for set size and cueing effects on perceptual decisions and saccades Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–26, 2012. @article{Shimozaki2012, Visual search and cueing tasks have been employed extensively in attentional research, with each having a standard effect (visual search: set size effects, cueing: cue validity). Generally these effects have been treated with different (but often similar) attentional theories. The present study aims to consolidate cueing and set size effects within an ideal observer approach. Four observers performed a yes/no contrast discrimination of a Gaussian signal in a task combining cueing with visual search. The signal appeared in half the trials, and effective set size (M, 2 to 8) was determined by one primary precue (having 50% validity in signal present trials) and M-1 secondary precues. There were two stimulus durations: 1 second (eye movements allowed), and the first-saccade latency (in the 1 second duration condition) minus 80 milliseconds. Simulations found that an ideal observer for the perceptual yes/no decisions and the first saccadic localization decisions predicted both set size and cueing effects with a single weighting mechanism, providing a unifying account. For the human observer results, a modified ideal observer (with performance matched to human performance) fit the yes/no perceptual decisions well. For the first saccadic decisions, there was evidence of use of the primary cue, but the modified ideal observer was not a good fit, indicating a suboptimal use of the cue. We discuss possible underlying assumptions about the task that might explain the apparent suboptimal nature of saccadic decisions and the overall utility of the ideal observer for cueing and visual search studies in visual attention and saccades. |
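The "single weighting mechanism" in this ideal observer can be illustrated with a short sketch. The Gaussian likelihood-ratio form is standard signal detection theory, but the function names, the even split of weight over secondary precues, and the unit criterion are illustrative assumptions here, not the paper's exact model.

```python
import math

def likelihood_ratio(x, signal_mean=1.0, noise_sd=1.0):
    """Gaussian likelihood ratio (signal-plus-noise vs. noise alone)
    for the response x at a single location."""
    return math.exp((x * signal_mean - signal_mean ** 2 / 2.0) / noise_sd ** 2)

def cue_weights(m):
    """Primary precue carries 50% validity; the remaining probability is
    spread evenly over the M-1 secondary precues."""
    return [0.5] + [0.5 / (m - 1)] * (m - 1)

def yes_no_decision(responses, weights, prior_present=0.5, criterion=1.0):
    """Ideal yes/no rule: weight each location's likelihood ratio by the
    prior probability that the signal (if present) sits there, then compare
    the weighted sum times the prior odds to a criterion."""
    weighted_lr = sum(w * likelihood_ratio(x) for x, w in zip(responses, weights))
    odds = prior_present / (1.0 - prior_present)
    return weighted_lr * odds > criterion
```

Because the secondary weights shrink as M grows, the same rule produces a set-size effect, while the primary/secondary asymmetry produces the cueing effect; an ideal saccadic observer would instead saccade to the location with the largest weighted likelihood.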
Claudio Simoncini; Laurent U. Perrinet; Anna Montagnini; Pascal Mamassian; Guillaume S. Masson More is not always better: Adaptive gain control explains dissociation between perception and action Journal Article In: Nature Neuroscience, vol. 15, no. 11, pp. 1596–1603, 2012. @article{Simoncini2012, Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether this pooling is generic or adaptive; that is, dependent on the behavioral task. We used rich textured motion stimuli of varying bandwidths to decipher how the human visual motion system computes object speed in different behavioral contexts. We found that, although a simple visuomotor behavior such as short-latency ocular following responses takes advantage of the full distribution of motion signals, perceptual speed discrimination is impaired for stimuli with large bandwidths. Such opposite dependencies can be explained by an adaptive gain control mechanism in which the divisive normalization pool is adjusted to meet the different constraints of perception and action. |
Chris R. Sims; Robert A. Jacobs; David C. Knill An ideal observer analysis of visual working memory Journal Article In: Psychological Review, vol. 119, no. 4, pp. 807–830, 2012. @article{Sims2012, Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis (one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit) provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. |
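The rate-distortion bound this analysis builds on can be illustrated with the textbook result for a Gaussian source under squared-error distortion, D(R) = sigma^2 * 2^(-2R). Splitting a fixed capacity evenly across items is an illustrative simplification (the paper's framework also admits uneven allocations), and the function names are this sketch's own:

```python
def distortion_rate(rate_bits, source_var=1.0):
    """Textbook rate-distortion bound for a Gaussian source under squared
    error: no code using `rate_bits` bits can achieve a mean squared error
    below source_var * 2 ** (-2 * rate_bits)."""
    return source_var * 2.0 ** (-2.0 * rate_bits)

def per_item_error(total_capacity_bits, set_size):
    """If a fixed capacity is split evenly over set_size items, each item
    is encoded at C/N bits, so recall error grows smoothly with memory
    load -- no fixed item limit is assumed."""
    return distortion_rate(total_capacity_bits / set_size)
```

The smooth growth of `per_item_error` with set size is the qualitative signature that distinguishes this account from fixed-slot models.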
Mackenzie G. Glaholt; Keith Rayner; Eyal M. Reingold The mask-onset delay paradigm and the availability of central and peripheral visual information during scene viewing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–19, 2012. @article{Glaholt2012a, We employed a variant of the mask-onset delay paradigm in order to limit the availability of visual information in central and peripheral vision within individual fixations during scene viewing. Subjects viewed full-color scene photos with instructions to search for a target object (Experiment 1) or to study them for a later memory test (Experiment 2). After a fixed interval following the onset of each eye fixation (50-100 ms), the scene was scrambled either in the central visual field or over the entire display. The intact scene was presented when the subject made an eye movement. Our results reconcile different sets of findings from prior research regarding the masking of central and peripheral visual information at different intervals following fixation onset. In particular, we found that when the entire display was scrambled, both search and memory performance were impaired even at relatively long mask-onset intervals. In contrast, when central vision was scrambled, there were subtle impairments that depended on the viewing task. In the 50-ms mask-onset interval, subjects were selectively impaired at identifying, but not at locating, the search target (Experiment 1), while memory performance (Experiment 2) was unaffected in this condition; hence, the reliance on central and peripheral visual information depends partly on the viewing task. |
Mackenzie G. Glaholt; Eyal M. Reingold Direct control of fixation times in scene viewing: Evidence from analysis of the distribution of first fixation duration Journal Article In: Visual Cognition, vol. 20, no. 6, pp. 605–626, 2012. @article{Glaholt2012, Participants' eye movements were monitored in two scene viewing experiments that manipulated the task-relevance of scene stimuli and their availability for extrafoveal processing. In both experiments, participants viewed arrays containing eight scenes drawn from two categories. The arrays of scenes were either viewed freely (Free Viewing) or in a gaze-contingent viewing mode where extrafoveal preview of the scenes was restricted (No Preview). In Experiment 1a, participants memorized the scenes from one category that was designated as relevant, and in Experiment 1b, participants chose their preferred scene from within the relevant category. We examined first fixations on scenes from the relevant category compared to the irrelevant category (Experiments 1a and 1b), and those on the chosen scene compared to other scenes not chosen within the relevant category (Experiment 1b). A survival analysis was used to estimate the first discernible influence of the task-relevance on the distribution of first-fixation durations. In the free viewing condition in Experiment 1a, the influence of task relevance occurred as early as 81 ms from the start of fixation. In contrast, the corresponding value in the no preview condition was 254 ms, demonstrating the crucial role of extrafoveal processing in enabling direct control of fixation durations in scene viewing. First fixation durations were also influenced by whether or not the scene was eventually chosen (Experiment 1b), but this effect occurred later and affected fewer fixations than the effect of scene category, indicating that the time course of scene processing is an important variable mediating direct control of fixation durations. |
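The survival analysis used in this study estimates the earliest time at which an experimental manipulation influences the distribution of fixation durations. A minimal sketch of the idea is below; published analyses use bootstrap resampling to set the divergence criterion, so the fixed threshold, function names, and toy durations here are illustrative assumptions.

```python
def survival_curve(durations, t_grid):
    """Empirical survival function: the proportion of fixations still
    ongoing at each time t."""
    n = len(durations)
    return [sum(d > t for d in durations) / n for t in t_grid]

def divergence_point(cond_a, cond_b, t_grid, threshold=0.05):
    """Earliest time at which the two survival curves differ by more than
    `threshold` -- a crude stand-in for the bootstrap procedures used in
    published survival analyses of fixation durations."""
    sa = survival_curve(cond_a, t_grid)
    sb = survival_curve(cond_b, t_grid)
    for t, a, b in zip(t_grid, sa, sb):
        if abs(a - b) > threshold:
            return t
    return None

# Toy first-fixation durations (ms): task-relevant scenes hold gaze longer.
relevant = [120, 180, 200, 240, 260, 300]
irrelevant = [80, 100, 140, 160, 200, 220]
dp = divergence_point(relevant, irrelevant, range(0, 400, 10))
```

An early divergence point (such as the 81 ms estimate with free viewing) indicates that the manipulation already influenced the shortest fixations, which is the hallmark of direct control.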
Richard Godijn; Jan Theeuwes Overt is no better than covert when rehearsing visuo-spatial information in working memory Journal Article In: Memory & Cognition, vol. 40, no. 1, pp. 52–61, 2012. @article{Godijn2012, In the present study, we examined whether eye movements facilitate retention of visuo-spatial information in working memory. In two experiments, participants memorised the sequence of the spatial locations of six digits across a retention interval. In some conditions, participants were free to move their eyes during the retention interval, but in others they either were required to remain fixated or were instructed to move their eyes exclusively to a selection of the memorised locations. Memory performance was no better when participants were free to move their eyes during the memory interval than when they fixated a single location. Furthermore, the results demonstrated a primacy effect in the eye movement behaviour that corresponded with the memory performance. We conclude that overt eye movements do not provide a benefit over covert attention for rehearsing visuo-spatial information in working memory. |
Nick Donnelly; Katherine Cornes; Tamaryn Menneer An examination of the processing capacity of features in the Thatcher illusion Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 7, pp. 1475–1487, 2012. @article{Donnelly2012, Detection of the Thatcher illusion (Thompson, Perception, 9:483-484, 1980) is widely upheld as being dependent on configural processing (e.g., Bartlett & Searcy, Cognitive Psychology, 25:281-316, 1993; Boutsen, Humphreys, Praamstra, & Warbrick, NeuroImage, 32:352-367, 2006; Donnelly & Hadwin, Visual Cognition, 10:1001-1017, 2003; Leder & Bruce, Quarterly Journal of Experimental Psychology, 53A:513-536, 2000; Lewis, Perception, 30:769-774, 2001; Maurer, Grand, & Mondloch, Trends in Cognitive Sciences, 6:255-260, 2002; Stürzel & Spillmann, Perception, 29:937-942, 2000). Given that supercapacity processing accompanies configural processing (see Wenger & Townsend, 2001), supercapacity processing should occur in the processing of Thatcherised upright faces. The purpose of this study was to test for evidence that the grotesqueness of upright Thatcherised faces results from supercapacity processing. Two tasks were employed: categorisation of a single face as odd or normal, and a same/different task for sequentially presented faces. The stimuli were typical faces, partially Thatcherised faces (either eyes or mouth inverted) and fully Thatcherised faces. All of the faces were presented upright. The data from both experiments were analysed using mean response times and a number of capacity measures (capacity coefficient, the Miller and Grice inequalities, and the proportional-hazards ratio). The results of both experiments demonstrated some evidence of a redundancy gain for the redundant-target condition over the single-target condition, especially in the response times in Experiment 1. However, there was very limited evidence, in either experiment, that the redundancy gains resulted from supercapacity processing. 
We concluded that the oddity signalled by inversion of eyes and mouths does not arise from positive interdependencies between these features. |
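The capacity coefficient used in these analyses (Townsend and Wenger's C(t)) compares the cumulative hazard for redundant-target responses against the sum of the single-target hazards. Written with survivor functions S(t), a one-time-point sketch looks like this; the survivor values are made up for illustration:

```python
import math

def capacity_coefficient(s_redundant, s_single_a, s_single_b):
    """Townsend & Wenger's C(t) at a single time point. With cumulative
    hazards H(t) = -ln S(t), the minus signs cancel, giving
        C(t) = ln S_AB(t) / (ln S_A(t) + ln S_B(t)).
    C(t) = 1 for an unlimited-capacity independent race (S_AB = S_A * S_B);
    C(t) > 1 indicates supercapacity, C(t) < 1 limited capacity."""
    return math.log(s_redundant) / (math.log(s_single_a) + math.log(s_single_b))

# Unlimited-capacity benchmark: redundant survivor equals the product of singles.
baseline = capacity_coefficient(0.25, 0.5, 0.5)
# Faster-than-race redundant responses (lower survivor) -> supercapacity.
super_c = capacity_coefficient(0.10, 0.5, 0.5)
```

Under this measure, the study's finding was that redundancy gains for Thatcherised features rarely pushed C(t) above the unlimited-capacity benchmark of 1.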
Tim Donovan; Trevor J. Crawford; Damien Litchfield Negative priming for target selection with saccadic eye movements Journal Article In: Experimental Brain Research, vol. 222, no. 4, pp. 483–494, 2012. @article{Donovan2012, We conducted a series of experiments to determine whether negative priming is used in the process of target selection for a saccadic eye movement. The key questions addressed the circumstances in which the negative priming of an object takes place, and the distinction between spatial and object-based effects. Experiment 1 revealed that after fixating a target (cricket ball) amongst an array of semantically related distracters, saccadic eye movements in a subsequent display were faster to the target than to the distracters or new objects, irrespective of location. The main finding was that of the facilitation of a recent target, not the inhibition of a recent distracter or location. Experiment 2 replicated this finding by using silhouettes of objects for selection that is based on feature shape. Error rates were associated with distracters with high target-shape similarity; therefore, Experiment 3 presented silhouettes of animals using distracters with low target-shape similarity. The pattern of results was similar to that of Experiment 2, with clear evidence of target facilitation rather than the inhibition of distracters. Experiments 4 and 5 introduced a distracter together with the target into the probe display, to generate a level of competitive selection in the probe condition. In these circumstances, clear evidence of spatial inhibition at the location of the previous distracters emerged. We discuss the implications for our understanding of selective attention and consider why it is essential to supplement response time data with the analysis of eye movement behaviour in spatial negative priming paradigms. |
Michael Dorr; Eleonora Vig; Erhardt Barth Eye movement prediction and variability on natural video data sets Journal Article In: Visual Cognition, vol. 20, no. 4-5, pp. 495–514, 2012. @article{Dorr2012, We here study the predictability of eye movements when viewing high-resolution natural videos. We use three recently published gaze data sets that contain a wide range of footage, from scenes of almost still-life character to professionally made, fast-paced advertisements and movie trailers. Inter-subject gaze variability differs significantly between data sets, with variability being lowest for the professional movies. We then evaluate three state-of-the-art saliency models on these data sets. A model that is based on the invariants of the structure tensor and that combines very generic, sparse video representations with machine learning techniques outperforms the two reference models; performance is further improved for two data sets when the model is extended to a perceptually inspired colour space. Finally, a combined analysis of gaze variability and predictability shows that eye movements on the professionally made movies are the most coherent (due to implicit gaze-guidance strategies of the movie directors), yet the least predictable (presumably due to the frequent cuts). Our results highlight the need for standardized benchmarks to comparatively evaluate eye movement prediction algorithms. |
Trafton Drew; Corbin Cunningham; Jeremy M. Wolfe When and why might a computer-aided detection (CAD) system interfere with visual search? An eye-tracking study Journal Article In: Academic Radiology, vol. 19, no. 10, pp. 1260–1267, 2012. @article{Drew2012, Rationale and Objectives: Computer-aided detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: Forty-seven naive observers in two studies were asked to search for a target, embedded in 1/f^2.4 noise, while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, whereas other observers completed the study without CAD. In experiment 1, the CAD system's primary function was to tell observers where the target might be. In experiment 2, CAD provided information about target identity. Results: In experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22) = 4.74, P < .001), but there was also a substantial cost. Targets that were not marked by the CAD system were missed more frequently than equivalent targets in no-CAD blocks of the experiment (t(22) = 7.02, P < .001). Experiment 2 showed no behavioral benefit from CAD, but also no significant cost on sensitivity to unmarked targets (t(22) = 0.6
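The observer sensitivity compared across CAD and no-CAD conditions in studies like this is conventionally computed as d' from hit and false-alarm counts. A minimal sketch follows; the log-linear correction is one common convention, and the counts are hypothetical, not the study's data.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so that rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical per-condition counts only -- not the study's data.
with_cad = d_prime(hits=90, misses=10, false_alarms=5, correct_rejections=95)
without_cad = d_prime(hits=75, misses=25, false_alarms=10, correct_rejections=90)
```

Per-observer d' values like these would then be compared across conditions with a t test, as in the statistics reported above.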
Thomas Ellenbuerger; Arnaud Boutin; Stefan Panzer; Yannick Blandin; Lennart Fischer; Jörg Schorer; Charles H. Shea Observational training in visual half-fields and the coding of movement sequences Journal Article In: Human Movement Science, vol. 31, no. 6, pp. 1436–1448, 2012. @article{Ellenbuerger2012, An experiment was conducted to determine if gating information to different hemispheres during observational training facilitates the development of a movement representation. Participants were randomly assigned to one of three observation groups that differed in terms of the type of visual half-field presentation during observation (right visual half-field (RVF), left visual half-field (LVF), or in central position (CE)), and a control group (CG). On Day 1, visual stimuli indicating the pattern of movement to be produced were projected to the respective hemisphere. The task participants observed was a 1300 ms spatial-temporal pattern of elbow flexions and extensions. On Day 2, participants physically performed the task in an inter-manual transfer paradigm with a retention test and two contralateral transfer tests: a mirror transfer test, which required the same pattern of muscle activation and limb joint angles, and a non-mirror transfer test, which reinstated the visual-spatial pattern of the sequence. The results demonstrated that participants of the CE, RVF, and LVF groups showed superior retention and transfer performance compared to participants of the CG. Participants of the CE and LVF groups demonstrated an advantage when the visual-spatial coordinates were reinstated compared to the motor coordinates, while participants of the RVF group did not show a specific transfer pattern. These results are discussed in the context of hemisphere specialization. |
Ben Harkin; Sébastien Miellet; Klaus Kessler What checkers actually check: An eye tracking study of inhibitory control and working memory Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e44689, 2012. @article{Harkin2012, Background: Not only is compulsive checking the most common symptom in Obsessive Compulsive Disorder (OCD), with an estimated prevalence of 50–80% in patients, but approximately 15% of the general population reveal subclinical checking tendencies that impact negatively on their performance in daily activities. Therefore, it is critical to understand how checking affects attention and memory in clinical as well as subclinical checkers. Eye fixations are commonly used as indicators for the distribution of attention, but research in OCD has revealed mixed results at best. Methodology/Principal Findings: Here we report atypical eye movement patterns in subclinical checkers during an ecologically valid working memory (WM) manipulation. Our key manipulation was to present an intermediate probe during the delay period of the memory task, explicitly asking for the location of a letter that, however, had not been part of the encoding set (i.e., misleading participants). Using eye movement measures, we now provide evidence that high checkers' inhibitory impairments for misleading information result in them checking the contents of WM in an atypical manner. Checkers fixate more often and for longer than non-checkers when misleading information is presented. Specifically, checkers spend more time checking stimulus locations as well as locations that had actually been empty during encoding. Conclusions/Significance: We conclude that these atypical eye movement patterns directly reflect internal checking of memory contents, and we discuss the implications of our findings for the interpretation of behavioural and neuropsychological data. 
In addition our results highlight the importance of ecologically valid methodology for revealing the impact of detrimental attention and memory checking on eye movement patterns. |
William J. Harrison; Jason B. Mattingley; Roger W. Remington Pre-saccadic shifts of visual attention Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45670, 2012. @article{Harrison2012, The locations of visual objects to which we attend are initially mapped in a retinotopic frame of reference. Because each saccade results in a shift of images on the retina, however, the retinotopic mapping of spatial attention must be updated around the time of each eye movement. Mathôt and Theeuwes [1] recently demonstrated that a visual cue draws attention not only to the cue's current retinotopic location, but also to a location shifted in the direction of the saccade, the "future-field". Here we asked whether retinotopic and future-field locations have special status, or whether cue-related attention benefits exist between these locations. We measured responses to targets that appeared either at the retinotopic or future-field location of a brief, non-predictive visual cue, or at various intermediate locations between them. Attentional cues facilitated performance at both the retinotopic and future-field locations for cued relative to uncued targets, as expected. Critically, this cueing effect also occurred at intermediate locations. Our results, and those reported previously [1], imply a systematic bias of attention in the direction of the saccade, independent of any predictive remapping of attention that compensates for retinal displacements of objects across saccades [2]. |
Bronson Harry; Chris Davis; Jeesun Kim Exposure in central vision facilitates view-invariant face recognition in the periphery Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–9, 2012. @article{Harry2012, The present study investigated the extent to which a face presented in the visual periphery is processed and whether such processing can be influenced by a recent encounter in central vision. To probe face processing, a series of studies was conducted in which participants classified the sex and identity of faces presented in central and peripheral vision. The results showed that when target faces had not been previously viewed in central vision, recognition in peripheral vision was limited whereas sex categorization was not. When faces were previously viewed in central vision, recognition in peripheral vision improved even when the pose, hairstyle, and lighting conditions of these faces were changed. These results are discussed with regard to possible mechanisms underpinning this exposure effect. |
Ryusuke Hayashi; Manabu Tanifuji Which image is in awareness during binocular rivalry? Reading perceptual status from eye movements Journal Article In: Journal of Vision, vol. 12, no. 3, pp. 1–11, 2012. @article{Hayashi2012, Binocular rivalry is a useful psychophysical tool to investigate neural correlates of visual consciousness because the alternation between awareness of the left and right eye images occurs without any accompanying change in visual input. The conventional experiments on binocular rivalry require participants to voluntarily report their perceptual state. Obtaining reliable reports from non-human primates about their subjective visual experience, however, requires long-term training, which has made electrophysiological experiments on binocular rivalry quite difficult. Here, we developed a new binocular rivalry stimulus that consists of two different object images that are phase-shifted to move in opposite directions from each other: One eye receives leftward motion while the other eye receives rightward motion, although both eyes' images are perceived to remain at the same position. Experiments on adult human participants showed that eye movements (optokinetic nystagmus, OKN) are involuntarily evoked during the observation of our stimulus. We also found that the evoked OKN can serve as a cue for accurate estimation about which object image was dominant during rivalry, since OKN follows the motion associated with the image in awareness at a given time. This novel visual presentation technique enables us to effectively explore the neural correlates of visual awareness using animal models. |
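The readout this stimulus enables is simple in principle: because OKN follows the motion of the dominant image, the sign of the slow-phase eye velocity indicates the current percept. A minimal sketch (the sign convention, labels, and toy velocity samples are assumptions of this illustration):

```python
def dominant_percept(slow_phase_velocities):
    """Classify which rival image is in awareness from the mean horizontal
    slow-phase eye velocity: OKN drifts with the motion of the dominant
    image (negative = leftward drift, by the convention assumed here)."""
    mean_v = sum(slow_phase_velocities) / len(slow_phase_velocities)
    return "left-moving image" if mean_v < 0 else "right-moving image"

percept = dominant_percept([-2.1, -1.8, -2.4, -1.9])  # deg/s, toy samples
```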
Katrin Herrmann; David J. Heeger; Marisa Carrasco Feature-based attention enhances performance by increasing response gain Journal Article In: Vision Research, vol. 74, pp. 10–20, 2012. @article{Herrmann2012, Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann et al., 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attention field is small or large. To test this prediction, we measured the contrast dependence of feature-based attention. Observers performed an orientation-discrimination task on a spatial array of grating patches. The spatial locations of the gratings were varied randomly so that observers could not attend to specific locations. Feature-based attention was manipulated with a 75% valid and 25% invalid pre-cue, and the featural extent of the attention field was manipulated by introducing uncertainty about the upcoming grating orientation. Performance accuracy was better for valid than for invalid pre-cues, consistent with a change in response gain, when the featural extent of the attention field was small (low uncertainty) or when it was large (high uncertainty) relative to the featural extent of the stimulus. These results for feature-based attention clearly differ from results of analogous experiments with spatial attention, yet both support key predictions of the normalization model of attention. |
Matthew D. Hilchey; Raymond M. Klein; Jason Ivanoff Perceptual and motor inhibition of return: Components or flavors? Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 7, pp. 1416–1429, 2012. @article{Hilchey2012, The most common evidence for inhibition of return (IOR) is the robust finding of increased response times to targets that appear at previously cued locations following a cue-target interval exceeding ~300 ms. In a variation on this paradigm, Abrams and Dobkin (Journal of Experimental Psychology: Human Perception and Performance 20:467-477, 1994b) observed that IOR was greater when measured with a saccadic response to a peripheral target than with that to a central arrow, leading to the conclusion that saccadic responses to peripheral targets comprise motoric and perceptual components (the two-component theory for saccadic IOR), whereas saccadic responses to a central target comprise a single motoric component. In contrast, Taylor and Klein (Journal of Experimental Psychology: Human Perception and Performance 26:1639-1656, 2000) discovered that IOR for saccadic responses was equivalent for central and peripheral targets, suggesting a single motoric effect under these conditions. Rooted in methodological differences between the studies, three possible explanations for this discrepancy can be found in the literature. Here, we demonstrate that the empirical discrepancy is rooted in the following methodological difference: Whereas Abrams and Dobkin (Journal of Experimental Psychology: Human Perception and Performance 20:467-477, 1994b) administered central arrow and peripheral onset targets in separate blocks, Taylor and Klein (Journal of Experimental Psychology: Human Perception and Performance 26:1639-1656, 2000) randomly intermixed these stimuli in a single block. 
Our results demonstrate that (1) blocking central arrow targets fosters a spatial attentional control setting that allows for the long-lasting IOR normally generated by irrelevant peripheral cues to be filtered and (2) repeated sensory stimulation has no direct effect on the magnitude of IOR measured by saccadic responses to targets presented about 1 s after a peripheral cue. |
Matthew D. Hilchey; Raymond M. Klein; Jason Satel; Zhiguo Wang Oculomotor inhibition of return: how soon is it "recoded" into spatiotopic coordinates? Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 6, pp. 1145–1153, 2012. @article{Hilchey2012a, When, in relation to the execution of an eye movement, does the recoding of visual information from retinotopic to spatiotopic coordinates happen? Two laboratories seeking to answer this question using oculomotor inhibition of return (IOR) have generated different answers: Mathôt and Theeuwes (Psychological Science 21:1793-1798, 2010) found evidence for the initial coding of IOR to be retinotopic, while Pertzov, Zohary, and Avidan (Journal of Neuroscience 30:8882-8887, 2010) found evidence for spatiotopic IOR at even shorter postsaccadic intervals than were tested by Mathôt and Theeuwes (Psychological Science 21:1793-1798, 2010). To resolve this discrepancy, we conducted two experiments that combined the methods of the previous two studies while testing as early as possible. We found early spatiotopic IOR in both experiments, suggesting that visual events, including prior fixations, are typically coded into an abstract, allocentric representation of space either before or during eye movements. This type of coding enables IOR to encourage orienting toward novelty and, consequently, to perform the role of a foraging facilitator. |
Anne P. Hillstrom; Helen Scholey; Simon P. Liversedge; Valerie Benson The effect of the first glimpse at a scene on eye movements during search Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 204–210, 2012. @article{Hillstrom2012, Previewing scenes briefly makes finding target objects more efficient when viewing is through a gaze-contingent window (windowed viewing). In contrast, showing a preview of a randomly arranged search display does not benefit search efficiency when viewing during search is of the full display. Here, we tested whether a scene preview is beneficial when the scene is fully visible during search. Scene previews, when presented, were 250 ms in duration. During search, the scene was either fully visible or windowed. A preview always provided an advantage, in terms of decreasing the time to initially fixate and respond to targets and in terms of the total number of fixations. In windowed visibility, a preview reduced the distance of fixations from the target position until at least the fourth fixation. In full visibility, previewing reduced the distance of the second fixation but not of later fixations. The gist information derived from the initial glimpse of a scene allowed for placement of the first one or two fixations at information-rich locations, but when nonfoveal information was available, subsequent eye movements were only guided by online information. |
Annabelle Goujon; James R. Brockmole; Krista A. Ehinger How visual and semantic information influence learning in familiar contexts Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 5, pp. 1315–1327, 2012. @article{Goujon2012, Previous research using the contextual cuing paradigm has revealed both quantitative and qualitative differences in learning depending on whether repeated contexts are defined by letter arrays or real-world scenes. To clarify the relative contributions of visual features and semantic information likely to account for such differences, the typical contextual cuing procedure was adapted to use meaningless but nevertheless visually complex images. The data in reaction time and in eye movements show that, like scenes, such repeated contexts can trigger large, stable, and explicit cuing effects, and that those effects result from facilitated attentional guidance. Like simpler stimulus arrays, however, those effects were impaired by a sudden change of a repeating image's color scheme at the end of the learning phase (Experiment 1), or when the repeated images were presented in a different and unique color scheme across each presentation (Experiment 2). In both cases, search was driven by explicit memory. Collectively, these results suggest that semantic information is not required for conscious awareness of context-target covariation, but it plays a primary role in overcoming variability in specific features within familiar displays. |
Dan J. Graham; Robert W. Jeffery Predictors of nutrition label viewing during food purchase decision making: An eye tracking investigation Journal Article In: Public Health Nutrition, vol. 15, no. 2, pp. 189–197, 2012. @article{Graham2012, OBJECTIVE: Nutrition label use could help consumers eat healthfully. Despite consumers reporting label use, diets are not very healthful and obesity rates continue to rise. The present study investigated whether self-reported label use matches objectively measured label viewing by monitoring the gaze of individuals viewing labels. DESIGN: The present study monitored adults viewing sixty-four food items on a computer equipped with an eye-tracking camera as they made simulated food purchasing decisions. ANOVA and t tests were used to compare label viewing across various subgroups (e.g. normal weight v. overweight v. obese; married v. unmarried) and also across various types of foods (e.g. snacks v. fruits and vegetables). SETTING: Participants came to the University of Minnesota's Epidemiology Clinical Research Center in spring 2010. SUBJECTS: The 203 participants were ≥18 years old and capable of reading English words on a computer 76 cm (30 in) away. RESULTS: Participants looked longer at labels for 'meal' items like pizza, soup and yoghurt compared with fruits and vegetables, snack items like crackers and nuts, and dessert items like ice cream and cookies. Participants spent longer looking at labels for foods they decided to purchase compared with foods they decided not to purchase. There were few between-group differences in nutrition label viewing across sex, race, age, BMI, marital status, income or educational attainment. CONCLUSIONS: Nutrition label viewing is related to food purchasing, and labels are viewed more when a food's healthfulness is ambiguous. Objectively measuring nutrition label viewing provides new insight into label use by various sociodemographic groups. |
Jeroen J. M. Granzier; Matteo Toscani; Karl R. Gegenfurtner Role of eye movements in chromatic induction Journal Article In: Journal of the Optical Society of America A, vol. 29, no. 2, pp. A353–A365, 2012. @article{Granzier2012, There exist large interindividual differences in the amount of chromatic induction [Vis. Res. 49, 2261 (2009)]. One possible reason for these differences between subjects could be differences in subjects' eye movements. In experiment 1, subjects either had to look exclusively at the background or at the adjustable disk while they set the disk to a neutral gray as their eye position was being recorded. We found a significant difference in the amount of induction between the two viewing conditions. In a second experiment, subjects were freely looking at the display. We found no correlation between subjects' eye movements and the amount of induction. We conclude that eye movements only play a role under artificial (forced looking) viewing conditions and that eye movements do not seem to play a large role for chromatic induction under natural viewing conditions. |
Harold H. Greene; Deborah Simpson; Jennifer Bennion The perceptual span during foveally-demanding visual target localization Journal Article In: Acta Psychologica, vol. 139, no. 3, pp. 434–439, 2012. @article{Greene2012, Foveally-induced processing load deteriorates target localization performance in vision-guided tasks. Here, participants searched for a target embedded among coded distractors. High processing load was effected by instructing some participants to use the coded distractors to guide their search for the target. Other participants (in the low processing load condition) were not apprised of the code. The experiment examined whether increased processing load alters the span of effective processing (i.e. perceptual span) by (a) reducing its size, (b) altering its shape, or (c) reducing its size and altering its shape. The results demonstrated a reduction in the size of the perceptual span, with no significant change to its shape. It is argued that when distractors are processed beyond simply rejecting them as non-targets, the perceptual span shrinks with increasing processing load. The findings are discussed in contrast to a general interference theory that predicts a change in vision-guided performance without a shrinking of the perceptual span. |
Michelle R. Greene; Tommy Liu; Jeremy M. Wolfe Reconsidering Yarbus: A failure to predict observers' task from eye movement patterns Journal Article In: Vision Research, vol. 62, pp. 1–8, 2012. @article{Greene2012a, In 1967, Yarbus presented qualitative data from one observer showing that the patterns of eye movements were dramatically affected by an observer's task, suggesting that complex mental states could be inferred from scan paths. The strong claim of this very influential finding has never been rigorously tested. Our observers viewed photographs for 10 s each. They performed one of four image-based tasks while eye movements were recorded. A pattern classifier, given features from the static scan paths, could identify the image and the observer at above-chance levels. However, it could not predict a viewer's task. Shorter and longer (60 s) viewing epochs produced similar results. Critically, human judges also failed to identify the tasks performed by the observers based on the static scan paths. The Yarbus finding is evocative, and while it is possible an observer's mental state might be decoded from some aspect of eye movements, static scan paths alone do not appear to be adequate to infer complex mental states of an observer. |
Nicola J. Gregory; Timothy L. Hodgson Giving subjects the eye and showing them the finger: Socio-biological cues and saccade generation in the anti-saccade task Journal Article In: Perception, vol. 41, no. 2, pp. 131–147, 2012. @article{Gregory2012, Pointing with the eyes or the finger occurs frequently in social interaction to indicate direction of attention and one's intentions. Research with a voluntary saccade task (where saccade direction is instructed by the colour of a fixation point) suggested that gaze cues automatically activate the oculomotor system, but non-biological cues, like arrows, do not. However, other work has failed to support the claim that gaze cues are special. In the current research we introduced biological and non-biological cues into the anti-saccade task, using a range of stimulus onset asynchronies (SOAs). The anti-saccade task recruits both top-down and bottom-up attentional mechanisms, as occurs in naturalistic saccadic behaviour. In experiment 1 gaze, but not arrows, facilitated saccadic reaction times (SRTs) in the opposite direction to the cues over all SOAs, whereas in experiment 2 directional word cues had no effect on saccades. In experiment 3 finger pointing cues caused reduced SRTs in the opposite direction to the cues at short SOAs. These findings suggest that biological cues automatically recruit the oculomotor system whereas non-biological cues do not. Furthermore, the anti-saccade task set appears to facilitate saccadic responses in the opposite direction to the cues. |
Parampal Grewal; Jayalakshmi Viswanathan; Jason J. S. Barton; Linda J. Lanyon Line bisection under an attentional gradient induced by simulated neglect in healthy subjects Journal Article In: Neuropsychologia, vol. 50, no. 6, pp. 1190–1201, 2012. @article{Grewal2012, Whether an attentional gradient favouring the ipsilesional side is responsible for the line bisection errors in visual neglect is uncertain. We explored this by using a conjunction-search task on the right side of a computer screen to bias attention while healthy subjects performed line bisection. The first experiment used a probe detection task to confirm that the conjunction-search task created a rightward attentional gradient, as manifest in response times, detection rates, and fixation patterns. In the second experiment subjects performed line bisection with or without a simultaneous conjunction-search task. Fixation patterns in the latter condition were biased rightwards as in visual neglect, and bisection also showed a rightward bias, though modest. A third experiment using the probe detection task again showed that the attentional gradient induced by the conjunction-search task was reduced when subjects also performed line bisection, perhaps explaining the modest effects on bisection bias. Finally, an experiment with briefly viewed pre-bisected lines produced similar results, showing that the small size of the bisection bias was not due to an unlimited view allowing deployment of attentional resources to counteract the conjunction-search task's attentional gradient. These results show that an attentional gradient induced in healthy subjects can produce visual neglect-like visual scanning and a rightward shift of perceived line midpoint, but the modest size of this shift points to limitations of this physiological model in simulating the pathologic effects of visual neglect. |
Rashmi Gupta; Jane E. Raymond Emotional distraction unbalances visual processing Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 184–189, 2012. @article{Gupta2012, Brain mechanisms used to control nonemotional aspects of cognition may be distinct from those regulating responses to emotional stimuli, with activity of the latter being detrimental to the former. Previous studies have shown that suppression of irrelevant emotional stimuli produces a largely right-lateralized pattern of frontal brain activation, thus predicting that emotional stimuli may invoke temporary, lateralized costs to performance on nonemotional cognitive tasks. To test this, we briefly (85 ms) presented a central, irrelevant, expressive (angry, happy, sad, or fearful) or neutral face 100 ms prior to a letter search task. The presentation of emotional versus neutral faces slowed subsequent search for targets appearing in the left, but not the right, hemifield, supporting the notion of a right-lateralized, emotional response mechanism that competes for control with nonemotional cognitive processes. Presentation of neutral, scrambled, or inverted neutral faces produced no such laterality effects on visual search response times. |
Nathan Faivre; Vincent Berthet; Sid Kouider Nonconscious influences from emotional faces: A comparison of visual crowding, masking, and continuous flash suppression Journal Article In: Frontiers in Psychology, vol. 3, pp. 129, 2012. @article{Faivre2012, In the study of nonconscious processing, different methods have been used in order to render stimuli invisible. While their properties are well described, the level at which they disrupt nonconscious processing remains unclear. Yet, such accurate estimation of the depth of nonconscious processes is crucial for a clear differentiation between conscious and nonconscious cognition. Here, we compared the processing of facial expressions rendered invisible through gaze-contingent crowding (GCC), masking, and continuous flash suppression (CFS), three techniques relying on different properties of the visual system. We found that both pictures and videos of happy faces suppressed from awareness by GCC were processed such as to bias subsequent preference judgments. The same stimuli manipulated with visual masking and CFS did not significantly bias preference judgments, although they were processed such as to elicit perceptual priming. A significant difference in preference bias was found between GCC and CFS, but not between GCC and masking. These results provide new insights regarding the nonconscious impact of emotional features, and highlight the need for rigorous comparisons between the different methods employed to prevent perceptual awareness. |
Tom Foulsham; Richard Dewhurst; Marcus Nyström; Halszka Jarodzka; Roger Johansson; Geoffrey Underwood; Kenneth Holmqvist Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach Journal Article In: Journal of Eye Movement Research, vol. 5, no. 3, pp. 1–14, 2012. @article{Foulsham2012, Complex stimuli and tasks elicit particular eye movement sequences. Previous research has focused on comparing between these scanpaths, particularly in memory and imagery research where it has been proposed that observers reproduce their eye movements when recognizing or imagining a stimulus. However, it is not clear whether scanpath similarity is related to memory performance and which particular aspects of the eye movements recur. We therefore compared eye movements in a picture memory task, using a recently proposed comparison method, MultiMatch, which quantifies scanpath similarity across multiple dimensions including shape and fixation duration. Scanpaths were more similar when the same participant's eye movements were compared from two viewings of the same image than between different images or different participants viewing the same image. In addition, fixation durations were similar within a participant and this similarity was associated with memory performance. |
Lynn Huestegge; Iring Koch Eye movements as a gatekeeper for memorization: Evidence for the persistence of attentional sets in visual memory search Journal Article In: Psychological Research, vol. 76, no. 3, pp. 270–279, 2012. @article{Huestegge2012, Attention is known to serve multiple goals, including the selection of information for further perceptual analysis (selection for perception) and for goal-directed behavior (selection for action). Here, we study the role of overt attention (i.e., eye movements) as a gatekeeper for memorization processes (selection for memorization). Subjects memorized complex multidimensional stimulus displays and subsequently indicated whether a specific (probe) item was present. In Experiment 1 we utilized an incidental learning setting where in the beginning only a subset of display stimuli was relevant, whereas in a transfer block all stimuli were possible probe items. In Experiment 2, we used an explicit learning setting within a between-group design. Response times and gaze patterns indicated that subjects learned to ignore irrelevant stimuli while forming memory representations. The findings suggest that complex feature binding processes in peripheral vision may serve to guide overt selective attention, which eventually contributes to filtering out irrelevant information even in highly complex environments. Gaze patterns suggested that attentional control settings persisted even when they were no longer required. |
L. A. Issen; David C. Knill Decoupling eye and hand movement control: Visual short-term memory influences reach planning more than saccade planning Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–13, 2012. @article{Issen2012, When reaching for objects, humans make saccades to fixate the object at or near the time the hand begins to move. In order to address whether the CNS relies on a common representation of target positions to plan both saccades and hand movements, we quantified the contributions of visual short-term memory (VSTM) to hand and eye movements executed during the same coordinated actions. Subjects performed a sequential movement task in which they picked up one of two objects on the right side of a virtual display (the "weapon"), moved it to the left side of the display (to a "reloading station") and then moved it back to the right side to hit the other object (the target). On some trials, the target was perturbed by 1° of visual angle while subjects moved the weapon to the reloading station. Although subjects did not notice the change, the original position of the target, encoded in VSTM, influenced the motor plans for both the hand and the eye back to the target. Memory influenced motor plans for distant targets more than for near targets, indicating that sensorimotor planning is sensitive to the reliability of available information; however, memory had a larger influence on hand movements than on eye movements. This suggests that spatial planning for coordinated saccades and hand movements are dissociated at the level of processing at which online visual information is integrated with information in short-term memory. |
Andrew F. Jarosz; Jennifer Wiley Why does working memory capacity predict RAPM performance? A possible role of distraction Journal Article In: Intelligence, vol. 40, no. 5, pp. 427–438, 2012. @article{Jarosz2012, Current theories concerning individual differences in working memory capacity (WMC) suggest that WMC reflects the ability to control the focus of attention and resist interference and distraction. The current set of experiments tested whether susceptibility to distraction is partially responsible for the established relationship between performance on complex span tasks and the Raven's Advanced Progressive Matrices (RAPM). This hypothesis was examined by manipulating the level of distraction among the incorrect responses contained in RAPM problems, by varying whether the response bank included the most commonly selected incorrect response. When entered hierarchically into a regression predicting a composite score on span tasks, items with highly distracting incorrect answers significantly improved the predictive power of a model predicting an individual's WMC, compared to the model containing only items with less distracting incorrect responses. Additional analyses were performed examining the types of errors that were made. A second experiment used eye-tracking to demonstrate that these effects seem to be rooted in differences in susceptibility to distraction as well as strategy differences between high and low WMC individuals. Results are discussed in terms of current theories about the role of attentional control in performance on general fluid intelligence tasks. |
John L. Jones; Michael P. Kaschak Global statistical learning in a visual search task Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 152–160, 2012. @article{Jones2012, Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials, but with a target location bias (i.e., the target appeared on one half of the display twice as often as the other). Participants quickly learned to make more first saccades to the side more likely to contain the target. With item-by-item search, first saccades to the target were at chance. With a distributed search strategy, first saccades to a target located on the biased side increased above chance. The results confirm that visual search behavior is sensitive to simple global statistics in the absence of trial-to-trial target location repetitions. |
Solène Kalénine; Daniel Mirman; Laurel J. Buxbaum A combination of thematic and similarity-based semantic processes confers resistance to deficit following left hemisphere stroke Journal Article In: Frontiers in Human Neuroscience, vol. 6, pp. 106, 2012. @article{Kalenine2012, Semantic knowledge may be organized in terms of similarity relations based on shared features and/or complementary relations based on co-occurrence in events. Thus, relationships between manipulable objects such as tools may be defined by their functional properties (what the objects are used for) or thematic properties (e.g., what the objects are used with or on). A recent study from our laboratory used eye-tracking to examine incidental activation of semantic relations in a word-picture matching task and found relatively early activation of thematic relations (e.g., broom-dustpan), later activation of general functional relations (e.g., broom-sponge), and an intermediate pattern for specific functional relations (e.g., broom-vacuum cleaner). Combined with other recent studies, these results suggest that there are distinct semantic systems for thematic and similarity-based knowledge and that the "specific function" condition drew on both systems. This predicts that left hemisphere stroke that damages either system (but not both) may spare specific function processing. The present experiment tested these hypotheses using the same experimental paradigm with participants with left hemisphere lesions (N = 17). The results revealed that, compared to neurologically intact controls (N = 12), stroke participants showed later activation of thematic and general function relations, but activation of specific function relations was spared and was significantly earlier for stroke participants than controls. Across the stroke participants, activation of thematic and general function relations was negatively correlated, further suggesting that damage tended to affect either one semantic system or the other. These results support the distinction between similarity-based and complementarity-based semantic relations and suggest that relations that draw on both systems are relatively more robust to damage. |
Solène Kalénine; Daniel Mirman; Erica L. Middleton; Laurel J. Buxbaum Temporal dynamics of activation of thematic and functional knowledge during conceptual processing of manipulable artifacts Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 5, pp. 1274–1295, 2012. @article{Kalenine2012a, The current research aimed at specifying the activation time course of different types of semantic information during object conceptual processing and the effect of context on this time course. We distinguished between thematic and functional knowledge and the specificity of functional similarity. Two experiments were conducted with healthy older adults using eye tracking in a word-to-picture matching task. The time course of gaze fixations was used to assess activation of distractor objects during the identification of manipulable artifact targets (e.g., broom). Distractors were (a) thematically related (e.g., dustpan), (b) related by a specific function (e.g., vacuum cleaner), or (c) related by a general function (e.g., sponge). Growth curve analyses were used to assess competition effects when target words were presented in isolation (Experiment 1) and embedded in contextual sentences of different generality levels (Experiment 2). In the absence of context, there was earlier and shorter lasting activation of thematically related as compared to functionally related objects. The time course difference was more pronounced for general functions than specific functions. When contexts were provided, functional similarities that were congruent with context generality level increased in salience with earlier activation of those objects. Context had little impact on thematic activation time course. These data demonstrate that processing a single manipulable artifact concept implicitly activates thematic and functional knowledge with different time courses and that context speeds activation of context-congruent functional similarity. |
Juan E. Kamienkowski; Joaquin Navajas; Mariano Sigman Eye movements blink the attentional blink Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 3, pp. 555–560, 2012. @article{Kamienkowski2012a, When presented with a sequence of visual stimuli in rapid succession, participants often fail to detect a second salient target, a phenomenon referred to as the attentional blink (AB; Raymond, Shapiro, & Arnell, 1992; Shapiro, Raymond, & Arnell, 1997). On the basis of a vast corpus of experiments, several cognitive theories suggest that the blink results from a discrete structuring of attention, sampling information from temporal episodes during which several items can access encoding process (Wyble, Bowman, & Nieuwenstein, 2009; Wyble, Potter, Bowman, & Nieuwenstein, 2011). The objective of this work is to explore the AB when multiple items are presented at the fovea during ocular movements. The authors reasoned that each fixation may cohesively form an episode and hence expected that the blink may vanish within a single fixation. In turn, they expected saccades to accentuate episodic borders and hence shorten the regime of interference when 2 targets are presented foveally in successive fixations. Evidence is provided in favor of this hypothesis, showing that the blink vanishes when both targets are presented in the core of a single fixation (far from the saccadic boundaries) and that it recovers more rapidly in successive fixations. These studies support current views that episodes should have an effect on the AB and provide evidence that eye movements play an important role in the formation of episodes. |
Marc R. Kamke; Michelle G. Hall; Harley F. Lye; Martin V. Sale; Laura R. Fenlon; Timothy J. Carroll; Stephan Riek; Jason B. Mattingley Visual attentional load influences plasticity in the human motor cortex Journal Article In: Journal of Neuroscience, vol. 32, no. 20, pp. 7001–7008, 2012. @article{Kamke2012, Neural plasticity plays a critical role in learning, memory, and recovery from injury to the nervous system. Although much is known about the physical and physiological determinants of plasticity, little is known about the influence of cognitive factors. In this study, we investigated whether selective attention plays a role in modifying changes in neural excitability reflecting long-term potentiation (LTP)-like plasticity. We induced LTP-like effects in the hand area of the human motor cortex using transcranial magnetic stimulation (TMS). During the induction of plasticity, participants engaged in a visual detection task with either low or high attentional demands. Changes in neural excitability were assessed by measuring motor-evoked potentials in a small hand muscle before and after the TMS procedures. In separate experiments plasticity was induced either by paired associative stimulation (PAS) or intermittent theta-burst stimulation (iTBS). Because these procedures induce different forms of LTP-like effects, they allowed us to investigate the generality of any attentional influence on plasticity. In both experiments reliable changes in motor cortex excitability were evident under low-load conditions, but this effect was eliminated under high-attentional load. In a third experiment we investigated whether the attentional task was associated with ongoing changes in the excitability of motor cortex, but found no difference in evoked potentials across the levels of attentional load. Our findings indicate that in addition to their role in modifying sensory processing, mechanisms of attention can also be a potent modulator of cortical plasticity. |
Janis Y. Y. Kan; Ullanda Niel; Michael C. Dorris Evidence for a link between the experiential allocation of saccade preparation and visuospatial attention Journal Article In: Journal of Neurophysiology, vol. 107, no. 5, pp. 1413–1420, 2012. @article{Kan2012, Whether a link exists between the two orienting processes of saccade preparation and visuospatial attention has typically been studied by using either sensory cues or predetermined rules that instruct subjects where to allocate these limited resources. In the real world, explicit instructions are not always available and presumably expectations shaped by previous experience play an important role in the allocation of these processes. Here we examined whether manipulating two experiential factors that clearly influence saccade preparation–the probability and timing of saccadic responses–also influences the allocation of visuospatial attention. Occasionally, a visual probe was presented whose spatial location and time of presentation varied relative to those of the saccade target. The proportion of erroneous saccades directed toward this probe indexed saccade preparation, and the proportion of correct discriminations of probe orientation indexed visuospatial attention. Overall, preparation and attention were significantly correlated to each other across these manipulations of saccade probability and timing. Saccade probability influenced both preparation and attention processes, whereas saccade timing influenced only preparation processes. Unexpectedly, discrimination ability was not improved in those trials in which the probe triggered an erroneous saccade despite particularly heightened levels of saccade preparation. To account for our results, we propose a conceptual dual-purpose threshold model based on neurophysiological considerations that link the processes of saccade preparation and visuospatial attention. The threshold acts both as the minimum activity level required for eliciting saccades and a maximum level for which neural activity can provide attentional benefits. |
Ryota Kanai; Neil G. Muggleton; Vincent Walsh Transcranial direct current stimulation of the frontal eye fields during pro- and antisaccade tasks Journal Article In: Frontiers in Psychiatry, vol. 3, pp. 45, 2012. @article{Kanai2012, Transcranial direct current stimulation (tDCS) has been successfully applied to cortical areas such as the motor cortex and visual cortex. In the present study, we examined whether tDCS can reach and selectively modulate the excitability of the frontal eye field (FEF). In order to assess potential effects of tDCS, we measured saccade latency, landing point, and its variability in a simple prosaccade task and in an antisaccade task. In the prosaccade task, we found that anodal tDCS shortened the latency of saccades to a contralateral visual cue. However, cathodal tDCS did not show a significant modulation of saccade latency. In the antisaccade task, on the other hand, we found that the latency for ipsilateral antisaccades was prolonged during the stimulation, whereas anodal stimulation did not modulate the latency of antisaccades. In addition, anodal tDCS reduced the erroneous saccades toward the contralateral visual cue. These results in the antisaccade task suggest that tDCS modulates the function of FEF to suppress reflexive saccades to the contralateral visual cue. Both in the prosaccade and antisaccade tasks, we did not find any effect of tDCS on saccade landing point or its variability. Our present study is the first to show effects of tDCS over FEF and opens the possibility of applying tDCS for studying the functions of FEF in oculomotor and attentional performance. |
Alex O. Holcombe; Wei-Ying Chen Exhausting attentional tracking resources with a single fast-moving object Journal Article In: Cognition, vol. 123, no. 2, pp. 218–228, 2012. @article{Holcombe2012, Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional pointers available to follow objects. Spatial interference theory proposes that when targets are near each other, their attentional spotlights mutually interfere. Resource theory asserts that a limited resource is divided among targets, and performance reflects the amount available per target. Utilising widely separated objects to avoid spatial interference, the present experiments validated the predictions of resource theory. The fastest target speed at which two targets could be tracked was much slower than the fastest speed at which one target could be tracked. This speed limit for tracking two targets was approximately that predicted if at high speeds, only a single target could be tracked. This result cannot be accommodated by the fixed-limit or interference theories. Evidently a fast target, if it moves fast enough, can exhaust attentional resources. |
Andrew Hollingworth Task specificity and the influence of memory on visual search: Comment on Võ and Wolfe (2012) Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 6, pp. 1596–1603, 2012. @article{Hollingworth2012, Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search. |
Linus Holm; Stephen A. Engel; Paul Schrater Object learning improves feature extraction but does not improve feature selection Journal Article In: PLoS ONE, vol. 7, no. 12, pp. e51325, 2012. @article{Holm2012, A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: 1) select more informative image locations upon which to fixate their eyes, or 2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: The number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice. |
Tien Ho-Phuoc; N. Guyader; F. Landragin; Anne Guerin-Dugué When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements? Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–13, 2012. @article{HoPhuoc2012, Since Treisman's theory, it has been generally accepted that color is an elementary feature that guides eye movements when looking at natural scenes. Hence, most computational models of visual attention predict eye movements using color as an important visual feature. In this paper, using experimental data, we show that color does not affect where observers look when viewing natural scene images. Neither colors nor abnormal colors modify observers' fixation locations when compared to the same scenes in grayscale. In the same way, we did not find any significant difference between the scanpaths under grayscale, color, or abnormal color viewing conditions. However, we observed a decrease in fixation duration for color and abnormal color, and this was particularly true at the beginning of scene exploration. Finally, we found that abnormal color modifies saccade amplitude distribution. |
Youyang Hou; Taosheng Liu Neural correlates of object-based attentional selection in human cortex Journal Article In: Neuropsychologia, vol. 50, no. 12, pp. 2916–2925, 2012. @article{Hou2012, Humans can attend to different objects independent of their spatial locations. While selecting an object has been shown to modulate object processing in high-level visual areas in occipitotemporal cortex, where/how behavioral importance (i.e., priority) for objects is represented is unknown. Here we examined the patterns of distributed neural activity during an object-based selection task. We measured brain activity with functional magnetic resonance imaging (fMRI), while participants viewed two superimposed, dynamic objects (left- and right-pointing triangles) and were cued to attend to one of the triangle objects. Enhanced fMRI response was observed for the attention conditions compared to a neutral condition, but no significant difference was found in overall response amplitude between two attention conditions. By using multi-voxel pattern classification (MVPC), however, we were able to distinguish the neural patterns associated with attention to different objects in early visual cortex (V1 to hMT+) and lateral occipital complex (LOC). Furthermore, distinct multi-voxel patterns were also observed in frontal and parietal areas. Our results demonstrate that object-based attention has a wide-spread modulation effect along the visual hierarchy and suggest that object-specific priority information is represented by patterned neural activity in the dorsal frontoparietal network. |
Michael C. Hout; Stephen D. Goldinger Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 90–112, 2012. @article{Hout2012, When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. |
I. S. Howard; James N. Ingram; David W. Franklin; Daniel M. Wolpert Gone in 0.6 seconds: The encoding of motor memories depends on recent sensorimotor states Journal Article In: Journal of Neuroscience, vol. 32, no. 37, pp. 12756–12768, 2012. @article{Howard2012, Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movement increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states. |
Janet H. Hsiao; Tina T. Liu The optimal viewing position in face recognition Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–9, 2012. @article{Hsiao2012a, In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision. |
Jhih-Yun Hsiao; Yi-Chuan Chen; Charles Spence; Su-Ling Yeh Assessing the effects of audiovisual semantic congruency on the perception of a bistable figure Journal Article In: Consciousness and Cognition, vol. 21, no. 2, pp. 775–787, 2012. @article{Hsiao2012, Bistable figures provide a fascinating window through which to explore human visual awareness. Here we demonstrate for the first time that the semantic context provided by a background auditory soundtrack (the voice of a young or old female) can modulate an observer's predominant percept while watching the bistable "my wife or my mother-in-law" figure (Experiment 1). The possibility of a response-bias account (that participants simply reported the percept that happened to be congruent with the soundtrack that they were listening to) was excluded in Experiment 2. We further demonstrate that this crossmodal semantic effect was additive with the manipulation of participants' visual fixation (Experiment 3), while it interacted with participants' voluntary attention (Experiment 4). These results indicate that audiovisual semantic congruency constrains the visual processing that gives rise to the conscious perception of bistable visual figures. Crossmodal semantic context therefore provides an important mechanism contributing to the emergence of visual awareness. |
Yu-Feng Huang; Feng-Yang Kuo How impulsivity affects consumer decision-making in e-commerce Journal Article In: Electronic Commerce Research and Applications, vol. 11, no. 6, pp. 582–590, 2012. @article{Huang2012, This research investigates whether a person's mood can influence impulsivity in online shopping decisions, and how involvement can regulate it. We adopt a process view of impulsivity, and recorded the detailed information search patterns of consumers using an eye-tracker methodology. The results show that incidental moods tend to increase process impulsivity, and this effect may not be restrained by involvement. We also demonstrate that the decision-making process can be separated into two stages - orientation and evaluation. We further find that differences in impulsivity are most evident in the evaluation stage. These results suggest the importance of mood-elicited impulsivity of purchases in e-commerce. |
Christophe C. Le Dantec; Elizabeth E. Melton; Aaron R. Seitz A triple dissociation between learning of target, distractors, and spatial contexts Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–12, 2012. @article{LeDantec2012a, When we perform any task, we engage a diverse set of processes. These processes can be optimized with learning. While there exists substantial research that probes specific aspects of learning, there is a scarcity of research regarding interactions between different types of learning. Here, we investigate possible interactions between Perceptual Learning (PL) and Contextual Learning (CL), two types of implicit learning that have garnered much attention in the psychological sciences and that often co-occur in natural settings. PL increases sensitivity to features of task targets and distractors and is thought to involve improvements in low-level perceptual processing. CL regards learning of regularities in the environment (such as spatial relations between objects) and is consistent with improvements in higher level perceptual processes. Surprisingly, we found CL, PL for target features, and PL for distractor features to be independent. This triple dissociation demonstrates how different learning processes may operate in parallel as tasks are mastered. |
Christophe C. Le Dantec; Aaron R. Seitz High resolution, high capacity, spatial specificity in perceptual learning Journal Article In: Frontiers in Psychology, vol. 3, pp. 222, 2012. @article{LeDantec2012, Research of perceptual learning has received significant interest due to findings that training on perceptual tasks can yield learning effects that are specific to the stimulus features of that task. However, recent studies have demonstrated that while training a single stimulus at a single location can yield a high-degree of stimulus specificity, training multiple features, or at multiple locations can reveal a broad transfer of learning to untrained features or stimulus locations. We devised a high resolution, high capacity, perceptual learning procedure with the goal of testing whether spatial specificity can be found in cases where observers are highly trained to discriminate stimuli in many different locations in the visual field. We found a surprising degree of location specific learning, where performance was significantly better when target stimuli were presented at 1 of the 24 trained locations compared to when they were placed in 1 of the 12 untrained locations. This result is particularly impressive given that untrained locations were within a couple degrees of visual angle of those that were trained. Given the large number of trained locations, the fact that the trained and untrained locations were interspersed, and the high-degree of spatial precision of the learning, we suggest that these results are difficult to account for using attention or decision strategies and instead suggest that learning may have taken place for each location separately in retinotopically organized visual cortex. |
R. J. Lee; H. E. Smithson Context-dependent judgments of color that might allow color constancy in scenes with multiple regions of illumination Journal Article In: Journal of the Optical Society of America A, vol. 29, no. 2, pp. A247–A257, 2012. @article{Lee2012, For a color-constant observer, a change in the spectral composition of the illumination is accompanied by a corresponding change in the chromaticity associated with an achromatic percept. However, maintaining color constancy for different regions of illumination within a scene implies the maintenance of multiple perceptual references. We investigated the features of a scene that enable the maintenance of separate perceptual references for two displaced but overlapping chromaticity distributions. The time-averaged, retinotopically localized stimulus was the primary determinant of color appearance judgments. However, spatial separation of test samples additionally served as a symbolic cue that allowed observers to maintain two separate perceptual references. |
Lisa Kloft; Benedikt Reuter; Jayalakshmi Viswanathan; Norbert Kathmann; Jason J. S. Barton Response selection in prosaccades, antisaccades, and other volitional saccades Journal Article In: Experimental Brain Research, vol. 222, pp. 345–353, 2012. @article{Kloft2012, Saccades made to the opposite side of a visual stimulus (antisaccades) and to central cues (simple volitional saccades) both require active response selection but whether the mechanisms of response selection differ between these tasks is unclear. Response selection can be assessed by increasing the number of response alternatives: this leads to increased reaction times when response selection is more demanding. We compared the reaction times of prosaccades, antisaccades, saccades cued by a central arrow, and saccades cued by a central number, in blocks of either two or six possible responses. In the two-response blocks, reaction times were fastest for prosaccades and antisaccades, and slowest for arrow-cued and number-cued saccades. Increasing response alternatives from two to six caused a paradoxical reduction in reaction times of prosaccades, had no effect on arrow-cued saccades, and led to a large increase in reaction times of number-cued saccades. For antisaccade reaction times, the effect of increasing response alternatives was intermediate, greater than that for arrow-cued saccades but less than that for number-cued saccades. We suggest that this pattern of results may reflect two components of saccadic processing: (a) response triggering, which is more rapid with a peripheral stimulus as in the prosaccade and antisaccade tasks and (b) response selection, which is more demanding for the antisaccade and number-cued saccade tasks, and more automatic when there is direct stimulus-response mapping as with prosaccades, or over-learned symbols as with arrow-cued saccades. |
Ellen M. Kok; Anique B. H. Bruin; Simon G. F. Robben; Jeroen J. G. Merriënboer Looking in the same manner but seeing it differently: Bottom-up and expertise effects in radiology Journal Article In: Applied Cognitive Psychology, vol. 26, no. 6, pp. 854–862, 2012. @article{Kok2012, Models of expertise differences in radiology often do not take into account visual differences between diseases. This study investigates the bottom-up effects of three types of images on viewing patterns of students, residents and radiologists: focal diseases (localized abnormality), diffuse diseases (distributed abnormality) and images showing no abnormalities (normal). Participants inspected conventional chest radiographs while their eye movements were recorded. Regardless of expertise, in focal diseases, participants fixated relatively long at specific locations, whereas in diffuse diseases, fixations were more dispersed and shorter. Moreover, for students, dispersion of fixations was higher on diffuse compared with normal images, whereas for residents and radiologists, dispersion was highest on normal images. Despite this difference, students showed relatively high performance on normal images but low performance on focal and diffuse images. Viewing patterns were strongly influenced by bottom-up stimulus effects. Although viewing behavior of students was similar to that of radiologists, they lack knowledge that helps them diagnose the disease correctly. |
Miyoung Kwon; Chaithanya Ramachandra; PremNandhini Satgunam; Bartlett W. Mel; Eli Peli; Bosco S. Tjan Contour enhancement benefits older adults with simulated central field loss Journal Article In: Optometry and Vision Science, vol. 89, no. 9, pp. 1374–1384, 2012. @article{Kwon2012, PURPOSE: Age-related macular degeneration is the leading cause of vision loss among Americans aged >65 years. Currently, no effective treatment can reverse the central vision loss associated with most age-related macular degeneration. Digital image-processing techniques have been developed to improve image visibility for peripheral vision; however, both the selection and efficacy of such methods are limited. Progress has been difficult for two reasons: the exact nature of image enhancement that might benefit peripheral vision is not well understood, and efficient methods for testing such techniques have been elusive. The current study aims to develop both an effective image enhancement technique for peripheral vision and an efficient means for validating the technique. METHODS: We used a novel contour-detection algorithm to locate shape-defining edges in images based on natural-image statistics. We then enhanced the scene by locally boosting the luminance contrast along such contours. Using a gaze-contingent display, we simulated central visual field loss in normally sighted young (aged 18-30 years) and older adults (aged 58-88 years). Visual search performance was measured as a function of contour enhancement strength ["original" (unenhanced), "medium," and "high"]. For the preference task, a separate group of subjects judged which image in a pair "would lead to better search performance." RESULTS: We found that although contour enhancement had no significant effect on search time and accuracy in young adults, Medium enhancement resulted in significantly shorter search time in older adults (about 13% reduction relative to original). Both age-groups preferred images with Medium enhancement over original (2-7 times). Furthermore, across age-groups, image content types, and enhancement strengths, there was a robust correlation between preference and performance. CONCLUSIONS: Our findings demonstrate a beneficial role of contour enhancement in peripheral vision for older adults. Our findings further suggest that task-specific preference judgments can be an efficient surrogate for performance testing. |
Kaitlin E. W. Laidlaw; Evan F. Risko; Alan Kingstone A new look at social attention: Orienting to the eyes is not (entirely) under volitional control Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 5, pp. 1132–1143, 2012. @article{Laidlaw2012, People tend to look at other people's eyes, but whether this bias is automatic or volitional is unclear. To discriminate between these two possibilities, we used a "don't look" (DL) paradigm. Participants looked at a series of upright or inverted faces, and were asked either to freely view the faces or to avoid looking at the eyes, or as a control, the mouth. As previously demonstrated, participants showed a bias to attend to both eyes and mouths during free viewing. In the DL condition, participants told to avoid the eyes of upright faces were unable to fully suppress the tendency to fixate on the faces' eyes, whereas participants told to avoid the mouth of upright faces successfully eliminated their bias to overtly attend to that feature. When faces were inverted, participants were equally able to suppress looks to the eyes and mouth. Together, these results suggest that the tendency to look at the eyes reflects orienting that is both volitional and automatic, and that the engagement of holistic or configural face processing mechanisms during upright face viewing has an influence in guiding gaze automatically to the eyes. |
Elke B. Lange; Christian Starzynski; Ralf Engbert Capture of the gaze does not capture the mind Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 6, pp. 1168–1182, 2012. @article{Lange2012, Sudden visual changes attract our gaze, and related eye movement control requires attentional resources. Attention is a limited resource that is also involved in working memory–for instance, memory encoding. As a consequence, theory suggests that gaze capture could impair the buildup of memory representations due to an attentional resource bottleneck. Here we developed an experimental design combining a serial memory task (verbal or spatial) and concurrent gaze capture by a distractor (of high or low similarity to the relevant item). The results cannot be explained by a general resource bottleneck. Specifically, we observed that capture by the low-similar distractor resulted in delayed and reduced saccade rates to relevant items in both memory tasks. However, while spatial memory performance decreased, verbal memory remained unaffected. In contrast, the high-similar distractor led to capture and memory loss for both tasks. Our results lend support to the view that gaze capture leads to activation of irrelevant representations in working memory that compete for selection at recall. Activation of irrelevant spatial representations distracts spatial recall, whereas activation of irrelevant verbal features impairs verbal memory performance. |
Kohitij Kar; Bart Krekelberg Transcranial electrical stimulation over visual cortex evokes phosphenes with a retinal origin Journal Article In: Journal of Neurophysiology, vol. 108, no. 8, pp. 2173–2178, 2012. @article{Kar2012, Transcranial electrical stimulation (tES) is a promising therapeutic tool for a range of neurological diseases. Understanding how the small currents used in tES spread across the scalp and penetrate the brain will be important for the rational design of tES therapies. Alternating currents applied transcranially above visual cortex induce the perception of flashes of light (phosphenes). This makes the visual system a useful model to study tES. One hypothesis is that tES generates phosphenes by direct stimulation of the cortex underneath the transcranial electrode. Here, we provide evidence for the alternative hypothesis that phosphenes are generated in the retina by current spread from the occipital electrode. Building on the existing literature, we first confirm that phosphenes are induced at lower currents when electrodes are placed farther away from visual cortex and closer to the eye. Second, we explain the temporal frequency tuning of phosphenes based on the well-known response properties of primate retinal ganglion cells. Third, we show that there is no difference in the time it takes to evoke phosphenes in the retina or by stimulation above visual cortex. Together, these findings suggest that phosphenes induced by tES over visual cortex originate in the retina. From this, we infer that tES currents spread well beyond the area of stimulation and are unlikely to lead to focal neural activation. Novel stimulation protocols that optimize current distributions are needed to overcome these limitations of tES. |
Betty E. Kim; Darryl Seligman; Joseph W. Kable Preference reversals in decision making under risk are accompanied by changes in attention to different attributes Journal Article In: Frontiers in Neuroscience, vol. 6, pp. 109, 2012. @article{Kim2012, Recent work has shown that visual fixations reflect and influence trial-to-trial variability in people's preferences between goods. Here we extend this principle to attribute weights during decision making under risk. We measured eye movements while people chose between two risky gambles or bid on a single gamble. Consistent with previous work, we found that people exhibited systematic preference reversals between choices and bids. For two gambles matched in expected value, people systematically chose the higher probability option but provided a higher bid for the option that offered the greater amount to win. This effect was accompanied by a shift in fixations of the two attributes, with people fixating on probabilities more during choices and on amounts more during bids. Our results suggest that the construction of value during decision making under risk depends on task context partly because the task differentially directs attention at probabilities vs. amounts. Since recent work demonstrates that neural correlates of value vary with visual fixations, our results also suggest testable hypotheses regarding how task context modulates the neural computation of value to generate preference reversals. |
2011 |
Frederic Benmussa; Charles Aissani; A. -L. Paradis; Jean Lorenceau Coupled dynamics of bistable distant motion displays Journal Article In: Journal of Vision, vol. 11, no. 8, pp. 14–14, 2011. @article{Benmussa2011, This study explores the extent to which a display changing periodically in perceptual interpretation through smooth periodic physical changes (an inducer) is able to elicit perceptual switches in an intrinsically bistable distant probe display. Four experiments are designed to examine the coupling strength and bistable dynamics with displays of varying degrees of ambiguity, similarity, and symmetry in motion characteristics, as a function of their locations in visual space. The results show that periodic fluctuations of a remote inducer influence a bistable probe and regulate its dynamics through coupling. Coupling strength mainly depends on the relative locations of the probe display and the contextual inducer in the visual field, with stronger coupling when both displays are symmetrical around the vertical meridian and weaker coupling otherwise. Smaller effects of common fate and symmetry are also found. Altogether, the results suggest that long-range interhemispheric connections, presumably involving the corpus callosum, are able to synchronize perceptual transitions across the vertical meridian. If true, bistable dynamics may provide a behavioral method to probe interhemispheric connectivity in behaving humans. Consequences of these findings for studies using stimuli symmetrical around the vertical meridian are evaluated. |
Sarah J. Bayless; Missy Glover; Margot J. Taylor; Roxane J. Itier Is it in the eyes? Dissociating the role of emotion and perceptual features of emotionally expressive faces in modulating orienting to eye gaze Journal Article In: Visual Cognition, vol. 19, no. 4, pp. 483–510, 2011. @article{Bayless2011, This study investigated the role of the eye region of emotional facial expressions in modulating gaze orienting effects. Eye widening is characteristic of fearful and surprised expressions and may significantly increase the salience of perceived gaze direction. This perceptual bias rather than the emotional valence of certain expressions may drive enhanced gaze orienting effects. In a series of three experiments involving low anxiety participants, different emotional expressions were tested using a gaze-cueing paradigm. Fearful and surprised expressions enhanced the gaze orienting effect compared with happy or angry expressions. Presenting only the eye regions as cueing stimuli eliminated this effect whereas inversion globally reduced it. Both inversion and the use of eyes only attenuated the emotional valence of stimuli without affecting the perceptual salience of the eyes. The findings thus suggest that low-level stimulus features alone are not sufficient to drive gaze orienting modulations by emotion. Rather, they interact with the emotional valence of the expression that appears critical. The study supports the view that rapid processing of fearful and surprised emotional expressions can potentiate orienting to another person's averted gaze in non-anxious people. |
Paul M. Bays; Emma Y. Wu; Masud Husain Storage and binding of object features in visual working memory Journal Article In: Neuropsychologia, vol. 49, pp. 1622–1631, 2011. @article{Bays2011, An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory. A key prediction of such a model is that the absolute error in recalling multiple features of the same object will be correlated, because features belonging to an attended object are all stored, bound together. Here, we tested participants' ability to reproduce from memory both the color and orientation of an object indicated by a location cue. We observed strong independence of errors between feature dimensions even for large memory arrays (6 items), inconsistent with an upper limit on the number of objects held in memory. Examining the pattern of responses in each dimension revealed a Gaussian distribution of error centered on the target value that increased in width under higher memory loads. For large arrays, a subset of responses were not centered on the target but instead predominantly corresponded to mistakenly reproducing one of the other features held in memory. These misreporting responses again occurred independently in each feature dimension, consistent with ‘misbinding' due to errors in maintaining the binding information that assigns features to objects. The results support a shared-resource model of working memory, in which increasing memory load incrementally degrades storage of visual information, reducing the fidelity with which both object features and feature bindings are maintained. |
Genna M. Bebko; Steven L. Franconeri; Kevin N. Ochsner; Joan Y. Chiao Look before you regulate: Differential perceptual strategies underlying expressive suppression and cognitive reappraisal Journal Article In: Emotion, vol. 11, no. 4, pp. 732–742, 2011. @article{Bebko2011, Successful emotion regulation is important for maintaining psychological well-being. Although it is known that emotion regulation strategies, such as cognitive reappraisal and expressive suppression, may have divergent consequences for emotional responses, the cognitive processes underlying these differences remain unclear. Here we used eye-tracking to investigate the role of attentional deployment in emotion regulation success. We hypothesized that differences in the deployment of attention to emotional areas of complex visual scenes may be a contributing factor to the differential effects of these two strategies on emotional experience. Eye-movements, pupil size, and self-reported negative emotional experience were measured while healthy young adult participants viewed negative IAPS images and regulated their emotional responses using either cognitive reappraisal or expressive suppression. Consistent with prior work, reappraisers reported feeling significantly less negative than suppressers when regulating emotion as compared to a baseline condition. Across both groups, participants looked away from emotional areas during emotion regulation, an effect that was more pronounced for suppressers. Critically, irrespective of emotion regulation strategy, participants who looked toward emotional areas of a complex visual scene were more likely to experience emotion regulation success. Taken together, these results demonstrate that attentional deployment varies across emotion regulation strategies and that successful emotion regulation depends on the extent to which people look toward emotional content in complex visual scenes. |
Stefanie I. Becker Determinants of dwell time in visual search: Similarity or perceptual difficulty? Journal Article In: PLoS ONE, vol. 6, no. 3, pp. e17740, 2011. @article{Becker2011, The present study examined the factors that determine the dwell times in a visual search task, that is, the duration the gaze remains fixated on an object. It has been suggested that an item's similarity to the search target should be an important determiner of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study decoupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than target-dissimilar stimuli across all target conditions and regardless of the line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As will be discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control. |
Stefanie I. Becker; Gernot Horstmann; Roger W. Remington Perceptual grouping, not emotion, accounts for search asymmetries with schematic faces Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 6, pp. 1739–1757, 2011. @article{Becker2011a, Several different explanations have been proposed to account for the search asymmetry (SA) for angry schematic faces (i.e., the fact that an angry face target among friendly faces can be found faster than vice versa). The present study critically tested the perceptual grouping account, (a) that the SA is not due to emotional factors, but to perceptual differences that render angry faces more salient than friendly faces, and (b) that the SA is mainly attributable to differences in distractor grouping, with angry faces being more difficult to group than friendly faces. In visual search for angry and friendly faces, the number of distractors visible during each fixation was systematically manipulated using the gaze-contingent window technique. The results showed that the SA emerged only when multiple distractors were visible during a fixation, supporting the grouping account. To distinguish between emotional and perceptual factors in the SA, we altered the perceptual properties of the faces (dented-chin face) so that the friendly face became more salient. In line with the perceptual account, the SA was reversed for these faces, showing faster search for a friendly face target. These results indicate that the SA reflects feature-level perceptual grouping, not emotional valence. |
Artem V. Belopolsky; Christel Devue; Jan Theeuwes Angry faces hold the eyes Journal Article In: Visual Cognition, vol. 19, no. 1, pp. 27–36, 2011. @article{Belopolsky2011a, Efficient processing of complex social and biological stimuli associated with threat is crucial for survival. Previous studies have suggested that threatening stimuli such as angry faces not only capture visual attention, but also delay the disengagement of attention from their location. However, in the previous studies disengagement of attention was measured indirectly and was inferred on the basis of delayed manual responses. The present study employed a novel paradigm that allows direct examination of the delayed disengagement hypothesis by measuring the time it takes to disengage the eyes from threatening stimuli. The results showed that participants were indeed slower to make an eye movement away from an angry face presented at fixation than from either a neutral or a happy face. This finding provides converging support that the delay in disengagement of attention is an important component of processing threatening information. |
Carlos Aguilar; Eric Castet Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies Journal Article In: Vision Research, vol. 51, no. 9, pp. 997–1012, 2011. @article{Aguilar2011, Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays. |
Robert G. Alexander; Gregory J. Zelinsky Visual similarity effects in categorical search Journal Article In: Journal of Vision, vol. 11, no. 8, pp. 1–15, 2011. @article{Alexander2011, We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. |
Jens K. Apel; Gavin F. Revie; Angelo Cangelosi; Rob Ellis; Jeremy Goslin; Martin H. Fischer Attention deployment during memorizing and executing complex instructions Journal Article In: Experimental Brain Research, vol. 214, no. 2, pp. 249–259, 2011. @article{Apel2011, We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects already while listening to further instructions. This rehearsal behavior broke down after 4 instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Harm Kuipers; Marianne Severens; Pascal W. M. Van Gerven Response preparation with adjacent versus overlapped hands: A pupillometric study Journal Article In: International Journal of Psychophysiology, vol. 79, no. 2, pp. 280–286, 2011. @article{Moresi2011, Preparatory cues facilitate performance in speeded choice tasks. It is debated, however, whether the lateralized neuro-anatomical organization of the human motor system contributes to this facilitation. To investigate this issue, we examined response preparation in a finger-cuing task using two conditions. In the hands adjacent condition, the hands were placed adjacently to each other with index and middle fingers placed on four linearly arrayed response keys. In the overlapped hand placement condition, the fingers of different hands alternated, thus dissociating hand and spatial position factors. Preparatory cues specified a subset of two fingers. Left-right cues specified the two leftmost or two rightmost fingers. Inner-outer cues specified the two inner or outer fingers. Alternate cues specified the first and third, or the second and fourth finger in the response set. In addition to reaction time and response errors, we measured the pupillary response to assess the cognitive processing load associated with response preparation. Results showed stronger pupil dilations (and also longer RTs and more errors) for the overlapped than for the adjacent hand placement condition, reflecting an overall increase in cognitive processing load. Furthermore, the negative impact of overlapping the hands on pupil dilation interacted with cue type, indicating that left-right cues (associated with two fingers on one hand) suffered most from overlapping the hands. With the hands overlapped, alternate cues (now associated with two fingers on the same hand) produced the shortest RTs. These findings demonstrate the importance of motoric factors in response preparation. |
Isamu Motoyoshi Attentional modulation of temporal contrast sensitivity in human vision Journal Article In: PLoS ONE, vol. 6, no. 4, pp. e19303, 2011. @article{Motoyoshi2011, Recent psychophysical studies have shown that attention can alter contrast sensitivities for temporally broadband stimuli such as flashed gratings. The present study examined the effect of attention on the contrast sensitivity for temporally narrowband stimuli with various temporal frequencies. Observers were asked to detect a drifting grating of 0-40 Hz presented gradually in the peripheral visual field with or without a concurrent letter identification task in the fovea. We found that removal of attention by the concurrent task reduced the contrast sensitivity for gratings with low temporal frequencies much more profoundly than for gratings with high temporal frequencies and for flashed gratings. The analysis revealed that the temporal contrast sensitivity function had a more band-pass shape with poor attention. Additional experiments showed that this was also true when the target was presented in various levels of luminance noise. These results suggest that regardless of the presence of external noise, attention extensively modulates visual sensitivity for sustained retinal inputs. |
Christina Moutsiana; David T. Field; John P. Harris The neural basis of centre-surround interactions in visual motion processing Journal Article In: PLoS ONE, vol. 6, no. 7, pp. e22902, 2011. @article{Moutsiana2011, Perception of a moving visual stimulus can be suppressed or enhanced by surrounding context in adjacent parts of the visual field. We studied the neural processes underlying such contextual modulation with fMRI. We selected motion selective regions of interest (ROI) in the occipital and parietal lobes with sufficiently well defined topography to preclude direct activation by the surround. BOLD signal in the ROIs was suppressed when surround motion direction matched central stimulus direction, and increased when it was opposite. With the exception of hMT+/V5, inserting a gap between the stimulus and the surround abolished surround modulation. This dissociation between hMT+/V5 and other motion selective regions prompted us to ask whether motion perception is closely linked to processing in hMT+/V5, or reflects the net activity across all motion selective cortex. The motion aftereffect (MAE) provided a measure of motion perception, and the same stimulus configurations that were used in the fMRI experiments served as adapters. Using a linear model, we found that the MAE was predicted more accurately by the BOLD signal in hMT+/V5 than it was by the BOLD signal in other motion selective regions. However, a substantial improvement in prediction accuracy could be achieved by using the net activity across all motion selective cortex as a predictor, suggesting the overall conclusion that visual motion perception depends upon the integration of activity across different areas of visual cortex. |
Neil G. Muggleton; Roger Kalla; Chi-Hung Juan; Vincent Walsh Dissociating the contributions of human frontal eye fields and posterior parietal cortex to visual search Journal Article In: Journal of Neurophysiology, vol. 105, no. 6, pp. 2891–2896, 2011. @article{Muggleton2011, Imaging, lesion, and transcranial magnetic stimulation (TMS) studies have implicated a number of regions of the brain in searching for a target defined by a combination of attributes. The necessity of both frontal eye fields (FEF) and posterior parietal cortex (PPC) in task performance has been shown by the application of TMS over these regions. The effects of stimulation over these two areas have, thus far, proved to be remarkably similar and the only dissociation reported being in the timing of their involvement. We tested the hypotheses that 1) FEF contributes to performance in terms of visual target detection (possibly by modulation of activity in extrastriate areas with respect to the target), and 2) PPC is involved in translation of visual information for action. We used a task where the presence (and location) of the target was indicated by an eye movement. Task disruption was seen with FEF TMS (with reduced accuracy on the task) but not with PPC stimulation. When a search task requiring a manual response was presented, disruption with PPC TMS was seen. These results show dissociation of FEF and PPC contributions to visual search performance and that PPC involvement seems to be dependent on the response required by the task, whereas this is not the case for FEF. This supports the idea of FEF involvement in visual processes in a manner that might not depend on the required response, whereas PPC seems to be involved when a manual motor response to a stimulus is required. |
Marnix Naber; Stefan Frässle; Wolfgang Einhäuser Perceptual rivalry: Reflexes reveal the gradual nature of visual awareness Journal Article In: PLoS ONE, vol. 6, no. 6, pp. e20910, 2011. @article{Naber2011, Rivalry is a common tool to probe visual awareness: a constant physical stimulus evokes multiple, distinct perceptual interpretations ("percepts") that alternate over time. Percepts are typically described as mutually exclusive, suggesting that a discrete (all-or-none) process underlies changes in visual awareness. Here we follow two strategies to address whether rivalry is an all-or-none process: first, we introduce two reflexes as objective measures of rivalry, pupil dilation and optokinetic nystagmus (OKN); second, we use a continuous input device (analog joystick) to allow observers a gradual subjective report. We find that the "reflexes" reflect the percept rather than the physical stimulus. Both reflexes show a gradual dependence on the time relative to perceptual transitions. Similarly, observers' joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. Physically simulating wave-like transitions between percepts suggest piece-meal rivalry (i.e., different regions of space belonging to distinct percepts) as one possible explanation for the gradual transitions. Furthermore, the reflexes show that dominance durations depend on whether or not the percept is actively reported. In addition, reflexes respond to transitions with shorter latencies than the subjective report and show an abundance of short dominance durations. This failure to report fast changes in dominance may result from limited access of introspection to rivalry dynamics. In sum, reflexes reveal that rivalry is a gradual process, rivalry's dynamics is modulated by the required action (response mode), and that rapid transitions in perceptual dominance can slip away from awareness. |
Olufunmilola Ogun; Jayalakshmi Viswanathan; Jason J. S. Barton The effect of central (macular) sparing on contralateral line bisection bias: A study with virtual hemianopia Journal Article In: Neuropsychologia, vol. 49, no. 12, pp. 3377–3382, 2011. @article{Ogun2011, Hemianopic patients show a contralesional bisection bias, but it is unclear whether this is a consequence of their field loss or related to extrastriate damage. One observation cited against the former is that hemianopic bisection bias does not vary with the degree of central (macular) sparing; however, it is unclear to what extent central sparing should affect this bias. Our goal was to determine the effect of central sparing on line bisection biases from field loss alone, with two approaches. First, we studied 12 healthy subjects viewing lines under conditions of virtual hemianopia, created by a gaze-contingent technique. Second, we calculated the effect predicted by a visuospatial model of the effect of central magnification on line representations in the visual system. Our results first replicated the contralateral line bisection bias with hemianopia, confirming that this can be generated by visual hemifield loss in the absence of extrastriate damage. Central sparing had only a modest effect on hemianopic bisection bias, with only slightly less bias with 10° compared to 2° of central sparing. In accordance with these empiric data, computing the center of mass for line representations in our model showed only a shallow decline in bisection bias as central sparing increased from 0 to 10°. We conclude that contralateral bisection bias only decreases slightly with central sparing, and that the absence of a statistically significant effect of central sparing in patients cannot be taken as evidence against a visual origin of contralateral hemianopic line bisection bias. |
Bettina Olk; Yu Jin Effects of aging on switching the response direction of pro- and antisaccades Journal Article In: Experimental Brain Research, vol. 208, no. 1, pp. 139–150, 2011. @article{Olk2011, The present study investigated effects of task switching between pro- and antisaccades and switching the direction of these saccades (response switching) on performance of younger and older adults. Participants performed single-task blocks, in which only pro- or only antisaccades had to be made, as well as mixed-task blocks, in which pro- and antisaccades were required. Analysis of specific task switch effects in the mixed-task blocks showed switch costs for error rates for prosaccades for both groups, suggesting that antisaccade task rules persisted and affected the following prosaccade. The comparison between single- and mixed-task blocks showed that mixing costs were either equal or smaller for older than younger participants, indicating that the older participants were well able to keep task sets in working memory. The most prominent age difference observed for response switching was that for the older but not younger group task switching and response switching interacted, resulting in fewer errors when two consecutive antisaccades were made in the same direction. This finding is best explained with a facilitation of these consecutive antisaccades. The present study clearly demonstrated the impact of response switching and a difference between age groups, underlining the importance of considering this factor when investigating pro- and antisaccades, especially antisaccades, and when investigating task switching and aging. |
Samantha C. Otero; Brendan S. Weekes; Samuel B. Hutton Pupil size changes during recognition memory Journal Article In: Psychophysiology, vol. 48, no. 10, pp. 1346–1353, 2011. @article{Otero2011, Pupils dilate to a greater extent when participants view old compared to new items during recognition memory tests. We report three experiments investigating the cognitive processes associated with this pupil old/new effect. Using a remember/know procedure, we found that the effect occurred for old items that were both remembered and known at recognition, although it was attenuated for known compared to remembered items. In Experiment 2, the pupil old/new effect was observed when items were presented acoustically, suggesting the effect does not depend on low-level visual processes. The pupil old/new effect was also greater for items encoded under deep compared to shallow orienting instructions, suggesting it may reflect the strength of the underlying memory trace. Finally, the pupil old/new effect was also found when participants falsely recognized items as being old. We propose that pupils respond to a strength-of-memory signal and suggest that pupillometry provides a useful technique for exploring the underlying mechanisms of recognition memory. |