EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2014 |
Marc R. Kamke; Alexander E. Ryan; Martin V. Sale; Megan E. J. Campbell; Stephan Riek; Timothy J. Carroll; Jason B. Mattingley Visual spatial attention has opposite effects on bidirectional plasticity in the human motor cortex Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1475–1480, 2014. @article{Kamke2014, Long-term potentiation (LTP) and long-term depression (LTD) are key mechanisms of synaptic plasticity that are thought to act in concert to shape neural connections. Here we investigated the influence of visual spatial attention on LTP-like and LTD-like plasticity in the human motor cortex. Plasticity was induced using paired associative stimulation (PAS), which involves repeated pairing of peripheral nerve stimulation and transcranial magnetic stimulation to alter functional responses in the thumb area of the primary motor cortex. PAS-induced changes in cortical excitability were assessed using motor-evoked potentials. During plasticity induction, participants directed their attention to one of two visual stimulus streams located adjacent to each hand. When participants attended to visual stimuli located near the left thumb, which was targeted by PAS, LTP-like increases in excitability were significantly enhanced, and LTD-like decreases in excitability reduced, relative to when they attended instead to stimuli located near the right thumb. These differential effects on (bidirectional) LTP-like and LTD-like plasticity suggest that voluntary visual attention can exert an important influence on the functional organization of the motor cortex. Specifically, attention acts to both enhance the strengthening and suppress the weakening of neural connections representing events that fall within the focus of attention. |
James H. Kryklywy; Derek G. V. Mitchell Emotion modulates allocentric but not egocentric stimulus localization: implications for dual visual systems perspectives Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3719–3726, 2014. @article{Kryklywy2014, Considerable evidence suggests that emotional cues influence processing prioritization and neural representations of stimuli. Specifically, within the visual domain, emotion is known to impact ventral stream processes and ventral stream-mediated behaviours; it remains unclear, however, the extent to which emotion impacts dorsal stream processes. In the present study, participants localized a visual target stimulus embedded within a background array utilizing allocentric localization (requiring an object-centred representation of visual space to perform an action) and egocentric localization (requiring purely target-directed actions), which are thought to differentially rely on the ventral versus dorsal visual stream, respectively. Simultaneously, a task-irrelevant negative, positive or neutral sound was presented to produce an emotional context. In line with predictions, we found that during allocentric localization, response accuracy was enhanced in the context of negative compared to either neutral or positive sounds. In contrast, no significant effects of emotion were identified during egocentric localization. These results raise the possibility that negative emotional auditory contexts enhance ventral stream, but not dorsal stream, processing in the visual domain. Furthermore, this study highlights the complexity of emotion-cognition interactions, indicating how emotion can have a differential impact on almost identical overt behaviours that may be governed by distinct neurocognitive systems. |
I. Kurki; Miguel P. Eckstein Template changes with perceptual learning are driven by feature informativeness Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–18, 2014. @article{Kurki2014, Perceptual learning changes the way the human visual system processes stimulus information. Previous studies have shown that the human brain's weightings of visual information (the perceptual template) become better matched to the optimal weightings. However, the dynamics of the template changes are not well understood. We used the classification image method to investigate whether visual field or stimulus properties govern the dynamics of the changes in the perceptual template. A line orientation discrimination task where highly informative parts were placed in the peripheral visual field was used to test three hypotheses: (1) The template changes are determined by the visual field structure, initially covering stimulus parts closer to the fovea and expanding toward the periphery with learning; (2) the template changes are object centered, starting from the center and expanding toward edges; and (3) the template changes are determined by stimulus information, starting from the most informative parts and expanding to less informative parts. Results show that, initially, the perceptual template contained only the more peripheral, highly informative parts. Learning expanded the template to include less informative parts, resulting in an increase in sampling efficiency. A second experiment interleaved parts with high and low signal-to-noise ratios and showed that template reweighting through learning was restricted to stimulus elements that are spatially contiguous to parts with initial high template weights. The results suggest that the informativeness of features determines how the perceptual template changes with learning. Further, the template expansion is constrained by spatial proximity. |
MiYoung Kwon; Pinglei Bao; Rachel Millin; Bosco S. Tjan Radial-tangential anisotropy of crowding in the early visual areas Journal Article In: Journal of Neurophysiology, vol. 112, no. 10, pp. 2413–2422, 2014. @article{Kwon2014, Crowding, the inability to recognize an individual object in clutter (Bouma H. Nature 226: 177–178, 1970), is considered a major impediment to object recognition in peripheral vision. Despite its significance, the cortical loci of crowding are not well understood. In particular, the role of the primary visual cortex (V1) remains unclear. Here we utilize a diagnostic feature of crowding to identify the earliest cortical locus of crowding. Controlling for other factors, radially arranged flankers induce more crowding than tangentially arranged ones (Toet A, Levi DM. Vision Res 32: 1349–1357, 1992). We used functional magnetic resonance imaging (fMRI) to measure the change in mean blood oxygenation level-dependent (BOLD) response due to the addition of a middle letter between a pair of radially or tangentially arranged flankers. Consistent with the previous finding that crowding is associated with a reduced BOLD response [Millin R, Arman AC, Chung ST, Tjan BS. Cereb Cortex (July 5, 2013). doi:10.1093/cercor/bht159], we found that the BOLD signal evoked by the middle letter depended on the arrangement of the flankers: less BOLD response was associated with adding the middle letter between radially arranged flankers compared with adding it between tangentially arranged flankers. This anisotropy in BOLD response was present as early as V1 and remained significant in downstream areas. The effect was observed while subjects' attention was diverted away from the testing stimuli. Contrast detection threshold for the middle letter was unaffected by flanker arrangement, ruling out surround suppression of contrast response as a major factor in the observed BOLD anisotropy. Our findings support the view that V1 contributes to crowding. |
Alexandre Lang; Chrystal Gaertner; Elham Ghassemi; Qing Yang; Christophe Orssaud; Zoï Kapoula Saccade-vergence properties remain more stable over short-time repetition under overlap than under gap task: A preliminary study Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 372, 2014. @article{Lang2014, Under natural circumstances, saccade-vergence eye movements are among the most frequently occurring. This study examines the properties of such movements focusing on short-term repetition effects. Are such movements robust over time or are they subject to tiredness? 12 healthy adults performed convergent and divergent combined eye movements either in a gap task (i.e., 200 ms between the end of the fixation stimulus and the beginning of the target stimulus) or in an overlap task (i.e., the peripheral target begins 200 ms before the end of the fixation stimulus). Latencies were shorter in the gap task than in the overlap task for both saccade and vergence components. Repetition had no effect on latency, which is a novel result. In both tasks, saccades were initiated later and executed faster (mean and peak velocities) than the vergence component. The mean and peak velocities of both components decreased over trials in the gap task but remained constant in the overlap task. This result is also novel and has some clinical implications. Another novel result concerns the accuracy of the saccade component that was better in the gap than in the overlap task. The accuracy also decreased over trials in the gap task but remained constant in the overlap task. The major result of this study is that under a controlled mode of initiation (overlap task) properties of combined eye movements are more stable than under automatic triggering (gap task). These results are discussed in terms of saccade-vergence interactions, convergence-divergence specificities and repetition versus adaptation protocols. |
Axel Larsen Deconstructing mental rotation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1072–1091, 2014. @article{Larsen2014, A random walk model of the classical mental rotation task is explored in two experiments. By assuming that a mental rotation is repeated until sufficient evidence for a match/mismatch is obtained, the model accounts for the approximately linearly increasing reaction times (RTs) on positive trials, flat RTs on negative trials, false alarms and miss rates, effects of complexity, and for the number of eye movement switches between stimuli as functions of angular difference in orientation. Analysis of eye movements supports key aspects of the model and shows that initial processing time is roughly constant until the first saccade switch between stimulus objects, while the duration of the remaining trial increases approximately linearly as a function of angular discrepancy. The increment results from additive effects of (a) a linear increase in the number of saccade switches between stimulus objects, (b) a linear increase in the number of saccades on a stimulus, and (c) a linear increase in the number and in the duration of fixations on a stimulus object. The fixation duration increment was the same on simple and complex trials (about 15 ms per 60°), which suggests that the critical orientation alignment takes place during fixations at very high speed. |
Adam M. Larson; Tyler E. Freeman; Ryan V. Ringer; Lester C. Loschky The spatiotemporal dynamics of scene gist recognition Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 471–487, 2014. @article{Larson2014, Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space-within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. |
Nida Latif; Arlene Gehmacher; Monica S. Castelhano; Kevin G. Munhall The art of gaze guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 33–39, 2014. @article{Latif2014, An ongoing challenge in scene perception is identifying the factors that influence how we explore our visual world. By using multiple versions of paintings as a tool to control for high-level influences, we show that variation in the visual details of a painting causes differences in observers' gaze despite constant task and content. Further, we show that by switching locations of highly salient regions through textural manipulation, a corresponding switch in eye movement patterns is observed. Our results present the finding that salient regions and gaze behavior are not simply correlated; variation in saliency through textural differences causes an observer to direct their viewing accordingly. This work demonstrates the direct contribution of low-level factors in visual exploration by showing that examination of a scene, even for aesthetic purposes, can be easily manipulated by altering the low-level properties and hence, the saliency of the scene. |
Claudio Lavín; René San Martín; Eduardo Rosales Jubal Pupil dilation signals uncertainty and surprise in a learning gambling task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 218, 2014. @article{Lavin2014, Pupil dilation under constant illumination is a physiological marker whose modulation is related to several cognitive functions involved in daily decision making. There is evidence for a role of pupil dilation change during decision-making tasks associated with uncertainty, reward-prediction errors and surprise. However, while some work suggests that pupil dilation is mainly modulated by reward predictions, others point out that this marker is related to uncertainty signaling and surprise. Supporting the latter hypothesis, the neural substrate of this marker is related to noradrenaline (NA) activity, which has also been related to uncertainty signaling. In this work we aimed to test whether pupil dilation is a marker for uncertainty and surprise in a learning task. We recorded pupil dilation responses in 10 participants performing the Iowa Gambling Task (IGT), a decision-making task that requires learning and constant monitoring of outcomes' feedback, which are important variables within the traditional study of human decision making. Results showed that pupil dilation changes were modulated by learned uncertainty and surprise regardless of feedback magnitudes. Interestingly, greater pupil dilation changes were found during positive feedback (PF) presentation when there was lower uncertainty about a future negative feedback (NF); and by surprise during NF presentation. These results support the hypothesis that pupil dilation is a marker of learned uncertainty, and may be used as a marker of NA activity facing unfamiliar situations in humans. |
Ada Le; Matthias Niemeier Visual field preferences of object analysis for grasping with one hand Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 782, 2014. @article{Le2014, When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a,b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2014). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling with the left hand showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields. |
Chia-lin Lee; Daniel Mirman; Laurel J. Buxbaum Abnormal dynamics of activation of object use information in apraxia: Evidence from eyetracking Journal Article In: Neuropsychologia, vol. 59, no. 1, pp. 13–26, 2014. @article{Lee2014, Action representations associated with object use may be incidentally activated during visual object processing, and the time course of such activations may be influenced by lexical-semantic context (e.g., Lee, Middleton, Mirman, Kalénine, & Buxbaum (2012). Journal of Experimental Psychology: Human Perception and Performance, 39(1), 257-270). In this study we used the "visual world" eye-tracking paradigm to examine whether a deficit in producing skilled object-use actions (apraxia) is associated with abnormalities in incidental activation of action information, and assessed the neuroanatomical substrates of any such deficits. Twenty left hemisphere stroke patients, ten of whom were apraxic, performed a task requiring identification of a named object in a visual display containing manipulation-related and unrelated distractor objects. Manipulation relationships among objects were not relevant to the identification task. Objects were cued with neutral ("S/he saw the. . .."), or action-relevant ("S/he used the. . ..") sentences. Non-apraxic participants looked at use-related non-target objects significantly more than at unrelated non-target objects when cued both by neutral and action-relevant sentences, indicating that action information is incidentally activated. In contrast, apraxic participants showed delayed activation of manipulation-based action information during object identification when cued by neutral sentences. The magnitude of delayed activation in the neutral sentence condition was reliably predicted by lower scores on a test of gesture production to viewed objects, as well as by lesion loci in the inferior parietal and posterior temporal lobes. 
However, when cued by a sentence containing an action verb, apraxic participants showed fixation patterns that were statistically indistinguishable from non-apraxic controls. In support of grounded theories of cognition, these results suggest that apraxia and temporal-parietal lesions may be associated with abnormalities in incidental activation of action information from objects. Further, they suggest that the previously-observed facilitative role of action verbs in the retrieval of object-related action information extends to participants with apraxia. |
Dongpyo Lee; Howard Poizner; Daniel M. Corcos; Denise Y. P. Henriques Unconstrained reaching modulates eye-hand coupling Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 211–223, 2014. @article{Lee2014b, Eye–hand coordination is a crucial element of goal-directed movements. However, few studies have looked at the extent to which unconstrained movements of the eyes and hand made to targets influence each other. We studied human participants who moved either their eyes or both their eyes and hand to one of three static or flashed targets presented in 3D space. The eyes were directed, and the hand was located, at a common start position on either the right or left side of the body. We found that the velocity and scatter of memory-guided saccades (flashed targets) differed significantly when produced in combination with a reaching movement than when produced alone. Specifically, when accompanied by a reach, peak saccadic velocities were lower than when the eye moved alone. Peak saccade velocities, as well as latencies, were also highly correlated with those for reaching movements, especially for the briefly flashed targets compared to the continuously visible target. The scatter of saccade endpoints was greater when the saccades were produced with the reaching movement than when produced without, and the size of the scatter for both saccades and reaches was weakly correlated. These findings suggest that the saccades and reaches made to 3D targets are weakly to moderately coupled both temporally and spatially and that this is partly the result of the arm movement influencing the eye movement. Taken together, this study provides further evidence that the oculomotor and arm motor systems interact above and beyond any common target representations shared by the two motor systems. |
Kang Woo Lee; Yubu Lee Scanpath generated by cue-driven activation and spatial strategy: A comparative study Journal Article In: Cognitive Computation, vol. 6, no. 3, pp. 585–594, 2014. @article{Lee2014a, A comparative study of a cued face search task is presented in this paper. Human participants and a computer model carried out a task in which they were required to locate a color-cued target face. Human-generated eye fixations and scanpaths were compared with those generated by the computational model. Throughout the comparison, we considered the similarities and dissimilarities between the two systems' performances. The results show that the eye fixations in a valid cue search are highly correlated with the computer-generated fixation points in a valid cue search but not with those in random and invalid cue searches. Moreover, the comparison between human- and computer-generated scanpaths showed that the scanpath that links the fixation points is not randomly generated. Our results imply that eye movement is accomplished not only by cue-driven activation, but also by a spatial strategy. |
Sunjung Kim; Linda J. Lombardino; Wind Cowles; Lori J. Altmann Investigating graph comprehension in students with dyslexia: An eye tracking study Journal Article In: Research in Developmental Disabilities, vol. 35, no. 7, pp. 1609–1622, 2014. @article{Kim2014, The purpose of this study was to examine graph comprehension in college students with developmental dyslexia. We investigated how graph types (line, vertical bar, and horizontal bar graphs), graphic patterns (single and double graphic patterns), and question types (point locating and comparison questions) differentially affect graph comprehension of students with and without dyslexia. Groups were compared for (1) reaction times for answering comprehension questions based on graphed data and (2) eye gaze times for specific graph subregions (x-axis, y-axis, pattern, legend, question, and answer). Dyslexic readers were significantly slower in their graph comprehension than their peers with group differences becoming more robust with the increasing complexity of graphs and tasks. In addition, dyslexic readers' initial eye gaze viewing times for linguistic subregions (question and answer) and total viewing times for both linguistic (question and answer) and nonlinguistic (pattern) subregions were significantly longer than their control peers' times. In spite of using elementary-level paragraphs for comprehension and simple graph forms, young adults with dyslexia needed more time to process linguistic and nonlinguistic stimuli. These findings are discussed relative to theories proposed to address fundamental processing deficits in individuals with dyslexia. |
Matthew O. Kimble; Mariam Boxwala; Whitney Bean; Kristin Maletsky; Jessica Halper; Kaleigh Spollen; Kevin Fleming The impact of hypervigilance: Evidence for a forward feedback loop Journal Article In: Journal of Anxiety Disorders, vol. 28, no. 2, pp. 241–524, 2014. @article{Kimble2014, A number of prominent theories suggest that hypervigilance and attentional bias play a central role in anxiety disorders and PTSD. It is argued that hypervigilance may focus attention on potential threats and precipitate or maintain a forward feedback loop in which anxiety is increased. While there is considerable data to suggest that attentional bias exists, there is little evidence to suggest that it plays this proposed but critical role. This study investigated how manipulating hypervigilance would impact the forward feedback loop via self-reported anxiety, visual scanning, and pupil size. Seventy-one participants were assigned to either a hypervigilant, pleasant, or control condition while looking at a series of neutral pictures. Those in the hypervigilant condition had significantly more fixations than those in the other two groups. These fixations were more spread out and covered a greater percentage of the ambiguous scene. Pupil size was also significantly larger in the hypervigilant condition relative to the control condition. Thus the study provided support for the role of hypervigilance in increasing visual scanning and arousal even to neutral stimuli and even when there is no change in self-reported anxiety. Implications for the role this may play in perpetuating a forward feedback loop are discussed. |
Barrie P. Klein; Ben M. Harvey; Serge O. Dumoulin Attraction of position preference by spatial attention throughout human visual cortex Journal Article In: Neuron, vol. 84, no. 1, pp. 227–237, 2014. @article{Klein2014, Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. |
Elise Klein; S. Huber; Hans-Christoph Nuerk; Korbinian Moeller Operational momentum affects eye fixation behaviour Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 8, pp. 1614–1625, 2014. @article{Klein2014a, The operational momentum effect (OM) indicates an association of mental addition with a rightward spatial bias, whereas subtraction is associated with a leftward bias. To evaluate the assumed attentional origin of the OM effect, we evaluated not only participants' relative estimation error in a task requiring them to locate addition and subtraction results on a given number line but also their eye-fixation behaviour. Furthermore, to investigate the situatedness of spatial-numerical associations, the orientation of the number line (left-to-right vs. right-to left) was manipulated. OM biases in participants' explicit number line estimations and more implicit eye-fixation behaviour are integrated into a two-process hypothesis of the OM effect suggesting a first rough spatial anticipation followed by an evaluation/correction process. This account not only is capable of accounting for the results observed for participants' relative estimation error but is also corroborated by the eye-fixation results. Importantly, the fact that all effects were found independent of the orientation of the number line indicates that spatial-numerical associations such as the OM effect may not be hard-wired associations of spatial and numerical representations but rather reflect influences of situatedness on numerical cognition. |
Nadine Kloth; Susannah E. Shields; Gillian Rhodes On the other side of the fence: Effects of social categorization and spatial grouping on memory and attention for own-race and other-race faces Journal Article In: PLoS ONE, vol. 9, no. 9, pp. e105979, 2014. @article{Kloth2014, The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. 
The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias. |
Zuzanna Klyszejko; Masih Rahmati; Clayton E. Curtis Attentional priority determines working memory precision Journal Article In: Vision Research, vol. 105, pp. 70–76, 2014. @article{Klyszejko2014, Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information has limits that define its capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced choice task (2AFC) with distance discrimination. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2, we manipulated priority by varying monetary incentives contingent upon successful memory for items tested. Moreover, we illustrate our hypothesis using a simple model that distributed attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information. |
Kathryn Koehler; Fei Guo; Sheng Zhang; Miguel P. Eckstein What do saliency models predict? Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–27, 2014. @article{Koehler2014, Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. |
Christof Körner; Margit Höfler; Barbara Tröbinger; Iain D. Gilchrist Eye movements indicate the temporal organisation of information processing in graph comprehension Journal Article In: Applied Cognitive Psychology, vol. 28, no. 3, pp. 360–373, 2014. @article{Koerner2014a, Hierarchical graphs (e.g. file system browsers and preference trees) represent objects (e.g. files and folders) as graph nodes and relations between them (e.g. sub-folder relations) as lines. We investigated the temporal organisation of two processes that are necessary for comprehending such graphs—search for the graph nodes and reasoning about their relation. We tracked participants' eye movements while they interpreted such graphs. In Experiment 1, we masked the graph at a time when search processes had finished but reasoning was hypothetically ongoing. We observed a dramatic deterioration in comprehension compared with unmasked graphs. In Experiment 2, we changed the relation between critical graph nodes after search for them had finished, unbeknownst to participants. Participants mostly based their response on the graph as presented after the change. These results suggest that comprehension processes are organised in a sequential manner, an observation that can potentially be applied to the interactive presentation of graphs. |
Anna A. Kosovicheva; Benjamin A. Wolfe; David Whitney Visual motion shifts saccade targets Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 6, pp. 1778–1788, 2014. @article{Kosovicheva2014, Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects as they are constantly moving relative to the eye and head. Any delays in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting, without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target, by using drifting Gabor patches with either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft aperture Gabor patch resulted in an illusory position shift, whereas the hard aperture stimulus maintained the motion signals but resulted in a smaller illusory position shift. Thus, motion energy and target location were equated, but a position shift was generated in only one condition. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with a soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and this remapped location guides saccadic localization. |
Christopher K. Kovach; Matthew J. Sutterer; Sara N. Rushia; Adrianna Teriakidis; Rick L. Jenison Two systems drive attention to rewards Journal Article In: Frontiers in Psychology, vol. 5, pp. 46, 2014. @article{Kovach2014, How options are framed can dramatically influence choice preference. While salience of information plays a central role in this effect, precisely how it is mediated by attentional processes remains unknown. Current models assume a simple relationship between attention and choice, according to which preference should be uniformly biased towards the attended item over the whole time-course of a decision between similarly valued items. To test this prediction we considered how framing alters the orienting of gaze during a simple choice between two options, using eye movements as a sensitive online measure of attention. In one condition participants selected the less preferred item to discard and in the other, the more preferred item to keep. We found that gaze gravitates towards the item ultimately selected, but did not observe the effect to be uniform over time. Instead, we found evidence for distinct early and late processes that guide attention according to preference in the first case and task demands in the second. We conclude that multiple time-dependent processes govern attention during choice, and that these may contribute to framing effects in different ways. |
Lynn Huestegge; Iring Koch When two actions are easier than one: How inhibitory control demands affect response processing Journal Article In: Acta Psychologica, vol. 151, pp. 230–236, 2014. @article{Huestegge2014, Numerous studies showed that the simultaneous execution of multiple actions is associated with performance costs. Here, we demonstrate that when highly automatic responses are involved, performance in single-response conditions can actually be worse than in dual-response conditions. Participants responded to peripheral visual stimuli with an eye movement (saccade), a manual key press, or both. To manipulate saccade automaticity, a central fixation cross either remained present throughout the trial (overlap condition, lower automaticity) or disappeared 200 ms before visual target onset (gap condition, greater automaticity). Crucially, single-response conditions yielded more performance errors than dual-response conditions (i.e., dual-response benefit), especially in gap trials. This was due to difficulties associated with inhibiting saccades when only manual responses were required, suggesting that response inhibition (remaining fixated) can be even more resource-demanding than overt response execution (saccade to peripheral target). |
Kazuya Inoue; Yuji Takeda The properties of object representations constructed during visual search in natural scenes Journal Article In: Visual Cognition, vol. 22, no. 9-10, pp. 1135–1153, 2014. @article{Inoue2014, To investigate properties of object representations constructed during a visual search task, we manipulated the proportion of trials/task within a block: In a search-frequent block, 80% of trials were search tasks; remaining trials presented a memory task; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shape object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that lower performance in the search-frequent block was not due to eye-movement behaviour. Results suggest that object representations constructed during visual search are different from those constructed during memorization and they are modulated by type of target. |
David E. Irwin Short-term memory across eye blinks Journal Article In: Memory, vol. 22, no. 8, pp. 898–906, 2014. @article{Irwin2014, The effect of eye blinks on short-term memory was examined in two experiments. On each trial, participants viewed an initial display of coloured, oriented lines, then after a retention interval they viewed a test display that was either identical or different by one feature. Participants kept their eyes open throughout the retention interval on some blocks of trials, whereas on others they made a single eye blink. Accuracy was measured as a function of the number of items in the display to determine the capacity of short-term memory on blink and no-blink trials. In separate blocks of trials participants were instructed to remember colour only, orientation only, or both colour and orientation. Eye blinks reduced short-term memory capacity by approximately 0.6-0.8 items for both feature and conjunction stimuli. A third, control, experiment showed that a button press during the retention interval had no effect on short-term memory capacity, indicating that the effect of an eye blink was not due to general motoric dual-task interference. Eye blinks might instead reduce short-term memory capacity by interfering with attention-based rehearsal processes. |
David E. Irwin; Maria M. Robinson Perceiving stimulus displacements across saccades Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 548–575, 2014. @article{Irwin2014a, The visual world appears stable despite frequent retinal image movements caused by saccades. Many theories of visual stability assume that extraretinal eye position information is used to spatially adjust perceived locations across saccades, whereas others have proposed that visual stability depends upon coding of the relative positions of objects. McConkie and Currie (1996) proposed a refined combination of these views (called the Saccade Target Object Theory) in which the perception of stability across saccades relies on a local evaluation process centred on the saccade target object rather than on a remapping of the entire scene, with some contribution from memory for the relative positions of objects as well. Three experiments investigated the saccade target object theory, along with an alternative hypothesis that proposes that multiple objects are updated across saccades, but with variable resolution, with the saccade target object (by virtue of being the focus of attention before the saccade and residing near the fovea after the saccade) having priority in the perception of displacement. Although support was found for the saccade target object theory in Experiment 1, the results of Experiments 2 and 3 found that multiple objects are updated across saccades and that their positions are evaluated to determine perceived stability. There is an advantage for detecting displacements of the saccade target, most likely because of visual acuity or attentional focus being better near the fovea, but it is not the saccade target alone that determines the perception of stability and of displacements across saccades. Rather, multiple sources of information appear to contribute. |
Tomohiro Ishizu; Semir Zeki Varieties of perceptual instability and their neural correlates Journal Article In: NeuroImage, vol. 91, pp. 203–209, 2014. @article{Ishizu2014, We report experiments designed to learn whether different kinds of perceptually unstable visual images engage different neural mechanisms. 21 subjects viewed two types of bi-stable images while we scanned the activity in their brains with functional magnetic resonance imaging (fMRI); in one (intra-categorical type) the two percepts remained within the same category (e.g. face-face) while in the other (cross-categorical type) they crossed categorical boundaries (e.g. face-body). The results showed that cross- and intra-categorical reversals share a common reversal-related neural circuitry, which includes fronto-parietal cortex and primary visual cortex (area V1). Cross-categorical reversals alone engaged additional areas, notably anterior cingulate cortex and superior temporal gyrus, which have been posited to be involved in conflict resolution. |
Leyla Isik; Ethan M. Meyers; Joel Z. Leibo; Tomaso Poggio The dynamics of invariant object recognition in the human visual system Journal Article In: Journal of Neurophysiology, vol. 111, no. 1, pp. 91–102, 2014. @article{Isik2014, The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of human-invariant object recognition, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream. |
Anshul Jain; Stuart Fuller; Benjamin T. Backus Cue-recruitment for extrinsic signals after training with low information stimuli Journal Article In: PLoS ONE, vol. 9, no. 5, pp. e96383, 2014. @article{Jain2014, Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable. |
Kohitij Kar; Bart Krekelberg Transcranial alternating current stimulation attenuates visual motion adaptation Journal Article In: Journal of Neuroscience, vol. 34, no. 21, pp. 7334–7340, 2014. @article{Kar2014, Transcranial alternating current stimulation (tACS) is used in clinical applications and basic neuroscience research. Although its behavioral effects are evident from prior reports, current understanding of the mechanisms that underlie these effects is limited. We used motion perception, a percept with relatively well known properties and underlying neural mechanisms, to investigate tACS mechanisms. Healthy human volunteers showed a surprising improvement in motion sensitivity when visual stimuli were paired with 10 Hz tACS. In addition, tACS reduced the motion aftereffect, and this reduction was correlated with the improvement in motion sensitivity. Electrical stimulation had no consistent effect when applied before presenting a visual stimulus or during recovery from motion adaptation. Together, these findings suggest that perceptual effects of tACS result from an attenuation of adaptation. Important consequences for the practical use of tACS follow from our work. First, because this mechanism interferes only with adaptation, this suggests that tACS can be targeted at subsets of neurons (by adapting them), even when the applied currents spread widely throughout the brain. Second, by interfering with adaptation, this mechanism provides a means by which electrical stimulation can generate behavioral effects that outlast the stimulation. |
Ioanna Katidioti; Jelmer P. Borst; Niels A. Taatgen What happens when we switch tasks: Pupil Dilation in Multitasking Journal Article In: Journal of Experimental Psychology: Applied, vol. 20, no. 4, pp. 380–396, 2014. @article{Katidioti2014, Interruption studies typically focus on external interruptions, even though self-interruptions occur at least as often in real work environments. In this article, we therefore contrast external interruptions with self-interruptions. Three multitasking experiments were conducted, in which we examined changes in pupil size when participants switched from a primary to a secondary task. Results showed an increase in pupil dilation several seconds before a self-interruption, which we could attribute to the decision to switch. This indicates that the decision takes a relatively large amount of time. This was supported by the fact that in Experiment 2, participants were significantly slower on the self-interruption blocks than on the external interruption blocks. These findings suggest that the decision to switch is costly, but may also be open for modification through appropriate training. In addition, we propose that if one must switch tasks, it can be more efficient to implement a forced switch after the completion of a subtask instead of leaving the decision to the user. |
Kerry Kawakami; Amanda Williams; David M. Sidhu; Becky L. Choma; Rosa Rodriguez-Bailón; Elena Cañadas; Derek Chung; Kurt Hugenberg An eye for the I: Preferential attention to the eyes of ingroup members. Journal Article In: Journal of Personality and Social Psychology, vol. 107, no. 1, pp. 1–20, 2014. @article{Kawakami2014, Human faces, and more specifically the eyes, play a crucial role in social and nonverbal communication because they signal valuable information about others. It is therefore surprising that few studies have investigated the impact of intergroup contexts and motivations on attention to the eyes of ingroup and outgroup members. Four experiments investigated differences in eye gaze to racial and novel ingroups using eye tracker technology. Whereas Studies 1 and 3 demonstrated that White participants attended more to the eyes of White compared to Black targets, Study 2 showed a similar pattern of attention to the eyes of novel ingroup and outgroup faces. Studies 3 and 4 also provided new evidence that eye gaze is flexible and can be meaningfully influenced by current motivations. Specifically, instructions to individuate specific social categories increased attention to the eyes of target group members. Furthermore, the latter experiments demonstrated that preferential attention to the eyes of ingroup members predicted important intergroup biases such as recognition of ingroup over outgroup faces (i.e., the own-race bias; Study 3) and willingness to interact with outgroup members (Study 4). The implications of these findings for general theorizing on face perception, individuation processes, and intergroup relations are discussed. |
Roozbeh Kiani; Leah Corthell; Michael N. Shadlen Choice certainty is informed by both evidence and decision time Journal Article In: Neuron, vol. 84, no. 6, pp. 1329–1342, 2014. @article{Kiani2014, "Degree of certainty" refers to the subjective belief, prior to feedback, that a decision is correct. A reliable estimate of certainty is essential for prediction, learning from mistakes, and planning subsequent actions when outcomes are not immediate. It is generally thought that certainty is informed by a neural representation of evidence at the time of a decision. Here we show that certainty is also informed by the time taken to form the decision. Human subjects reported simultaneously their choice and confidence about the direction of a noisy display of moving dots. Certainty was inversely correlated with reaction times and directly correlated with motion strength. Moreover, these correlations were preserved even for error responses, a finding that contradicts existing explanations of certainty based on signal detection theory. We also contrived a stimulus manipulation that led to longer decision times without affecting choice accuracy, thus demonstrating that deliberation time itself informs the estimate of certainty. We suggest that elapsed decision time informs certainty because it serves as a proxy for task difficulty. |
Renske S. Hoedemaker; Peter C. Gordon It takes time to prime: Semantic priming in the ocular lexical decision task Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 6, pp. 2179–2197, 2014. @article{Hoedemaker2014a, Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT. |
Margit Höfler; Iain D. Gilchrist; Christof Körner Searching the same display twice: Properties of short-term memory in repeated search Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 335–352, 2014. @article{Hofler2014, Consecutive search for different targets in the same display is supported by a short-term memory mechanism: Distractors that have recently been inspected in the first search are found more quickly in the second search when they become the target (Exp. 1). Here, we investigated the properties of this memory process. We found that this recency advantage is robust to a delay between the two searches (Exp. 2) and that it is only slightly disrupted by an interference task between the two searches (Exp. 3). Introducing a concurrent secondary task (Exp. 4) showed that the memory representations formed in the first search are based on identity as well as location information. Together, these findings show that the short-term memory that supports repeated visual search stores a complex combination of item identity and location that is robust to disruption by either time or interference. |
Pawel Holas; Izabela Krejtz; Marzena Cypryańska; John B. Nezlek Orienting and maintenance of attention to threatening facial expressions in anxiety - An eye movement study Journal Article In: Psychiatry Research, vol. 220, no. 1-2, pp. 362–369, 2014. @article{Holas2014, Cognitive models posit that anxiety disorders stem in part from underlying attentional biases to threat. Consistent with this, studies have found that the attentional bias to threat-related stimuli is greater in high vs. low anxious individuals. Nevertheless, it is not clear if similar biases exist for different threatening emotions or for any facial emotional stimulus. In the present study, we used eye-tracking to measure orienting and maintenance of attention to faces displaying anger, fear and disgust as threats, and faces displaying happiness and sadness. Using a free viewing task, we examined differences between low and high trait anxious (HTA) individuals in the attention they paid to each of these emotional faces (paired with a neutral face). We found that initial orienting was faster for angry and happy faces, and high trait anxious participants were more vigilant to fearful and disgust faces. Our results for attentional maintenance were not consistent. The results of the present study suggest that attentional processes may be more emotion-specific than previously believed. Our results suggest that attentional processes for different threatening emotions may not be the same and that attentional processes for some negative and some positive emotions may be similar. |
Dayana Hristova; Matej Guid; Ivan Bratko Assessing the difficulty of chess tactical problems Journal Article In: International Journal on Advances in Intelligent Systems, vol. 7, no. 3-4, pp. 728–738, 2014. @article{Hristova2014, We investigate experts' ability to assess the difficulty of a mental task for a human. The final aim is to find formalized measures of difficulty that could be used in automated assessment of the difficulty of a task. In experiments with tactical chess problems, the experts' estimations of difficulty are compared to the statistic-based difficulty ratings on the Chess Tempo website. In an eye tracking experiment, the subjects' solutions to chess problems and the moves that they considered are analyzed. Performance data (time and accuracy) are used as indicators of subjectively perceived difficulty. We also aim to identify the attributes of tactical positions that affect the difficulty of the problem. Understanding the connection between players' estimation of difficulty and the properties of the search trees of variations considered is essential, but not sufficient, for modeling the difficulty of tactical problems. Our findings include that (a) assessing difficulty is also very difficult for human experts, and (b) algorithms designed to estimate difficulty should interpret the complexity of a game tree in the light of knowledge-based patterns that human players are able to detect in a chess problem. |
S. Huber; Alon Mann; Hans-Christoph Nuerk; Korbinian Moeller Cognitive control in number magnitude processing: Evidence from eye-tracking Journal Article In: Psychological Research, vol. 78, no. 4, pp. 539–548, 2014. @article{Huber2014, The unit-decade compatibility effect describes longer response times and higher error rates for incompatible (e.g., 37_52) than compatible (e.g., 42_57) number comparisons. Recent research indicated that the effect depends on the percentage of same-decade filler items. In the present study, we further examined this relationship by recording participants' eye-fixation behaviour. In four conditions, participants had to compare item sets with different filler item types (i.e., same-decade and same-unit filler items) and different numbers of same-decade filler items (i.e., 25, 50, and 75 %). We found a weaker unit-decade compatibility effect with most fixations on tens in the condition with same-unit filler items. Moreover, the compatibility effect increased with the percentage of same-decade filler items which was accompanied by less fixations on tens and more fixations on units. Thus, our study provides first eye-tracking evidence for the influence of cognitive control in number processing. |
S. Huber; Korbinian Moeller; Hans-Christoph Nuerk Adaptive processing of fractions - Evidence from eye-tracking Journal Article In: Acta Psychologica, vol. 148, pp. 37–48, 2014. @article{Huber2014a, Recent evidence indicated that fraction pair type determined whether a particular fraction is processed holistically, componentially or in a hybrid manner. Going beyond previous studies, we investigated how participants adapt their processing of fractions not only to fraction type, but also to experimental context. To examine adaptation in fraction processing, we recorded participants' eye-fixation behaviour in a fraction magnitude comparison task. Participants' eye-fixation behaviour indicated componential processing of fraction pairs with common components for which the decision-relevant components are easy to identify. Importantly, we observed that fraction processing was adapted to experimental context: Evidence for componential processing was stronger, when experimental context allowed valid expectations about which components are decision-relevant. Taken together, we conclude that fraction processing is adaptive beyond the comparison of different fraction types, because participants continuously adjust to the experimental context in which fractions are processed. |
Kathryn Louise McCabe; Rebbekah Josephine Atkinson; Gavin Cooper; Jessica Lauren Melville; Jill Harris; Ulrich Schall; Carmel M. Loughland; Renate Thienel; Linda E. Campbell In: Journal of Neurodevelopmental Disorders, vol. 6, no. 1, pp. 1–8, 2014. @article{McCabe2014, BACKGROUND: 22q11.2 deletion syndrome (22q11DS) is associated with a number of physical anomalies and neuropsychological deficits including impairments in executive and sensorimotor function. It is estimated that 25% of children with 22q11DS will develop schizophrenia and other psychotic disorders later in life. Evidence of genetic transmission of information processing deficits in schizophrenia suggests performance in 22q11DS individuals will enhance understanding of the neurobiological and genetic substrates associated with information processing. In this report, we examine information processing in 22q11DS using measures of startle eyeblink modification and antisaccade inhibition to explore similarities with schizophrenia and associations with neurocognitive performance. METHODS: Startle modification (passive and active tasks; 120- and 480-ms pre-pulse intervals) and antisaccade inhibition were measured in 25 individuals with genetically confirmed 22q11DS and 30 healthy control subjects. RESULTS: Individuals with 22q11DS exhibited increased antisaccade error as well as some evidence (trend-level effect) of impaired sensorimotor gating during the active condition, suggesting a dysfunction in controlled attentional processing, rather than a pre-attentive dysfunction using this paradigm. CONCLUSIONS: The findings from the present study show similarities with previous studies in clinical populations associated with 22q11DS such as schizophrenia that may indicate shared dysfunction of inhibition pathways in these groups. |
Gerald P. McDonnell; Brian H. Bornstein; Cindy E. Laub; Mark Mills; Michael D. Dodd Perceptual processes in the cross-race effect: Evidence from eyetracking Journal Article In: Basic and Applied Social Psychology, vol. 36, no. 6, pp. 478–493, 2014. @article{McDonnell2014, The cross-race effect (CRE) is the tendency to have better recognition accuracy for same-race than for other-race faces due to differential encoding strategies. Research exploring the nature of encoding differences has yielded few definitive conclusions. The present experiments explored this issue using an eyetracker during a recognition task involving White participants viewing White and African American faces. Participants fixated faster and longer on the upper features of White faces and the lower features of African American faces. When participants were instructed to attend to certain features in African American faces, this pattern was exaggerated. Gaze patterns were related to improved recognition accuracy. |
Eugene McSorley; Clare Lyne; Rachel McCloy Dissociation between the impact of evidence on eye movement target choice and confidence judgements Journal Article In: Experimental Brain Research, vol. 232, no. 6, pp. 1927–1940, 2014. @article{McSorley2014, It has been suggested that the evidence used to support a decision to move our eyes and the confidence we have in that decision are derived from a common source. Alternatively, confidence may be based on further post-decisional processes. In three experiments, we examined this. In Experiment 1, participants chose between two targets on the basis of varying levels of evidence (i.e., the direction of motion coherence in a random dot kinematogram). They indicated this choice by making a saccade to one of two targets and then indicated their confidence. Saccade trajectory deviation was taken as a measure of the inhibition of the non-selected target. We found that as evidence increased so did confidence and deviations of saccade trajectory away from the non-selected target. However, a correlational analysis suggested they were not related. In Experiment 2, an option to opt-out of the choice was offered on some trials if choice proved too difficult. In this way, we isolated trials on which confidence in target selection was high (i.e., when the option to opt-out was available but not taken). Again saccade trajectory deviations were found not to differ in relation to confidence. In Experiment 3, we directly manipulated confidence, such that participants had high or low task confidence. They showed no differences in saccade trajectory deviations. These results support post-decisional accounts of confidence: evidence supporting the decision to move the eyes is reflected in saccade control, but the confidence that we have in that choice is subject to further post-decisional processes. |
Weston Pack; Stanley A. Klein; Thom Carney Bias corrected double judgment accuracy during spatial attention cueing: Unmasked stimuli with non-predictive and semi-predictive cues Journal Article In: Vision Research, vol. 105, pp. 213–225, 2014. @article{Pack2014, The present experiments indicate that in a 7-AFC double judgment accuracy task with unmasked stimuli, cue location response bias can be quantified and removed, revealing unbiased improvements in response accuracy for valid cues compared to invalid cues. By testing for cueing effects over a range of contrast levels with unmasked stimuli, changes in the psychometric function were examined and provide insight into the mechanisms of involuntary attention which might account for the observed cueing effects. Cue validity was varied between two separate experiments showing that non-predictive (14.3%) and moderately-predictive cues (50%) equally facilitate stimulus identification and localization during transient involuntary attention capture. Observers had improved accuracy at identifying both the location and the feature identity of target letters throughout a range of contrast levels, without any dependence on backward masking. There was a leftward shift of the psychometric function threshold with validly cued data and no slope reduction, suggesting that any additive hypothesis based on spatial uncertainty reduction or perceptual enhancement is not a sufficient explanation for the observed cueing effects. The interdependence of the perceptual processes of stimulus discrimination and localization was also investigated by analyzing response contingencies, showing that observers were equally skilled at making identification and localization accuracy judgments with unmasked stimuli. |
Weston Pack; Stanley A. Klein; Thom Carney Bias-free double judgment accuracy during spatial attention cueing: Performance enhancement from voluntary and involuntary attention Journal Article In: Vision Research, vol. 105, pp. 204–212, 2014. @article{Pack2014a, Recent research has demonstrated that involuntary attention improves target identification accuracy for letters using non-predictive peripheral cues, helping to resolve some of the controversy over performance enhancement from involuntary attention. While various cueing studies have demonstrated that their reported cueing effects were not due to response bias to the cue, very few investigations have quantified the extent of any response bias or developed methods of removing bias from observed results in a double judgment accuracy task. We have devised a method to quantify and remove response bias to cued locations in a double judgment accuracy cueing task, revealing the true, unbiased performance enhancement from involuntary and voluntary attention. In a 7-alternative forced choice cueing task using backward masked stimuli to temporally constrain stimulus processing, non-predictive cueing increased target detection and discrimination at cued locations relative to uncued locations even after cue location bias had been corrected. |
Céline Paeye; Laurent Madelain Reinforcing saccadic amplitude variability in a visual search task Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–18, 2014. @article{Paeye2014, Human observers often adopt rigid scanning strategies in visual search tasks, even though this may lead to suboptimal performance. Here we ask whether specific levels of saccadic amplitude variability may be induced in a visual search task using reinforcement learning. We designed a new gaze-contingent visual foraging task in which finding a target among distractors was made contingent upon specific saccadic amplitudes. When saccades of rare amplitudes led to displaying the target, the U values (measuring uncertainty) increased by 54.89% on average. They decreased by 41.21% when reinforcing frequent amplitudes. In a noncontingent control group no consistent change in variability occurred. A second experiment revealed that this learning transferred to conventional visual search trials. These results provide experimental support for the importance of reinforcement learning for saccadic amplitude variability in visual search. |
Adam Palanica; Roxane J. Itier Effects of peripheral eccentricity and head orientation on gaze discrimination Journal Article In: Visual Cognition, vol. 22, no. 9-10, pp. 1216–1232, 2014. @article{Palanica2014, Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgement tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgements, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. |
Sebastian Pannasch; Jens R. Helmert; Bruce C. Hansen; M. Adam; Lester C. Loschky Commonalities and differences in eye movement behavior when exploring aerial and terrestrial scenes Journal Article In: Cartography from Pole to Pole, pp. 421–430, 2014. @article{Pannasch2014, Eye movements can provide fast and precise insights into ongoing mechanisms of attention and information processing. In free exploration of natural scenes, it has repeatedly been shown that fixation durations increase over time, while saccade amplitudes decrease. This gaze behavior has been explained as a shift from ambient (global) to focal (local) processing as a means to efficiently understand different environments. In the current study, we analyzed eye movement behavior during the inspection of terrestrial and aerial views of real-world scene images. Our results show that the ambient to focal strategy is preserved across both perspectives. However, there are several perspective-related differences: For aerial views, the first fixation duration is prolonged, showing immediate processing difficulties. Furthermore, fixation durations and saccade amplitudes are longer throughout the overall time of scene exploration, showing continued difficulties that affect both processing of information and image scanning strategies. The temporal and spatial scanning of aerial views is also less similar between observers than for terrestrial scenes, suggesting an inability to use normal scanning patterns. The observed differences in eye movement behavior when inspecting terrestrial and aerial views suggest an increased processing effort for visual information that deviates from our everyday experiences. |
Angelina Paolozza; Carmen Rasmussen; Jacqueline Pei; Ana Hanlon-Dearman; Sarah M. Nikkel; Gail Andrew; Audrey McFarlane; Dawa Samdup; James N. Reynolds Deficits in response inhibition correlate with oculomotor control in children with fetal alcohol spectrum disorder and prenatal alcohol exposure Journal Article In: Behavioural Brain Research, vol. 259, pp. 97–105, 2014. @article{Paolozza2014, Children with fetal alcohol spectrum disorder (FASD) or prenatal alcohol exposure (PAE) frequently exhibit impairment on tasks measuring inhibition. The objective of this study was to determine if a performance-based relationship exists between psychometric tests and eye movement tasks in children with FASD. Participants for this dataset were aged 5-17 years and included those diagnosed with an FASD (n= 72), those with PAE but no clinical FASD diagnosis (n= 21), and typically developing controls (n= 139). Participants completed a neurobehavioral test battery, which included the NEPSY-II subtests of auditory attention, response set, and inhibition. Each participant completed a series of saccadic eye movement tasks, which included the antisaccade and memory-guided tasks. Both the FASD and the PAE groups performed worse than controls on the subtest measures of attention and inhibition. Compared with controls, the FASD group made more errors on the antisaccade and memory-guided tasks. Among the combined FASD/PAE group, inhibition and switching errors were negatively correlated with direction errors on the antisaccade task but not on the memory-guided task. There were no significant correlations in the control group. These data suggest that response inhibition deficits in children with FASD/PAE are associated with difficulty controlling saccadic eye movements, which may point to overlapping brain regions damaged by prenatal alcohol exposure. 
The results of this study demonstrate that eye movement control tasks directly relate to outcome measures obtained with psychometric tests that are used during FASD diagnosis, and may therefore help with early identification of children who would benefit from a multidisciplinary diagnostic assessment. |
Angela M. Pazzaglia; Adrian Staub; Caren M. Rotello Encoding time and the mirror effect in recognition memory: Evidence from eyetracking Journal Article In: Journal of Memory and Language, vol. 75, pp. 77–92, 2014. @article{Pazzaglia2014, Low-frequency (LF) words have higher hit rates and lower false alarm rates than high-frequency (HF) words in recognition memory, a phenomenon termed the mirror effect. Visual word recognition latencies are longer for LF words. We examined the relationship between eye fixation durations during study and later recognition memory for individual words to test whether (1) increased fixation time on a word is associated with better memory, and (2) increased fixation times on LF words can account for their hit rate advantage. In Experiments 1 and 2, words of various frequencies were presented in lists in an intentional study design. In Experiment 3, HF and LF critical words were presented in matched sentence frames in an incidental study design. In all cases, the standard frequency effect on eye movements emerged, with longer reading times for lower frequency words. At test, studied words and new words from each frequency class were presented. The hit rate portion of the mirror effect was evident in all experiments. The time spent fixating a word did predict memory performance in the intentional encoding experiments, but critically, the frequency effect on hit rates was independent of this effect. Time spent fixating a word during incidental encoding did not predict later memory performance. These results suggest that the hit rate advantage for LF words is not due to the additional time spent on these words at encoding, which is consistent with retrieval-stage models of the mirror effect. |
Benjamin Pearson; Julius Raskevicius; Paul M. Bays; Yoni Pertzov; Masud Husain Working memory retrieval as a decision process Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–15, 2014. @article{Pearson2014, Working memory (WM) is a core cognitive process fundamental to human behavior, yet the mechanisms underlying it remain highly controversial. Here we provide a new framework for understanding retrieval of information from WM, conceptualizing it as a decision based on the quality of internal evidence. Recent findings have demonstrated that precision of WM decreases with memory load. If WM retrieval uses a decision process that depends on memory quality, systematic changes in response time distribution should occur as a function of WM precision. We asked participants to view sample arrays and, after a delay, report the direction of change in location or orientation of a probe. As WM precision deteriorated with increasing memory load, retrieval time increased systematically. Crucially, the shape of reaction time distributions was consistent with a linear accumulator decision process. Varying either task relevance of items or maintenance duration influenced memory precision, with corresponding shifts in retrieval time. These results provide strong support for a decision-making account of WM retrieval based on noisy storage of items. Furthermore, they show that encoding, maintenance, and retrieval in WM need not be considered as separate processes, but may instead be conceptually unified as operations on the same noise-limited neural representation. |
Florian Perdreau; Patrick Cavanagh Drawing skill is related to the efficiency of encoding object structure Journal Article In: i-Perception, vol. 5, no. 2, pp. 101–119, 2014. @article{Perdreau2014, Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. First, we assessed drawing accuracy by computing the geometrical dissimilarity of their drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing for object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details. |
Effie J. Pereira; Monica S. Castelhano Peripheral guidance in scenes: The interaction of scene context and object content Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 5, pp. 2056–2072, 2014. @article{Pereira2014, In the present study, we examined how gaze guidance is affected by immediately available information in the periphery and investigated how search strategies differed across manipulations in the availability of scene context and object content information. Across 3 experiments, participants performed a visual search task in scenes while using a gaze-contingent moving-window paradigm. Extrafoveal information was manipulated across conditions to examine the contributions of object content, scene context, or some combination of the two. Experiment 1 demonstrated a possible interaction between scene context and object content information in improving guidance. Experiments 2 and 3 supported the notion that object content is selected for further scrutiny based on its position within scene context. These results suggest a prioritization of object information based on scene context, such that contextual information acts as a framework in the selection of relevant regions, and object information can then affect which specific locations in those regions are selected for further examination. |
Carolyn J. Perry; Abdullah Tahiri; Mazyar Fallah Feature integration within and across visual streams occurs at different visual processing stages Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–8, 2014. @article{Perry2014, Direction repulsion is a perceptual illusion in which the directions of two superimposed surfaces are repulsed away from the real directions of motion. The repulsion is reduced when the surfaces differ in dorsal stream features such as speed. We have previously shown that segmenting the surfaces by color, a ventral stream feature, did not affect repulsion but instead reduced the time needed to process both surfaces. The current study investigated whether segmenting two superimposed surfaces by a feature coprocessed with direction in the dorsal stream (i.e., speed) would also reduce processing time. We found that increasing the speed of one or both surfaces reduced direction repulsion. Since color segmentation does not affect direction repulsion, these results suggest that motion processing integrates speed and direction prior to forming an object representation that includes ventral stream features such as color. Like our previous results for differences in color, differences in speed also decreased processing time. Therefore, the reduction in processing time derives from a later processing stage where both ventral and dorsal features bound into the object representations can reduce the time needed for decision making when those features differentiate the superimposed surfaces from each other. |
Yoni Pertzov; Masud Husain The privileged role of location in visual working memory Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 7, pp. 1914–1924, 2014. @article{Pertzov2014, Reports have conflicted about the possible special role of location in visual working memory (WM). One important question is: Do we maintain the locations of objects in WM even when they are irrelevant to the task at hand? Here we used a continuous response scale to study the types of reporting errors that participants make when objects are presented at the same or at different locations in space. When several objects successively shared the same location, participants exhibited a higher tendency to report features of the wrong object in memory; that is, they responded with features that belonged to objects retained in memory but not probed at retrieval. On the other hand, a similar effect was not observed when objects shared a nonspatial feature, such as color. Furthermore, the effect of location on reporting errors was present even when its manipulation was orthogonal to the task at hand. These findings are consistent with the view that binding together different nonspatial features of an object in memory might be mediated through an object's location. Hence, spatial location may have a privileged role in WM. The relevance of these findings to conceptual models, as well as to neural accounts of visual WM, is discussed. |
Matthew F. Peterson; Miguel P. Eckstein Learning optimal eye movements to unusual faces Journal Article In: Vision Research, vol. 99, pp. 57–68, 2014. @article{Peterson2014, Eye movements, which guide the fovea's high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer's default face identification eye movement behavior to the new optimal fixation point and the observer's peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. |
Katrin Preckel; Karlijn Massar Imprinting effects on visual attention to faces and judgments of attractiveness Journal Article In: EvoS Journal, vol. 6, no. 2, pp. 1–16, 2014. @article{Preckel2014, Previous studies have shown that human mate-choice can be influenced by exposure to opposite-sex parent characteristics. In this study we examined whether there are sexual-imprinting effects of fathers on their daughter's partner-choice. To this end our participants were asked to bring a picture of their father to the laboratory, and next an eye-tracker was used to determine participants' gaze directions while they were judging male faces for attractiveness. Participants were single, female undergraduates (n = 50, M age = 22 |
Katrin H. Preller; Marcus Herdener; Leonhard Schilbach; Philipp Stämpfli; Lea M. Hulka; Matthias Vonmoos; Nina Ingold; Kai Vogeley; Philippe N. Tobler; Erich Seifritz; Boris B. Quednow Functional changes of the reward system underlie blunted response to social gaze in cocaine users Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 7, pp. 2842–2847, 2014. @article{Preller2014, Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants' gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. 
In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. |
Heinz-Werner Priess; Nils Heise; Florian Fischmeister; Sabine Born; Herbert Bauer; Ulrich Ansorge Attentional capture and inhibition of saccades after irrelevant and relevant cues Journal Article In: Journal of Ophthalmology, pp. 1–12, 2014. @article{Priess2014, Attentional capture is usually stronger for task-relevant than irrelevant stimuli, whereas irrelevant stimuli can trigger equal or even stronger amounts of inhibition than relevant stimuli. Capture and inhibition, however, are typically assessed in separate trials, leaving it open whether or not inhibition of irrelevant stimuli is a consequence of preceding attentional capture by the same stimuli or whether inhibition is the only response to these stimuli. Here, we tested the relationship between capture and inhibition in a setup allowing for estimates of the capture and inhibition based on the very same trials. We recorded saccadic inhibition after relevant and irrelevant stimuli. At the same time, we recorded the N2pc, an event-related potential reflecting initial capture of attention. We found attentional capture not only for relevant but, importantly, also for irrelevant stimuli, although the N2pc was stronger for relevant than irrelevant stimuli. In addition, inhibition of saccades was the same for relevant and irrelevant stimuli. We conclude with a discussion of the mechanisms that are responsible for these effects. |
Anis Rahman; Denis Pellerin; Dominique Houzet Influence of number, location and size of faces on gaze in video Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–11, 2014. @article{Rahman2014, Many studies have reported the preference for faces and influence of faces on gaze, most of them in static images and a few in videos. In this paper, we study the influence of faces in complex free-viewing videos, with respect to the effects of number, location and size of the faces. This knowledge could be used to enrich a face pathway in a visual saliency model. We used eye fixation data from an eye movement experiment, hand-labeled all the faces in the videos watched, and compared the labeled face regions against the eye fixations. We observed that fixations made are in proximity to, or inside the face regions. We found that 50% of the fixations landed directly on face regions that occupy less than 10% of the entire visual scene. Moreover, the fixation duration on videos with face is longer than without face, and longer than fixation duration on static images with faces. Finally, we analyzed the three influencing factors (Eccentricity, Area, Closeness) with linear regression models. For one face, the E+A combined model is slightly better than the E model and better than the A model. For two faces, the three variables (E, A, C) are tightly coupled and the E+A+C model had the highest score. |
Brandon C. W. Ralph; Paul Seli; Vivian O. Y. Cheng; Grayden J. F. Solman; Daniel Smilek Running the figure to the ground: Figure-ground segmentation during visual search Journal Article In: Vision Research, vol. 97, pp. 65–73, 2014. @article{Ralph2014, We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. |
James Rankin; Andrew Isaac Meso; Guillaume S. Masson; O. Faugeras; Pierre Kornprobst Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration Journal Article In: Journal of Computational Neuroscience, vol. 36, no. 2, pp. 193–213, 2014. @article{Rankin2014, Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model were then compared to human perception investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. In order to do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes. The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role. |
Hai Lin; Joshua D. Rizak; Yuan-ye Ma; Shang-chuan Yang; Lin Chen; Xin-tian Hu Face recognition increases during saccade preparation Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e93112, 2014. @article{Lin2014, Face perception is integral to the human perceptual system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages, similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition. |
Angelika Lingnau; Thorsten Albrecht; Jens Schwarzbach; Dirk Vorberg Visual search without central vision - no single pseudofovea location is best Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–14, 2014. @article{Lingnau2014, We typically fixate targets such that they are projected onto the fovea for best spatial resolution. Macular degeneration patients often develop fixation strategies such that targets are projected to an intact eccentric part of the retina, called pseudofovea. A longstanding debate concerns which pseudofovea location is optimal for non-foveal vision. We examined how pseudofovea position and eccentricity affect performance in visual search, when vision is restricted to an off-foveal retinal region by a gaze-contingent display that dynamically blurs the stimulus except within a small viewing window (forced field location). Trained normally sighted participants were more accurate when forced field location was congruent with the required scan path direction; this contradicts the view that a single pseudofovea location is generally best. Rather, performance depends on the congruence between pseudofovea location and scan path direction. |
Christina Liossi; Daniel E. Schoth; Hayward J. Godwin; Simon P. Liversedge Using eye movements to investigate selective attention in chronic daily headache Journal Article In: Pain, vol. 155, no. 3, pp. 503–510, 2014. @article{Liossi2014, Previous research has demonstrated that chronic pain is associated with biased processing of pain-related information. Most studies have examined this bias by measuring response latencies. The present study extended previous work by recording eye movement behaviour in individuals with chronic headache and in healthy controls while participants viewed a set of images (ie, facial expressions) from 4 emotion categories (pain, angry, happy, neutral). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze on the picture that was initially fixated, and the mean number of visits, and mean fixation duration per image category. The eye movement behaviour of the participants in the chronic headache group was characterised by a bias in initial shift of orienting to pain. There was no evidence of individuals with chronic headache visiting more often, or spending significantly more time viewing, pain images compared to other images. Both participant groups showed a significantly greater bias to maintain gaze longer on happy images, relative to pain, angry, and neutral images. Results are consistent with a pain-related bias that operates in the orienting of attention on pain-related stimuli, and suggest that chronic pain participants' attentional biases for pain-related information are evident even when other emotional stimuli are present. Pain-related information-processing biases appear to be a robust feature of chronic pain and may have an important role in the maintenance of the disorder. |
Alexandra List; Lucica Iordanescu; Marcia Grabowecky; Satoru Suzuki Haptic guidance of overt visual attention Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2221–2228, 2014. @article{List2014, Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention. |
Shih-Yu Lo; Alex O. Holcombe How do we select multiple features? Transient costs for selecting two colors rather than one, persistent costs for color-location conjunctions Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 304–321, 2014. @article{Lo2014, In a previous study (Lo, Howard, & Holcombe, Vision Research 63:20-33, 2012), selecting two colors did not induce a performance cost, relative to selecting one color. For example, requiring possible report of both a green and a red target did not yield worse performance than when both targets were green. Yet a cost of selecting multiple colors was observed when selection needed to be contingent on both color and location. When selecting a red target to the left and a green target to the right, superimposing a green distractor to the left and a red distractor to the right impeded performance. Possibly, participants cannot confine attention to a color at a particular location. As a result, distractors that share the target colors disrupt attentional selection of the targets. The attempt to select the targets must then be repeated, which increases the likelihood that the trial terminates when selection is not effective, even for long trials. Consistent with this, here we find a persistent cost of selecting two colors when the conjunction of color and location is needed, but the cost is confined to short exposure durations when the observer just has to monitor red and green stimuli without the need to use the location information. These results suggest that selecting two colors is time-consuming but effective, whereas selection of simultaneous conjunctions is never entirely successful. |
Jella Pfeiffer; Martin Meißner; Eduard Brandstätter; René Riedl; Reinhold Decker; Franz Rothlauf On the influence of context-based complexity on information search patterns: An individual perspective Journal Article In: Journal of Neuroscience, Psychology, and Economics, vol. 7, no. 2, pp. 103–124, 2014. @article{Pfeiffer2014, Although context-based complexity measured as the similarity and conflict across alternatives is dependent on individual preference structures, existing studies investigating the influence of context-based complexity on information search patterns have largely ignored that context-based complexity is user- and preference-dependent. Addressing this research gap, this article elicits the individual preferences of decision makers by using the pairwise-comparison-based preference measurement (PCPM) technique and records individuals' search patterns using eye tracking. Our results show that an increased context-based complexity leads to an increase in information acquisition and the use of a more attribute-wise search pattern. Moreover, the information search pattern changes within a choice task as information is processed attribute-wise in earlier stages of the search process and alternative-wise in later ones. The fact that we do not find an interaction effect of context-based complexity and decision stages on the search patterns indicates that the influence of complexity on search patterns stays constant throughout the decision process and suggests that the more complex the choice task is, the later the switch from attribute-wise strategies to alternative-wise strategies will be. |
Alessandro Piras; Roberto Lobietti; Salvatore Squatrito Response time, visual search strategy, and anticipatory skills in volleyball players Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–10, 2014. @article{Piras2014, This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing offensive action were presented to participants, while their eye movements were recorded by a head-mounted, video-based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter's toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter's hand-ball contact to the button pressed by the participants. Experts were faster and more accurate in predicting the direction of the setting than novices, showing accurate predictions when they used a search strategy involving fewer fixations of longer duration, as well as spending less time fixating all display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task. |
Katja Poellmann; Holger Mitterer; James M. McQueen Use what you can: Storage, abstraction processes, and perceptual adjustments help listeners recognize reduced forms Journal Article In: Frontiers in Psychology, vol. 5, pp. 437, 2014. @article{Poellmann2014, Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type, and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, "book binder") and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, "ready"), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 and 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations. |
Arezoo Pooresmaeili; Thomas H. B. FitzGerald; Dominik R. Bach; Ulf Toelch; Florian Ostendorf; Raymond J. Dolan Cross-modal effects of value on perceptual acuity and stimulus encoding Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 42, pp. 15244–15249, 2014. @article{Pooresmaeili2014, Cross-modal interactions are very common in perception. An important feature of many perceptual stimuli is their reward-predicting properties, the utilization of which is essential for adaptive behavior. What is unknown is whether reward associations in one sensory modality influence perception of stimuli in another modality. Here we show that auditory stimuli with high-reward associations increase the sensitivity of visual perception, even when sounds and reward associations are both irrelevant for the visual task. This increased sensitivity correlates with a change in stimulus representation in the visual cortex, indexed by increased multivariate decoding accuracy in simultaneously acquired functional MRI data. Univariate analysis showed that reward associations modulated responses in regions associated with multisensory processing in which the strength of modulation was a better predictor of the magnitude of the behavioral effect than the modulation in classical reward regions. Our findings demonstrate a value-driven cross-modal interaction that affects perception and stimulus encoding, with a resemblance to well-described modulatory effects of attention. We suggest that multisensory processing areas may mediate the transfer of value signals across senses. |
Cai S. Longman; Aureliu Lavric; Cristian Munteanu; Stephen Monsell Attentional inertia and delayed orienting of spatial attention in task-switching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1580–1602, 2014. @article{Longman2014, Among the potential, but neglected, sources of task-switch costs is the need to reallocate attention to different attributes or objects. Even theorists who recognize the importance of attentional resetting in task-switching sometimes think it too efficient to result in significant behavioral costs. We examined the dynamics of spatial attention in a task-cuing paradigm using eye-tracking. Digits appeared simultaneously at 3 locations. A cue preceded this display by a variable interval, instructing the performance of 1 of 3 classification tasks (odd-even, low-high, inner-outer) each consistently associated with a location, so that task preparation could be tracked via fixation of the task-relevant location. Task-switching led to a delay in selecting the relevant location and a tendency to misallocate attention; the previously relevant location attracted attention much more than the other irrelevant location on switch trials, indicating "inertia" in attentional parameters rather than mere distractibility. These effects predicted reaction time switch costs within and over participants. The switch-induced delay was not confined to trials with slow/late orienting, but characteristic of most switch trials. The attentional pull of the previously relevant location was substantially reduced, but not eliminated, by extending the preparation interval to more than 1 sec, suggesting that attentional inertia contributes to the "residual" switch cost. A control condition, using identical displays but only 1 task, showed that these effects could not be attributed to the (small and transient) delays or inertia observed when the required orientation changed between trials in the absence of a task change. |
Lester C. Loschky; Ryan V. Ringer; Aaron P. Johnson; Adam M. Larson; Mark B. Neider; Arthur F. Kramer Blur detection is unaffected by cognitive load Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 522–547, 2014. @article{Loschky2014, Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, blur detection in real-world scene images is apparently unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task. |
Casimir J. H. Ludwig; J. Rhys Davies; Miguel P. Eckstein Foveal analysis and peripheral selection during active visual sampling Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 2, pp. E291–E299, 2014. @article{Ludwig2014, Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. |
Gang Luo; Tyler W. Garaas; Marc Pomplun Salient stimulus attracts focus of peri-saccadic mislocalization Journal Article In: Vision Research, vol. 100, pp. 93–98, 2014. @article{Luo2014, Visual localization during saccadic eye movements is prone to error. Flashes shortly before and after the onset of saccades are usually perceived to shift towards the saccade target, creating a "compression" pattern. Typically, the saccade landing point coincides with a salient saccade target. We investigated whether the mislocalization focus follows the actual saccade landing point or a salient stimulus. Subjects made saccades to either a target or a memorized location without target. In some conditions, another salient marker was presented between the initial fixation and the saccade landing point. The experiments were conducted on both black and picture backgrounds. The results show that: (a) when a saccade target or a marker (spatially separated from the saccade landing point) was present, the compression pattern of mislocalization was significantly stronger than in conditions without them, for both black and picture background conditions, and (b) the mislocalization focus tended towards the salient stimulus regardless of whether it was the saccade target or the marker. Our results suggest that a salient stimulus presented in the scene may have an attracting effect and therefore contribute to the non-uniformity of saccadic mislocalization of a probing flash. |
Christine Macare; Thomas Meindl; Igor Nenadic; Dan Rujescu; Ulrich Ettinger Preliminary findings on the heritability of the neural correlates of response inhibition Journal Article In: Biological Psychology, vol. 103, no. 1, pp. 19–23, 2014. @article{Macare2014, Imaging genetics examines genetic influences on brain structure and function. This preliminary study tested a fundamental assumption of that approach by estimating the heritability of the blood oxygen level dependent (BOLD) signal during antisaccades, a measure of response inhibition impaired in different psychiatric conditions. One hundred thirty-two healthy same-sex reared-together twins (90 monozygotic (MZ; 32 male) and 42 dizygotic (DZ; 24 male)) performed antisaccades in the laboratory. Of these, 96 twins (60 MZ, 28 male; 36 DZ, 22 male) subsequently underwent functional magnetic resonance imaging (fMRI) during antisaccades. Variation in antisaccade direction errors in the laboratory showed significant heritability (47%; 95% confidence interval (CI) 22-65). In fMRI, the contrast of antisaccades with prosaccades yielded BOLD signal in fronto-parietal-subcortical networks. Twin modelling provided tentative evidence of significant heritability (50%, 95% CI: 18-72) of BOLD in the left thalamus only. However, due to the limited power to detect heritability in this study, replications in larger samples are needed. |
Bart Machilsen; Johan Wagemans Both predictability and familiarity facilitate contour integration Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–15, 2014. @article{Machilsen2014, Research has shown that contour detection is impaired in the visual periphery for snake-shaped Gabor contours but not for circular and elliptical contours. This discrepancy in findings could be due to differences in intrinsic shape properties, including shape closure and curvature variation, as well as to differences in stimulus predictability and familiarity. In a detection task using only circular contours, the target shape is both more familiar and more predictable to the observer compared with a detection task in which a different snake-shaped contour is presented on each trial. In this study, we investigated the effects of stimulus familiarity and predictability on contour integration by manipulating and disentangling the familiarity and predictability of snake-like stimuli. We manipulated stimulus familiarity by extensively training observers with one particular snake shape. Predictability was varied by alternating trial blocks with only a single target shape and trial blocks with multiple target shapes. Our results show that both predictability and familiarity facilitated contour integration, which constitutes novel behavioral evidence for the adaptivity of the contour integration mechanism in humans. If familiarity or predictability facilitated contour integration in the periphery specifically, this could explain the discrepant findings obtained with snake contours as compared with circles or ellipses. However, we found that their facilitatory effects did not differ between central and peripheral vision and thus cannot explain that particular discrepancy in the literature. |
W. Joseph MacInnes; Amelia R. Hunt Attentional load interferes with target localization across saccades Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3737–3748, 2014. @article{MacInnes2014, The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization. |
Indra T. Mahayana; Chia-Lun Liu; Chi Fu Chang; Daisy L. Hung; Ovid J. L. Tzeng; Chi-Hung Juan; Neil G. Muggleton Far-space neglect in conjunction but not feature search following transcranial magnetic stimulation over right posterior parietal cortex Journal Article In: Journal of Neurophysiology, vol. 111, no. 4, pp. 705–714, 2014. @article{Mahayana2014, Near- and far-space coding in the human brain is a dynamic process. Areas in dorsal, as well as ventral visual association cortex, including right posterior parietal cortex (rPPC), right frontal eye field (rFEF), and right ventral occipital cortex (rVO), have been shown to be important in visuospatial processing, but the involvement of these areas when the information is in near or far space remains unclear. There is a need for investigations of these representations to help explain the pathophysiology of hemispatial neglect, and the role of near and far space is crucial to this. We used a conjunction visual search task using an elliptical array to investigate the effects of transcranial magnetic stimulation delivered over rFEF, rPPC, and rVO on the processing of targets in near and far space and at a range of horizontal eccentricities. As in previous studies, we found that rVO was involved in far-space search, and rFEF was involved regardless of the distance to the array. It was found that rPPC was involved in search only in far space, with a neglect-like effect when the target was located in the most eccentric locations. No effects were seen for any site for a feature search task. As the search arrays had higher predictability with respect to target location than is often the case, these data may form a basis for clarifying both the role of PPC in visual search and its contribution to neglect, as well as the importance of near and far space in these. |
Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex Simulated disparity and peripheral blur interact during binocular fusion Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–14, 2014. @article{Maiello2014, We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. |
George L. Malcolm; Antje Nuthmann; Philippe G. Schyns Beyond gist: Strategic and incremental information accumulation for scene categorization Journal Article In: Psychological Science, vol. 25, no. 5, pp. 1087–1097, 2014. @article{Malcolm2014, Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing. |
Jonathan T. Mall; Candice C. Morey; Michael J. Wolff; Franziska Lehnert Visual selective attention is equally functional for individuals with low and high working memory capacity: Evidence from accuracy and eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 7, pp. 1998–2014, 2014. @article{Mall2014, Selective attention and working memory capacity (WMC) are related constructs, but debate about the manner in which they are related remains active. One elegant explanation of variance in WMC is that the efficiency of filtering irrelevant information is the crucial determining factor, rather than differences in capacity per se. We examined this hypothesis by relating WMC (as measured by complex span tasks) to accuracy and eye movements during visual change detection tasks with different degrees of attentional filtering and allocation requirements. Our results did not indicate strong filtering differences between high- and low-WMC groups, and where differences were observed, they were counter to those predicted by the strongest attentional filtering hypothesis. Bayes factors indicated evidence favoring positive or null relationships between WMC and correct responses to unemphasized information, as well as between WMC and the time spent looking at unemphasized information. These findings are consistent with the hypothesis that individual differences in storage capacity, not only filtering efficiency, underlie individual differences in working memory. |
Klara Marečková; Jennifer S. Perrin; Irum Nawaz Khan; Claire Lawrence; Erin Dickie; Douglas A. McQuiggan; Tomáš Paus Hormonal contraceptives, menstrual cycle and brain response to faces Journal Article In: Social Cognitive and Affective Neuroscience, vol. 9, no. 2, pp. 191–200, 2014. @article{Mareckova2014, Both behavioral and neuroimaging evidence support a female advantage in the perception of human faces. Here we explored the possibility that this relationship may be partially mediated by female sex hormones by investigating the relationship between the brain's response to faces and the use of oral contraceptives, as well as the phase of the menstrual cycle. First, functional magnetic resonance images were acquired in 20 young women [10 freely cycling and 10 taking oral contraception (OC)] during two phases of their cycle: mid-cycle and menstruation. We found stronger neural responses to faces in the right fusiform face area (FFA) in women taking oral contraceptives (vs freely cycling women) and during mid-cycle (vs menstruation) in both groups. Mean blood oxygenation level-dependent response in both left and right FFA increased as a function of the duration of OC use. Next, this relationship between the use of OC and FFA response was replicated in an independent sample of 110 adolescent girls. Finally, in a parallel behavioral study carried out in another sample of women, we found no evidence of differences in the pattern of eye movements while viewing faces between freely cycling women vs those taking oral contraceptives. The imaging findings might indicate enhanced processing of social cues in women taking OC and women during mid-cycle. |
Sebastiaan Mathot; Edwin S. Dalmaijer; Jonathan Grainger; Stefan Van der Stigchel The pupillary light response reflects exogenous attention and inhibition of return Journal Article In: Journal of Vision, vol. 14, no. 14, pp. 1–9, 2014. @article{Mathot2014, Here we show that the pupillary light response reflects exogenous (involuntary) shifts of attention and inhibition of return. Participants fixated in the center of a display that was divided into a bright and a dark half. An exogenous cue attracted attention to the bright or dark side of the display. Initially, the pupil constricted when the bright, as compared to the dark, side of the display was cued, reflecting a shift of attention toward the exogenous cue. Crucially, this pattern reversed about 1 s after cue presentation. This later-occurring, relative dilation (when the bright side was cued) reflected disengagement from the previously attended location, analogous to the behavioral phenomenon of inhibition of return. Indeed, we observed a reliable correlation between "pupillary inhibition" and behavioral inhibition of return. Our results support the view that inhibition of return results from habituation to (or short-term depression of) visual input. We conclude that the pupillary light response is a complex eye movement that reflects how we selectively parse and interpret visual input. |
Shunichi Matsuda; Hideyuki Matsumoto; Toshiaki Furubayashi; Hideki Fukuda; Masaki Emoto; Ritsuko Hanajima; Shoji Tsuji; Yoshikazu Ugawa; Yasuo Terao Top-down but not bottom-up visual scanning is affected in hereditary pure cerebellar ataxia Journal Article In: PLoS ONE, vol. 9, no. 12, pp. e116181, 2014. @article{Matsuda2014, The aim of this study was to clarify the nature of visual processing deficits caused by cerebellar disorders. We studied the performance of two types of visual search (top-down visual scanning and bottom-up visual scanning) in 18 patients with pure cerebellar types of spinocerebellar degeneration (SCA6: 11; SCA31: 7). The gaze fixation position was recorded with an eye-tracking device while the subjects performed two visual search tasks in which they looked for a target Landolt figure among distractors. In the serial search task, the target was similar to the distractors and the subject had to search for the target by processing each item with top-down visual scanning. In the pop-out search task, the target and distractor were clearly discernible and the visual salience of the target allowed the subjects to detect it by bottom-up visual scanning. The saliency maps clearly showed that the serial search task required top-down visual attention and the pop-out search task required bottom-up visual attention. In the serial search task, the search time to detect the target was significantly longer in SCA patients than in normal subjects, whereas the search time in the pop-out search task was comparable between the two groups. These findings suggested that SCA patients cannot efficiently scan a target using a top-down attentional process, whereas scanning with a bottom-up attentional process is not affected. In the serial search task, the amplitude of saccades was significantly smaller in SCA patients than in normal subjects. 
The variability of saccade amplitude (saccadic dysmetria), number of re-fixations, and unstable fixation (nystagmus) were greater in SCA patients than in normal subjects, accounting for a substantial proportion of scattered fixations around the items. Saccadic dysmetria, re-fixation, and nystagmus may play important roles in the impaired top-down visual scanning in SCA, hampering precise visual processing of individual items. |
Justin T. Maxfield; Westri D. Stalder; Gregory J. Zelinsky Effects of target typicality on categorical search Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–11, 2014. @article{Maxfield2014, The role of target typicality in a categorical visual search task was investigated by cueing observers with a target name, followed by a five-item target present/absent search array in which the target images were rated in a pretest to be high, medium, or low in typicality with respect to the basic-level target cue. Contrary to previous work, we found that search guidance was better for high-typicality targets compared to low-typicality targets, as measured by both the proportion of immediate target fixations and the time to fixate the target. Consistent with previous work, we also found an effect of typicality on target verification times, the time between target fixation and the search judgment; as target typicality decreased, verification times increased. To model these typicality effects, we trained Support Vector Machine (SVM) classifiers on the target categories, and tested these on the corresponding specific targets used in the search task. This analysis revealed significant differences in classifier confidence between the high-, medium-, and low-typicality groups, paralleling the behavioral results. Collectively, these findings suggest that target typicality broadly affects both search guidance and verification, and that differences in typicality can be predicted by distance from an SVM classification boundary. |
Olivia M. Maynard; Angela Attwood; Laura O'Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R. Munafò Avoidance of cigarette pack health warnings among regular cigarette smokers Journal Article In: Drug and Alcohol Dependence, vol. 136, no. 1, pp. 170–174, 2014. @article{Maynard2014, Background: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology. Method: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types were recorded. Results: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location. Conclusion: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. 
These findings have implications for cigarette packaging and health warning policy. |
2013 |
Matthew F. Asher; David J. Tolhurst; Tom Troscianko; Iain D. Gilchrist Regional effects of clutter on human target detection performance Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 25–25, 2013. @article{Asher2013, Clutter is something that is encountered in everyday life, from a messy desk to a crowded street. Such clutter may interfere with our ability to search for objects in such environments, like our car keys or the person we are trying to meet. A number of computational models of clutter have been proposed and shown to work well for artificial and other simplified scene search tasks. In this paper, we correlate the performance of different models of visual clutter with human performance in a visual search task using natural scenes. The models we evaluate are Feature Congestion (Rosenholtz, Li, & Nakano, 2007), Sub-band Entropy (Rosenholtz et al., 2007), Segmentation (Bravo & Farid, 2008), and Edge Density (Mack & Oliva, 2004) measures. The correlations were performed across a range of target-centered subregions to produce a correlation profile, indicating the scale at which clutter was affecting search performance. Overall clutter was rather weakly correlated with performance (r ≈ 0.2). However, different measures of clutter appear to reflect different aspects of the search task: correlations with Feature Congestion are greatest for the actual target patch, whereas the Sub-band Entropy is most highly correlated in a region 12° × 12° centered on the target. |
Louise Marshall; Paul M. Bays Obligatory encoding of task-irrelevant features depletes working memory resources Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 21–21, 2013. @article{Marshall2013, Selective attention is often considered the "gateway" to visual working memory (VWM). However, the extent to which we can voluntarily control which of an object's features enter memory remains subject to debate. Recent research has converged on the concept of VWM as a limited commodity distributed between elements of a visual scene. Consequently, as memory load increases, the fidelity with which each visual feature is stored decreases. Here we used changes in recall precision to probe whether task-irrelevant features were encoded into VWM when individuals were asked to store specific feature dimensions. Recall precision for both color and orientation was significantly enhanced when task-irrelevant features were removed, but knowledge of which features would be probed provided no advantage over having to memorize both features of all items. Next, we assessed the effect an interpolated orientation- or color-matching task had on the resolution with which orientations in a memory array were stored. We found that the presence of orientation information in the second array disrupted memory of the first array. The cost to recall precision was identical whether the interfering features had to be remembered, attended to, or could be ignored. Therefore, it appears that storing, or merely attending to, one feature of an object is sufficient to promote automatic encoding of all its features, depleting VWM resources. However, the precision cost was abolished when the match task preceded the memory array. So, while encoding is automatic, maintenance is voluntary, allowing resources to be reallocated to store new visual information. |
Ryan T. Maloney; Tamara L. Watson; Colin W. G. Clifford Human cortical and behavioral sensitivity to patterns of complex motion at eccentricity Journal Article In: Journal of Neurophysiology, vol. 110, no. 11, pp. 2545–2556, 2013. @article{Maloney2013, Complex patterns of image motion (contracting, expanding, rotating, and spiraling fields) are important in the coordination of visually guided behaviors. Whereas specialized detectors in monkey visual cortex show selectivity for particular patterns of complex motion, their representation in human visual cortex remains unclear. In the present study, functional magnetic resonance imaging (fMRI) was used to investigate the sensitivity of functionally defined regions of human visual cortex to parametrically modulated complex motion trajectories, coupled with complementary psychophysical testing. A unique stimulus design made it possible to disambiguate the neural responses and psychophysical sensitivity to complex motions per se from the distribution of local motions relative to the fovea, which are known to enhance cortical activity when presented radial to fixation. This involved presenting several small, separate motion fields in the periphery in a manner that distinguished them from global optic flow patterns. The patterns were morphed through complex motion space in a systematic time-locked fashion when presented in the scanner. Anisotropies were observed in the fMRI signal, marked by an enhanced response to expanding vs. contracting fields, even in early visual cortex. Anisotropies in the psychophysical sensitivity measures followed a similar pattern that was correlated with activity in areas hV4, V5/MT, and MST. This represents the first systematic examination of complex motion perception at both a behavioral and neural level in human observers. 
The characteristic processing anisotropy revealed in both data sets can inform models of complex motion processing, particularly with respect to computations performed in early visual cortex. |
Sanjay G. Manohar; Masud Husain Attention as foraging for information and value Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 711, 2013. @article{Manohar2013, What is the purpose of attention? One avenue of research has led to the proposal that attention might be crucial for gathering information about the environment, while other lines of study have demonstrated how attention may play a role in guiding behavior to rewarded options. Many experiments that study attention require participants to make a decision based on information acquired discretely at one point in time. In real-world situations, however, we are usually not presented with information about which option to select in such a manner. Rather we must initially search for information, weighing up reward values of options before we commit to a decision. Here, we propose that attention plays a role in both foraging for information and foraging for value. When foraging for information, attention is guided toward the unknown. When foraging for reward, attention is guided toward high reward values, allowing decision-making to proceed by accept-or-reject decisions on the currently attended option. According to this account, attention can be regarded as a low-cost alternative to moving around and physically interacting with the environment ("teleforaging") before a decision is made to interact physically with the world. To track the time course of attention, we asked participants to seek out and acquire information about two gambles by directing their gaze, before choosing one of them. Participants often made multiple refixations on items before making a decision. Their eye movements revealed that early in the trial, attention was guided toward information, i.e., toward locations that reduced uncertainty about value. In contrast, late in the trial, attention was guided by expected value of the options. 
At the end of the decision period, participants were generally attending to the item they eventually chose. We suggest that attentional foraging shifts from an uncertainty-driven to a reward-driven mode during the evolution of a decision, permitting decisions to be made by an engage-or-search strategy. |
Sophie Marat; Anis Rahman; Denis Pellerin; Nathalie Guyader; Dominique Houzet Improving visual saliency by adding 'face feature map' and 'center bias' Journal Article In: Cognitive Computation, vol. 5, no. 1, pp. 63–75, 2013. @article{Marat2013, Faces play an important role in guiding visual attention, thus the inclusion of face detection into a classical visual attention model can improve eye movement predictions. In this study, we proposed a visual saliency model to predict eye movements during free viewing of videos. The model is inspired by the biology of the visual system, and breaks down each frame of a video database into three saliency maps, each earmarked for a particular visual feature. (i) A 'static' saliency map emphasizes regions that differ from their context in terms of luminance, orientation and spatial frequency. (ii) A 'dynamic' saliency map emphasizes moving regions with values proportional to motion amplitude. (iii) A 'face' saliency map emphasizes areas where a face is detected with a value proportional to the confidence of the detection. In parallel, a behavioral experiment was carried out to record eye movements of participants when viewing the videos. These eye movements were compared with the model's saliency maps to quantify their efficiency. We also examined the influence of center bias on the saliency maps, and incorporated it into the model in a suitable way. Finally, we proposed an efficient fusion method of all these saliency maps. Consequently, the fused master saliency map developed in this research is a good predictor of participants' eye positions. |
Jan Bernard C. Marsman; Remco Renken; Koen V. Haak; Frans W. Cornelissen Linking cortical visual processing to viewing behaviour using fMRI Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 109, 2013. @article{Marsman2013, One characteristic of natural visual behavior in humans is the frequent shifting of eye position. It has been argued that the characteristics of these eye movements can be used to distinguish between distinct modes of visual processing (Unema et al., 2005). These viewing modes would be distinguishable on the basis of the eye-movement parameters fixation duration and saccade amplitude and have been hypothesized to reflect the differential involvement of dorsal and ventral systems in saccade planning and information processing. According to this hypothesis, on the one hand, while in a "pre-attentive" or ambient mode, primarily scanning eye movements are made; in this mode fixations are relatively brief and saccades tend to be relatively large. On the other hand, in the "attentive" focal mode, fixations last longer and saccades are relatively small, resulting in viewing behavior that could be described as detailed inspection. Thus far, no neuroscientific basis exists to support the idea that such distinct viewing modes are indeed linked to processing in distinct cortical regions. Here, we used fixation-based event-related (FIBER) fMRI in combination with independent component analysis (ICA) to investigate the neural correlates of these viewing modes. While we find robust eye-movement-related activations, our results do not support the theory that the above-mentioned viewing modes modulate dorsal and ventral processing. Instead, further analyses revealed that eye-movement characteristics such as saccade amplitude and fixation duration differentially modulated activity in three clusters in early, ventromedial and ventrolateral visual cortex. 
In summary, we conclude that evaluating viewing behavior is crucial for unraveling cortical processing in natural vision. |
Tommaso Mastropasqua; Massimo Turatto Perceptual grouping enhances visual plasticity Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e53683, 2013. @article{Mastropasqua2013, Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. |
Sebastiaan Mathôt; Jan Theeuwes A reinvestigation of the reference frame of the tilt-adaptation aftereffect Journal Article In: Scientific Reports, vol. 3, pp. 1152, 2013. @article{Mathot2013, The tilt-adaptation aftereffect (TAE) is the phenomenon that prolonged perception of a tilted 'adapter' stimulus affects the perceived tilt of a subsequent 'tester' stimulus. Although it is clear that TAE is strongest when adapter and tester are presented at the same location, the reference frame of the effect is debated. Some authors have reported that TAE is spatiotopic (world centred): It occurs when adapter and tester are presented at the same display location, even when this corresponds to different retinal locations. Others have reported that TAE is exclusively retinotopic (eye centred): It occurs only when adapter and tester are presented at the same retinal location, even when this corresponds to different display locations. Because this issue is crucial for models of transsaccadic perception, we reinvestigated the reference frame of TAE. We report that TAE is exclusively retinotopic, supporting the notion that there is no transsaccadic integration of low-level visual information. |
Sebastiaan Mathôt; Lotje Linden; Jonathan Grainger; Françoise Vitu The pupillary light response reveals the focus of covert visual attention. Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e78168, 2013. @article{Mathot2013a, The pupillary light response is often assumed to be a reflex that is not susceptible to cognitive influences. In line with recent converging evidence, we show that this reflexive view is incomplete, and that the pupillary light response is modulated by covert visual attention: Covertly attending to a bright area causes a pupillary constriction, relative to attending to a dark area under identical visual input. This attention-related modulation of the pupillary light response predicts cuing effects in behavior, and can be used as an index of how strongly participants attend to a particular location. Therefore, we suggest that pupil size may offer a new way to continuously track the focus of covert visual attention, without requiring a manual response from the participant. The theoretical implication of this finding is that the pupillary light response is neither fully reflexive, nor under complete voluntary control, but is instead best characterized as a stereotyped response to a voluntarily selected target. In this sense, the pupillary light response is similar to saccadic and smooth pursuit eye movements. Together, eye movements and the pupillary light response maximize visual acuity, stabilize visual input, and selectively filter visual information as it enters the eye. |
Maria Matziridi; Eli Brenner; Jeroen B. J. Smeets In: PLoS ONE, vol. 8, no. 4, pp. e62436, 2013. @article{Matziridi2013, A stimulus that is flashed around the time of a saccade tends to be mislocalized in the direction of the saccade target. Our question is whether the mislocalization is related to the position of the saccade target within the image or to the gaze position at the end of the saccade. We separated the two with a visual illusion that influences the perceived distance to the target of the saccade and thus saccade endpoint without affecting the perceived position of the saccade target within the image. We asked participants to make horizontal saccades from the left to the right end of the shaft of a Müller-Lyer figure. Around the time of the saccade, we flashed a bar at one of five possible positions and asked participants to indicate its location by touching the screen. As expected, participants made shorter saccades along the fins-in (<->) configuration than along the fins-out (>-<) configuration of the figure. The illusion also influenced the mislocalization pattern during saccades, with flashes presented with the fins-out configuration being perceived beyond flashes presented with the fins-in configuration. The difference between the patterns of mislocalization for bars flashed during the saccade for the two configurations corresponded quantitatively with a prediction based on compression towards the saccade endpoint considering the magnitude of the effect of the illusion on saccade amplitude. We conclude that mislocalization is related to the eye position at the end of the saccade, rather than to the position of the saccade target within the image. |
Ashleigh M. Maxcey-Richard; Andrew Hollingworth The strategic retention of task-relevant objects in visual working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 3, pp. 760–772, 2013. @article{MaxceyRichard2013, The serial and spatially extended nature of many real-world visual tasks suggests the need for control over the content of visual working memory (VWM). We examined the management of VWM in a task that required participants to prioritize individual objects for retention during scene viewing. There were 5 principal findings: (a) Strategic retention of task-relevant objects was effective and was dissociable from the current locus of visual attention; (b) strategic retention was implemented by protection from interference rather than by preferential encoding; (c) this prioritization was flexibly transferred to a new object as task demands changed; (d) no-longer-relevant items were efficiently eliminated from VWM; and (e) despite this level of control, attended and fixated objects were consolidated into VWM regardless of task relevance. These results are consistent with a model of VWM control in which each fixated object is automatically encoded into VWM, replacing a portion of the content in VWM. However, task-relevant objects can be selectively protected from replacement. |
Clare M. Press; James M. Kilner The time course of eye movements during action observation reflects sequence learning Journal Article In: NeuroReport, vol. 24, no. 14, pp. 822–826, 2013. @article{Press2013, When we observe object-directed actions such as grasping, we make predictive eye movements. However, eye movements are reactive when observing similar actions without objects. This reactivity may reflect a lack of attribution of intention to observed actors when they perform actions without 'goals'. Alternatively, it may simply signal that there is no cue present that has been predictive of the subsequent trajectory in the observer's experience. To test this hypothesis, the present study investigated how the time course of eye movements changes as a function of visual experience of predictable, but arbitrary, actions without objects. Participants observed a point-light display of a model performing sequential finger actions in a serial reaction time task. Eye movements became less reactive across blocks. In addition, participants who exhibited more predictive eye movements subsequently demonstrated greater learning when required either to execute, or to recognize, the sequence. No measures were influenced by whether participants had been instructed that the observed movements were human or lever generated. The present data indicate that eye movements when observing actions without objects reflect the extent to which the trajectory can be predicted through experience. The findings are discussed with reference to the implications for the mechanisms supporting perception of actions both with and without objects as well as those mediating inanimate object processing. |
Tim J. Preston; Fei Guo; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Neural representations of contextual guidance in visual search of real-world scenes Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7846–7855, 2013. @article{Preston2013, Exploiting scene context and object–object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. |