EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2014 |
Julieanne Blum; Nicholas S. C. Price Reflexive tracking eye movements and motion perception: One or two neural populations? Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–14, 2014. @article{Blum2014, Motion-sensitive neurons in the middle temporal (MT) and medial superior temporal (MST) areas perform the sensory analysis required for both motion perception and controlling smooth eye movements. The perceptual and oculomotor systems are characterized by high variability, even when responding to identical stimulus repetitions. If a single population of neurons performs the motion analysis driving perception and eye movements, errors in perception and action might show similar direction-dependent biases, or their variability might be correlated across trials. However, previous studies have produced conflicting reports of the presence of significant single-trial correlations between motion perception and the velocity of smooth pursuit, a volitional tracking eye movement. We studied ocular following, a reflexive tracking eye movement, simultaneously measuring eye movement direction and perceived direction of a moving random dot field. Oculomotor errors were largest for near-cardinal directions, providing the first evidence for cardinal repulsion in reflexive eye movements. Biases in perceptual and oculomotor errors were correlated across test directions, but not across single trials with the same direction. Based on the similar direction-dependent anisotropies in eye movements and perception, there is reason to believe that partially overlapping populations of sensory neurons underlie motion perception and oculomotor behaviors, with independent downstream sources of noise masking trial-by-trial correlations between perception and action. |
Sabrina Boll; Matthias Gamer 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 255, 2014. @article{Boll2014, Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes toward diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n = 39) and a high (n = 40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes toward the eyes and toward the mouth could be identified. We found that the low vs. the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation toward diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features. |
Ali Borji; Laurent Itti Defending Yarbus: Eye movements reveal observers' task Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–22, 2014. @article{Borji2014, In a very influential yet anecdotal illustration, Yarbus suggested that human eye-movement patterns are modulated top down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013; Iqbal & Bailey, 2004), Greene, Liu, and Wolfe (2012) argued against it by reporting a failure. In this study, we perform a more systematic investigation of this problem, probing a larger number of experimental factors than previously. Our main goal is to determine the informativeness of eye movements for task and mental state decoding. We perform two experiments. In the first experiment, we reanalyze the data from a previous study by Greene et al. (2012) and contrary to their conclusion, we report that it is possible to decode the observer's task from aggregate eye-movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.0722e-04). In the second experiment, we repeat and extend Yarbus's original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus's scene) under Yarbus's seven questions. We show that task decoding is possible, also moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.4535e-06). We thus conclude that Yarbus's idea is supported by our data and continues to be an inspiration for future computational and experimental eye-movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements. |
Ali Borji; Daniel Parks; Laurent Itti Complementary effects of gaze direction and early saliency in guiding fixations during free viewing Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–32, 2014. @article{Borji2014a, Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to face region. 
Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered in constructing more predictive visual attention models in the future. |
Myriam Chanceaux; Anne Guérin-Dugué; Benoît Lemaire; Thierry Baccino A computational cognitive model of information search in textual materials Journal Article In: Cognitive Computation, vol. 6, no. 1, pp. 1–17, 2014. @article{Chanceaux2014a, Document foraging for information is a crucial and increasingly prevalent activity nowadays. We designed a computational cognitive model to simulate the oculomotor scanpath of an average web user searching for specific information from textual materials. In particular, the developed model dynamically combines visual, semantic, and memory processes to predict the user's focus of attention during information seeking from paragraphs of text. A series of psychological experiments was conducted using eye-tracking techniques in order to validate and refine the proposed model. Comparisons between model simulations and human data are reported and discussed taking into account the strengths and shortcomings of the model. The proposed model provides a unique contribution to the investigation of the cognitive processes involved during information search and bears significant implications for web page design and evaluation. |
Samuel W. Cheadle; Semir Zeki The role of parietal cortex in the formation of color and motion based concepts Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 535, 2014. @article{Cheadle2014, Imaging evidence shows that separate subdivisions of parietal cortex, in and around the intraparietal sulcus (IPS), are engaged when stimuli are grouped according to color and to motion (Zeki and Stutters, 2013). Since grouping is an essential step in the formation of concepts, we wanted to learn whether parietal cortex is also engaged in the formation of concepts according to these two attributes. Using functional magnetic resonance imaging (fMRI), and choosing the recognition of concept-based color or motion stimuli as our paradigm, we found that there was strong concept-related activity in and around the IPS, a region whose homolog in the macaque monkey is known to receive direct but segregated anatomical inputs from V4 and V5. Parietal activity related to color concepts was juxtaposed but did not overlap with activity related to motion concepts, thus emphasizing the continuation of the segregation of color and motion into the conceptual system. Concurrent retinotopic mapping experiments showed that within the parietal cortex, concept-related activity increases within later stage IPS areas. |
Samuel Cheadle; Valentin Wyart; Konstantinos Tsetsos; Nicholas E. Myers; Vincent DeGardelle; Santiago Herce Castañón; Christopher Summerfield Adaptive gain control during human perceptual choice Journal Article In: Neuron, vol. 81, no. 6, pp. 1429–1441, 2014. @article{Cheadle2014a, Neural systems adapt to background levels of stimulation. Adaptive gain control has been extensively studied in sensory systems but overlooked in decision-theoretic models. Here, we describe evidence for adaptive gain control during the serial integration of decision-relevant information. Human observers judged the average information provided by a rapid stream of visual events (samples). The impact that each sample wielded over choices depended on its consistency with the previous sample, with more consistent or expected samples wielding the greatest influence over choice. This bias was also visible in the encoding of decision information in pupillometric signals and in cortical responses measured with functional neuroimaging. These data can be accounted for with a serial sampling model in which the gain of information processing adapts rapidly to reflect the average of the available evidence. |
Jiaqing Chen; Matthias Niemeier Do head-on-trunk signals modulate disengagement of spatial attention? Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 147–157, 2014. @article{Chen2014, Body schema is indispensable for sensorimotor control and learning, but whether it is associated with cognitive functions, such as allocation of spatial attention, remains unclear. Observations in patients with unilateral spatial neglect support this view, yet data from neurologically normal participants are inconsistent. Here, we investigated the influence of head-on-trunk positions (30° left or right, straight ahead) on disengagement of attention in healthy participants. Five experiments examined the effects of valid or invalid cues on spatial shifts of attention using the Posner paradigm. Experiment 1 used a forced-choice task. Participants quickly reported the location of a target that appeared left or right of the fixation point, preceded by a cue on the same (valid) or opposite side (invalid). Experiments 2, 3, and 4 also used valid and invalid cues but required participants to simply detect a target appearing on the left or right side. Experiment 5 used a speeded discrimination task, in which participants quickly reported the orientation of a Gabor. We observed expected influences of validity and stimulus onset asynchrony as well as inhibition of return; however, none of the experiments suggested that head-on-trunk position created or changed visual field advantages, contrary to earlier reports. Our results showed that the manipulations of the body schema did not modulate attentional processes in the healthy brain, unlike neuropsychological studies on neglect patients. Our findings suggest that spatial neglect reflects a state of the lesioned brain that is importantly different from that of the normally functioning brain. |
Sheng-Chang Chen; Hsiao-Ching She; Ming-Hua Chuang; Jiun-Yu Wu; Jie-Li Tsai; Tzyy-Ping Jung Eye movements predict students' computer-based assessment performance of physics concepts in different presentation modalities Journal Article In: Computers and Education, vol. 74, pp. 61–72, 2014. @article{Chen2014a, Despite decades of studies on the link between eye movements and human cognitive processes, the exact nature of the link between eye movements and computer-based assessment performance still remains unknown. To bridge this gap, the present study investigates whether human eye movement dynamics can predict computer-based assessment performance (accuracy of response) in different presentation modalities (picture vs. text). An eye-tracking system was employed to collect 63 college students' eye movement behaviors while they engaged with computer-based physics concept questions presented as either pictures or text. Students' responses were collected immediately after the picture or text presentations in order to determine the accuracy of responses. The results demonstrated that students' eye movement behavior can successfully predict their computer-based assessment performance. Remarkably, the mean fixation duration has the greatest power to predict the likelihood of responding correctly to the physics concept questions, followed by the proportion of re-reading time. Additionally, the mean saccade distance has the least (and negative) power to predict the likelihood of responding to the physics concepts correctly in the picture presentation. Interestingly, pictorial presentations appear to convey physics concepts more quickly and efficiently than do textual presentations. This study adds empirical evidence of a prediction model between eye movement behaviors and successful cognitive performance. Moreover, it provides insight into the modality effects on students' computer-based assessment performance through the use of eye movement behavior evidence. |
Xiaorong Cheng; Qi Yang; Yaqian Han; Xianfeng Ding; Zhao Fan Capacity limit of simultaneous temporal processing: How many concurrent 'clocks' in vision? Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e91797, 2014. @article{Cheng2014, A fundamental ability for humans is to monitor and process multiple temporal events that occur at different spatial locations simultaneously. A great number of studies have demonstrated simultaneous temporal processing (STP) in human and animal participants, i.e., multiple 'clocks' rather than a single 'clock'. However, to date, we still have no knowledge about the exact limitation of the STP in vision. Here we provide the first experimental measurement of this critical parameter in human vision by using two novel and complementary paradigms. The first paradigm combines merits of a temporal oddball-detection task and a capacity measurement widely used in the studies of visual working memory to quantify the capacity of STP (CSTP). The second paradigm uses a two-interval temporal comparison task with various encoded spatial locations involved in the standard temporal intervals to rule out an alternative, 'object individuation'-based, account of CSTP, which is measured by the first paradigm. Our results of both paradigms indicate consistently that the capacity limit of simultaneous temporal processing in vision is around 3 to 4 spatial locations. Moreover, the binding of the 'local clock' and its specific location is undermined by bottom-up competition of spatial attention, indicating that the time-space binding is resource-consuming. Our finding that the capacity of STP is not constrained by the capacity of visual working memory (VWM) supports the idea that the representations of STP are likely stored and operated in units different from those of VWM. The second paradigm confirms further that the limited number of location-bound 'local clocks' are activated and maintained during a time window of several hundred milliseconds. |
Kimberly S. Chiew; Todd S. Braver Dissociable influences of reward motivation and positive emotion on cognitive control Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 14, no. 2, pp. 509–529, 2014. @article{Chiew2014, It is becoming increasingly appreciated that affective and/or motivational influences contribute strongly to goal-oriented cognition and behavior. An unresolved question is whether emotional manipulations (i.e., direct induction of affectively valenced subjective experience) and motivational manipulations (e.g., delivery of performance-contingent rewards and punishments) have similar or distinct effects on cognitive control. Prior work has suggested that reward motivation can reliably enhance a proactive mode of cognitive control, whereas other evidence is suggestive that positive emotion improves cognitive flexibility, but reduces proactive control. However, a limitation of the prior research is that reward motivation and positive emotion have largely been studied independently. Here, we directly compared the effects of positive emotion and reward motivation on cognitive control with a tightly matched, within-subjects design, using the AX-continuous performance task paradigm, which allows for relative measurement of proactive versus reactive cognitive control. High-resolution pupillometry was employed as a secondary measure of cognitive dynamics during task performance. Robust increases in behavioral and pupillometric indices of proactive control were observed with reward motivation. The effects of positive emotion were much weaker, but if anything, also reflected enhancement of proactive control, a pattern that diverges from some prior findings. These results indicate that reward motivation has robust influences on cognitive control, while also highlighting the complexity and heterogeneity of positive-emotion effects. The findings are discussed in terms of potential neurobiological mechanisms. |
Joseph D. Chisholm; Alan Kingstone Knowing and avoiding: The influence of distractor awareness on oculomotor capture Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 5, pp. 1258–1264, 2014. @article{Chisholm2014, Kramer, Hahn, Irwin, and Theeuwes (2000) reported that the interfering effect of distractors is reduced when participants are aware of the to-be-ignored information. In contrast, recent evidence indicates that distractor interference increases when individuals are aware of the distractors. In the present investigation, we directly assessed the influence of distractor awareness on oculomotor capture, with the hope of resolving this contradiction in the literature and gaining further insight into the influence of awareness on attention. Participants completed a traditional oculomotor capture task. They were not informed of the presence of the distracting information (unaware condition), were informed of distractors (aware condition), or were informed of distractor information and told to avoid attending to it (avoid condition). Being aware of the distractors yielded a performance benefit, relative to the unaware condition; however, this benefit was eliminated when participants were told to actively avoid distraction. This pattern of results reconciles past contradictions in the literature and suggests an inverted-U function of awareness in distractor performance. Too little or too much emphasis yields a performance decrement, but an intermediate level of emphasis provides a performance benefit. |
Kyoung Whan Choe; Randolph Blake; Sang-Hun Lee Dissociation between neural signatures of stimulus and choice in population activity of human V1 during perceptual decision-making Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2725–2743, 2014. @article{Choe2014, Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. |
Jennie E. S. Choi; Pavan A. Vaswani; Reza Shadmehr Vigor of movements and the cost of time in decision making Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1212–1223, 2014. @article{Choi2014, If we assume that the purpose of a movement is to acquire a rewarding state, the duration of the movement carries a cost because it delays acquisition of reward. For some people, passage of time carries a greater cost, as evidenced by how long they are willing to wait for a rewarding outcome. These steep discounters are considered impulsive. Is there a relationship between cost of time in decision making and cost of time in control of movements? Our theory predicts that people who are more impulsive should in general move faster than subjects who are less impulsive. To test our idea, we considered elementary voluntary movements: saccades of the eye. We found that in humans, saccadic vigor, assessed using velocity as a function of amplitude, was as much as 50% greater in one subject than another; that is, some people consistently moved their eyes with high vigor. We measured the cost of time in a decision-making task in which the same subjects were given a choice between smaller odds of success immediately and better odds if they waited. We measured how long they were willing to wait to obtain the better odds and how much they increased their wait period after they failed. We found that people that exhibited greater vigor in their movements tended to have a steep temporal discount function, as evidenced by their waiting patterns in the decision-making task. The cost of time may be shared between decision making and motor control. |
Mina Choi; Joel Wang; Wei Chung Cheng; Giovanni Ramponi; Luigi Albani; Aldo Badano Effect of veiling glare on detectability in high-dynamic-range medical images Journal Article In: IEEE/OSA Journal of Display Technology, vol. 10, no. 5, pp. 420–428, 2014. @article{Choi2014a, We describe a methodology for predicting the detectability of subtle targets in dark regions of high-dynamic-range (HDR) images in the presence of veiling glare in the human eye. The method relies on predictions of contrast detection thresholds for the human visual system within a HDR image based on psychophysics measurements and modeling of the HDR display device characteristics. We present experimental results used to construct the model and discuss an image-dependent empirical veiling glare model and the validation of the model predictions with test patterns, natural scenes, and medical images. The model predictions are compared to a previously reported model (HDR-VDP2) for predicting HDR image quality accounting for glare effects. |
Scott A. Reed; Paul Dassonville Adaptation to leftward-shifting prisms enhances local processing in healthy individuals Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 418–427, 2014. @article{Reed2014, In healthy individuals, adaptation to left-shifting prisms has been shown to simulate the symptoms of hemispatial neglect, including a reduction in global processing that approximates the local bias observed in neglect patients. The current study tested whether leftward prism adaptation can more specifically enhance local processing abilities. In three experiments, the impact of local and global processing was assessed through tasks that measure susceptibility to illusions that are known to be driven by local or global contextual effects. Susceptibility to the rod-and-frame illusion - an illusion disproportionately driven by both local and global effects depending on frame size - was measured before and after adaptation to left- and right-shifting prisms. A significant increase in rod-and-frame susceptibility was found for the left-shifting prism group, suggesting that adaptation caused an increase in local processing effects. The results of a second experiment confirmed that leftward prism adaptation enhances local processing, as assessed with susceptibility to the simultaneous-tilt illusion. A final experiment employed a more specific measure of the global effect typically associated with the rod-and-frame illusion, and found that although the global effect was somewhat diminished after leftward prism adaptation, the trend failed to reach significance (p=.078). Rightward prism adaptation had no significant effects on performance in any of the experiments. 
Combined, these findings indicate that leftward prism adaptation in healthy individuals can simulate the local processing bias of neglect patients primarily through an increased sensitivity to local visual cues, and confirm that prism adaptation not only modulates lateral shifts of attention, but also prompts shifts from one level of processing to another. |
Eyal M. Reingold Eye tracking research and technology: Towards objective measurement of data quality Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 635–652, 2014. @article{Reingold2014, Two methods for objectively measuring eye tracking data quality are explored. The first method works by tricking the eye tracker to detect an abrupt change in the gaze position of an artificial eye that in actuality does not move. Such a device, referred to as an artificial saccade generator, is shown to be extremely useful for measuring the temporal accuracy and precision of eye tracking systems and for validating the latency to display change in gaze contingent display paradigms. The second method involves an artificial pupil that is mounted on a computer controlled moving platform. This device is designed to be able to provide the eye tracker with motion sequences that closely resemble biological eye movements. The main advantage of using artificial motion for testing eye tracking data quality is the fact that the spatiotemporal signal is fully specified in a manner independent of the eye tracker that is being evaluated and that nearly identical motion sequence can be reproduced multiple times with great precision. The results of the present study demonstrate that the equipment described has the potential to become an important tool in the comprehensive evaluation of data quality. |
Eyal M. Reingold; Mackenzie G. Glaholt Cognitive control of fixation duration in visual search: The role of extrafoveal processing Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 610–634, 2014. @article{Reingold2014a, Participants' eye movements were monitored in two visual search experiments that manipulated target-distractor similarity (high vs. low) as well as the availability of distractors for extrafoveal processing (Free-Viewing vs. No-Preview). The influence of the target-distractor similarity by preview manipulation on the distributions of first fixation and second fixation duration was examined by using a survival analysis technique which provided precise estimates of the timing of the first discernible influence of target-distractor similarity on fixation duration. We found a significant influence of target-distractor similarity on first fixation duration in normal visual search (Free Viewing) as early as 26–28 ms from the start of fixation. In contrast, the influence of target-distractor similarity occurred much later (199–233 ms) in the No-Preview condition. The present study also documented robust and fast-acting extrafoveal and foveal preview effects. Implications for models of eye-movement control and visual search are discussed. |
Gabriel Reyes; Jérôme Sackur Introspection during visual search Journal Article In: Consciousness and Cognition, vol. 29, pp. 212–229, 2014. @article{Reyes2014, Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. |
Theo Rhodes; Christopher T. Kello; Bryan Kerster Intrinsic and extrinsic contributions to heavy tails in visual foraging Journal Article In: Visual Cognition, vol. 22, no. 6, pp. 809–842, 2014. @article{Rhodes2014, Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive to eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed. |
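As a toy illustration of one heavy-tail claim above (lognormally distributed saccade lengths): a lognormal can be fit in closed form by taking the mean and standard deviation of the log-transformed lengths. The sample below is synthetic, with arbitrary parameters rather than values from the paper.

```python
import numpy as np

def fit_lognormal(lengths):
    """Maximum-likelihood lognormal fit: mu and sigma are simply the
    mean and standard deviation of the log-transformed data."""
    logs = np.log(np.asarray(lengths, dtype=float))
    return logs.mean(), logs.std()

rng = np.random.default_rng(0)
# Synthetic stand-in for saccade lengths (degrees of visual angle).
sample = rng.lognormal(mean=1.5, sigma=0.8, size=10_000)
mu_hat, sigma_hat = fit_lognormal(sample)
```

A telltale sign of the heavy right tail is that the raw-scale mean exceeds the median (`sample.mean() > np.median(sample)`), unlike a symmetric distribution.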
Ardi Roelofs Tracking eye movements to localize Stroop interference in naming: Word planning versus articulatory buffering Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 5, pp. 1332–1347, 2014. @article{Roelofs2014, Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the interference occurs in an articulatory buffer after word planning. Here, 2 color-word Stroop experiments are reported that tested between these accounts using eye tracking. Previous research has indicated that the shifting of eye gaze from one stimulus to another occurs before the articulatory buffer is reached in spoken word planning. In the present experiments, participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to manually indicate its direction. If Stroop interference arises in the articulatory buffer, the interference should be present in the color-naming latencies but not in the gaze shift and manual response latencies. Contrary to these predictions, Stroop interference was present in all 3 behavioral measures. These results indicate that Stroop interference arises during spoken word planning rather than in articulatory buffering. |
Gustavo Rohenkohl; Ian C. Gould; Jessica Pessoa; Anna C. Nobre Combining spatial and temporal expectations to improve visual perception Journal Article In: Journal of Vision, vol. 14, no. 4, pp. 1–13, 2014. @article{Rohenkohl2014, The importance of temporal expectations in modulating perceptual functions is increasingly recognized. However, the means through which temporal expectations can bias perceptual information processing remains ill understood. Recent theories propose that modulatory effects of temporal expectations rely on the co-existence of other biases based on receptive-field properties, such as spatial location. We tested whether perceptual benefits of temporal expectations in a perceptually demanding psychophysical task depended on the presence of spatial expectations. Foveally presented symbolic arrow cues indicated simultaneously where (location) and when (time) target events were more likely to occur. The direction of the arrow indicated target location (80% validity), while its color (pink or blue) indicated the interval (80% validity) for target appearance. Our results confirmed a strong synergistic interaction between temporal and spatial expectations in enhancing visual discrimination. Temporal expectation significantly boosted the effectiveness of spatial expectation in sharpening perception. However, benefits for temporal expectation disappeared when targets occurred at unattended locations. Our findings suggest that anticipated receptive-field properties of targets provide a natural template upon which temporal expectations can operate in order to help prioritize goal-relevant events from early perceptual stages. |
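The 80%-valid spatial and temporal cueing design described above can be mocked up as follows; the function and field names are illustrative assumptions, not taken from the study's materials.

```python
import random

def make_trials(n, validity=0.8, seed=1):
    """Generate trials in which arrow direction cues target location and
    arrow color cues target interval, each valid on `validity` of trials."""
    rng = random.Random(seed)
    flip = {"left": "right", "right": "left", "early": "late", "late": "early"}
    trials = []
    for _ in range(n):
        cued_loc = rng.choice(["left", "right"])   # arrow direction
        cued_int = rng.choice(["early", "late"])   # arrow color (pink/blue)
        loc = cued_loc if rng.random() < validity else flip[cued_loc]
        interval = cued_int if rng.random() < validity else flip[cued_int]
        trials.append({"cued_loc": cued_loc, "loc": loc,
                       "cued_int": cued_int, "interval": interval})
    return trials
```

Because location and interval validity are sampled independently, fully valid trials (both dimensions matching) occur on roughly 64% of trials, which is what lets the design separate spatial from temporal expectation effects.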
Michaël Sassi; Maarten Demeyer; Johan Wagemans Peripheral contour grouping and saccade targeting: The role of mirror symmetry Journal Article In: Symmetry, vol. 6, no. 1, pp. 1–22, 2014. @article{Sassi2014, Integrating shape contours in the visual periphery is vital to our ability to locate objects and thus make targeted saccadic eye movements to efficiently explore our surroundings. We tested whether global shape symmetry facilitates peripheral contour integration and saccade targeting in three experiments, in which observers responded to a successful peripheral contour detection by making a saccade towards the target shape. The target contours were horizontally (Experiment 1) or vertically (Experiments 2 and 3) mirror symmetric. Observers responded by making a horizontal (Experiments 1 and 2) or vertical (Experiment 3) eye movement. Based on an analysis of the saccadic latency and accuracy, we conclude that the figure-ground cue of global mirror symmetry in the periphery has little effect on contour integration or on the speed and precision with which saccades are targeted towards objects. The role of mirror symmetry may be more apparent under natural viewing conditions with multiple objects competing for attention, where symmetric regions in the visual field can pre-attentively signal the presence of objects, and thus attract eye movements. |
Daniel R. Saunders; Russell L. Woods Direct measurement of the system latency of gaze-contingent displays Journal Article In: Behavior Research Methods, vol. 46, no. 2, pp. 439–447, 2014. @article{Saunders2014, Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LED)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved. |
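The latency computation implied by the method above reduces to pairing each eyetracker-blinding event with the next display change in the 1,000 Hz video and converting the frame offset to milliseconds. A minimal sketch, with hypothetical frame indices:

```python
def system_latencies(led_frames, display_frames, fps=1000):
    """For each LED 'blinding' event, find the next display change in the
    high-speed video and return the delay in milliseconds."""
    latencies = []
    for led in sorted(led_frames):
        nxt = min((d for d in display_frames if d >= led), default=None)
        if nxt is not None:
            latencies.append((nxt - led) * 1000.0 / fps)
    return latencies
```

For example, `system_latencies([10, 500], [38, 531])` gives `[28.0, 31.0]`; repeating the measurement many times characterizes the latency distribution rather than a single value.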
Sebastian Schneegans; John P. Spencer; Gregor Schoner; Seongmin Hwang; Andrew Hollingworth Dynamic interactions between visual working memory and saccade target selection Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–23, 2014. @article{Schneegans2014, Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. 
These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task- irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. |
Dana Schneider; Zoie E. Nott; Paul E. Dux Task instructions and implicit theory of mind Journal Article In: Cognition, vol. 133, no. 1, pp. 43–47, 2014. @article{Schneider2014, It has been hypothesized that humans are able to track others' mental states efficiently and without being conscious of doing so using their implicit theory of mind (iToM) system. However, while iToM appears to operate unconsciously, recent work suggests it does draw on executive attentional resources (Schneider, Lam, Bayliss, & Dux, 2012), bringing into question whether iToM is engaged efficiently. Here, we examined other aspects relating to automatic processing: The extent to which the operation of iToM is controllable and how it is influenced by behavioral intentions. This was implemented by assessing how task instructions affect eye-movement patterns in a Sally-Anne false-belief task. One group of subjects was given no task instructions (No Instructions), another overtly judged the location of a ball a protagonist interacted with (Ball Tracking) and a third indicated the location consistent with the actor's belief about the ball's location (Belief Tracking). Despite different task goals, all groups' eye-movement patterns were consistent with belief analysis, and the No Instructions and Ball Tracking groups reported no explicit mentalizing when debriefed. These findings represent definitive evidence that humans implicitly track the belief states of others in an uncontrollable and unintentional manner. |
Tom Schonberg; Akram Bakkour; Ashleigh M. Hover; Jeanette A. Mumford; Lakshya Nagar; Jacob Perez; Russell A. Poldrack Changing value through cued approach: An automatic mechanism of behavior change Journal Article In: Nature Neuroscience, vol. 17, no. 4, pp. 625–630, 2014. @article{Schonberg2014, It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex. |
Mark W. Schurgin; Jonathan I. Flombaum How undistorted spatial memories can produce distorted responses Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 5, pp. 1371–1380, 2014. @article{Schurgin2014, Reproducing the location of an object from the contents of spatial working memory requires the translation of a noisy representation into an action at a single location-for instance, a mouse click or a mark with a writing utensil. In many studies, these kinds of actions result in biased responses that suggest distortions in spatial working memory. We sought to investigate the possibility of one mechanism by which distortions could arise, involving an interaction between undistorted memories and nonuniformities in attention. Specifically, the resolution of attention is finer below than above fixation, which led us to predict that bias could arise if participants tend to respond in locations below as opposed to above fixation. In Experiment 1 we found such a bias to respond below the true position of an object. Experiment 2 demonstrated with eye-tracking that fixations during response were unbiased and centered on the remembered object's true position. Experiment 3 further evidenced a dependency on attention relative to fixation, by shifting the effect horizontally when participants were required to tilt their heads. Together, these results highlight the complex pathway involved in translating probabilistic memories into discrete actions, and they present a new attentional mechanism by which undistorted spatial memories can lead to distorted reproduction responses. |
Alexander C. Schütz Interindividual differences in preferred directions of perceptual and motor decisions Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–17, 2014. @article{Schuetz2014, Both the perceptual system and the motor system can be faced with ambiguous information and then have to choose between different alternatives. Often these alternatives involve decisions about directions, and anisotropies have been reported for different tasks. Here we measured interindividual differences and temporal stability of directional preferences in eye movement, motion perception, and thumb movement tasks. In all tasks, stimuli were created such that observers had to decide between two opposite directions in each trial and preferences were measured at 12 axes around the circle. There were clear directional preferences in all utilized tasks. The strongest effects were present in tasks that involved motion, like the smooth pursuit eye movement, apparent motion, and structure-from-motion tasks. The weakest effects were present in the saccadic eye movement task. Observers with strong directional preferences in the eye movement tasks showed shorter latency costs for target-conflict trials compared to single-target trials, suggesting that directional preferences might be advantageous for solving the target conflict. Although there were consistent preferences across observers in most of the tasks, there was also considerable variability in preferred directions between observers. The magnitude of preferences and the preferred directions were correlated between only a few tasks. While the magnitude of preferences varied substantially over time, the direction of these preferences was stable over several weeks. These results indicate that individually stable directional preferences exist in a range of perceptual and motor tasks. |
Amy Rouinfar; Elise Agra; Adam M. Larson; N. Sanjay Rebello; Lester C. Loschky In: Frontiers in Psychology, vol. 5, pp. 1094, 2014. @article{Rouinfar2014, This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in organizing and integrating it facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. 
Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. |
Annie Roy-Charland; Melanie Perron; Olivia Beaudry; Kaylee Eady Confusion of fear and surprise: A test of the perceptual-attentional limitation hypothesis with eye movement monitoring Journal Article In: Cognition and Emotion, vol. 28, no. 7, pp. 1214–1222, 2014. @article{RoyCharland2014, Of the basic emotional facial expressions, fear is typically less accurately recognised as a result of being confused with surprise. According to the perceptual-attentional limitation hypothesis, the difficulty in recognising fear could be attributed to the similar visual configuration with surprise. In effect, they share more muscle movements than they possess distinctive ones. The main goal of the current study was to test the perceptual-attentional limitation hypothesis in the recognition of fear and surprise using eye movement recording and by manipulating the distinctiveness between expressions. Results revealed that when the brow lowerer is the only distinctive feature between expressions, accuracy is lower, participants spend more time looking at stimuli and they make more comparisons between expressions than when stimuli include the lip stretcher. These results not only support the perceptual-attentional limitation hypothesis but extend its definition by suggesting that it is not solely the number of distinctive features that is important but also their qualitative value. |
Douglas A. Ruff; Marlene R. Cohen Attention can increase or decrease spike count correlations between pairs of neurons depending on their role in a task Journal Article In: Nature Neuroscience, vol. 17, no. 11, pp. 1591–1597, 2014. @article{Ruff2014, Visual attention enhances the responses of visual neurons that encode the attended location. Several recent studies showed that attention also decreases correlations between fluctuations in the responses of pairs of neurons (termed spike count correlation or rSC). The previous results are consistent with two hypotheses. Attention–related changes in rate and rSC might be linked (perhaps through a common mechanism), so that attention always decreases rSC. Alternately, attention might either increase or decrease rSC, possibly depending on the role the neurons play in the behavioral task. We recorded simultaneously from dozens of neurons in area V4 while monkeys performed a discrimination task. We found strong evidence in favor of the second hypothesis, showing that attention can flexibly increase or decrease correlations, depending on whether the neurons provide evidence for the same or opposite perceptual decisions. These results place important constraints on models of the neuronal mechanisms underlying cognitive factors. |
Hélène Samson; Nicole Fiori-Duharcourt; Karine Doré-Mazars; Christelle Lemoine; Dorine Vergilino-Perez Perceptual and gaze biases during face processing: Related or not? Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e85746, 2014. @article{Samson2014, Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and/or longer fixations on the left side of the face. Here, we recorded eye-movements, in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, right or left to the central fixation point or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not look necessarily toward the side of the face that we use to make a gender judgment task. 
Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analysis and drawing attention to the influence of the method used to study this bias. |
Antonia F. Ten Brink; Tanja C. W. Nijboer; Nathan Van der Stoep; Stefan Van der Stigchel The influence of vertically and horizontally aligned visual distractors on aurally guided saccadic eye movements Journal Article In: Experimental Brain Research, vol. 232, no. 4, pp. 1357–1366, 2014. @article{TenBrink2014, Eye movements towards a new target can be guided or disrupted by input from multiple modalities. The degree of oculomotor competition evoked by a distractor depends on both distractor and target properties, such as distractor salience or certainty regarding the target location. The ability to localize the target is particularly important when studying saccades made towards auditory targets, since determination of elevation and azimuth of a sound are based on different processes, and these processes may be affected independently by a distractor. We investigated the effects of a visual distractor on saccadic eye movements made to an auditory target in a two-dimensional plane. Results showed that the competition evoked by a vertical visual distractor was stronger compared with a horizontal visual distractor. The eye movements that were not captured by the vertical visual distractor were still influenced by it: a deviation of endpoints was seen in the direction of the visual distractor. Furthermore, the interference evoked by a high-contrast visual distractor was stronger compared with low-contrast visual stimuli, which was reflected by a faster initiation of an eye movement towards the high-contrast visual distractor and a stronger shift of endpoints in the direction of the high-contrast visual distractor. Together, these findings show that the influence of a visual distractor on aurally guided eye movements depends strongly on its location relative to the target, and to a lesser extent, on stimulus contrast. |
Paul M. J. Thomas; Margaret C. Jackson; Jane E. Raymond A threatening face in the crowd: Effects of emotional singletons on visual working memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 253–263, 2014. @article{Thomas2014, Faces with threatening versus positive expressions are better remembered in visual working memory (WM) and are especially effective at capturing attention. We asked how the presence of a single threatening or happy face affects WM for concurrently viewed faces with neutral expressions. If threat captures attention and attention determines WM, then a WM performance cost for neutral faces should be evident. However, if threat boosts processing in an object-specific, noncompetitive manner, then no such costs should be produced. Participants viewed three neutral and one angry or happy face for 2 s. Face recognition was tested 1 s later. Although WM was better for singletons than nonsingletons and better for angry versus happy singletons, WM for neutral faces remained unaffected by either singleton. These results, combined with eye movement and response time analyses, argue against a selective attention account of threat-based benefits to WM and support object-specific enhancement via threat processing. |
Aidan A. Thompson; Patrick A. Byrne; Denise Y. P. Henriques Visual targets aren't irreversibly converted to motor coordinates: Eye-centered updating of visuospatial memory in online reach control Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e92455, 2014. @article{Thompson2014, Counter to current and widely accepted hypotheses that sensorimotor transformations involve converting target locations in spatial memory from an eye-fixed reference frame into a more stable motor-based reference frame, we show that this is not strictly the case. Eye-centered representations continue to dominate reach control even during movement execution; the eye-centered target representation persists after conversion to a motor-based frame and is continuously updated as the eyes move during reach, and is used to modify the reach plan accordingly during online control. While reaches are known to be adjusted online when targets physically shift, our results are the first to show that similar adjustments occur in response to changes in representations of remembered target locations. Specifically, we find that shifts in gaze direction, which produce predictable changes in the internal (specifically eye-centered) representation of remembered target locations also produce mid-transport changes in reach kinematics. This indicates that representations of remembered reach targets (and visuospatial memory in general) continue to be updated relative to gaze even after reach onset. Thus, online motor control is influenced dynamically by both the external and internal updating mechanisms. |
Jennifer G. Tichon; Guy Wallis; Stephan Riek; Timothy Mavin Physiological measurement of anxiety to evaluate performance in simulation training Journal Article In: Cognition, Technology and Work, vol. 16, no. 2, pp. 203–210, 2014. @article{Tichon2014a, The ability to control emotion is a skill which contributes to performance in the same way as cognitive and technical skills do to the successful completion of high stress operations. The interdependence between emotion, problem-solving and decision-making makes a negative emotion such as anxiety of interest in evaluating trainee performance in simulations which replicate stressful work conditions. Self-report measures of anxiety require trainees to interrupt the simulation experience to either complete psychological scales or make verbal reports of state anxiety. An uninterrupted, continuous measure of anxiety is, therefore, preferable for simulation environments. During this study, the anxiety levels of trainee pilots were tracked via electromyography, eye movements and pupillometry while undertaking required tasks in a flight simulation. Fixation duration and saccade rate corresponded reliably to pilot self-reports of anxiety, while pupil size and saccade amplitude did not show a strong comparison to changes in affective state. Large increases in muscle activation were recorded when higher anxiety was reported. The results suggest that a combination of physiological measures could provide a robust, continuous indicator of anxiety level. The implications of the current study on further development of physiological measures to support tracking anxiety as a tool for simulation training assessment are discussed. |
Jedediah M. Singer; Gabriel Kreiman Short temporal asynchrony disrupts visual object recognition Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–14, 2014. @article{Singer2014, Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. |
Grayden J. F. Solman; Kersondra Hickey; Daniel Smilek Comparing target detection errors in visual search and manually-assisted search Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 4, pp. 945–958, 2014. @article{Solman2014, Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. |
Grayden J. F. Solman; Alan Kingstone Balancing energetic and cognitive resources: Memory use during search depends on the orienting effector Journal Article In: Cognition, vol. 132, no. 3, pp. 443–454, 2014. @article{Solman2014a, Search outside the laboratory involves tradeoffs among a variety of internal and external exploratory processes. Here we examine the conditions under which item-specific memory from prior exposures to a search array is used to guide attention during search. We extend the hypothesis that memory use increases as perceptual search becomes more difficult by turning to an ecologically important type of search difficulty: energetic cost. Using optical motion tracking, we introduce a novel head-contingent display system, which enables the direct comparison of search using head movements and search using eye movements. Consistent with the increased energetic cost of turning the head to orient attention, we discover greater use of memory in head-contingent versus eye-contingent search, as reflected in both timing and orienting metrics. Our results extend theories of memory use in search to encompass embodied factors, and highlight the importance of accounting for the costs and constraints of the specific motor groups used in a given task when evaluating cognitive effects. |
David Souto; Dirk Kerzel Ocular tracking responses to background motion gated by feature-based attention Journal Article In: Journal of Neurophysiology, vol. 112, no. 5, pp. 1074–1081, 2014. @article{Souto2014, Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in the horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset. |
Beth A. Stankevich; Joy J. Geng Reward associations and spatial probabilities produce additive effects on attentional selection Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2315–2325, 2014. @article{Stankevich2014, Recent studies have shown that reward history acts as a powerful attentional bias, even overcoming top-down goals. This has led to the suggestion that rewards belong to a class of attentional cues based on selection history, which are defined by past outcomes with a stimulus feature. Selection history is thought to be separate from traditional attentional cues based on physical salience and voluntary goals, but there is relatively little understanding of how selection history operates as a mechanism of attentional selection. Critically, it has yet to be understood how multiple sources of selection history interact when presented simultaneously. For example, it may be easier to find something we like if it also appears in a predictable location. We therefore pitted spatial probabilities against reward associations and found that the two sources of information had independent and additive effects. Additionally, the strength of the two sources in biasing attentional selection could be equated. In contrast, while a nonpredictive but perceptually salient cue also exhibited independent and additive effects with reward, reward associations dominated the perceptually salient cue at all levels. Our data indicate that reward associations are part of a class of particularly potent attentional cues that guide behavior through learned expectations. However, selection history should not be thought of as a unitary concept but should be understood as a collection of independent sources of information that bias attention in a similar fashion. |
Caspar M. Schwiedrzik; Christian C. Ruff; Andreea Lazar; Frauke C. Leitner; Wolf Singer; Lucia Melloni Untangling perceptual memory: Hysteresis and adaptation map into separate cortical networks Journal Article In: Cerebral Cortex, vol. 24, no. 5, pp. 1152–1164, 2014. @article{Schwiedrzik2014, Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain "decide" what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function). |
Mehrdad Seirafi; Peter De Weerd; Beatrice De Gelder Suppression of face perception during saccadic eye movements Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–7, 2014. @article{Seirafi2014, Lack of awareness of a stimulus briefly presented during a saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade, known as saccadic suppression, is a key step to investigate saccadic omission. To date, almost all studies have been focused on the reduced visibility of simple stimuli such as flashes and bars. The extension of the results from simple stimuli to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of briefly presented face stimuli during saccadic eye movements. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most of the trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during saccadic eye movements. |
Aasef G. Shaikh; Fatema F. Ghasia Gaze holding after anterior-inferior temporal lobectomy Journal Article In: Neurological Sciences, vol. 35, no. 11, pp. 1749–1756, 2014. @article{Shaikh2014, Eye position-sensitive neurons are found in parietooccipital and anterior-inferior temporal cortex. The putative role of these neurons is to facilitate the transformation of reference frames from retina-fixed to world-fixed coordinates and to ensure precise action. We assessed the nature of ocular motor disorder in a subject who had selective resection of the right anterior-inferior temporal cortex for the treatment of intractable epilepsy from cortical dysplasia. The gaze was stable when the subject was viewing straight ahead, but centrally directed drifts in the eye position were seen during eccentric horizontal gaze holding. Eye-in-orbit position determined drift velocity and its direction. Conjugate and sinusoidal vertical oscillations were also present. Horizontal drifts and vertical oscillations became prominent and disconjugate in the absence of visual cues. The gaze-holding deficit was consistent with impairment in neural integration, but in the absence of cerebellar and visual deficits. We speculate that the brainstem neural integrator might receive cortical feedback regarding world-fixed coordinates, a process the visual system might calibrate. Hence, a lesion of the anterior-inferior temporal lobe impairs the function of the neural integrator, and the lack of visual cues further impairs it, worsening the gaze-holding deficits. |
Martha M. Shiell; François Champoux; Robert J. Zatorre Enhancement of visual motion detection thresholds in early deaf people Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e90498, 2014. @article{Shiell2014, In deaf people, the auditory cortex can reorganize to support visual motion processing. Although this cross-modal reorganization has long been thought to subserve enhanced visual abilities, previous research has been unsuccessful at identifying behavioural enhancements specific to motion processing. Recently, research with congenitally deaf cats has uncovered an enhancement for visual motion detection. Our goal was to test for a similar difference between deaf and hearing people. We tested 16 early and profoundly deaf participants and 20 hearing controls. Participants completed a visual motion detection task, in which they were asked to determine which of two sinusoidal gratings was moving. The speed of the moving grating varied according to an adaptive staircase procedure, allowing us to determine the lowest speed necessary for participants to detect motion. Consistent with previous research in deaf cats, the deaf group had lower motion detection thresholds than the hearing group. This finding supports the proposal that cross-modal reorganization after sensory deprivation will occur for supramodal sensory features and preserve the output functions. |
Alisha Siebold; Mieke Donk Reinstating salience effects over time: The influence of stimulus changes on visual selection behavior over a sequence of eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 6, pp. 1655–1670, 2014. @article{Siebold2014a, Recently, we showed that salience affects initial saccades only in a static stimulus environment; subsequent saccades were unaffected by salience but, instead, were directed in line with task requirements (Siebold, van Zoest, & Donk, PLoS ONE 6(9): e23552, 2011). Yet multiple studies have shown that people tend to fixate salient regions more often than nonsalient ones when they are looking at images, in particular when salience is defined by dynamic changes. The goal of the present study was to investigate how oculomotor selection beyond an initial saccade is affected by salience as derived from changing, as opposed to static, stimuli. Observers were presented with displays containing two fixation dots, one target, one distractor, and multiple background elements. They were instructed to fixate on one of the fixation dots and make a speeded eye movement to the target, either directly or preceded by an initial eye movement to the other fixation dot. In Experiment 1, target and distractor differed in orientation contrast relative to the background, such that one was more salient than the other, whereas in Experiments 2 and 3, the orientation contrast between the two elements was identical. Here, salience was implemented by a continuous luminance flicker or by a difference in luminance contrast, respectively, which was presented either simultaneously with display onset or contingent upon the first saccade. The results showed that in all experiments, initial saccades were strongly guided by salience, whereas second saccades were consistently goal directed if the salience manipulation was present from display onset. However, if the flicker or luminance contrast was presented contingent upon the initial saccade, salience effects were reinstated. We argue that salience effects are short-lived but can be reinstated if new information is presented, even when this occurs during an eye movement. |
Alisha Siebold; Mieke Donk On the importance of relative salience: Comparing overt selection behavior of single versus simultaneously presented stimuli Journal Article In: PLoS ONE, vol. 9, no. 6, pp. e99707, 2014. @article{Siebold2014, The goal of the current study was to investigate time-dependent effects of the number of targets presented and its interaction with stimulus salience on oculomotor selection performance. To this end, observers were asked to make a speeded eye movement to a target orientation singleton embedded in a homogeneous background of vertically oriented lines. In Experiment 1, either one or two physically identical targets were presented, whereas in Experiment 2 an additional orientation-based salience manipulation was performed. The results showed that the probability of a singleton being available for selection is reduced in the presence of an identical singleton (Experiment 1) and that this effect is modulated by the salience of the other singleton (Experiment 2). While the absolute orientation contrast of a target relative to the background contributed to the probability that it is available for selection, the crucial factor affecting selection was the relative salience between singletons. These findings are incompatible with a processing speed account, which highlights the importance of visibility and claims that a certain singleton identity has a unique speed with which it can be processed. In contrast, the finding that the number of targets presented affected a target's availability suggests an important role of the broader display context in determining oculomotor selection performance. |
Heida M. Sigurdardottir; Suzanne M. Michalak; David L. Sheinberg Shape beyond recognition: Form-derived directionality and its effects on visual attention and motion perception Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 1, pp. 434–454, 2014. @article{Sigurdardottir2014, The shape of an object restricts its movements and therefore its future location. The rules governing selective sampling of the environment likely incorporate any available data, including shape, that provide information about where important things are going to be in the near future so that the object can be located, tracked, and sampled for information. We asked people to assess in which direction several novel objects pointed or directed them. With independent groups of people, we investigated whether their attention and sense of motion were systematically biased in this direction. Our work shows that nearly any novel object has intrinsic directionality derived from its shape. This shape information is swiftly and automatically incorporated into the allocation of overt and covert visual orienting and the detection of motion, processes that themselves are inherently directional. The observed connection between form and space suggests that shape processing goes beyond recognition alone and may help explain why shape is a relevant dimension throughout the visual brain. |
J. D. Silvis; Stefan Van der Stigchel How memory mechanisms are a key component in the guidance of our eye movements: Evidence from the global effect Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 2, pp. 357–362, 2014. @article{Silvis2014, Investigating eye movements has been a promising approach to uncover the role of visual working memory in early attentional processes. Prior research has already demonstrated that eye movements in search tasks are more easily drawn toward stimuli that show similarities to working memory content, as compared with neutral stimuli. Previous saccade tasks, however, have always required a selection process, thereby automatically recruiting working memory. The present study was an attempt to confirm the role of working memory in oculomotor selection in an unbiased saccade task that rendered memory mechanisms irrelevant. Participants executed a saccade in a display with two elements, without any instruction to aim for one particular element. The results show that when two objects appear simultaneously, a working memory match attracts the first saccade more profoundly than do mismatch objects, an effect that was present throughout the saccade latency distribution. These findings demonstrate that memory plays a fundamental biasing role in the earliest competitive processes in the selection of visual objects, even when working memory is not recruited during selection. |
Chess Stetson; Richard A. Andersen The parietal reach region selectively anti-synchronizes with dorsal premotor cortex during planning Journal Article In: Journal of Neuroscience, vol. 34, no. 36, pp. 11948–11958, 2014. @article{Stetson2014, Recent reports have indicated that oscillations shared across distant cortical regions can enhance their connectivity, but do coherent oscillations ever diminish connectivity? We investigated oscillatory activity in two distinct reach-related regions in the awake behaving monkey (Macaca mulatta): the parietal reach region (PRR) and the dorsal premotor cortex (PMd). PRR and PMd were found to oscillate at similar frequencies (beta, 15–30 Hz) during periods of fixation and movement planning. At first glance, the stronger oscillator of the two, PRR, would seem to drive the weaker, PMd. However, a more fine-grained measure, the partial spike-field coherence, revealed a different relationship. Relative to global beta-band activity in the brain, action potentials in PRR anti-synchronize with PMd oscillations. These data suggest that, rather than driving PMd during planning, PRR neurons fire in such a way that they are less likely to communicate information to PMd. |
Lars Strother; Danila Alferov Inter-element orientation and distance influence the duration of persistent contour integration Journal Article In: Frontiers in Psychology, vol. 5, pp. 1273, 2014. @article{Strother2014, Contour integration is a fundamental form of perceptual organization. We introduce a new method of studying the mechanisms responsible for contour integration. This method capitalizes on the perceptual persistence of contours under conditions of impending camouflage. Observers viewed arrays of randomly arranged line segments upon which circular contours comprised of similar line segments were superimposed via abrupt onset. Crucially, these contours remained visible for up to a few seconds following onset, but eventually disappeared due to the camouflaging effects of surrounding background line segments. Our main finding was that the duration of contour visibility depended on the distance and degree of co-alignment between adjacent contour segments such that relatively dense smooth contours persisted longest. The stimulus-related effects reported here parallel similar results from contour detection studies, and complement previously reported top-down influences on contour persistence (Strother et al., 2011). We propose that persistent contour visibility reflects the sustained activity of recurrent processing loops within and between visual cortical areas involved in contour integration and other important stages of visual object recognition. |
Hong-Yue Sun; Li-Lin Rao; Kun Zhou; Shu Li Formulating an emergency plan based on expectation-maximization is one thing, but applying it to a single case is another Journal Article In: Journal of Risk Research, vol. 17, no. 7, pp. 785–814, 2014. @article{Sun2014, This research extends the exploration of single-play/multiple-play distinctions from the monetary gambling paradigm to the emergency management situation. We conducted three studies (two survey studies and one eye tracking study) to test whether an emergency plan we formulated in advance based on expectation-maximization would be likely to be applied in a single case. In the first two survey studies we found that the plan with the higher expected value (EV) was more likely to be preferred when the plan was applied 100 times or to 100 areas than when the plan was applied only once or to only one area. We also found significant framing and reflection effects, both of which violated the invariance principle in the single-application condition, but not in the multiple-application condition. Furthermore, in the eye tracking study, we found distinctly different eye movement patterns in the single-application condition and the multiple-application condition. The eye movement patterns in the multiple-application condition are more consistent with the predictions deduced from expectation computation. The overall results suggest that a gap exists between the formulation and the implementation of an emergency plan. Formulating an emergency plan based on expectation-maximization is doable, but applying it to a single case may be more challenging. |
Sarit F. A. Szpiro; Miriam Spering; Marisa Carrasco Perceptual learning modifies untrained pursuit eye movements Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–13, 2014. @article{Szpiro2014, Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. |
Bahareh Taghizadeh; Alexander Gail Spatial task context makes short-latency reaches prone to induced Roelofs illusion Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 673, 2014. @article{Taghizadeh2014, The perceptual localization of an object is often more prone to illusions than an immediate visuomotor action towards that object. The induced Roelofs effect (IRE) probes the illusory influence of task-irrelevant visual contextual stimuli on the processing of task-relevant visuospatial instructions during movement preparation. In the IRE, the position of a task-irrelevant visual object induces a shift in the localization of a visual target when subjects indicate the position of the target by verbal response, key-presses or delayed pointing to the target ("perception" tasks), but not when immediately pointing or reaching towards it without instructed delay ("action" tasks). This discrepancy was taken as evidence for the dual-visual-stream or perception-action hypothesis, but was later explained by a phasic distortion of the egocentric spatial reference frame which is centered on subjective straight-ahead (SSA) and used for reach planning. Both explanations critically depend on delayed movements to explain the IRE for action tasks. Here we ask: first, if the IRE can be observed for short-latency reaches; second, if the IRE in fact depends on a distorted egocentric frame of reference. Human subjects were tested in new versions of the IRE task in which the reach goal had to be localized with respect to another object, i.e., in an allocentric reference frame. First, we found an IRE even for immediate reaches in our allocentric task, but not for an otherwise similar egocentric control task. Second, the IRE depended on the position of the task-irrelevant frame relative to the reference object, not relative to SSA. We conclude that the IRE for reaching does not mandatorily depend on prolonged response delays, nor does it depend on motor planning in an egocentric reference frame. Instead, allocentric encoding of a movement goal is sufficient to make immediate reaches susceptible to IRE, underlining the context dependence of visuomotor illusions. |
B. Vintch; Justin L. Gardner Cortical correlates of human motion perception biases Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2592–2604, 2014. @article{Vintch2014, Human sensory perception is not a faithful reproduction of the sensory environment. For example, at low contrast, objects appear to move slower and flicker faster than veridical. Although these biases have been observed robustly, their neural underpinning is unknown, thus suggesting a possible disconnect of the well established link between motion perception and cortical responses. We used functional imaging to examine the encoding of speed in the human cortex at the scale of neuronal populations and asked where and how these biases are encoded. Decoding, voxel population, and forward-encoding analyses revealed biases toward slow speeds and high temporal frequencies at low contrast in the earliest visual cortical regions, matching perception. These findings thus offer a resolution to the disconnect between cortical responses and motion perception in humans. Moreover, biases in speed perception are considered a leading example of Bayesian inference because they can be interpreted as a prior for slow speeds. Therefore, our data suggest that perceptual priors of this sort can be encoded by neural populations in the same early cortical areas that provide sensory evidence. |
Simone Vossel; Markus Bauer; Christoph Mathys; Rick A. Adams; Raymond J. Dolan; Klaas E. Stephan; Karl J. Friston Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention Journal Article In: Journal of Neuroscience, vol. 34, no. 47, pp. 15735–15742, 2014. @article{Vossel2014, The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision. |
Simone Vossel; Christoph Mathys; Jean Daunizeau; Markus Bauer; Jon Driver; Karl J. Friston; Klaas E. Stephan Spatial attention, precision, and Bayesian inference: A study of saccadic response speed Journal Article In: Cerebral Cortex, vol. 24, no. 6, pp. 1436–1450, 2014. @article{Vossel2014a, Inferring the environment's statistical structure and adapting behavior accordingly is a fundamental modus operandi of the brain. A simple form of this faculty based on spatial attentional orienting can be studied with Posner's location-cueing paradigm in which a cue indicates the target location with a known probability. The present study focuses on a more complex version of this task, where probabilistic context (percentage of cue validity) changes unpredictably over time, thereby creating a volatile environment. Saccadic response speed (RS) was recorded in 15 subjects and used to estimate subject-specific parameters of a Bayesian learning scheme modeling the subjects' trial-by-trial updates of beliefs. Different response models-specifying how computational states translate into observable behavior-were compared using Bayesian model selection. Saccadic RS was most plausibly explained as a function of the precision of the belief about the causes of sensory input. This finding is in accordance with current Bayesian theories of brain function, and specifically with the proposal that spatial attention is mediated by a precision-dependent gain modulation of sensory input. Our results provide empirical support for precision-dependent changes in beliefs about saccade target locations and motivate future neuroimaging and neuropharmacological studies of how Bayesian inference may determine spatial attention. |
Stephanie Waechter; Andrea L. Nelson; Caitlin A. Wright; Ashley Hyatt; Jonathan Oakman Measuring attentional bias to threat: Reliability of dot probe and eye movement indices Journal Article In: Cognitive Therapy and Research, vol. 38, no. 3, pp. 313–333, 2014. @article{Waechter2014, A variety of methodological paradigms, including dot probe and eye movement tasks, have been used to examine attentional biases to threat in anxiety disorders. Unfortunately, little attention has been devoted to the psychometric properties of measures from these paradigms. In the current study, participants selected for high and low social anxiety completed a dot probe and eye movement task using angry, disgust and happy facial expressions paired with neutral expressions. Results indicated that dot probe bias scores, eye movement first fixation indices, and eye movement proportions of viewing time in the first 1,500 ms had unacceptably low reliability. However, eye movement indices of attentional bias over the full 5,000 ms time course had excellent reliability. Individuals' dot probe and eye movement biases were largely uncorrelated across the two tasks and demonstrated little relation with social anxiety scores. Implications for future research are discussed. |
David V. Walsh; Lei Liu Adaptation to a simulated central scotoma during visual search training Journal Article In: Vision Research, vol. 96, pp. 75–86, 2014. @article{Walsh2014, Patients with a central scotoma usually use a preferred retinal locus (PRL) consistently in daily activities. The selection process and time course of the PRL development are not well understood. We used a gaze-contingent display to simulate an isotropic central scotoma in normal subjects while they were practicing a difficult visual search task. As compared to foveal search, initial exposure to the simulated scotoma resulted in prolonged search reaction time, many more fixations and unorganized eye movements during search. By the end of a 1782-trial training with the simulated scotoma, the search performance improved to within 25% of normal foveal search. Accompanying the performance improvement, there were also fewer fixations, fewer repeated fixations in the same area of the search stimulus and a clear tendency of using one area near the border of the scotoma to identify the search target. The results were discussed in relation to natural development of PRL in central scotoma patients and potential visual training protocols to facilitate PRL development. |
Benchi Wang; Matthew D. Hilchey; Xiaohua Cao; Zhiguo Wang The spatial distribution of inhibition of return revisited: No difference found between manual and saccadic responses Journal Article In: Neuroscience Letters, vol. 578, pp. 128–132, 2014. @article{Wang2014a, Inhibition of return (IOR) commonly refers to the effect of prolonged response times to targets at previously attended locations. It is a well-documented fact that IOR is not restricted to previously attended locations, but rather has a spatial gradient. Based on a myriad of manual/saccadic dissociations, many researchers now believe that there are at least two forms of IOR completely dissociable on the basis of response type. The present study evaluated whether these two forms of IOR are encoded in similar representations of space. Across a range of conditions, there was little indication that the two forms could be differentiated on the basis of their spatial distributions. Furthermore, the present study also found that the gradient of IOR was steepest for cues appearing nearest fixation. |
Hsiao-shen Wang; Yi-Ting Chen; Chih-Hung Lin The learning benefits of using eye trackers to enhance the geospatial abilities of elementary school students Journal Article In: British Journal of Educational Technology, vol. 45, no. 2, pp. 340–355, 2014. @article{Wang2014c, In this study, we examined the spatial abilities of students using eye-movement tracking devices to identify and analyze their characteristics. For this research, 12 students aged 11–12 years participated as novices and 4 mathematics students participated as experts. A comparison of the visual-spatial abilities of each group showed key factors of superior spatial ability, and a spatial ability instructional strategy was developed. After training, the same spatial ability test was conducted again, and eye-tracking records were used to compare the participants' line-of-sight and answer rate results with those of the previous test. Specific references and recommendations are provided for spatial ability training education and assessment. |
Lihui Wang; Yunyan Duan; Jan Theeuwes; Xiaolin Zhou Reward breaks through the inhibitory region around attentional focus Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 2–2, 2014. @article{Wang2014e, It is well known that directing attention to a location in space enhances the processing efficiency of stimuli presented at that location. Research has also shown that around this area of enhanced processing, there is an inhibitory region within which processing of information is suppressed. In this study, we investigated whether a reward-associated stimulus can break through the inhibitory surround. A distractor that was previously associated with high or low reward was presented near the target with a variable distance between them. For low-reward distractors, only the distractor very close to the target caused interference to target processing; for high-reward distractors, both near and relatively far distractors caused interference, demonstrating that task-irrelevant reward-associated stimuli can capture attention even when presented within the inhibitory surround. |
A. Zenon; M. Sidibe; Etienne Olivier Pupil size variations correlate with physical effort perception Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 286, 2014. @article{Zenon2014, It has long been established that the pupil diameter increases during mental activities in proportion to the difficulty of the task at hand. However, it is still unclear whether this relationship between the pupil size and effort applies also to physical effort. In order to address this issue, we asked healthy volunteers to perform a power grip task, at varied intensity, while evaluating their effort both implicitly and explicitly, and while concurrently monitoring their pupil size. Each trial started with a contraction of imposed intensity, under the control of a continuous visual feedback. Upon completion of the contraction, participants had to choose whether to replicate, without feedback, the first contraction for a variable monetary reward, or whether to skip this step and go directly to the next trial. The rate of acceptance of effort replication and the amount of force exerted during the replication were used as implicit measures of the perception of the effort exerted during the first contraction. In addition, the participants were asked to rate, on an analog scale, their explicit perception of the effort for each intensity condition. We found that pupil diameter increased during physical effort and that the magnitude of this response reflected not only the actual intensity of the contraction but also the subjects' perception of the effort. This finding indicates that the pupil size signals the level of effort invested in a task, irrespective of whether it is physical or mental. It also helps refine the potential brain circuits involved, since the results of the current study imply a convergence of mental and physical effort information at some level along this pathway. |
Alexandre Zénon; Brian D. Corneil; Andrea Alamia; Nabil Filali-Sadouk; Etienne Olivier Counterproductive effect of saccadic suppression during attention shifts Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e86633, 2014. @article{Zenon2014a, During saccadic eye movements, the processing of visual information is transiently interrupted by a mechanism known as "saccadic suppression" [1] that is thought to ensure perceptual stability [2]. If, as proposed in the premotor theory of attention [3], covert shifts of attention rely on sub-threshold recruitment of oculomotor circuits, then saccadic suppression should also occur during covert shifts. In order to test this prediction, we designed two experiments in which participants had to orient towards a cued letter, with or without saccades. We analyzed the time course of letter identification score in an "attention" task performed without saccades, using the saccadic latencies measured in the "saccade" task as a marker of covert saccadic preparation. Visual conditions were identical in all tasks. In the "attention" task, we found a drop in perceptual performance around the predicted onset time of saccades that were never performed. Importantly, this decrease in letter identification score cannot be explained by any known mechanism aligned on cue onset such as inhibition of return, masking, or microsaccades. These results show that attentional allocation triggers the same suppression mechanisms as during saccades, which is relevant during eye movements but detrimental in the context of covert orienting. |
Luming Zhang; Yue Gao; Rongrong Ji; Yingjie Xia; Qionghai Dai; Xuelong Li Actively learning human gaze shifting paths for semantics-aware photo cropping Journal Article In: IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 2235–2245, 2014. @article{Zhang2014, Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. 
Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr The visual component to saccadic compression Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 13–, 2014. @article{Zimmermann2014, Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after. |
Siavash Vaziri; Eric T. Carlson; Zhihong Wang; Charles E. Connor A channel for 3D environmental shape in anterior inferotemporal cortex Journal Article In: Neuron, vol. 84, no. 1, pp. 55–62, 2014. @article{Vaziri2014, Inferotemporal cortex (IT) has long been studied as a single pathway dedicated to object vision, but connectivity analysis reveals anatomically distinct channels, through ventral superior temporal sulcus (STSv) and dorsal/ventral inferotemporal gyrus (TEd, TEv). Here, we report a major functional distinction between channels. We studied individual IT neurons in monkeys viewing stereoscopic 3D images projected on a large screen. We used adaptive stimuli to explore neural tuning for 3D abstract shapes ranging in scale and topology from small, closed, bounded objects to large, open, unbounded environments (landscape-like surfaces and cave-like interiors). In STSv, most neurons were more responsive to objects, as expected. In TEd, surprisingly, most neurons were more responsive to 3D environmental shape. Previous studies have localized environmental information to posterior cortical modules. Our results show it is also channeled through anterior IT, where extensive cross-connections between STSv and TEd could integrate object and environmental shape information. |
Boris M. Velichkovsky; Mikhail A. Rumyantsev; Mikhail A. Morozov New solution to the Midas Touch Problem: Identification of visual commands via extraction of focal fixations Journal Article In: Procedia Computer Science, vol. 39, pp. 75–82, 2014. @article{Velichkovsky2014, Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed. |
Dustin Venini; Roger W. Remington; Gernot Horstmann; Stefanie I. Becker Centre-of-gravity fixations in visual search: When looking at nothing helps to find something Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 9–12, 2014. @article{Venini2014, In visual search, some fixations are made between stimuli on empty regions, commonly referred to as "centre-of-gravity" fixations (henceforth: COG fixations). Previous studies have shown that observers with task expertise show more COG fixations than novices. This led to the view that COG fixations reflect simultaneous encoding of multiple stimuli, allowing more efficient processing of task-related items. The present study tested whether COG fixations also aid performance in visual search tasks with unfamiliar and abstract stimuli. Moreover, to provide evidence for the multiple-item processing view, we analysed the effects of COG fixations on the number and dwell times of stimulus fixations. The results showed that (1) search efficiency increased with increasing COG fixations even in search for unfamiliar stimuli and in the absence of special higher-order skills, (2) COG fixations reliably reduced the number of stimulus fixations and their dwell times, indicating processing of multiple distractors, and (3) the proportion of COG fixations was dynamically adapted to potential information gain of COG locations. A second experiment showed that COG fixations are diminished when stimulus positions unpredictably vary across trials. Together, the results support the multiple-item processing view, which has important implications for current theories of visual search. |
Frederick Verbruggen; Tobias Stevens; Christopher D. Chambers Proactive and reactive stopping when distracted: An attentional account Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1295–1300, 2014. @article{Verbruggen2014, Performance in response inhibition paradigms is typically attributed to inhibitory control. Here we examined the idea that stopping may largely depend on the outcome of a sensory detection process. Subjects performed a speeded go task, but they were instructed to withhold their response when a visual stop signal was presented. The stop signal could occur in the center of the screen or in the periphery. On half of the trials, perceptual distractors were presented throughout the trial. We found that these perceptual distractors impaired stopping, especially when stop signals could occur in the periphery. Furthermore, the effect of the distractors on going was smallest in the central stop-signal condition, medium in a condition in which no signals could occur, and largest in the condition in which stop signals could occur in the periphery. The results show that an important component of stopping is finding a balance between ignoring irrelevant information in the environment and monitoring for the occurrence of occasional stop signals. These findings highlight the importance of sensory detection processes when stopping and could shed new light on a range of phenomena and findings in the response inhibition literature. |
Karl Verfaillie; S. Huysegems; Peter De Graef; Goedele Van Belle Impaired holistic and analytic face processing in congenital prosopagnosia: Evidence from the eye-contingent mask/window paradigm Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 503–521, 2014. @article{Verfaillie2014, There is abundant evidence that face recognition, in comparison to the recognition of other objects, is based on holistic processing rather than analytic processing. One line of research that provides evidence for this hypothesis is based on the study of people who experience pronounced difficulties in visually identifying conspecifics on the basis of their face. Earlier, we developed a behavioural paradigm to directly test analytic vs. holistic face processing. In comparison to a to be remembered reference face stimulus, one of two test stimuli was either presented in full view, with an eye-contingently moving window (only showing the fixated face feature, and therefore only affording analytic processing), or with an eye-contingently moving mask or scotoma (masking the fixated face feature, but still allowing holistic processing). In the present study we use this paradigm (that we used earlier in acquired prosopagnosia) to study face perception in congenital prosopagnosia (people having difficulties recognizing faces from birth on, without demonstrable brain damage). We observe both holistic and analytic face processing deficits in people with congenital prosopagnosia. Implications for a better understanding, both of congenital prosopagnosia and of normal face perception, are discussed. |
Alison M. Trude; Melissa C. Duff; Sarah Brown-Schmidt Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception Journal Article In: Cortex, vol. 54, no. 1, pp. 117–123, 2014. @article{Trude2014, A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia and contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language. |
Chien-Chih Tseng; Ching-Hui Chen; Hsuan-Chih Chen; Yao-Ting Sung; Kuo-En Chang Verification of Dual Factors theory with eye movements during a matchstick arithmetic insight problem Journal Article In: Thinking Skills and Creativity, vol. 13, pp. 129–140, 2014. @article{Tseng2014, Representational Change Theory claims that participants form inappropriate representations at the beginning of the insight problem solving process and that these initial representations must be transformed to discover the solution (Knoblich, Ohlsson, Haider, & Rhenius, 1999; Knoblich, Ohlsson, & Raney, 2001; Ohlsson, 1992). The theory also claims that all participants are trapped by inappropriate representations, regardless of the result, but it is easier for successful participants to transform their initial representations. However, the transformation of representations is not the only critical factor. This study investigates the hypothesis that the process of fixedness averting plays an important role in insight problem solving and is helpful for representational change. To verify the influence of fixedness averting on representational change processes, matchstick arithmetic problems were employed as an experimental model. In experiment 1, insight problem solving results could be predicted within the first third of the duration of the task. The gaze duration in the fixation region of successful participants was shorter than the gaze duration of unsuccessful participants. In experiment 2, participants' foci of attention were experimentally manipulated by presenting different animated diagrams to guide their attention. We found that the rate of correct responses was significantly reduced when participants' attention was guided to the fixation region. Representational Change Theory declares that changing inappropriate initial representations is necessary for solving insight problems. 
The present study demonstrates that in addition to representational change, fixedness averting is also crucial to problem solving. |
Lin-Yuan Tseng; Philip Tseng; Wei-Kuang Liang; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan The role of superior temporal sulcus in the control of irrelevant emotional face processing: A transcranial direct current stimulation study Journal Article In: Neuropsychologia, vol. 64, pp. 124–133, 2014. @article{Tseng2014a, Emotional faces are often salient cues of threats or other important contexts, and may therefore have a large effect on cognitive processes of the visual environment. Indeed, many behavioral studies have demonstrated that emotional information can modulate visual attention and eye movements. The aim of the present study was to investigate (1) how irrelevant emotional face distractors affect saccadic behaviors and (2) whether such emotional effects reflect a specific neural mechanism or merely biased selective attention. We combined a visual search paradigm that incorporated manipulation of different types of distractor (fearful faces or scrambled faces) and delivered anodal transcranial direct current stimulation (tDCS) over the superior temporal sulcus and the frontal eye field to investigate the functional roles of these areas in processing facial expressions and eye movements. Our behavioral data suggest that irrelevant emotional distractors can modulate saccadic behaviors. The tDCS results showed that while the right frontal eye field (rFEF) played a more general role in controlling saccadic behavior, the right superior temporal sulcus (rSTS) was mainly involved in facial expression processing. Furthermore, rSTS played a critical role in processing facial expressions even when such expressions were not relevant to the task goal, implying that facial expression processing may be automatic irrespective of the task goal. |
Yuan-Chi Tseng; Joshua I. Glaser; Eamon Caddigan; Alejandro Lleras Modeling the effect of selection history on pop-out visual search Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e89996, 2014. @article{Tseng2014b, While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. |
Yusuke Uchida; Nobuaki Mizuguchi; Masaaki Honda; Kazuyuki Kanosue Prediction of shot success for basketball free throws: Visual search strategy Journal Article In: European Journal of Sport Science, vol. 14, no. 5, pp. 426–432, 2014. @article{Uchida2014, In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed. |
Hiroshi Ueda; Kohske Takahashi; Katsumi Watanabe Influence of removal of invisible fixation on the saccadic and manual gap effect Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 329–336, 2014. @article{Ueda2014, Saccadic and manual reactions to a peripherally presented target are facilitated by removing a central fixation stimulus shortly before a target onset (the gap effect). The present study examined the effects of removal of a visible and invisible fixation point on the saccadic gap effect and the manual gap effect. Participants were required to fixate a central fixation point and respond to a peripherally presented target as quickly and accurately as possible by making a saccade (Experiment 1) or pressing a corresponding key (Experiment 2). The fixation point was dichoptically presented, and visibility was manipulated by using binocular rivalry and continuous flash suppression technique. In both saccade and key-press tasks, removing the visible fixation strongly quickened the responses. Furthermore, the invisible fixation, which remained on the display but suppressed, significantly delayed the saccadic response. Contrarily, the invisible fixation had no effect on the manual task. These results indicate that partially different processes mediate the saccadic gap effect and the manual gap effect. In particular, unconscious processes might modulate an oculomotor-specific component of the saccadic gap effect, presumably via subcortical mechanisms. |
Hiroshi Ueda; Kohske Takahashi; Katsumi Watanabe Effects of direct and averted gaze on the subsequent saccadic response Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 4, pp. 1085–1092, 2014. @article{Ueda2014a, The saccadic latency to visual targets is susceptible to the properties of the currently fixated objects. For example, the disappearance of a fixation stimulus prior to presentation of a peripheral target shortens saccadic latencies (the gap effect). In the present study, we investigated the influences of a social signal from a facial fixation stimulus (i.e., gaze direction) on subsequent saccadic responses in the gap paradigm. In Experiment 1, a cartoon face with a direct or averted gaze was used as a fixation stimulus. The pupils of the face were unchanged (overlap), disappeared (gap), or were translated vertically to make or break eye contact (gaze shift). Participants were required to make a saccade toward a target to the left or the right of the fixation stimulus as quickly as possible. The results showed that the gaze direction influenced saccadic latencies only in the gaze shift condition, but not in the gap or overlap condition; the direct-to-averted gaze shift (i.e., breaking eye contact) yielded shorter saccadic latencies than did the averted-to-direct gaze shift (i.e., making eye contact). Further experiments revealed that this effect was eye contact specific (Exp. 2) and that the appearance of an eye gaze immediately before the saccade initiation also influenced the saccadic latency, depending on the gaze direction (Exp. 3). These results suggest that the latency of target-elicited saccades can be modulated not only by physical changes of the fixation stimulus, as has been seen in the conventional gap effect, but also by a social signal from the attended fixation stimulus. |
Avinash R. Vaidya; Chenshuo Jin; Lesley K. Fellows Eye spy: The predictive value of fixation patterns in detecting subtle and extreme emotions from faces Journal Article In: Cognition, vol. 133, no. 2, pp. 443–456, 2014. @article{Vaidya2014, Successful social interaction requires recognizing subtle changes in the mental states of others. Deficits in emotion recognition are found in several neurological and psychiatric illnesses, and are often marked by disturbances in gaze patterns to faces, typically interpreted as a failure to fixate on emotionally informative facial features. However, there has been very little research on how fixations inform emotion recognition in healthy people. Here, we asked whether fixations predicted detection of subtle and extreme emotions in faces. We used a simple model to predict emotion detection scores from participants' fixation patterns. The best fit of this model heavily weighted fixations to the eyes in detecting subtle fear, disgust and surprise, with less weight, or zero weight, given to mouth and nose fixations. However, this model could not successfully predict detection of subtle happiness, or extreme emotional expressions, with the exception of fear. These findings argue that detection of most subtle emotions is best served by fixations to the eyes, with some contribution from nose and mouth fixations. In contrast, detection of extreme emotions and subtle happiness appeared to be less dependent on fixation patterns. The results offer a new perspective on some puzzling dissociations in the neuropsychological literature, and a novel analytic approach for the study of eye gaze in social or emotional settings. |
David E. Warren; Melissa C. Duff Not so fast: Hippocampal amnesia slows word learning despite successful fast mapping Journal Article In: Hippocampus, vol. 24, no. 8, pp. 920–933, 2014. @article{Warren2014, The human hippocampus is widely believed to be necessary for the rapid acquisition of new declarative relational memories. However, processes supporting on-line inferential word use ("fast mapping") may also exercise a dissociable learning mechanism and permit rapid word learning without the hippocampus (Sharon et al. (2011) Proc Natl Acad Sci USA 108:1146-1151). We investigated fast mapping in severely amnesic patients with hippocampal damage (N = 4), mildly amnesic patients (N = 6), and healthy comparison participants (N = 10) using on-line measures (eye movements) that reflected ongoing processing. All participants studied unique word-picture associations in two encoding conditions. In the explicit-encoding condition, uncommon items were paired with their names (e.g., "This is a numbat."). In the fast mapping study condition, participants heard an instruction using a novel word (e.g., "Click on the numbat.") while two items were presented (an uncommon target such as a numbat, and a common distracter such as a dog). All groups performed fast mapping well at study, and on-line eye movement measures did not reveal group differences. However, while comparison participants showed robust word learning irrespective of encoding condition, severely amnesic patients showed no evidence of learning after fast mapping or explicit encoding on any behavioral or eye-movement measure. Mildly amnesic patients showed some learning, but performance was unaffected by encoding condition. The findings are consistent with the following propositions: the hippocampus is not essential for on-line fast mapping of novel words, but it is necessary for the rapid learning of arbitrary relational information irrespective of encoding conditions. |
Matthew David Weaver; Davide Paoletti; Wieske Zoest The impact of predictive cues and visual working memory on dynamic oculomotor selection Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–15, 2014. @article{Weaver2014, Strategic use of advanced information about search display properties can benefit covert attentional selection. However, little work has investigated this benefit on overt selection. The present study examined how cued information impacts oculomotor selection over time and the role played by individual differences in visual working memory (VWM) capacity in utilizing such cues. Participants searched for a specific orientation target in a saccade localization search task. Prior to each trial, additional information regarding secondary display features (color singleton identity) was either provided by a word cue or not. The cue increased accuracy performance from the earliest saccadic responses. VWM capacity was measured via a change-detection task and results showed that individuals' VWM capacity scores were associated with cue impact, whereby participants with higher capacity derived an increased cue performance benefit. These findings suggest that strategic use of cue information to select and reject salient singletons can develop very early following display presentation and is related to an individual's VWM capacity. This research indicates that stimulus- driven and goal-directed processes are not simply additive in oculomotor selection, but instead exhibit a distinct and dynamic profile of interaction. |
Katharina Weiß; Werner X. Schneider; Arvid Herwig Associating peripheral and foveal visual input across saccades: A default mode of the human visual system? Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–15, 2014. @article{Weiss2014, Spatial processing resolution of a particular object in the visual field can differ considerably due to eye movements. The same object will be represented with high acuity in the fovea but only coarsely in periphery. Herwig and Schneider (in press) proposed that the visual system counteracts such resolution differences by predicting, based on previous experience, how foveal objects will look in the periphery and vice versa. They demonstrated that previously learned transsaccadic associations between peripheral and foveal object information facilitate performance in visual search, irrespective of the correctness of these associations. False associations were learned by replacing the presaccadic object with a slightly different object during the saccade. Importantly, participants usually did not notice this object change. This raises the question of whether perception of object continuity is a critical factor in building transsaccadic associations. We disturbed object continuity during learning with a postsaccadic blank or a task-irrelevant shape change. Interestingly, visual search performance revealed that neither disruption of temporal object continuity (blank) nor disruption of spatial object continuity (shape change) impaired transsaccadic learning. Thus, transsaccadic learning seems to be a very robust default mechanism of the visual system that is probably related to the more general concept of action–effect learning. |
Mike Wendt; Andrea Kiesel; Franziska Geringswald; Sascha Purmann; Rico Fischer In: Experimental Psychology, vol. 61, no. 1, pp. 55–67, 2014. @article{Wendt2014, Current models of cognitive control assume gradual adjustment of processing selectivity to the strength of conflict evoked by distractor stimuli. Using a flanker task, we varied conflict strength by manipulating target and distractor onset. Replicating previous findings, flanker interference effects were larger on trials associated with advance presentation of the flankers compared to simultaneous presentation. Controlling for stimulus and response sequence effects by excluding trials with feature repetitions from stimulus administration (Experiment 1) or from the statistical analyses (Experiment 2), we found a reduction of the flanker interference effect after high-conflict predecessor trials (i.e., trials associated with advance presentation of the flankers) but not after low-conflict predecessor trials (i.e., trials associated with simultaneous presentation of target and flankers). This result supports the assumption of conflict-strength-dependent adjustment of visual attention. The selective adaptation effect after high-conflict trials was associated with an increase in prestimulus pupil diameter, possibly reflecting increased cognitive effort of focusing attention. |
Jessica Werthmann; Matt Field; Anne Roefs; Chantal Nederkoorn; Anita Jansen Attention bias for chocolate increases chocolate consumption - An attention bias modification study Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 45, no. 1, pp. 136–143, 2014. @article{Werthmann2014a, Objective The current study examined experimentally whether a manipulated attention bias for food cues increases craving, chocolate intake and motivation to search for hidden chocolates. Method To test the effect of attention for food on subsequent chocolate intake, attention for chocolate was experimentally modified by instructing participants to look at chocolate stimuli ("attend chocolate" group) or at non-food stimuli ("attend shoes" group) during a novel attention bias modification task (antisaccade task). Chocolate consumption, changes in craving and search time for hidden chocolates were assessed. Eye-movement recordings were used to monitor the accuracy during the experimental attention modification task as possible moderator of effects. Regression analyses were conducted to test the effect of attention modification and modification accuracy on chocolate intake, craving and motivation to search for hidden chocolates. Results Results showed that participants with higher accuracy (+1 SD) ate more chocolate when they had to attend to chocolate and ate less chocolate when they had to attend to non-food stimuli. In contrast, for participants with lower accuracy (-1 SD), the results were exactly reversed. No effects of the experimental attention modification on craving or search time for hidden chocolates were found. Limitation We used chocolate as food stimuli, so it remains unclear how our findings generalize to other types of food. Conclusion These findings demonstrate further evidence for a link between attention for food and food intake, and provide an indication about the direction of this relationship. |
Jessica Werthmann; Fritz Renner; Anne Roefs; Marcus J. H. Huibers; Lana Plumanns; Nora Krott; Anita Jansen Looking at food in sad mood: Do attention biases lead emotional eaters into overeating after a negative mood induction? Journal Article In: Eating Behaviors, vol. 15, no. 2, pp. 230–236, 2014. @article{Werthmann2014, Background: Emotional eating is associated with overeating and the development of obesity. Yet, empirical evidence for individual (trait) differences in emotional eating and cognitive mechanisms that contribute to eating during sad mood remains equivocal. Aim: The aim of this study was to test if attention bias for food moderates the effect of self-reported emotional eating during sad mood (vs neutral mood) on actual food intake. It was expected that emotional eating is predictive of elevated attention for food and higher food intake after an experimentally induced sad mood and that attentional maintenance on food predicts food intake during a sad versus a neutral mood. Method: Participants (N = 85) were randomly assigned to one of the two experimental mood induction conditions (sad/neutral). Attentional biases for high caloric foods were measured by eye tracking during a visual probe task with pictorial food and neutral stimuli. Self-reported emotional eating was assessed with the Dutch Eating Behavior Questionnaire (DEBQ) and ad libitum food intake was tested by a disguised food offer. Results: Hierarchical multivariate regression modeling showed that self-reported emotional eating did not account for changes in attention allocation for food or food intake in either condition. Yet, attention maintenance on food cues was significantly related to increased intake specifically in the neutral condition, but not in the sad mood condition. Discussion: The current findings show that self-reported emotional eating (based on the DEBQ) might not validly predict who overeats when sad, at least not in a laboratory setting with healthy women. 
Results further suggest that attention maintenance on food relates to eating motivation when in a neutral affective state, and might therefore be a cognitive mechanism contributing to increased food intake in general, but maybe not during sad mood. |
Eva Wiese; Agnieszka Wykowska; Hermann J. Muller In: PLoS ONE, vol. 9, no. 4, pp. e94529, 2014. @article{Wiese2014, For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes. |
Benjamin A. Wolfe; David Whitney Facilitating recognition of crowded faces with presaccadic attention Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 103, 2014. @article{Wolfe2014, In daily life, we make several saccades per second to objects we cannot normally recognize in the periphery due to visual crowding. While we are aware of the presence of these objects, we cannot identify them and may, at best, only know that an object is present at a particular location. The process of planning a saccade involves a presaccadic attentional component known to be critical for saccadic accuracy, but whether this or other presaccadic processes facilitate object identification as opposed to object detection, especially with high-level natural objects like faces, is less clear. In the following experiments, we show that presaccadic information about a crowded face reduces the deleterious effect of crowding, facilitating discrimination of two emotional faces, even when the target face is never foveated. While accurate identification of crowded objects is possible in the absence of a saccade, accurate identification of a crowded object is considerably facilitated by presaccadic attention. Our results provide converging evidence for a selective increase in available information about high-level objects, such as faces, at a presaccadic stage. |
Wolfgang Schnotz; Ulrich Ludewig; Mark Ullrich; Holger Horz; Nele McElvany; Jürgen Baumert Strategy shifts during learning from texts and pictures Journal Article In: Journal of Educational Psychology, vol. 106, no. 4, pp. 974, 2014. @article{Wolfgang2014, Reading for learning frequently requires integrating text and picture information into coherent knowledge structures. This article presents an experimental study aimed at analyzing the strategies used by students for integrating text and picture information. Four combinations of texts and pictures (text–picture units) were selected from textbooks on biology and geography, each combined with 3 comprehension test items of different complexity. Item difficulties were assessed in terms of item-response theory and through a cognitive task analysis. The texts, pictures, and items were presented to 40 students from Grades 5 and 8 from the higher tier and the lower tier of the German school system. Participants were asked to process the material and answer the items. Students' eye movements were recorded and analyzed in terms of number of fixations on different areas of interest as well as eye-movement transitions between these areas. Results suggest that text and pictures serve fundamentally different functions associated with different processing strategies in goal-directed knowledge acquisition. Texts are more likely to be used for coherence-oriented general processing. They guide the learner's conceptual analysis of the subject matter, which results in a coherent semantic network and initial mental model. Pictures are used as scaffolds for the initial mental model construction. Afterward, however, they are more likely to be used for task-driven selective processing serving as easily accessible visual representations on demand for item-specific mental model updates. |
Caitlin A. Wright; Keith S. Dobson; Christopher R. Sears Does a high working memory capacity attenuate the negative impact of trait anxiety on attentional control? Evidence from the antisaccade task Journal Article In: Journal of Cognitive Psychology, vol. 26, no. 4, pp. 400–412, 2014. @article{Wright2014, According to attentional control theory, high trait anxious individuals experience reduced attentional control as compared to low trait anxious individuals due to the imbalance between goal-directed and stimulus-driven attentional systems. One consequence is that high trait anxious individuals have difficulty resisting distraction, as compared to low trait anxious individuals. A separate line of research on individual differences in working memory capacity (WMC) has shown that individuals with higher WMC have better attentional control and thus are better able to resist distraction. The present study investigated the hypothesis that high WMC compensates for high trait anxiety in a task that evaluates the ability to resist distraction, the antisaccade task. Participants completed the State-Trait Anxiety Inventory to measure trait anxiety and the Operation Span and Reading Span tasks to measure WMC. As hypothesised, individuals who were high trait anxious exhibited increased attentional control on the antisaccade task when they had high WMC. Theoretical implications and directions for future research are discussed. |
Jessica M. Wright; Bart Krekelberg Transcranial direct current stimulation over posterior parietal cortex modulates visuospatial localization Journal Article In: Journal of Vision, vol. 14, no. 9, pp. 5–5, 2014. @article{Wright2014a, Visual localization is based on the complex interplay of bottom-up and top-down processing. Based on previous work, the posterior parietal cortex (PPC) is assumed to play an essential role in this interplay. In this study, we investigated the causal role of the PPC in visual localization. Specifically, our goal was to determine whether modulation of the PPC via transcranial direct current stimulation (tDCS) could induce visual mislocalization similar to that induced by an exogenous attentional cue (Wright, Morris, & Krekelberg, 2011). We placed one stimulation electrode over the right PPC and the other over the left PPC (dual tDCS) and varied the polarity of the stimulation. We found that this manipulation altered visual localization; this supports the causal involvement of the PPC in visual localization. Notably, mislocalization was more rightward when the cathode was placed over the right PPC than when the anode was placed over the right PPC. This mislocalization was found within a few minutes of stimulation onset, it dissipated during stimulation, but then resurfaced after stimulation offset and lasted for another 10-15 min. On the assumption that excitability is reduced beneath the cathode and increased beneath the anode, these findings support the view that each hemisphere biases processing to the contralateral hemifield and that the balance of activation between the hemispheres contributes to position perception (Kinsbourne, 1977; Szczepanski, Konen, & Kastner, 2010). |
Chia-Chien Wu; Hsueh-Cheng Wang; Marc Pomplun The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes Journal Article In: Vision Research, vol. 105, pp. 10–20, 2014. @article{Wu2014, A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. |
David W. -L. Wu; Nicola C. Anderson; Walter F. Bischof; Alan Kingstone Temporal dynamics of eye movements are related to differences in scene complexity and clutter Journal Article In: Journal of Vision, vol. 14, no. 9, pp. 8–8, 2014. @article{Wu2014a, Recent research has begun to explore not just the spatial distribution of eye fixations but also the temporal dynamics of how we look at the world. In this investigation, we assess how scene characteristics contribute to these fixation dynamics. In a free-viewing task, participants viewed three scene types: fractal, landscape, and social scenes. We used a relatively new method, recurrence quantification analysis (RQA), to quantify eye movement dynamics. RQA revealed that eye movement dynamics were dependent on the scene type viewed. To understand the underlying cause for these differences we applied a technique known as fractal analysis and discovered that complexity and clutter are two scene characteristics that affect fixation dynamics, but only in scenes with meaningful content. Critically, scene primitives—revealed by saliency analysis—had no impact on performance. In addition, we explored how RQA differs from the first half of the trial to the second half, as well as the potential to investigate the precision of fixation targeting by changing RQA radius values. Collectively, our results suggest that eye movement dynamics result from top-down viewing strategies that vary according to the meaning of a scene and its associated visual complexity and clutter. |
David W. -L. Wu; Walter F. Bischof; Nicola C. Anderson; Tanya Jakobsen; Alan Kingstone The influence of personality on social attention Journal Article In: Personality and Individual Differences, vol. 60, pp. 25–29, 2014. @article{Wu2014b, The intersection between personality psychology and the study of social attention has been relatively untouched. We present an initial study that investigates the influence of the Big Five personality traits on eye movement behaviour towards social stimuli. By combining a free-viewing eye-tracking paradigm with canonical correlation and regression analyses, we discover that personality relates to fixations towards eye regions. Specifically, Extraversion and Agreeableness were related to greater gaze selection, while Openness to Experience was related to diminished gaze selection. The results demonstrate that who a person is affects how they move their eyes to social stimuli. The results also indicate that personality is a stronger factor in predicting social attention than past studies have suggested. Critical to the influence of personality on attention is the social situations viewers are placed in. |
Juan Xu; Ming Jiang; Shuo Wang; Mohan S. Kankanhalli; Qi Zhao Predicting human gaze beyond pixels Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–20, 2014. @article{Xu2014, A large body of previous models to predict where people look in natural scenes focused on pixel-level image attributes. To bridge the semantic gap between the predictive power of computational saliency models and human behavior, we propose a new saliency architecture that incorporates information at three layers: pixel-level image attributes, object-level attributes, and semantic-level attributes. Object- and semantic-level information is frequently ignored, or only a few sample object categories are discussed where scaling to a large number of object categories is neither feasible nor neurally plausible. To address this problem, this work constructs a principled vocabulary of basic attributes to describe object- and semantic-level information, thus avoiding restriction to a limited number of object categories. We build a new dataset of 700 images with eye-tracking data of 15 viewers and annotation data of 5,551 segmented objects with fine contours and 12 semantic attributes (publicly available with the paper). Experimental results demonstrate the importance of the object- and semantic-level information in the prediction of visual attention. |
Yoshiko Yabe; Melvyn A. Goodale; Hiroaki Shigemasu Temporal order judgments are disrupted more by reflexive than by voluntary saccades Journal Article In: Journal of Neurophysiology, vol. 111, no. 10, pp. 2103–2108, 2014. @article{Yabe2014, We do not always perceive the sequence of events as they actually unfold. For example, when two events occur before a rapid eye movement (saccade), the interval between them is often perceived as shorter than it really is and the order of those events can be sometimes reversed (Morrone MC, Ross J, Burr DC. Nat Neurosci 8: 950-954, 2005). In the present article we show that these misperceptions of the temporal order of events critically depend on whether the saccade is reflexive or voluntary. In the first experiment, participants judged the temporal order of two visual stimuli that were presented one after the other just before a reflexive or voluntary saccadic eye movement. In the reflexive saccade condition, participants moved their eyes to a target that suddenly appeared. In the voluntary saccade condition, participants moved their eyes to a target that was present already. Similarly to the above-cited study, we found that the temporal order of events was often misjudged just before a reflexive saccade to a suddenly appearing target. However, when people made a voluntary saccade to a target that was already present, there was a significant reduction in the probability of misjudging the temporal order of the same events. In the second experiment, the reduction was seen in a memory-delay task. It is likely that the nature of the motor command and its origin determine how time is perceived during the moments preceding the motor act. |
Keir X. X. Yong; Timothy J. Shakespeare; Dave Cash; Susie M. D. Henley; Jennifer M. Nicholas; Gerard R. Ridgway; Hannah L. Golden; Elizabeth K. Warrington; Amelia M. Carton; Diego Kaski; Jonathan M. Schott; Jason D. Warren; Sebastian J. Crutch Prominent effects and neural correlates of visual crowding in a neurodegenerative disease population Journal Article In: Brain, vol. 137, no. 12, pp. 3284–3299, 2014. @article{Yong2014, Crowding is a breakdown in the ability to identify objects in clutter, and is a major constraint on object recognition. Crowding particularly impairs object perception in peripheral, amblyopic and possibly developing vision. Here we argue that crowding is also a critical factor limiting object perception in central vision of individuals with neurodegeneration of the occipital cortices. In the current study, individuals with posterior cortical atrophy (n=26), typical Alzheimer's disease (n=17) and healthy control subjects (n=14) completed centrally-presented tests of letter identification under six different flanking conditions (unflanked, and with letter, shape, number, same polarity and reverse polarity flankers) with two different target-flanker spacings (condensed, spaced). Patients with posterior cortical atrophy were significantly less accurate and slower to identify targets in the condensed than spaced condition even when the target letters were surrounded by flankers of a different category. Importantly, this spacing effect was observed for same, but not reverse, polarity flankers. The difference in accuracy between spaced and condensed stimuli was significantly associated with lower grey matter volume in the right collateral sulcus, in a region lying between the fusiform and lingual gyri. 
Detailed error analysis also revealed that similarity between the error response and the averaged target and flanker stimuli (but not individual target or flanker stimuli) was a significant predictor of error rate, more consistent with averaging than substitution accounts of crowding. Our findings suggest that crowding in posterior cortical atrophy can be regarded as a pre-attentive process that uses averaging to regularize the pathologically noisy representation of letter feature position in central vision. These results also help to clarify the cortical localization of feature integration components of crowding. More broadly, we suggest that posterior cortical atrophy provides a neurodegenerative disease model for exploring the basis of crowding. These data have significant implications for patients with, or who will go on to develop, dementia-related visual impairment, in whom acquired excessive crowding likely contributes to deficits in word, object, face and scene perception. |
Angela J. Yu; He Huang; Pradeep Shenoy Maximizing masquerading as matching in human visuosaccadic choice Journal Article In: Cognitive Science, vol. 1, no. 4, pp. 1–23, 2014. @article{Yu2014, There has been a long-running debate over whether humans match or maximize when faced with differentially rewarding options under conditions of uncertainty. While maximizing, that is, consistently choosing the most rewarding option, is theoretically optimal, humans have often been observed to match, that is, allocating choices stochastically in proportion to the underlying reward rates. Previous models assumed matching behavior to arise from biological limitations or heuristic decision strategies; this, however, would stand in curious contrast to the accumulating evidence that humans have sophisticated machinery for tracking environmental statistics. It begs the question of why the brain would build sophisticated representations of environmental statistics, only then to adopt a heuristic decision policy that fails to take full advantage of that information. Here, we revisit this debate by presenting data from a novel visual search task, which are shown to favor a particular Bayesian inference and decision-making account over other heuristic and normative models. Specifically, while subjects' first-fixation strategy appears to indicate matching in aggregate data, they actually maximize on a finer, trial-by-trial timescale, based on continuously updated internal beliefs about the spatial distribution of potential target locations. In other words, matching-like stochasticity in human visual search is neither random nor heuristics-based, but attributable specifically to fluctuating beliefs about stimulus statistics. These results not only shed light on the matching versus maximizing debate, but also more broadly on human decision-making strategies under conditions of uncertainty. |
Lisa N. Jefferies; Leon Gmeindl; Steven Yantis Attending to illusory differences in object size Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 5, pp. 1393–1402, 2014. @article{Jefferies2014, Focused visual attention can be shifted between objects and locations (attentional orienting) or expanded and contracted in spatial extent (attentional focusing). Although orienting and focusing both modulate visual processing, they have been shown to be distinct, independent modes of attentional control. Objects play a central role in visual attention, and it is known that high-level object representations guide attentional orienting. It is not known, however, whether attentional focusing is driven by low-level object representations (which code object size in terms of retinotopic extent) or by high-level representations (which code perceived size). We manipulated the perceived size of physically identical objects by using line drawings or photographs that induced the Ponzo illusion, in a task requiring the detection of a target within these objects. The distribution of attention was determined by the perceived size and not by the retinotopic size of an attended object, indicating that attentional focusing is guided by high-level object representations. |
Yu-Cin Jian; Chao-Jung Wu; Jia-Han Su Learners' eye movements during construction of mechanical kinematic representations from static diagrams Journal Article In: Learning and Instruction, vol. 32, pp. 51–62, 2014. @article{Jian2014, We investigated the influence of numbered arrows on construction of mechanical kinematic representations by using static diagrams. Undergraduate participants viewed a two-stage diagram depicting a flushing cistern (with or without numbered arrows) and answered questions about its function, step-by-step. The arrow group demonstrated greater overall accuracy and made fewer errors on the measure of continuous relations than did the non-arrow group. The arrow group also spent more time looking at components relevant to the operational sequence and had longer first-pass fixation times and shorter saccade lengths. The non-arrow group made more saccades between the two diagrams. Analysis of transition probabilities indicated that both groups viewed components according to their continuous relations. The arrow group followed the numbered arrows whereas the unique pathway of the non-arrow group was to compare the two diagrams. These findings indicate that numbered arrows provide perceptual information but also facilitate cognitive processing. |