EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2015 |
Anna Wilschut; Jan Theeuwes; Christian N. L. Olivers Nonspecific competition underlies transient attention Journal Article In: Psychological Research, vol. 79, no. 5, pp. 844–860, 2015. @article{Wilschut2015, Cueing a target by abrupt visual stimuli enhances its perception in a rapid but short-lived fashion, an effect known as transient attention. Our recent study showed that when targets are cued at a constant, central location, the emergence of the transient performance pattern was dependent on the presence of competing distractors, whereas targets presented in isolation were enhanced in a sustained manner (Wilschut et al., PLoS ONE, 6:e27661, 2011). The current study examined in more detail whether the transience depends on the specific nature of the competition. We first replicated and extended the competition-dependent transient pattern for peripheral and variable target locations. We then investigated the role of feature similarity, compatibility, and proximity. Both competition by feature similarity and compatibility between the target and distractors were found to impair performance, but effects were additive with the effects of the cueing interval and did not change the transient performance function. Varying the spatial distance between target and distractors yielded mixed evidence, but here too a transient pattern could be observed for targets flanked by both close and far distractors. The results thus show that the presence or absence of competition determines whether attention appears transient or sustained, while the specific nature of the competition (in terms of location or feature) affects selection independent of time. |
Christian Wolf; Alexander C. Schütz Trans-saccadic integration of peripheral and foveal feature information is close to optimal Journal Article In: Journal of Vision, vol. 15, no. 16, pp. 1–18, 2015. @article{Wolf2015, Due to the inhomogeneous visual representation across the visual field, humans use peripheral vision to select objects of interest and foveate them by saccadic eye movements for further scrutiny. Thus, there is usually peripheral information available before and foveal information after a saccade. In this study we investigated the integration of information across saccades. We measured reliabilities (i.e., the inverse of variance) separately in a presaccadic peripheral and a postsaccadic foveal orientation-discrimination task. From this, we predicted trans-saccadic performance and compared it to observed values. We show that the integration of incongruent peripheral and foveal information is biased according to their relative reliabilities and that the reliability of the trans-saccadic information equals the sum of the peripheral and foveal reliabilities. Both results are consistent with and indistinguishable from statistically optimal integration according to the maximum-likelihood principle. Additionally, we tracked the gathering of information around the time of the saccade with high temporal precision by using a reverse correlation method. Information gathering starts to decline between 100 and 50 ms before saccade onset and recovers immediately after saccade offset. Altogether, these findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation. |
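The maximum-likelihood integration rule summarized in this abstract reduces to two standard cue-combination equations: the combined estimate weights each cue by its relative reliability, and the combined reliability is the sum of the individual reliabilities. A minimal Python sketch (illustrative variable names and values, not the authors' analysis code):

```python
# Maximum-likelihood (reliability-weighted) cue integration sketch.
# Reliability r = 1 / variance, per the abstract's definition.

def ml_integrate(est_peripheral, var_peripheral, est_foveal, var_foveal):
    r_p = 1.0 / var_peripheral   # presaccadic peripheral reliability
    r_f = 1.0 / var_foveal       # postsaccadic foveal reliability
    w_p = r_p / (r_p + r_f)      # relative weight of the peripheral cue
    combined_estimate = w_p * est_peripheral + (1.0 - w_p) * est_foveal
    combined_reliability = r_p + r_f  # predicted trans-saccadic reliability
    return combined_estimate, combined_reliability

# Example: a foveal cue four times as reliable dominates the estimate.
# w_p = 0.25 / (0.25 + 1.0) = 0.2, so est = 0.2*10 + 0.8*12 = 11.6
# and combined reliability = 0.25 + 1.0 = 1.25.
est, rel = ml_integrate(10.0, 4.0, 12.0, 1.0)  # orientations in degrees
```

Comparing the observed trans-saccadic reliability against `combined_reliability` is what lets the study call integration "close to optimal."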
Benjamin A. Wolfe; Anna A. Kosovicheva; Allison Yamanashi Leib; Katherine Wood; David Whitney Foveal input is not required for perception of crowd facial expression Journal Article In: Journal of Vision, vol. 15, no. 4, pp. 1–11, 2015. @article{Wolfe2015a, The visual system extracts average features from groups of objects (Ariely, 2001; Dakin & Watt, 1997; Watamaniuk & Sekuler, 1992), including high-level stimuli such as faces (Haberman & Whitney, 2007, 2009). This phenomenon, known as ensemble perception, implies a covert process, which would not require fixation of individual stimulus elements. However, some evidence suggests that ensemble perception may instead be a process of averaging foveal input across sequential fixations (Ji, Chen, & Fu, 2013; Jung, Bulthoff, Thornton, Lee, & Armann, 2013). To test directly whether foveating objects is necessary, we measured observers' sensitivity to average facial emotion in the absence of foveal input. Subjects viewed arrays of 24 faces, either in the presence or absence of a gaze-contingent foveal occluder, and adjusted a test face to match the average expression of the array. We found no difference in accuracy between the occluded and non-occluded conditions, demonstrating that foveal input is not required for ensemble perception. Unsurprisingly, without foveal input, subjects spent significantly less time directly fixating faces, but this did not translate into any difference in sensitivity to ensemble expression. Next, we varied the number of faces visible from the set to test whether subjects average multiple faces from the crowd. In both conditions, subjects' performance improved as more faces were presented, indicating that subjects integrated information from multiple faces in the display regardless of whether they had access to foveal information. Our results demonstrate that ensemble perception can be a covert process, not requiring access to direct foveal information. |
Benjamin A. Wolfe; David Whitney Saccadic remapping of object-selective information Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 7, pp. 2260–2269, 2015. @article{Wolfe2015, Saccadic remapping, a presaccadic increase in neural activity when a saccade is about to bring an object into a neuron's receptive field, may be crucial for our perception of a stable world. Studies of perception and saccadic remapping, like ours, focus on the presaccadic acquisition of information from the saccade target, with no direct reference to underlying physiology. While information is known to be acquired prior to a saccade, it is unclear whether object-selective or feature-specific information is remapped. To test this, we performed a series of psychophysical experiments in which we presented a peripheral, nonfoveated face as a presaccadic target. The target face disappeared at saccade onset. After making a saccade to the location of the peripheral target face (which was no longer visible), subjects misperceived the expression of a subsequent, foveally presented neutral face as being repelled away from the peripheral presaccadic face target. This effect was similar to a sequential shape contrast or negative aftereffect but required a saccade, because covert attention was not sufficient to generate the illusion. Additional experiments further revealed that inverting the faces disrupted the illusion, suggesting that presaccadic remapping is object-selective and not based on low-level features. Our results demonstrate that saccadic remapping can be an object-selective process, spatially tuned to the target of the saccade and distinct from covert attention in the absence of a saccade. |
Michael J. Wolff; Sabine Scholz; Elkan G. Akyürek; Hedderik van Rijn Two visual targets for the price of one? Pupil dilation shows reduced mental effort through temporal integration Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 1, pp. 251–257, 2015. @article{Wolff2015, In dynamic sensory environments, successive stimuli may be combined perceptually and represented as a single, comprehensive event by means of temporal integration. Such perceptual segmentation across time is intuitively plausible. However, the possible costs and benefits of temporal integration in perception remain underspecified. In the present study pupil dilation was analyzed as a measure of mental effort. Observers viewed either one or two successive targets amidst distractors in rapid serial visual presentation, which they were asked to identify. Pupil dilation was examined dependent on participants' report: dilation associated with the report of a single target, of two targets, and of an integrated percept consisting of the features of both targets. There was a clear distinction between dilation observed for single-target reports and integrations on the one side, and two-target reports on the other. Regardless of report order, two-target reports produced increased pupil dilation, reflecting increased mental effort. The results thus suggested that temporal integration reduces mental effort and may thereby facilitate perceptual processing. |
Thomas D. Wright; Aaron Margolis; Jamie Ward Using an auditory sensory substitution device to augment vision: evidence from eye movements Journal Article In: Experimental Brain Research, vol. 233, no. 3, pp. 851–860, 2015. @article{Wright2015, Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes. |
Timothy J. Wright; Walter R. Boot; James R. Brockmole Functional fixedness: The functional significance of delayed disengagement based on attention set. Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 17–21, 2015. @article{Wright2015a, During search, the disengagement of attention is automatically delayed when a fixated but task-irrelevant object shares features of the search target. We examined whether delayed disengagement based on top-down attention set is potentially functional, resulting in additional processing of the fixated item. To accomplish this, we adapted the oculomotor disengagement paradigm. Participants saccaded to a peripheral object of a particular color and responded to the identity of the letter within it. To initiate search participants made a saccade away from an always irrelevant object at the center of the screen that matched or mismatched the target's color and contained a letter that was congruent or incongruent with the target letter. We found that delayed disengagement based on attention set was associated with deeper processing of the center item: a congruency effect between the center letter and peripheral target letter was only observed when the center object's color matched participants' attention set. Results are consistent with the proposal that delayed disengagement based on attention set is functionally significant, automatically encouraging deeper levels of processing of target-like objects that fall within the focus of attention. |
Timothy J. Wright; Walter R. Boot; John L. Jones Exploring the breadth of the top-down representations that control attentional disengagement Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 5, pp. 993–1006, 2015. @article{Wright2015b, |
Timothy J. Wright; Thomas Vitale; Walter R. Boot; Neil Charness The impact of red light running camera flashes on younger and older drivers' attention and oculomotor control Journal Article In: Psychology and Aging, vol. 30, no. 4, pp. 755–767, 2015. @article{Wright2015c, Recent empirical evidence suggests that the flashes associated with red light running cameras (RLRCs) distract younger drivers, pulling attention away from the roadway and delaying processing of safety-relevant events. Considering the perceptual and attentional declines that occur with age, older drivers may be especially susceptible to the distracting effects of RLRC flashes, particularly in situations in which the flash is more salient (a bright flash at night compared to the day). The current study examined how age and situational factors potentially influence attention capture by RLRC flashes using covert (cuing effects) and overt (eye movement) indices of capture. We manipulated the salience of the flash by varying its luminance and contrast with respect to the background of the driving scene (either day or night scenes). Results of two experiments suggest that simulated RLRC flashes capture observers' attention, but, surprisingly, no age differences in capture were observed. However, an analysis examining early and late eye movements revealed that older adults may have been strategically delaying their eye movements in order to avoid capture. Additionally, older adults took longer to disengage attention following capture, suggesting at least one age-related disadvantage in capture situations. Findings have theoretical implications for understanding age differences in attention capture, especially with respect to capture in real-world scenes, and inform future work that should examine how the distracting effects of RLRC flashes influence driver behavior. |
Valentin Wyart; Nicholas E. Myers; Christopher Summerfield Neural mechanisms of human perceptual choice under focused and divided attention Journal Article In: Journal of Neuroscience, vol. 35, no. 8, pp. 3485–3498, 2015. @article{Wyart2015, Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and places new capacity constraints on decision-theoretic models of information integration under cognitive load. |
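The "leaky" accumulation process this abstract infers from the motor-band signals can be illustrated with a generic leaky-integrator toy model (an illustrative sketch under assumed parameter values, not the authors' model code): with a constant leak, earlier samples are discounted exponentially, so recent samples carry more weight in the final decision value.

```python
def leaky_accumulate(samples, leak=0.2):
    """Integrate momentary evidence samples with a constant leak.

    Each step retains (1 - leak) of the running total before adding the
    next sample, so a sample k steps before the end is weighted by
    (1 - leak) ** k: recent evidence dominates the final decision value,
    mirroring the recency effect reported under divided attention.
    """
    total = 0.0
    for s in samples:
        total = (1.0 - leak) * total + s
    return total

# With leak = 0.5, three equal samples [1, 1, 1] contribute
# 0.25 + 0.5 + 1.0 = 1.75; with leak = 0, integration is perfect (3.0).
```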
Jianbo Xiao; Xin Huang Distributed and dynamic neural encoding of multiple motion directions of transparently moving stimuli in cortical area MT Journal Article In: Journal of Neuroscience, vol. 35, no. 49, pp. 16180–16198, 2015. @article{Xiao2015, Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. 
Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. |
Ying-Zi Xiong; Cong Yu; Jun-Yun Zhang Perceptual learning eases crowding by reducing recognition errors but not position errors Journal Article In: Journal of Vision, vol. 15, no. 11, pp. 16, 2015. @article{Xiong2015, When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors. |
Yangqing Xu; Steven L. Franconeri Capacity for visual features in mental rotation Journal Article In: Psychological Science, vol. 26, no. 8, pp. 1241–1251, 2015. @article{Xu2015, Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts. |
Yoshiko Yabe; Melvyn A. Goodale Time flies when we intend to act: Temporal distortion in a Go/No-Go task Journal Article In: Journal of Neuroscience, vol. 35, no. 12, pp. 5023–5029, 2015. @article{Yabe2015, Although many of our actions are triggered by sensory events, almost nothing is known about our perception of the timing of those sensory events. Here we show that, when people react to a sudden visual stimulus that triggers an action, that stimulus is perceived to occur later than an identical stimulus that does not trigger an action. In our experiments, participants fixated the center of a clock face with a rotating second hand. When the clock changed color, they were required to make a motor response and then to report the position of the second hand at the moment the clock changed color. In Experiment 1, in which participants made a target-directed saccade, the color change was perceived to occur 59 ms later than when they maintained fixation. In Experiment 2, in which we used a go/no-go paradigm, this temporal distortion was observed even when participants were required to cancel a prepared saccade. Finally, in Experiment 3, the same distortion in perceived time was observed for both go and no-go trials in a manual task in which no eye movements were required. These results suggest that, when a visual stimulus triggers an action, it is perceived to occur significantly later than an identical stimulus unrelated to action. Moreover, this temporal distortion appears to be related not to the execution of the action (or its effect) but rather to the programming of the action. In short, there seems to be a temporal binding between a triggering event and the triggered action. |
Lingling Yang; Howard Leung; Markus Plank; Joe Snider; Howard Poizner EEG activity during movement planning encodes upcoming peak speed and acceleration and improves the accuracy in predicting hand kinematics Journal Article In: IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 22–28, 2015. @article{Yang2015, The relationship between movement kinematics and human brain activity is an important and fundamental question for the development of neural prostheses. The peak velocity and the peak acceleration best reflect the feedforward component of movement; thus, it is worthwhile to investigate them further. Most related studies focused on the correlation between kinematics and brain activity during movement execution or imagery. However, human movement is the result of the motor planning phase as well as the execution phase, and researchers have demonstrated, using grand-average analysis, that statistical correlations exist between EEG activity during motor planning and the peak velocity and the peak acceleration. In this paper, we examined whether these correlations were concealed, in trial-to-trial decoding, by the low signal-to-noise ratio of EEG activity. The alpha and beta powers from the movement planning phase were combined with the alpha and beta powers from the movement execution phase to predict the peak tangential speed and acceleration. The results showed that EEG activity from the motor planning phase could also predict the peak speed and the peak acceleration with a reasonable accuracy. Furthermore, the decoding accuracy of the peak speed and the peak acceleration could both be improved by combining band powers from the motor planning phase with the band powers from the movement execution. |
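The trial-to-trial decoding described above amounts to regressing per-trial peak kinematics onto planning- and execution-phase band powers. A least-squares sketch follows; the feature layout and the plain linear fit are assumptions for illustration, not the authors' actual pipeline:

```python
import numpy as np

def fit_kinematics_decoder(band_powers, peak_kinematics):
    """Fit a linear map from per-trial EEG band powers to a kinematic value.

    band_powers: (n_trials, n_features) array, e.g. alpha/beta power from
    the planning phase concatenated with alpha/beta power from execution.
    peak_kinematics: (n_trials,) array of peak speed or peak acceleration.
    Returns least-squares weights, with the intercept as the first entry.
    """
    X = np.column_stack([np.ones(len(band_powers)), band_powers])
    weights, *_ = np.linalg.lstsq(X, peak_kinematics, rcond=None)
    return weights

def predict_kinematics(weights, band_powers):
    # Apply the fitted weights to new trials' band powers.
    X = np.column_stack([np.ones(len(band_powers)), band_powers])
    return X @ weights
```

Comparing decoding accuracy with and without the planning-phase features is the kind of analysis that supports the paper's claim that planning activity improves prediction.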
Ting Yang; Hong Chen; Yuanyan Hu; Yingcan Zheng; Wei Wang Preferences for sexual dimorphism on attractiveness levels: An eye-tracking study Journal Article In: Personality and Individual Differences, vol. 77, pp. 179–185, 2015. @article{Yang2015b, Previous studies on sexual dimorphism, in which participants chose the more attractive of a masculine face and a feminine face, showed feminine preferences for female faces and mixed findings for male faces. However, very little is known about how people make fine-grained visual assessments of such images, and the attractiveness levels of the faces have not received much attention. Recently a large number of androgynous stars have appeared in the media, triggering a widespread phenomenon of imitating them. Here we examine the influence of androgynous stars on people's facial preferences for sexual dimorphism in male and female faces at different attractiveness levels using eye-tracking techniques. For male faces we found that both male and female participants preferred masculine faces to androgynous faces at high attractiveness, but results were mixed at low attractiveness. For female faces we found that both male and female participants preferred feminine faces to androgynous faces at high attractiveness, but showed no preferences at low attractiveness. Results suggest that the attractiveness level of faces might be a factor causing inconsistency in sexual dimorphism preferences for male faces and that androgynous faces are not preferred, which reveals that the androgynous phenomenon might not be driven by facial attractiveness. |
Amit Yashar; Jiageng Chen; Marisa Carrasco Rapid and long-lasting reduction of crowding through training Journal Article In: Journal of Vision, vol. 15, no. 10, pp. 1–15, 2015. @article{Yashar2015, Crowding is the failure to identify an object in the peripheral visual field in the presence of nearby objects. Recent studies have shown that crowding can be alleviated after several days of training, but the processes underlying this improvement are still unclear. Here we tested whether a few hundred trials within a short period of training can alleviate crowding, whether the learning is location specific, and whether the improvement reflects facilitation by target enhancement or flanker suppression. Observers were asked to identify the orientation of a letter in the periphery surrounded by two flanker letters. Observers were tested before (pretest) and after (posttest) training (600 trials). In Experiment 1 we tested whether learning is location specific or can transfer to a different location; the training and test occurred at the same or different hemifields. In a control experiment, we ruled out alternative explanations for the learning effect in Experiment 1. In Experiment 2, we assessed different components of feature selection by training with either the same flanker polarity as the pre/posttest but opposite target polarity (flanker polarity group) or the same target polarity as the pre/posttest but opposite flanker polarity (target polarity group). Following training, overall performance increased in all four conditions, but only the same-location group (Experiment 1) and the same flanker polarity group (Experiment 2) showed a significant reduction in crowding as assessed by the distance at which the flankers no longer interfere with target identification, that is, the critical spacing. These results show that training can rapidly reduce crowding and that improvement primarily reflects learning to ignore the irrelevant flankers. Remarkably, in the two conditions in which training significantly reduced crowding, the benefit of short training persisted for up to a year. |
Menahem Yeari; Paul W. van den Broek; Marja Oudega Processing and memory of central versus peripheral information as a function of reading goals: evidence from eye-movements Journal Article In: Reading and Writing, vol. 28, no. 8, pp. 1071–1097, 2015. @article{Yeari2015, The present study examined the effect of reading goals on the processing and memory of central and peripheral textual information. Using eye-tracking methodology, we compared the effect of four common reading goals—entertainment, presentation, studying for a close-ended (multiple-choice) questions test, and studying for an open-ended questions test—on the specific reading time of central and peripheral information and the overall reading time of expository texts. Text memory was tested using multiple-choice questions. Results showed that readers devoted more time to central information than peripheral information during initial reading, regardless of reading goal, but that they adjusted their rereading to the reading goal, with total reading time being longer for central information under some (entertainment and presentation) but not all (open-ended and close-ended questions tests) reading goals. Moreover, readers devoted more time to reading the texts for a study purpose (test or presentation) than for an entertainment purpose, and devoted more time in reading the texts to answer open-ended questions than close-ended questions. Finally, we found that readers remembered more central information than peripheral information under all reading goals. These findings suggest that centrality affects readers' early processing of text whereas reading goals only affect subsequent processing. Interestingly, processing time during reading predicted memory for peripheral information but not for central information. |
Yaffa Yeshurun; Einat Rashal; Shira Tkacz-Domb Temporal crowding and its interplay with spatial crowding Journal Article In: Journal of Vision, vol. 15, no. 3, pp. 11, 1–16, 2015. @article{Yeshurun2015, Spatial crowding refers to impaired target identification when the target is surrounded by other stimuli in space; temporal crowding refers to impaired target identification when the target is surrounded by other stimuli in time. Previously, when spatial and temporal crowding were measured in the fovea, they were interrelated with amblyopic observers but almost absent with normal observers (Bonneh, Sagi, & Polat, 2007). In the current study we examined whether reliable temporal crowding can be found for normal observers with peripheral presentation (9° of eccentricity), and whether similar relations between temporal and spatial crowding will emerge. To that end, we presented a sequence of three displays separated by a varying interstimulus interval (ISI). Each display included either one letter (Experiments 1a, 1b, and 1c) or three letters separated by a varying interletter spacing (Experiments 2a and 2b). One of these displays included an oriented T. Observers indicated the T's orientation. As expected, we found spatial crowding: accuracy improved as the interletter spacing increased. Critically, we also found temporal crowding: in all experiments accuracy increased as the ISI increased, even when only stimulus-onset asynchronies (SOAs) larger than 150 ms were included, ensuring this effect does not reflect mere ordinary masking. Thus, with peripheral presentation, temporal crowding also emerged for normal observers. However, only a weak interaction between temporal and spatial crowding was found. |
Funda Yildirim; Frans W. Cornelissen Saccades follow perception when judging location Journal Article In: i-Perception, vol. 6, no. 6, pp. 1–10, 2015. @article{Yildirim2015, An unresolved question in vision research is whether perceptual decision making and action are based on the same or on different neural representations. Here, we address this question for a straightforward task, the judgment of location. In our experiment, observers decided on the closer of two peripheral objects-situated on the horizontal meridian in opposite hemifields-and made a saccade to indicate their choice. Correct saccades landed close to the actual (physical) location of the target. However, in case of errors, saccades went in the direction of the more distant object, yet landed on a position approximating that of the closer one. Our finding supports the notion that perception and action-related decisions on object location rely on the same neural representation. |
Funda Yildirim; Vincent Meyer; Frans W. Cornelissen Eyes on crowding: Crowding is preserved when responding by eye and similarly affects identity and position accuracy Journal Article In: Journal of Vision, vol. 15, no. 2, pp. 1–14, 2015. @article{Yildirim2015a, Peripheral vision guides recognition and selection of targets for eye movements. Crowding—a decline in recognition performance that occurs when a potential target is surrounded by other, similar, objects—influences peripheral object recognition. A recent model study suggests that crowding may be due to increased uncertainty about both the identity and the location of peripheral target objects, but very few studies have assessed these properties in tandem. Eye tracking can integrally provide information on both the perceived identity and the position of a target and therefore could become an important approach in crowding studies. However, recent reports suggest that around the moment of saccade preparation crowding may be significantly modified. If these effects were to generalize to regular crowding tasks, it would complicate the interpretation of results obtained with eye tracking and the comparison to results obtained using manual responses. For this reason, we first assessed whether the manner by which participants responded—manually or by eye—affected their performance. We found that neither recognition performance nor response time was affected by the response type. Hence, we conclude that crowding magnitude was preserved when observers responded by eye. In our main experiment, observers made eye movements to the location of a tilted Gabor target while we varied flanker tilt to manipulate target–flanker similarity. The results indicate that this similarly affected the accuracy of peripheral recognition and saccadic target localization. Our results underscore the importance of both location and identity uncertainty in crowding. |
Takumi Yokosaka; Scinob Kuroki; Shin'ya Nishida; Junji Watanabe Apparent time interval of visual stimuli is compressed during fast hand movement Journal Article In: PLoS ONE, vol. 10, no. 4, pp. e0124901, 2015. @article{Yokosaka2015, The influence of body movements on visual time perception is receiving increased attention. Past studies showed apparent expansion of visual time before and after the execution of hand movements and apparent compression of visual time during the execution of eye movements. Here we examined whether the estimation of sub-second time intervals between visual events is expanded, compressed, or unaffected during the execution of hand movements. The results show that hand movements, at least the fast ones, reduced the apparent time interval between visual events. A control experiment indicated that the apparent time compression was not produced by the participants' involuntary eye movements during the hand movements. These results, together with earlier findings, suggest hand movement can change apparent visual time either in a compressive way or in an expansive way, depending on the relative timing between the hand movement and visual stimulus. |
Petra Warschburger; Claudia Calvano; Eike M. Richter; Ralf Engbert Analysis of attentional bias towards attractive and unattractive body regions among overweight males and females: An eye-movement study Journal Article In: PLoS ONE, vol. 10, no. 10, pp. e0140813, 2015. @article{Warschburger2015, BACKGROUND: Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others' attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias. METHODS/DESIGN: We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants' own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires. DISCUSSION: The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one's own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. 
This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results. |
Scott N. J. Watamaniuk; Stephen J. Heinen Allocation of attention during pursuit of large objects is no different than during fixation Journal Article In: Journal of Vision, vol. 15, no. 9, pp. 9, 2015. @article{Watamaniuk2015, Attention allocation during pursuit of a spot is usually characterized as asymmetric, with more attention placed ahead of the target than behind it. However, attention is symmetrically allocated across larger pursuit stimuli. An unresolved issue is how tightly attention is constrained on large stimuli during pursuit. Although some work shows it is tightly locked to the fovea, other work shows it is allocated flexibly. To investigate this, we had observers perform a character identification task on large pursuit stimuli composed of arrays of five, nine, or 15 characters spaced between 0.6° and 4.0° apart. Initially, the characters were identical, but at a random time, they all changed briefly, rendering one of them unique. Observers identified the unique character. Consistent with previous literature, attention appeared narrow and symmetric around the pursuit target for tightly spaced (0.6°) characters. Increasing spacing dramatically expanded the attention scope, presumably by mitigating crowding. However, when we controlled for crowding, performance was limited by set size, suffering more for eccentric targets. Interestingly, the same limitations on attention allocation were observed with stationary and pursued stimuli - evidence that attention operates similarly during fixation and pursuit of a stimulus that extends into the periphery. The results suggest that attention is flexibly allocated during pursuit, but performance is limited by crowding and set size. In addition, performing the identification task did not hurt pursuit performance, further evidence that pursuit of large stimuli is relatively inattentive. |
Jeffrey Weiler; Cameron D. Hassall; Olave E. Krigolson; Matthew Heath The unidirectional prosaccade switch-cost: Electroencephalographic evidence of task-set inertia in oculomotor control Journal Article In: Behavioural Brain Research, vol. 278, pp. 323–329, 2015. @article{Weiler2015, The execution of an antisaccade selectively increases the reaction time (RT) of a subsequent prosaccade (the unidirectional prosaccade switch-cost). To explain this finding, the task-set inertia hypothesis asserts that an antisaccade requires a cognitively mediated non-standard task-set that persists inertially and delays the planning of a subsequent prosaccade. The present study sought to directly test the theoretical tenets of the task-set inertia hypothesis by examining the concurrent behavioural and the event-related brain potential (ERP) data associated with the unidirectional prosaccade switch-cost. Participants pseudo-randomly alternated between pro- and antisaccades while electroencephalography (EEG) data were recorded. As expected, the completion of an antisaccade selectively increased the RT of a subsequent prosaccade, whereas the converse switch did not influence RTs. Thus, the behavioural results demonstrated the unidirectional prosaccade switch-cost. In terms of the ERP findings, we observed a reliable change in the amplitude of the P3 - time-locked to task-instructions - when trials were switched from a prosaccade to an antisaccade; however, no reliable change was observed when switching from an antisaccade to a prosaccade. This is a salient finding because extensive work has shown that the P3 provides a neural index of the task-set required to execute a to-be-completed response. 
As such, results showing that prosaccades completed after antisaccades exhibited increased RTs in combination with a P3 amplitude comparable to antisaccades provide convergent evidence that the unidirectional prosaccade switch-cost is attributable to the persistent activation of a non-standard antisaccade task-set. |
Katharina Weiß; Werner X. Schneider; Arvid Herwig A "blanking effect" for surface features: Transsaccadic spatial-frequency discrimination is improved by postsaccadic blanking Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 5, pp. 1500–1506, 2015. @article{Weiss2015, Although saccadic eye movements occur frequently - about three or four times a second - humans are astonishingly blind to transsaccadic changes. Locational displacements of the saccade target of up to 2 deg of visual angle, and even large changes of a visual scene, can go unnoticed. For a long time, this insensitivity was ascribed to deficits in transsaccadic memory: only a coarse, (spatially) imprecise representation would be retained across a saccade. This assumption was contradicted by Deubel and Schneider's (Behavioral and Brain Sciences 17:259-260, 1994) striking finding that locational discrimination performance across a saccade is greatly improved by inserting a short postsaccadic blank. Surprisingly, the question of whether blanking effects also occur for other forms of transsaccadic changes (i.e., surface-feature changes) has been widely ignored. We tested this question by means of a transsaccadic change in spatial frequency. Postsaccadic blanking facilitated spatial-frequency discrimination, but to a smaller extent than the usual blanking effects obtained with locational displacements. This finding bears important implications for models of visual stability and transsaccadic memory. |
Jessica Werthmann; Anita Jansen; Anita C. E. Vreugdenhil; Chantal Nederkoorn; Ghislaine Schyns; Anne Roefs Food through the child's eye: An eye-tracking study on attentional bias for food in healthy-weight children and children with obesity. Journal Article In: Health Psychology, vol. 34, no. 12, pp. 1123–1132, 2015. @article{Werthmann2015, Objective: Obesity prevalence among children is high and knowledge on cognitive factors that contribute to children's reactivity to the "obesogenic" food environment could help to design effective treatment and prevention campaigns. Empirical studies in adults suggest that attention bias for food could be a risk factor for overeating. Accordingly, the current study tested if children with obesity have an elevated attention bias for food when compared to healthy-weight children. Another aim was to explore whether attention biases for food predicted weight-change after 3 and 6 months in obese children. Method: Obese children (n = 34) were recruited from an intervention program and tested prior to the start of this intervention. Healthy-weight children (n = 36) were recruited from local schools. First, attention biases for food were compared between children with obesity (n = 30) and matched healthy-weight children (n = 30). Second, regression analyses were conducted to test if food-related attention biases predicted weight changes after 3 and 6 months in children with obesity following a weight loss lifestyle intervention. Results: Results showed that obese children did not differ from healthy-weight children in their attention bias to food. Yet automatically directing attention toward food (i.e., initial orientation bias) was related to a reduced weight loss (R2 = .14 |
Alex L. White; Martin Rolfs; Marisa Carrasco Stimulus competition mediates the joint effects of spatial and feature-based attention Journal Article In: Journal of Vision, vol. 15, no. 14, pp. 1–21, 2015. @article{White2015, Distinct attentional mechanisms enhance the sensory processing of visual stimuli that appear at task-relevant locations and have task-relevant features. We used a combination of psychophysics and computational modeling to investigate how these two types of attention—spatial and feature based—interact to modulate sensitivity when combined in one task. Observers monitored overlapping groups of dots for a target change in color saturation, which they had to localize as being in the upper or lower visual hemifield. Pre-cues indicated the target's most likely location (left/right), color (red/green), or both location and color. We measured sensitivity (d′) for every combination of the location cue and the color cue, each of which could be valid, neutral, or invalid. When three competing saturation changes occurred simultaneously with the target change, there was a clear interaction: the spatial cueing effect was strongest for the cued color, and the color cueing effect was strongest at the cued location. In a second experiment, only the target dot group changed saturation, such that stimulus competition was low. The resulting cueing effects were statistically independent and additive: the color cueing effect was equally strong at attended and unattended locations. We account for these data with a computational model in which spatial and feature-based attention independently modulate the gain of sensory responses, consistent with measurements of cortical activity. Multiple responses then compete via divisive normalization. Sufficient competition creates interactions between the two cueing effects, although the attentional systems are themselves independent. This model helps reconcile seemingly disparate behavioral and physiological findings. |
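The normalization account summarized in this abstract can be sketched numerically. The snippet below is a minimal, illustrative stand-in for the authors' model, not their fitted implementation: spatial and feature-based attention multiply the stimulus drive independently, responses then compete through divisive normalization, and with competing stimuli in the pool the spatial cueing benefit comes out largest for the cued color, mirroring the reported interaction. All gain and drive values are hypothetical.

```python
def response(drive, g_space, g_feature, competing_drives, sigma=1.0):
    """Attention-scaled drive divided by a normalization pool.

    Spatial and feature gains act independently on the excitatory drive;
    any interaction arises only through the shared normalization pool.
    All parameter values here are illustrative, not fitted estimates.
    """
    excitatory = g_space * g_feature * drive
    pool = sigma + excitatory + sum(competing_drives)
    return excitatory / pool

CUED, NEUTRAL = 2.0, 1.0       # hypothetical attentional gain values
competitors = [1.0, 1.0, 1.0]  # three simultaneous competing changes

# Spatial cueing effect, computed separately for the cued and uncued color.
space_effect_cued_color = (response(1.0, CUED, CUED, competitors)
                           - response(1.0, NEUTRAL, CUED, competitors))
space_effect_uncued_color = (response(1.0, CUED, NEUTRAL, competitors)
                             - response(1.0, NEUTRAL, NEUTRAL, competitors))
```

With the competing drives in the pool, `space_effect_cued_color` exceeds `space_effect_uncued_color`: the two gains are themselves independent, yet behave interactively because the responses share a normalization pool.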
Renée M. Visser; Anna E. Kunze; Bianca Westhoff; H. Steven Scholte; Merel Kindt Representational similarity analysis offers a preview of the noradrenergic modulation of long-term fear memory at the time of encoding Journal Article In: Psychoneuroendocrinology, vol. 55, pp. 8–20, 2015. @article{Visser2015, Neuroimaging research on emotional memory has greatly advanced our understanding of the pathogenesis of anxiety disorders. While the behavioral expression of fear at the time of encoding does not predict whether an aversive experience will evolve into long-term fear memory, the application of multi-voxel pattern analysis (MVPA) for the analysis of BOLD-MRI data has recently provided a unique marker for memory formation. Here, we aimed to further investigate the utility of this marker by modulating the strength of fear memory with an α2-adrenoceptor antagonist (yohimbine HCl). Fifty-two healthy participants were randomly assigned to two conditions - either receiving 20 mg yohimbine or a placebo pill (double-blind) - prior to differential fear conditioning and MRI-scanning. We examined the strength of fear associations during acquisition and retention of fear (48 h later) by assessing the similarity of BOLD-MRI patterns and pupil dilation responses. Additionally, participants returned for a follow-up test outside the scanner (2-4 weeks), during which we assessed fear-potentiated startle responses. Replicating our previous findings, neural pattern similarity reflected the development of fear associations over time, and unlike average activation or pupil dilation, predicted the later expression of fear memory (pupil dilation 48 h later). 
While no effect of yohimbine was observed on markers of autonomic arousal, including salivary α-amylase (sAA), we obtained indirect evidence for the noradrenergic enhancement of fear memory consolidation: sAA levels showed a strong increase prior to fMRI scanning, irrespective of whether participants had received yohimbine, and this increase correlated with the subsequent expression of fear (48 h later). Remarkably, this noradrenergic enhancement of fear was associated with changes in neural response patterns at the time of learning. These findings provide further evidence that representational similarity analysis is a sensitive tool for studying (enhanced) memory formation. |
Caroline Voges; Christoph Helmchen; Wolfgang Heide; Andreas Sprenger Ganzfeld stimulation or sleep enhance long term motor memory consolidation compared to normal viewing in saccadic adaptation paradigm Journal Article In: PLoS ONE, vol. 10, no. 4, pp. e0123831, 2015. @article{Voges2015, Adaptation of saccade amplitude in response to intra-saccadic target displacement is a type of implicit motor learning which is required to compensate for physiological changes in saccade performance. Once established, trials without intra-saccadic target displacement lead to de-adaptation or extinction, which has been attributed either to extra-retinal mechanisms of spatial constancy or to the influence of the stable visual surroundings. Therefore we investigated whether visual deprivation ("Ganzfeld" stimulation or sleep) can partially maintain this motor learning compared to free viewing of the natural surroundings. Thirty-five healthy volunteers performed two adaptation blocks of 100 inward adaptation trials - interspersed by an extinction block - which were followed by a two-hour break with or without visual deprivation (VD). Using additional adaptation and extinction blocks, short- and long-term (4 weeks) memory of this implicit motor learning was tested. In the short term, motor memory tested immediately after free viewing was superior to adaptation performance after VD. In the long run, however, effects were opposite: motor memory and relearning of adaptation were superior in the VD conditions. This could imply independent mechanisms that underlie the short-term ability of retrieving learned saccadic gain and its long-term consolidation. We suggest that subjects mainly rely on visual cues (i.e., retinal error) in the free-viewing condition, which makes them prone to changes of the visual stimulus in the extinction block. This indicates the role of a stable visual array for resetting adapted saccade amplitudes. 
In contrast, visual deprivation (GS and sleep), might train subjects to rely on extra-retinal cues, e.g., efference copy or prediction to remap their internal representations of saccade targets, thus leading to better consolidation of saccadic adaptation. |
Simone Vossel; Christoph Mathys; Klaas E. Stephan; Karl J. Friston Cortical coupling reflects Bayesian belief updating in the deployment of spatial attention Journal Article In: Journal of Neuroscience, vol. 35, no. 33, pp. 11532–11542, 2015. @article{Vossel2015, The deployment of visuospatial attention and the programming of saccades are governed by the inferred likelihood of events. In the present study, we combined computational modeling of psychophysical data with fMRI to characterize the computational and neural mechanisms underlying this flexible attentional control. Sixteen healthy human subjects performed a modified version of Posner's location-cueing paradigm in which the percentage of cue validity varied in time and the targets required saccadic responses. Trialwise estimates of the certainty (precision) of the prediction that the target would appear at the cued location were derived from a hierarchical Bayesian model fitted to individual trialwise saccadic response speeds. Trial-specific model parameters then entered analyses of fMRI data as parametric regressors. Moreover, dynamic causal modeling (DCM) was performed to identify the most likely functional architecture of the attentional reorienting network and its modulation by (Bayes-optimal) precision-dependent attention. While the frontal eye fields (FEFs), intraparietal sulcus, and temporoparietal junction (TPJ) of both hemispheres showed higher activity on invalid relative to valid trials, reorienting responses in right FEF, TPJ, and the putamen were significantly modulated by precision-dependent attention. Our DCM results suggested that the precision of predictability underlies the attentional modulation of the coupling of TPJ with FEF and the putamen. 
Our results shed new light on the computational architecture and neuronal network dynamics underlying the context-sensitive deployment of visuospatial attention. SIGNIFICANCE STATEMENT: Spatial attention and its neural correlates in the human brain have been studied extensively with the help of fMRI and cueing paradigms in which the location of targets is pre-cued on a trial-by-trial basis. One aspect that has so far been neglected concerns the question of how the brain forms attentional expectancies when no a priori probability information is available but needs to be inferred from observations. This study elucidates the computational and neural mechanisms by which probabilistic inference governs attentional deployment. Our results show that Bayesian belief updating explains changes in cortical connectivity, in that directional influences from the temporoparietal junction on the frontal eye fields and the putamen were modulated by (Bayes-optimal) updates. |
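The core idea of trialwise belief updating about cue validity can be illustrated with a far simpler stand-in than the hierarchical Bayesian model used in the study: a beta-Bernoulli learner whose estimate of cue validity is updated after every trial, and whose precision (inverse variance of the belief) grows as evidence accumulates. This is a hedged sketch of the general principle, not the authors' model; the class name and prior are hypothetical choices.

```python
class CueValidityLearner:
    """Beta-Bernoulli belief over the probability that a cue is valid.

    A toy stand-in for the hierarchical Bayesian model described above:
    each trial outcome (valid/invalid) updates a Beta posterior, and the
    posterior's precision indexes how confidently the target is
    predicted at the cued location on the next trial.
    """

    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b  # uniform Beta(1, 1) prior

    def update(self, cue_was_valid):
        """Count one more valid or invalid trial."""
        if cue_was_valid:
            self.a += 1.0
        else:
            self.b += 1.0

    @property
    def validity(self):
        """Posterior mean estimate of P(cue is valid)."""
        return self.a / (self.a + self.b)

    @property
    def precision(self):
        """Inverse of the Beta posterior variance."""
        n = self.a + self.b
        var = (self.a * self.b) / (n * n * (n + 1.0))
        return 1.0 / var

learner = CueValidityLearner()
for outcome in [True] * 8 + [False] * 2:  # a mostly valid block of trials
    learner.update(outcome)
```

After the mostly valid block, the learner's validity estimate sits well above chance and its precision has grown with every observation, the quantity that, in the study, entered the fMRI analysis as a trialwise regressor.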
Basil Wahn; Peter König Vision and haptics share spatial attentional resources and visuotactile integration is not affected by high attentional load Journal Article In: Multisensory Research, vol. 28, no. 3-4, pp. 371–392, 2015. @article{Wahn2015, Human information processing is limited by attentional resources. Two questions that are discussed in multisensory research are (1) whether there are separate spatial attentional resources for each sensory modality and (2) whether multisensory integration is influenced by attentional load. We investigated these questions using a dual task paradigm: Participants performed two spatial tasks (a multiple object tracking ['MOT'] task and a localization ['LOC'] task) either separately (single task condition) or simultaneously (dual task condition). In the MOT task, participants visually tracked a small subset of several randomly moving objects. In the LOC task, participants either received visual, tactile, or redundant visual and tactile location cues. In the dual task condition, we found a substantial decrease in participants' performance and an increase in participants' mental effort (indicated by an increase in pupil size) relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of whether they received visual, tactile, or redundant multisensory (visual and tactile) location cues in the LOC task. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the tactile and visual modality. Also, we found that participants integrated redundant multisensory information optimally even when they experienced additional attentional load in the dual task condition. 
Overall, findings suggest that (1) spatial attentional resources for the tactile and visual modality overlap and that (2) the integration of spatial cues from these two modalities occurs at an early pre-attentive processing stage. |
Basil Wahn; Peter König Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration Journal Article In: Frontiers in Psychology, vol. 6, pp. 1084, 2015. @article{Wahn2015a, Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. 
Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage. |
George Wallis; Mark G. Stokes; Craig Arnold; Anna C. Nobre Reward boosts working memory encoding over a brief temporal window Journal Article In: Visual Cognition, vol. 23, no. 1-2, pp. 291–312, 2015. @article{Wallis2015, Selection mechanisms for WM are ordinarily studied by explicitly cueing a subset of memory items. However, we might also expect the reward associations of stimuli we encounter to modulate their probability of being represented in working memory (WM). Theoretical and computational models explicitly predict that reward value should determine which items will be gated into WM. For example, a model by Braver and colleagues in which phasic dopamine signalling gates WM updating predicts a temporally-specific but not item-specific reward-driven boost to encoding. In contrast, Hazy and colleagues invoke reinforcement learning in cortico-striatal loops and predict an item-wise reward-driven encoding bias. Furthermore, a body of prior work has demonstrated that reward-associated items can capture attention, and it has been shown that attentional capture biases WM encoding. We directly investigated the relationship between reward history and WM encoding. In our first experiment, we found an encoding benefit associated with reward-associated items, but the benefit generalized to all items in the memory array. In a second experiment this effect was shown to be highly temporally specific. We speculate that in real-world contexts in which the environment is sampled sequentially with saccades/shifts in attention, this mechanism could effectively mediate an item-wise encoding bias, because encoding boosts would occur when rewarded items were fixated. |
George Wallis; Mark Stokes; Helena Cousijn; Mark W. Woolrich; Anna C. Nobre Frontoparietal and cingulo-opercular networks play dissociable roles in control of working memory Journal Article In: Journal of Cognitive Neuroscience, vol. 27, pp. 2019–2034, 2015. @article{Wallis2015a, We used magnetoencephalography to characterize the spatiotemporal dynamics of cortical activity during top-down control of working memory (WM). fMRI studies have previously implicated both the frontoparietal and cingulo-opercular networks in control over WM, but their respective contributions are unclear. In our task, spatial cues indicating the relevant item in a WM array occurred either before the memory array or during the maintenance period, providing a direct comparison between prospective and retrospective control of WM. We found that in both cases a frontoparietal network activated following the cue, but following retrocues this activation was transient and was succeeded by a cingulo-opercular network activation. We also characterized the time course of top-down modulation of alpha activity in visual/parietal cortex. This modulation was transient following retrocues, occurring in parallel with the frontoparietal network activation. We suggest that the frontoparietal network is responsible for top-down modulation of activity in sensory cortex during both preparatory attention and orienting within memory. In contrast, the cingulo-opercular network plays a more downstream role in cognitive control, perhaps associated with output gating of memory. |
Thomas S. A. Wallis; Michael Dorr; Peter J. Bex Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison Journal Article In: Journal of Vision, vol. 15, no. 8, pp. 1–33, 2015. @article{Wallis2015b, Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. 
Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. |
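The class of "accelerating nonlinearity" transducer models this abstract refers to can be sketched generically. The snippet below uses a Legge-Foley-style contrast-response function; the parameter values are illustrative defaults, not the paper's posterior estimates. Increment sensitivity is modeled as the difference in transduced response under unit internal noise, which reproduces the classic facilitation of detection by a small pedestal contrast.

```python
def transducer(c, p=2.4, q=0.35, z=0.1):
    """Generic contrast-response nonlinearity (Legge-Foley form).

    Accelerating at low contrasts, compressive at high ones; p, q, and z
    are illustrative values, not fitted estimates from the study.
    """
    return c ** p / (z + c ** (p - q))

def sensitivity(pedestal, increment):
    """d'-like sensitivity: response difference, assuming unit internal noise."""
    return transducer(pedestal + increment) - transducer(pedestal)

# The accelerating foot of the nonlinearity predicts "dipper" facilitation:
# a fixed increment is easier to detect on a small pedestal than on a blank.
facilitated = sensitivity(0.05, 0.01)    # increment on a small pedestal
unfacilitated = sensitivity(0.0, 0.01)   # same increment on a blank field
```

Fitting such a model to natural-viewing data, as the abstract notes, leaves the exponents poorly constrained; the sketch only shows why the accelerating region of the curve predicts pedestal facilitation at all.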
R. Calen Walshe; Antje Nuthmann Mechanisms of saccadic decision making while encoding naturalistic scenes Journal Article In: Journal of Vision, vol. 15, no. 5, pp. 21, 2015. @article{Walshe2015, Saccadic eye movements are the primary vehicle by which human gaze is brought in alignment with vital visual information present in naturalistic scenes. Although numerous studies using the double-step paradigm have demonstrated that saccade preparation is subject to modification under certain conditions, this has yet to be studied directly within a naturalistic scene-viewing context. To reveal characteristic properties of saccade programming during naturalistic scene viewing, we contrasted behavior across three conditions. In the Static condition of the main experiment, double-step targets were presented following a period of stable fixation on a central cross. In a Scene condition, targets were presented while participants actively explored a naturalistic scene. During a Noise condition, targets were presented during active exploration of a 1/f noise-filtered scene. In Experiment 2, we measure saccadic responses in three Static conditions (Uniform, Scene, and Noise) in which the backgrounds are the same as Experiment 1 but scene exploration is no longer permitted. We find that the mechanisms underlying saccade modification generalize to both dynamic conditions. However, we show that a property of saccade programming known as the saccadic dead time (SDT), the interval prior to saccade onset during which a saccade may not be canceled or modified, is lower in the Static task than it is in the dynamic tasks. We also find a trend toward longer SDT in the Scene as compared with Noise conditions. We discuss the implication of these results for computational models of scene viewing, reading, and visual search tasks. |
Chin-An Wang; Donald C. Brien; Douglas P. Munoz Pupil size reveals preparatory processes in the generation of pro-saccades and anti-saccades Journal Article In: European Journal of Neuroscience, vol. 41, no. 8, pp. 1102–1110, 2015. @article{Wang2015c, The ability to generate flexible behaviors to accommodate changing goals in response to identical sensory stimuli is a signature that is inherited in humans and higher-level animals. In the oculomotor system, this function has often been examined with the anti-saccade task, in which subjects are instructed, prior to stimulus appearance, to either automatically look at the peripheral stimulus (pro-saccade) or to suppress the automatic response and voluntarily look in the opposite direction from the stimulus (anti-saccade). Distinct neural preparatory activity between the pro-saccade and anti-saccade conditions has been well documented, particularly in the superior colliculus (SC) and the frontal eye field (FEF), and this has shown higher inhibition-related fixation activity in preparation for anti-saccades than in preparation for pro-saccades. Moreover, the level of preparatory activity related to motor preparation is negatively correlated with reaction times. We hypothesised that preparatory signals may be reflected in pupil size through a link between the SC and the pupil control circuitry. Here, we examined human pupil dynamics during saccade preparation prior to the execution of pro-saccades and anti-saccades. Pupil size was larger in preparation for correct anti-saccades than in preparation for correct pro-saccades and erroneous pro-saccades made in the anti-saccade condition. Furthermore, larger pupil dilation prior to stimulus appearance accompanied saccades with faster reaction times, with a trial-by-trial correlation between dilation size and anti-saccade reaction times. 
Overall, our results demonstrate that pupil size is modulated by saccade preparation, and neural activity in the SC, together with the FEF, supports these findings, providing unique insights into the neural substrate coordinating cognitive processing and pupil diameter. |
Qiandong Wang; Naiqi G. Xiao; Paul C. Quinn; Chao S. Hu; Miao Qian; Genyue Fu; Kang Lee In: Vision Research, vol. 107, pp. 67–75, 2015. @article{Wang2015e, Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. |
Karly N. Neath; Roxane J. Itier Fixation to features and neural processing of facial expressions in a gender discrimination task Journal Article In: Brain and Cognition, vol. 99, pp. 97–111, 2015. @article{Neath2015, Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion and can also be seen on other ERP components such as the P1 and EPN was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms followed by a later effect appearing at ~150 ms until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. |
Andrea L. Nelson; Christine Purdon; Leanne Quigley; Jonathan Carriere; Daniel Smilek In: Cognition and Emotion, vol. 29, no. 3, pp. 504–526, 2015. @article{Nelson2015, Although attentional biases to threatening information are thought to contribute to the development and persistence of anxiety disorders, it is not clear whether an attentional bias to threat (ABT) is driven by trait anxiety, state anxiety or an interaction between the two. ABT may also be influenced by "top down" processes of motivation to attend or avoid threat. In the current study, participants high, mid and low in trait anxiety viewed high threat-neutral, mild threat-neutral and positive-neutral image pairs for 5 seconds in both calm and anxious mood states while their eye movements were recorded. State anxiety alone, but not trait anxiety, predicted greater maintenance of attention to high threat images (relative to neutral) following the first fixation (i.e., delayed disengagement) and over the time course. Motivation was associated with the time course of attention as would be expected, such that those motivated to look towards negative images showed the greatest ABT over time, and those highly motivated to look away from negative images showed the greatest avoidance. Interestingly, those ambivalent about where to direct their attention when viewing negative images showed the greatest ABT in the first 500 ms of viewing. Implications for theory and treatment of anxiety disorders, as well as areas for further study, are discussed. |
Kristin R. Newman; Christopher R. Sears Eye gaze tracking reveals different effects of a sad mood induction on the attention of previously depressed and never depressed women Journal Article In: Cognitive Therapy and Research, vol. 39, no. 3, pp. 292–306, 2015. @article{Newman2015, This study examined the effect of a sad mood induction (MI) on attention to emotional information and whether the effect varies as a function of depression vulnerability. Previously depressed (N = 42) and never depressed women (N = 58) were randomly assigned to a sad or a neutral MI and then viewed sets of depression-related, anxiety-related, positive, and neutral images. Attention was measured by tracking eye fixations to the images throughout an 8-s presentation. The sad MI had a substantial impact on the attention of never depressed participants: never depressed participants who experienced the sad MI increased their attention to positive images and decreased their attention to anxiety-related images relative to those who experienced the neutral MI. In contrast, previously depressed participants who experienced the sad MI did not attend to emotional images any differently than previously depressed participants who experienced the neutral MI. These results suggest that for never depressed individuals, a sad MI activates an emotion regulation strategy that changes the way that emotional information is attended to in order to counteract the sad mood; the absence of a difference for previously depressed individuals likely reflects a maladaptive emotion regulation response associated with depression vulnerability. Implications for cognitive theories of depression and depression-vulnerability are discussed. |
Bruno Nicenboim; Shravan Vasishth; Carolina A. Gattei; Mariano Sigman; Reinhold Kliegl Working memory differences in long-distance dependency resolution Journal Article In: Frontiers in Psychology, vol. 6, pp. 312, 2015. @article{Nicenboim2015, There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). 
In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component. |
Babak Noory; Michael H. Herzog; Haluk Ogmen Retinotopy of visual masking and non-retinotopic perception during masking Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 4, pp. 1263–1284, 2015. @article{Noory2015, Due to the movements of the observer and those of objects in the environment, retinotopic representations are highly unstable during ecological viewing conditions. The phenomenal stability of our perception suggests that retinotopic representations are transformed into non-retinotopic representations. It remains to show, however, which visual processes operate under retinotopic representations and which ones operate under non-retinotopic representations. Visual masking refers to the reduced visibility of one stimulus, called the target, due to the presence of a second stimulus, called the mask. Masking has been used extensively to study the dynamic aspects of visual perception. Previous studies using the Saccadic Stimulus Presentation Paradigm (SSPP) suggested both retinotopic and non-retinotopic bases for visual masking. In order to understand how the visual system deals with retinotopic changes induced by moving targets, we investigated the retinotopy of visual masking and the fate of masked targets under conditions that do not involve eye movements. We have developed a series of experiments based on a radial Ternus-Pikler display. In this paradigm, the perceived Ternus-Pikler motion is used as a non-retinotopic reference frame to pit the retinotopic against the non-retinotopic visual masking hypothesis. Our results indicate that both metacontrast and structure masking are retinotopic. We also show that, under conditions that allow observers to effectively read out non-retinotopic feature attribution, the target becomes visible at a destination different from its retinotopic/spatiotopic location. We discuss the implications of our findings within the context of ecological vision and dynamic form perception. |
Antje Nuthmann; Wolfgang Einhäuser A new approach to modeling the influence of image features on fixation selection in scenes Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 82–96, 2015. @article{Nuthmann2015, Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. |
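The patch-based analysis described in the Nuthmann and Einhäuser abstract can be caricatured in a few lines. This sketch is not the authors' GLMM: it simulates patch features with invented names and weights, and fits a plain fixed-effects logistic regression by gradient descent, with no random effects for scenes or subjects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-patch predictors (invented for illustration): luminance,
# contrast, edge density, clutter, and distance to screen centre.
n = 2000
X = rng.normal(size=(n, 5))
# Assumed ground truth mirroring the reported pattern: edge density and
# clutter predict fixation, luminance and contrast do not, and patches
# far from the centre are fixated less (central bias).
true_w = np.array([0.0, 0.0, 1.0, 0.8, -1.2])
p_fix = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_fix).astype(float)  # 1 = patch fixated

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit logistic-regression weights by batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (pred - y)) / len(y)
    return w

w_hat = fit_logistic(X, y)
```

A real analysis along the lines of the abstract would add random intercepts per scene and subject (e.g. a binomial GLMM in lme4's `glmer`), which is what distinguishes the mixed-model approach from this simplified fixed-effects sketch.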
Jennifer E. Arnold; Shin-Yi C. Lao Effects of psychological attention on pronoun comprehension Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 7, pp. 832–852, 2015. @article{Arnold2015, Pronoun comprehension is facilitated for referents that are focused in the discourse context. Discourse focus has been described as a function of attention, especially shared attention, but few studies have explicitly tested this idea. Two experiments used an exogenous capture cue paradigm to demonstrate that listeners' visual attention at the onset of a story influences their preferences during pronoun resolution later in the story. In both experiments trial-initial attention modulated listeners' transitory biases while considering referents for the pronoun, whether it was in response to the capture cue or not. These biases even had a small influence on listeners' final interpretation of the pronoun. These results provide independently motivated evidence that the listener's attention influences the online processes of pronoun comprehension. Trial-initial attentional shifts were made on the basis of non-shared, private information, demonstrating that attentional effects on pronoun comprehension are not restricted to shared attention among interlocutors. |
David M. Arnoldussen; Jeroen Goossens; Albert V. Den Berg Dissociation of retinal and headcentric disparity signals in dorsal human cortex Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 16, 2015. @article{Arnoldussen2015, Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT(+)), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT(+), headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex. |
Árni Gunnar Ásgeirsson; Árni Kristjánsson; Claus Bundesen Repetition priming in selective attention: A TVA analysis Journal Article In: Acta Psychologica, vol. 160, pp. 35–42, 2015. @article{Asgeirsson2015, Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target-color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA-model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4–15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, where VSTM capacity and spatial biases of attention are ignored, or more fine-grained by a 10-parameter model that takes these aspects into account. |
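The TVA-based account in the Ásgeirsson et al. abstract lends itself to a compact sketch. The rate equation follows Bundesen's (1990) formulation, but all parameter values here (capacity `C`, the attentional weights, the selectivity parameter `alpha`, and threshold `t0`) are illustrative assumptions, not the fitted values from the paper.

```python
import math

def p_correct(t, C=50.0, w_target=1.0, w_dist=1.0, n_dist=5, alpha=0.5, t0=0.02):
    """TVA-style probability of encoding the target within exposure t (seconds).

    The target's processing rate is its share of total capacity C, with
    distractor weights scaled by the selectivity parameter alpha
    (alpha = 0: perfect selection; alpha = 1: no selection).
    """
    if t <= t0:  # below the perceptual threshold, nothing is encoded
        return 0.0
    v = C * w_target / (w_target + alpha * n_dist * w_dist)
    return 1.0 - math.exp(-v * (t - t0))

# Repetition priming modeled purely as improved selectivity (lower alpha):
primed = p_correct(0.1, alpha=0.3)
unprimed = p_correct(0.1, alpha=0.7)
```

On this reading, repeating the target color changes nothing about the target's own processing; it only down-weights distractors, which matches the abstract's conclusion that priming loads on the selectivity parameter α and vanishes when the target appears alone.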
Mania Asgharpour; Mehdi Tehrani-Doost; Mehrnoosh Ahmadi; Hamid Moshki Visual attention to emotional face in schizophrenia: An eye tracking study Journal Article In: Iranian Journal of Psychiatry, vol. 10, no. 1, pp. 13–18, 2015. @article{Asgharpour2015, OBJECTIVE: Deficits in the processing of facial emotions have been reported extensively in patients with schizophrenia. To explore whether restricted attention is the cause of impaired emotion processing in these patients, we examined visual attention through tracking eye movements in response to emotional and neutral face stimuli in a group of patients with schizophrenia and healthy individuals. We also examined the correlation between visual attention allocation and symptoms severity in our patient group. METHOD: Thirty adult patients with schizophrenia and 30 matched healthy controls participated in this study. Visual attention data were recorded while participants passively viewed emotional-neutral face pairs for 500 ms. The relationship between the visual attention and symptoms severity were assessed by the Positive and Negative Syndrome Scale (PANSS) in the schizophrenia group. Repeated Measures ANOVAs were used to compare the groups. RESULTS: Comparing the number of fixations made during face-pairs presentation, we found that patients with schizophrenia made fewer fixations on faces, regardless of the expression of the face. Analysis of the number of fixations on negative-neutral pairs also revealed that the patients made fewer fixations on both neutral and negative faces. Analysis of number of fixations on positive-neutral pairs only showed more fixations on positive relative to neutral expressions in both groups. We found no correlations between visual attention pattern to faces and symptom severity in schizophrenic patients. CONCLUSION: The results of this study suggest that the facial recognition deficit in schizophrenia is related to decreased attention to face stimuli. 
The finding of no group difference in visual attention to positive-neutral face pairs is in line with studies that have shown enhanced perception of positive emotion in these patients. |
Ryszard Auksztulewicz; Karl J. Friston Attentional enhancement of auditory mismatch responses: A DCM/MEG study Journal Article In: Cerebral Cortex, vol. 25, no. 11, pp. 4273–4283, 2015. @article{Auksztulewicz2015, Despite similar behavioral effects, attention and expectation influence evoked responses differently: Attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While MMN was explained by recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision. |
Brittany Avery; Christopher D. Cowper-Smith; David A. Westwood Spatial interactions between consecutive manual responses Journal Article In: Experimental Brain Research, vol. 233, no. 11, pp. 3283–3290, 2015. @article{Avery2015, We have shown that the latency to initiate a reaching movement is increased if its direction is the same as a previous movement compared to movements that differ by 90° or 180° (Cowper-Smith and Westwood in Atten Percept Psychophys 75:1914–1922, 2013). An influential study (Taylor and Klein in J Exp Psychol Hum Percept Perform 26:1639–1656, 2000), however, reported the opposite spatial pattern for manual keypress responses: repeated responses on the same side had reduced reaction time compared to responses on opposite sides. In order to determine whether there are fundamental differences in the patterns of spatial interactions between button-pressing responses and reaching movements, we compared both types of manual responses using common methods. Reaching movements and manual keypress responses were performed in separate blocks of trials using consecutive central arrow stimuli that directed participants to respond to left or right targets. Reaction times were greater for manual responses made to the same target as a previous response (M = 390 ms) as compared to the opposite target (M = 365 ms; similarity main effect: p < 0.001) regardless of whether the response was a reaching movement or a keypress response. This finding is broadly consistent with an inhibitory mechanism operating at the level of motor output that discourages movements that achieve the same spatial goal as a recent action. |
Kelvin Balcombe; Iain Fraser; Eugene McSorley Visual attention and attribute attendance in multi-attribute choice experiments Journal Article In: Journal of Applied Econometrics, vol. 30, pp. 447–467, 2015. @article{Balcombe2015, Decision strategies in multi-attribute choice experiments are investigated using eye-tracking. The visual attention towards, and attendance of, attributes is examined. Stated attendance is found to diverge substantively from visual attendance of attributes. However, stated and visual attendance are shown to be informative, non-overlapping sources of information about respondent utility functions when incorporated into model estimation. Eye-tracking also reveals systematic nonattendance of attributes only by a minority of respondents. Most respondents visually attend most attributes most of the time. We find no compelling evidence that the level of attention is related to respondent certainty, or that higher or lower value attributes receive more or less attention. |
Snigdha Banerjee; Hans Peter Frey; Sophie Molholm; John J. Foxe In: European Journal of Neuroscience, vol. 41, no. 6, pp. 818–834, 2015. @article{Banerjee2015, The voluntary allocation of attention to environmental inputs is a crucial mechanism of healthy cognitive functioning, and is probably influenced by an observer's level of interest in a stimulus. For example, an individual who is passionate about soccer but bored by botany will obviously be more attentive at a soccer match than an orchid show. The influence of monetary rewards on attention has been examined, but the impact of more common motivating factors (i.e. the level of interest in the materials under observation) remains unclear, especially during development. Here, stimulus sets were designed based on survey measures of the level of interest of adolescent participants in several item classes. High-density electroencephalography was recorded during a cued spatial attention task in which stimuli of high or low interest were presented in separate blocks. The motivational impact on performance of a spatial attention task was assessed, along with event-related potential measures of anticipatory top-down attention. As predicted, performance was improved for the spatial target detection of high interest items. Further, the impact of motivation was observed in parieto-occipital processes associated with anticipatory top-down spatial attention. The anticipatory activity over these regions was also increased for high vs. low interest stimuli, irrespective of the direction of spatial attention. The results also showed stronger anticipatory attentional and motivational modulations over the right vs. left parieto-occipital cortex. These data suggest that motivation enhances top-down attentional processes, and can independently shape activations in sensory regions in anticipation of events. They also suggest that attentional functions across hemispheres may not fully mature until late adolescence. |
Adrien Baranes; Pierre Yves Oudeyer; Jacqueline Gottlieb Eye movements reveal epistemic curiosity in human observers Journal Article In: Vision Research, vol. 117, pp. 81–90, 2015. @article{Baranes2015, Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. |
Andrew Isaac Meso; Guillaume S. Masson Dynamic resolution of ambiguity during tri-stable motion perception Journal Article In: Vision Research, vol. 107, pp. 113–123, 2015. @article{Meso2015, Multi-stable perception occurs when an image falling onto the retina has multiple incompatible interpretations. We probed this phenomenon in psychophysical experiments using a moving barber-pole visual stimulus configured as a square to generate three competing perceived directions, horizontal, diagonal and vertical. We characterised patterns in reported switching type and percept duration, classifying switches into three groups related to the direction cues driving such transitions i.e. away from diagonal, towards diagonal and between cardinals. The proportions of each class reported by participants depended on contrast. The two including diagonals dominated at low contrast and those between cardinals increased in proportion as contrast was increased. At low contrasts, the less frequent cardinals persisted for shorter durations than the dominant diagonals and this was reversed at higher contrasts. This observed asymmetry between the dominance of transition classes appears to be driven by different underlying dynamics between cardinal and the oblique cues and their related transitions. At trial onset we found that transitions away from diagonal dominate, a tendency which later in the trial reverses to dominance by transitions excluding the diagonal, most prominently at higher contrasts. Thus ambiguity is resolved over a contrast dependent temporal integration similar to, but lasting longer than that observed when resolving the aperture problem to estimate direction. When the diagonal direction dominates perception, evidence is found for a noisier competition seen in broader duration distributions than during dominance of cardinal perception. There remain aspects of these identified differences in cardinal and oblique dynamics to be investigated in future work. |
Cristiano Micheli; Daniel Kaping; Stephanie Westendorff; Taufik A. Valiante; Thilo Womelsdorf Inferior-frontal cortex phase synchronizes with the temporal-parietal junction prior to successful change detection Journal Article In: NeuroImage, vol. 119, pp. 417–431, 2015. @article{Micheli2015, The inferior frontal gyrus (IFG) and the temporo-parietal junction (TPJ) are believed to be core structures of human brain networks that activate when sensory top-down expectancies guide goal directed behavior and attentive perception. But it is unclear how activity in IFG and TPJ coordinates during attention demanding tasks and whether functional interactions between both structures are related to successful attentional performance. Here, we tested these questions in electrocorticographic (ECoG) recordings in human subjects using a visual detection task that required sustained attentional expectancy in order to detect non-salient, near-threshold visual events. We found that during sustained attention the successful visual detection was predicted by increased phase synchronization of band-limited 15–30 Hz beta band activity that was absent prior to misses. Increased beta-band phase alignment during attentional engagement early during the task was restricted to inferior and lateral prefrontal cortex, but with sustained attention it extended to long-range IFG-TPJ phase synchronization and included superior prefrontal areas. In addition to beta, a widely distributed network of brain areas comprising the occipital cortex showed enhanced and reduced alpha band phase synchronization before correct detections. These findings identify long-range phase synchrony in the 15–30 Hz beta band as the mesoscale brain signal that predicts the successful deployment of attentional expectancy of sensory events.
We speculate that localized beta coherent states in prefrontal cortex index 'top-down' sensory expectancy whose coupling with TPJ subregions facilitates the gating of relevant visual information. |
Mark Mills; Edwin S. Dalmaijer; Stefan Van der Stigchel; Michael D. Dodd Effects of task and task-switching on temporal inhibition of return, facilitation of return, and saccadic momentum during scene viewing Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 5, pp. 1300–1314, 2015. @article{Mills2015, During scene viewing, saccades directed toward a recently fixated location tend to be delayed relative to saccades in other directions (“delay effect”), an effect attributable to inhibition of return (IOR) and/or saccadic momentum (SM). Previous work indicates this effect may be task-specific, suggesting that gaze control parameters are task-relevant and potentially affected by task-switching. Accordingly, the present study investigated task-set control of gaze behavior using the delay effect as a measure of task performance. The delay effect was measured as the effect of relative saccade direction on preceding fixation duration. Participants were cued on each trial to perform either a search, memory, or rating task. Tasks were performed either in pure-task or mixed-task blocks. This design allowed separation of switch-cost and mixing-cost. The critical result was that expression of the delay effect at 2-back locations was reversed on switch versus repeat trials such that return was delayed in repeat trials but speeded in switch trials. This difference between repeat and switch trials suggests that gaze-relevant parameters may be represented and switched as part of a task-set. Existing and new tests for dissociating IOR and SM accounts of the delay effect converged on the conclusion that the delay at 2-back locations was due to SM, and that task-switching affects SM. Additionally, the new test simultaneously replicated noncorroborating results in the literature regarding facilitation-of-return (FOR), which confirmed its existence and showed that FOR is “reversed” SM that occurs when preceding and current saccades are both directed toward the 2-back location. |
Mark Mills; Kevin B. Smith; John R. Hibbing; Michael D. Dodd Obama cares about visuo-spatial attention: Perception of political figures moves attention and determines gaze direction Journal Article In: Behavioural Brain Research, vol. 278, pp. 221–225, 2015. @article{Mills2015a, Processing an abstract concept such as political ideology by itself is difficult but becomes easier when a background situation contextualizes it. Political ideology within American politics, for example, is commonly processed using space metaphorically, i.e., the political "left" and "right" (referring to Democrat and Republican views, respectively), presumably to provide a common metric to which abstract features of ideology can be grounded and understood. Commonplace use of space as metaphor raises the question of whether an inherently non-spatial stimulus (e.g., picture of the political "left" leader, Barack Obama) can trigger a spatially-specific response (e.g., attentional bias toward "left" regions of the visual field). Accordingly, pictures of well-known Democrats and Republicans were presented as central cues in peripheral target detection (Experiment 1) and saccadic free-choice (Experiment 2) tasks to determine whether perception of stimuli lacking a direct association with physical space nonetheless induce attentional and oculomotor biases in the direction compatible with the ideological category of the cue (i.e., Democrat/left and Republican/right). In Experiment 1, target detection following presentation of a Democrat (Republican) was facilitated for targets appearing to the left (right). In Experiment 2, participants were more likely to look left (right) following presentation of a Democrat (Republican). Thus, activating an internal representation of political ideology induced a shift of attention and biased choice of gaze direction in a spatially-specific manner. These findings demonstrate that the link between conceptual processing and spatial attention can be totally arbitrary, with no reference to physical or symbolic spatial information. |
Tobias Moehler; Katja Fiehler The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks Journal Article In: Vision Research, vol. 116, pp. 25–35, 2015. @article{Moehler2015, Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation = simultaneous vs. before saccade preparation = sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. |
Hassan Zanganeh Momtaz; Mohammad Reza Daliri Differences of eye movement pattern in natural and man-made scenes and image categorization with the help of these patterns Journal Article In: Journal of Integrative Neuroscience, vol. 14, no. 3, pp. 1–18, 2015. @article{Momtaz2015, In this paper, we investigated the parameters related to eye movement patterns of individuals while viewing images that consist of natural and man-made scenes. These parameters are as follows: number of fixations and saccades, fixation duration, saccade amplitude, and distribution of fixation locations. We explored the way in which individuals look at images of different semantic categories, and used this information for automatic image classification. We showed that the eye movements and the contents of eye fixation locations of observers differ for images of different semantic categories. These differences were used effectively in automatic image categorization. Another goal of this study was to answer the question of whether the image patches at fixation points carry sufficient information for image categorization. To achieve this goal, a number of patches of different sizes were extracted from two different image categories. These patches, which were selected at the locations of eye fixation points, were used to form a feature vector based on the K-means clustering algorithm. Then, different statistical classifiers were trained for categorization purposes. The results showed that it is possible to predict the image category by using the feature vectors derived from the image patches. We found significant differences in the parameters of eye movement patterns between the two image categories (averaged across subjects). We could categorize images by using these parameters as features. The results also showed that it is possible to predict the image category by using image patches around the subjects' fixation points. |
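The patch-clustering pipeline this abstract describes (extract patches at fixation points, build a visual vocabulary with K-means, represent each image as a histogram of cluster assignments, train a classifier on those histograms) is a standard bag-of-visual-words approach. A minimal sketch of that idea, using random Gaussian vectors as stand-ins for fixation patches — this is illustrative only, not the authors' code, data, or classifier choice:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # visual-vocabulary size (number of K-means clusters)

def make_image(center, n_patches=40, dim=16):
    # Stand-in for patches cropped around fixation points; the two toy
    # "categories" (natural vs. man-made) get different patch statistics.
    return rng.normal(center, 1.0, size=(n_patches, dim))

images = [(make_image(0.0), 0) for _ in range(20)] + \
         [(make_image(2.0), 1) for _ in range(20)]

def kmeans(data, k, iters=20):
    # Minimal K-means: random initial centroids, alternate assign/update.
    centroids = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(data[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids

# Build the vocabulary from all patches pooled across images
centroids = kmeans(np.vstack([patches for patches, _ in images]), K)

def bow(patches):
    # Histogram of nearest-centroid assignments = fixed-length feature vector
    labels = np.linalg.norm(patches[:, None] - centroids[None], axis=2).argmin(axis=1)
    return np.bincount(labels, minlength=K) / len(labels)

X = np.array([bow(patches) for patches, _ in images])
y = np.array([label for _, label in images])

# Nearest-class-mean classifier on the histogram features
means = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(X[:, None] - means[None], axis=2).argmin(axis=1)
accuracy = (pred == y).mean()
print(accuracy)  # high on this well-separated toy data
```

The paper trained several statistical classifiers on such feature vectors; the nearest-class-mean rule above is only the simplest possible substitute to keep the sketch self-contained.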
Pieter Moors; Filip Germeys; Iwona Pomianowska; Karl Verfaillie Perceiving where another person is looking: The integration of head and body information in estimating another person's gaze Journal Article In: Frontiers in Psychology, vol. 6, pp. 909, 2015. @article{Moors2015, The process through which an observer allocates his/her attention based on the attention of another person is known as joint attention. To be able to do this, the observer effectively has to compute where the other person is looking. It has been shown that observers integrate information from the head and the eyes to determine the gaze of another person. Most studies have documented that observers show a bias called the overshoot effect when eyes and head are misaligned. That is, when the head is not oriented straight to the observer, perceived gaze direction is sometimes shifted in the direction opposite to the head turn. The present study addresses whether body information is also used as a cue to compute perceived gaze direction. In Experiment 1, we observed a similar overshoot effect in both behavioral and saccadic responses when manipulating body orientation. In Experiment 2, we explored whether the overshoot effect was due to observers assuming that the eyes are oriented further than the head when head and body orientation are misaligned. We removed horizontal eye information by presenting the stimulus from a side view. Head orientation was now manipulated in a vertical direction and the overshoot effect was replicated. In summary, this study shows that body orientation is indeed used as a cue to determine where another person is looking. |
Jalal K. Baruni; Brian Lau; C. Daniel Salzman Reward expectation differentially modulates attentional behavior and activity in visual area V4 Journal Article In: Nature Neuroscience, vol. 18, no. 11, pp. 1656–1663, 2015. @article{Baruni2015, Neural activity in visual area V4 is enhanced when attention is directed into neuronal receptive fields. However, the source of this enhancement is unclear, as most physiological studies have manipulated attention by changing the absolute reward associated with a particular location as well as its value relative to other locations. We trained monkeys to discriminate the orientation of two stimuli presented simultaneously in different hemifields while we independently varied the reward magnitude associated with correct discrimination at each location. Behavioral measures of attention were controlled by the relative value of each location. By contrast, neurons in V4 were consistently modulated by absolute reward value, exhibiting increased activity, increased gamma-band power and decreased trial-to-trial variability whenever receptive field locations were associated with large rewards. These data challenge the notion that the perceptual benefits of spatial attention rely on increased signal-to-noise in V4. Instead, these benefits likely derive from downstream selection mechanisms. |
Sarah Bate; Rachel Bennetts; Joseph A. Mole; James A. Ainge; Nicola J. Gregory; Anna K. Bobak; Amanda Bussunt Rehabilitation of face-processing skills in an adolescent with prosopagnosia: Evaluation of an online perceptual training programme Journal Article In: Neuropsychological Rehabilitation, vol. 25, no. 5, pp. 733–762, 2015. @article{Bate2015, In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills. |
Robin Baurès; Simon J. Bennett; Joe Causer Temporal estimation with two moving objects: Overt and covert pursuit Journal Article In: Experimental Brain Research, vol. 233, no. 1, pp. 253–261, 2015. @article{Baures2015, The current study examined temporal estimation in a prediction motion task where participants were cued to overtly pursue one of two moving objects, which could either arrive first (i.e., shortest time to contact, TTC) or second (i.e., longest TTC) after a period of occlusion. Participants were instructed to estimate TTC of the first-arriving object only, thus making it necessary to overtly pursue the cued object while at the same time covertly pursuing the other (non-cued) object. A control (baseline) condition was also included in which participants had to estimate TTC of a single, overtly pursued object. Results showed that participants were able to estimate the arrival order of the two objects with very high accuracy irrespective of whether they had overtly or covertly pursued the first-arriving object. However, compared to the single-object baseline, participants' temporal estimation of the covert object was impaired when it arrived 500 ms before the overtly pursued object. In terms of eye movements, participants exhibited significantly more switches in gaze location during occlusion from the cued to the non-cued object, but only when the latter arrived first. Still, comparison of trials with and without a switch in gaze location when the non-cued object arrived first indicated no advantage for temporal estimation. Taken together, our results indicate that overt pursuit is sufficient but not necessary for accurate temporal estimation. Covert pursuit can enable representation of a moving object's trajectory and thereby accurate temporal estimation, provided the object moves close to the overt attentional focus. |
Brett C. Bays; Kristina M. Visscher; Christophe C. Le Dantec; Aaron R. Seitz Alpha-band EEG activity in perceptual learning Journal Article In: Journal of Vision, vol. 15, no. 10, pp. 1–12, 2015. @article{Bays2015, In studies of perceptual learning (PL), subjects are typically highly trained across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding what sources of brain plasticity underlie these PL-based learning improvements. Here we investigate the hypothesis that PL, among other mechanisms, leads to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation following training. However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and are consistent with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on differential effort of the individual at the time of stimulus processing. |
Stefanie I. Becker; Amanda J. Lewis Oculomotor capture by irrelevant onsets with and without color contrast Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 60–71, 2015. @article{Becker2015, It is widely known that irrelevant onsets (i.e., items appearing in previously empty locations) can automatically capture attention and attract our gaze. Some studies have shown that onset capture is stronger when the onset distractor matches the target feature, indicating that onset capture can be modulated by feature-based (top-down) tuning to the target. However, it is less clear whether and to what extent the perceptual saliency of the distractor can further modulate this effect. This study examined the effects of target similarity, competition between target and distractor, and bottom-up color contrast on the ability of an onset distractor to capture the gaze, by varying the color (contrast) and stimulus-onset asynchrony of the onset distractor. The results clearly show that competition and feature-based attention modulate capture by the irrelevant onset to a large extent, whereas bottom-up color contrasts do not modulate onset capture. These results indicate the need to revise current accounts of gaze control. |
Harold E. Bedell; John Siderov; Monika A. Formankiewicz; Sarah J. Waugh; Senay Aydin Evidence for an eye-movement contribution to normal foveal crowding Journal Article In: Optometry and Vision Science, vol. 92, no. 2, pp. 237–245, 2015. @article{Bedell2015, Purpose. Along with contour interaction, inaccurate and imprecise eye movements and attention have been suggested to contribute to poorer acuity for “crowded” versus uncrowded targets. To investigate the role of eye movements in foveal crowding, we compared percent correct letter identification for short and long lines of near-threshold letters with different separations. Methods. Five normal observers read short (4 to 6 letters) and long (10 to 12 letters) lines of near-threshold, Sloan letters with edge-to-edge letter separations of 0.5, 1, and 2 letter spaces. Percent correct letter identification for the 2 to 4 interior letters in short strings and the 8 to 10 interior letters in long strings was compared with a no-crowding condition. Results. Letter identification was significantly worse than the no-crowding condition for long letter strings with a separation of 1 letter space and for both long and short letter strings with a separation of 0.5 letter spaces. Observers more often reported the incorrect number of letters in long than in short letter strings, even for a separation of 2 letter spaces. Similar results were obtained during straight-ahead gaze and while viewing in 30 to 40 degrees left gaze, where two of the five observers exhibited an increase in horizontal fixational instability. Conclusions. We argue that lower percent correct letter identification and more frequent errors in reporting the number of letters in long compared with short letter strings reflect an eye-movement contribution to foveal crowding. |
Thomas P. O'Connell; Dirk B. Walther Dissociation of salience-driven and content-driven spatial attention to scene category with predictive decoding of gaze patterns Journal Article In: Journal of Vision, vol. 15, no. 5, pp. 1–13, 2015. @article{OConnell2015, Scene content is thought to be processed quickly and efficiently to bias subsequent visual exploration. Does scene content bias spatial attention during task-free visual exploration of natural scenes? If so, is this bias driven by patterns of physical salience or content-driven biases formed through previous encounters with similar scenes? We conducted two eye-tracking experiments to address these questions. Using a novel gaze decoding method, we show that fixation patterns predict scene category during free exploration. Additionally, we isolate salience-driven contributions using computational salience maps and content-driven contributions using gaze-restricted fixation data. We find distinct time courses for salience-driven and content-driven effects. The influence of physical salience peaked initially but quickly fell off at 600 ms past stimulus onset. The influence of content effects started at chance and steadily increased over the 2000 ms after stimulus onset. The combination of these two components significantly explains the time course of gaze allocation during free exploration. |
Yuka O. Okazaki; Jörn M. Horschig; Lisa Luther; Robert Oostenveld; Ikuya Murakami; Ole Jensen Real-time MEG neurofeedback training of posterior alpha activity modulates subsequent visual detection performance Journal Article In: NeuroImage, vol. 107, pp. 323–332, 2015. @article{Okazaki2015, It has been demonstrated that alpha activity is lateralized when attention is directed to the left or right visual hemifield. We investigated whether real-time neurofeedback training of the alpha lateralization enhances participants' ability to modulate posterior alpha lateralization and causes subsequent short-term changes in visual detection performance. The experiment consisted of three phases: (i) pre-training assessment, (ii) neurofeedback phase and (iii) post-training assessment. In the pre- and post-training phases we measured the threshold to covertly detect a cued faint Gabor stimulus presented in the left or right hemifield. During magnetoencephalography (MEG) neurofeedback, two face stimuli superimposed with noise were presented bilaterally. Participants were cued to attend to one of the hemifields. The transparency of the superimposed noise and thus the visibility of the stimuli were varied according to the momentary degree of hemispheric alpha lateralization. In a double-blind procedure half of the participants were provided with sham feedback. We found that hemispheric alpha lateralization increased with the neurofeedback training; this was mainly driven by an ipsilateral alpha increase. Surprisingly, comparing pre- to post-training, detection performance decreased for a Gabor stimulus presented in the hemifield that was un-attended during neurofeedback. This effect was not observed in the sham group. Thus, neurofeedback training alters alpha lateralization, which in turn decreases performance in the untrained hemifield. Our findings suggest that alpha oscillations play a causal role for the allocation of attention. Furthermore, our neurofeedback protocol serves to reduce the detection of unattended visual information and could therefore be of potential use for training to reduce distractibility in attention deficit patients, but also highlights that neurofeedback paradigms can have negative impact on behavioral performance and should be applied with caution. |
Rosanna K. Olsen; Yunjo Lee; Jana Kube; R. Shayna Rosenbaum; Cheryl L. Grady; Morris Moscovitch; Jennifer D. Ryan The role of relational binding in item memory: Evidence from face recognition in a case of developmental amnesia Journal Article In: Journal of Neuroscience, vol. 35, no. 13, pp. 5342–5350, 2015. @article{Olsen2015, Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations for single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when the item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, the distribution of her viewing was primarily directed to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, implicating a relationship between eye movement behavior and a hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. |
Selim Onat; Christian Büchel The neuronal basis of fear generalization in humans Journal Article In: Nature Neuroscience, vol. 18, no. 12, pp. 1811–1818, 2015. @article{Onat2015, Organisms tend to respond similarly to stimuli that are perceptually close to an event that predicts adversity, a phenomenon known as fear generalization. Greater dissimilarity yields weaker behavioral responses, forming a fear-tuning profile. The perceptual model of fear generalization assumes that behavioral fear tuning results from perceptual similarities, suggesting that brain responses should also exhibit the same fear-tuning profile. Using fMRI and a circular fear-generalization procedure, we tested this prediction. In contrast with the perceptual model, insula responses showed less generalization than behavioral responses and encoded the aversive quality of the conditioned stimulus, as shown by high pattern similarity between the conditioned stimulus and the shock. Also inconsistent with the perceptual model, object-sensitive visual areas responded to ambiguity-related outcome uncertainty. Together these results indicate that fear generalization is not passively driven by perception, but is an active process integrating threat identification and ambiguity-based uncertainty to orchestrate a flexible, adaptive fear response. |
Kristien Ooms; Philippe De Maeyer; Veerle Fack Listen to the map user: Cognition, memory, and expertise Journal Article In: The Cartographic Journal, vol. 52, no. 1, pp. 3–19, 2015. @article{Ooms2015a, This paper aims to extend current research regarding map users' cognitive processes while working with screen maps. The described experiment investigates how (expert and novice) map users retrieve information from memory that was previously gathered from screen maps. A user study was conducted in which participants had to draw a map from memory. During this task, they were instructed to say out loud every thought that came to mind. Both user groups addressed the same general cognitive structures and processes to solve the task at hand. However, the experts' background knowledge facilitated the retrieval process and allowed them to derive extra information through deductive reasoning. The novices used more descriptive terms instead of naming the objects, and could remember fewer, and less detailed, map elements. |
Leonie Oostwoud Wijdenes; Louise Marshall; Paul M. Bays Evidence for optimal integration of visual feature representations across saccades Journal Article In: Journal of Neuroscience, vol. 35, no. 28, pp. 10146–10153, 2015. @article{OostwoudWijdenes2015, We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with each fixation, or by maintaining a representation of presaccadic visual features in working memory and updating it with new information from the remapped location. Crucially, when multiple objects are present in a scene, the planning of eye movements profoundly affects the precision of their working memory representations, transferring limited memory resources from fixation toward the saccade target. Here we show that saccades result in an update of not just the precision of representations but also their contents. When multiple item colors are shifted imperceptibly during a saccade, the perceived colors are found to fall between presaccadic and postsaccadic values, with the weight given to each input varying continuously with item location, and fixed relative to saccade parameters. Increasing sensory uncertainty, by adding color noise, biases updating toward the more reliable input, which is consistent with an optimal integration of presaccadic working memory with a postsaccadic updating signal. We recover this update signal and show it to be tightly focused on the vicinity of the saccade target. These results reveal how the nervous system accumulates detailed visual information from multiple views of the same object or scene. |
José P. Ossandón; Peter König; Tobias Heed Irrelevant tactile stimulation biases visual exploration in external coordinates Journal Article In: Scientific Reports, vol. 5, pp. 10664, 2015. @article{Ossandon2015a, We evaluated the effect of irrelevant tactile stimulation on humans' free-viewing behavior during the exploration of complex static scenes. Specifically, we address the questions of (1) whether task-irrelevant tactile stimulation presented to subjects' hands can guide visual selection during free viewing; (2) whether tactile stimulation can modulate visual exploratory biases that are independent of image content and task goals; and (3) in which reference frame these effects occur. Tactile stimulation to uncrossed and crossed hands during the viewing of static images resulted in long-lasting modulation of visual orienting responses. Subjects showed a well-known leftward bias during the early exploration of images, and this bias was modulated by tactile stimulation presented at image onset. Tactile stimulation, both at image onset and later during the trials, biased visual orienting toward the space ipsilateral to the stimulated hand, both in uncrossed and crossed hand postures. The long-lasting temporal and global spatial profile of the modulation of free viewing exploration by touch indicates that cross-modal cues produce orienting responses, which are coded exclusively in an external reference frame. |
José P. Ossandón; Selim Onat; Peter Konig Spatial biases in viewing behavior Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–26, 2015. @article{Ossandon2015, Viewing behavior exhibits temporal and spatial structure that is independent of stimulus content and task goals. One example of such structure is horizontal biases, which are likely rooted in left-right asymmetries of the visual and attentional systems. Here, we studied the existence, extent, and mechanisms of this bias. Left- and right- handed subjects explored scenes from different image categories, presented in original and mirrored versions. We also varied the spatial spectral content of the images and the timing of stimulus onset. We found a marked leftward bias at the start of exploration that was independent of image category. This left bias was followed by a weak bias to the right that persisted for several seconds. This asymmetry was found in the majority of right-handers but not in left-handers. Neither low- nor high-pass filtering of the stimuli influenced the bias. This argues against mechanisms related to the hemispheric segregation of global versus local visual processing. Introducing a delay in stimulus onset after offset of a central fixation spot also had no influence. The bias was present even when stimuli were presented continuously and without any requirement to fixate, associated to both fixation- and saccade-contingent image changes. This suggests the bias is not caused by structural asymmetries in fixation control. Instead the pervasive horizontal bias is compatible with known asymmetries of higher-level attentional areas related to the detection of novel events. |
Florian Ostendorf; Raymond J. Dolan Integration of retinal and extraretinal information across eye movements Journal Article In: PLoS ONE, vol. 10, no. 1, pp. e0116810, 2015. @article{Ostendorf2015, Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models are one way to solve this problem where the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target, jumping unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, where perceptual decisions could take into account the spatial distance between target and flanker object. Here, precision was better compared to control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based upon close to optimal integration of both retinal and extraretinal pieces of information. |
Mathias Abegg; Dario Pianezzi; Jason J. S. Barton A vertical asymmetry in saccades Journal Article In: Journal of Eye Movement Research, vol. 8, no. 5, pp. 1–10, 2015. @article{Abegg2015, Visual exploration of natural scenes imposes demands that differ between the upper and the lower visual hemifield. Yet little is known about how ocular motor performance is affected by the location of visual stimuli or the direction of a behavioural response. We compared saccadic latencies between upper and lower hemifield in a variety of conditions, including short-latency prosaccades, long-latency prosaccades, antisaccades, memory-guided saccades and saccades with increased attentional and selection demand. All saccade types, except memory-guided saccades, had shorter latencies when saccades were directed towards the upper field as compared to downward saccades (p < 0.05). This upper field reaction time advantage probably arises in ocular motor rather than visual processing. It may originate in structures involved in motor preparation rather than execution. |
Naotoshi Abekawa; Hiroaki Gomi Online gain update for manual following response accompanied by gaze shift during arm reaching Journal Article In: Journal of Neurophysiology, vol. 113, no. 4, pp. 1206–1216, 2015. @article{Abekawa2015, To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination. |
John F. Ackermann; Michael S. Landy Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 2, pp. 638–658, 2015. @article{Ackermann2015, Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects' criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. |
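The detection-theoretic reasoning in the abstract above — that unequal probabilities and rewards should shift an ideal observer's criterion away from neutral — can be sketched numerically. The following is a generic equal-variance signal-detection example, not code from the paper; the function names, the payoff scheme (only hits and correct rejections rewarded), and the parameter values are illustrative assumptions:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_gain(c, d_prime, p_signal, v_hit, v_cr):
    """Expected gain EG(c) for an equal-variance Gaussian yes/no
    detection task with criterion c.

    Signal distribution: N(d', 1); noise distribution: N(0, 1).
    v_hit: reward for a hit; v_cr: reward for a correct rejection
    (misses and false alarms earn nothing in this simplified scheme)."""
    p_hit = 1.0 - norm_cdf(c - d_prime)  # P(respond "yes" | signal)
    p_cr = norm_cdf(c)                   # P(respond "no"  | noise)
    return p_signal * p_hit * v_hit + (1.0 - p_signal) * p_cr * v_cr

def optimal_criterion(d_prime, p_signal, v_hit, v_cr):
    """Closed-form maximizer of EG(c): the likelihood ratio at c must
    equal beta = ((1 - p) * v_cr) / (p * v_hit), which gives
    c_opt = ln(beta) / d' + d' / 2."""
    beta = ((1.0 - p_signal) * v_cr) / (p_signal * v_hit)
    return log(beta) / d_prime + d_prime / 2.0
```

With, say, d′ = 1 and an 80% target-present probability, `optimal_criterion` falls well below the neutral value d′/2 — the liberal shift that 'conservative' observers in such studies under-apply.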
Mehmet N. Ağaoğlu; Michael H. Herzog; Haluk Öğmen The effective reference frame in perceptual judgments of motion direction Journal Article In: Vision Research, vol. 107, pp. 101–112, 2015. @article{Agaoglu2015, The retinotopic projection of stimulus motion depends both on the motion of the stimulus and the movements of the observer. In this study, we aimed to quantify the contributions of endogenous (retinotopic) and exogenous (spatiotopic and motion-based) reference frames on judgments of motion direction. We used a variant of the induced motion paradigm and we created different experimental conditions in which the predictions of each reference frame were different. Finally, assuming additive contributions from different reference frames, we used a linear model to account for the data. Our results suggest that the effective reference frame for motion perception emerges from an amalgamation of motion-based, retinotopic and spatiotopic reference frames. In determining the percept, the influence of relative motion, defined by a motion-based reference frame, dominates those of retinotopic and spatiotopic motions within a finite region. We interpret these findings within the context of the Reference Frame Metric Field (RFMF) theory, which states that local motion vectors might have perceptual reference-frame fields associated with them, and interactions between these fields determine the selection of the effective reference frame. |
Mehmet N. Ağaoğlu; Michael H. Herzog; Haluk Öğmen Field-like interactions between motion-based reference frames Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 6, pp. 2082–2097, 2015. @article{Agaoglu2015a, A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379–393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that different reference-field interactions are nonlinear and depend on how the motion vectors are grouped.
These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames. |
Sara Ajina; Christopher Kennard; Geraint Rees; Holly Bridge Motion area V5/MT+ response to global motion in the absence of V1 resembles early visual cortex Journal Article In: Brain, vol. 138, no. 1, pp. 164–178, 2015. @article{Ajina2015, Motion area V5/MT+ shows a variety of characteristic visual responses, often linked to perception, which are heavily influenced by its rich connectivity with the primary visual cortex (V1). This human motion area also receives a number of inputs from other visual regions, including direct subcortical connections and callosal connections with the contralateral hemisphere. Little is currently known about such alternative inputs to V5/MT+ and how they may drive and influence its activity. Using functional magnetic resonance imaging, the response of human V5/MT+ to increasing the proportion of coherent motion was measured in seven patients with unilateral V1 damage acquired during adulthood, and a group of healthy age-matched controls. When V1 was damaged, the typical V5/MT+ response to increasing coherence was lost. Rather, V5/MT+ in patients showed a negative trend with coherence that was similar to coherence-related activity in V1 of healthy control subjects. This shift to a response-pattern more typical of early visual cortex suggests that in the absence of V1, V5/MT+ activity may be shaped by similar direct subcortical input. This is likely to reflect intact residual pathways rather than a change in connectivity, and has important implications for blindsight function. It also confirms predictions that V1 is critically involved in normal V5/MT+ global motion processing, consistent with a convergent model of V1 input to V5/MT+. Historically, most attempts to model cortical visual responses do not consider the contribution of direct subcortical inputs that may bypass striate cortex, such as input to V5/MT+. 
We have shown that the signal change driven by these non-striate pathways can be measured, and suggest that models of the intact visual system may benefit from considering their contribution. |
Reem Alsadoon; Trude Heift Textual input enhancement for vowel blindness: A study with Arabic ESL learners Journal Article In: The Modern Language Journal, vol. 99, no. 1, pp. 57–79, 2015. @article{Alsadoon2015, This study explores the impact of textual input enhancement on the noticing and intake of English vowels by Arabic L2 learners of English. Arabic L1 speakers are known to experience vowel blindness, commonly defined as a difficulty in the textual decoding and encoding of English vowels due to an insufficient decoding of the word form. Thirty beginner ESL learners participated in a training study during which the experimental group received textual input enhancement on English vowels. Students completed a pretest and an immediate and delayed posttest. An eye-tracker recorded students' eye fixations during the treatment phase. Results indicate that vowel blindness was significantly reduced for the experimental group who received vowel training in the form of textual input enhancement. This might be due to a longer focus on the target words as suggested by our eye-tracking data. |
Nicola C. Anderson; Fraser Anderson; Alan Kingstone; Walter F. Bischof A comparison of scanpath comparison methods Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 1377–1392, 2015. @article{Anderson2015a, Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. In the present work, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure. |
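Among the families of measures that surveys like the one above compare, the oldest is the string-edit approach: fixations are quantized into labelled grid regions, and two scanpaths are compared by Levenshtein distance over the resulting strings. The following is a minimal, self-contained sketch of that general idea — the grid dimensions, normalization, and function names are illustrative choices, not prescriptions from the paper:

```python
def scanpath_to_string(fixations, screen_w, screen_h, n_cols=5, n_rows=5):
    """Quantize (x, y) fixations into grid-cell labels.

    Each grid cell gets a single character, so a scanpath becomes a
    string over at most n_cols * n_rows symbols."""
    labels = []
    for x, y in fixations:
        col = min(int(x / screen_w * n_cols), n_cols - 1)
        row = min(int(y / screen_h * n_rows), n_rows - 1)
        labels.append(chr(ord('A') + row * n_cols + col))
    return ''.join(labels)

def levenshtein(a, b):
    """Standard dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(fx1, fx2, screen_w, screen_h):
    """1 minus normalized edit distance: 1.0 means identical strings."""
    s1 = scanpath_to_string(fx1, screen_w, screen_h)
    s2 = scanpath_to_string(fx2, screen_w, screen_h)
    if not s1 and not s2:
        return 1.0
    return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2))
```

Identical fixation sequences score 1.0 and scanpaths visiting disjoint regions score near 0. Finer grids make the measure more sensitive to spatial detail but also to noise — one of the trade-offs that such method comparisons make explicit.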
Nicola C. Anderson; Eduard Ort; Wouter Kruijne; Martijn Meeter; Mieke Donk It depends on when you look at it: Salience influences eye movements in natural scene viewing and search early in time Journal Article In: Journal of Vision, vol. 15, no. 5, pp. 1–22, 2015. @article{Anderson2015b, It is generally accepted that salience affects eye movements in simple artificially created search displays. However, no such consensus exists for eye movements in natural scenes, with several reports arguing that it is mostly high-level cognitive factors that control oculomotor behavior in natural scenes. Here, we manipulate the salience distribution across images by decreasing or increasing the contrast in a gradient across the image. We recorded eye movements in an encoding task (Experiment 1) and a visual search task (Experiment 2) and analyzed the relationship between the latency of fixations and subsequent saccade targeting throughout scene viewing. We find that short-latency first saccades are more likely to land on a region of the image with high salience than long-latency and subsequent saccades in both the encoding and visual search tasks. This implies that salience indeed influences oculomotor behavior in natural scenes, albeit on a different timescale than previously reported. We discuss our findings in relation to current theories of saccade control in natural scenes. |
Ana Beatriz Arêas Da Luz Fontes; Ana Isabel Schwartz Bilingual access of homonym meanings: Individual differences in bilingual access of homonym meanings Journal Article In: Bilingualism: Language and Cognition, vol. 18, no. 4, pp. 639–656, 2015. @article{AreasDaLuzFontes2015, The goal of the present study was to identify the cognitive processes that underlie lexical ambiguity resolution in a second language (L2). We examined which cognitive factors predict the efficiency in accessing subordinate meanings of L2 homonyms in a sample of highly-proficient, Spanish-English bilinguals. The predictive ability of individual differences in (1) homonym processing in the L1, (2) working memory capacity and (3) sensitivity to cross-language form overlap were examined. In two experiments, participants were presented with cognate and noncognate homonyms as either a prime in a lexical decision task (Experiment 1) or embedded in a sentence (Experiment 2). In both experiments speed and accuracy in accessing subordinate meanings in the L1 was the strongest predictor of speed and accuracy in accessing subordinate meanings in the L2. Sensitivity to cross-language form overlap predicted performance in lexical decision while working memory capacity predicted processing in sentence comprehension. |
Joseph M. Arizpe; Vincent Walsh; Chris I. Baker Characteristic visuomotor influences on eye-movement patterns to faces and other high level stimuli Journal Article In: Frontiers in Psychology, vol. 6, pp. 1027, 2015. @article{Arizpe2015, Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. 
Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies. |
Candice C. Morey; Yongqi Cong; Yixia Zheng; Mindi Price; Richard D. Morey The color-sharing bonus: Roles of perceptual organization and attentive processes in visual working memory. Journal Article In: Archives of Scientific Psychology, vol. 3, no. 1, pp. 18–29, 2015. @article{Morey2015, Color repetitions in a visual scene boost memory for its elements, a phenomenon known as the color-sharing effect. This may occur because improved perceptual organization reduces information load or because the repetitions capture attention. The implications of these explanations differ drastically for both the theoretical meaning of this effect and its potential value for applications in design of visual materials. If repetitions capture attention to the exclusion of other details, then use of repetition in visual displays should be confined to emphasized details, but if repetitions reduce the load of the display, designers can assume that the nonrepeated information is also more likely to be attended and remembered. We manipulated the availability of general attention during a visual memory task by comparing groups of participants engaged in meaningless speech or attention-demanding continuous arithmetic. We also tracked eye movements as an implicit indicator of selective attention. Estimated memory capacity was always higher when color duplicates were tested, and under full attention conditions this bonus spilled over to the unique colors too. Analyses of gazes showed that with full attention, participants tended to glance earlier at duplicate colors during stimulus presentation but looked more at unique colors during the retention interval. This pattern of results suggests that the color-sharing bonus reflects efficient perceptual organization of the display based on the presence of repetitions, and possibly strategic attention allocation when attention is available. |
Michael Morgan; Simon Grant; Dean Melmoth; Joshua A. Solomon Tilted frames of reference have similar effects on the perception of gravitational vertical and the planning of vertical saccadic eye movements Journal Article In: Experimental Brain Research, vol. 233, no. 7, pp. 2115–2125, 2015. @article{Morgan2015, We investigated the effects of a tilted reference frame (i.e., allocentric visual context) on the perception of the gravitational vertical and saccadic eye movements along a planned egocentric vertical path. Participants (n = 5) in a darkened room fixated a point in the center of a circle on an LCD display and decided which of two sequentially presented dots was closer to the unmarked ‘6 o'clock' position on that circle (i.e., straight down toward their feet). The slope of their perceptual psychometric functions showed that participants were able to locate which dot was nearer the vertical with a precision of 1°–2°. For three of the participants, a square frame centered at fixation and tilted (in the roll direction) 5.6° from the vertical caused a strong perceptual bias, manifest as a shift in the psychometric function, in the direction of the traditional ‘rod-and-frame' effect, without affecting precision. The other two participants showed negligible or no equivalent biases. The same subjects participated in the saccade version of the task, in which they were instructed to shift their gaze to the 6 o'clock position as soon as the central fixation point disappeared. The participants who showed perceptual biases showed biases of similar magnitude in their saccadic endpoints, with a strong correlation between perceptual and saccadic biases across all subjects. Tilting of the head 5.6° reduced both perceptual and saccadic biases in all but one observer, who developed a strong saccadic bias. Otherwise, the overall pattern and significant correlations between results remained the same.
We conclude that our observers' saccades-to-vertical were dominated by perceptual input, which outweighed any gravitational or head-centered input. |
Masahiro Morii; Takayuki Sakagami The effect of gaze-contingent stimulus elimination on preference judgments Journal Article In: Frontiers in Psychology, vol. 6, pp. 1351, 2015. @article{Morii2015, This study examined how stimulus elimination (SE) in a preference judgment task affects observers' choices. Previous research suggests that biasing gaze toward one alternative can increase preference for it; this preference reciprocally promotes gaze bias. Shimojo et al. (2003) called this phenomenon the Gaze Cascade Effect. They showed that the likelihood that an observer's gaze was directed toward their chosen alternative increased steadily until the moment of choosing. Therefore, we tested whether observers would prefer an alternative at which they had been gazing last if both alternatives were removed prior to the start of this rising gaze likelihood. To test this, we used a preference judgment task and controlled stimulus presentation based on gaze using an eye-tracking system. A pair of nonsensical figures was presented on the computer screen and both stimuli were eliminated while participants were still making their preference decision. The timing of the elimination differed between two experiments. In Experiment 1, after gazing at both stimuli one or more times, stimuli were removed when the participant's gaze fell on one alternative, pre-selected as the target stimulus. There was no significant difference in the preference of the two alternatives. In Experiment 2, we did not predefine any target stimulus. After the participant gazed at both stimuli one or more times, both stimuli were eliminated when the participant next fixated on either. The likelihood of choosing the stimulus that was gazed at last (at the moment of elimination) was greater than chance. 
Results showed that controlling participants' choices using gaze-contingent SE was impossible, but the different results between these two experiments suggest that participants decided which stimulus to choose during their first period of gazing at each alternative. Thus, we could predict participants' choices by analyzing eye movement patterns at the moment of SE. |
Antony C. Moss; Ian P. Albery; Kyle R. Dyer; Daniel Frings; Karis Humphreys; Thomas Inkelaar; Emily Harding; Abbie Speller The effects of responsible drinking messages on attentional allocation and drinking behaviour Journal Article In: Addictive Behaviors, vol. 44, pp. 94–101, 2015. @article{Moss2015, Aims: Four experiments were conducted to assess the acute impact of context and exposure to responsible drinking messages (RDMs) on attentional allocation and drinking behaviour of younger drinkers and to explore the utility of lab-based methods for the evaluation of such materials. Methods: A simulated bar environment was used to examine the impact of context, RDM posters, and brief online responsible drinking advice on actual drinking behaviour. Experiments one (n = 50) and two (n = 35) comprised female non-problem drinkers, whilst Experiments three (n = 80) and four (n = 60) included a mixed-gender sample of non-problem drinkers, recruited from an undergraduate student cohort. The Alcohol Use Disorders Identification Test (AUDIT) was used to assess drinking patterns. Alcohol intake was assessed through the use of a taste preference task. Results: Drinking in a simulated bar was significantly greater than in a laboratory setting in the first two studies, but not in the third. There was a significant increase in alcohol consumption as a result of being exposed to RDM posters. Provision of brief online RDM advice reduced the negative impact of these posters somewhat; however, the lowest drinking rates were associated with being exposed to neither posters nor brief advice. Data from the final experiment demonstrated a low level of visual engagement with RDMs, and that exposure to posters was associated with increased drinking. Conclusions: Poster materials promoting responsible drinking were associated with increased consumption amongst undergraduate students, suggesting that poster campaigns to reduce alcohol harms may be having the opposite effect to that intended. 
Findings suggest that further research is required to refine appropriate methodologies for assessing drinking behaviour in simulated drinking environments, to ensure that future public health campaigns of this kind are having their intended effect. |
L. Müller-Pinzler; V. Gazzola; C. Keysers; Jens Sommer; Andreas Jansen; S. Frässle; Wolfgang Einhäuser; Frieder M. Paulus; Sören Krach Neural pathways of embarrassment and their modulation by social anxiety Journal Article In: NeuroImage, vol. 119, pp. 252–261, 2015. @article{MuellerPinzler2015, While being in the center of attention and exposed to other's evaluations humans are prone to experience embarrassment. To characterize the neural underpinnings of such aversive moments, we induced genuine experiences of embarrassment during person-group interactions in a functional neuroimaging study. Using a mock-up scenario with three confederates, we examined how the presence of an audience affected physiological and neural responses and the reported emotional experiences of failures and achievements. The results indicated that publicity induced activations in mentalizing areas and failures led to activations in arousal processing systems. Mentalizing activity as well as attention towards the audience were increased in socially anxious participants. The converging integration of information from mentalizing areas and arousal processing systems within the ventral anterior insula and amygdala forms the neural pathways of embarrassment. Targeting these neural markers of embarrassment in the (para-)limbic system provides new perspectives for developing treatment strategies for social anxiety disorders. |
Vishnu P. Murty; Sarah DuBrow; Lila Davachi The simple act of choosing influences declarative memory Journal Article In: Journal of Neuroscience, vol. 35, no. 16, pp. 6255–6264, 2015. @article{Murty2015, Individuals value the opportunity to make choices and exert control over their environment. This perceived sense of agency has been shown to have broad influences on cognition, including preference, decision-making, and valuation. However, it is unclear whether perceived control influences memory. Using a combined behavioral and functional magnetic resonance imaging approach, we investigated whether imbuing individuals with a sense of agency over their learning experience influences novel memory encoding. Participants encoded objects during a task that manipulated the opportunity to choose. Critically, unlike previous work on active learning, there was no relationship between individuals' choices and the content of memoranda. Despite this, we found that the opportunity to choose resulted in robust, reliable enhancements in declarative memory. Neuroimaging results revealed that anticipatory activation of the striatum, a region associated with decision-making, valuation, and exploration, correlated with choice-induced memory enhancements in behavior. These memory enhancements were further associated with interactions between the striatum and hippocampus. Specifically, anticipatory signals in the striatum when participants are alerted to the fact that they will have to choose one of two memoranda were associated with encoding success effects in the hippocampus on a trial-by-trial basis. The precedence of the striatal signal in these interactions suggests a modulatory relationship of the striatum over the hippocampus. These findings not only demonstrate enhanced declarative memory when individuals have perceived control over their learning but also support a novel mechanism by which these enhancements emerge. 
Furthermore, they demonstrate a novel context in which mesolimbic and declarative memory systems interact. |
Andriy Myachykov; Angelo Cangelosi; Rob Ellis; Martin H. Fischer The oculomotor resonance effect in spatial-numerical mapping Journal Article In: Acta Psychologica, vol. 161, pp. 162–169, 2015. @article{Myachykov2015, We investigated the automatic Spatial-Numerical Association of Response Codes (SNARC) effect in auditory number processing. Two experiments continually measured spatial characteristics of ocular drift at central fixation during and after auditory number presentation. Consistent with the notion of a spatially oriented mental number line, we found spontaneous magnitude-dependent gaze adjustments, both with and without a concurrent saccadic task. This fixation adjustment (1) had a small-number/left-lateralized bias and (2) was biphasic: it emerged briefly around the point of lexical access and later re-emerged more robustly around the onset of the following number. This pattern suggests a two-step mechanism of sensorimotor mapping between numbers and space - a first-pass bottom-up activation followed by a top-down and more robust horizontal SNARC. Our results inform theories of number processing as well as simulation-based approaches to cognition by identifying the characteristics of an oculomotor resonance phenomenon. |
Nicholas E. Myers; Lena Walther; George Wallis; Mark G. Stokes; Anna C. Nobre In: Journal of Cognitive Neuroscience, vol. 27, no. 3, pp. 492–508, 2015. @article{Myers2015a, Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information. |
2014 |
Michael Morgan A bias-free measure of retinotopic tilt adaptation Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–9, 2014. @article{Morgan2014, The traditional method of single stimuli for measuring perceptual illusions and context effects confounds perceptual effects with changes in the observer's decision criterion. By deciding consciously or unconsciously to select one of the two response alternatives more than the other when unsure of the correct response, the observer can shift his or her psychometric function in a manner indistinguishable from a genuine perceptual shift. Here, a spatial two-alternative forced-choice method is described to measure a perceptual aftereffect by its influence on the shape of the psychometric function rather than the mean. The method was tested by measuring the effect of motion adaptation on the apparent Vernier offset of stationary Gabor patterns. The shift due to adaptation was found to be comparable in size to the internal noise, estimated from the slope of the psychometric function. By moving the eyes between adaptation and test, it was determined that adaptation is retinotopic rather than spatiotopic. |
Stefanie Mueller; Katja Fiehler Effector movement triggers gaze-dependent spatial coding of tactile and proprioceptive-tactile reach targets Journal Article In: Neuropsychologia, vol. 62, no. 1, pp. 184–193, 2014. @article{Mueller2014, Reaching in space requires that the target and the hand are represented in the same coordinate system. While studies on visually-guided reaching consistently demonstrate the use of a gaze-dependent spatial reference frame, controversial results exist in the somatosensory domain. We investigated whether effector movement (eye or arm/hand) after target presentation and before reaching leads to gaze-dependent coding of somatosensory targets. Subjects reached to a felt target while directing gaze towards one of seven fixation locations. Touches were applied to the fingertip(s) of the left hand (proprioceptive-tactile targets) or to the dorsal surface of the left forearm (tactile targets). Effector movement was varied in terms of movement of the target limb or a gaze shift. Horizontal reach errors systematically varied as a function of gaze when a movement of either the target effector or gaze was introduced. However, we found no effect of gaze on horizontal reach errors when a movement was absent before the reach. These findings were comparable for tactile and proprioceptive-tactile targets. Our results suggest that effector movement promotes a switch from a gaze-independent to a gaze-dependent representation of somatosensory reach targets. |
Stefanie Mueller; Katja Fiehler Gaze-dependent spatial updating of tactile targets in a localization task Journal Article In: Frontiers in Psychology, vol. 5, pp. 66, 2014. @article{Mueller2014a, There is concurrent evidence that visual reach targets are represented with respect to gaze. For tactile reach targets, we previously showed that an effector movement leads to a shift from a gaze-independent to a gaze-dependent reference frame. Here we aimed to unravel the influence of effector movement (gaze shift) on the reference frame of tactile stimuli using a spatial localization task (yes/no paradigm). We assessed how gaze direction (fixation left/right) alters the perceived spatial location (point of subjective equality) of sequentially presented tactile standard and visual comparison stimuli while effector movement (gaze fixed/shifted) and stimulus order (vis-tac/tac-vis) were varied. In the fixed-gaze condition, subjects maintained gaze at the fixation site throughout the trial. In the shifted-gaze condition, they foveated the first stimulus, then made a saccade toward the fixation site where they held gaze while the second stimulus appeared. Only when an effector movement occurred after the encoding of the tactile stimulus (shifted-gaze, tac-vis), gaze similarly influenced the perceived location of the tactile and the visual stimulus. In contrast, when gaze was fixed or a gaze shift occurred before encoding of the tactile stimulus, gaze differentially affected the perceived spatial relation of the tactile and the visual stimulus suggesting gaze-dependent coding of only one of the two stimuli. Consistent with previous findings this implies that visual stimuli vary with gaze irrespective of whether gaze is fixed or shifted. However, a gaze-dependent representation of tactile stimuli seems to critically depend on an effector movement (gaze shift) after tactile encoding triggering spatial updating of tactile targets in a gaze-dependent reference frame. Together with our recent findings on tactile reaching, the present results imply similar underlying reference frames for tactile spatial perception and action. |
Romy Müller; Jens R. Helmert; Sebastian Pannasch Limitations of gaze transfer: Without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do Journal Article In: Acta Psychologica, vol. 152, pp. 19–28, 2014. @article{Mueller2014b, Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. |
Aidan P. Murphy; David A. Leopold; Andrew E. Welchman Perceptual memory drives learning of retinotopic biases for bistable stimuli Journal Article In: Frontiers in Psychology, vol. 5, pp. 60, 2014. @article{Murphy2014, The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased towards one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly “train” a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to forty minutes, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of five minutes, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization. |