EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some from early 2024) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2018 |
Mackenzie G. Glaholt; Grace Sim; Philips Laou; Simon Roy Evaluation of fused imagery using eye movement-based measures of perceptual processing Journal Article In: Information Fusion, vol. 39, pp. 186–203, 2018. @article{Glaholt2018, Human performance measures were used to evaluate the perceptual processing efficiency of infrared and fused-infrared images. In two experiments, eye movements were recorded while subjects searched for and identified human targets in forested scenes presented on a computer monitor. The scenes were photographed simultaneously using short-wave infrared (SWIR), long-wave infrared (LWIR), and visible (VIS) spectrum cameras. Fused images were created through two-way combinations of these single-band images. In Experiment 1 the single band sensors were contrasted with a simple average fusion scheme (SWIR/LWIR). Analysis of subjects' eye movements revealed differences between sensors in measures of central processing (gaze duration, response accuracy) and peripheral selection (detection interval, saccade amplitude). In Experiment 2 this methodology was applied to compare three two-way combinations of sensors (SWIR/LWIR, SWIR/VIS, VIS/LWIR), produced by state-of-the-art fusion methods. Peripheral selection for fused images tended to exhibit a compromise between the performance levels of component sensor images, while measures of central processing showed evidence that fused images matched or exceeded the performance level of component single-band sensor images. Stimulus analysis was conducted to link measures of central and peripheral processing efficiency to image characteristics (e.g. target contrast, target-background contrast), and these image characteristics were able to account for a moderate amount of the variance in the performance across fusion conditions. These findings demonstrate the utility of eye movement measures for evaluating the perceptual efficiency of fused imagery. |
Detre A. Godinez; Daniel S. Lumian; Tanisha Crosby-Attipoe; Ana M. Bedacarratz; Paree Zarolia; Kateri McRae Overlapping and distinct neural correlates of imitating and opposing facial movements Journal Article In: NeuroImage, vol. 166, pp. 239–246, 2018. @article{Godinez2018, Previous studies have demonstrated that imitating a face can be relatively automatic and reflexive. In contrast, opposing facial expressions may require engaging flexible, cognitive control. However, few studies have examined the degree to which imitation and opposition of facial movements recruit overlapping and distinct neural regions. Furthermore, little work has examined whether opposition and imitation of facial movements differ between emotional and averted eye gaze facial expressions. This study utilized a novel task with 40 participants to compare passive viewing, imitation and opposition of emotional faces looking forward and neutral faces with averted eye gaze [(3: Look, Imitate, Oppose) x (2: Emotion, Averted Eye)]. Imitation and opposition of both types of facial movements elicited overlapping activation in frontal, premotor, superior temporal and anterior intraparietal regions. These regions are recruited during cognitive control, face processing and mirroring tasks. For both emotional and averted eye gaze photos, opposition engaged the superior frontal gyrus, superior temporal sulcus and the anterior intraparietal sulcus to a greater extent compared to imitation. Finally, stimulus type and instruction interacted, such that for the eye gaze condition only, greater activation was observed in the dorsal anterior cingulate (dACC) during opposition compared to imitation, while no significant dACC differences were observed for the emotional expression conditions, which instead showed significantly greater activation in the middle and frontal pole. 
Overall these results showed significant overlap between imitation and opposition, as well as increased activation of these regions to generate an opposing facial movement relative to imitating. |
Alexander Goettker; Doris I. Braun; Alexander C. Schütz; Karl R. Gegenfurtner Execution of saccadic eye movements affects speed perception Journal Article In: Proceedings of the National Academy of Sciences, vol. 115, no. 9, pp. 2240–2245, 2018. @article{Goettker2018, Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. |
Frederic Göhringer; Miriam Löhr-Limpens; Thomas Schenk The visual guidance of action is not insulated from cognitive interference: A multitasking study on obstacle-avoidance and bisection Journal Article In: Consciousness and Cognition, vol. 64, pp. 72–83, 2018. @article{Goehringer2018, The Perception-Action Model (PAM) considers the visual system to be divided into two streams defined by their specific functions, a ventral stream for vision and a dorsal stream for action. In this study we investigated two behavioral paradigms which according to PAM represent the two contrasting functions of the ventral and dorsal stream, namely bisection and obstacle-avoidance, respectively. It is an assumption of PAM that while ventral stream processing is ultimately linked with processing in other cognitive systems, dorsal stream processing is insulated from cognition. Accordingly it can be expected that a secondary task will interfere with bisection but not with obstacle-avoidance. We tested this prediction using a rapid serial visual presentation task as our secondary task (RSVP). Contrary to expectations we found significant interference for both bisection and obstacle-avoidance. Our findings suggest that dorsal-stream processing is not insulated from cognitive processes. |
Tao He; Matthias Fritsche; Floris P. Lange Predictive remapping of visual features beyond saccadic targets Journal Article In: Journal of Vision, vol. 18, no. 13, pp. 1–16, 2018. @article{He2018a, Visual stability is thought to be mediated by predictive remapping of the relevant object information from its current, pre-saccadic locations to its future, post-saccadic location on the retina. However, it is heavily debated whether and what feature information is predictively remapped during the pre-saccadic interval. Using an orientation adaptation paradigm, we investigated whether predictive remapping occurs for stimulus features and whether adaptation itself is remapped. We found strong evidence for predictive remapping of a stimulus presented shortly before saccade onset, but no remapping of adaptation. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a 'forward remapping' process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability. |
Wei He; Blake W. Johnson Development of face recognition: Dynamic causal modelling of MEG data Journal Article In: Developmental Cognitive Neuroscience, vol. 30, pp. 13–22, 2018. @article{He2018, Electrophysiological studies of adults indicate that brain activity is enhanced during viewing of repeated faces, at a latency of about 250 ms after the onset of the face (M250/N250). The present study aimed to determine if this effect was also present in preschool-aged children, whose brain activity was measured in a custom-sized pediatric MEG system. The results showed that, unlike adults, face repetition did not show any significant modulation of M250 amplitude in children; however children's M250 latencies were significantly faster for repeated than non-repeated faces. Dynamic causal modelling (DCM) of the M250 in both age groups tested the effects of face repetition within the core face network including the occipital face area (OFA), the fusiform face area (FFA), and the superior temporal sulcus (STS). DCM revealed that repetition of identical faces altered both forward and backward connections in children and adults; however the modulations involved inputs to both FFA and OFA in adults but only to OFA in children. These findings suggest that the amplitude-insensitivity of the immature M250 may be due to a weaker connection between the FFA and lower visual areas. |
James B. Heald; James N. Ingram; J. Randall Flanagan; Daniel M. Wolpert Multiple motor memories are learned to control different points on a tool Journal Article In: Nature Human Behaviour, vol. 2, no. 4, pp. 300–311, 2018. @article{Heald2018, Skilful object manipulation requires learning the dynamics of objects, linking applied force to motion1,2. This involves the formation of a motor memory3,4, which has been assumed to be associated with the object, independent of the point on the object that one chooses to control. Importantly, in manipulation tasks, different control points on an object, such as the rim of a cup when drinking or its base when setting it down, can be associated with distinct dynamics. Here, we show that opposing dynamic perturbations, which interfere when controlling a single location on an object, can be learned when each is associated with a separate control point. This demonstrates that motor memory formation is linked to control points on the object, rather than the object per se. We also show that the motor system only generates separate memories for different control points if they are linked to different dynamics, allowing efficient use of motor memory. To account for these results, we develop a normative switching state-space model of motor learning, in which the association between cues (control points) and contexts (dynamics) is learned rather than fixed. Our findings uncover an important mechanism through which the motor system generates flexible and dexterous behaviour. |
Claire Louise Heard; Tim Rakow; Tom Foulsham Understanding the effect of information presentation order and orientation on information search and treatment evaluation Journal Article In: Medical Decision Making, vol. 38, no. 6, pp. 646–657, 2018. @article{Heard2018, Background. Past research finds that treatment evaluations are more negative when risks are presented after benefits. This study investigates this order effect: manipulating tabular orientation and order of risk–benefit information, and examining information search order and gaze duration via eye-tracking. Design. 108 (Study 1) and 44 (Study 2) participants viewed information about treatment risks and benefits, in either a horizontal (left-right) or vertical (above-below) orientation, with the benefits or risks presented first (left side or at top). For 4 scenarios, participants answered 6 treatment evaluation questions (1–7 scales) that were combined into overall evaluation scores. In addition, Study 2 collected eye-tracking data during the benefit–risk presentation. Results. Participants tended to read one set of information (i.e., all risks or all benefits) before transitioning to the other. Analysis of order of fixations showed this tendency was stronger in the vertical (standardized mean rank difference further from 0 |
Nicholas Hedger; Anthony Haffey; Eugene McSorley; Bhismadev Chakrabarti Empathy modulates the temporal structure of social attention Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 285, pp. 1–9, 2018. @article{Hedger2018, Individuals with low empathy often show reduced attention towards social stimuli. A limitation of this literature is the lack of empirical work that has explicitly characterized how this relationship manifests itself over time. We investigate this issue by analysing data from two large eye-tracking datasets (total n = 176). Via growth-curve analysis, we demonstrate that self-reported empathy (as measured by the empathy quotient-EQ) predicts the temporal evolution of gaze behaviour under conditions where social and non-social stimuli compete for attention. In both datasets, we found that EQ not only predicted a global increase in social attention, but predicted a different temporal profile of social attention. Specifically, we detected a reliable effect of empathy on gaze towards social images after prolonged viewing. An analysis of switch latencies revealed that low-EQ observers switched gaze away from an initially fixated social image more frequently and at earlier latencies than high-EQ observers. Our analyses demonstrate that modelling these temporal components of gaze signals may reveal useful behavioural phenotypes. The explanatory power of this approach may provide enhanced biomarkers for conditions marked by deficits in empathy-related processes. |
Simone G. Heideman; Gustavo Rohenkohl; Joshua J. Chauvin; Clare E. Palmer; Freek Ede; Anna C. Nobre Anticipatory neural dynamics of spatial-temporal orienting of attention in younger and older adults Journal Article In: NeuroImage, vol. 178, pp. 46–56, 2018. @article{Heideman2018a, Spatial and temporal expectations act synergistically to facilitate visual perception. In the current study, we sought to investigate the anticipatory oscillatory markers of combined spatial-temporal orienting and to test whether these decline with ageing. We examined anticipatory neural dynamics associated with joint spatial-temporal orienting of attention using magnetoencephalography (MEG) in both younger and older adults. Participants performed a cued covert spatial-temporal orienting task requiring the discrimination of a visual target. Cues indicated both where and when targets would appear. In both age groups, valid spatial-temporal cues significantly enhanced perceptual sensitivity and reduced reaction times. In the MEG data, the main effect of spatial orienting was the lateralised anticipatory modulation of posterior alpha and beta oscillations. In contrast to previous reports, this modulation was not attenuated in older adults; instead it was even more pronounced. The main effect of temporal orienting was a bilateral suppression of posterior alpha and beta oscillations. This effect was restricted to younger adults. Our results also revealed a striking interaction between anticipatory spatial and temporal orienting in the gamma-band (60–75 Hz). When considering both age groups separately, this effect was only clearly evident and only survived statistical evaluation in the older adults. 
Together, these observations provide several new insights into the neural dynamics supporting separate as well as combined effects of spatial and temporal orienting of attention, and suggest that different neural dynamics associated with attentional orienting appear differentially sensitive to ageing. |
Simone G. Heideman; Freek Ede; Anna C. Nobre Early behavioural facilitation by temporal expectations in complex visual-motor sequences Journal Article In: Neuroscience, vol. 389, pp. 74–84, 2018. @article{Heideman2018b, In daily life, temporal expectations may derive from incidental learning of recurring patterns of intervals. We investigated the incidental acquisition and utilisation of combined temporal-ordinal (spatial/effector) structure in complex visual-motor sequences using a modified version of a serial reaction time (SRT) task. In this task, not only the series of targets/responses, but also the series of intervals between subsequent targets was repeated across multiple presentations of the same sequence. Each participant completed three sessions. In the first session, only the repeating sequence was presented. During the second and third session, occasional probe blocks were presented, where a new (unlearned) spatial-temporal sequence was introduced. We first confirm that participants not only got faster over time, but that they were slower and less accurate during probe blocks, indicating that they incidentally learned the sequence structure. Having established a robust behavioural benefit induced by the repeating spatial-temporal sequence, we next addressed our central hypothesis that implicit temporal orienting (evoked by the learned temporal structure) would have the largest influence on performance for targets following short (as opposed to longer) intervals between temporally structured sequence elements, paralleling classical observations in tasks using explicit temporal cues. We found that indeed, reaction time differences between new and repeated sequences were largest for the short interval, compared to the medium and long intervals, and that this was the case, even when comparing late blocks (where the repeated sequence had been incidentally learned), to early blocks (where this sequence was still unfamiliar). 
We conclude that incidentally acquired temporal expectations that follow a sequential structure can have a robust facilitatory influence on visually-guided behavioural responses and that, like more explicit forms of temporal orienting, this effect is most pronounced for sequence elements that are expected at short inter-element intervals. |
Simone G. Heideman; Freek Ede; Anna C. Nobre Temporal alignment of anticipatory motor cortical beta lateralisation in hidden visual-motor sequences Journal Article In: European Journal of Neuroscience, vol. 48, no. 8, pp. 2684–2695, 2018. @article{Heideman2018, Performance improves when participants respond to events that are structured in repeating sequences, suggesting that learning can lead to proactive anticipatory preparation. Whereas most sequence-learning studies have emphasised spatial structure, most sequences also contain a prominent temporal structure. We used MEG to investigate spatial and temporal anticipatory neural dynamics in a modified serial reaction time (SRT) task. Performance and brain activity were compared between blocks with learned spatial-temporal sequences and blocks with new sequences. After confirming a strong behavioural benefit of spatial-temporal predictability, we show lateralisation of beta oscillations in anticipation of the response associated with the upcoming target location and show that this also aligns to the expected timing of these forthcoming events. This effect was found both when comparing between repeated (learned) and new (unlearned) sequences, as well as when comparing targets that were expected after short vs. long intervals within the repeated (learned) sequence. Our findings suggest that learning of spatial-temporal structure leads to proactive and dynamic modulation of motor cortical excitability in anticipation of both the location and timing of events that are relevant to guide action. |
Lukáš Hejtmánek; Ivana Oravcová; Jiří Motýl; Jiří Horáček; Iveta Fajnerová Spatial knowledge impairment after GPS guided navigation: Eye-tracking study in a virtual town Journal Article In: International Journal of Human-Computer Studies, vol. 116, pp. 15–24, 2018. @article{Hejtmanek2018, There is a vibrant debate about the consequences of mobile devices for our cognitive capabilities. Use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual and real world experiments. Our goal was to investigate how the attention people pay to the GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment and, during the experiment, we measured participants' eye movements. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spend with the GPS-like map, the less accurate spatial knowledge they manifest and the longer paths they travel without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants' subjective evaluation of their own navigation skills, with less confident navigators using GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person's spatial learning, their general use is modulated by a perception of one's own navigation abilities. |
John M. Henderson; Taylor R. Hayes Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps Journal Article In: Journal of Vision, vol. 18, no. 6, pp. 1–18, 2018. @article{Henderson2018a, We compared the influence of meaning and of salience on attentional guidance in scene images. Meaning was captured by "meaning maps" representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated the degree to which meaning versus image salience predicted human viewers' spatiotemporal distribution of attention over scenes. Extending previous work, here the distribution of attention was operationalized as duration-weighted fixation density. The results showed that both meaning and image salience predicted the duration-weighted distribution of attention, but that when the correlation between meaning and salience was statistically controlled, meaning accounted for unique variance in attention whereas salience did not. This pattern was observed in early as well as late fixations, fixations including and excluding the centers of the scenes, and fixations following short as well as long saccades. The results strongly suggest that meaning guides attention in real-world scenes. We discuss the results from the perspective of a cognitive-relevance theory of attentional guidance. |
John M. Henderson; Taylor R. Hayes; Gwendolyn Rehrig; Fernanda Ferreira Meaning guides attention during real-world scene description Journal Article In: Scientific Reports, vol. 8, pp. 13504, 2018. @article{Henderson2018b, Intelligent analysis of a visual scene requires that important regions be prioritized and attentionally selected for preferential processing. What is the basis for this selection? Here we compared the influence of meaning and image salience on attentional guidance in real-world scenes during two free-viewing scene description tasks. Meaning was represented by meaning maps capturing the spatial distribution of semantic features. Image salience was represented by saliency maps capturing the spatial distribution of image features. Both types of maps were coded in a format that could be directly compared to maps of the spatial distribution of attention derived from viewers' eye fixations in the scene description tasks. The results showed that both meaning and salience predicted the spatial distribution of attention in these tasks, but that when the correlation between meaning and salience was statistically controlled, only meaning accounted for unique variance in attention. The results support theories in which cognitive relevance plays the dominant functional role in controlling human attentional guidance in scenes. The results also have practical implications for current artificial intelligence approaches to labeling real-world images. |
Frouke Hermens; Marius Golubickis; C. Neil Macrae Eye movements while judging faces for trustworthiness and dominance Journal Article In: PeerJ, vol. 6, pp. 1–30, 2018. @article{Hermens2018, Past studies examining how people judge faces for trustworthiness and dominance have suggested that they use particular facial features (e.g. mouth features for trustworthiness, eyebrow and cheek features for dominance ratings) to complete the task. Here, we examine whether eye movements during the task reflect the importance of these features. We compared eye movements for trustworthiness and dominance ratings of face images under three stimulus configurations: small images (mimicking large viewing distances), large images (mimicking face-to-face viewing), and a moving window condition (removing extrafoveal information). Whereas first area fixated, dwell times, and number of fixations depended on the size of the stimuli and the availability of extrafoveal vision, and varied substantially across participants, no clear task differences were found. These results indicate that gaze patterns for face stimuli are highly individual, do not vary between trustworthiness and dominance ratings, but are influenced by the size of the stimuli and the availability of extrafoveal vision. |
Nora A. Herweg; Tobias Sommer; Nico Bunzeck Retrieval demands adaptively change striatal old/new signals and boost subsequent long-term memory Journal Article In: Journal of Neuroscience, vol. 38, no. 3, pp. 745–754, 2018. @article{Herweg2018, The striatum is a central part of the dopaminergic mesolimbic system and contributes both to the encoding and retrieval of long-term memories. In this regard, the co-occurrence of striatal novelty and retrieval success effects in independent studies underlines the structure's double duty and suggests dynamic contextual adaptation. To test this hypothesis and further investigate the underlying mechanisms of encoding and retrieval dynamics, human subjects viewed pre-familiarized scene images intermixed with new scenes and classified them as indoor versus outdoor (encoding task) or old versus new (retrieval task), while fMRI and eye tracking data were recorded. Subsequently, subjects performed a final recognition task. As hypothesized, striatal activity and pupil size reflected task-conditional salience of old and new stimuli, but, unexpectedly, this effect was not reflected in the substantia nigra and ventral tegmental area (SN/VTA), medial temporal lobe, or subsequent memory performance. Instead, subsequent memory generally benefitted from retrieval, an effect possibly driven by task difficulty and activity in a network including different parts of the striatum and SN/VTA. Our findings extend memory models of encoding and retrieval dynamics by pinpointing a specific contextual factor that differentially modulates the functional properties of the mesolimbic system. |
Hannah Hiebel; Anja Ischebeck; Clemens Brunner; Andrey R. Nikolaev; Margit Höfler; Christof Körner Target probability modulates fixation-related potentials in visual search Journal Article In: Biological Psychology, vol. 138, pp. 199–210, 2018. @article{Hiebel2018, This study investigated the influence of target probability on the neural response to target detection in free viewing visual search. Participants were asked to indicate the number of targets (one or two) among distractors in a visual search task while EEG and eye movements were co-registered. Target probability was manipulated by varying the set size of the displays between 10, 22, and 30 items. Fixation-related potentials time-locked to first target fixations revealed a pronounced P300 at the centro-parietal cortex with larger amplitudes for set sizes 22 and 30 than for set size 10. With increasing set size, more distractor fixations preceded the detection of the target, resulting in a decreased target probability and, consequently, a larger P300. For distractors, no increase of P300 amplitude with set size was observed. The findings suggest that set size specifically affects target but not distractor processing in overt serial visual search. |
Matthew D. Hilchey; Jason Rajsic; Greg Huffman; Raymond M. Klein; Jay Pratt Dissociating orienting biases from integration effects with eye movements Journal Article In: Psychological Science, vol. 29, no. 3, pp. 328–339, 2018. @article{Hilchey2018, Despite decades of research, the conditions under which shifts of attention to prior target locations are facilitated or inhibited remain unknown. This ambiguity is a product of the popular feature discrimination task, in which attentional bias is commonly inferred from the efficiency by which a stimulus feature is discriminated after its location has been repeated or changed. Problematically, these tasks lead to integration effects; effects of target-location repetition appear to depend entirely on whether the target feature or response also repeats, allowing for several possible inferences about orienting bias. To parcel out integration effects and orienting biases, we designed the present experiments to require localized eye movements and manual discrimination responses to serially presented targets with randomly repeating locations. Eye movements revealed consistent biases away from prior target locations. Manual discrimination responses revealed integration effects. These data collectively revealed inhibited reorienting and integration effects, which resolve the ambiguity and reconcile episodic integration and attentional orienting accounts. |
Rinat Hilo-Merkovich; Marisa Carrasco; Shlomit Yuval-Greenberg Task performance in covert, but not overt, attention correlates with early laterality of visual evoked potentials Journal Article In: Neuropsychologia, vol. 119, pp. 330–339, 2018. @article{HiloMerkovich2018, Attention affects visual perception at target locations via the amplification of stimuli signal strength, perceptual performance and perceived contrast. Behavioral and neural correlates of attention can be observed when attention is both covertly and overtly oriented (with or without accompanying eye movements). Previous studies have demonstrated that at the grand-average level, lateralization of Event Related Potentials (ERP) is associated with attentional facilitation at cued, relative to un-cued locations. Yet, the correspondence between ERP lateralization and behavior has not been established at the single-subject level. Specifically, it is an open question whether inter-individual differences in the neural manifestation of attentional orienting can predict differences in perception. Here, we addressed this question by examining the correlation between ERP lateralization and visual sensitivity at attended locations. Participants were presented with a cue indicating where a low-contrast grating patch target would appear, following a delay of varying durations. During this delay, while participants were waiting for the target to appear, a task-irrelevant checkerboard probe was presented briefly and bilaterally. ERP was measured relative to the onset of this probe. In separate blocks, participants were requested to report detection of a low-contrast target either by making a fast eye movement toward the target (overt orienting), or by pressing a button (covert orienting). Results show that in the covert orienting condition, ERP lateralization of individual participants was positively correlated with their mean visual sensitivity for the target. 
However, no such correlation was found in the overt orienting condition. We conclude that ERP lateralization of individual participants can predict their performance on a covert, but not an overt, target detection task. |
Stephen J. Hinde; Tim J. Smith; Iain D. Gilchrist Does narrative drive dynamic attention to a prolonged stimulus? Journal Article In: Cognitive Research: Principles and Implications, vol. 3, pp. 1–12, 2018. @article{Hinde2018, Attention in the “real world” fluctuates over time, but these fluctuations are hard to examine using a timed trial-based experimental paradigm. Here we use film to study attention. To achieve short-term engagement, filmmakers make use of low-level cinematic techniques such as color, movement and sound design to influence attention. To engage audiences over prolonged periods of time, narrative structure is used. In this experiment, participants performed a secondary auditory choice reaction time (RT) task to measure attention while watching a film. In order to explore the role of narrative on attention, we manipulated the order in which film segments were presented. The influence of narrative was then compared to the contribution of low-level features (extracted using a computer-based saliency model) in a multiple regression analysis predicting choice RT. The regression model successfully predicted 28% of the variance in choice RT: 13% was due to low-level saliency, and 8% due to the narrative. This study shows the importance of narrative in determining attention and the value of studying attention with a prolonged stimulus such as film. |
Selam W. Habtegiorgis; Katharina Rifai; Siegfried Wahl Transsaccadic transfer of distortion adaptation in a natural environment Journal Article In: Journal of Vision, vol. 18, no. 1, pp. 1–10, 2018. @article{Habtegiorgis2018, Spatially varying distortions in optical elements—for instance, prisms and progressive power lenses—modulate the visual world disparately in different visual areas. Saccadic eye movements in such a complexly distorted environment thereby continuously alter the retinal location of the distortions. Yet the visual system achieves perceptual constancy by compensating for distortions irrespective of their retinal relocations at different fixations. Here, we assessed whether the visual system retains its plasticity to distortions across saccades to attain stability. Specifically, we tapped into reference frames of geometric skew-adaptation aftereffects to evaluate the transfer of retinotopic and spatiotopic distortion information across saccades. Adaptation to skew distortion of natural-image content was tested at retinotopic and spatiotopic locations after a saccade was executed between adaptation and test phases. The skew-adaptation information was partially transferred to a new fixation after a saccade. Significant adaptation aftereffects were obtained at both retinotopic and spatiotopic locations. Conceivably, spatiotopic information was used to counterbalance the saccadic retinal shifts of the distortions. Therefore, distortion processing in a natural visual world does not start anew at each fixation; rather, retinotopic and spatiotopic skew information acquired at previous fixations is preserved to mediate stable perception during eye movements. |
Harry H. Haladjian; Matteo Lisi; Patrick Cavanagh Motion and position shifts induced by the double-drift stimulus are unaffected by attentional load Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 4, pp. 884–893, 2018. @article{Haladjian2018, The double-drift stimulus produces a strong shift in apparent motion direction that generates large errors of perceived position. In this study, we tested the effect of attentional load on the perceptual estimates of motion direction and position for double-drift stimuli. In each trial, four objects appeared, one in each quadrant of a large screen, and they moved upward or downward on an angled trajectory. The target object whose direction or position was to be judged was either cued with a small arrow prior to object motion (low attentional load condition) or cued after the objects stopped moving and disappeared (high attentional load condition). In Experiment 1, these objects appeared 10° from the central fixation, and participants reported the perceived direction of the target's trajectory after the stimulus disappeared by adjusting the direction of an arrow at the center of the response screen. In Experiment 2, the four double-drift objects could appear between 6° and 14° from the central fixation, and participants reported the location of the target object after its disappearance by moving the position of a small circle on the response screen. The errors in direction and position judgments showed little effect of the attentional manipulation—similar errors were seen in both experiments whether or not the participant knew which double-drift object would be tested. This suggests that orienting endogenous attention (i.e., by only attending to one object in the precued trials) does not interact with the strength of the motion or position shifts for the double-drift stimulus. |
Michelle G. Hall; Claire K. Naughtin; Jason B. Mattingley; Paul E. Dux Distributed and opposing effects of incidental learning in the human brain Journal Article In: NeuroImage, vol. 173, pp. 351–360, 2018. @article{Hall2018, Incidental learning affords a behavioural advantage when sensory information matches regularities that have previously been encountered. Previous studies have taken a focused approach by probing the involvement of specific candidate brain regions underlying incidentally acquired memory representations, as well as expectation effects on early sensory representations. Here, we investigated the broader extent of the brain's sensitivity to violations and fulfilments of expectations, using an incidental learning paradigm in which the contingencies between target locations and target identities were manipulated without participants' overt knowledge. Multivariate analysis of functional magnetic resonance imaging data was applied to compare the consistency of neural activity for visual events that the contingency manipulation rendered likely versus unlikely. We observed widespread sensitivity to expectations across frontal, temporal, occipital, and sub-cortical areas. These activation clusters showed distinct response profiles, such that some regions displayed more reliable activation patterns under fulfilled expectations, whereas others showed more reliable patterns when expectations were violated. These findings reveal that expectations affect multiple stages of information processing during visual decision making, rather than early sensory processing stages alone. |
Nina M. Hanning; David Aagten-Murphy; Heiner Deubel Independent selection of eye and hand targets suggests effector-specific attentional mechanisms Journal Article In: Scientific Reports, vol. 8, pp. 9434, 2018. @article{Hanning2018a, Both eye and hand movements bind visual attention to their target locations during movement preparation. However, it remains contentious whether eye and hand targets are selected jointly by a single selection system, or individually by independent systems. To unravel the controversy, we investigated the deployment of visual attention – a proxy of motor target selection – in coordinated eye-hand movements. Results show that attention builds up in parallel both at the eye and the hand target. Importantly, the allocation of attention to one effector's motor target was not affected by the concurrent preparation of the other effector's movement at any time during movement preparation. This demonstrates that eye and hand targets are represented in separate, effector-specific maps of action-relevant locations. The eye-hand synchronisation that is frequently observed on the behavioral level must emerge from mutual influences of the two effector systems at later, post-attentional processing stages. |
Nina M. Hanning; Heiner Deubel Independent effects of eye and hand movements on visual working memory Journal Article In: Frontiers in Systems Neuroscience, vol. 12, pp. 37, 2018. @article{Hanning2018, Both eye and hand movements have been shown to selectively interfere with visual working memory. We investigated working memory in the context of combined eye-hand movements to approach the question whether the eye and the hand movement systems independently interact with visual working memory. Participants memorized several locations and performed eye, hand, or combined eye-hand movements during the maintenance interval. Subsequently, we tested spatial working memory at the eye or the hand motor goal, and at action-irrelevant locations. We found that for single eye and single hand movements, memory at the eye or hand target was significantly improved compared to action-irrelevant locations. Remarkably, when an eye and a hand movement were prepared in parallel, but to distinct locations, memory at both motor targets was enhanced – with no tradeoff between the two action goals. This suggests that eye and hand movements independently enhance visual working memory at their goal locations, resulting in an overall working memory performance that is higher than that expected when recruiting only one effector. |
Michael P. Harms; Leah H. Somerville; Beau M. Ances; Jesper Andersson; Deanna M. Barch; Matteo Bastiani; Susan Y. Bookheimer; Timothy B. Brown; Randy L. Buckner; Gregory C. Burgess; Timothy S. Coalson; Michael A. Chappell; Mirella Dapretto; Gwenaëlle Douaud; Bruce Fischl; Matthew F. Glasser; Douglas N. Greve; Cynthia Hodge; Keith W. Jamison; Saad Jbabdi; Sridhar Kandala; Xiufeng Li; Ross W. Mair; Silvia Mangia; Daniel Marcus; Daniele Mascali; Steen Moeller; Thomas E. Nichols; Emma C. Robinson; David H. Salat; Stephen M. Smith; Stamatios N. Sotiropoulos; Melissa Terpstra; Kathleen M. Thomas; M. Dylan Tisdall; Kamil Ugurbil; Andre Kouwe; Roger P. Woods; Lilla Zöllei; David C. Van Essen; Essa Yacoub Extending the Human Connectome Project across ages: Imaging protocols for the Lifespan Development and Aging projects Journal Article In: NeuroImage, vol. 183, pp. 972–984, 2018. @article{Harms2018, The Human Connectome Projects in Development (HCP-D) and Aging (HCP-A) are two large-scale brain imaging studies that will extend the recently completed HCP Young-Adult (HCP-YA) project to nearly the full lifespan, collecting structural, resting-state fMRI, task-fMRI, diffusion, and perfusion MRI in participants from 5 to 100+ years of age. HCP-D is enrolling 1300+ healthy children, adolescents, and young adults (ages 5–21), and HCP-A is enrolling 1200+ healthy adults (ages 36–100+), with each study collecting longitudinal data in a subset of individuals at particular age ranges. The imaging protocols of the HCP-D and HCP-A studies are very similar, differing primarily in the selection of different task-fMRI paradigms. We strove to harmonize the imaging protocol to the greatest extent feasible with the completed HCP-YA (1200+ participants, aged 22–35), but some imaging-related changes were motivated or necessitated by hardware changes, the need to reduce the total amount of scanning per participant, and/or the additional challenges of working with young and elderly populations. 
Here, we provide an overview of the common HCP-D/A imaging protocol including data and rationales for protocol decisions and changes relative to HCP-YA. The result will be a large, rich, multi-modal, and freely available set of consistently acquired data for use by the scientific community to investigate and define normative developmental and aging related changes in the healthy human brain. |
William J. Harrison; Paul M. Bays Visual working memory is independent of the cortical spacing between memoranda Journal Article In: Journal of Neuroscience, vol. 38, no. 12, pp. 3116–3123, 2018. @article{Harrison2018, The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. |
Matthias Hartmann; Jochen Laubrock; Martin H. Fischer The visual number world: A dynamic approach to study the mathematical mind Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 28–36, 2018. @article{Hartmann2018, In the domain of language research, the simultaneous presentation of a visual scene and its auditory description (i.e., the visual world paradigm) has been used to reveal the timing of mental mechanisms. Here we apply this rationale to the domain of numerical cognition in order to explore the differences between fast and slow arithmetic performance, and to further study the role of spatial-numerical associations during mental arithmetic. We presented 30 healthy adults simultaneously with visual displays containing four numbers and with auditory addition and subtraction problems. Analysis of eye movements revealed that participants look spontaneously at the numbers they currently process (operands, solution). Faster performance was characterized by shorter latencies prior to fixating the relevant numbers and fewer revisits to the first operand while computing the solution. These signatures of superior task performance were more pronounced for addition and visual numbers arranged in ascending order, and for subtraction and numbers arranged in descending order (compared to the opposite pairings). Our results show that the “visual number world”-paradigm provides on-line access to the mind during mental arithmetic, is able to capture variability in arithmetic performance, and is sensitive to visual layout manipulations that are otherwise not reflected in response time measurements. |
Katja I. Häuser; Vera Demberg; Jutta Kray In: Psychology and Aging, vol. 33, no. 8, pp. 1168–1180, 2018. @article{Haeuser2018, Even though older adults are known to have difficulty at language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language in dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging. |
Taylor R. Hayes; John M. Henderson Scan patterns during scene viewing predict individual differences in clinical traits in a normative sample Journal Article In: PLoS ONE, vol. 13, no. 5, pp. e0196654, 2018. @article{Hayes2018, The relationship between viewer individual differences and gaze control has been largely neglected in the scene perception literature. Recently we have shown a robust association between individual differences in viewer cognitive capacity and scan patterns during scene viewing. These findings suggest other viewer individual differences may also be associated with scene gaze control. Here we expand our findings to quantify the relationship between individual differences in clinical traits and scene viewing behavior in a normative sample. The present study used Successor Representation Scanpath Analysis (SRSA) to quantify the strength of the association between individual differences in scan patterns during real-world scene viewing and individual differences in viewer attention-deficit disorder, autism spectrum disorder, and dyslexia scores. The SRSA results revealed individual differences in vertical scan patterns that explained more than half of the variance in attention-deficit scores, a third of the variance in autism quotient scores, and about a quarter of the variance in dyslexia scores. These results suggest that individual differences in attention-deficit disorder, autism spectrum disorder, and dyslexia scores are most strongly associated with vertical scanning behaviors when viewing real-world scenes. More importantly, our results suggest scene scan patterns have promise as potential diagnostic tools and provide insight into the types of vertical scan patterns that are most diagnostic. |
Dexian He; Xianyou He; Siyan Lai; Shuang Wu; Juan Wan; Tingting Zhao The effect of temporal concept on the automatic activation of spatial representation: From axis to plane Journal Article In: Consciousness and Cognition, vol. 65, pp. 95–108, 2018. @article{He2018b, Temporal concepts could be represented horizontally (X-axis) or vertically (Y-axis). However, whether the spatial representation of time exists in the whole plane remains unclear. In this study, we investigated whether processing temporal concepts would automatically activate spatial representations in a whole plane without any guidance or cue. Participants first indicated whether a word was past-related or future-related; then they identified a target in different visual fields. In Experiment 1, the results demonstrated that past time mapped onto the left and top in a plane or axis, while future time mapped onto the right and bottom, with the horizontal effect being stronger than the vertical effect. In Experiment 2, an index of eye movement showed a similar data pattern. Thinking about temporal concepts activates spatial schema automatically without guidance or cue, and the time-space metaphor is represented not only as an axis but also as a whole plane. The results were discussed in terms of possible cultural differences that may make Chinese participants more flexible in the spatial representation of time because of their comprehensive thinking. |
Rebecca R. Goldstein; Melissa R. Beck Visual search with varying versus consistent attentional templates: Effects on target template establishment, comparison, and guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 7, pp. 1086–1102, 2018. @article{Goldstein2018, Attentional templates can be represented in visual working memory (VWM) when the target varies from trial-to-trial and can be represented in long-term memory (LTM) when the target is consistent during trial runs. Given that attentional templates can be represented in either VWM or LTM, are there any differences in how these representations impact visual search when targets are consistent compared with varying? The current study tested the consistent template hypothesis, which predicts faster performance with a consistent target compared with a varying target. Experiment 1 examined whether consistent targets could lead to consistent templates that would improve template establishment, guidance, and/or comparison of the template to search items. Search response time was faster for consistent targets, and consistent targets produced faster comparison processes, but not more efficient guidance. Experiment 2 examined the consistent template restoration hypothesis, which predicts faster template establishment and comparison processes for a previously encountered consistent target. Experiment 2 replicated the consistent template hypothesis and supported the consistent template restoration hypothesis. These studies demonstrate that although attentional guidance is similar with varying and consistent attentional templates, consistent templates improve search performance by speeding template establishment and comparison processes. |
C. C. Gonzalez; M. R. Burke Motor sequence learning in the brain: The long and short of it Journal Article In: Neuroscience, vol. 389, pp. 85–98, 2018. @article{Gonzalez2018, Motor sequence learning involves predictive processing that results in the anticipation of each component of a sequence of actions. In smooth pursuit, this predictive processing is required to decrease tracking errors between the eye and the stimulus. Current models for motor sequence learning suggest parallel mechanisms in the brain for acquiring sequences of differing complexity. We examined this model by comparing shorter versus longer sequences of pursuit eye movements during fMRI. In this way we were able to identify overlapping and distinct brain areas involved in simple versus more complex oculomotor learning. Participants revealed predictive pursuit eye movements from the second presentation of the stimulus in both short and long sequences. Brain imaging results indicated activation of parallel brain areas for the different sequence lengths, with the Inferior Occipital Gyrus and the Cingulate as areas in common. In addition, distinct activation was found in more working memory related brain regions for the shorter sequences (e.g. the middle frontal cortex and dorsolateral prefrontal cortex), and higher activation in the frontal eye fields, supplementary motor cortex and motor cortex for the longer sequences, independent of the number of repetitions. These findings provide new evidence that there are parallel brain areas that involve working memory circuitry for short sequences, and more motoric areas when the sequence is longer and more cognitively demanding. Additionally, our findings are the first to show that the parallel brain regions involved in sequence learning in pursuit are independent of the number of repetitions, but contingent on sequence complexity. |
Gemma Graham; James D. Sauer; Lucy Akehurst; Jenny Smith; Anne P. Hillstrom CCTV observation: The effects of event type and instructions on fixation behaviour in an applied change blindness task Journal Article In: Applied Cognitive Psychology, vol. 32, no. 1, pp. 4–13, 2018. @article{Graham2018, Little is known about how observers' scanning strategies affect performance when monitoring events in closed-circuit television (CCTV) footage. We examined the fixation behaviour of change detectors and non-detectors monitoring dynamic scenes. One hundred forty-seven participants observed mock CCTV videos featuring either a mock crime or no crime. Participants were instructed to look for a crime, to look for something unusual or simply to watch the video. In both videos, two of the people depicted switched locations. Eye movements (the number of fixations on the targets and the average length of each fixation on targets) were recorded prior to and during the critical change period. Change detection (24% overall) was unaffected by event type or task instruction. Fixation behaviour differed significantly between the criminal and non-criminal event conditions. There was no effect of instructions on fixation behaviour. Change detectors fixated for longer on the target directly before the change than did non-detectors. Although fixation behaviour before change predicted change detection, fixation count and durations during the critical change period did not. These results highlight the potential value of studying fixation behaviour for understanding change blindness during complex, cognitively demanding tasks (e.g. CCTV surveillance). |
Whitney S. Griggs; Hidetoshi Amita; Atul Gopal; Okihide Hikosaka Visual neurons in the superior colliculus discriminate many objects by their historical values Journal Article In: Frontiers in Neuroscience, vol. 12, pp. 396, 2018. @article{Griggs2018, The superior colliculus (SC) is an important structure in the mammalian brain that orients the animal towards distinct visual events. Visually-responsive neurons in SC are modulated by visual object features, including size, motion, and color. However, it remains unclear whether SC activity is modulated by non-visual object features, such as the reward value associated with the object. To address this question, three monkeys were trained (>10 days) to saccade to multiple fractal objects, half of which were consistently associated with large reward while the other half were associated with small reward. This created historically high-valued (‘good') and low-valued (‘bad') objects. During the neuronal recordings from the SC, the monkeys maintained fixation at the center while the objects were flashed in the receptive field of the neuron without any reward. We found that approximately half of the visual neurons responded more strongly to the good than to the bad objects. In some neurons, this value-coding remained intact for a long time (>1 year) after the last object-reward association learning. Notably, the neuronal discrimination of reward values started about 100 ms after the appearance of visual objects and lasted for more than 100 ms. These results provide evidence that SC neurons can discriminate objects by their historical (long-term) values. This object value information may be provided by the basal ganglia, especially the circuit originating from the tail of the caudate nucleus. The information may be used by the neural circuits inside SC for motor (saccade) output or may be sent to the circuits outside SC for future behavior. |
Nicola Grossheinrich; Christine Firk; Martin Schulte-Rüther; Andreas Leupoldt; Kerstin Konrad; Lynn Huestegge Looking while unhappy: A mood-congruent attention bias toward sad adult faces in children Journal Article In: Frontiers in Psychology, vol. 9, pp. 2577, 2018. @article{Grossheinrich2018, A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies on major depression. This bias is assumed to be dysfunctional in that it supports maintaining a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments involving sad mood induction techniques found a negative mood-congruent attention bias specifically for young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study we investigated the attentional bias in typically developing children (aged 6–12 years) when happy and sad moods were induced. Crucially, we manipulated the age (adult vs. child) of the displayed pairs of facial expressions depicting sadness, anger, fear and happiness. The results indicate that sad children indeed exhibited a mood specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. Results are discussed in the context of an information gain which should be stronger when looking at adult faces due to their more expansive life experience. These findings bear implications for both research methods and future interventions. |
2017 |
Ricardo Ramos Gameiro; Kai Kaspar; Sabine U. König; Sontje Nordholt; Peter König Exploration and exploitation in natural viewing behavior Journal Article In: Scientific Reports, vol. 7, pp. 2311, 2017. @article{Gameiro2017, Many eye-tracking studies investigate visual behavior with a focus on image features and the semantic content of a scene. A wealth of results on these aspects is available, and our understanding of the decision process where to look has reached a mature stage. However, the temporal aspect, whether to stay and further scrutinize a region (exploitation) or to move on and explore image regions that were yet not in the focus of attention (exploration) is less well understood. Here, we investigate the trade-off between these two processes across stimuli with varying properties and sizes. In a free viewing task, we examined gaze parameters in humans, involving the central tendency, entropy, saccadic amplitudes, number of fixations and duration of fixations. The results revealed that the central tendency and entropy scaled with stimulus size. The mean saccadic amplitudes showed a linear increase that originated from an interaction between the distribution of saccades and the spatial bias. Further, larger images led to spatially more extensive sampling as indicated by a higher number of fixations at the expense of reduced fixation durations. These results demonstrate a profound shift from exploitation to exploration as an adaptation of main gaze parameters with increasing image size. |
Heather J. Ferguson; James Cane Tracking the impact of depression in a perspective-taking task Journal Article In: Scientific Reports, vol. 7, pp. 14821, 2017. @article{Ferguson2017a, Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a ‘director' to move target objects (e.g. a “teapot with spots on”) around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a “teapot with stars on”). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. “teapot”). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM. |
Grace Edwards; Petra Vetter; Fiona McGruer; Lucy S. Petro; Lars Muckli Predictive feedback to V1 dynamically updates with sensory input Journal Article In: Scientific Reports, vol. 7, pp. 16538, 2017. @article{Edwards2017a, Predictive coding theories propose that the brain creates internal models of the environment to predict upcoming sensory input. Hierarchical predictive coding models of vision postulate that higher visual areas generate predictions of sensory inputs and feed them back to early visual cortex. In V1, sensory inputs that do not match the predictions lead to amplified brain activation, but does this amplification process dynamically update to new retinotopic locations with eye-movements? We investigated the effect of eye-movements in predictive feedback using functional brain imaging and eye-tracking whilst presenting an apparent motion illusion. Apparent motion induces an internal model of motion, during which sensory predictions of the illusory motion feed back to V1. We observed attenuated BOLD responses to predicted stimuli at the new post-saccadic location in V1. Therefore, pre-saccadic predictions update their retinotopic location in time for post-saccadic input, validating dynamic predictive coding theories in V1. |
S. Hagerman; Z. Woolard; K. Anderson; Benjamin W. Tatler; F. R. Moore Women's self-rated attraction to male faces does not correspond with physiological arousal Journal Article In: Scientific Reports, vol. 7, pp. 13564, 2017. @article{Hagerman2017, There has been little work to determine whether attractiveness ratings of faces correspond to sexual or more general attraction. We tested whether a measure of women's physiological arousal (pupil diameter change) was correlated with ratings of men's facial attractiveness. In Study 1, women rated the faces of men for whom we also measured salivary testosterone. They rated each face for attractiveness, and for desirability for friendship and long- and short-term romantic relationships. Pupil diameter change was not related to subjective ratings of attractiveness, but was positively correlated with the men's testosterone. In Study 2 we compared women's pupil diameter change in response to the faces of men with high versus low testosterone, as well as in response to non-facial images pre-rated as either sexually arousing or threatening. Pupil dilation was not affected by testosterone, and increased relatively more in response to sexually arousing than threatening images. We conclude that self-rated preferences may not provide a straightforward and direct assessment of sexual attraction. We argue that future work should identify the constructs that are tapped via attractiveness ratings of faces, and support the development of methodology which assesses objective sexual attraction. |
Nicola Binetti; Charlotte Harrison; Isabelle Mareschal; Alan Johnston Pupil response hazard rates predict perceived gaze durations Journal Article In: Scientific Reports, vol. 7, pp. 3969, 2017. @article{Binetti2017, We investigated the mechanisms for evaluating perceived gaze-shift duration. Timing relies on the accumulation of endogenous physiological signals. Here we focused on arousal, measured through pupil dilation, as a candidate timing signal. Participants timed gaze-shifts performed by face stimuli in a Standard/Probe comparison task. Pupil responses were binned according to “Longer/Shorter” judgements in trials where Standard and Probe were identical. This ensured that pupil responses reflected endogenous arousal fluctuations as opposed to differences in stimulus content. We found that pupil hazard rates predicted the classification of sub-second intervals (steeper dilation = “Longer” classifications). This shows that the accumulation of endogenous arousal signals informs gaze-shift timing judgements. We also found that participants relied exclusively on the 2nd stimulus to perform the classification, providing insights into timing strategies under conditions of maximum uncertainty. We observed no dissociation in pupil responses when timing equivalent neutral spatial displacements, indicating that a stimulus-dependent timer exploits arousal to time gaze-shifts. |
Fabrice Damon; David Méary; Paul C. Quinn; Kang Lee; Elizabeth A. Simpson; Annika Paukner; Stephen J. Suomi; Olivier Pascalis Preference for facial averageness: Evidence for a common mechanism in human and macaque infants Journal Article In: Scientific Reports, vol. 7, pp. 46303, 2017. @article{Damon2017, Human adults and infants show a preference for average faces, which could stem from a general processing mechanism and may be shared among primates. However, little is known about preference for facial averageness in monkeys. We used a comparative developmental approach and eye-tracking methodology to assess visual attention in human and macaque infants to faces naturally varying in their distance from a prototypical face. In Experiment 1, we examined the preference for faces relatively close to or far from the prototype in 12-month-old human infants with human adult female faces. Infants preferred faces closer to the average than faces farther from it. In Experiment 2, we measured the looking time of 3-month-old rhesus macaques (Macaca mulatta) viewing macaque faces varying in their distance from the prototype. Like human infants, macaque infants looked longer at faces closer to the average. In Experiments 3 and 4, both species were presented with unfamiliar categories of faces (i.e., macaque infants tested with adult macaque faces; human infants and adults tested with infant macaque faces) and showed no prototype preferences, suggesting that the prototypicality effect is experience-dependent. Overall, the findings suggest a common processing mechanism across species, leading to averageness preferences in primates. |
Cristina de la Malla; Jeroen B. J. Smeets; Eli Brenner Potential systematic interception errors are avoided when tracking the target with one's eyes Journal Article In: Scientific Reports, vol. 7, pp. 10793, 2017. @article{deLaMalla2017, Directing our gaze towards a moving target has two known advantages for judging its trajectory: the spatial resolution with which the target is seen is maximized, and signals related to the eyes' movements are combined with retinal cues to better judge the target's motion. We here explore whether tracking a target with one's eyes also prevents factors that are known to give rise to systematic errors in judging retinal speeds from resulting in systematic errors in interception. Subjects intercepted white or patterned disks that moved from left to right across a large screen at various constant velocities while either visually tracking the target or fixating the position at which they were required to intercept the target. We biased retinal motion perception by moving the pattern within the patterned targets. This manipulation led to large systematic errors in interception when subjects were fixating, but not when they were tracking the target. The reduction in the errors did not depend on how smoothly the eyes were tracking the target shortly before intercepting it. We propose that tracking targets with one's eyes when one wants to intercept them makes one less susceptible to biases in judging their motion. |
Joseph Arizpe; Vincent Walsh; Galit Yovel; Chris I. Baker The categories, frequencies, and stability of idiosyncratic eye-movement patterns to faces Journal Article In: Vision Research, vol. 141, pp. 191–203, 2017. @article{Arizpe2017a, The spatial pattern of eye-movements to faces considered typical for neurologically healthy individuals is a roughly T-shaped distribution over the internal facial features with peak fixation density tending toward the left eye (observer's perspective). However, recent studies indicate that striking deviations from this classic pattern are common within the population and are highly stable over time. The classic pattern actually reflects the average of these various idiosyncratic eye-movement patterns across individuals. The natural categories and respective frequencies of different types of idiosyncratic eye-movement patterns have not been specifically investigated before, so here we analyzed the spatial patterns of eye-movements for 48 participants to estimate the frequency of different kinds of individual eye-movement patterns to faces in the normal healthy population. Four natural clusters were discovered such that approximately 25% of our participants' fixation density peaks clustered over the left eye region (observer's perspective), 23% over the right eye-region, 31% over the nasion/bridge region of the nose, and 20% over the region spanning the nose, philtrum, and upper lips. We did not find any relationship between particular idiosyncratic eye-movement patterns and recognition performance. Individuals' eye-movement patterns early in a trial were more stereotyped than later ones and idiosyncratic fixation patterns evolved with time into a trial. Finally, while face inversion strongly modulated eye-movement patterns, individual patterns did not become less distinct for inverted compared to upright faces. Group-averaged fixation patterns do not represent individual patterns well, so exploration of such individual patterns is of value for future studies of visual cognition. |
Devavrat Vartak; Danique Jeurissen; Matthew W. Self; Pieter R. Roelfsema The influence of attention and reward on the learning of stimulus-response associations Journal Article In: Scientific Reports, vol. 7, pp. 9036, 2017. @article{Vartak2017, We can learn new tasks by listening to a teacher, but we can also learn by trial-and-error. Here, we investigate the factors that determine how participants learn new stimulus-response mappings by trial-and-error. Does learning in human observers comply with reinforcement learning theories, which describe how subjects learn from rewards and punishments? If yes, what is the influence of selective attention in the learning process? We developed a novel redundant-relevant learning paradigm to examine the conjoint influence of attention and reward feedback. We found that subjects only learned stimulus-response mappings for attended shapes, even when unattended shapes were equally informative. Reward magnitude also influenced learning, an effect that was stronger for attended than for non-attended shapes and that carried over to a subsequent visual search task. Our results provide insights into how attention and reward jointly determine how we learn. They support the powerful learning rules that capitalize on the conjoint influence of these two factors on neuronal plasticity. |
Tom Nissens; Michel Failing; Jan Theeuwes People look at the object they fear: Oculomotor capture by stimuli that signal threat Journal Article In: Cognition and Emotion, vol. 31, no. 8, pp. 1707–1714, 2017. @article{Nissens2017, It is known that people covertly attend to threatening stimuli even when it is not beneficial for the task. In the current study we examined whether overt selection is affected by the presence of an object that signals threat. We demonstrate that stimuli that signal the possibility of receiving an electric shock capture the eyes more often than stimuli signalling no shock. Capture occurred even though the threat-signalling stimulus was neither physically salient nor task relevant at any point during the experiment. Crucially, even though fixating the threat-related stimulus made it more likely to receive a shock, results indicate that participants could not help but do it. Our findings indicate that the presence of a stimulus merely signalling the possibility of receiving a shock is prioritised in selection, and exogenously captures the eyes even when this ultimately results in the execution of the threat (i.e. receiving a shock). Oculomotor capture was particularly pronounced for the fastest saccades, which is consistent with the idea that threat influences visual selection at an early stage of processing, when selection is mainly involuntary. |
Mahiko Konishi; Kevin Brown; Luca Battaglini; Jonathan Smallwood When attention wanders: Pupillometric signatures of fluctuations in external attention Journal Article In: Cognition, vol. 168, pp. 16–26, 2017. @article{Konishi2017, Attention is not always directed to events in the external environment. On occasion our thoughts wander to people and places distant from the here and now. Sometimes, this lack of external attention can compromise ongoing task performance. In the current study we set out to understand the extent to which states of internal and external attention can be determined using pupillometry as an index of ongoing cognition. In two experiments we found that periods of slow responding were associated with elevations in the baseline pupil signal over three and a half seconds prior to a behavioural response. In the second experiment we found that unlike behavioural lapses, states of off-task thought, particularly those associated with a focus on the past and with an intrusive quality, were associated with reductions in the size of the pupil over the same window prior to the probe. These data show that both states of large and small baseline pupil size are linked to states when attention is not effectively focused on the external environment, although these states have different qualities. More generally, these findings illustrate that subjective and objective markers of task performance may not be equivalent and underscore the importance of developing objective indicators that can allow these different states to be understood. |
Yoko Higuchi; Jun Saiki Implicit Learning of Spatial Configuration Occurs without Eye Movement Journal Article In: Japanese Psychological Research, vol. 59, no. 2, pp. 122–132, 2017. @article{Higuchi2017, Studies have demonstrated that eye movements enhance visual memory. However, the role of eye movement in implicit learning is not clear. We investigated whether implicit learning of spatial configuration requires eye movement using the contextual cueing paradigm. Eye movements were restricted by instructing participants to maintain fixation on the center of a display during a visual search. The results demonstrated that contextual cueing occurs even without eye movements. Furthermore, contextual cueing effects were obtained more rapidly when eye movements were restricted compared to when eye movements were allowed. The findings suggest that eye movements mediate the learning progress in contextual cueing: stabilization of spatial layout representation by restriction of eye movements induces a rapid learning of configuration. |
Maria Staudte; Gerry T. M. Altmann Recalling what was where when seeing nothing there Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 400–407, 2017. @article{Staudte2017, So-called "looks-at-nothing" have previously been used to show that recalling what also elicits the recall of where this was. Here, we present evidence from an eye-tracking study which shows that disrupting looks to "there" does not disrupt recalling what was there, nor do (anticipatory) looks to "there" facilitate recalling what was there. Therefore, our results suggest that recalling where does not recall what. |
Deanna J. Taylor; Nicholas D. Smith; David P. Crabb Searching for Objects in Everyday Scenes: Measuring Performance in People With Dry Age-Related Macular Degeneration Journal Article In: Investigative Ophthalmology & Visual Science, vol. 58, no. 3, pp. 1887, 2017. @article{Taylor2017a, Purpose: Treatment success in clinical trials for AMD would ideally be aligned to measurable performance in visual tasks rather than imperceptible changes on clinical charts. We test the hypothesis that patients with dry AMD perform worse than visually healthy peers on computer-based surrogates of "real-world" visual search tasks. Methods: A prospective case-control study was conducted in which patients with dry AMD performed a computer-based "real-world" visual search task. Participants searched for targets within images of everyday scenes while eye movements were recorded. Average search times across the images were recorded as a primary outcome measure. Comparisons were made against a 90% normative limit established in peers with healthy vision (controls). Eye movement parameters were examined as a secondary outcome measure. Results: Thirty-one patients and 33 controls with median (interquartile range) age of 75 (70-79) and 71 (66-75) years and logMAR binocular visual acuity 0.2 (0.18-0.31) and -0.06 (-0.12 to 0), respectively, were examined. Four, 18, and 9 patients were categorized as having early, intermediate, and late AMD, respectively. Nineteen (61%) patients exceeded the 90% normative limits for average search time; this was statistically significant (Fisher's exact test, P < 0.0001). On average, patients made smaller saccades than controls (P < 0.001). Conclusions: People with dry AMD, certainly those with advanced disease, are likely to have measurable difficulties beyond those observed in visually healthy peers on "real-world" search tasks. Further work might establish this type of task as a useful outcome measure for clinical trials. |
Nicolas Krucien; Mandy Ryan; Frouke Hermens Visual attention in multi-attributes choices: What can eye-tracking tell us? Journal Article In: Journal of Economic Behavior and Organization, vol. 135, pp. 251–267, 2017. @article{Krucien2017, Choice experiments (CE), involving multi-attribute choices, are increasingly used in economics to value non-marketed goods. Such choices require individuals to process large amounts of information, shown to trigger partial information strategies in participants. We develop a new framework in which information processing is treated as a latent (unobservable) process. Testing our approach by combining CE and visual attention (VA) data gathered from eye-tracking, we show that treating information processing as a latent process (LIP) outperforms models assuming full information processing (FIP) or binary information processing (BIP). Our modelling of VA results in a number of key findings. We show that the relationship between VA and individuals' preferences depends on the type of product attribute. More specifically, preferences for “easier to process” attributes appear to be less influenced by changes in underlying level of VA than “harder to process” attributes. In turn this impacts on willingness-to-pay estimates, with the LIP model resulting in smaller values than those obtained with the FIP model. Our results have implications for CE designers. More time should be spent getting subjects to understand more complicated attributes of the CE. Our results are likely to extend beyond experimental choices (stated preferences) to actual choices (revealed preferences). |
Jasper H. Fabius; Sebastiaan Mathôt; Martijn J. Schut; Tanja C. W. Nijboer; Stefan Van der Stigchel Focus of spatial attention during spatial working memory maintenance: Evidence from pupillary light response Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 10–20, 2017. @article{Fabius2017, In this experiment, we demonstrate modulation of the pupillary light response by spatial working memory (SWM). The pupillary light response has previously been shown to reflect the focus of covert attention, as demonstrated by smaller pupil sizes when a subject covertly attends a location on a bright background compared to a dark background. We took advantage of this modulation of the pupillary light response to measure the focus of attention during a SWM delay. Subjects performed two tasks in which a stimulus was presented in the periphery on either the bright or the dark half of a black and white display. Importantly, subjects had to remember the exact location of the stimulus in only one of the two tasks. We observed a modulation of pupil size by background luminance in the delay period, but only when subjects had to remember the exact location. We interpret this as evidence for a tight coupling between spatial attention and maintaining information in SWM. Interestingly, we observed particularly strong modulation of pupil size by background luminance at the beginning and end of the delay, but not in between. This is suggestive of strategic guidance of spatial attention by the content of spatial working memory when it is task relevant. |
Stefanie I. Becker; Neelam Dutt; Joyce M. G. Vromen; Gernot Horstmann The capture of attention and gaze in the search for emotional photographic faces Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 241–261, 2017. @article{Becker2017a, Can emotional expressions automatically attract attention in virtue of their affective content? Previous studies mostly used emotional faces (e.g., angry or happy faces) in visual search tasks to assess whether affective contents can automatically attract attention. However, the evidence in support of affective attentional capture is still contentious, as the studies either: (1) did not render affective contents irrelevant to the task, (2) used affective stimuli that were perceptually similar to the target, (3) did not rule out factors occurring later in the visual search process (e.g., disengagement of attention), or (4) used only schematic emotional faces that do not clearly convey affective contents. The present study remedied these shortcomings by measuring the eye movements of observers while they searched for emotional photographic faces. To examine whether irrelevant emotional faces are selected because of their perceptual similarity to the target (top-down), or because of their emotional expressions, we also assessed the perceptual similarity between the emotional distractors and the target. The results show that happy and angry faces can indeed automatically attract attention and the gaze. Perceptual similarity modulated the effect only weakly, indicating that capture was mainly due to bottom-up, stimulus-driven processes. However, post-selectional processes of disengaging attention from the emotional expressions contributed strongly to the overall disruptive effects of emotional expressions. Taken together, these results support a stimulus-driven account of attentional capture by emotional faces, and highlight the need to use measures that can distinguish between early and late processes in visual search. |
Maria J. Barraza-Bernal; Iliya V. Ivanov; Svenja Nill; Katharina Rifai; Susanne Trauzettel-Klosinski; Siegfried Wahl Can positions in the visual field with high attentional capabilities be good candidates for a new preferred retinal locus? Journal Article In: Vision Research, vol. 140, pp. 1–12, 2017. @article{BarrazaBernal2017, The sustained component of visual attention lowers the perceptual threshold of stimuli located at the attended region. Attentional performance is not equal for all eccentric positions, leading to variations in perception. The location of the preferred retinal locus (PRL) for fixation might be influenced by these attentional variations. This study investigated the relation between the placement of sustained attention and the location of a developed PRL using simulations of central scotoma. Thirteen normally sighted subjects participated in the study. Monocular sustained attention was measured in discrete eccentric locations of the visual field using the dominant eye. Subsequently, a six degrees macular scotoma was simulated and PRL training was performed during eight ten-minute blocks of trials. After training, every subject developed a PRL. Subjects with high attentional capabilities in the lower hemifield generally developed PRLs in the lower hemifield (n = 10), subjects with high attentional capabilities in the upper hemifield developed PRLs in the upper hemifield (n = 2) and one subject with similar attentional capabilities in the upper and lower hemifield developed the PRL in the upper hemifield. Analyzed individually, the results showed that 70% of the subjects had a PRL location in the hemifield where high attentional performance was achieved. These results suggest that attentional capabilities can be used as a predictor for the development of the PRL and are of significance for low vision rehabilitation and for the development of new PRL training procedures, with the option for a preventive attentional training in early macular disease to develop a favorable PRL. |
Maria J. Barraza-Bernal; Katharina Rifai; Siegfried Wahl Transfer of an induced preferred retinal locus of fixation to everyday life visual tasks Journal Article In: Journal of Vision, vol. 17, no. 14, pp. 2, 2017. @article{BarrazaBernal2017a, Subjects develop a preferred retinal locus of fixation (PRL) under simulation of central scotoma. If systematic relocations are applied to the stimulus position, PRLs manifest at a location in favor of the stimulus relocation. The present study investigates whether the induced PRL is transferred to important visual tasks in daily life, namely pursuit eye movements, signage reading, and text reading. Fifteen subjects with normal sight participated in the study. To develop a PRL, all subjects underwent a scotoma simulation in a prior study, where five subjects were trained to develop the PRL in the left hemifield, five different subjects in the right hemifield, and the remaining five subjects could naturally choose the PRL location. The position of this PRL was used as baseline. Under central scotoma simulation, subjects performed a pursuit task, a signage reading task, and a text reading task. In addition, retention of the behavior was also studied. Results showed that the PRL position was transferred to the pursuit task and that the vertical location of the PRL was maintained in the text reading task. However, when reading signage, a function-driven change in PRL location was observed. In addition, retention of the PRL position was observed over weeks and months. These results indicate that PRL positions can be induced and may further be transferred to everyday life visual tasks, without hindering function-driven changes in PRL position. |
Mareike Bayer; Valentina Rossi; Naomi Vanlessen; Annika Grass; Annekathrin Schacht; Gilles Pourtois Independent effects of motivation and spatial attention in the human visual cortex Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 1, pp. 146–156, 2017. @article{Bayer2017a, Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex. |
Mareike Bayer; Katja Ruthmann; Annekathrin Schacht The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 9, pp. 1470–1479, 2017. @article{Bayer2017, Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants' significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. |
Vanessa Beanland; Ashleigh J. Filtness; Rhiannon Jeans Change detection in urban and rural driving scenes: Effects of target type and safety relevance on change blindness Journal Article In: Accident Analysis and Prevention, vol. 100, pp. 111–122, 2017. @article{Beanland2017, The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully-licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to “looked-but-failed-to-see” errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes safety relevance had marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians). |
Geoffrey Beattie; Melissa Marselle; Laura McGuire; Damien Litchfield Staying over-optimistic about the future: Uncovering attentional biases to climate change messages Journal Article In: Semiotica, no. 218, pp. 21–64, 2017. @article{Beattie2017a, There is considerable concern that the public are not getting the message about climate change. One possible explanation is "optimism bias," where individuals overestimate the likelihood of positive events happening to them and underestimate the likelihood of negative events. Evidence from behavioral neuroscience suggests that this bias is underpinned by selective information processing, specifically through a reduced level of neural coding of undesirable information, and an unconscious tendency for optimists to avoid fixating negative information. Here we test how this bias in attention could relate to the processing of climate change messages. Using eye tracking, we found that level of dispositional optimism affected visual fixations on climate change messages. Optimists spent less time (overall dwell time) attending to any arguments about climate change (either "for" or "against"), with substantially shorter individual fixations on aspects of arguments for climate change, i.e., those that reflect the scientific consensus but are bad news. We also found that when asked to summarize what they had read, non-optimists were more likely to frame their recall in terms of the arguments "for" climate change; optimists were significantly more likely to frame it in terms of a debate between two opposing positions. Those highest in dispositional optimism seemed to have the strongest and most pronounced level of optimism bias when it came to estimating the probability of being personally affected by climate change. We discuss the importance of overcoming this cognitive bias to develop more effective strategies for communicating about climate change. |
Stefanie I. Becker; Anthony M. Harris; Ashley York; Jessica Choi Conjunction search is relational: Behavioral and electrophysiological evidence Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 10, pp. 1828–1842, 2017. @article{Becker2017, Attention selects behaviorally relevant stimuli for further capacity-limited processing and gates their access to awareness. Given the importance of attention for conscious perception, it is important to determine the factors and mechanisms that drive attention. A widespread view is that attention is biased to the specific feature values of a conjunction target (e.g., vertical, red, medium). By contrast, the results of the present study show that attention is tuned to the 2 relative features that distinguish a conjunction target from the irrelevant nontargets (e.g., larger and bluer). Moreover, an irrelevant conjunction cue that is briefly presented prior to the target can automatically attract attention, even in the absence of any feature contrasts. Importantly, automatic orienting to the conjunction cue was completely independent of the physical similarity between cue and target, and depended only on whether the conjunction cue matched the relative features of the target. These results demonstrate that attentional orienting is determined by a mechanism that can rapidly extract information about feature relationships and guide attention to the stimulus that best matches the relative attributes of the target. These results are difficult to reconcile with extant feature-specific accounts or object-based accounts of attention and argue for a relational account of conjunction search. |
Stefanie I. Becker; Amanda J. Lewis; Jenna E. Axtens Top-down knowledge modulates onset capture in a feedforward manner Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 436–446, 2017. @article{Becker2017b, How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150-ms post stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150-ms latencies), without incurring any costs for selection of target matching distractors. These results unambiguously support a feedforward account of top-down modulation. |
Francesca Beilharz; Andrea Phillipou; David J. Castle; Susan L. Rossell Attention to beds in natural scenes by observers with insomnia symptoms Journal Article In: Behaviour Research and Therapy, vol. 92, pp. 51–56, 2017. @article{Beilharz2017, Attention biases to sleep-related stimuli are held to play a key role in the development and maintenance of insomnia, but such biases have only been shown with controlled visual displays. This study investigated whether observers with insomnia symptoms allocate attention to sleep-related items in natural scenes, by recording eye movements during free-viewing of bedrooms. Participants with insomnia symptoms and normal sleepers were matched in their visual exploration of these scenes, and there was no evidence that the attention of those with insomnia symptoms was captured more quickly by sleep-related stimuli than that of normal sleepers. However, the insomnia group fixated bed regions on more trials and, once fixated on a bed, also remained there for longer. These findings indicate that sleep stimuli are particularly effective in retaining visual attention in complex natural scenes. |
Mathias Benedek; Robert Stoiser; Sonja Walcher; Christof Körner Eye behavior associated with internally versus externally directed cognition Journal Article In: Frontiers in Psychology, vol. 8, pp. 1092, 2017. @article{Benedek2017, What do our eyes do when we are focused on internal representations such as during imagination or planning? Evidence from mind wandering research suggests that spontaneous shifts from externally directed cognition (EDC) to internally directed cognition (IDC) involve oculomotor changes indicative of visual disengagement. In the present study, we investigated potential differences in eye behavior between goal-directed forms of IDC and EDC. To this end, we manipulated the focus of attention (internal versus external) in two demanding cognitive tasks (anagram and sentence generation). IDC was associated with fewer and longer fixations and higher variability in pupil diameter and eye vergence compared to EDC, suggesting reduced visual scanning and higher spontaneous eye activity. IDC was further related to longer blinks, lower microsaccade frequency, and a lower angle of eye vergence. These latter changes appear conducive to attenuate visual input and thereby shield ongoing internal processes from external distraction. Together, these findings suggest that IDC is accompanied by characteristic eye behavior that reflects a decoupling of attention from external events and serves to gate out visual input. |
Rachel J. Bennetts; Joseph A. Mole; Sarah Bate Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills Journal Article In: Cognitive Neuropsychology, vol. 34, no. 6, pp. 357–376, 2017. @article{Bennetts2017, Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a “super-recognizer” (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence. |
Claire K. Naughtin; Kristina Horne; Dana Schneider; Dustin Venini; Ashley York; Paul E. Dux Do implicit and explicit belief processing share neural substrates? Journal Article In: Human Brain Mapping, vol. 38, no. 9, pp. 4760–4772, 2017. @article{Naughtin2017, Humans rely on their ability to infer another person's mental state to understand and predict others' behavior (“theory of mind,” ToM). Multiple lines of research suggest that not only are humans able to consciously process another person's belief state, but also are able to do so implicitly. Here we explored how general implicit belief states are represented in the brain, compared to those substrates involved in explicit ToM processes. Previous work on this topic has yielded conflicting results, and thus, the extent to which the implicit and explicit ToM systems draw on common neural bases is unclear. Participants were presented with “Sally-Anne” type movies in which a protagonist was falsely led to believe a ball was in one location, only for a puppet to later move it to another location in their absence (false-belief condition). In other movies, the protagonist had their back turned the entire time the puppet moved the ball between the two locations, meaning that they had no opportunity to develop any pre-existing beliefs about the scenario (no-belief condition). Using a group of independently localized explicit ToM brain regions, we found greater activity for false-belief trials, relative to no-belief trials, in the right temporoparietal junction, right superior temporal sulcus, precuneus, and left middle prefrontal gyrus. These findings extend upon previous work on the neural bases of implicit ToM by showing substantial overlap between this system and the explicit ToM system, suggesting that both abilities might recruit a common set of mentalizing processes/functional brain regions. |
Maital Neta; Tien T. Tong; Monica L. Rosen; Alex Enersen; M. Justin Kim; Michael D. Dodd All in the first glance: First fixation predicts individual differences in valence bias Journal Article In: Cognition and Emotion, vol. 31, no. 4, pp. 772–780, 2017. @article{Neta2017, Surprised expressions are interpreted as negative by some people, and as positive by others. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological structures (e.g. widened eyes), but these similarities are primarily in the upper part of the face (eyes). We hypothesised, then, that individuals would be more likely to interpret surprise positively when fixating faster to the lower part of the face (mouth). Participants rated surprised and fearful faces as either positive or negative while eye movements were recorded. Positive ratings of surprise were associated with longer fixation on the mouth than negative ratings. There were also individual differences in fixation patterns, with individuals who fixated the mouth earlier exhibiting increased positive ratings. These findings suggest that there are meaningful individual differences in how people process faces. |
Daniel P. Newman; Gerard M. Loughnane; Simon P. Kelly; Redmond G. O'Connell; Mark A. Bellgrove Visuospatial asymmetries arise from differences in the onset time of perceptual evidence accumulation Journal Article In: Journal of Neuroscience, vol. 37, no. 12, pp. 3378–3385, 2017. @article{Newman2017, Healthy subjects tend to exhibit a bias of visual attention whereby left hemifield stimuli are processed more quickly and accurately than stimuli appearing in the right hemifield. It has long been held that this phenomenon arises from the dominant role of the right cerebral hemisphere in regulating attention. However, methods that would enable more precise understanding of the mechanisms underpinning visuospatial bias have remained elusive. We sought to finely trace the temporal evolution of spatial biases by leveraging a novel bilateral dot motion detection paradigm. In combination with electroencephalography, this paradigm enables researchers to isolate discrete neural signals reflecting the key neural processes needed for making these detection decisions. These include signals for spatial attention, early target selection, evidence accumulation, and motor preparation. Using this method, we established that three key neural markers accounted for unique between-subject variation in visuospatial bias: hemispheric asymmetry in posterior α power measured before target onset, which is related to the distribution of preparatory attention across the visual field; asymmetry in the peak latency of the early N2c target-selection signal; and, finally, asymmetry in the onset time of the subsequent neural evidence-accumulation process with earlier onsets for left hemifield targets. 
Our development of a single paradigm to dissociate distinct processing components that track the temporal evolution of spatial biases not only advances our understanding of the neural mechanisms underpinning normal visuospatial attention bias, but may also in the future aid differential diagnoses in disorders of spatial attention. |
Veerle Neyens; Rose Bruffaerts; Antonietta G. Liuzzi; Ioannis Kalfas; Ronald Peeters; Emmanuel Keuleers; Rufin Vogels; Simon De Deyne; Gert Storms; Patrick Dupont; Rik Vandenberghe Representation of semantic similarity in the left intraparietal sulcus: Functional magnetic resonance imaging evidence Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 402, 2017. @article{Neyens2017, According to a recent study, semantic similarity between concrete entities correlates with the similarity of activity patterns in left middle IPS during category naming. We examined the replicability of this effect under passive viewing conditions, the potential role of visuoperceptual similarity, where the effect is situated compared to regions that have been previously implicated in visuospatial attention, and how it compares to effects of object identity and location. Forty-six subjects participated. Subjects passively viewed pictures from two categories, musical instruments and vehicles. Semantic similarity between entities was estimated based on a concept-feature matrix obtained in more than 1,000 subjects. Visuoperceptual similarity was modeled based on the HMAX model, the AlexNet deep convolutional learning model, and thirdly, based on subjective visuoperceptual similarity ratings. Among the IPS regions examined, only left middle IPS showed a semantic similarity effect. The effect was significant in hIP1, hIP2, and hIP3. Visuoperceptual similarity did not correlate with similarity of activity patterns in left middle IPS. The semantic similarity effect in left middle IPS was significantly stronger than in the right middle IPS and also stronger than in the left or right posterior IPS. The semantic similarity effect was similar to that seen in the angular gyrus. Object identity effects were much more widespread across nearly all parietal areas examined. Location effects were relatively specific for posterior IPS and area 7 bilaterally. 
To conclude, the current findings replicate the semantic similarity effect in left middle IPS under passive viewing conditions, and demonstrate its anatomical specificity within a cytoarchitectonic reference frame. We propose that the semantic similarity effect in left middle IPS reflects the transient uploading of semantic representations in working memory. |
Tom Nissens; Katja Fiehler Saccades and reaches curve away from the other effector's target in simultaneous eye and hand movements Journal Article In: Journal of Neurophysiology, vol. 119, pp. 118–123, 2017. @article{Nissens2017a, Simultaneous eye and hand movements are highly coordinated and tightly coupled. This raises the question whether the selection of eye and hand targets relies on a shared attentional mechanism or separate attentional systems. Previous studies have revealed conflicting results by reporting evidence for both a shared as well as separate systems. Movement properties such as movement curvature can provide novel insights into this question as they provide a sensitive measure for attentional allocation during target selection. In the current study, participants performed simultaneous eye and hand movements to the same or different visual target locations. We show that both saccade and reaching movements curve away from the other effector's target location when they are simultaneously performed to spatially distinct locations. We argue that there is a shared attentional mechanism involved in selecting eye and hand targets, which may be located at the level of effector-independent priority maps. |
Anna Nowakowska; Alasdair D. F. Clarke; Amelia R. Hunt Human visual search behaviour is far from ideal Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 284, no. 1849, pp. 1–6, 2017. @article{Nowakowska2017, Evolutionary pressures have made foraging behaviours highly efficient in many species. Eye movements during search present a useful instance of foraging behaviour in humans. We tested the efficiency of eye movements during search using homogeneous and heterogeneous arrays of line segments. The search target is visible in the periphery on the homogeneous array, but requires central vision to be detected on the heterogeneous array. For a compound search array that is heterogeneous on one side and homogeneous on the other, eye movements should be directed only to the heterogeneous side. Instead, participants made many fixations on the homogeneous side. By comparing search of compound arrays to an estimate of search performance based on uniform arrays, we isolate two contributions to search inefficiency. First, participants make superfluous fixations, sacrificing speed for a perceived (but not actual) gain in response certainty. Second, participants fixate the homogeneous side even more frequently than predicted by inefficient search of uniform arrays, suggesting they also fail to direct fixations to locations that yield the most new information. |
Lauri Nummenmaa; Lauri Oksama; Enrico Glerean; Jukka Hyönä Cortical circuit for binding object identity and location during multiple-object tracking Journal Article In: Cerebral Cortex, vol. 27, no. 1, pp. 162–172, 2017. @article{Nummenmaa2017, Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants' hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. |
Antje Nuthmann Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 370–392, 2017. @article{Nuthmann2017a, Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation's duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. 
The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general. |
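The LMM logic described in this entry can be illustrated with a toy simulation. This is a hedged sketch, not the authors' code or data: the effect direction (greater luminance, shorter fixations) comes from the abstract, but all numbers are invented, and a real analysis would fit a proper mixed model (e.g., R's lme4 or statsmodels' MixedLM) rather than averaging per-subject regressions:

```python
import numpy as np

# Toy stand-in for the analysis idea: fixation durations regressed on a
# local image feature (luminance), with subject-specific baselines.
# All parameters below are invented for illustration.
rng = np.random.default_rng(0)
n_subj, n_fix = 20, 200
slopes = []
for s in range(n_subj):
    luminance = rng.uniform(0.0, 1.0, n_fix)      # feature at each fixation
    baseline = 260.0 + rng.normal(0.0, 20.0)      # random intercept (ms)
    duration = baseline - 40.0 * luminance + rng.normal(0.0, 30.0, n_fix)
    slopes.append(np.polyfit(luminance, duration, 1)[0])  # per-subject slope

mean_slope = float(np.mean(slopes))
print(mean_slope)  # negative: greater luminance, shorter fixation durations
```

Averaging per-subject slopes approximates the fixed effect a mixed model would estimate while letting baselines vary across participants.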
Antje Nuthmann; Wolfgang Einhäuser; Immo Schütz How well can saliency models predict fixation selection in scenes beyond central bias? A new approach to model evaluation using generalized linear mixed models Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 491, 2017. @article{Nuthmann2017, Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available. |
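The central-bias-plus-saliency logic of the GLMM approach can be sketched as a minimal logistic regression on grid cells. This is an illustrative toy, not GridFix itself: the scene size, grid, effect sizes, and the plain gradient-ascent fit (with no by-subject or by-item random effects) are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parcellate a hypothetical 800x600 scene into an 8x6 grid of cells.
nx, ny = 8, 6
xs = (np.arange(nx) + 0.5) * (800 / nx)
ys = (np.arange(ny) + 0.5) * (600 / ny)
cx, cy = np.meshgrid(xs, ys)

# Predictor 1: distance of each cell centre from the image centre (z-scored).
dist = np.sqrt((cx - 400.0) ** 2 + (cy - 300.0) ** 2).ravel()
dist = (dist - dist.mean()) / dist.std()
# Predictor 2: a toy "saliency" value per cell (a real analysis would take
# this from the saliency model under evaluation).
sal = rng.uniform(0.0, 1.0, dist.size)
sal = (sal - sal.mean()) / sal.std()

# Simulate fixated / not-fixated cells over many scenes: central and salient
# cells are fixated more often (assumed effect directions and sizes).
n_scenes = 400
X = np.tile(np.column_stack([np.ones(dist.size), dist, sal]), (n_scenes, 1))
true_beta = np.array([-0.5, -1.0, 0.6])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.uniform(size=p_true.size) < p_true).astype(float)

# Fit the logistic GLM by plain gradient ascent on the log-likelihood.
# (The paper's GLMM additionally includes by-subject and by-item terms.)
beta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta += 0.1 * X.T @ (y - p) / y.size

print(beta)  # beta[1] < 0 (central bias), beta[2] > 0 (saliency effect)
```

The recovered coefficients separate the central bias from the saliency model's contribution, which is the core idea behind the parcellation-plus-GLMM evaluation.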
Verena A. Oberlader; Ulrich Ettinger; Rainer Banse; Alexander F. Schmidt Development of a cued pro- and antisaccade paradigm: An indirect measure to explore automatic components of sexual interest Journal Article In: Archives of Sexual Behavior, vol. 46, no. 8, pp. 2377–2388, 2017. @article{Oberlader2017, We developed a cued pro- and antisaccade paradigm (CPAP) to explore automatic components of sexual interest. Heterosexual participants (n = 32 women |
Hossein Adeli; Françoise Vitu; Gregory J. Zelinsky A model of the superior colliculus predicts fixation locations during scene viewing and visual search Journal Article In: Journal of Neuroscience, vol. 37, no. 6, pp. 1453–1467, 2017. @article{Adeli2017, Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. 
With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. |
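One of the organizing principles listed in this entry — population averaging over competing motor point images — can be illustrated with a toy centre-of-gravity readout on a 2-D motor map. The grid, Gaussian point images, and relative activities below are invented for illustration and are not the published MASC parameters:

```python
import numpy as np

# A 2-D motor map in degrees of visual angle.
grid = np.linspace(-10.0, 10.0, 201)
gx, gy = np.meshgrid(grid, grid)

# Two competing Gaussian "point images": a strong one centred at (4, 0)
# and a weaker one at (-4, 0). Amplitudes and widths are illustrative.
sigma = 1.5
a = 1.0 * np.exp(-((gx - 4.0) ** 2 + gy ** 2) / (2 * sigma ** 2))
b = 0.5 * np.exp(-((gx + 4.0) ** 2 + gy ** 2) / (2 * sigma ** 2))
act = a + b

# Population averaging: the saccade endpoint is the centre of gravity of
# the whole activity map, pulled toward the stronger peak.
endpoint = np.array([np.sum(act * gx), np.sum(act * gy)]) / np.sum(act)
print(endpoint)  # x lies between the peaks, closer to the stronger one at 4
```

The averaged endpoint landing between two active sites is the same kind of "global effect" that population-coding accounts of the SC are designed to capture.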
Mehmet N. Ağaoğlu; Susana T. L. Chung Interaction between stimulus contrast and pre-saccadic crowding Journal Article In: Royal Society Open Science, vol. 4, no. 2, pp. 1–17, 2017. @article{Agaoglu2017, Objects that are briefly flashed around the time of saccades are mislocalized. Previously, robust interactions between saccadic perceptual distortions and stimulus contrast have been reported. It is also known that crowding depends on the contrast of the target and flankers. Here, we investigated how stimulus contrast and crowding interact with pre-saccadic perception. We asked observers to report the orientation of a tilted Gabor presented in the periphery, with or without four flanking vertically oriented Gabors. Observers performed the task either following a saccade or while maintaining fixation. Contrasts of the target and flankers were independently set to either high or low, with equal probability. In both the fixation and saccade conditions, the flanked conditions resulted in worse discrimination performance—the crowding effect. In the unflanked saccade trials, performance significantly decreased with target-to-saccade onset for low-contrast targets but not for high-contrast targets. In the presence of flankers, impending saccades reduced performance only for low-contrast, but not for high-contrast flankers. Interestingly, average performance in the fixation and saccade conditions was mostly similar in all contrast conditions. Moreover, the magnitude of crowding was influenced by saccades only when the target had high contrast and the flankers had low contrasts. Overall, our results are consistent with modulation of perisaccadic spatial localization by contrast and saccadic suppression, but at odds with a recent report of pre-saccadic release of crowding. |
Umair Akram; Jason G. Ellis; Andriy Myachykov; Nicola L. Barclay Preferential attention towards the eye-region amongst individuals with insomnia Journal Article In: Journal of Sleep Research, vol. 26, no. 1, pp. 84–91, 2017. @article{Akram2017, People with insomnia often perceive their own facial appearance as more tired compared with the appearance of others. Evidence also highlights the eye-region in projecting tiredness cues to perceivers, and tiredness judgements often rely on preferential attention towards this region. Using a novel eye-tracking paradigm, this study examined: (i) whether individuals with insomnia display preferential attention towards the eye-region, relative to nose and mouth regions, whilst observing faces compared with normal-sleepers; and (ii) whether an attentional bias towards the eye-region amongst individuals with insomnia is self-specific or general in nature. Twenty individuals with DSM-5 Insomnia Disorder and 20 normal-sleepers viewed 48 neutral facial photographs (24 of themselves, 24 of other people) for periods of 4000 ms. Eye movements were recorded using eye-tracking, and first fixation onset, first fixation duration and total gaze duration were examined for three interest-regions (eyes, nose, mouth). Significant group × interest-region interactions indicated that, regardless of the face presented, participants with insomnia were quicker to attend to, and spent more time observing, the eye-region relative to the nose and mouth regions compared with normal-sleepers. However, no group × face × interest-region interactions were established. Thus, whilst individuals with insomnia displayed preferential attention towards the eye-region in general, this effect was not accentuated during self-perception. Insomnia appears to be characterized by a general, rather than self-specific, attentional bias towards the eye-region. 
These findings contribute to our understanding of face perception in insomnia, and provide tentative support for cognitive models of insomnia demonstrating that individuals with insomnia monitor faces in general, with a specific focus around the eye-region, for cues associated with tiredness. |
Albandri Alotaibi; Geoffrey Underwood; Alastair D. Smith Cultural differences in attention: Eye movement evidence from a comparative visual search task Journal Article In: Consciousness and Cognition, vol. 55, pp. 254–265, 2017. @article{Alotaibi2017, Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. |
Tatiana A. Amor; Mirko Luković; Hans J. Herrmann; José S. Andrade Influence of scene structure and content on visual search strategies Journal Article In: Journal of the Royal Society Interface, vol. 14, no. 132, 2017. @article{Amor2017, When searching for a target within an image, our brain can adopt different strategies, but which one does it choose? This question can be answered by tracking the motion of the eye while it executes the task. Following many individuals performing various search tasks, we distinguish between two competing strategies. Motivated by these findings, we introduce a model that captures the interplay of the search strategies and allows us to create artificial eye-tracking trajectories, which could be compared with the experimental ones. Identifying the model parameters allows us to quantify the strategy employed in terms of ensemble averages, characterizing each experimental cohort. In this way, we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce changes in the strategy. |
Lucía Amoruso; Agustín Ibáñez; Bruno Fonseca; Sebastián Gadea; Lucas Sedeño; Mariano Sigman; Adolfo M. García; Ricardo Fraiman; Daniel Fraiman Variability in functional brain networks predicts expertise during action observation Journal Article In: NeuroImage, vol. 146, pp. 690–700, 2017. @article{Amoruso2017, Observing an action performed by another individual activates, in the observer, similar circuits as those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners significantly differed in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research. |
Elaine J. Anderson; Marc S. Tibber; D. Sam Schwarzkopf; Sukhwinder S. Shergill; Emilio Fernandez-Egea; Geraint Rees; Steven C. Dakin Visual population receptive fields in people with schizophrenia have reduced inhibitory surrounds Journal Article In: Journal of Neuroscience, vol. 37, no. 6, pp. 1546–1556, 2017. @article{Anderson2017, People with schizophrenia (SZ) experience abnormal visual perception on a range of visual tasks, which have been linked to abnormal synaptic transmission and an imbalance between cortical excitation and inhibition. However, differences in the underlying architecture of visual cortex neurons, which might explain these visual anomalies, have yet to be reported in vivo. Here, we probed the neural basis of these deficits using fMRI and population receptive field (pRF) mapping to infer properties of visually responsive neurons in people with SZ. We employed a difference-of-Gaussian model to capture the center-surround configuration of the pRF, providing critical information about the spatial scale of the pRF's inhibitory surround. Our analysis reveals that SZ is associated with reduced pRF size in early retinotopic visual cortex, as well as a reduction in size and depth of the inhibitory surround in V1, V2, and V4. We consider how reduced inhibition might explain the diverse range of visual deficits reported in SZ. |
Nicola C. Anderson; Mieke Donk Salient object changes influence overt attentional prioritization and object-based targeting in natural scenes Journal Article In: PLoS ONE, vol. 12, no. 2, pp. e0172132, 2017. @article{Anderson2017a, A change to an object in natural scenes attracts attention when it occurs during a fixation. However, when a change occurs during a saccade, and is masked by saccadic suppression, it typically does not capture the gaze in a bottom-up manner. In the present work, we investigated how the type and direction of salient changes to objects affect the prioritization and targeting of objects in natural scenes. We asked observers to look around a scene in preparation for a later memory test. After a period of time, an object in the scene was increased or decreased in salience either during a fixation (with a transient signal) or during a saccade (without transient signal), or it was not changed at all. Changes that were made during a fixation attracted the eyes both when the change involved an increase and a decrease in salience. However, changes that were made during a saccade only captured the eyes when the change was an increase in salience, relative to the baseline no-change condition. These results suggest that the prioritization of object changes can be influenced by the underlying salience of the changed object. In addition, object changes that occurred with a transient signal (which is itself a salient signal) resulted in more central object targeting. Taken together, our results suggest that salient signals in a natural scene are an important component in both object prioritization and targeting in natural scene viewing, insofar as they align with object locations. |
Keith S. Apfelbaum; Bob McMurray Learning during processing: Word learning doesn't wait for word recognition to finish Journal Article In: Cognitive Science, vol. 41, pp. 706–747, 2017. @article{Apfelbaum2017, Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. |
Eduardo A. Aponte; Dario Schöbi; Klaas E. Stephan; Jakob Heinzle The stochastic early reaction, inhibition, and late action (SERIA) model for antisaccades Journal Article In: PLoS computational biology, vol. 13, no. 8, pp. e1005692, 2017. @article{Aponte2017, The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades. |
Ayelet Arazi; Nitzan Censor; Ilan Dinstein Neural variability quenching predicts individual perceptual abilities Journal Article In: Journal of Neuroscience, vol. 37, no. 1, pp. 97–109, 2017. @article{Arazi2017a, Neural activity during repeated presentations of a sensory stimulus exhibits considerable trial-by-trial variability. Previous studies have reported that trial-by-trial neural variability is reduced (quenched) by the presentation of a stimulus. However, the functional significance and behavioral relevance of variability quenching and the potential physiological mechanisms that may drive it have been studied only rarely. Here, we recorded neural activity with EEG as subjects performed a two-interval forced-choice contrast discrimination task. Trial-by-trial neural variability was quenched by ~40% after the presentation of the stimulus relative to the variability apparent before stimulus presentation, yet there were large differences in the magnitude of variability quenching across subjects. Individual magnitudes of quenching predicted individual discrimination capabilities such that subjects who exhibited larger quenching had smaller contrast discrimination thresholds and steeper psychometric function slopes. Furthermore, the magnitude of variability quenching was strongly correlated with a reduction in broadband EEG power after stimulus presentation. Our results suggest that neural variability quenching is achieved by reducing the amplitude of broadband neural oscillations after sensory input, which yields relatively more reproducible cortical activity across trials and enables superior perceptual abilities in individuals who quench more. |
Joseph M. Arizpe; Danielle L. McKean; Jack W. Tsao; Annie W. -Y. Chan Where you look matters for body perception: Preferred gaze location contributes to the body inversion effect Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0169148, 2017. @article{Arizpe2017, The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) on the BIE appears greater than that of differential preferred gaze locations. |
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Young adolescents with autism show abnormal joint attention network: A gaze contingent fMRI study Journal Article In: NeuroImage: Clinical, vol. 14, pp. 112–121, 2017. @article{Oberwelland2017, Behavioral research has revealed deficits in the development of joint attention (JA) as one of the earliest signs of autism. While the neural basis of JA has been studied predominantly in adults, we recently demonstrated a protracted development of the brain networks supporting JA in typically developing children and adolescents. The present eye-tracking/fMRI study now extends these findings to adolescents with autism. Our results show that in adolescents with autism, JA is subserved by abnormal activation patterns in brain areas related to social cognition that are at the core of ASD, including the STS and TPJ, despite behavioral maturation and the absence of behavioral differences. Furthermore, in the autism group we observed increased neural activity in a network of social and emotional processing areas during interactions with their mother. Moreover, data indicated that less severely affected individuals with autism showed higher frontal activation associated with self-initiated interactions. Taken together, this study provides first-time data on JA in children/adolescents with autism incorporating the interactive character of JA, its reciprocity, and its motivational aspects. The observed functional differences in adolescents with ASD suggest that persistent developmental differences in the neural processes underlying JA contribute to social interaction difficulties in ASD. |
Andrew D. Ogle; Dan J. Graham; Rachel G. Lucas-Thompson; Christina A. Roberto Influence of cartoon media characters on children's attention to and preference for food and beverage products Journal Article In: Journal of the Academy of Nutrition and Dietetics, vol. 117, no. 2, pp. 265–270, 2017. @article{Ogle2017, Background: Over-consuming unhealthful foods and beverages contributes to pediatric obesity and associated diseases. Food marketing influences children's food preferences, choices, and intake. Objective: To examine whether adding licensed media characters to healthful food/beverage packages increases children's attention to and preference for these products. We hypothesized that children prefer less- (vs more-) healthful foods, and pay greater attention to and preferentially select products with (vs without) media characters regardless of nutritional quality. We also hypothesized that children prefer more-healthful products when characters are present over less-healthful products without characters. Design: On a computer, participants viewed food/beverage pairs of more-healthful and less-healthful versions of similar products. The same products were shown with and without licensed characters on the packaging. An eye-tracking camera monitored participant gaze, and participants chose which product they preferred from each of 60 pairs. Participants/setting: Six- to 9-year-old children (n=149; mean age=7.36, standard deviation=1.12) recruited from the Twin Cities, MN, area in 2012-2013. Main outcome measures: Visual attention and product choice. Statistical analyses performed: Attention to products was compared using paired-samples t tests, and product choice was analyzed with single-sample t tests. Analyses of variance were conducted to test for interaction effects of specific characters and child sex and age. Results: Children paid more attention to products with characters and preferred less-healthful products. 
Contrary to our prediction, children chose products without characters approximately 62% of the time. Children's choices significantly differed based on age, sex, and the specific cartoon character displayed, with characters in this study being preferred by younger boys. Conclusions: Results suggest that putting licensed media characters on more-healthful food/beverage products might not encourage all children to make healthier food choices, but could increase selection of healthy foods among some, particularly younger children, boys, and those who like the featured character(s). Effective use likely requires careful demographic targeting. |
Sven Ohl; Martin Rolfs Saccadic eye movements impose a natural bottleneck on visual short-term memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 5, pp. 736–748, 2017. @article{Ohl2017, Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory to VSTM. In 4 experiments, we show that saccades, planned and executed after the disappearance of a memory array, markedly bias visual memory performance. First, items that had appeared at the saccade target were more readily remembered than items that had appeared elsewhere, even though the saccade was irrelevant to the memory task (Experiment 1). Second, this influence was strongest for saccades elicited right after the disappearance of the memory array and gradually declined over the course of a second (Experiment 2). Third, the saccade stabilized memory representations: The imposed bias persisted even several seconds after saccade execution (Experiment 3). Finally, the advantage for stimuli congruent with the saccade target occurred even when that stimulus was far less likely to be probed in the memory test than any other stimulus in the array, ruling out a strategic effort of observers to memorize information presented at the saccade target (Experiment 4). Together, these results make a strong case that saccades inadvertently determine the content of VSTM, and highlight the key role of actions for the fundamental building blocks of cognition. |
Sabine Öhlschläger; Melissa L. -H. Võ SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes Journal Article In: Behavior Research Methods, vol. 49, no. 5, pp. 1780–1791, 2017. @article{Oehlschlaeger2017, Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules – a scene grammar – enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind – SCEGRAM – we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.) including paradigms addressing developmental aspects of scene grammar. 
SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/. |
Eduard Ort; Johannes J. Fahrenfort; Christian N. L. Olivers Lack of free choice reveals the cost of having to search for more than one object Journal Article In: Psychological Science, vol. 28, no. 8, pp. 1137–1147, 2017. @article{Ort2017, It is debated whether people can concurrently search for more than one object or whether this results in switch costs. Using a gaze-contingent eye-tracking paradigm we reveal a crucial role for cognitive control. We instructed participants to simultaneously look for two color-defined objects presented among distractors. In one condition, both targets were available, giving the observer free choice on what to look for, allowing for proactive control. In other conditions, only one of the two targets was made available, so that the choice was imposed and reactive control would be required. No switch costs emerged when target choice was free, but reliable switch costs emerged when targets were imposed. Bridging contradictory findings, the results are consistent with a model of visual selection in which only one attentional template is active, and in which the efficiency of switching targets depends on the type of cognitive control allowed for by the environment. |
Marte Otten; Yaïr Pinto; Chris L. E. Paffen; Anil K. Seth; Ryota Kanai The uniformity illusion: Central stimuli can determine peripheral perception Journal Article In: Psychological Science, vol. 28, no. 1, pp. 56–68, 2017. @article{Otten2017, Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. This is not in line with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision. |
Dekel Abeles; Shlomit Yuval-Greenberg Just look away: Gaze aversions as an overt attentional disengagement mechanism Journal Article In: Cognition, vol. 168, pp. 99–109, 2017. @article{Abeles2017, During visual exploration of a scene, the eye-gaze tends to be directed toward more salient image-locations, containing more information. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase the cognitive-load. It would be therefore beneficial if during non-visual tasks, eye-gaze would be governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze-aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism. In two experiments, gaze-position was monitored by an eye-tracker while participants performed an auditory mental arithmetic task, and in a third experiment they performed an undemanding naming task. Task-irrelevant simple motion stimuli (drifting grating and random dot kinematogram) were centrally presented, moving at varying speeds. Participants averted their gaze away from the moving stimuli more frequently and for longer proportions of the time when the motion was faster than when it was slower. Additionally, a positive correlation was found between the task's difficulty and this aversion behavior. When the task was highly undemanding, no gaze aversion behavior was observed. We conclude that gaze aversion is an active avoidance strategy, sensitive to both the physical features of the visual distractions and the cognitive load imposed by the non-visual task. |
Nathan Arnett; Matthew Wagers Subject encodings and retrieval interference Journal Article In: Journal of Memory and Language, vol. 93, pp. 22–54, 2017. @article{Arnett2017, Interference has been identified as a cause of processing difficulty in linguistic dependencies, such as the subject-verb relation (Van Dyke and Lewis, 2003). However, while mounting evidence implicates retrieval interference in sentence processing, the nature of the retrieval cues involved - and thus the source of difficulty - remains largely unexplored. Three experiments used self-paced reading and eyetracking to examine the ways in which the retrieval cues provided at a verb characterize subjects. Syntactic theory has identified a number of properties correlated with subjecthood, both phrase-structural and thematic. Findings replicate and extend previous findings of interference at a verb from additional subjects, but indicate that retrieval outcomes are relativized to the syntactic domain in which the retrieval occurs. One, the cues distinguish between thematic subjects in verbal and nominal domains. Two, within the verbal domain, retrieval is sensitive to abstract syntactic properties associated with subjects and their clauses. We argue that the processing at a verb requires cue-driven retrieval, and that the retrieval cues utilize abstract grammatical properties which may reflect parser expectations. |
Árni Gunnar Ásgeirsson; Sander Nieuwenhuis No arousal-biased competition in focused visuospatial attention Journal Article In: Cognition, vol. 168, pp. 191–204, 2017. @article{Asgeirsson2017, Arousal sometimes enhances and sometimes impairs perception and memory. A recent theory attempts to reconcile these findings by proposing that arousal amplifies the competition between stimulus representations, strengthening already strong representations and weakening already weak representations. Here, we report a stringent test of this arousal-biased competition theory in the context of focused visuospatial attention. Participants were required to identify a briefly presented target in the context of multiple distractors, which varied in the degree to which they competed for representation with the target, as revealed by psychophysics. We manipulated arousal using emotionally arousing pictures (Experiment 1), alerting tones (Experiment 2) and white-noise stimulation (Experiment 3), and validated these manipulations with electroencephalography and pupillometry. In none of the experiments did we find evidence that arousal modulated the effect of distractor competition on the accuracy of target identification. Bayesian statistics revealed moderate to strong evidence against arousal-biased competition. Modeling of the psychophysical data based on Bundesen's (1990) theory of visual attention corroborated the conclusion that arousal does not bias competition in focused visuospatial attention. |
Janice Attard-Johnson; Markus Bindemann Sex-specific but not sexually explicit: Pupillary responses to dressed and naked adults Journal Article In: Royal Society Open Science, vol. 4, no. 5, pp. 1–10, 2017. @article{AttardJohnson2017, Dilation of the pupils is an indicator of an observer's sexual interest in other people, but it remains unresolved whether this response is strengthened or diminished by sexually explicit material. To address this question, this study compared pupillary responses of heterosexual men and women to naked and dressed portraits of male and female adult film actors. Pupillary responses corresponded with observers' self-reported sexual orientation, such that dilation occurred during the viewing of opposite-sex people, but were comparable for naked and dressed targets. These findings indicate that pupillary responses provide a sex-specific measure, but are not sensitive to sexually explicit content. |
Janice Attard-Johnson; Markus Bindemann; Caoilte Ó Ciardha Heterosexual, homosexual, and bisexual men's pupillary responses to persons at different stages of sexual development Journal Article In: Journal of Sex Research, vol. 54, no. 9, pp. 1085–1096, 2017. @article{AttardJohnson2017a, This study investigated whether pupil size during the viewing of images of adults and children reflects the sexual orientation of heterosexual, homosexual, and bisexual men (n = 100 |
Habiba Azab; Benjamin Y. Hayden Correlates of decisional dynamics in the dorsal anterior cingulate cortex Journal Article In: PLoS Biology, vol. 15, no. 11, pp. e2003091, 2017. @article{Azab2017, We hypothesized that during binary economic choice, decision makers use the first option they attend as a default to which they compare the second. To test this idea, we recorded activity of neurons in the dorsal anterior cingulate cortex (dACC) of macaques choosing between gambles presented asynchronously. We find that ensemble encoding of the value of the first offer includes both choice-dependent and choice-independent aspects, as if reflecting a partial decision. That is, its responses are neither entirely pre- nor post-decisional. In contrast, coding of the value of the second offer is entirely decision dependent (i.e., post-decisional). This result holds even when offer-value encodings are compared within the same time period. Additionally, we see no evidence for 2 pools of neurons linked to the 2 offers; instead, all comparison appears to occur within a single functionally homogenous pool of task-selective neurons. These observations suggest that economic choices reflect a context-dependent evaluation of attended options. Moreover, they raise the possibility that value representations reflect, to some extent, a tentative commitment to a choice. |
Bobby Azarian; George A. Buzzell; Elizabeth G. Esser; Alexander Dornstauder; Matthew S. Peterson Averted body postures facilitate orienting of the eyes Journal Article In: Acta Psychologica, vol. 175, pp. 28–32, 2017. @article{Azarian2017, It is well established that certain social cues, such as averted eye gaze, can automatically initiate the orienting of another's spatial attention. However, whether human posture can also reflexively cue spatial attention remains unclear. The present study directly investigated whether averted neutral postures reflexively cue the attention of observers in a normal population of college students. Similar to classic gaze-cuing paradigms, non-predictive averted posture stimuli were presented prior to the onset of a peripheral target stimulus at one of five SOAs (100 ms–500 ms). Participants were instructed to move their eyes to the target as fast as possible. Eye-tracking data revealed that participants were significantly faster in initiating saccades when the posture direction was congruent with the target stimulus. Since covert attention shifts precede overt shifts in an obligatory fashion, this suggests that directional postures reflexively orient the attention of others. In line with previous work on gaze-cueing, the congruency effect of posture cue was maximal at the 300 ms SOA. These results support the notion that a variety of social cues are used by the human visual system in determining the “direction of attention” of others, and also suggest that human body postures are salient stimuli capable of automatically shifting an observer's attention. |