EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2017 |
Jason Hubbard; David Kuhns; Theo A. J. Schäfer; Ulrich Mayr Is conflict adaptation due to active regulation or passive carry-over? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 3, pp. 385–393, 2017. @article{Hubbard2017, Conflict-adaptation effects (i.e., reduced response-time costs on high-conflict trials following high-conflict trials) supposedly represent our cognitive system's ability to regulate itself according to current processing demands. However, currently it is not clear whether these effects reflect conflict-triggered, active regulation, or passive carry-over of previous-trial control settings. We used eye movements to examine whether the degree of experienced conflict modulates conflict-adaptation effects, as the conflict-triggered regulation view predicts. Across 2 experiments in which participants had to identify a target stimulus based on an endogenous cue while—on conflict trials—having to resist a sudden-onset distractor, we found a clear indication of conflict adaptation. This adaptation effect disappeared however, when participants inadvertently fixated the sudden-onset distractor on the previous trial—that is, when they experienced a high degree of conflict. This pattern of results suggests that conflict adaptation can be explained parsimoniously in terms of a broader memory process that retains recently adopted control settings across trials. |
C. Hübner; Alexander C. Schütz Numerosity estimation benefits from transsaccadic information integration Journal Article In: Journal of Vision, vol. 17, no. 13, pp. 1–16, 2017. @article{Huebner2017, Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistent with previous findings about orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes to our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and is operative despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment. |
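The maximum-likelihood integration tested in the Hübner and Schütz abstract weights each cue by its reliability (inverse variance). The sketch below shows this standard computation only; it is not the authors' analysis code, and the function and variable names are illustrative:

```python
def integrate_cues(mu_p, var_p, mu_f, var_f):
    """Reliability-weighted (maximum-likelihood) combination of a
    peripheral estimate (mu_p, var_p) and a foveal estimate (mu_f, var_f).

    Each cue is weighted by its inverse variance; the combined variance
    is never larger than that of the better single cue.
    """
    w_p = (1.0 / var_p) / (1.0 / var_p + 1.0 / var_f)
    w_f = 1.0 - w_p
    mu_combined = w_p * mu_p + w_f * mu_f
    var_combined = 1.0 / (1.0 / var_p + 1.0 / var_f)
    return mu_combined, var_combined
```

With equally reliable cues the combined estimate is the simple average and the variance is halved; with unequal reliabilities the more reliable cue dominates, which is the pattern the transsaccadic condition is compared against.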
Nam Wook Kim; Zoya Bylinskii; Michelle A. Borkin; Krzysztof Z. Gajos; Aude Oliva; Fredo Durand; Hanspeter Pfister BubbleView: An interface for crowdsourcing image importance maps and tracking visual attention Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 24, no. 5, pp. 1–40, 2017. @article{Kim2017, In this paper, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles" - small, circular areas of the image at original resolution, similar to having a confined area of focus like the eye fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types: information visualizations, natural images, static webpages, and graphic designs, and compared the clicks to eye fixations collected with eye-trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images, and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images, and works best for defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than related methodologies that use continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies as approximating eye fixations for different image and task types. |
Sujin Kim; Randolph Blake; Minyoung Lee; Chai-Youn Kim Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch Journal Article In: PLoS ONE, vol. 12, no. 4, pp. e0175103, 2017. @article{Kim2017b, Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3–19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On “pitch-congruent” trials, participants heard an auditory melody that was congruent in pitch with the visual score, on “pitch-incongruent” trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on “melody-incongruent” trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. 
Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score. |
Mathias Klinghammer; Gunnar Blohm; Katja Fiehler Scene configuration and object reliability affect the use of allocentric information for memory-guided reaching Journal Article In: Frontiers in Neuroscience, vol. 11, pp. 204, 2017. @article{Klinghammer2017, Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. 
However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information. |
P. Christiaan Klink; Danique Jeurissen; Jan Theeuwes; Damiaan Denys; Pieter R. Roelfsema Working memory accuracy for multiple targets is driven by reward expectation and stimulus contrast with different time-courses Journal Article In: Scientific Reports, vol. 7, pp. 9082, 2017. @article{Klink2017, The richness of sensory input dictates that the brain must prioritize and select information for further processing and storage in working memory. Stimulus salience and reward expectations influence this prioritization but their relative contributions and underlying mechanisms are poorly understood. Here we investigate how the quality of working memory for multiple stimuli is determined by priority during encoding and later memory phases. Selective attention could, for instance, act as the primary gating mechanism when stimuli are still visible. Alternatively, observers might still be able to shift priorities across memories during maintenance or retrieval. To distinguish between these possibilities, we investigated how and when reward cues determine working memory accuracy and found that they were only effective during memory encoding. Previously learned, but currently non-predictive, color-reward associations had a similar influence, which gradually weakened without reinforcement. Finally, we show that bottom-up salience, manipulated through varying stimulus contrast, influences memory accuracy during encoding with a fundamentally different time-course than top-down reward cues. While reward-based effects required long stimulus presentation, the influence of contrast was strongest with brief presentations. Our results demonstrate how memory resources are distributed over memory targets and implicate selective attention as a main gating mechanism between sensory and memory systems. |
Jessica Klusek; Joseph Schmidt; Amanda J. Fairchild; Anna Porter; Jane E. Roberts Altered sensitivity to social gaze in the FMR1 premutation and pragmatic language competence Journal Article In: Journal of Neurodevelopmental Disorders, vol. 9, no. 1, pp. 1–10, 2017. @article{Klusek2017, Background: The FMR1 premutation affects 1:291 women and is associated with a range of cognitive, affective, and physical health complications, including deficits in pragmatic language (i.e., social language). This study investigated attention to eye gaze as a fundamental social-cognitive skill that may be impaired in the FMR1 premutation and could underlie pragmatic deficits. Given the high prevalence of the FMR1 premutation, efforts to define its phenotype and mechanistic underpinnings have significant public health implications. Methods: Thirty-five women with the FMR1 premutation and 20 control women completed an eye-tracking paradigm that recorded time spent dwelling within the eye region in response to a face displaying either direct or averted gaze. Pragmatic language ability was coded from a conversational sample using the Pragmatic Rating Scale. Results: Women with the FMR1 premutation failed to show attentional preference to direct gaze and spent more time dwelling on the averted eyes relative to controls. While dwelling on the eyes was associated with better pragmatic language performance in controls, these variables were unrelated in the women with the FMR1 premutation. Conclusions: Altered sensitivity to social gaze, characterized by increased salience of averted gaze, was observed among women with the FMR1 premutation. Furthermore, women with the FMR1 premutation were unable to capitalize on information conveyed through the eyes to enhance social-communicative engagement, which differed from patterns seen in controls. These findings contribute to the growing characterization of social and communication phenotypes associated with the FMR1 premutation. |
Kathryn Koehler; Miguel P. Eckstein Temporal and peripheral extraction of contextual cues from scenes during visual search Journal Article In: Journal of Vision, vol. 17, no. 2, pp. 1–32, 2017. @article{Koehler2017a, Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16 degrees into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. 
Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene. |
Kathryn Koehler; Miguel P. Eckstein Beyond scene gist: Objects guide search more than scene background Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 6, pp. 1177–1193, 2017. @article{Koehler2017, Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. |
Stephan Koenig; Peter Nauroth; Sara Lucke; Harald Lachnit; Mario Gollwitzer; Metin Uengoer Fear acquisition and liking of out-group and in-group members: Learning bias or attention? Journal Article In: Biological Psychology, vol. 129, pp. 195–206, 2017. @article{Koenig2017, The present study explores the notion of an out-group fear learning bias that is characterized by facilitated fear acquisition toward harm-doing out-group members. Participants were conditioned with two in-group and two out-group faces as conditioned stimuli. During acquisition, one in-group and one out-group face was paired with an aversive shock whereas the other in-group and out-group face was presented without shock. Psychophysiological measures of fear conditioning (skin conductance and pupil size) and explicit and implicit liking exhibited increased differential responding to out-group faces compared to in-group faces. However, the results did not clearly indicate that harm-doing out-group members were more readily associated with fear than harm-doing in-group members. In contrast, the out-group face not paired with shock decreased conditioned fear and disliking at least to the same extent that the shock-associated out-group face increased these measures. Based on these results, we suggest an account of the out-group fear learning bias that relates to an attentional bias to process in-group information. |
Stephan Koenig; Metin Uengoer; Harald Lachnit Attentional bias for uncertain cues of shock in human fear conditioning: Evidence for Attentional Learning Theory Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 266, 2017. @article{Koenig2017a, We conducted a human fear conditioning experiment in which three different color cues were followed by an aversive electric shock on 0, 50, and 100% of the trials, and thus induced low (L), partial (P), and high (H) shock expectancy, respectively. The cues differed with respect to the strength of their shock association (L < P < H). During conditioning we measured pupil dilation and ocular fixations to index differences in the attentional processing of the cues. After conditioning, the shock-associated colors were introduced as irrelevant distracters during visual search for a shape target while shocks were no longer administered and we analyzed the cues' potential to capture and hold overt attention automatically. Our findings suggest that fear conditioning creates an automatic attention bias for the conditioned cues that depends on their correlation with the aversive outcome. This bias was exclusively linked to the strength of the cues' shock association for the early attentional processing of cues in the visual periphery, but additionally was influenced by the uncertainty of the shock prediction after participants fixated on the cues. These findings are in accord with attentional learning theories that formalize how associative learning shapes automatic attention. |
Ellen M. Kok; Avigael M. Aizenman; Melissa L. -H. Võ; Jeremy M. Wolfe Even if I showed you where you looked, remembering where you just looked is hard Journal Article In: Journal of Vision, vol. 17, no. 12, pp. 1–11, 2017. @article{Kok2017, People know surprisingly little about their own visual behavior, which can be problematic when learning or executing complex visual tasks such as search of medical images. We investigated whether providing observers with online information about their eye position during search would help them recall their own fixations immediately afterwards. Seventeen observers searched for various objects in "Where's Waldo" images for 3 s. On two-thirds of trials, observers made target present/absent responses. On the other third (critical trials), they were asked to click twelve locations in the scene where they thought they had just fixated. On half of the trials, a gaze-contingent window showed observers their current eye position as a 7.5 degrees diameter "spotlight." The spotlight "illuminated" everything fixated, while the rest of the display was still visible but dimmer. Performance was quantified as the overlap of circles centered on the actual fixations and centered on the reported fixations. Replicating prior work, this overlap was quite low (26%), far from ceiling (66%) and quite close to chance performance (21%). Performance was only slightly better in the spotlight condition (28%). |
Catarina C. Kordsachia; Izelle Labuschagne; Julie C. Stout Abnormal visual scanning of emotionally evocative natural scenes in Huntington's disease Journal Article In: Frontiers in Psychology, vol. 8, pp. 405, 2017. @article{Kordsachia2017, Huntington's disease (HD) is a neurodegenerative movement disorder associated with deficits in the processing of emotional stimuli, including alterations in the self-reported subjective experience of emotion when presented with pictures of emotional scenes. The aim of this study was to determine whether individuals with HD, compared to unaffected controls, display abnormal visual scanning of emotionally-evocative natural scenes. Using eye-tracking, we recorded eye-movements of 25 HD participants (advanced pre-symptomatic and early symptomatic) and 25 age-matched unaffected control participants during a picture viewing task. Participants viewed pictures of natural scenes associated with different emotions: anger, disgust, happiness, or neutral, and evaluated those pictures on a valence rating scale. Individuals with HD displayed abnormal visual scanning patterns, but did not differ from controls with respect to their valence ratings. Specifically, compared to controls, HD participants spent less time fixating on the pictures and made longer scan paths. This finding highlights the importance of taking visual scanning behavior into account when investigating emotion processing in HD. The visual scanning patterns displayed by HD participants could reflect a heightened, but possibly unfocussed, search for information, and might be linked to attentional deficits or to altered subjective emotional experiences in HD. Another possibility is that HD participants may have found it more difficult than controls to evaluate the emotional valence of the scenes, and the heightened search for information was employed as a compensatory strategy. |
Christoph W. Korn; Matthias Staib; Athina Tzovara; Giuseppe Castegnetti; Dominik R. Bach A pupil size response model to assess fear learning Journal Article In: Psychophysiology, vol. 54, no. 3, pp. 330–343, 2017. @article{Korn2017, During fear conditioning, pupil size responses dissociate between conditioned stimuli that are contingently paired (CS+) with an aversive unconditioned stimulus, and those that are unpaired (CS-). Current approaches to assess fear learning from pupil responses rely on ad hoc specifications. Here, we sought to develop a psychophysiological model (PsPM) in which pupil responses are characterized by response functions within the framework of a linear time-invariant system. This PsPM can be written as a general linear model, which is inverted to yield amplitude estimates of the eliciting process in the central nervous system. We first characterized fear-conditioned pupil size responses based on an experiment with auditory CS. PsPM-based parameter estimates distinguished CS+/CS- better than, or on par with, two commonly used methods (peak scoring, area under the curve). We validated this PsPM in four independent experiments with auditory, visual, and somatosensory CS, as well as short (3.5 s) and medium (6 s) CS/US intervals. Overall, the new PsPM provided equal or decisively better differentiation of CS+/CS- than the two alternative methods and was never decisively worse. We further compared pupil responses with concurrently measured skin conductance and heart period responses. Finally, we used our previously developed luminance-related pupil responses to infer the timing of the likely neural input into the pupillary system. Overall, we establish a new PsPM to assess fear conditioning based on pupil responses. The model has a potential to provide higher statistical sensitivity, can be applied to other conditioning paradigms in humans, and may be easily extended to nonhuman mammals. |
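The PsPM described in the Korn et al. abstract is written as a general linear model: event onsets are convolved with an assumed canonical pupil response function, and the model is inverted to yield per-condition amplitude estimates. The sketch below illustrates only that GLM logic, not the actual PsPM toolbox implementation; the kernel, function, and variable names are placeholders:

```python
import numpy as np

def estimate_amplitudes(signal, onsets_per_condition, kernel):
    """Estimate per-condition response amplitudes with a GLM.

    signal: 1-D pupil trace (n samples).
    onsets_per_condition: list of arrays of sample indices, e.g. one
        array of CS+ onsets and one of CS- onsets.
    kernel: assumed canonical pupil response function (1-D array).
    Returns least-squares amplitude estimates, one per condition.
    """
    n = len(signal)
    columns = []
    for onsets in onsets_per_condition:
        stick = np.zeros(n)
        stick[onsets] = 1.0
        # Convolve the event train with the response function, trimmed
        # to the length of the recorded signal.
        columns.append(np.convolve(stick, kernel)[:n])
    # Design matrix: one regressor per condition plus an intercept.
    X = np.column_stack(columns + [np.ones(n)])
    betas, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return betas[:-1]  # drop the intercept estimate
```

The estimated amplitudes play the role of the CS+/CS- parameter estimates the abstract compares against peak scoring and area under the curve.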
Ivan Koychev; Dan Joyce; E. Barkus; Ulrich Ettinger; Anne Schmechtig; Colin T. Dourish; G. R. Dawson; Kevin J. Craig; J. F. William Deakin In: Cognitive Neuropsychiatry, vol. 22, no. 3, pp. 213–232, 2017. @article{Koychev2017, Introduction: Body dysmorphic disorder (BDD) is characterised by repetitive behaviours and/or mental acts occurring in response to preoccupations with perceived flaws in physical appearance. Based on an eye-tracking paradigm, this study aimed to examine how individuals with BDD processed their own face. Methods: Participants were 21 BDD patients, 19 obsessive–compulsive disorder patients and 21 healthy controls (HC), who were age-, sex-, and IQ-matched. Stimuli were photographs of participants' own faces as well as those from the Pictures of Facial Affect battery. Outcome measures were affect recognition accuracy as well as spatial and temporal scanpath parameters. Results: The BDD group exhibited significantly decreased recognition accuracy for their own face relative to the HC group, and this was most pronounced for those who had a key concern centred on their face. Individual qualitative scanpath analysis revealed restricted and extensive scanning behaviours in BDD participants with a facial preoccupation. Persons with severe BDD also exhibited more marked scanpath deficits. Conclusions: Future research should be directed at extending the current work by incorporating neuroimaging techniques, and investigations of eye-tracking focused on affected body parts in BDD. These could yield fruitful therapeutic applications via incorporation with existing treatment approaches. |
Wolfgang Jaschinski Individual objective and subjective fixation disparity in near vision Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0170190, 2017. @article{Jaschinski2017, Binocular vision refers to the integration of images in the two eyes for improved visual performance and depth perception. One aspect of binocular vision is the fixation disparity, which is a suboptimal condition in individuals with respect to binocular eye movement control and subsequent neural processing. The objective fixation disparity refers to the vergence angle between the visual axes, which is measured with eye trackers. Subjective fixation disparity is tested with two monocular nonius lines which indicate the physical nonius separation required for perceived alignment. Subjective and objective fixation disparity represent the different physiological mechanisms of motor and sensory fusion, but the precise relation between these two is still unclear. This study measures both types of fixation disparity at viewing distances of 40, 30, and 24 cm while observers fixated a central stationary fusion target. 20 young adult subjects with normal binocular vision were tested repeatedly to investigate individual differences. For heterophoria and subjective fixation disparity, this study replicated that the binocular system does not properly adjust to near targets: outward (exo) deviations typically increase as the viewing distance is shortened. This exo proximity effect, however, was not found for objective fixation disparity, which, on average, was zero. But individuals can have reliable outward (exo) or inward (eso) vergence errors. Cases with eso objective fixation disparity tend to have less exo states of subjective fixation disparity and heterophoria. In summary, the two types of fixation disparity seem to respond in a different way when the viewing distance is shortened. 
Motor and sensory fusion, as reflected by objective and subjective fixation disparity, exhibit complex interactions that may differ between individuals (eso versus exo) and vary with viewing distance (far versus near vision). |
Su Keun Jeong; Yaoda Xu Task-context-dependent linear representation of multiple visual objects in human parietal cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 10, pp. 1778–1789, 2017. @article{Jeong2017, A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object processing area. We obtained fMRI response patterns to object pairs and their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single objects and object pairs. In the lateral occipital region, the representation for a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. 
Nevertheless, when we equated the amount of task information present by examining responses from two pairs of objects, we found that representations for the average of two object pairs were indistinguishable in both parietal regions from the average of another two object pairs containing the same four component objects but with a different pairing of the objects (i.e., the average of AB and CD vs. that of AD and CB). Thus, when task information was held consistent, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference of how visual information may be represented in ventral visual and parietal regions. |
Jianrong Jia; Ling Liu; Fang Fang; Huan Luo Sequential sampling of visual objects during sustained attention Journal Article In: PLoS Biology, vol. 15, no. 6, pp. e2001903, 2017. @article{Jia2017b, In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. 
Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest that selective attention, in addition to the classically posited attentional “focus,” involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time. |
Yuncheng Jia; Gang Cheng; Dajun Zhang; Na Ta; Mu Xia; Fangyuan Ding Attachment avoidance is significantly related to attentional preference for infant faces: Evidence from eye movement data Journal Article In: Frontiers in Psychology, vol. 8, pp. 85, 2017. @article{Jia2017, Objective: To determine the influence of adult attachment orientations on infant preference. Methods: We adopted eye-tracking technology to monitor childless college women's eye movements when looking at pairs of faces, including one adult face (man or woman) and one infant face, with three different expressions (happy, sad, and neutral). The participants (N = 150; 84% Han ethnicity) were aged 18–29 years (M = 19.22 |
Yu-Cin Jian Eye-movement patterns and reader characteristics of students with good and poor performance when reading scientific text with diagrams Journal Article In: Reading and Writing, vol. 30, no. 7, pp. 1447–1472, 2017. @article{Jian2017a, This study investigated the cognitive processes and reader characteristics of sixth graders who had good and poor performance when reading scientific text with diagrams. We first measured the reading ability and reading self-efficacy of sixth-grade participants, and then recorded their eye movements while they were reading an illustrated scientific text and scored their answers to content-related questions. Finally, the participants evaluated the difficulty of the article, the attractiveness of the content and diagram, and their learning performance. The participants were then classified into groups based on how many correct responses they gave to questions related to reading. The results showed that readers with good performance had better character recognition ability and reading self-efficacy, were more attracted to the diagrams, and had higher self-evaluated learning levels than the readers with poor performance did. Eye-movement data indicated that readers with good performance spent significantly more reading time on the whole article, the text section, and the diagram section than the readers with poor performance did. Interestingly, readers with good performance had significantly longer mean fixation duration on the diagrams than readers with poor performance did; further, readers with good performance made more saccades between the text and the diagrams. Additionally, sequential analysis of eye movements showed that readers with good performance preferred to observe the diagram rather than the text after reading the title, but this tendency was not present in readers with poor performance. 
In sum, using eye-tracking technology and several reading tests and questionnaires, we found that various cognitive aspects (reading strategy, diagram utilization) and affective aspects (reading self-efficacy, article likeness, diagram attraction, and self-evaluation of learning) affected sixth graders' reading performance in this study. |
Elizabeth K. Johnson; Henry W. Fields; F. Michael Beck; Allen R. Firestone; Stephen F. Rosenstiel In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 151, no. 2, pp. 297–310, 2017. @article{Johnson2017, Introduction: Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine the level of viewers' visual attention in its treatment need categories levels 3 to 7 for persons considered “attractive,” “average,” or “unattractive.” Methods: Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Results: Viewers demonstrated excellent to good reliability among the 6 interest areas (intraviewer reliability, 0.70-0.96; interviewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. 
However, only with the attractive models were significant differences in fixation density and duration found between borderline levels with female viewers. Female viewers paid attention to different areas of the face than did male viewers. Conclusions: The importance of dental attractiveness is amplified in facially attractive female models compared with average and unattractive female models between near-ideal and borderline-severe dentally unattractive levels. |
Elizabeth L. Johnson; Callum D. Dewar; Anne Kristin Solbakk; Tor Endestad; Torstein R. Meling; Robert T. Knight Bidirectional frontoparietal oscillatory systems support working memory Journal Article In: Current Biology, vol. 27, no. 12, pp. 1829–1835, 2017. @article{Johnson2017a, The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC) [1–8]. However, more recent work has implicated posterior cortical regions [9–12], suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions [13], independently subserve communications both to and from PFC—uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. 
The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory. |
Donatas Jonikaitis; Anna Klapetek; Heiner Deubel Spatial attention during saccade decisions Journal Article In: Journal of Neurophysiology, vol. 118, no. 1, pp. 149–160, 2017. @article{Jonikaitis2017, Behavioral measures of decision making are usually limited to observations of decision outcomes. In the present study, we made use of the fact that oculomotor and sensory selection are closely linked to track oculomotor decision making before oculomotor responses are made. We asked participants to make a saccadic eye movement to one of two memorized target locations and observed that visual sensitivity increased at both the chosen and the non-chosen saccade target locations, with a clear bias towards the chosen target. The time course of changes in visual sensitivity was related to saccadic latency, with the competition between the chosen and non-chosen targets resolved faster before short latency saccades. On error trials, we observed an increased competition between the chosen and non-chosen targets. Moreover, oculomotor selection and visual sensitivity were influenced by top-down and bottom-up factors as well as by selection history and predicted the direction of saccades. Our findings demonstrate that saccade decisions have direct visual consequences and show that decision making can be traced in the human oculomotor system well before choices are made. Our results also indicate a strong association between decision making, saccade target selection and visual sensitivity. |
Mordechai Z. Juni; Miguel P. Eckstein The wisdom of crowds for visual search Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 21, pp. E4306–E4315, 2017. @article{Juni2017, Decision-making accuracy typically increases through collective integration of people's judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people's confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers' confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers' nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. |
Yoshinao Kajikawa; John F. Smiley; Charles E. Schroeder Primary generators of visually evoked field potentials recorded in the macaque auditory cortex Journal Article In: Journal of Neuroscience, vol. 37, no. 42, pp. 10139–10153, 2017. @article{Kajikawa2017, Prior studies have reported “local” field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be “contaminated” by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such “far-field” activity, in addition to, or in absence of, local synaptic responses. |
Sakari Kallio; Mika Koivisto; Johanna K. Kaakinen Synaesthesia-type associations and perceptual changes induced by hypnotic suggestion Journal Article In: Scientific Reports, vol. 7, pp. 17310, 2017. @article{Kallio2017, Are synaesthetic experiences congenital and so hard-wired, or can a functional analogue be created? We induced an equivalent of form-colour synaesthesia using hypnotic suggestions in which symbols in an array (circles, crosses, squares) were suggested always to have a certain colour. In a Stroop type-naming task, three of the four highly hypnotizable participants showed a strong synaesthesia-type association between symbol and colour. This was verified both by their subjective reports and objective eye-movement behaviour. Two resembled a projector- and one an associator-type synaesthete. Participant interviews revealed that subjective experiences differed somewhat from typical (congenital) synaesthesia. Control participants who mimicked the task using cognitive strategies showed a very different response pattern. Overall, the results show that the targeted, preconsciously triggered associations and perceptual changes seen in association with congenital synaesthesia can rapidly be induced by hypnosis. They suggest that each participant's subjective experience of the task should be carefully evaluated, especially when studying hypnotic hallucinations. Studying such experiences can increase understanding of perception, automaticity, and awareness and open unique opportunities in cognitive neuroscience and consciousness research. |
Zampeta Kalogeropoulou; Akshay V. Jagadeesh; Sven Ohl; Martin Rolfs Setting and changing feature priorities in visual short-term memory Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 453–458, 2017. @article{Kalogeropoulou2017a, Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM, protecting information that would otherwise be forgotten. |
Zampeta Kalogeropoulou; Martin Rolfs Saccadic eye movements do not disrupt the deployment of feature-based attention Journal Article In: Journal of Vision, vol. 17, no. 8, pp. 1–15, 2017. @article{Kalogeropoulou2017, The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue–stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades. |
Kei Kanari; Kiyomi Sakamoto; Hirohiko Kaneko Effect of visual attention on the properties of optokinetic nystagmus Journal Article In: PLoS ONE, vol. 12, no. 4, pp. e0175453, 2017. @article{Kanari2017, It has been demonstrated that optokinetic nystagmus (OKN) gain increases through attention to peripheral motion when the central visual field is occluded. However, how the properties of OKN change when two areas containing motion in different directions are presented in the peripheral visual field is still unclear. In this study, we investigated whether OKN corresponding to the attended motion in the periphery occurred while the observer was maintaining fixation at the center. We presented two areas with different directions of motion arranged on the left and right, top and bottom, or center and surrounding (concentric) areas in the display. Observers counted targets appearing on the attended area in the stimulus to maintain their attention on it. The results indicate that attention enhances the gain and frequency of OKN corresponding to the attended motion even in the case of stimuli having several areas with different directions of motion. |
Ryan W. Langridge; Jonathan J. Marotta In: Experimental Brain Research, vol. 235, no. 9, pp. 2705–2716, 2017. @article{Langridge2017, Participants executed right-handed reach-to-grasp movements toward horizontally translating targets. Visual feedback of the target when reaching, as well as the presence of additional cues placed above and below the target's path, was manipulated. Comparison of average fixations at reach onset and at the time of the grasp suggested that participants accurately extrapolated the occluded target's motion prior to reach onset, but not after the reach had been initiated, resulting in inaccurate grasp placements. Final gaze and grasp positions were more accurate when reaching for leftward moving targets, suggesting individuals use different grasp strategies when reaching for targets traveling away from the reaching hand. Additional cue presence appeared to impair participants' ability to extrapolate the disappeared target's motion, and caused grasps for occluded targets to be less accurate. Novel information is provided about the eye-hand strategies used when reaching for moving targets in unpredictable visual conditions. |
S. J. Larcombe; Christopher Kennard; H. Bridge Time course influences transfer of visual perceptual learning across spatial location Journal Article In: Vision Research, vol. 135, pp. 26–33, 2017. @article{Larcombe2017, Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. |
Thibaut Le Naour; Jean-Pierre Bresciani A skeleton-based approach to analyze and visualize oculomotor behavior when viewing animated characters Journal Article In: Journal of Eye Movement Research, vol. 10, no. 5, pp. 1–19, 2017. @article{LeNaour2017, Knowing what people look at and understanding how they analyze the dynamic gestures of their peers is an exciting challenge. In this context, we propose a new approach to quantifying and visualizing the oculomotor behavior of viewers watching the movements of animated characters in dynamic sequences. Using this approach, we were able to illustrate, on a 'heat mesh', the gaze distribution of one or several viewers, i.e., the time spent on each part of the body, and to visualize viewers' timelines, which are linked to the heat mesh. Our approach notably provides an 'intuitive' overview combining the spatial and temporal characteristics of the gaze pattern, thereby constituting an efficient tool for quickly comparing the oculomotor behaviors of different viewers. The functionalities of our system are illustrated through two use case experiments with 2D and 3D animated media sources, respectively. |
Matthew L. Leavitt; Florian Pieper; Adam J. Sachs; Julio C. Martinez-Trujillo Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 12, pp. E2494–E2503, 2017. @article{Leavitt2017a, Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (rsc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise rsc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional rsc structure on information coding in the neuronal ensembles. We found that the rsc structure could facilitate or impair coding, depending on the size of the ensemble and tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. These WM-optimized subensembles were more signal correlation (rsignal)-diverse and anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble's rsc structure. We observed a pattern of rsc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity and that the rsc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single neuron coding properties and multidimensional, ensemble-level phenomena. |
Jeongmi Lee; Joy J. Geng Idiosyncratic patterns of representational similarity in prefrontal cortex predict attentional performance Journal Article In: Journal of Neuroscience, vol. 37, no. 5, pp. 1257–1268, 2017. @article{Lee2017a, The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results identified that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. 
These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. |
Karolina M. Lempert; Sandra F. Lackovic; Russell H. Tobe; Paul W. Glimcher; Elizabeth A. Phelps Propranolol reduces reference-dependence in intertemporal choice Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 9, pp. 1394–1401, 2017. @article{Lempert2017, In intertemporal choices between immediate and delayed rewards, people tend to prefer immediate rewards, often even when the delayed reward is larger. This is known as temporal discounting. It has been proposed that this tendency emerges because immediate rewards are more emotionally arousing than delayed rewards. However, in our previous research, we found no evidence for this but instead found that arousal responses (indexed with pupil dilation) in intertemporal choice are context-dependent. Specifically, arousal tracks the subjective value of the more variable reward option in the paradigm, whether it is immediate or delayed. Nevertheless, people tend to choose the less variable option in the choice task. In other words, their choices are reference-dependent and depend on variance in their recent history of offers. This suggests that there may be a causal relationship between reference-dependent choice and arousal, which we investigate here by reducing arousal pharmacologically using propranolol. Here, we show that propranolol reduces reference-dependence, leading to choices that are less influenced by recent history and more internally consistent. |
Rebekka Lencer; L. J. Mills; N. Alliey-Rodriguez; R. Shafee; A. M. Lee; James L. Reilly; Andreas Sprenger; Jennifer E. McDowell; S. A. McCarroll; Matcheri S. Keshavan; Godfrey D. Pearlson; Carol A. Tamminga; Brett A. Clementz; Elliot S. Gershon; John A. Sweeney; J. R. Bishop Genome-wide association studies of smooth pursuit and antisaccade eye movements in psychotic disorders: findings from the B-SNIP study Journal Article In: Translational Psychiatry, vol. 7, pp. e1249, 2017. @article{Lencer2017a, Eye movement deviations, particularly deficits of initial sensorimotor processing and sustained pursuit maintenance, and antisaccade inhibition errors, are established intermediate phenotypes for psychotic disorders. We here studied eye movement measures of 849 participants from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) study (schizophrenia N = 230, schizoaffective disorder N = 155, psychotic bipolar disorder N = 206 and healthy controls N = 258) as quantitative phenotypes in relation to genetic data, while controlling for genetically derived ancestry measures, age and sex. A mixed-modeling genome-wide association studies approach was used including ~ 4.4 million genotypes (PsychChip and 1000 Genomes imputation). Across participants, sensorimotor processing at pursuit initiation was significantly associated with a single nucleotide polymorphism in IPO8 (12p11.21 |
Laura Leuchs; Max Schneider; Michael Czisch; Victor I. Spoormaker Neural correlates of pupil dilation during human fear learning Journal Article In: NeuroImage, vol. 147, pp. 186–197, 2017. @article{Leuchs2017, Background: Fear conditioning and extinction are prevailing experimental and etiological models for normal and pathological anxiety. Pupil dilations in response to conditioned stimuli are increasingly used as a robust psychophysiological readout of fear learning, but their neural correlates remain unknown. We aimed at identifying the neural correlates of pupil responses to threat and safety cues during a fear learning task. Methods: Thirty-four healthy subjects underwent a fear conditioning and extinction paradigm with simultaneous functional magnetic resonance imaging (fMRI) and pupillometry. After a stringent preprocessing and artifact rejection procedure, trial-wise pupil responses to threat and safety cues were entered as parametric modulations to the fMRI general linear models. Results: Trial-wise magnitude of pupil responses to both conditioned and safety stimuli correlated positively with activity in dorsal anterior cingulate cortex (dACC), thalamus, supramarginal gyrus and insula for the entire fear learning task, and with activity in the dACC during the fear conditioning phase in particular. Phasic pupil responses did not show habituation, but were negatively correlated with tonic baseline pupil diameter, which decreased during the task. Correcting phasic pupil responses for the tonic baseline pupil diameter revealed thalamic activity, which was also observed in an analysis employing a linear (declining) time modulation. Conclusion: Pupil dilations during fear conditioning and extinction provide useful readouts to track fear learning on a trial-by-trial level, particularly with simultaneous fMRI. Whereas phasic pupil responses reflect activity in brain regions involved in fear learning and threat appraisal, most prominently in dACC, tonic changes in pupil diameter may reflect changes in general arousal. |
Amelia K. Lewis; Melanie A. Porter; Tracey A. Williams; Samantha Bzishvili; Kathryn N. North; Jonathan M. Payne Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1 Journal Article In: Neuropsychology, vol. 31, no. 4, pp. 361–370, 2017. @article{Lewis2017, OBJECTIVE: This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. METHOD: The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. RESULTS: Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. CONCLUSIONS: These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. |
Hui Li; Xu Liu; Ian M. Andolina; Xiaohong Li; Yiliang Lu; Lothar Spillmann; Wei Wang Asymmetries of Dark and Bright Negative Afterimages Are Paralleled by Subcortical ON and OFF Poststimulus Responses Journal Article In: Journal of Neuroscience, vol. 37, no. 8, pp. 1984–1996, 2017. @article{Li2017b, Humans are more sensitive to luminance decrements than increments, as evidenced by lower thresholds and shorter latencies for dark stimuli. This asymmetry is consistent with results of neurophysiological recordings in dorsal lateral geniculate nucleus (dLGN) and primary visual cortex (V1) of cat and monkey. Specifically, V1 population responses demonstrate that darks elicit higher levels of activation than brights, and the latency of OFF responses in dLGN and V1 is shorter than that of ON responses. The removal of a dark or bright disc often generates the perception of a negative afterimage, and here we ask whether there also exist asymmetries for negative afterimages elicited by dark and bright discs. If so, do the poststimulus responses of subcortical ON and OFF cells parallel such afterimage asymmetries? To test these hypotheses, we performed psychophysical experiments in humans and single-cell/S-potential recordings in cat dLGN. Psychophysically, we found that bright afterimages elicited by luminance decrements are stronger and last longer than dark afterimages elicited by luminance increments of equal sizes. Neurophysiologically, we found that ON cells responded to the removal of a dark disc with higher firing rates that were maintained for longer than OFF cells to the removal of a bright disc. The ON and OFF cell asymmetry was most pronounced at long stimulus durations in the dLGN. We conclude that subcortical response strength differences between ON and OFF channels parallel the asymmetries between bright and dark negative afterimages, further supporting a subcortical origin of bright and dark afterimage perception. |
Kuei-An Li; Su-Ling Yeh Mean size estimation yields left-side bias: Role of attention on perceptual averaging Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 8, pp. 2538–2551, 2017. @article{Li2017, The human visual system can estimate mean size of a set of items effectively; however, little is known about whether information on each visual field contributes equally to the mean size estimation. In this study, we examined whether a left-side bias (LSB)—perceptual judgment tends to depend more heavily on left visual field's inputs—affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: A larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of LSB increased with stimulus-onset asynchrony (SOA), when spots on the left side were presented earlier than the right side. In contrast, the LSB vanished and then induced a reversed effect with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that LSB does have a significant influence on mean size estimation of a group of items, which is induced by a leftward attentional bias that enhances the prior entry effect on the left side. |
2016 |
Jolande Fooken; Sang-Hoon Yeo; Dinesh K. Pai; Miriam Spering Eye movement accuracy determines natural interception strategies Journal Article In: Journal of Vision, vol. 16, no. 14, pp. 1–15, 2016. @article{Fooken2016, Eye movements aid visual perception and guide actions such as reaching or grasping. Most previous work on eye-hand coordination has focused on saccadic eye movements. Here we show that smooth pursuit eye movement accuracy strongly predicts both interception accuracy and the strategy used to intercept a moving object. We developed a naturalistic task in which participants (n = 42 varsity baseball players) intercepted a moving dot (a "2D fly ball") with their index finger in a designated "hit zone." Participants were instructed to track the ball with their eyes, but were only shown its initial launch (100-300 ms). Better smooth pursuit resulted in more accurate interceptions and determined the strategy used for interception, i.e., whether interception was early or late in the hit zone. Even though early and late interceptors showed equally accurate interceptions, they may have relied on distinct tactics: early interceptors used cognitive heuristics, whereas late interceptors' performance was best predicted by pursuit accuracy. Late interception may be beneficial in real-world tasks as it provides more time for decision and adjustment. Supporting this view, baseball players who were more senior were more likely to be late interceptors. Our findings suggest that interception strategies are optimally adapted to the proficiency of the pursuit system. |
Jaap Munneke; Artem V. Belopolsky; Jan Theeuwes Distractors associated with reward break through the focus of attention Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 7, pp. 2213–2225, 2016. @article{Munneke2016, In the present study, we investigated the conditions in which rewarded distractors have the ability to capture attention, even when attention is directed toward the target location. Experiment 1 showed that when the probability of obtaining reward was high, all salient distractors captured attention, even when they were not associated with reward. This effect may have been caused by participants suboptimally using the 100%-valid endogenous location cue. Experiment 2 confirmed this result by showing that salient distractors did not capture attention in a block in which no reward was expected. In Experiment 3, the probability of the presence of a distractor was high, but it only signaled reward availability on a low number of trials. The results showed that those very infrequent distractors that signaled reward captured attention, whereas the distractors (both frequent and infrequent ones) not associated with reward were simply ignored. The latter experiment indicates that even when attention is directed to a location in space, stimuli associated with reward break through the focus of attention, but equally salient stimuli not associated with reward do not. |
Mara Otten; Daniel Schreij; Sander A. Los The interplay of goal-driven and stimulus-driven influences on spatial orienting Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1642–1654, 2016. @article{Otten2016, Search for a target stimulus among distractors is subject to both goal-driven and stimulus-driven influences. Variables that selectively modify these influences have shown strong interaction effects on saccade trajectories toward the target, suggesting the involvement of a shared spatial orienting mechanism. However, subsequent manual response times (RTs) have revealed additive effects, suggesting that different mechanisms are involved. In the present study, we tested the hypothesis that an interaction for RTs is obscured by preceding multisaccade trajectories, promoted by the continuous presence of distractors in the display. In two experiments, we compared a condition in which distractors were removed soon after the presentation of the search display to a standard condition in which distractors were not removed. The results showed additive goal-driven and stimulus-driven effects on RTs in the standard condition, but an interaction when distractors were removed. These findings support the view that both variables influence a shared spatial orienting mechanism. |
Cécile Eymond; Patrick Cavanagh; Thérèse Collins Feature-based attention across saccades and immediate postsaccadic selection Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 5, pp. 1293–1301, 2016. @article{Eymond2016, Before each eye movement, attentional resources are drawn to the saccade goal. This saccade-related attention is known to be spatial in nature, and in this study we asked whether it also evokes any feature selectivity that is maintained across the saccade. After a saccade toward a colored target, participants performed a postsaccadic feature search on an array displayed at landing. The saccade target either had the same color as the search target in the postsaccadic array (congruent trials) or a different color (incongruent or neutral trials). Our results show that the color of the saccade target did not prime the subsequent feature search. This suggests that "landmark search", the process of searching for the saccade target once the eye lands (Deubel in Visual Cognition, 11, 173-202, 2004), may not involve the attentional mechanisms that underlie feature search. We also analyzed intertrial effects and observed priming of pop-out (Maljkovic & Nakayama in Memory & Cognition, 22, 657-672, 1994) for the postsaccadic feature search: the detection of the color singleton became faster when its color was repeated on successive trials. However, search performance revealed no effect of congruency between the saccade and search targets, either within or across trials, suggesting that the priming of pop-out is specific to target repetitions within the same task and is not seen for repetitions across tasks. Our results support a dissociation between feature-based attention and the attentional mechanisms associated with eye movement programming. |
Samantha W. Michalka; Maya L. Rosen; Lingqiang Kong; Barbara G. Shinn-Cunningham; David C. Somers Auditory spatial coding flexibly recruits anterior, but not posterior, visuotopic parietal cortex Journal Article In: Cerebral Cortex, vol. 26, no. 3, pp. 1302–1308, 2016. @article{Michalka2016, Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. |
Megan H. Papesh; Stephen D. Goldinger; Michael C. Hout Eye movements reveal fast, voice-specific priming Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 3, pp. 314–337, 2016. @article{Papesh2016, In spoken word perception, voice specificity effects are well-documented: When people hear repeated words in some task, performance is generally better when repeated items are presented in their originally heard voices, relative to changed voices. A key theoretical question about voice specificity effects concerns their time-course: Some studies suggest that episodic traces exert their influence late in lexical processing (the time-course hypothesis; McLennan & Luce, 2005), whereas others suggest that episodic traces influence immediate, online processing. We report 2 eye-tracking studies investigating the time-course of voice-specific priming within and across cognitive tasks. In Experiment 1, participants performed modified lexical decision or semantic classification to words spoken by 4 speakers. The tasks required participants to click a red "x" or a blue "+" located randomly within separate visual half-fields, necessitating trial-by-trial visual search with consistent half-field response mapping. After a break, participants completed a second block with new and repeated items, half spoken in changed voices. Voice effects were robust very early, appearing in saccade initiation times. Experiment 2 replicated this pattern while changing tasks across blocks, ruling out a response priming account. In the General Discussion, we address the time-course hypothesis, focusing on the challenge it presents for empirical disconfirmation, and highlighting the broad importance of indexical effects, beyond studies of priming. |
Gustav Kuhn; Ronald A. Rensink The Vanishing Ball Illusion: A new perspective on the perception of dynamic events Journal Article In: Cognition, vol. 148, pp. 64–70, 2016. @article{Kuhn2016, Our perceptual experience is largely based on prediction, and as such can be influenced by knowledge of forthcoming events. This susceptibility is commonly exploited by magicians. In the Vanishing Ball Illusion, for example, a magician tosses a ball in the air a few times and then pretends to throw the ball again, whilst secretly concealing it in his hand. Most people claim to see the ball moving upwards and then vanishing, even though it did not leave the magician's hand (Kuhn & Land, 2006; Triplett, 1900). But what exactly can such illusions tell us? We investigated here whether seeing a real action before the pretend one was necessary for the Vanishing Ball Illusion. Participants either saw a real action immediately before the fake one, or only a fake action. Nearly one third of participants experienced the illusion with the fake action alone, while seeing the real action beforehand enhanced this effect even further. Our results therefore suggest that perceptual experience relies both on long-term knowledge of what an action should look like, as well as exemplars from the immediate past. In addition, whilst there was a forward displacement of perceived location in perceptual experience, this was not found for oculomotor responses, consistent with the proposal that two separate systems are involved in visual perception. |
Sarah C. Krall; Lukas J. Volz; Eileen Oberwelland; Christian Grefkes; Gereon R. Fink; Kerstin Konrad The right temporoparietal junction in attention and social interaction: A transcranial magnetic stimulation study Journal Article In: Human Brain Mapping, vol. 37, no. 2, pp. 796–807, 2016. @article{Krall2016, The right temporoparietal junction (rTPJ) has been associated with the ability to reorient attention to unexpected stimuli and the capacity to understand others' mental states (theory of mind [ToM]/false belief). Using activation likelihood estimation meta-analysis we previously unraveled that the anterior rTPJ is involved in both, reorienting of attention and ToM, possibly indicating a more general role in attention shifting. Here, we used neuronavigated transcranial magnetic stimulation to directly probe the role of the rTPJ across attentional reorienting and false belief. Task performance in a visual cueing paradigm and false belief cartoon task was investigated after application of continuous theta burst stimulation (cTBS) over anterior rTPJ (versus vertex, for control). We found that attentional reorienting was significantly impaired after rTPJ cTBS compared with control. For the false belief task, error rates in trials demanding a shift in mental state significantly increased. Of note, a significant positive correlation indicated a close relation between the stimulation effect on attentional reorienting and false belief trials. Our findings extend previous neuroimaging evidence by indicating an essential overarching role of the anterior rTPJ for both cognitive functions, reorienting of attention and ToM. |
Wouter Kruijne; Martijn Meeter Implicit short- and long-term memory direct our gaze in visual search Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 3, pp. 761–773, 2016. @article{Kruijne2016, Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after it is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was present, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing. |
Gustav Kuhn; Robert Teszka; Natalia Tenaw; Alan Kingstone In: Cognition, vol. 146, pp. 136–142, 2016. @article{Kuhn2016a, People's attention is oriented towards faces, but the extent to which these social attention effects are under top down control is more ambiguous. Our first aim was to measure and compare, in real life and in the lab, people's top-down control over overt and covert shifts in reflexive social attention to the face of another. We employed a magic trick in which the magician used social cues (i.e. asking a question whilst establishing eye contact) to misdirect attention towards his face, thus preventing participants from noticing a visible colour change to a playing card. Our results show that overall people spend more time looking at the magician's face when he is seen on video than in reality. Additionally, although most participants looked at the magician's face when misdirected, this tendency to look at the face was modulated by instruction (i.e., "keep your attention on the cards"), and therefore, by top down control. Moreover, while the card's colour change was fully visible, the majority of participants failed to notice the change, and critically, change detection (our measure of covert attention) was not affected by where people looked (overt attention). We conclude that there is a tendency to shift overt and covert attention reflexively to faces, but that people exert more top down control over this overt shift in attention. These findings are discussed within a new framework that focuses on the role of eye movements as an attentional process as well as a form of non-verbal communication. |
MiYoung Kwon; Rong Liu; Lillian Chien Compensation for blur requires increase in field of view and viewing time Journal Article In: PLoS ONE, vol. 11, no. 9, pp. e0162711, 2016. @article{Kwon2016b, Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). Field of view requirement, quantified as the number of "views" (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered for developing low vision rehabilitation or assistive aids. |
Markos Kyritsis; Stephen R. Gulliver; Eva Feredoes Environmental factors and features that influence visual search in a 3D WIMP interface Journal Article In: International Journal of Human-Computer Studies, vol. 92-93, pp. 30–43, 2016. @article{Kyritsis2016, The challenge of moving past the classic Window Icons Menus Pointer (WIMP) interface, i.e. by turning it '3D', has resulted in much research and development. To evaluate the impact of 3D on the 'finding a target picture in a folder' task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aides, semantic category distribution of targets versus non-targets; and the detailed measurement of lower-level stimuli features. Across two separate experiments, one large sample web-based experiment, to understand associations, and one controlled lab environment, using eye tracking to understand user focus, we investigated how visual depth, use of visual aides, use of semantic categories, and lower-level stimuli features (i.e. contrast, colour and luminance) impact how successfully participants are able to search for, and detect, the target image. Moreover in the lab-based experiment, we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load as a result of either an increasing number of items on the screen, or due to the inclusion of visual depth. Our findings showed that increasing the visible layers of depth, and inclusion of converging lines, did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrasts between a target and its neighbours correlated with an increase in detection errors. Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. Results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface. |
Kaitlin E. W. Laidlaw; Mona J. H. Zhu; Alan Kingstone Looking away: Distractor influences on saccadic trajectory and endpoint in prosaccade and antisaccade tasks Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1637–1648, 2016. @article{Laidlaw2016, Successful target selection often occurs concurrently with distractor inhibition. A better understanding of the former thus requires a thorough study of the competition that arises between target and distractor representations. In the present study, we explore whether the presence of a distractor influences saccade processing via interfering with visual target and/or saccade goal representations. To do this, we asked participants to make either pro- or antisaccade eye movements to a target and measured the change in their saccade trajectory and landing position (collectively referred to as deviation) in response to distractors placed near or far from the saccade goal. The use of an antisaccade paradigm may help to distinguish between stimulus- and goal-related distractor interference, as unlike with prosaccades, these two features are dissociated in space when making a goal-directed antisaccade response away from a visual target stimulus. The present results demonstrate that for both pro- and antisaccades, distractors near the saccade goal elicited the strongest competition, as indicated by greater saccade trajectory deviation and landing position error. Though distractors far from the saccade goal elicited, on average, greater deviation away in antisaccades than in prosaccades, a time-course analysis revealed a significant effect of far-from-goal distractors in prosaccades as well. Considered together, the present findings support the view that goal-related representations most strongly influence the saccade metrics tested, though stimulus-related representations may play a smaller role in determining distractor-based interference effects on saccade execution under certain circumstances. Further, the results highlight the advantage of considering temporal changes in distractor-based interference. |
Caroline Landelle; Anna Montagnini; Laurent Madelain; Frederic R. Danion Eye tracking a self-moved target with complex hand-target dynamics Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1859–1870, 2016. @article{Landelle2016, Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ~5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. |
Mitchell R. P. LaPointe; Bruce Milliken Semantically incongruent objects attract eye gaze when viewing scenes for change Journal Article In: Visual Cognition, vol. 24, no. 1, pp. 63–77, 2016. @article{LaPointe2016, Past research has shown that change detection performance is often more efficient for target objects that are semantically incongruent with a surrounding scene context than for target objects that are semantically congruent with the scene context. One account of these findings is that attention is attracted to objects for which the identity of the object conflicts with the meaning of the scene, perhaps as a violation of expectancies created by earlier recruitment of scene gist information. An alternative account of the performance benefit for incongruent objects is that attention is more apt to linger on incongruent objects, as perhaps identifying these objects is more difficult due to conflicting information from the scene context. In the current experiment, we present natural scenes in a change detection task while monitoring eye movements. We find that eye gaze is attracted to these objects relatively early during scene processing. |
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark Eye see what you are saying: Testing conversational influences on the information gleaned from home-loan disclosure forms Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016. @article{LeBoeuf2016, The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 was a laboratory simulation that recreated the effects that previous literature suggests are likely happening in the field, namely that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers get and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered. |
Helmut Leder; Aleksandra Mitrovic; Jürgen Goller How beauty determines gaze! Facial attractiveness and gaze duration in images of real world scenes Journal Article In: i-Perception, pp. 1–12, 2016. @article{Leder2016, We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other-sex faces. |
Yen-Ju Lee; Harold H. Greene; Chia W. Tsai; Yu J. Chou Differences in sequential eye movement behavior between Taiwanese and American viewers Journal Article In: Frontiers in Psychology, vol. 7, pp. 697, 2016. @article{Lee2016a, Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context. |
Agathe Legrand; Karine Doré-Mazars; Christelle Lemoine; Vincent Nougier; Isabelle Olivier Interference between oculomotor and postural tasks in 7–8-year-old children and adults Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1667–1677, 2016. @article{Legrand2016, Several studies in adults that observed the effect of eye movements on postural control have provided contradictory results. In the present study, we explored the effect of various oculomotor tasks on postural control and the effect of different postural tasks on eye movements in eleven children (7.8 ± 0.5 years) and nine adults (30.4 ± 6.3 years). To vary the difficulty of the oculomotor task, three conditions were tested: fixation, prosaccades (reactive saccades made toward the target) and antisaccades (voluntary saccades made in the direction opposite to the visual target). To vary the difficulty of postural control, two postural tasks were tested: Standard Romberg (SR) and Tandem Romberg (TR). Postural difficulty did not affect oculomotor behavior, except by lengthening adults' latencies in the prosaccade task. For both groups, postural control was altered in the antisaccade task as compared to fixation and prosaccade tasks. Moreover, a ceiling effect was found in the more complex postural task. This study highlighted a cortical interference between oculomotor and postural control systems. |
Tsu-Chiang Lei; Shih-Chieh Wu; Chi-Wen Chao; Su-Hsin Lee Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps Journal Article In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016. @article{Lei2016, With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. 
The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace. |
Karolina M. Lempert; Eli Johnson; Elizabeth A. Phelps Emotional arousal predicts intertemporal choice Journal Article In: Emotion, vol. 16, no. 5, pp. 647–656, 2016. @article{Lempert2016, People generally prefer immediate rewards to rewards received after a delay, often even when the delayed reward is larger. This phenomenon is known as temporal discounting. It has been suggested that preferences for immediate rewards may be due to their being more concrete than delayed rewards. This concreteness may evoke an enhanced emotional response. Indeed, manipulating the representation of a future reward to make it more concrete has been shown to heighten the reward's subjective emotional intensity, making people more likely to choose it. Here the authors use an objective measure of arousal—pupil dilation—to investigate if emotional arousal mediates the influence of delayed reward concreteness on choice. They recorded pupil dilation responses while participants made choices between immediate and delayed rewards. They manipulated concreteness through time interval framing: delayed rewards were presented either with the date on which they would be received (e.g., “$30, May 3”; DATE condition, more concrete) or in terms of delay to receipt (e.g., “$30, 7 days”; DAYS condition, less concrete). Contrary to prior work, participants were not overall more patient in the DATE condition. However, there was individual variability in response to time framing, and this variability was predicted by differences in pupil dilation between conditions. Emotional arousal increased as the subjective value of delayed rewards increased, and predicted choice of the delayed reward on each trial. This study advances our understanding of the role of emotion in temporal discounting. |
Mark D. Lescroart; Nancy Kanwisher; Julie D. Golomb No evidence for automatic remapping of stimulus features or location found with fMRI Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 53, 2016. @article{Lescroart2016, The input to our visual system shifts every time we move our eyes. To maintain a stable percept of the world, visual representations must be updated with each saccade. Near the time of a saccade, neurons in several visual areas become sensitive to the regions of visual space that their receptive fields occupy after the saccade. This process, known as remapping, transfers information from one set of neurons to another, and may provide a mechanism for visual stability. However, it is not clear whether remapping transfers information about stimulus features in addition to information about stimulus location. To investigate this issue, we recorded BOLD fMRI responses while human subjects viewed images of faces and houses (two visual categories with many feature differences). Immediately after some image presentations, subjects made a saccade that moved the previously stimulated location to the opposite side of the visual field. We then used a combination of univariate analyses and multivariate pattern analyses to test whether information about stimulus location and stimulus features was remapped to the ipsilateral hemisphere after the saccades. We found no reliable indication of stimulus feature remapping in any region. However, we also found no reliable indication of stimulus location remapping, despite the fact that our paradigm was highly similar to previous fMRI studies of remapping. The absence of location remapping in our study precludes strong conclusions regarding feature remapping. However, these results also suggest that measurement of location remapping with fMRI depends strongly on the details of the experimental paradigm used.
We highlight differences in our approach from the original fMRI studies of remapping, discuss potential reasons for the failure to generalize prior location remapping results, and suggest directions for future research. |
Gary J. Lewis; Timothy C. Bates In: Journal of Cognitive Neuroscience, vol. 28, no. 2, pp. 308–318, 2016. @article{Lewis2016, The ability to adaptively shift between exploration and exploitation control states is critical for optimizing behavioral performance. Converging evidence from primate electrophysiology and computational neural modeling has suggested that this ability may be mediated by the broad norepinephrine projections emanating from the locus coeruleus (LC) [Aston-Jones, G., & Cohen, J. D. An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annual Review of Neuroscience, 28, 403–450, 2005]. There is also evidence that pupil diameter covaries systematically with LC activity. Although imperfect and indirect, this link makes pupillometry a useful tool for studying the locus coeruleus norepinephrine system in humans and in high-level tasks. Here, we present a novel paradigm that examines how the pupillary response during exploration and exploitation covaries with individual differences in fluid intelligence during analogical reasoning on Raven's Advanced Progressive Matrices. Pupillometry was used as a noninvasive proxy for LC activity, and concurrent think-aloud verbal protocols were used to identify exploratory and exploitative solution periods. This novel combination of pupillometry and verbal protocols from 40 participants revealed a decrease in pupil diameter during exploitation and an increase during exploration. The temporal dynamics of the pupillary response were characterized by a steep increase during the transition to exploratory periods and sustained dilation for many seconds afterward, followed by a gradual return to baseline. Moreover, individual differences in the relative magnitude of pupillary dilation accounted for 16% of the variance in Advanced Progressive Matrices scores.
Assuming that pupil diameter is a valid index of LC activity, these results establish promising preliminary connections between the literature on locus coeruleus norepinephrine-mediated cognitive control and the literature on analogical reasoning and fluid intelligence. |
Chia-Ling Li; M. Pilar Aivar; Dmitry M. Kit; Matthew H. Tong; Mary Hayhoe Memory and visual search in naturalistic 2D and 3D environments Journal Article In: Journal of Vision, vol. 16, no. 8, pp. 1–20, 2016. @article{Li2016a, The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. |
Efthymia C. Kapnoula; Bob McMurray Newly learned word forms are abstract and integrated immediately after acquisition Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 491–499, 2016. @article{Kapnoula2016a, A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. |
Omid Kardan; John M. Henderson; Grigori Yourganov; Marc G. Berman Observers' cognitive states modulate how visual inputs relate to gaze control Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 9, pp. 1429–1442, 2016. @article{Kardan2016, Previous research has shown that eye-movements change depending on both the visual features of our environment and the viewer's top-down knowledge. One important open question is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors.
Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. |
Ioanna Katidioti; Jelmer P. Borst; Marieke K. Vugt; Niels A. Taatgen Interrupt me: External interruptions are less disruptive than self-interruptions Journal Article In: Computers in Human Behavior, vol. 63, pp. 906–915, 2016. @article{Katidioti2016, Interruptions are part of everyday life and are known to be disruptive. With the current study we investigated which kind of interruption is more disruptive: external interruptions or self-interruptions. We conducted two experiments, one behavioral experiment (28 participants) and one in which pupil dilation was measured (21 participants). In both experiments, self-interruptions led participants to complete the main task more slowly than external interruptions (occurring at similar moments in the task as the self-interruptions). However, there was no difference between the two kinds of interruptions in the time needed to resume the main task (resumption lag). Instead, the pupil dilation data revealed that the decision to self-interrupt takes about 1 s, resulting in slower performance overall. |
Leor N. Katz; Jacob L. Yates; Jonathan W. Pillow; Alexander C. Huk Dissociated functional significance of decision-related activity in the primate dorsal stream Journal Article In: Nature, vol. 535, pp. 285–288, 2016. @article{Katz2016, During decision making, neurons in multiple brain regions exhibit responses that are correlated with decisions. However, it remains uncertain whether or not various forms of decision-related activity are causally related to decision making. Here we address this question by recording and reversibly inactivating the lateral intraparietal (LIP) and middle temporal (MT) areas of rhesus macaques performing a motion direction discrimination task. Neurons in area LIP exhibited firing rate patterns that directly resembled the evidence accumulation process posited to govern decision making, with strong correlations between their response fluctuations and the animal's choices. Neurons in area MT, in contrast, exhibited weak correlations between their response fluctuations and choices, and had firing rate patterns consistent with their sensory role in motion encoding. The behavioural impact of pharmacological inactivation of each area was inversely related to their degree of decision-related activity: while inactivation of neurons in MT profoundly impaired psychophysical performance, inactivation in LIP had no measurable impact on decision-making performance, despite having silenced the very clusters that exhibited strong decision-related activity. Although LIP inactivation did not impair psychophysical behaviour, it did influence spatial selection and oculomotor metrics in a free-choice control task. The absence of an effect on perceptual decision making was stable over trials and sessions and was robust to changes in stimulus type and task geometry, arguing against several forms of compensation. Thus, decision-related signals in LIP do not appear to be critical for computing perceptual decisions, and may instead reflect secondary processes. 
Our findings highlight a dissociation between decision correlation and causation, showing that strong neuron-decision correlations do not necessarily offer direct access to the neural computations underlying decisions. |
Loes T. E. Kessels; Peter R. Harris; Robert A. C. Ruiter; William M. P. Klein Attentional effects of self-affirmation in response to graphic antismoking images Journal Article In: Health Psychology, vol. 35, no. 8, pp. 891–897, 2016. @article{Kessels2016, Objective: Self-affirmation has been shown to reduce defensive responding to threatening information. However, little is known about the cognitive and attentional processes underlying these effects. In the current eye-movement study, the authors explored whether self-affirmation affects attention allocation (i.e., number of fixations) among those for whom a threatening health message is self-relevant. Methods: After a self-affirmation manipulation, 47 smokers and 52 nonsmokers viewed a series of cigarette packs displaying high or low threat smoking-related images accompanied by a brief smoking message containing risk, coping or neutral textual information. Results: Self-affirmed smokers made more fixations to the cigarette packs than did nonaffirmed smokers (across both high and low threat images), whereas self-affirmed nonsmokers made fewer fixations to the cigarette packs than did nonaffirmed nonsmokers (again across both image types). The textual information did not moderate responses. Conclusions: Findings indicate attention-increasing effects of self-affirmation among those for whom the information is self-relevant (smokers) and attention-decreasing effects of self-affirmation among those for whom the information is not self-relevant (nonsmokers). Such findings are consistent with the calibration model of self-affirmation (Griffin & Harris, 2011) in which self-affirmation increases sensitivity to the self-relevance of health-risk information. The use of an implicit measure of visual orienting informs our understanding of the working mechanisms of self-affirmation when encoding health information, and may also hold practical implications for the design and delivery of graphic warning labels. |
Jason J. Ki; Simon P. Kelly; Lucas C. Parra Attention strongly modulates reliability of neural responses to naturalistic narrative stimuli Journal Article In: Journal of Neuroscience, vol. 36, no. 10, pp. 3092–3101, 2016. @article{Ki2016, Attentional engagement is a major determinant of how effectively we gather information through our senses. Alongside the sheer growth in the amount and variety of information content that we are presented with through modern media, there is increased variability in the degree to which we "absorb" that information. Traditional research on attention has illuminated the basic principles of sensory selection to isolated features or locations, but it provides little insight into the neural underpinnings of our attentional engagement with modern naturalistic content. Here, we show in human subjects that the reliability of an individual's neural responses with respect to a larger group provides a highly robust index of the level of attentional engagement with a naturalistic narrative stimulus. Specifically, fast electroencephalographic evoked responses were more strongly correlated across subjects when naturally attending to auditory or audiovisual narratives than when attention was directed inward to a mental arithmetic task during stimulus presentation. This effect was strongest for audiovisual stimuli with a cohesive narrative and greatly reduced for speech stimuli lacking meaning. For compelling audiovisual narratives, the effect is remarkably strong, allowing perfect discrimination between attentional state across individuals. Control experiments rule out possible confounds related to altered eye movement trajectories or order of presentation. We conclude that reliability of evoked activity reproduced across subjects viewing the same movie is highly sensitive to the attentional state of the viewer and listener, which is aided by a cohesive narrative. |
Atsushi Kikumoto; Jason Hubbard; Ulrich Mayr Dynamics of task-set carry-over: evidence from eye-movement analyses Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 3, pp. 899–906, 2016. @article{Kikumoto2016, Trial-to-trial carry-over of task sets (i.e., task-set inertia) is often considered as a primary reason for task-switch costs. Yet, we know little about the dynamics of such carry-over effects, in particular how much they are driven by the most recent trial rather than characterized by a more continuous memory gradient. Using eye-tracking, we examined in a 3-task switching paradigm whether there is a greater probability of non-target fixations to stimuli associated with the previously relevant attentional set than to those associated with the less-recent set. Indeed, we found strong evidence for more interference (expressed in terms of non-target fixations) from recent than from less-recent tasks and that in particular the interference from pre-switch trials contributed substantially to the overall pattern of response-time switch costs. Moreover, task-set carry-over was dominated by the most-recent trial when subjects could expect task repetitions (with a 33% switch rate). In comparison, when tasks were selected randomly (with a 66% switch rate), interference from the most recent trial decreased, whereas interference from less-recent trials increased. In sum, carry-over interference dynamics were characterized both by a gradual recency gradient and expectations about task-transition probabilities. Beyond that, there was little evidence for a unique role of the most-recent trial. |
Lynn Huestegge; Anne Böckler Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016. @article{Huestegge2016, Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards. |
Falk Huettig; Esther Janse Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 1, pp. 80–93, 2016. @article{Huettig2016, Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed, however, has largely been ignored. We sought to find evidence for such an influence using an individual differences approach. 105 participants from 32 to 77 years of age received spoken instructions (e.g., “Kijk naar de[COM] afgebeelde piano[COM]” – look at the displayed piano) while viewing 4 objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use article gender information to predict the target. Multiple regression analyses showed that enhanced working memory abilities and faster processing speed predicted anticipatory eye movements. Models of predictive language processing therefore must take mediating factors into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual–spatial representations. |
Bianca Huurneman; F. Nienke Boonstra; Jeroen Goossens Perceptual learning in children with infantile Nystagmus: Effects on 2D oculomotor behavior Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 10, pp. 4229–4238, 2016. @article{Huurneman2016, PURPOSE: To determine changes in oculomotor behavior after 10 sessions of perceptual learning on a letter discrimination task in children with infantile nystagmus (IN). METHODS: Children with IN (18 children with idiopathic IN and 18 with oculocutaneous albinism accompanied by IN) aged 6 to 11 years were divided into two training groups matched on diagnosis: an uncrowded training group (n = 18) and a crowded training group (n = 18). Target letters always appeared briefly (500 ms) at an eccentric location, forcing subjects to quickly redirect their gaze. Training occurred twice per week for 5 consecutive weeks (3500 trials total). Norm data and test-retest values were collected from children with normal vision (n = 11). Outcome measures were: nystagmus characteristics (amplitude, frequency, intensity, and the expanded nystagmus acuity function); fixation stability (the bivariate contour ellipse area and foveation time); and saccadic eye movements (latencies and accuracy) made during a simple saccade task and a crowded letter-identification task. RESULTS: After training, saccadic responses of children with IN improved on the saccade task (latencies decreased by 14 ± 4 ms and gains increased by 0.03 ± 0.01), but not on the crowded letter task. There were also no training-induced changes in nystagmus characteristics and fixation stability. Although children with normal vision had shorter latencies in the saccade task (47 ± 14 ms at baseline), test-retest changes in their saccade gains and latencies were almost equal to the training effects observed in children with IN. 
CONCLUSIONS: Our results suggest that the improvement in visual performance after perceptual learning in children with IN is primarily due to improved sensory processing rather than improved two-dimensional oculomotor behavior. |
Guilhem Ibos; David J. Freedman Interaction between spatial and feature attention in posterior parietal cortex Journal Article In: Neuron, vol. 91, no. 4, pp. 931–943, 2016. @article{Ibos2016, Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task that required monkeys to detect specific conjunctions of color, motion direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. |
Akiko Ikkai; Sangita Dandekar; Clayton E. Curtis Lateralization in alpha-band oscillations predicts the locus and spatial distribution of attention Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0154796, 2016. @article{Ikkai2016, Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8-13Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attention performance remain poorly understood. Here, we tested the degree to which the posterior alpha power tracked the locus of attention, the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention demanding visual discrimination task that dissociated the direction of attention from the direction of a saccade to indicate choice. On some trials, an endogenous cue predicted the target's location, while on others it contained no spatial information. When the target's location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target's location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and the topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex. |
Masato Inoue; Motoaki Uchimura; Shigeru Kitazawa Error signals in motor cortices drive adaptation in reaching Journal Article In: Neuron, vol. 90, no. 5, pp. 1114–1126, 2016. @article{Inoue2016, Reaching movements are subject to adaptation in response to errors induced by prisms or external perturbations. Motor cortical circuits have been hypothesized to provide execution errors that drive adaptation, but human imaging studies to date have reported that execution errors are encoded in parietal association areas. Thus, little evidence has been uncovered that supports the motor hypothesis. Here, we show that both primary motor and premotor cortices encode information on end-point errors in reaching. We further show that post-movement microstimulation to these regions caused trial-by-trial increases in errors, which subsided exponentially when the stimulation was terminated. The results indicate for the first time that motor cortical circuits provide error signals that drive trial-by-trial adaptation in reaching movements. |
Monika Intaitė; João Valente Duarte; Miguel Castelo-Branco Working memory load influences perceptual ambiguity by competing for fronto-parietal attentional resources Journal Article In: Brain Research, vol. 1650, pp. 142–151, 2016. @article{Intaite2016, A visual stimulus is defined as ambiguous when observers perceive it as having at least two distinct and spontaneously alternating interpretations. Neuroimaging studies suggest an involvement of a right fronto-parietal network regulating the balance between stable percepts and the triggering of alternative interpretations. As spontaneous perceptual reversals may occur even in the absence of attention to these stimuli, we investigated neural activity patterns in response to perceptual changes of an ambiguous Necker cube under different amounts of working memory load using a dual-task design. We hypothesized that the same regions that process working memory load are involved in perceptual switching and confirmed the prediction that perceptual reversals led to fMRI responses that linearly depended on load. Accordingly, posterior Superior Parietal Lobule, anterior Prefrontal and Dorsolateral Prefrontal cortices exhibited differential BOLD signal changes in response to perceptual reversals under working memory load. Our results also suggest that the posterior Superior Parietal Lobule may be directly involved in the emergence of perceptual reversals, given that it specifically reflects both perceptual versus real changes and load levels. The anterior Prefrontal and Dorsolateral Prefrontal cortices, showing a significant interaction between reversal levels and load, might subserve a modulatory role in such reversals, in a mirror-symmetric way: in the former, activation is suppressed by the highest loads, and in the latter, deactivation is reduced by the highest loads, suggesting a more direct role of the aPFC in reversal generation. |
Jessica L. Irons; Andrew B. Leber Choosing attentional control settings in a dynamically changing environment Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 7, pp. 2031–2048, 2016. @article{Irons2016, Goal-directed attentional control supports efficient visual search by prioritizing relevant stimuli in the environment. Previous research has shown that goal-directed control can be configured in many ways, and often multiple control settings can be used to achieve the same goal. However, little is known about how control settings are selected. We explored the extent to which the configuration of goal-directed control is driven by performance maximization (optimally configuring settings to maximize speed and accuracy) and effort minimization (selecting the least effortful settings). We used a new paradigm, adaptive choice visual search, which allows participants to choose one of two available targets (a red or a blue square) on each trial. Distractor colors vary predictively across trials, such that the optimal target switches back and forth throughout the experiment. Results (N = 43) show that participants chose the optimal target most often, updating to the new target when the environment changed, supporting performance maximization. However, individuals were sluggish to update to the optimal color, consistent with effort minimization. Additionally, we found a surprisingly high rate of nonoptimal choices and switching between targets, which could not be explained by either factor. Analysis of participants' self-reported search strategy revealed substantial individual differences in the control strategies used. In sum, the adaptive choice visual search enables a fresh approach to studying goal-directed control. The results contribute new evidence that control is partly determined by both performance maximization and effort minimization, as well as at least one additional factor, which we speculate to include novelty seeking. |
David E. Irwin; Maria M. Robinson Perceiving a continuous visual world across voluntary eye blinks Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 10, pp. 1490–1496, 2016. @article{Irwin2016, People blink their eyes every few seconds, but the changes in retinal illumination that accompany eyeblinks are hardly noticed. Furthermore, despite the loss of visual input, visual experience remains continuous across eyeblinks. Two hypotheses were investigated to account for these phenomena. The first proposes that perceptual information is maintained across a blink whereas the second proposes that perceptual information is not maintained but rather postblink perceptual experience is antedated to the beginning of the blink. Two experiments found no evidence for temporal antedating of a stimulus presented during a voluntary eyeblink. In a third experiment subjects compared the temporal duration of a stimulus that was interrupted by a voluntary eyeblink with that of a stimulus presented while the eyes were open. The duration of stimuli that were interrupted by eyeblinks was judged to be 117 ms shorter than that of stimuli presented while the eyes remained open, indicating that blink duration was not accounted for in the perception of stimulus duration. This suggests that perceptual experience is neither maintained nor antedated across eyeblinks, but rather is ignored, perhaps in response to the extraretinal signal that accompanies the eyeblink. |
Anja Ischebeck; Marina Weilharter; Christof Körner Eye movements reflect and shape strategies in fraction comparison Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 4, pp. 713–727, 2016. @article{Ischebeck2016, The comparison of fractions is a difficult task that can often be facilitated by separately comparing components (numerators and denominators) of the fractions, that is, by applying so-called component-based strategies. The usefulness of such strategies depends on the type of fraction pair to be compared. We investigated the temporal organization and the flexibility of strategy deployment in fraction comparison by evaluating sequences of eye movements in 20 young adults. We found that component-based strategies could account for the response times and the overall number of fixations observed for the different fraction pairs. The analysis of eye movement sequences showed that the initial eye movements in a trial were characterized by stereotypical scanning patterns indicative of an exploratory phase that served to establish the kind of fraction pair presented. Eye movements that followed this phase adapted to the particular type of fraction pair and indicated the deployment of specific comparison strategies. These results demonstrate that participants employ eye movements systematically to support strategy use in fraction comparison. Participants showed a remarkable flexibility to adapt to the most efficient strategy on a trial-by-trial basis. Our results confirm the value of eye movement measurements in the exploration of strategic adaptation in complex tasks. |
Miho Iwasaki; Yasuki Noguchi Hiding true emotions: Micro-expressions in eyes retrospectively concealed by mouth movements Journal Article In: Scientific Reports, vol. 6, pp. 22049, 2016. @article{Iwasaki2016, When we encounter someone we dislike, we may momentarily display a reflexive disgust expression, only to follow-up with a forced smile and greeting. Our daily lives are replete with a mixture of true and fake expressions. Nevertheless, are these fake expressions really effective at hiding our true emotions? Here we show that brief emotional changes in the eyes (micro-expressions, thought to reflect true emotions) can be successfully concealed by follow-up mouth movements (e.g. a smile). In the same manner as backward masking, mouth movements of a face inhibited conscious detection of all types of micro-expressions in that face, even when viewers paid full attention to the eye region. This masking works only in a backward direction, however, because no disrupting effect was observed when the mouth change preceded the eye change. These results provide scientific evidence for everyday behaviours like smiling to dissemble, and further clarify a major reason for the difficulty we face in discriminating genuine from fake emotional expressions. |
Seon-Kyeong Jang; Sujin Kim; Chai-Youn Kim; Hyeon-Seung Lee; Kee-Hong Choi Attentional processing of emotional faces in schizophrenia: Evidence from eye tracking Journal Article In: Journal of Abnormal Psychology, vol. 125, no. 7, pp. 894–906, 2016. @article{Jang2016, Severe emotional disturbances such as anxiety and depression have been closely related to aberrant attentional processing of emotional stimuli. However, this has been little studied in schizophrenia, which is also characterized by marked emotional impairments such as heightened negative affect and anhedonia. In the current study, we investigated temporal dynamics of motivated attention to emotional stimuli in schizophrenia. For this purpose, we tracked eye movements of 22 individuals with schizophrenia or schizoaffective disorder (ISZs) and 19 healthy controls (HCs) to emotional (i.e., happy, sad, angry) and neutral face pairs presented either for 500 ms or 1,500 ms. Initial fixation direction and viewing time at 3 successive intervals (0–500, 500–1,000, 1,000–1,500 ms) were calculated. The results showed that both ISZs and HCs were more likely to orient initial fixations and exhibited longer viewing times to emotional than neutral faces. However, compared with HCs, ISZs allocated less attention to faces overall during the late stage (1,000–1,500 ms) when one of the paired faces displayed negative emotions. Furthermore, positive symptoms were highly associated with initial fixation avoidance to angry faces while depressive symptoms were related to later avoidance of angry faces. Both social amotivation and poor interpersonal functioning were closely related to diminished sustained attention to happy faces. This suggests that early attentional capture of emotional salience may be relatively preserved in schizophrenia, but the people with this disorder display an atypical late attentional process characterized by generalized attentional avoidance of negative stimuli. 
Of note, aberrant attentional processes of social threat and reward were closely associated with major symptoms and functioning in this disorder. |
Christian P. Janssen; Preeti Verghese Training eye movements for visual search in individuals with macular degeneration Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–20, 2016. @article{Janssen2016, We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. |
Srikant Jayaraman; Raymond M. Klein; Matthew D. Hilchey; Gouri Shanker Patil; Ramesh Kumar Mishra Spatial gradients of oculomotor inhibition of return in deaf and normal adults Journal Article In: Experimental Brain Research, vol. 234, no. 1, pp. 323–330, 2016. @article{Jayaraman2016, We explored the effect of deafness on the spatial (gradient) and temporal (decay) properties of oculomotor inhibition of return (IOR) using a task developed by Vaughan (Theoretical and applied aspects of eye movement research. Elsevier, North Holland, pp 143-150, 1984) in which participants made a sequence of saccades to carefully placed targets. Unlike IOR tasks in which ignored cues are used to explore the aftereffects of covert orienting, this task better approximates real-world behavior in which participants are free to make eye movements to potentially relevant inputs. Because IOR is a bias against returning attention and gaze to a previously attended location, we expected to find, and we did find, slower saccades toward previously fixated locations. Replicating Vaughan, a gradient of inhibition around a previously fixated location was observed and this inhibition began to decay after 1200 ms. Importantly, there were no significant differences between the deaf and the normal-hearing subjects in the magnitude of oculomotor IOR, its decay over time, or its gradient around the previously fixated location. |
Laurence C. Jayet Bray; Sonia Bansal; Wilsaan M. Joiner Quantifying the spatial extent of the corollary discharge benefit to transsaccadic visual perception Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1132–1145, 2016. @article{JayetBray2016, Extraretinal information, such as corollary discharge (CD), is hypothesized to help compensate for saccade-induced visual input disruptions. However, support for this hypothesis is largely for one-dimensional transsaccadic visual changes, with little comprehensive information on the spatial characteristics. Here we systematically mapped the two-dimensional extent of this compensation by quantifying the insensitivity to different displacement metrics. Human subjects made saccades to targets positioned at different amplitudes (4° or 8°) and directions (rightward, oblique, or upward). After the saccade the initial target disappeared and, after a blank period, reappeared at a shifted location: a collinear, diagonal, or orthogonal displacement. Subjects reported the perceived shift direction, and we determined the displacement detection based on the perceptual judgments. The two-dimensional insensitivity fields resulting from the perceptual thresholds had spatial features similar to the saccadic eye movement variability: 1) scaled with movement amplitude, 2) oriented (less sensitive to the change) along the saccade vector, and 3) approximately constant in shape when normalized by movement amplitude. In addition, comparing the postsaccadic perceptual estimate of the presaccadic target location to that based solely on the postsaccade visual error showed that overall the perceptual estimate was approximately 50% more accurate and 35% less variable than estimates based solely on this visual information. However, this relationship was not uniform: The benefit of extraretinal information was observed largely for displacements with a component parallel to the saccade vector. 
These results suggest a graded use of extraretinal information when forming the postsaccadic perceptual evaluation of transsaccadic environmental changes. |
Su Keun Jeong; Yaoda Xu The impact of top-down spatial attention on laterality and hemispheric asymmetry in the human parietal cortex Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–21, 2016. @article{Jeong2016, The human parietal cortex exhibits a preference to contralaterally presented visual stimuli (i.e., laterality) as well as an asymmetry between the two hemispheres with the left parietal cortex showing greater laterality than the right. Using visual short-term memory and perceptual tasks and varying target location predictability, this study examined whether hemispheric laterality and asymmetry are fixed characteristics of the human parietal cortex or whether they are dynamic and modulated by the deployment of top-down attention to the target present hemifield. Two parietal regions were examined here that have previously been shown to be involved in visual object individuation and identification and are located in the inferior and superior intraparietal sulcus (IPS), respectively. Across three experiments, significant laterality was found in both parietal regions regardless of attentional modulation with laterality being greater in the inferior than superior IPS, consistent with their roles in object individuation and identification, respectively. Although the deployment of top-down attention had no effect on the superior IPS, it significantly increased laterality in the inferior IPS. The deployment of top-down spatial attention can thus amplify the strength of laterality in the inferior IPS. Hemispheric asymmetry, on the other hand, was absent in both brain regions and only emerged in the inferior but not the superior IPS with the deployment of top-down attention. Interestingly, the strength of hemispheric asymmetry significantly correlated with the strength of laterality in the inferior IPS. Hemispheric asymmetry thus seems to only emerge when there is a sufficient amount of laterality present in a brain region. |
Danique Jeurissen; Matthew W. Self; Pieter R. Roelfsema Serial grouping of 2D-image regions with object-based attention in humans Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Jeurissen2016, After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. |
Yu-Cin Jian; Chao-Jung Wu In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016. @article{Jian2016a, Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed that the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is independent of that conveyed via text in some areas. |
Sung Jun Joo; Leor N. Katz; Alexander C. Huk Decision-related perturbations of decision-irrelevant eye movements Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 7, pp. 1925–1930, 2016. @article{Joo2016, It is well established that ongoing cognitive functions affect the trajectories of limb movements mediated by corticospinal circuits, suggesting an interaction between cognition and motor action. Although there are also many demonstrations that decision formation is reflected in the ongoing neural activity in oculomotor brain circuits, it is not known whether the decision-related activity in those oculomotor structures interacts with eye movements that are decision irrelevant. Here we tested for an interaction between decisions and instructed saccades unrelated to the perceptual decision. Observers performed a direction-discrimination decision-making task, but made decision-irrelevant saccades before registering their motion decision with a button press. Probing the oculomotor circuits with these decision-irrelevant saccades during decision making revealed that saccade reaction times and peak velocities were influenced in proportion to motion strength, and depended on the directional congruence between decisions about visual motion and decision-irrelevant saccades. These interactions disappeared when observers passively viewed the motion stimulus but still made the same instructed saccades, and when manual reaction times were measured instead of saccade reaction times, confirming that these interactions result from decision formation as opposed to visual stimulation, and are specific to the oculomotor system. Our results demonstrate that oculomotor function can be affected by decision formation, even when decisions are communicated without eye movements, and that this interaction has a directionally specific component. 
These results not only imply a continuous and interactive mixture of motor and decision signals in oculomotor structures, but also suggest nonmotor recruitment of oculomotor machinery in decision making. |
Emilie L. Josephs; Dejan Draschkow; Jeremy M. Wolfe; Melissa L. -H. Võ Gist in time: Scene semantics and structure enhance recall of searched objects Journal Article In: Acta Psychologica, vol. 169, pp. 100–108, 2016. @article{Josephs2016, Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization. |
Jakob Kaiser; Graham C. L. Davey; Thomas Parkhouse; Jennifer Meeres; Ryan B. Scott Emotional facial activation induced by unconsciously perceived dynamic facial expressions Journal Article In: International Journal of Psychophysiology, vol. 110, pp. 207–211, 2016. @article{Kaiser2016, Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. |
Yuki Kamide; Shane Lindsay; Christoph Scheepers; Anuenue Kukona Event processing in the visual world: Projected motion paths during spoken sentence comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 5, pp. 804–812, 2016. @article{Kamide2016, Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. |
Hinze Hogendoorn Voluntary saccadic eye movements ride the attentional rhythm Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 10, pp. 1625–1635, 2016. @article{Hogendoorn2016, Visual perception seems continuous, but recent evidence suggests that the underlying perceptual mechanisms are in fact periodic—particularly visual attention. Because visual attention is closely linked to the preparation of saccadic eye movements, the question arises how periodic attentional processes interact with the preparation and execution of voluntary saccades. In two experiments, human observers made voluntary saccades between two placeholders, monitoring each one for the presentation of a threshold-level target. Detection performance was evaluated as a function of latency with respect to saccade landing. The time course of detection performance revealed oscillations at around 4 Hz both before the saccade at the saccade origin and after the saccade at the saccade destination. Furthermore, oscillations before and after the saccade were in phase, meaning that the saccade did not disrupt or reset the ongoing attentional rhythm. Instead, it seems that voluntary saccades are executed as part of an ongoing attentional rhythm, with the eyes in flight during the troughs of the attentional wave. This finding for the first time demonstrates that periodic attentional mechanisms affect not only perception but also overt motor behavior. |
Tiffany Hon; Ravi K. Das; Sunjeev K. Kamboj The effects of cognitive reappraisal following retrieval-procedures designed to destabilize alcohol memories in high-risk drinkers Journal Article In: Psychopharmacology, vol. 233, no. 5, pp. 851–861, 2016. @article{Hon2016, RATIONALE: Addiction is a disorder of motivational learning and memory. Maladaptive motivational memories linking drug-associated stimuli to drug seeking are formed over hundreds of reinforcement trials and accompanied by aberrant neuroadaptation in the mesocorticolimbic reward system. Such memories are resistant to extinction. However, the discovery of retrieval-dependent memory plasticity has opened up the possibility of permanent modification of established (long-term) memories during 'reconsolidation'. OBJECTIVES: Here, we investigate whether reappraisal of maladaptive alcohol cognitions performed after procedures designed to destabilize alcohol memory networks affected subsequent alcohol memory, craving, drinking and attentional bias. METHODS: Forty-seven at-risk drinkers attended two sessions. In the first lab session, participants underwent one of two prediction error-generating procedures in which outcome expectancies were violated while retrieving alcohol memories (omission and value prediction error groups). Participants in a control group retrieved non-alcohol memories. Participants then reappraised personally relevant maladaptive alcohol memories and completed measures of reappraisal recall, alcohol verbal fluency and craving. 
Seven days later, they repeated these measures along with attentional bias assessment. RESULTS: Omission prediction error (being unexpectedly prevented from drinking beer), but not a value prediction error (drinking unexpectedly bitter-tasting beer) or control procedure (drinking unexpectedly bitter orange juice), was associated with significant reductions in verbal fluency for positive alcohol-related words. No other statistically robust outcomes were detected. CONCLUSIONS: This study provides partial preliminary support for the idea that a common psychotherapeutic strategy used in the context of putative memory retrieval-destabilization can alter accessibility of alcohol semantic networks. Further research delineating the necessary and sufficient requirements for producing alterations in alcohol memory performance based on memory destabilization is still required. |
Ha Hong; Daniel L. K. Yamins; Najib J. Majaj; James J. DiCarlo Explicit information for category-orthogonal object properties increases along the ventral stream Journal Article In: Nature Neuroscience, vol. 19, no. 4, pp. 613–622, 2016. @article{Hong2016, Extensive research has revealed that the ventral visual stream hierarchically builds a robust representation for supporting visual object categorization tasks. We systematically explored the ability of multiple ventral visual areas to support a variety of 'category-orthogonal' object properties such as position, size and pose. For complex naturalistic stimuli, we found that the inferior temporal (IT) population encodes all measured category-orthogonal object properties, including those properties often considered to be low-level features (for example, position), more explicitly than earlier ventral stream areas. We also found that the IT population better predicts human performance patterns across properties. A hierarchical neural network model based on simple computational principles generates these same cross-area patterns of information. Taken together, our empirical results support the hypothesis that all behaviorally relevant object properties are extracted in concert up the ventral visual hierarchy, and our computational model explains how that hierarchy might be built. |
Lauren S. Hopkins; Fred J. Helmstetter; Deborah E. Hannula Eye movements are captured by a perceptually simple conditioned stimulus in the absence of explicit contingency knowledge Journal Article In: Emotion, vol. 16, no. 8, pp. 1157–1171, 2016. @article{Hopkins2016, Past reports suggest that threatening materials can impact the efficiency of goal-directed behavior. However, questions remain about whether a conditional stimulus (CS) can capture attention, as previous results may have been influenced by voluntary prioritization of a to-be-ignored CS. In 2 experiments, eye tracking was used to evaluate whether neutral, perceptually simple materials capture attention when they take on aversive properties via probabilistic fear conditioning, with strict methods in place to eliminate voluntary CS prioritization. During training, participants attempted to fixate search targets (i.e., horizontally or vertically oriented rectangles) as quickly as possible to avoid shock. In reality, shock administration was related to rectangle orientation so that 1 rectangle (CS+) predicted shock more often than the other (CS-). Subsequently, rectangles became distractors and were to be ignored. At this point, participants were instructed to fixate a new target and incidences of CS capture were examined. Results showed that saccades were made more quickly to the CS+ than the CS- as training progressed, and that oculomotor capture by irrelevant rectangles occurred more often for the CS+ than the CS-. An independent physiological index (skin conductance response) confirmed that contingencies had been learned, as SCR magnitude was greater for CS+ than CS- trials early in the test phase. These effects were documented despite the absence of explicit contingency knowledge, assessed using a postexperimental questionnaire. 
Collectively, these outcomes indicate that a CS can capture attention despite being task-irrelevant, and that these effects do not depend on conscious awareness of learned contingencies. |
Gernot Horstmann; Stefanie I. Becker; Daniel Ernst Perceptual salience captures the eyes on a surprise trial Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 7, pp. 1889–1900, 2016. @article{Horstmann2016a, A number of characteristics of the visual system and of the visual stimulus are invoked to explain involuntary control of attention, including goals, novelty, and perceptual salience. The present experiment tested perceptual salience on a surprise trial, that is, on its unannounced first presentation following trials lacking any salient items, thus eliminating possible confounds by current goals. Moreover, the salient item's location was not singled out by a novel feature, thus eliminating a possible confound by novelty in directing attention. Eye tracking was used to measure involuntary attention. Results show a prioritization of the salient item. However, contrary to predictions of prominent neuro-computational and psychological salience models, prioritization was not fast-acting. Rather, the observers' gaze was attracted only as the second fixation on average or later (depending on condition) and with a latency of more than 500 ms on average. These results support the general proposition that salience can control attention. However, contrary to most salience models, the present results indicate that salience changes attentional priority only in novel environments. |
Gernot Horstmann; Arvid Herwig Novelty biases attention and gaze in a surprise trial Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 1, pp. 69–77, 2016. @article{Horstmann2016, While the classical distinction between task-driven and stimulus-driven biasing of attention appears to be a dichotomy at first sight, there seems to be a third category that depends on the contrast or discrepancy between active representations and the upcoming stimulus, and may be termed novelty, surprise, or prediction failure. For previous demonstrations of the discrepancy-attention link, stimulus-driven components (saliency) may have played a decisive role. The present study was conducted to evaluate the discrepancy-attention link in a display where novel and familiar stimuli are equated for saliency. Eye tracking was used to determine fixations on novel and familiar stimuli as a proxy for attention. Results show a prioritization of attention by the novel color, and a de-prioritization of the familiar color, which is clearly present at the second fixation, and spans over the next couple of fixations. Saliency, on the other hand, did not prioritize items in the display. The results thus reinforce the notion that novelty captures and binds attention. |
Gernot Horstmann; Arvid Herwig; Stefanie I. Becker Distractor dwelling, skipping, and revisiting determine target absent performance in difficult visual search Journal Article In: Frontiers in Psychology, vol. 7, pp. 1152, 2016. @article{Horstmann2016b, Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting all contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large proportion of the total search-time differences. |