EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2016 |
Po Sheng Huang; Hsuan-Chih Chen Gender differences in eye movements in solving text-and-diagram science problems Journal Article In: International Journal of Science and Mathematics Education, vol. 14, pp. S327–S346, 2016. @article{Huang2016a, The main purpose of this study was to examine possible gender differences in how junior high school students integrate printed texts and diagrams while solving science problems. We proposed the response style hypothesis and the spatial working memory hypothesis to explain possible gender differences in the integration process. An eye-tracking technique was used to explore these hypotheses. The results of the eye-movement indices support the response style hypothesis. Compared to male students, female students spent more time and made more fixations when solving science problems. The female students took more time to read the printed texts and compared the information between print-based texts and visual-based diagrams more frequently during the problem-solving process than the male students. However, no gender differences were found in the accuracy of their responses to the science problems or their performance on the spatial working memory task. Implications for psychological theory and educational practice are discussed. |
Yi Ting Huang; Alison R. Arnold Word learning in linguistic context: Processing and memory effects Journal Article In: Cognition, vol. 156, pp. 71–87, 2016. @article{Huang2016, During language acquisition, children exploit syntactic cues within sentences to learn the meanings of words. Yet, it remains unknown how this strategy develops alongside an ability to access cues during real-time language comprehension. This study investigates how on-line sensitivity to syntactic cues impacts off-line interpretation and recall of word meanings. Adults and 5-year-olds heard novel words embedded in sentences that were (1) consistent with an agent-first bias (e.g., “The blicket will be eating the seal” → “the blicket” is an agent), (2) required revision of this bias (e.g., “The blicket will be eaten by the seal” → “the blicket” is a theme), or (3) weakened this bias through a familiar NP1 (e.g., “The seal will be eating/eaten by the blicket” → “the seal” is an agent or theme). Across both ages, eye-movements during sentences revealed decreased sensitivity to syntactic cues in contexts that required syntactic revision. In children, the magnitude of on-line sensitivity was positively associated with the accuracy of learning after the sentence. Parsing challenges during the word-learning task also negatively impacted children's later memory for word meanings during a recall task. Altogether, these results suggest that real-time demands impact word learning, through interpretive failures and memory interference. |
Christoph Huber-Huber; Thomas Ditye; María Marchante Fernández; Ulrich Ansorge Using temporally aligned event-related potentials for the investigation of attention shifts prior to and during saccades Journal Article In: Neuropsychologia, vol. 92, pp. 129–141, 2016. @article{HuberHuber2016, According to the pre-motor theory of attention, attention is shifted to a saccade's landing position before the saccade is executed. Such pre-saccadic attention shifts are usually studied in psychophysical dual-task conditions, with a target-discrimination task before saccade onset. Here, we present a novel approach to investigate pre-saccadic attention shifts with the help of event-related potentials (ERPs). Participants executed one or two saccades to color-defined targets while ERPs and eye-movements were recorded. In single-target blocks participants executed a single saccade. In two-targets blocks participants made either a single saccade to one of the targets, or two successive saccades to both targets. Importantly, in two-targets blocks, targets could appear on the same or on opposite sides of the vertical midline. This allowed us to study contra-to-ipsilateral ERP differences (such as the N2pc or PCN) that reflect attention shifts to the targets, prior to saccade onset and during saccades. If pre-saccadic attention shifts to saccade target locations are necessary for saccade execution and if searched-for saccade targets capture attention, there should be enhanced attentional competition (1) between two targets compared to single targets; (2) between two opposite-sides targets compared to two same-side targets; and (3) in two saccades rather than one saccade conditions: More attentional competition was expected to delay saccade latency and to weaken pre-saccadic laterality effects in ERPs. Hypotheses were tested by means of temporally aligned ERPs that were simultaneously time-locked to stimulus onsets, saccade onsets, and saccade offsets. 
Predictions (1) and (2) were partly and fully confirmed, respectively, but no evidence was found for (3). We explain the implications of our results for the role of attention during saccade preparation, and we point out how temporally aligned ERPs compare to ICA-based electroencephalogram (EEG) artifact correction procedures and to psychophysical dual-task approaches. |
Anna Klapetek; Donatas Jonikaitis; Heiner Deubel Attention allocation before antisaccades Journal Article In: Journal of Vision, vol. 16, no. 1, pp. 1–16, 2016. @article{Klapetek2016, In the present study, we investigated the distribution of attention before antisaccades. We used a dual task paradigm, in which participants made prosaccades or antisaccades and discriminated the orientation of a visual probe shown at the saccade goal, the visual cue location (antisaccade condition), or a neutral location. Moreover, participants indicated whether they had made a correct antisaccade or an erroneous prosaccade. We observed that, while spatial attention in the prosaccade task was allocated only to the saccade goal, attention in the antisaccade task was allocated both to the cued location and to the antisaccade goal. This suggests parallel attentional selection of the cued and antisaccade locations. We further observed that in error trials—in which participants made an incorrect prosaccade instead of an antisaccade—spatial attention was biased towards the prosaccade goal. These erroneous prosaccades were mostly unnoticed and were often followed by corrective antisaccades with very short latencies (<100 ms). Data from error trials therefore provide further evidence for the parallel programming of the reflexive prosaccade to the cue and the antisaccade to the intended location. Taken together, our results suggest that attention allocation and saccade goal selection in the antisaccade task are mediated by a common competitive process. |
Tomas Knapen; Jan Willem De Gee; Jan Brascamp; Stijn Nuiten; Sylco Hoppenbrouwers; Jan Theeuwes Cognitive and ocular factors jointly determine pupil responses under equiluminance Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0155574, 2016. @article{Knapen2016, Changes in pupil diameter can reflect high-level cognitive signals that depend on central neuromodulatory mechanisms. However, brain mechanisms that adjust pupil size are also exquisitely sensitive to changes in luminance and other events that would be considered a nuisance in cognitive experiments recording pupil size. We implemented a simple auditory experiment involving no changes in visual stimulation. Using finite impulse-response fitting we found pupil responses triggered by different types of events. Among these are pupil responses to auditory events and associated surprise: cognitive effects. However, these cognitive responses were overshadowed by pupil responses associated with blinks and eye movements, both inevitable nuisance factors that lead to changes in effective luminance. Of note, these latter pupil responses were not recording artifacts caused by blinks and eye movements, but endogenous pupil responses that occurred in the wake of these events. Furthermore, we identified slow (tonic) changes in pupil size that differentially influenced faster (phasic) pupil responses. Fitting all pupil responses using gamma functions, we provide accurate characterisations of cognitive and non-cognitive response shapes, and quantify each response's dependence on tonic pupil size. These results allow us to create a set of recommendations for pupil size analysis in cognitive neuroscience, which we have implemented in freely available software. |
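For readers unfamiliar with the finite impulse-response (FIR) fitting mentioned in this abstract, a minimal sketch follows. This is not the authors' published software; the function names and the toy design are our assumptions. The idea is simply to estimate one free amplitude per post-event time lag, per event type, by ordinary least squares:

```python
import numpy as np

def fir_design_matrix(events, n_samples, n_lags):
    """Build an FIR design matrix: one indicator regressor per
    post-event lag, per event type (events = list of onset lists)."""
    X = np.zeros((n_samples, len(events) * n_lags))
    for j, onsets in enumerate(events):
        for t in onsets:
            for lag in range(n_lags):
                if t + lag < n_samples:
                    X[t + lag, j * n_lags + lag] = 1.0
    return X

def fit_fir(pupil, events, n_lags):
    """Least-squares estimate of each event type's event-locked
    pupil response, returned as (n_event_types, n_lags)."""
    X = fir_design_matrix(events, len(pupil), n_lags)
    beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
    return beta.reshape(len(events), n_lags)
```

With non-overlapping, noise-free events the regression recovers each event-locked response exactly; the same regression also disentangles overlapping responses in real data, which is the appeal of the FIR approach.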
Klemens M. Knoeferle; Pia Knoeferle; Carlos Velasco; Charles Spence Multisensory brand search: How the meaning of sounds guides consumers' visual attention Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 2, pp. 196–210, 2016. @article{Knoeferle2016, Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. |
Arkady Konovalov; Ian Krajbich Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning Journal Article In: Nature Communications, vol. 7, pp. 12438, 2016. @article{Konovalov2016, Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. |
Arnout W. Koornneef; Jakub Dotlačil; Paul W. Broek; Ted J. M. Sanders The influence of linguistic and cognitive factors on the time course of verb-based implicit causality Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 3, pp. 455–481, 2016. @article{Koornneef2016, In three eye-tracking experiments the influence of the Dutch causal connective "want" (because) and the working memory capacity of readers on the usage of verb-based implicit causality was examined. Experiments 1 and 2 showed that although a causal connective is not required to activate implicit causality information during reading, effects of implicit causality surfaced more rapidly and were more pronounced when a connective was present in the discourse than when it was absent. In addition, Experiment 3 revealed that, in contrast to previous claims, the activation of implicit causality is not a resource-consuming mental operation. Moreover, readers with higher and lower working memory capacities behaved differently in a dual-task situation. Higher span readers were more likely to use implicit causality when they had all their working memory resources at their disposal. Lower span readers showed the opposite pattern as they were more likely to use the implicit causality cue in the case of an additional working memory load. The results emphasize that both linguistic and cognitive factors mediate the impact of implicit causality on text comprehension. The implications of these results are discussed in terms of the ongoing controversies in the literature, that is, the focusing-integration debate and the debates on the source of implicit causality. |
Christoph W. Korn; Dominik R. Bach A solid frame for the window on cognition: Modeling event-related pupil responses Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–16, 2016. @article{Korn2016, Pupil size is often used to infer central processes, including attention, memory, and emotion. Recent research has spotlighted its relation to behavioral variables from decision-making models and to neural variables such as locus coeruleus activity and cortical oscillations. As yet, a unified and principled approach for analyzing pupil responses is lacking. Here we seek to establish a formal, quantitative forward model for pupil responses by describing them with linear time-invariant systems. Based on empirical data from human participants, we show that a combination of two linear time-invariant systems can parsimoniously explain nearly all of the variance evoked by illuminance changes. Notably, the model makes a counterintuitive prediction that pupil constriction dominates the responses to darkness flashes, as in previous empirical reports. This prediction was quantitatively confirmed for responses to light and darkness flashes in an independent group of participants. Crucially, illuminance- and nonilluminance-related inputs to the pupillary system are presumed to share a common final pathway, composed of muscles and nerve terminals. Hence, we can harness our illuminance-based model to estimate the temporal evolution of this neural input for an auditory-oddball task, an emotional-words task, and a visual-detection task. Onset and peak latencies of the estimated neural inputs furnish plausible hypotheses for the complexity of the underlying neural circuit. To conclude, this mathematical description of pupil responses serves as a prerequisite to refining their relation to behavioral and brain indices of cognitive processes. |
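The linear time-invariant description of the pupil used in this line of work can be sketched with an Erlang-shaped pupil response function; the parameter values below follow the commonly used Hoeks and Levelt (1993) constants, while the kernel length and function names are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pupil_response_function(t, n=10.1, t_max=0.93):
    """Erlang-style pupil response function, h(t) = t^n * exp(-n*t/t_max),
    normalized to unit peak; peaks at t = t_max seconds."""
    h = (t ** n) * np.exp(-n * t / t_max)
    return h / h.max()

def predict_pupil(neural_input, dt, n=10.1, t_max=0.93):
    """Treat the pupil as a linear time-invariant system: the predicted
    trace is the neural input convolved with the response kernel."""
    t = np.arange(0.0, 4.0, dt)  # 4 s of kernel support (assumption)
    h = pupil_response_function(t, n, t_max)
    return np.convolve(neural_input, h)[: len(neural_input)]
```

Convolving a candidate neural input with this kernel yields a predicted pupil trace to compare against data; inverting the operation (deconvolution) estimates the input itself, analogous to what the authors do for their oddball, emotional-words, and detection tasks.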
Hui Kou; Yanhua Su; Taiyong Bi; Xiao Gao; Hong Chen Attentional biases toward face-related stimuli among face dissatisfied women: Orienting and maintenance of attention revealed by eye-movement Journal Article In: Frontiers in Psychology, vol. 7, pp. 919, 2016. @article{Kou2016, The present study aimed to examine attentional biases toward attractive and unattractive faces among face dissatisfied women. Twenty-seven women with high face dissatisfaction (HFD) and 27 women with low face dissatisfaction (LFD) completed a visual dot-probe task while their eye movements were tracked. Under the condition of face-neutral stimulus (vase) pairs, compared to LFD women, HFD women directed their first fixations more often toward faces, directed their first fixations toward unattractive faces more quickly, and had longer first fixation durations on such faces. All participants had longer overall gaze duration on attractive faces than on unattractive ones. Our behavioral data revealed that HFD women had difficulty disengaging their attention from faces. However, there were no group differences for stimulus pairs containing an attractive and an unattractive face. In sum, when faces were paired with neutral stimuli (vases), HFD women showed an attention pattern characterized by orienting and maintenance, at least initially, toward unattractive faces, together with overall attention maintenance to attractive ones; no attention bias was found for attractive–unattractive face pairs. |
Andrew Isaac Meso; Anna Montagnini; Jason Bell; Guillaume S. Masson Looking for symmetry: Fixational eye movements are biased by image mirror symmetry Journal Article In: Journal of Neurophysiology, vol. 116, pp. 1250–1260, 2016. @article{Meso2016, Humans are highly sensitive to symmetry. During scene exploration, the area of the retina with dense light receptor coverage acquires most information from relevant locations determined by gaze fixation. We characterised patterns of fixational eye movements made by observers staring at synthetic scenes either freely (i.e. free exploration) or during a symmetry orientation discrimination task (i.e. active exploration). Stimuli could be mirror-symmetric or not. Both free and active exploration generated more saccades parallel to the axis of symmetry than along other orientations. Most saccades were small (<2 deg), leaving the fovea within a 4-degree radius of fixation. The analysis of saccade dynamics showed that the observed parallel orientation selectivity emerged within 500 ms of stimulus onset and persisted throughout the trials under both viewing conditions. Symmetry strongly distorted existing anisotropies in gaze direction in a seemingly automatic process. We argue that this bias serves a functional role in which adjusted scene sampling enhances and maintains sustained sensitivity to local spatial correlations arising from symmetry. |
Andrew Isaac Meso; James Rankin; Olivier Faugeras; Pierre Kornprobst; Guillaume S. Masson The relative contribution of noise and adaptation to competition during tri-stable motion perception Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–24, 2016. @article{Meso2016a, Animals exploit antagonistic interactions for sensory processing and these can cause oscillations between competing states. Ambiguous sensory inputs yield such perceptual multistability. Despite numerous empirical studies using binocular rivalry or plaid pattern motion, the driving mechanisms behind the spontaneous transitions between alternatives remain unclear. In the current work, we used a tristable barber pole motion stimulus combining empirical and modeling approaches to elucidate the contributions of noise and adaptation to underlying competition. We first robustly characterized the coupling between perceptual reports of transitions and continuously recorded eye direction, identifying a critical window of 480 ms before button presses, within which both measures were most strongly correlated. Second, we identified a novel nonmonotonic relationship between stimulus contrast and average perceptual switching rate with an initially rising rate before a gentle reduction at higher contrasts. A neural fields model of the underlying dynamics introduced in previous theoretical work and incorporating noise and adaptation mechanisms was adapted, extended, and empirically validated. Noise and adaptation contributions were confirmed to dominate at the lower and higher contrasts, respectively. Model simulations, with two free parameters controlling adaptation dynamics and direction thresholds, captured the measured mean transition rates for participants. We verified the shift from noise-dominated toward adaptation-driven in both the eye direction distributions and intertransition duration statistics. 
This work combines modeling and empirical evidence to demonstrate the signal-strength-dependent interplay between noise and adaptation during tristability. We propose that the findings generalize beyond the barber pole stimulus case to ambiguous perception in continuous feature spaces. |
Audrey L. Michal; David Uttal; Priti Shah; Steven L. Franconeri Visual routines for extracting magnitude relations Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1802–1809, 2016. @article{Michal2016, Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations in graphs, testing college students and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple graphs, and two systematic patterns emerged: eye movements that followed the verbal order of the question (inspecting the "blueberry" value first) versus those that followed a left-first bias (regardless of the left value's identity). Question-order patterns led to substantially faster responses and increased in prevalence with age, whereas the left-first pattern led to far slower responses and was the dominant strategy for younger children. We argue that the optimal way to verify a verbally expressed relation's consistency with a visualization is for the eyes to mimic the verbal ordering, but that this strategy requires executive control and coordination with language. |
Thomas Miconi; Laura Groomes; Gabriel Kreiman There's Waldo! A normalization model of visual search predicts single-trial human fixations in an object search task Journal Article In: Cerebral Cortex, vol. 26, no. 7, pp. 3064–3082, 2016. @article{Miconi2016, When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. |
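The normalization step described in this abstract (divisive inhibition over a local pool, which keeps salient bottom-up features from monopolizing attention) can be sketched on a toy 2-D feature map. The array sizes, pooling window, and semisaturation constant below are illustrative assumptions, not the model's published parameters:

```python
import numpy as np

def priority_map(feature_map, target_gain, sigma=1.0, pool=3):
    """Toy priority map in the spirit of normalization models of search:
    target-modulated responses divided by pooled local activity."""
    modulated = feature_map * target_gain  # top-down, target-specific gain
    padded = np.pad(modulated, pool // 2, mode="edge")
    pooled = np.zeros_like(modulated)
    h, w = modulated.shape
    for i in range(h):
        for j in range(w):
            # divisive pool: mean activity in a pool x pool neighborhood
            pooled[i, j] = padded[i:i + pool, j:j + pool].mean()
    return modulated / (sigma + pooled)

def next_fixation(pmap):
    """The maximum of the priority map is the model's next locus of attention."""
    return np.unravel_index(np.argmax(pmap), pmap.shape)
```

Because the divisive pool grows with local activity, a moderately active location carrying strong top-down (target-feature) gain can out-compete a more salient but task-irrelevant distractor, which is the qualitative behavior the normalization step is meant to produce.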
Ravi D. Mill; Akira R. O'Connor; Ian G. Dobbins Pupil dilation during recognition memory: Isolating unexpected recognition from judgment uncertainty Journal Article In: Cognition, vol. 154, pp. 81–94, 2016. @article{Mill2016, Optimally discriminating familiar from novel stimuli demands a decision-making process informed by prior expectations. Here we demonstrate that pupillary dilation (PD) responses during recognition memory decisions are modulated by expectations, and more specifically, that pupil dilation increases for unexpected compared to expected recognition. Furthermore, multi-level modeling demonstrated that the time course of the dilation during each individual trial contains separable early and late dilation components, with the early amplitude capturing unexpected recognition, and the later trailing slope reflecting general judgment uncertainty or effort. This is the first demonstration that the early dilation response during recognition is dependent upon observer expectations and that separate recognition expectation and judgment uncertainty components are present in the dilation time course of every trial. The findings provide novel insights into adaptive memory-linked orienting mechanisms as well as the general cognitive underpinnings of the pupillary index of autonomic nervous system activity. |
Mark Mills; Olivia Wieda; Scott F. Stoltenberg; Michael D. Dodd Emotion moderates the association between HTR2A (rs6313) genotype and antisaccade latency Journal Article In: Experimental Brain Research, vol. 234, no. 9, pp. 2653–2665, 2016. @article{Mills2016, The serotonin system is heavily involved in cognitive and emotional control processes. Previous work has typically investigated this system's role in control processes separately for cognitive and emotional domains, yet it has become clear the two are linked. The present study, therefore, examined whether variation in a serotonin receptor gene (HTR2A, rs6313) moderated effects of emotion on inhibitory control. An emotional antisaccade task was used in which participants looked toward (prosaccade) or away (antisaccade) from a target presented to the left or right of a happy, angry, or neutral face. Overall, antisaccade latencies were slower for rs6313 C allele homozygotes than T allele carriers, with no effect of genotype on prosaccade latencies. Thus, C allele homozygotes showed relatively weak inhibitory control but intact reflexive control. Importantly, the emotional stimulus was either present during target presentation (overlap trials) or absent (gap trials). The gap effect (slowed latency in overlap versus gap trials) in antisaccade trials was larger with angry versus neutral faces in C allele homozygotes. This impairing effect of negative valence on inhibitory control was larger in C allele homozygotes than T allele carriers, suggesting that angry faces disrupted/competed with the control processes needed to generate an antisaccade to a greater degree in these individuals. The genotype difference in the negative valence effect on antisaccade latency was attenuated when trial N-1 was an antisaccade, indicating top-down regulation of emotional influence. This effect was reduced in C/C versus T/_ individuals, suggesting a weaker capacity to downregulate emotional processing of task-irrelevant stimuli. |
Meghan B. Mitchell; Steven D. Shirk; Donald G. McLaren; Jessica S. Dodd; Ali Ezzati; Brandon A. Ally; Alireza Atri Recognition of faces and names: Multimodal physiological correlates of memory and executive function Journal Article In: Brain Imaging and Behavior, vol. 10, no. 2, pp. 408–423, 2016. @article{Mitchell2016, We sought to characterize electrophysiological, eye-tracking and behavioral correlates of face-name recognition memory in healthy younger adults using high-density electroencephalography (EEG), infrared eye-tracking (ET), and neuropsychological measures. Twenty-one participants first studied 40 face-name (FN) pairs; 20 were presented four times (4R) and 20 were shown once (1R). Recognition memory was assessed by asking participants to make old/new judgments for 80 FN pairs, of which half were previously studied items and half were novel FN pairs (N). Simultaneous EEG and ET recording were collected during recognition trials. Comparisons of event-related potentials (ERPs) for correctly identified FN pairs were compared across the three item types revealing classic ERP old/new effects including 1) relative positivity (1R > N) bi-frontally from 300 to 500 ms, reflecting enhanced familiarity, 2) relative positivity (4R > 1R and 4R > N) in parietal areas from 500 to 800 ms, reflecting enhanced recollection, and 3) late frontal effects (1R > N) from 1000 to 1800 ms in right frontal areas, reflecting post-retrieval monitoring. ET analysis also revealed significant differences in eye movements across conditions. Exploration of cross-modality relationships suggested associations between memory and executive function measures and the three ERP effects. Executive function measures were associated with several indicators of saccadic eye movements and fixations, which were also associated with all three ERP effects. 
This novel characterization of face-name recognition memory performance using simultaneous EEG and ET reproduced classic ERP and ET effects, supports the construct validity of the multimodal FN paradigm, and holds promise as an integrative tool to probe brain networks supporting memory and executive functioning. |
Aleksandra Mitrovic; Pablo P. L. Tinio; Helmut Leder In: Frontiers in Human Neuroscience, vol. 10, pp. 122, 2016. @article{Mitrovic2016, One of the key behavioral effects of attractiveness is increased visual attention to attractive people. This effect is often explained in terms of evolutionary adaptations, such as attractiveness being an indicator of good health. Other factors could influence this effect. In the present study, we explored the modulating role of sexual orientation on the effects of attractiveness on exploratory visual behavior. Heterosexual and homosexual men and women viewed natural-looking scenes that depicted either two women or two men who varied systematically in levels of attractiveness (based on a pre-study). Participants' eye movements and attractiveness ratings toward the faces of the depicted people were recorded. The results showed that although attractiveness had the largest influence on participants' behaviors, participants' sexual orientations strongly modulated the effects. With the exception of homosexual women, all participant groups looked longer and more often at attractive faces that corresponded with their sexual orientations. Interestingly, heterosexual and homosexual men and homosexual women looked longer and more often at the less attractive face of their non-preferred sex than the less attractive face of their preferred sex, evidence that less attractive faces of the preferred sex might have an aversive character. These findings provide evidence for the important role that sexual orientation plays in guiding visual exploratory behavior and evaluations of the attractiveness of others. |
Jeff Moher; Joo-Hyun Song Target selection biases from recent experience transfer across effectors Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 2, pp. 415–426, 2016. @article{Moher2016, Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogenous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent from their associated actions. |
Robert M. Mok; Nicholas E. Myers; George Wallis; Anna C. Nobre Behavioral and neural markers of flexible attention over working memory in aging Journal Article In: Cerebral Cortex, vol. 26, no. 4, pp. 1831–1842, 2016. @article{Mok2016, Working memory (WM) declines as we age and, because of its fundamental role in higher order cognition, this can have highly deleterious effects in daily life. We investigated whether older individuals benefit from flexible orienting of attention within WM to mitigate cognitive decline. We measured magnetoencephalography (MEG) in older adults performing a WM precision task with cues during the maintenance period that retroactively predicted the location of the relevant items for performance (retro-cues). WM performance of older adults significantly benefitted from retro-cues. Whereas WM maintenance declined with age, retro-cues conferred strong attentional benefits. A model-based analysis revealed an increase in the probability of recalling the target, a lowered probability of retrieving incorrect items or guessing, and an improvement in memory precision. MEG recordings showed that retro-cues induced a transient lateralization of alpha (8-14 Hz) and beta (15-30 Hz) oscillatory power. Interestingly, shorter durations of alpha/beta lateralization following retro-cues predicted larger cueing benefits, reinforcing recent ideas about the dynamic nature of access to WM representations. Our results suggest that older adults retain flexible control over WM, but individual differences in control correspond to differences in neural dynamics, possibly reflecting the degree of preservation of control in healthy aging. |
Zaeinab Afsari; José P. Ossandón; Peter Konig The dynamic effect of reading direction habit on spatial asymmetry of image perception Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–21, 2016. @article{Afsari2016, Exploration of images after stimulus onset is initially biased to the left. Here, we studied the causes of such an asymmetry and investigated effects of reading habits, text primes, and priming by systematically biased eye movements on this spatial bias in visual exploration. Bilinguals first read text primes with right-to-left (RTL) or left-to-right (LTR) reading directions and subsequently explored natural images. In Experiment 1, native RTL speakers showed a leftward free-viewing shift after reading LTR primes but a weaker rightward bias after reading RTL primes. This demonstrates that reading direction dynamically influences the spatial bias. However, native LTR speakers who learned an RTL language late in life showed a leftward bias after reading either LTR or RTL primes, which suggests the role of habit formation in the production of the spatial bias. In Experiment 2, LTR bilinguals showed a slightly enhanced leftward bias after reading LTR text primes in their second language. This might contribute to the differences of native RTL and LTR speakers observed in Experiment 1. In Experiment 3, LTR bilinguals read normal (LTR, habitual reading) and mirrored left-to-right (mLTR, nonhabitual reading) texts. We observed a strong leftward bias in both cases, indicating that the bias direction is influenced by habitual reading direction and is not secondary to the actual reading direction. This is confirmed in Experiment 4, in which LTR participants were asked to follow RTL and LTR moving dots prior to image presentation and showed no change in the normal spatial bias. In conclusion, the horizontal bias is a dynamic property and is modulated by habitual reading direction. |
Mehmet N. Ağaoğlu; Susana T. L. Chung Can (should) theories of crowding be unified? Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–22, 2016. @article{Agaoglu2016, Objects in clutter are difficult to recognize, a phenomenon known as crowding. There is little consensus on the underlying mechanisms of crowding, and a large number of models have been proposed. There have also been attempts at unifying the explanations of crowding under a single model, such as the weighted feature model of Harrison and Bex (2015) and the texture synthesis model of Rosenholtz and colleagues (Balas, Nakano, & Rosenholtz, 2009; Keshvari & Rosenholtz, 2016). The goal of this work was to test various models of crowding and to assess whether a unifying account can be developed. Adopting Harrison and Bex's (2015) experimental paradigm, we asked observers to report the orientation of two concentric C-stimuli. Contrary to the predictions of their model, observers' recognition accuracy was worse for the inner C-stimulus. In addition, we demonstrated that the stimulus paradigm used by Harrison and Bex has a crucial confounding factor, eccentricity, which limits its usage to a very narrow range of stimulus parameters. Nevertheless, reporting the orientations of both C-stimuli in this paradigm proved very useful in pitting different crowding models against each other. Specifically, we tested deterministic and probabilistic versions of averaging, substitution, and attentional resolution models as well as the texture synthesis model. None of the models alone was able to explain the entire set of data. Based on these findings, we discuss whether the explanations of crowding can (should) be unified. |
Mehmet N. Ağaoğlu; Aaron M. Clarke; Michael H. Herzog; Haluk Ögmen Motion-based nearest vector metric for reference frame selection in the perception of motion Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–16, 2016. @article{Agaoglu2016a, We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects. |
Mehmet N. Ağaoğlu; Haluk Öğmen; Susana T. L. Chung Unmasking saccadic uncrowding Journal Article In: Vision Research, vol. 127, pp. 152–164, 2016. @article{Agaoglu2016b, Stimuli that are briefly presented around the time of saccades are often perceived with spatiotemporal distortions. These distortions do not always have deleterious effects on the visibility and identification of a stimulus. Recent studies reported that when a stimulus is the target of an intended saccade, it is released from both masking and crowding. Here, we investigated pre-saccadic changes in single and crowded letter recognition performance in the absence (Experiment 1) and the presence (Experiment 2) of backward masks to determine the extent to which saccadic “uncrowding” and “unmasking” mechanisms are similar. Our results show that pre-saccadic improvements in letter recognition performance are mostly due to the presence of masks and/or stimulus transients which occur after the target is presented. More importantly, we did not find any decrease in crowding strength before impending saccades. A simplified version of a dual-channel neural model, originally proposed to explain masking phenomena, with several saccadic add-on mechanisms, could account for our results in Experiment 1. However, this model falls short in explaining how saccades drastically reduced the effect of backward masking (Experiment 2). The addition of a remapping mechanism that alters the relative spatial positions of stimuli was needed to fully account for the improvements observed when backward masks followed the letter stimuli. Taken together, our results (i) are inconsistent with saccadic uncrowding, (ii) strongly support saccadic unmasking, and (iii) suggest that pre-saccadic letter recognition is modulated by multiple perisaccadic mechanisms with different time courses. |
Başak Akdoğan; Fuat Balcı; Hedderik Rijn Temporal expectation indexed by pupillary response Journal Article In: Timing & Time Perception, vol. 4, no. 4, pp. 354–370, 2016. @article{Akdogan2016, Forming temporal expectations plays an instrumental role for the optimization of behavior and allocation of attentional resources. Although the effects of temporal expectations on visual attention are well-established, the question of whether temporal predictions modulate the behavioral outputs of the autonomic nervous system such as the pupillary response remains unanswered. Therefore, this study aimed to obtain an online measure of pupil size while human participants were asked to differentiate between visual targets presented after varying time intervals since trial onset. Specifically, we manipulated temporal predictability in the presentation of target stimuli consisting of letters which appeared after either a short or long delay duration (1.5 vs. 3 s) in the majority of trials (75%) within different test blocks. In the remaining trials (25%), no target stimulus was present to investigate the trajectory of preparatory pupillary response under a low level of temporal uncertainty. The results revealed that the rate of preparatory pupillary response was contingent upon the time of target appearance such that pupils dilated at a higher rate when the targets were expected to appear after a shorter as compared to a longer delay period irrespective of target presence. The finding that pupil size can track temporal regularities and exhibit differential preparatory response between different delay conditions points to the existence of a distributed neural network subserving temporal information processing which is crucial for cognitive functioning and goal-directed behavior. |
Andrea Alamia; Alexandre Zénon Statistical regularities attract attention when task-relevant Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 42, 2016. @article{Alamia2016, Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e. the effect of color predictability on reaction times (RT), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the 2 colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task. |
Andrea Albonico; Manuela Malaspina; Emanuela Bricolo; Marialuisa Martelli; Roberta Daini Temporal dissociation between the focal and orientation components of spatial attention in central and peripheral vision Journal Article In: Acta Psychologica, vol. 171, pp. 85–92, 2016. @article{Albonico2016, Selective attention, i.e. the ability to concentrate one's limited processing resources on one aspect of the environment, is a multifaceted concept that includes different processes like spatial attention and its subcomponents of orienting and focusing. Several studies, indeed, have shown that performance in visual tasks is positively influenced not only by attracting attention to the target location (orientation component), but also by the adjustment of the size of the attentional window according to task demands (focal component). Nevertheless, the relative weight of the two components in central and peripheral vision has never been studied. We conducted two experiments to explore whether different components of spatial attention have different effects in central and peripheral vision. In order to do so, participants underwent either a detection (Experiment 1) or a discrimination (Experiment 2) task where different types of cues elicited different components of spatial attention: a red dot, a small square and a big square (an optimal stimulus for the orientation component, an optimal and a sub-optimal stimulus for the focal component respectively). Response times and cue-size effects indicated a stronger effect of the small square or of the dot in different conditions, suggesting the existence of a dissociation in terms of mechanisms between the focal and the orientation components of spatial attention. Specifically, we found that the orientation component was stronger in periphery, while the focal component was noticeable only in central vision and characterized by an exogenous nature. |
Micah Allen; Darya Frank; D. Samuel Schwarzkopf; Francesca Fardo; Joel S. Winston; Tobias U. Hauser; Geraint Rees Unexpected arousal modulates the influence of sensory noise on confidence Journal Article In: eLife, vol. 5, pp. 1–17, 2016. @article{Allen2016, Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. |
Tatiana A. Amor; Saulo D. S. Reis; Daniel Campos; Hans J. Herrmann; José S. Andrade Persistence in eye movement during visual search Journal Article In: Scientific Reports, vol. 6, pp. 20815, 2016. @article{Amor2016, As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. |
Nicola C. Anderson; Mieke Donk; Martijn Meeter The influence of a scene preview on eye movement behavior in natural scenes Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1794–1801, 2016. @article{Anderson2016, Rich contextual and semantic information can be extracted from only a brief presentation of a natural scene. This is presumed to be activated quickly enough to guide initial eye movements into a scene. However, early, short-latency eye movements in natural scenes have been shown to be dependent on the salience distribution across the image (Anderson, Ort, Kruijne, Meeter, & Donk, 2015). In the present work, we manipulated the salience distribution across a natural scene by changing the global contrast. We showed participants a brief real or nonsense preview of the scene and examined the time-course of eye movement guidance. A real preview decreased the latency and increased the amplitude of initial saccades into the image, suggesting that the preview allowed observers to obtain additional contextual information that would otherwise not be available. However, the preview did not completely override the initial tendency for short-latency saccades to be guided by the underlying salience distribution of the image. We discuss these findings in the context of oculomotor selection based on the integration of contextual information and low-level features in a natural scene. |
Evan G. Antzoulatos; Earl K. Miller Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Antzoulatos2016, Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand. |
Paul L. Aparicio; Elias B. Issa; James J. DiCarlo Neurophysiological organization of the middle face patch in macaque inferior temporal cortex Journal Article In: Journal of Neuroscience, vol. 36, no. 50, pp. 12729–12745, 2016. @article{Aparicio2016, While early cortical visual areas contain fine scale spatial organization of neuronal properties such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face preferring regions in human ventral cortex (FFA, OFA) and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face selective region in monkeys, the middle face patch (MFP) and show that the MFP contains a graded enrichment of face preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces than to non-face objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face selective sites were ∼50x more likely to be present inside the MFP than outside. These results provide the first direct quantification of the size and neural composition of the MFP by showing that the cortical tissue localized to the fMRI defined region consists of a very high fraction of face preferring sites near its center, and a monotonic decrease in that fraction along any radial spatial axis. |
Joseph Arizpe; Dwight J. Kravitz; Vincent Walsh; Galit Yovel; Chris I. Baker Differences in looking at own- and other-race faces are subtle and analysis-dependent: An account of discrepant reports Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0148253, 2016. @article{Arizpe2016, The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using |
Tom J. Barry; Bram Vervliet; Dirk Hermans Threat‐related gaze fixation and its relationship with the speed and generalisability of extinction learning Journal Article In: Australian Journal of Psychology, vol. 68, no. 3, pp. 200–208, 2016. @article{Barry2016, Objective: Attention plays an important role in the treatment of anxiety. Research has yet to elucidate how individual differences in attention or, particularly, gaze fixation can influence learning during treatment. The present investigation used an experimental analogue of the acquisition, treatment, and relapse of fear to examine this issue. Method: After pairing a stimulus (A) with an aversive electrocutaneous shock, such that participants come to fear this previously neutral stimulus, participants are repeatedly presented with a second stimulus (B) that possessed some common features with A as well as some of its own unique features. During presentations of B, fear was expected to reduce or extinguish. After this, participants were presented with C, which possessed some features of A that were not present in B as well as some features of B that were not present in A, and return of fear was assessed. Throughout this procedure, differences in gaze were measured so that this could be compared with indices for extinction and return of fear. Fear was measured in terms of skin conductance response. Results: Participants who spent more time looking at the unique features of B or who avoided the features in common with A showed slower extinction of their fear response. The same participants also showed reduced return of fear when C was presented. Conclusions: These findings are interpreted in terms of how attentional avoidance of threat-related stimuli might influence the inhibitory learning that takes place during extinction in experimental settings and exposure in clinical settings. |
Joseph E. Barton; Valentina Graci; Charlene Hafer-Macko; John D. Sorkin; Richard F. Macko Dynamic balanced reach: A temporal and spectral analysis across increasing performance demands Journal Article In: Journal of Biomechanical Engineering, vol. 138, pp. 1–13, 2016. @article{Barton2016a, Standing balanced reach is a fundamental task involved in many activities of daily living that has not been well analyzed quantitatively to assess and characterize the multisegmental nature of the body's movements. We developed a dynamic balanced reach test (BRT) to analyze performance in this activity; in which a standing subject is required to maintain balance while reaching and pointing to a target disk moving across a large projection screen according to a sum-of-sines function. This tracking and balance task is made progressively more difficult by increasing the disk's overall excursion amplitude. Using kinematic and ground reaction force data from 32 young healthy subjects, we investigated how the motions of the tracking finger and whole-body center of mass (CoM) varied in response to the motion of the disk across five overall disk excursion amplitudes. Group representative performance statistics for the cohort revealed a monotonically increasing root mean squared (RMS) tracking error (RMSE) and RMS deviation (RMSD) between whole-body CoM (projected onto the ground plane) and the center of the base of support (BoS) with increasing amplitude (p<0.03). Tracking and CoM response delays remained constant, however, at 0.5 s and 1.0 s, respectively. We also performed detailed spectral analyses of group-representative response data for each of the five overall excursion amplitudes. We derived empirical and analytical transfer functions between the motion of the disk and that of the tracking finger and CoM, computed tracking and CoM responses to a step input, and RMSE and RMSD as functions of disk frequency. We found that for frequencies less than 1.0 Hz, RMSE generally decreased, while RMSE normalized to disk motion amplitude generally increased. RMSD, on the other hand, decreased monotonically. These findings quantitatively characterize the amplitude- and frequency-dependent nature of young healthy tracking and balance in this task. The BRT is not subject to floor or ceiling effects, overcoming an important deficiency associated with most research and clinical instruments used to assess balance. This makes a comprehensive quantification of young healthy balance performance possible. The results of such analyses could be used in work space design and in fall-prevention instructional materials, for both the home and work place. Young healthy performance represents “exemplar” performance and can also be used as a reference against which to compare the performance of aging and other clinical populations at risk for falling. |
Joseph E. Barton; Anindo Roy; John D. Sorkin; Mark W. Rogers; Richard F. Macko An engineering model of human balance control—Part I: Biomechanical model Journal Article In: Journal of Biomechanical Engineering, vol. 138, no. 1, pp. 1–11, 2016. @article{Barton2016, We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS), an older high fall risk (HFR) subject before participating in a balance training intervention; and in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system; and system adaptations to changes in these elements' performance and capabilities. |
Vanessa Beanland; Rebecca K. Le; Jamie E. M. Byrne Object-scene relationships vary the magnitude of target prevalence effects in visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 6, pp. 766–775, 2016. @article{Beanland2016, Efficiency of visual search in real-world tasks is affected by several factors, including scene context and target prevalence. Observers are more efficient at detecting target objects in congruent locations, and less efficient at detecting rare targets. Although target prevalence and placement often covary, previous research has investigated context and prevalence effects independently. We conducted 2 experiments to explore the potential interaction between scene context and target prevalence effects. In Experiment 1, we varied target prevalence (high, low) and context (congruent, incongruent), and, for congruent contexts, target location (typical, atypical). Experiment 2 focused on the interaction between target prevalence (high, low) and location (typical, atypical) for congruent contexts, and recorded observers' eye movements to examine search strategies. Observers were poorer at detecting low versus high prevalence targets; however, prevalence effects were significantly reduced for targets in typical, congruent locations compared with atypical or incongruent locations. Eye movement analyses in Experiment 2 revealed this was related to observers dwelling disproportionately on the most typical target locations within a scene. This suggests that a byproduct of contextual guidance within scenes is that placing targets in unexpected or atypical locations will further increase miss rates for uncommon targets, which has implications for real-world situations in which rare targets appear in unexpected locations. Although prevalence effects are robust, our results suggest potential for mitigating the negative consequences of low prevalence through targeted training that teaches observers where to focus their search. |
Yvonne Behnke How textbook design may influence learning with geography textbooks Journal Article In: Nordidactica – Journal of Humanities and Social Science Education, vol. 1, pp. 38–62, 2016. @article{Behnke2016, This paper investigates how textbook design may influence students' visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was utilised to precisely monitor students' eye movements while observing geography textbook spreads. In an exploratory study utilising random sampling, the eye movements of 20 students (secondary school students 15–17 years of age and university students 20–24 years of age) were recorded. The research entities were double-page spreads of current German geography textbooks covering an identical topic, taken from five separate textbooks. A two-stage test was developed. Each participant was given the task of first looking at the entire textbook spread to determine what was being explained on the pages. In the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results may verify textbook design as one crucial factor for successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to educational psychology insights and findings from visual communication and textbook analysis. |
Steven Beighley; Helene Intraub Does inversion affect boundary extension for briefly-presented views? Journal Article In: Visual Cognition, vol. 24, no. 3, pp. 252–259, 2016. @article{Beighley2016, Inverting scenes interferes with visual perception and memory on many tasks. Might scene inversion eliminate boundary extension (BE) for briefly-presented photographs? In Experiment 1, an upright or inverted photograph (133, 258, or 383 ms) was followed by a 258 ms masked interval and a test photograph showing the identical view. Test photographs were rated as "same", "closer", or "farther away" (5-point scale). BE was just as great for inverted as upright views at the 133 and 383 ms durations, but surprisingly was greater for inverted views at the 258 ms duration. In Experiment 2, 258-ms views yielded greater BE when the study photographs were always tested in the opposite orientation, indicating that the difference in BE was related to encoding. Results suggest that scene construction beyond the view boundaries occurs rapidly and is not impeded by scene inversion, but that changes in the relative quality of visual details available for upright and inverted views may sometimes yield increased BE for inverted scenes. |
Karly N. Neath-Tavares; Roxane J. Itier Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach Journal Article In: Biological Psychology, vol. 119, pp. 122–140, 2016. @article{NeathTavares2016, Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. |
Daniel P. Newman; Steven W. Lockley; Gerard M. Loughnane; Ana C. P. Martins; Rafael Abe; Marco T. R. Zoratti; Simon P. Kelly; Megan H. O'Neill; Shantha M. W. Rajaratnam; Redmond G. O'Connell; Mark A. Bellgrove Ocular exposure to blue-enriched light has an asymmetric influence on neural activity and spatial attention Journal Article In: Scientific Reports, vol. 6, pp. 27754, 2016. @article{Newman2016, Brain networks subserving alertness in humans interact with those for spatial attention orienting. We employed blue-enriched light to directly manipulate alertness in healthy volunteers. We show for the first time that prior exposure to higher, relative to lower, intensities of blue-enriched light speeds response times to left, but not right, hemifield visual stimuli, via an asymmetric effect on right-hemisphere parieto-occipital α-power. Our data give rise to the tantalising possibility of light-based interventions for right hemisphere disorders of spatial attention. The mechanisms for alertness in humans interact with those for spatial attention orienting in an intriguing fashion. For example, the debilitating inattention of left space observed in patients suffering from unilateral spatial neglect subsequent to right-hemisphere damage can be temporarily overcome by phasic alerting tones. Sleep deprivation in healthy participants causes relative left hemifield inattention in the visual domain, while a pronounced auditory inattention to left space occurs during drowsy periods prior to sleep onset. Brain imaging work in both neglect patients and neurologically healthy participants suggests that the distribution of attention between the hemifields is balanced by competitive activation between the hemispheres, specifically within a bilaterally represented dorsal network for spatial attention orienting. Current models propose that this bilateral orienting network interacts with the right-hemisphere-lateralised ventral network subserving non-spatial processes such as alertness, which may be preferentially innervated by the locus-coeruleus/noradrenergic (LC-NA) system. Despite demonstrations that manipulations of alertness can transiently shift spatial attention bias, neuroscience has thus far failed to identify non-invasive methods of manipulating alertness that lead to an enduring improvement in attention to left space. One promising avenue for manipulating alertness is offered by recent photobiology studies of light. Although it is recognised that light exerts powerful alerting effects on brain and behaviour, its mechanism of action has only recently been studied. Specifically, recent research has identified a set of intrinsically photosensitive retinal ganglion cells (ipRGCs) which are maximally sensitive to short wavelength (blue) light (~480 nm) and which mediate a light-induced alerting signal to the human brain, in a dose-dependent manner. |
Anna Nowakowska; Alasdair D. F. Clarke; Arash Sahraie; Amelia R. Hunt Inefficient search strategies in simulated hemianopia Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 11, pp. 1858–1872, 2016. @article{Nowakowska2016, We investigated whether healthy participants can spontaneously adopt effective eye movement strategies to compensate for information loss similar to that experienced by patients with damage to visual cortex (hemianopia). Visual information in 1 hemifield was removed or degraded while participants searched for an emotional face among neutral faces or a line tilted 45° to the right among lines of varying degree of tilt. A bias to direct saccades toward the sighted field was observed across all 4 experiments. The proportion of saccades directed toward the "blind" field increased with the amount of information available in that field, suggesting fixations are driven toward salient visual stimuli rather than toward locations that maximize information gain. In Experiments 1 and 2, the sighted-field bias had a minimal impact on search efficiency, because the target was difficult to find. However, the sighted-field bias persisted even when the target was visually distinct from the distractors and could easily be detected in the periphery (Experiments 3 and 4). This surprisingly inefficient search behavior suggests that eye movements are biased to salient visual stimuli even when it comes at a clear cost to search efficiency, and efficient strategies to compensate for visual deficits are not spontaneously adopted by healthy participants. |
Antje Nuthmann; George L. Malcolm Eye guidance during real-world scene search: The role color plays in central and peripheral vision Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–16, 2016. @article{Nuthmann2016, How does the availability of color across the visual field facilitate gaze during real-world search? To answer this question, the presence of color in central or peripheral vision was manipulated using a 5deg gaze-contingent window that followed participants' gaze. Accordingly, scenes were presented in full color (C), grey in central vision and colored in peripheral vision (G-C), colored in central vision and grey in peripheral vision (C-G), and in grey (G). The color conditions were crossed with a manipulation of the search cue: the search object was cued either with a word label or a picture of the target. Across color conditions, search was faster during target template guided search. Search time costs were observed in the C-G and G conditions, highlighting the importance of color in peripheral vision. In addition, a gaze-data based decomposition of search time revealed color-mediated effects on specific sub-processes of search. When color was not available in peripheral vision, it took longer to initiate search, and to locate the search object in the scene. When color was not available in central vision, however, the process of verifying the identity of the target was prolonged. In conclusion, color-information in peripheral vision facilitates saccade target selection. |
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Look into my eyes: Investigating joint attention using interactive eye-tracking and fMRI in a developmental sample Journal Article In: NeuroImage, vol. 130, pp. 248–260, 2016. @article{Oberwelland2016, Joint attention, the shared attentional focus of at least two people on a third significant object, is one of the earliest steps in social development and an essential aspect of reciprocal interaction. However, the neural basis of joint attention (JA) in the course of development is completely unknown. The present study made use of an interactive eye-tracking paradigm in order to examine the developmental trajectories of JA and the influence of a familiar interaction partner during the social encounter. Our results show that across children and adolescents JA elicits a similar network of "social brain" areas as well as attention and motor control associated areas as in adults. While other-initiated JA particularly recruited visual, attention and social processing areas, self-initiated JA specifically activated areas related to social cognition, decision-making, emotions and motivational/reward processes highlighting the rewarding character of self-initiated JA. Activation was further enhanced during self-initiated JA with a familiar interaction partner. With respect to developmental effects, activation of the precuneus declined from childhood to adolescence and additionally shifted from a general involvement in JA towards a more specific involvement for self-initiated JA. Similarly, the temporoparietal junction (TPJ) was broadly involved in JA in children and more specialized for self-initiated JA in adolescents. Taken together, this study provides first-time data on the developmental trajectories of JA and the effect of a familiar interaction partner incorporating the interactive character of JA, its reciprocity and motivational aspects. |
Bartholomäus Odoj; Daniela Balslev Role of oculoproprioception in coding the locus of attention Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 3, pp. 517–528, 2016. @article{Odoj2016, The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a non-visual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention. |
Lauri Oksama; Jukka Hyönä Position tracking and identity tracking are separate systems: Evidence from eye movements Journal Article In: Cognition, vol. 146, pp. 393–409, 2016. @article{Oksama2016, How do we track multiple moving objects in our visual environment? Some investigators argue that tracking is based on a parallel mechanism (e.g., Cavanagh & Alvarez, 2005; Pylyshyn, 1989), others argue that tracking contains a serial component (e.g. Holcombe & Chen, 2013; Oksama & Hyönä, 2008). In the present study, we put previous theories into a direct test by registering observers' eye movements when they tracked identical moving targets (the MOT task) or when they tracked distinct object identities (the MIT task). The eye movement technique is a useful tool to study whether overt focal attention is exploited during tracking. We found a qualitative difference between these tasks in terms of eye movements. When the participants tracked only position information (MOT), the observers had a clear preference for keeping their eyes fixed for a rather long time on the same screen position. In contrast, active eye behavior was observed when the observers tracked the identities of moving objects (MIT). The participants updated over four target identities with overt attention shifts. These data suggest that there are two separate systems involved in multiple object tracking. The position tracking system keeps track of the positions of the moving targets in parallel without the need of overt attention shifts in the form of eye movements. On the other hand, the identity tracking system maintains identity-location bindings in a serial fashion by utilizing overt attention shifts. |
Rosanna K. Olsen; Vinoja Sebanayagam; Yunjo Lee; Morris Moscovitch; Cheryl L. Grady; R. Shayna Rosenbaum; Jennifer D. Ryan The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia Journal Article In: Cortex, vol. 85, pp. 182–193, 2016. @article{Olsen2016, There is consistent agreement regarding the positive relationship between cumulative eye movement sampling and subsequent recognition, but the role of the hippocampus in this sampling behavior is currently unknown. It is also unclear whether the eye movement repetition effect, i.e., fewer fixations to repeated, compared to novel, stimuli, depends on explicit recognition and/or an intact hippocampal system. We investigated the relationship between cumulative sampling, the eye movement repetition effect, subsequent memory, and the hippocampal system. Eye movements were monitored in a developmental amnesic case (H.C.), whose hippocampal system is compromised, and in a group of typically developing participants while they studied single faces across multiple blocks. The faces were studied from the same viewpoint or different viewpoints and were subsequently tested with the same or different viewpoint. Our previous work suggested that hippocampal representations support explicit recognition for information that changes viewpoint across repetitions (Olsen et al., 2015). Here, examination of eye movements during encoding indicated that greater cumulative sampling was associated with better memory among controls. Increased sampling, however, was not associated with better explicit memory in H.C., suggesting that increased sampling only improves memory when the hippocampal system is intact. The magnitude of the repetition effect was not correlated with cumulative sampling, nor was it related reliably to subsequent recognition. These findings indicate that eye movements collect information that can be used to strengthen memory representations that are later available for conscious remembering, whereas eye movement repetition effects reflect a processing change due to experience that does not necessarily reflect a memory representation that is available for conscious appraisal. Lastly, H.C. demonstrated a repetition effect for fixed viewpoint faces but not for variable viewpoint faces, which suggests that repetition effects are differentially supported by neocortical and hippocampal systems, depending upon the representational nature of the underlying memory trace. |
Jacob L. Orquin; Nathaniel J. S. Ashby; Alasdair D. F. Clarke Areas of interest as a signal detection problem in behavioral eye-tracking research Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 103–115, 2016. @article{Orquin2016, Decision researchers frequently analyze attention to individual objects to test hypotheses about underlying cognitive processes. Generally, fixations are assigned to objects using a method known as area of interest (AOI). Ideally, an AOI includes all fixations belonging to an object while fixations to other objects are excluded. Unfortunately, due to measurement inaccuracy and insufficient distance between objects, the distributions of fixations to objects may overlap, resulting in a signal detection problem. If the AOI is to include all fixations to an object, it will also likely include fixations belonging to other objects (false positives). In a survey, we find that many researchers report testing multiple AOI sizes when performing analyses, presumably trying to balance the proportion of true and false positive fixations. To test whether AOI size influences the measurement of object attention and conclusions drawn about cognitive processes, we reanalyze four published studies and conduct a fifth tailored to our purpose. We find that in studies in which we expected overlapping fixation distributions, analyses benefited from smaller AOI sizes (0° visual angle margin). In studies where we expected no overlap, analyses benefited from larger AOI sizes (>.5° visual angle margins). We conclude with a guideline for the use of AOIs in behavioral eye-tracking research. |
Arman Abrahamyan; Laura Luz Silva; Steven C. Dakin; Matteo Carandini; Justin L. Gardner Adaptable history biases in human perceptual decisions Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 25, pp. E3548–E3557, 2016. @article{Abrahamyan2016, When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics. |
Michael Morgan; Kai Schreiber; J. A. Solomon Low-level mediation of directionally specific motion aftereffects: Motion perception is not necessary Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 8, pp. 2621–2632, 2016. @article{Morgan2016, Previous psychophysical experiments with normal human observers have shown that adaptation to a moving dot stream causes directionally specific repulsion in the perceived angle of a subsequently viewed moving probe. In this study, we used a two-alternative forced choice task with roving pedestals to determine the conditions that are necessary and sufficient for producing directionally specific repulsion with compound adaptors, each of which contains two oppositely moving, differently colored component streams. Experiment 1 provided a demonstration of repulsion between single-component adaptors and probes moving at approximately 90° or 270°. In Experiment 2, oppositely moving dots in the adaptor were paired to preclude the appearance of motion. Nonetheless, repulsion remained strong when the angle between each probe stream and one component was approximately 30°. In Experiment 3, adapting dot pairs were kept stationary during their limited lifetimes. Their orientation content alone proved insufficient for producing repulsion. In Experiments 4–6, the angle between the probe and both adapting components was approximately 90° or 270°. Directional repulsion was found when observers were asked to visually track one of the adapting components (Exp. 6), but not when they were asked to attentionally track it (Exp. 5), nor while they passively viewed the adaptor (Exp. 4). Our results are consistent with a low-level mechanism for motion adaptation. This mechanism is not selective for stimulus color and is not susceptible to attentional modulation. The most likely cortical locus of adaptation is area V1. |
Alexandra S. Mueller; Esther G. González; Chris McNorgan; Martin J. Steinbach; Brian Timney Effects of vertical direction and aperture size on the perception of visual acceleration Journal Article In: Perception, vol. 45, no. 6, pp. 670–683, 2016. @article{Mueller2016a, It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction and vertical extent over which an observer can track the moving stimulus. |
Stefanie Mueller; Katja Fiehler Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement Journal Article In: Neuropsychologia, vol. 87, pp. 63–73, 2016. @article{Mueller2016, Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed 2 conditions in which the target hand remained either stationary at the target location (stationary condition) or was actively moved to the target location, received a touch and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition while body- and gaze-centered coding contributed equally strong in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets. |
Manon Mulckhuyse; Edwin S. Dalmaijer Distracted by danger: Temporal and spatial dynamics of visual selection in the presence of threat Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 16, no. 2, pp. 315–324, 2016. @article{Mulckhuyse2016, Threatening stimuli are known to influence attentional and visual processes in order to prioritize selection. For example, previous research showed faster detection of threatening relative to nonthreatening stimuli. This has led to the proposal that threatening stimuli are prioritized automatically via a rapid subcortical route. However, in most studies, the threatening stimulus is always to some extent task relevant. Therefore, it is still unclear if threatening stimuli are automatically prioritized by the visual system. We used the additional singleton paradigm with task-irrelevant fear-conditioned distractors (CS+ and CS-) and indexed the time course of eye movement behavior. The results demonstrate automatic prioritization of threat. First, mean latency of saccades directed to the neutral target was increased in the presence of a threatening (CS+) relative to a nonthreatening distractor (CS-), indicating exogenous attentional capture and delayed disengagement of covert attention. Second, more error saccades were directed to the threatening than to the nonthreatening distractor, indicating a modulation of automatically driven saccades. Nevertheless, cumulative distributions of the saccade latencies showed no modulation of threat for the fastest goal-driven saccades, and threat did not affect the latency of the error saccades to the distractors. Together these results suggest that threatening stimuli are automatically prioritized in attentional and visual selection but not via faster processing. Rather, we suggest that prioritization results from an enhanced representation of the threatening stimulus in the oculomotor system, which drives attentional and visual selection. The current findings are interpreted in terms of a neurobiological model of saccade programming. |
Peter R. Murphy; Evert Boonstra; Sander Nieuwenhuis Global gain modulation generates time-dependent urgency during perceptual choice in humans Journal Article In: Nature Communications, vol. 7, pp. 13526, 2016. @article{Murphy2016a, Decision-makers must often balance the desire to accumulate information with the costs of protracted deliberation. Optimal, reward-maximizing decision-making can require dynamic adjustment of this speed/accuracy trade-off over the course of a single decision. However, it is unclear whether humans are capable of such time-dependent adjustments. Here, we identify several signatures of time-dependency in human perceptual decision-making and highlight their possible neural source. Behavioural and model-based analyses reveal that subjects respond to deadline-induced speed pressure by lowering their criterion on accumulated perceptual evidence as the deadline approaches. In the brain, this effect is reflected in evidence-independent urgency that pushes decision-related motor preparation signals closer to a fixed threshold. Moreover, we show that global modulation of neural gain, as indexed by task-related fluctuations in pupil diameter, is a plausible biophysical mechanism for the generation of this urgency. These findings establish context-sensitive time-dependency as a critical feature of human decision-making. |
Andriy Myachykov; Rob Ellis; Angelo Cangelosi; Martin H. Fischer Ocular drift along the mental number line Journal Article In: Psychological Research, vol. 80, no. 3, pp. 379–388, 2016. @article{Myachykov2016, We examined the spontaneous association between numbers and space by documenting attention deployment and the time course of associated spatial-numerical mapping with and without overt oculomotor responses. In Experiment 1, participants maintained central fixation while listening to number names. In Experiment 2, they made horizontal target-directed saccades following auditory number presentation. In both experiments, we continuously measured spontaneous ocular drift in horizontal space during and after number presentation. Experiment 2 also measured visual-probe-directed saccades following number presentation. Reliable ocular drift congruent with a horizontal mental number line emerged during and after number presentation in both experiments. Our results provide new evidence for the implicit and automatic nature of the oculomotor resonance effect associated with the horizontal spatial-numerical mapping mechanism. |
Hamidreza Namazi; Vladimir V. Kulish; Amin Akrami The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal Journal Article In: Scientific Reports, vol. 6, pp. 26639, 2016. @article{Namazi2016, One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the 'complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders. |
Marko Nardini; Jennifer Bales; Denis Mareschal Integration of audio-visual information for spatial decisions in children and adults Journal Article In: Developmental Science, vol. 19, no. 5, pp. 803–816, 2016. @article{Nardini2016, In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense. However, this finding varies significantly across development. Here we studied speeded responding to audio-visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together compared with either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal ‘race model', as well as with a major recent extension of this model and a comparable ‘pooling' (coactivation) model. The findings and analyses can reconcile results from previous audio-visual studies, in which infants showed speed gains exceeding race model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence for early and sustained abilities to integrate visual and auditory signals for spatial localization from a young age. |
Natsuki Atagi; Melissa DeWolf; James W. Stigler; Scott P. Johnson The role of visual representations in college students' understanding of mathematical notation Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 3, pp. 295–304, 2016. @article{Atagi2016, Developing understanding of fractions involves connections between nonsymbolic visual representations and symbolic representations. Initially, teachers introduce fraction concepts with visual representations before moving to symbolic representations. Once the focus is shifted to symbolic representations, the connections between visual representations and symbolic notation are considered to be less useful, and students are rarely asked to connect symbolic notation back to visual representations. In 2 experiments, we ask whether visual representations affect understanding of symbolic notation for adults who understand symbolic notation. In a conceptual fraction comparison task (e.g., Which is larger, 5 / a or 8 / a?), participants were given comparisons paired with accurate, helpful visual representations, misleading visual representations, or no visual representations. The results show that even college students perform significantly better when accurate visuals are provided over misleading or no visuals. Further, eye-tracking data suggest that these visual representations may affect performance even when only briefly looked at. Implications for theories of fraction understanding and education are discussed. |
Nada Attar; Matthew H. Schneps; Marc Pomplun Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm Journal Article In: Memory & Cognition, vol. 44, no. 7, pp. 1038–1049, 2016. @article{Attar2016, An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process. |
Janice Attard-Johnson; Markus Bindemann; Caoilte Ó Ciardha Pupillary response as an age-specific measure of sexual interest Journal Article In: Archives of Sexual Behavior, vol. 45, no. 4, pp. 855–870, 2016. @article{AttardJohnson2016, In the visual processing of sexual content, pupil dilation is an indicator of arousal that has been linked to observers' sexual orientation. This study investigated whether this measure can be extended to determine age-specific sexual interest. In two experiments, the pupillary responses of heterosexual adults to images of males and females of different ages were related to self-reported sexual interest, sexual appeal to the stimuli, and a child molestation proclivity scale. In both experiments, the pupils of male observers dilated to photographs of women but not men, children, or neutral stimuli. These pupillary responses corresponded with observers' self-reported sexual interests and their sexual appeal ratings of the stimuli. Female observers showed pupil dilation to photographs of men and women but not children. In women, pupillary responses also correlated poorly with sexual appeal ratings of the stimuli. These experiments provide initial evidence that eye-tracking could be used as a measure of sex-specific interest in male observers, and as an age-specific index in male and female observers. |
Bobby Azarian; Elizabeth G. Esser; Matthew S. Peterson Watch out! Directional threat-related postures cue attention and the eyes Journal Article In: Cognition and Emotion, vol. 30, no. 3, pp. 561–569, 2016. @article{Azarian2016a, Previous work indicates that threatening facial expressions with averted eye gaze can act as a signal of imminent danger, enhancing attentional orienting in the gazed-at direction. However, this threat-related gaze-cueing effect is only present in individuals reporting high levels of anxiety. The present study used eye tracking to investigate whether additional directional social cues, such as averted angry and fearful human body postures, not only cue attention, but also the eyes. The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour. |
Bobby Azarian; Elizabeth G. Esser; Matthew S. Peterson Evidence from the eyes: Threatening postures hold attention Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 3, pp. 764–770, 2016. @article{Azarian2016, Efficient detection of threat provides obvious survival advantages and has resulted in a fast and accurate threat-detection system. Although beneficial under normal circumstances, this system may become hypersensitive and cause threat-processing abnormalities. Past research has shown that anxious individuals have difficulty disengaging attention from threatening faces, but it is unknown whether other forms of threatening social stimuli also influence attentional orienting. Much like faces, human body postures are salient social stimuli, because they are informative of one's emotional state and next likely action. Additionally, postures can convey such information in situations in which another's facial expression is not easily visible. Here we investigated whether there is a threat-specific effect for high-anxious individuals, by measuring the time that it takes the eyes to leave the attended stimulus, a task-irrelevant body posture. The results showed that relative to nonthreatening postures, threat-related postures hold attention in anxious individuals, providing further evidence of an anxiety-related attentional bias for threatening information. This is the first study to demonstrate that attentional disengagement from threatening postures is affected by emotional valence in those reporting anxiety. |
Mariana Babo-Rebelo; Craig G. Richter; Catherine Tallon-Baudry Neural responses to heartbeats in the default network encode the self in spontaneous thoughts Journal Article In: Journal of Neuroscience, vol. 36, no. 30, pp. 7829–7840, 2016. @article{BaboRebelo2016, The default network (DN) has been consistently associated with self-related cognition, but also to bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought ("I"), and on another scale to what degree they were thinking about themselves ("Me"). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the "I" and the "Me" dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. 
More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN. |
Julia Bahnmueller; Stefan Huber; Hans-Christoph Nuerk; Silke M. Göbel; Korbinian Moeller Processing multi-digit numbers: a translingual eye-tracking study Journal Article In: Psychological Research, vol. 80, no. 3, pp. 422–433, 2016. @article{Bahnmueller2016, The present study aimed at investigating the underlying cognitive processes and language specificities of three-digit number processing. More specifically, it was intended to clarify whether the single digits of three-digit numbers are processed in parallel and/or sequentially and whether processing strategies are influenced by the inversion of number words with respect to the Arabic digits [e.g., 43: dreiundvierzig (“three and forty”)] and/or by differences in reading behavior of the respective first language. Therefore, English- and German-speaking adults had to complete a three-digit number comparison task while their eye-fixation behavior was recorded. Replicating previous results, reliable hundred-decade-compatibility effects (e.g., 742_896: hundred-decade compatible because 7 < 8 and 4 < 9; 362_517: hundred-decade incompatible because 3 < 5 but 6 > 1) for English- as well as hundred-unit-compatibility effects for English- and German-speaking participants were observed, indicating parallel processing strategies. While no indices of partial sequential processing were found for the English-speaking group, about half of the German-speaking participants showed an inverse hundred-decade-compatibility effect accompanied by longer inspection time on the hundred digit indicating additional sequential processes. Thereby, the present data revealed that in transition from two- to higher multi-digit numbers, the homogeneity of underlying processing strategies varies between language groups. The regular German orthography (allowing for letter-by-letter reading) and its associated more sequential reading behavior may have promoted sequential processing strategies in multi-digit number processing. 
Furthermore, these results indicated that the inversion of number words alone is not sufficient to explain all observed language differences in three-digit number processing. |
Akram Bakkour; Christina Leuker; Ashleigh M. Hover; Nathan Giles; Russell A. Poldrack; Tom Schonberg Mechanisms of choice behavior shift using cue-approach training Journal Article In: Frontiers in Psychology, vol. 7, pp. 421, 2016. @article{Bakkour2016, Cue-approach training has been shown to effectively shift choices for snack food items by associating a cued button-press motor response to particular food items. Furthermore, attention is biased toward previously cued items, even when the cued item is not chosen for real consumption during a choice phase. However, the exact mechanism by which preferences shift during cue-approach training is not entirely clear. In three experiments, we shed light on the possible underlying mechanisms at play during this novel paradigm: 1) Uncued, wholly predictable motor responses paired with particular food items were not sufficient to elicit a preference shift; 2) Cueing motor responses early – concurrently with food item onset – and thus eliminating the need for heightened top-down attention to the food stimulus in preparation for a motor response also eliminated the shift in food preferences. This finding reinforces our hypothesis that heightened attention at behaviorally relevant points in time is key to changing choice behavior in the cue-approach task; 3) Crucially, indicating choice using eye movements rather than manual button presses preserves the effect, thus demonstrating that the shift in preferences is not governed by a learned motor response but more likely via modulation of subjective value in higher associative regions, consistent with previous neuroimaging results. Cue-approach training drives attention at behaviorally relevant points in time to modulate the subjective value of individual items, providing a mechanism for behavior change that does not rely on external reinforcement and that holds great promise for developing real world behavioral interventions. |
Anjuli L. A. Barber; Dania Randi; Corsin A. Muller; Ludwig Huber The processing of human emotional faces by pet and lab dogs: Evidence for lateralization and experience effects Journal Article In: PLoS ONE, vol. 11, no. 4, pp. e0152393, 2016. @article{Barber2016, Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments and with varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for the influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. |
Yasmine El-Shamayleh; Anitha Pasupathy Contour curvature as an invariant code for objects in visual area V4 Journal Article In: Journal of Neuroscience, vol. 36, no. 20, pp. 5532–5543, 2016. @article{ElShamayleh2016, Size-invariant object recognition-the ability to recognize objects across transformations of scale-is a fundamental feature of biological and artificial vision. To investigate its basis in the primate cerebral cortex, we measured single neuron responses to stimuli of varying size in visual area V4, a cornerstone of the object-processing pathway, in rhesus monkeys (Macaca mulatta). Leveraging two competing models for how neuronal selectivity for the bounding contours of objects may depend on stimulus size, we show that most V4 neurons (∼70%) encode objects in a size-invariant manner, consistent with selectivity for a size-independent parameter of boundary form: for these neurons, "normalized" curvature, rather than "absolute" curvature, provided a better account of responses. Our results demonstrate the suitability of contour curvature as a basis for size-invariant object representation in the visual cortex, and posit V4 as a foundation for behaviorally relevant object codes. |
Jasper H. Fabius; Alessio Fracasso; Stefan Van Der Stigchel Spatiotopic updating facilitates perception immediately after saccades Journal Article In: Scientific Reports, vol. 6, pp. 34488, 2016. @article{Fabius2016, As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates, and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as large rotational jumps in the direction opposite to rotation direction (backward jumps). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and are in support of a forward model of transsaccadic perception, aiming at anticipating the consequences of eye movements and operating within the narrow perisaccadic time window. |
Jasper H. Fabius; Martijn J. Schut; Stefan Van der Stigchel Spatial inhibition of return as a function of fixation history, task, and spatial references Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1633–1641, 2016. @article{Fabius2016a, In oculomotor selection, each saccade is thought to be automatically biased toward uninspected locations, inhibiting the inefficient behavior of repeatedly refixating the same objects. This automatic bias is related to inhibition of return (IOR). Although IOR seems an appealing property that increases efficiency in visual search, such a mechanism would not be efficient in other tasks. Indeed, evidence for additional, more flexible control over refixations has been provided. Here, we investigated whether task demands implicitly affect the rate of refixations. We measured the probability of refixations after series of six binary saccadic decisions under two conditions: visual search and free viewing. The rate of refixations seems influenced by two effects. One effect is related to the rate of intervening fixations, specifically, more refixations were observed with more intervening fixations. In addition, we observed an effect of task set, with fewer refixations in visual search than in free viewing. Importantly, the history-related effect was more pronounced when sufficient spatial references were provided, suggesting that this effect is dependent on spatiotopic encoding of previously fixated locations. This known history-related bias in gaze direction is not the primary influence on the refixation rate. Instead, multiple factors, such as task set and spatial references, exert strong influences as well. |
Laura Fademrecht; Isabelle Bülthoff; Stephan de la Rosa Action recognition in the visual periphery Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–14, 2016. @article{Fademrecht2016, Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first level and second level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities. |
Xiaoxu Fan; Lan Wang; Hanyu Shao; Daniel Kersten; Sheng He Temporally flexible feedback signal to foveal cortex for peripheral object recognition Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11627–11632, 2016. @article{Fan2016, Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise in fovea at different object-noise stimulus onset asynchronies (SOAs), whereas subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects' spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that the foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at a stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image and object category-relevant information of peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery. |
Gerardo Fernández; Facundo Manes; Luis E. Politi; David Orozco; Marcela Schumacher; Liliana Castro; Osvaldo Agamennoni; Nora P. Rotstein Patients with mild Alzheimer's disease fail when using their working memory: Evidence from the eye tracking technique Journal Article In: Journal of Alzheimer's Disease, vol. 50, no. 3, pp. 827–838, 2016. @article{Fernandez2016a, Patients with Alzheimer's disease (AD) develop progressive language, visuoperceptual, attentional, and oculomotor changes that can have an impact on their reading comprehension. However, few studies have examined reading behavior in AD, and none have examined the contribution of predictive cueing in reading performance. For this purpose we analyzed the eye movement behavior of 35 healthy readers (Controls) and 35 patients with probable AD during reading of regular and high-predictable sentences. The cloze predictability of words N−1 and N+1 exerted an influence on the reader's gaze duration. The predictabilities of preceding words in high-predictable sentences served as task-appropriate cues that were used by Control readers. In contrast, these effects were not present in AD patients. In Controls, changes in predictability significantly affected fixation duration along the sentence; noteworthy, these changes did not affect fixation durations in AD patients. Hence, only in healthy readers did predictability of upcoming words influence fixation durations via memory retrieval. Our results suggest that Controls used stored information of familiar texts for enhancing their reading performance and imply that contextual-word predictability, whose processing is proposed to require memory retrieval, only affected reading behavior in healthy subjects. In AD patients, this loss reveals impairments in brain areas such as those corresponding to working memory and memory retrieval. These findings might be relevant for expanding the options for the early detection and monitoring in the early stages of AD. 
Furthermore, evaluation of eye movements during reading could provide a new tool for measuring drug impact on patients' behavior. |
Aline Ferreira; John Wayne Schwieter; Alexandra Gottardo; Jefferey Jones Cognitive effort in direct and inverse translation performance: Insight from eye-tracking technology Journal Article In: Cadernos de Tradução, vol. 36, no. 3, pp. 60–80, 2016. @article{Ferreira2016, This case study examined the translation performance of four professional translators with the aim of exploring the cognitive effort involved in direct and inverse translation. Four professional translators translated two comparable texts from English into Spanish and from Spanish into English. Eye-tracking technology was used to analyze the total time spent in each task, fixation time, and average fixation time. Fixation count in three areas of interest was measured including: source text, target text, and browser, used as an external support. Results suggested that although total time and fixation count were indicators of cognitive effort during the tasks, fixation count in the areas of interest data showed that more effort was directed toward the source text in both tasks. Overall, this study demonstrates that while more traditional measures for translation difficulty (e.g., total time) indicate more effort in the inverse translation task, eye-tracking data indicate that differences in the effort applied in both directions must be carefully analyzed, mostly regarding the areas of interest. |
Jamie Ferri; Joseph Schmidt; Greg Hajcak; Turhan Canli Emotion regulation and amygdala-precuneus connectivity: Focusing on attentional deployment Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 16, no. 6, pp. 991–1002, 2016. @article{Ferri2016, Attentional deployment is an emotion regulation strategy that involves shifting attentional focus. Deploying attention to non-arousing, compared to arousing, regions of unpleasant images has been associated with reduced negative affect, reduced amygdala activation, and increased activity in fronto-parietal control networks. The current study examined neural correlates and functional connectivity associated with using attentional deployment to increase negative affect (deploying attention towards arousing unpleasant information) or to decrease negative affect (deploying attention away from arousing unpleasant information), compared to naturally viewing unpleasant images, in 42 individuals while concurrently monitoring eye movements. Directing attention to both arousing and non-arousing regions resulted in enhanced fronto-parietal activation compared to natural viewing, but only directing attention to non-arousing regions was associated with changes in amygdala activation. There were no significant differences in connectivity between naturally viewing unpleasant images and focusing on arousing regions. However, naturally viewing unpleasant images, relative to focusing on non-arousing regions, was associated with increased connectivity between the amygdala and visual cortex, while focusing on non-arousing regions of unpleasant images, compared to natural viewing, was associated with increased connectivity between the amygdala and the precuneus. Amygdala-precuneus connectivity correlated positively with eye-tracking measures of attentional deployment success and with trait reappraisal. 
Deploying attention away from arousing unpleasant information, then, may depend upon functional relationships between the amygdala and parietal regions implicated in attentional control. Furthermore, these relationships might relate to the ability to successfully implement attentional deployment, and the predisposition to utilize adaptive emotion regulation strategies. |
Nonie J. Finlayson; Julie D. Golomb Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth Journal Article In: Vision Research, vol. 127, pp. 49–56, 2016. @article{Finlayson2016, A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information – not position-in-depth – seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. |
Petra Fischer; José P. Ossandón; Johannes Keyser; Alessandro Gulberti; Niklas Wilming; Wolfgang Hamel; Johannes Köppen; Carsten Buhmann; Manfred Westphal; Christian Gerloff; Christian K. E. Moll; Andreas K. Engel; Peter König STN-DBS reduces saccadic hypometria but not visuospatial bias in Parkinson's disease patients Journal Article In: Frontiers in Behavioral Neuroscience, vol. 10, pp. 85, 2016. @article{Fischer2016, In contrast to its well-established role in alleviating skeleto-motor symptoms in Parkinson's disease, little is known about the impact of deep brain stimulation (DBS) of the subthalamic nucleus (STN) on oculomotor control and attention. Eye-tracking data of 17 patients with left-hemibody symptom onset was compared with 17 age-matched control subjects. Free-viewing of natural images was assessed without stimulation as baseline and during bilateral DBS. To examine the involvement of ventral STN territories in oculomotion and spatial attention, we employed unilateral stimulation via the left and right ventralmost contacts respectively. When DBS was off, patients showed shorter saccades and a rightward viewing bias compared with controls. Bilateral stimulation in therapeutic settings improved saccadic hypometria but not the visuospatial bias. At a group level, unilateral ventral stimulation yielded no consistent effects. However, the evaluation of electrode position within normalized MNI coordinate space revealed that the extent of early exploration bias correlated with the precise stimulation site within the left subthalamic area. These results suggest that oculomotor impairments (but not higher-level exploration patterns) are effectively ameliorable by DBS in therapeutic settings. Our findings highlight the relevance of the STN topography in selecting contacts for chronic stimulation especially upon appearance of visuospatial attention deficits. |
Meg Fluharty; Ines Jentzsch; Manuel Spitschan; Dhanraj Vishwanath Eye fixation during multiple object attention is based on a representation of discrete spatial foci Journal Article In: Scientific Reports, vol. 6, pp. 31832, 2016. @article{Fluharty2016, We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection. Together with previous studies, our findings are compatible with a view that attentional selection and fixation rely on shared spatial representations and suggest a more nuanced definition of overt vs. covert attention. |
Rebecca M. Foerster In: Frontiers in Psychology, vol. 7, pp. 1845, 2016. @article{Foerster2016, When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated by a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. The targets' visual features and locations stayed constant for 65 prechange-trials, allowing participants to practice the manual action sequence. Consecutively, a task-irrelevant expectation violation occurred and stayed for 20 change-trials. Specifically, action target number 4 appeared in a different font. In 15 reversion-trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor-paths remained completely unaffected. Effects lasted for 2–3 change trials and did not reappear during reversion. 
In conclusion, an unexpected task-irrelevant change on a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a “check-after-surprise” mode of attentional selection. |
Tomas Folke; Catrine Jacobsen; Stephen M. Fleming; Benedetto De Martino Explicit representation of confidence informs future value-based decisions Journal Article In: Nature Human Behaviour, vol. 1, pp. 0002, 2016. @article{Folke2016, Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants' eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions. |
Tom Foulsham; Dean Wybrow; Neil Cohn Reading without words: Eye movements in the comprehension of comic strips Journal Article In: Applied Cognitive Psychology, vol. 30, no. 4, pp. 566–579, 2016. @article{Foulsham2016, The study of attention in pictures is mostly limited to individual images. When we 'read' a visual narrative (e.g., a comic strip), the pictures have a coherent sequence, but it is not known how this affects attention. In two experiments, we eyetracked participants in order to investigate how disrupting the visual sequence of a comic strip would affect attention. Both when panels were presented one at a time (Experiment 1) and when a sequence was presented all together (Experiment 2), pictures were understood more quickly and with fewer fixations when in their original order. When order was randomised, the same pictures required more attention and additional 'regressions'. Fixation distributions also differed when the narrative was intact, showing that context affects where we look. This reveals the role of top-down structures when we attend to pictorial information, as well as providing a springboard for applied research into attention within image sequences. |
Alessio Fracasso; Yvonne Koenraads; Giorgio L. Porro; Serge O. Dumoulin Bilateral population receptive fields in congenital hemihydranencephaly Journal Article In: Ophthalmic and Physiological Optics, vol. 36, no. 3, pp. 324–334, 2016. @article{Fracasso2016a, PURPOSE: Congenital hemihydranencephaly (HH) is a very rare disorder characterised by prenatal near-complete unilateral loss of the cerebral cortex. We investigated a patient affected by congenital right HH whose visual field extended significantly into both visual hemifields, suggesting a reorganisation of the remaining left visual hemisphere. We examined the early visual cortex reorganisation using functional MRI (7T) and population receptive field (pRF) modelling. METHODS: Data were acquired by means of a 7T MRI while the patient affected by HH viewed conventional population receptive field mapping stimuli. Two possible pRF reorganisation schemes were evaluated: where every cortical location processed information from either (i) a single region of the visual field or (ii) two bilateral regions of the visual field. RESULTS: In the patient affected by HH, bilateral pRFs in single cortical locations of the remaining hemisphere were found. In addition, using this specific pRF reorganisation scheme, the biologically known relationship between pRF size and eccentricity was found. CONCLUSIONS: Bilateral pRFs were found in the remaining left hemisphere of the patient affected by HH, indicating reorganisation of intra-cortical wiring of the early visual cortex and confirming brain plasticity and reorganisation after an early cerebral damage in humans. |
Stefan Frässle; Sören Krach; Frieder M. Paulus; Andreas Jansen Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing Journal Article In: Scientific Reports, vol. 6, pp. 27153, 2016. @article{Fraessle2016, While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. |
Stefan Frässle; Frieder M. Paulus; Sören Krach; Stefan Robert Schweinberger; Klaas Enno Stephan; Andreas Jansen Mechanisms of hemispheric lateralization: Asymmetric interhemispheric recruitment in the face perception network Journal Article In: NeuroImage, vol. 124, pp. 977–988, 2016. @article{Fraessle2016a, Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. 
Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. |
Mallory Frayn; Christopher R. Sears; Kristin M. von Ranson A sad mood increases attention to unhealthy food images in women with food addiction Journal Article In: Appetite, vol. 100, pp. 55–63, 2016. @article{Frayn2016, Food addiction and emotional eating both influence eating and weight, but little is known of how negative mood affects the attentional processes that may contribute to food addiction. The purpose of this study was to compare attention to food images in adult women (N = 66) with versus without food addiction, before and after a sad mood induction (MI). Participants' eye fixations were tracked and recorded throughout 8-s presentations of displays with healthy food, unhealthy food, and non-food images. Food addiction was self-reported using the Yale Food Addiction Scale. The sad MI involved watching an 8-min video about a young child who passed away from cancer. It was predicted that: (1) participants in the food addiction group would attend to unhealthy food significantly more than participants in the control group, and (2) participants in the food addiction group would increase their attention to unhealthy food images following the sad MI, due to increased emotional reactivity and poorer emotional regulation. As predicted, the sad MI had a different effect for those with versus without food addiction: for participants with food addiction, attention to unhealthy images increased following the sad MI and attention to healthy images decreased, whereas for participants without food addiction the sad MI did not alter attention to food. These findings contribute to researchers' understanding of the cognitive factors underlying food addiction. |
Susannah F. Freebody; Gustav Kuhn Own-age biases in adults' and children's joint attention: Biased face prioritization, but not gaze following! Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 2, pp. 372–379, 2016. @article{Freebody2016, Previous studies have reported own-age biases in younger and older adults in gaze following. We investigated own-age biases in social attentional processes between adults and children by focusing on two aspects of the joint attention process: the extent to which people attend towards an individual's face, and the extent to which they fixate objects that are looked at by this person (i.e., gaze following). Participants viewed images that always contained a child and an adult who either looked towards each other or each looked at objects located to their side. Observers consistently and rapidly fixated the actors' faces, though the children were faster to fixate the child's face than the adult's face, whilst the adults were faster to fixate the adult's face than the child's face. The children also spent significantly more time fixating the child's face than the adult's face, and the opposite pattern of results was found for the adults. Whilst both adults and children prioritized objects when they were looked at by the actor, both groups showed equivalent levels of gaze following, and there was no own-age bias for gaze following. Our results show an own-age bias for prioritizing faces, but not gaze following. |
Tal Golan; Ido Davidesco; Meir Meshulam; David M. Groppe; Pierre Mégevand; Erin M. Yeagle; Matthew S. Goldfinger; Michal Harel; Lucia Melloni; Charles E. Schroeder; Leon Y. Deouell; Ashesh D. Mehta; Rafael Malach Human intracranial recordings link suppressed transients rather than 'filling-in' to perceptual continuity across blinks Journal Article In: eLife, vol. 5, pp. 1–28, 2016. @article{Golan2016, We hardly notice our eye blinks, yet an externally generated retinal interruption of a similar duration is perceptually salient. We examined the neural correlates of this perceptual distinction using intracranially measured ECoG signals from human visual cortex in 14 patients. In early visual areas (V1 and V2), the disappearance of the stimulus due to either invisible blinks or salient blank video frames ('gaps') led to a similar drop in activity level, followed by a positive overshoot beyond baseline, triggered by stimulus reappearance. Ascending the visual hierarchy, the reappearance-related overshoot gradually subsided for blinks but not for gaps. By contrast, the disappearance-related drop did not follow the perceptual distinction - it was actually slightly more pronounced for blinks than for gaps. These findings suggest that blinks' limited visibility compared with gaps is correlated with suppression of blink-related visual activity transients, rather than with 'filling-in' of the occluded content during blinks. |
Gil Gonen-Yaacovi; Ayelet Arazi; Nitzan Shahar; Anat Karmon; Shlomi Haar; Nachshon Meiran; Ilan Dinstein Increased ongoing neural variability in ADHD Journal Article In: Cortex, vol. 81, pp. 50–63, 2016. @article{GonenYaacovi2016, Attention Deficit Hyperactivity Disorder (ADHD) has been described as a disorder where frequent lapses of attention impair the ability of an individual to focus/attend in a sustained manner, thereby generating abnormally large intra-individual behavioral variability across trials. Indeed, increased reaction time (RT) variability is a fundamental behavioral characteristic of individuals with ADHD found across a large number of cognitive tasks. But what is the underlying neurophysiology that might generate such behavioral instability? Here, we examined trial-by-trial EEG response variability to visual and auditory stimuli while subjects' attention was diverted to an unrelated task at the fixation cross. Comparisons between adult ADHD and control participants revealed that neural response variability was significantly larger in the ADHD group as compared with the control group in both sensory modalities. Importantly, larger trial-by-trial variability in ADHD was apparent before and after stimulus presentation as well as in trials where the stimulus was omitted, suggesting that ongoing (rather than stimulus-evoked) neural activity is continuously more variable (noisier) in ADHD. While the patho-physiological mechanisms causing this increased neural variability remain unknown, they appear to act continuously rather than being tied to a specific sensory or cognitive process. |
Claudia C. Gonzalez; Jac Billington; Melanie R. Burke The involvement of the fronto-parietal brain network in oculomotor sequence learning using fMRI Journal Article In: Neuropsychologia, vol. 87, pp. 1–11, 2016. @article{Gonzalez2016a, The basis of motor learning involves decomposing complete actions into a series of predictive individual components that form the whole. The present fMRI study investigated the areas of the human brain important for oculomotor short-term learning, by using a novel sequence learning paradigm that is equivalent in visual and temporal properties for both saccades and pursuit, enabling more direct comparisons between the oculomotor subsystems. In contrast with previous studies that have implemented a series of discrete ramps to observe predictive behaviour as evidence for learning, we presented a continuous sequence of interlinked components that better represents sequences of actions. We implemented both a classic univariate fMRI analysis, followed by a further multivariate pattern analysis (MVPA) within a priori regions of interest, to investigate oculomotor sequence learning in the brain and to determine whether these mechanisms overlap in pursuit and saccades as part of a higher order learning network. This study has uniquely identified an equivalent frontal-parietal network (dorsolateral prefrontal cortex, frontal eye fields and posterior parietal cortex) in both saccades and pursuit sequence learning. In addition, this is the first study to investigate oculomotor sequence learning during fMRI brain imaging, and makes significant contributions to understanding the role of the dorsal networks in motor learning. |
David A. Gonzalez; Ewa Niechwiej-Szwedo The effects of monocular viewing on hand-eye coordination during sequential grasping and placing movements Journal Article In: Vision Research, vol. 128, pp. 30–38, 2016. @article{Gonzalez2016, The contribution of binocular vision to the performance of reaching and grasping movements has been examined previously using single reach-to-grasp movements. However, most of our daily activities consist of more complex action sequences, which require precise temporal linking between the gaze behaviour and manual action phases. Many previous studies found a stereotypical hand-eye coordination pattern, such that the eyes move prior to the reach initiation. Moving the eyes to the target object provides information about its features and location, which can facilitate the predictive control of reaching and grasping. This temporal coordination pattern has been established for the performance of sequential movements performed during binocular viewing. Here we manipulated viewing condition and examined the temporal hand-eye coordination pattern during the performance of a sequential reaching, grasping, and placement task. Fifteen participants were tested on a sequencing task while eye and hand movements were recorded binocularly using a video-based eyetracker and a motion capture system. Our results showed that monocular viewing disrupted the temporal coordination between the eyes and the hand during the place-to-reach transition phase. Specifically, the gaze shift was delayed during monocular compared to binocular viewing. The shift in gaze behaviour may be due to increased uncertainty associated with the performance of the placement task because of increased vergence error during monocular viewing, which was evident in all participants. These findings provide insight into the role of binocular vision in predictive control of sequential reaching and grasping movements. |
Peter C. Gordon; Renske S. Hoedemaker Effective scheduling of looking and talking during rapid automatized naming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 5, pp. 742–760, 2016. @article{Gordon2016, Rapid automatized naming (RAN) is strongly related to literacy gains in developing readers, reading disabilities, and reading ability in children and adults. Because successful RAN performance depends on the close coordination of a number of abilities, it is unclear what specific skills drive this RAN-reading relationship. The current study used concurrent recordings of young adult participants' vocalizations and eye movements during the RAN task to assess how individual variation in RAN performance depends on the coordination of visual and vocal processes. Results showed that fast RAN times are facilitated by having the eyes 1 or more items ahead of the current vocalization, as long as the eyes do not get so far ahead of the voice as to require a regressive eye movement to an earlier item. These data suggest that optimizing RAN performance is a problem of scheduling eye movements and vocalization given memory constraints and the efficiency of encoding and articulatory control. Both RAN completion time (conventionally used to indicate RAN performance) and eye-voice relations predicted some aspects of participants' eye movements on a separate sentence reading task. However, eye-voice relations predicted additional features of first-pass reading that were not predicted by RAN completion time. This shows that measurement of eye-voice patterns can identify important aspects of individual variation in reading that are not identified by the standard measure of RAN performance. We argue that RAN performance predicts reading ability because both tasks entail challenges of scheduling cognitive and linguistic processes that operate simultaneously on multiple linguistic inputs. |
Dan J. Graham; Christina A. Roberto In: Health Education and Behavior, vol. 43, no. 4, pp. 389–398, 2016. @article{Graham2016, Background. The U.S. Food and Drug Administration (FDA) has proposed modifying the Nutrition Facts Label (NFL) on food packages to increase consumer attention to this resource and to promote healthier dietary choices. Aims. The present study sought to determine whether the proposed NFL changes will affect consumer attention to the NFL or purchase intentions. Method. This study compared purchase intentions (yes/no responses to “would you purchase this food?” for 64 products) and attention to NFLs (measured via high-speed eye-tracking camera) among 155 young adults randomly assigned to view products with existing versus modified NFLs. Attention to all individual components of the NFL (e.g., calories, fats, sugars) were analyzed separately to assess the impact of each proposed NFL modification on attention to that region. Data were collected in 2014; analysis was conducted in 2015. Results. Modified NFLs did not elicit significantly more visual attention or lead to more healthful purchase intentions than did existing NFLs. Relocating the percent daily value component from the right side of the NFL to the left side, as proposed by the FDA, actually reduced participants' attention to this information. The proposed “added sugars” component was viewed on at least one label by a majority (58%) of participants. Discussion. Results suggest that the proposed NFL changes may not achieve FDA's goals. Changes to nutrition labeling may need to take a different form to meaningfully influence dietary behavior. Conclusion. Young adults' visual attention and purchase intentions do not appear to be meaningfully affected by the proposed NFL modifications. |
Nicola J. Gregory; Frouke Hermens; Rebecca Facey; Timothy L. Hodgson The developmental trajectory of attentional orienting to socio-biological cues Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1351–1362, 2016. @article{Gregory2016, It has been proposed that the orienting of attention in the same direction as another's point of gaze relies on innate brain mechanisms which are present from birth, but direct evidence relating to the influence of eye gaze cues on attentional orienting in young children is limited. In two experiments, 137 children aged 3–10 years old performed an adapted pro-saccade task with centrally presented uninformative eye gaze, finger pointing and arrow pre-cues which were either congruent or incongruent with the direction of target presentations. When the central cue overlapped with presentation of the peripheral target (Experiment 1), children up to 5 years old had difficulty disengaging fixation from central fixation in order to saccade to the target. This effect was found to be particularly marked for eye gaze cues. When central cues were extinguished simultaneously with peripheral target onset (Experiment 2), this effect was greatly reduced. In both experiments finger pointing cues (image of pointing index finger presented at fixation) exerted a strong influence on saccade reaction time to the peripheral stimulus for the youngest group of children (<5 years). Overall the results suggest that although young children are strongly engaged by centrally presented eye gaze cues, the directional influence of such cues on overt attentional orienting is only present in older children, meaning that the effect is unlikely to be dependent upon an innate brain module. Instead, the results are consistent with the existence of stimulus–response associations which develop with age and environmental experience. |
Tjerk P. Gutteling; W. Pieter Medendorp Role of alpha-band oscillations in spatial updating across whole body motion Journal Article In: Frontiers in Psychology, vol. 7, pp. 671, 2016. @article{Gutteling2016, When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole body motion have been studied in detail, its neural implementation lacks detailed study. Here we use electro-encephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right from this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. 
When updating a world-fixed target, its internal representation shifts hemispheres, only when subjects' behavioral responses suggested an update across the body midline. Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body. |
Tobias Heed; Jenny Backhaus; Brigitte Röder; Stephanie Badde Disentangling the external reference frames relevant to tactile localization Journal Article In: PLoS ONE, vol. 11, no. 7, pp. e0158829, 2016. @article{Heed2016, Different reference frames appear to be relevant for tactile spatial coding. When participants give temporal order judgments (TOJ) of two tactile stimuli, one on each hand, performance declines when the hands are crossed. This effect is attributed to a conflict between anatomical and external location codes: hand crossing places the anatomically right hand into the left side of external space. However, hand crossing alone does not specify the anchor of the external reference frame, such as gaze, trunk, or the stimulated limb. Experiments that used explicit localization responses, such as pointing to tactile stimuli rather than crossing manipulations, have consistently implicated gaze-centered coding for touch. To test whether crossing effects can be explained by gaze-centered coding alone, participants made TOJ while the position of the hands was manipulated relative to gaze and trunk. The two hands either lay on different sides of space relative to gaze or trunk, or they both lay on one side of the respective space. In the latter posture, one hand was on its "regular side of space" despite hand crossing, thus reducing overall conflict between anatomical and external codes. TOJ crossing effects were significantly reduced when the hands were both located on the same side of space relative to gaze, indicating gaze-centered coding. Evidence for trunk-centered coding was tentative, with an effect in reaction time but not in accuracy. These results link paradigms that use explicit localization and TOJ, and corroborate the relevance of gaze-related coding for touch. 
Yet, gaze and trunk-centered coding did not account for the total size of crossing effects, suggesting that tactile localization relies on additional, possibly limb-centered, reference frames. Thus, tactile location appears to be estimated by integrating multiple anatomical and external reference frames. |
Karin Heidlmayr; Karine Dore-Mazars; Xavier Aparico; Frederic Isel In: PLoS ONE, vol. 11, no. 11, pp. e0165029, 2016. @article{Heidlmayr2016, In the present electroencephalographical study, we asked to what extent executive control processes are shared between the language and motor domains. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered an antisaccade task, i.e. a specific motor task involving control, to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Here, our main hypothesis is that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect better efficiency to monitor conflict, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. cue-locked positivity, the target-locked P3 and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) on the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay between strengthened conflict monitoring and subsequently more efficient inhibition in bilinguals. 
Finally, L2 proficiency and immersion experience are relevant language-background factors that predict the efficiency of inhibition. To conclude, the present study provides ERP and effective connectivity evidence that domain-general executive control is involved in handling multiple language use, leading to a control advantage in bilingualism. |
Sarah R. Heilbronner; Benjamin Y. Hayden The description-experience gap in risky choice in nonhuman primates Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 593–600, 2016. @article{Heilbronner2016, Risk attitudes in humans depend on the format used to present the gamble: we are more risk-averse for common gambles in the gains domain whose properties are described to us verbally than for those whose properties we learned about solely through experience. This difference, which constitutes part of the description-experience gap, is important because it highlights the role of knowledge acquisition in decision-making. The reasons for the gap remain obscure, but could depend upon uniquely human cognitive abilities, such as those associated with language. Thus, the gap may or may not extend to nonhuman animals. For this study, rhesus monkeys performed a novel task in which the properties of some gambles were explicitly cued (described), whereas others were learned through previous choices (experienced). Our monkeys displayed a description-experience gap. Overall, monkeys were more risk-seeking for experienced than for described gambles. This difference was observed for a range of gamble probabilities (from 20% to 80% likelihood of payoff), indicating that it is not limited to low-probability events. These results suggest that the description-experience gap does not depend on uniquely human cognitive abilities, such as those associated with language, and support the idea that epistemic influences on risk attitudes are evolutionarily ancient. |
Klaartje Heinen; Laura Sagliano; Michela Candini; Masud Husain; Marinella Cappelletti; Nahid Zokaei Cathodal transcranial direct current stimulation over posterior parietal cortex enhances distinct aspects of visual working memory Journal Article In: Neuropsychologia, vol. 87, pp. 35–42, 2016. @article{Heinen2016, In this study, we investigated the effects of tDCS over the posterior parietal cortex (PPC) during a visual working memory (WM) task that probes different sources of response error underlying the precision of WM recall. In two separate experiments, we demonstrated that tDCS enhanced WM precision when applied bilaterally over the PPC, independent of electrode configuration. In a third experiment, using a unilateral electrode configuration over the right PPC, we demonstrated that only cathodal tDCS enhanced WM precision, and only when baseline performance was low. Examining the effects on underlying sources of error, we found that cathodal stimulation enhanced the probability of a correct target response across all participants by reducing feature misbinding. Only for low-baseline performers did cathodal stimulation also reduce the variability of recall. We conclude that cathodal but not anodal tDCS can improve WM precision by preventing feature misbinding and thereby enhancing attentional selection. For low-baseline performers, cathodal tDCS also protects the memory trace. Furthermore, stimulation over bilateral PPC is more potent than unilateral cathodal tDCS in enhancing general WM precision. |
Andrea Helo; Pia Rämä; Sebastian Pannasch; David Meary Eye movement patterns and visual attention during scene viewing in 3- to 12-month-olds Journal Article In: Visual Neuroscience, vol. 33, pp. e014, 2016. @article{Helo2016, Recently, two attentional modes have been associated with specific eye movement patterns during scene processing. The ambient mode, characterized by short fixations and long saccades during early scene inspection, is associated with the localization of objects. The focal mode, characterized by longer fixations, is associated with more detailed processing of object features during the later inspection phase. The aim of the present study was to investigate the development of these attentional modes. More specifically, we examined whether indications of the ambient and focal attention modes are similar in infants and adults. To this end, we measured eye movements in 3- to 12-month-old infants while they explored visual scenes. Our results show that both adults and 12-month-olds had shorter fixation durations within the first 1.5 s of scene viewing compared with later time phases (>2.5 s), indicating a transition from ambient to focal processing during image inspection. In younger infants, fixation durations did not differ between the two viewing phases. Our results suggest that by the end of the first year of life, infants have developed adult-like scene viewing behavior. The evidence for distinct attentional processing mechanisms during early infancy furthermore underlines the importance of the two-mode concept. |
Frouke Hermens; Robin Walker The influence of social and symbolic cues on observers' gaze behaviour Journal Article In: British Journal of Psychology, vol. 107, no. 3, pp. 484–502, 2016. @article{Hermens2016, Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. Here we compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on observers' eye movements under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer, and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and arrow cues. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing than for the memory task. |
Alyssa S. Hess; Andrew J. Wismer; Corey J. Bohil; Mark B. Neider On the hunt: Searching for poorly defined camouflaged targets Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0152502, 2016. @article{Hess2016, Because camouflaged targets share visual characteristics with the environment within which they are embedded, searchers rarely have access to a perfect visual template of such targets. Instead, they must rely on less specific representations to guide search. Although search for camouflaged targets and search for non-specified targets have both received attention in the literature, to date they have not been explored in a combined context. Here we introduce a new paradigm for characterizing behavior during search for camouflaged targets in natural scenes, while also exploring how the fidelity of the target template affects search processes. Search scenes were created from forest images, with each target being a distortion of the image (of varying size) placed at a random location. In Experiment 1 a preview of the target was provided; in Experiment 2 there was no preview. No differences were found between the experiments on nearly all measures. Generally, reaction times and accuracy improved with familiarity with the task (more so for small targets). Analysis of eye movements indicated that performance benefits were related to improvements in both search time and target verification time. Combined, our data suggest that search for camouflaged targets can be improved over a short time scale, even when targets are poorly defined. |