EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some from early 2024) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2017 |
Lisa C. Goelz; Fabian J. David; John A. Sweeney; David E. Vaillancourt; Howard Poizner; Leonard Verhagen Metman; Daniel M. Corcos The effects of unilateral versus bilateral subthalamic nucleus deep brain stimulation on prosaccades and antisaccades in Parkinson's disease Journal Article In: Experimental Brain Research, vol. 235, no. 2, pp. 615–626, 2017. @article{Goelz2017, Unilateral deep brain stimulation (DBS) of the subthalamic nucleus (STN) in patients with Parkinson's disease improves skeletomotor function assessed clinically, and bilateral STN DBS improves motor function to a significantly greater extent. It is unknown whether unilateral STN DBS improves oculomotor function and whether bilateral STN DBS improves it to a greater extent. Further, it has also been shown that bilateral, but not unilateral, STN DBS is associated with some impaired cognitive-motor functions. The current study compared the effect of unilateral and bilateral STN DBS on sensorimotor and cognitive aspects of oculomotor control. Patients performed prosaccade and antisaccade tasks during no stimulation, unilateral stimulation, and bilateral stimulation. There were three sets of findings. First, for the prosaccade task, unilateral STN DBS had no effect on prosaccade latency and it reduced prosaccade gain; bilateral STN DBS reduced prosaccade latency and increased prosaccade gain. Second, for the antisaccade task, neither unilateral nor bilateral stimulation had an effect on antisaccade latency, unilateral STN DBS increased antisaccade gain, and bilateral STN DBS increased antisaccade gain to a greater extent. Third, bilateral STN DBS induced an increase in prosaccade errors in the antisaccade task. These findings suggest that while bilateral STN DBS benefits spatiotemporal aspects of oculomotor control, it may not be as beneficial for more complex cognitive aspects of oculomotor control. Our findings are discussed considering the strategic role the STN plays in modulating information in the basal ganglia oculomotor circuit. |
Trafton Drew; Sage E. P. Boettcher; Jeremy M. Wolfe One visual search, many memory searches: An eye-tracking investigation of hybrid search Journal Article In: Journal of Vision, vol. 17, no. 11, pp. 1–10, 2017. @article{Drew2017a, Suppose you go to the supermarket with a shopping list of 10 items held in memory. Your shopping expedition can be seen as a combination of visual search and memory search. This is known as "hybrid search." There is a growing interest in understanding how hybrid search tasks are accomplished. We used eye tracking to examine how manipulating the number of possible targets (the memory set size [MSS]) changes how observers (Os) search. We found that dwell time on each distractor increased with MSS, suggesting a memory search was being executed each time a new distractor was fixated. Meanwhile, although the rate of refixation increased with MSS, it was not nearly enough to suggest a strategy that involves repeatedly searching visual space for subgroups of the target set. These data provide a clear demonstration that hybrid search tasks are carried out via a "one visual search, many memory searches" heuristic in which Os examine items in the visual array once with a very low rate of refixations. For each item selected, Os activate a memory search that produces logarithmic response time increases with increased MSS. Furthermore, the percentage of distractors fixated was strongly modulated by the MSS: More items in the MSS led to a higher percentage of fixated distractors. Searching for more potential targets appears to significantly alter how Os approach the task, ultimately resulting in more eye movements and longer response times. |
Trafton Drew; Lauren H. Williams Simple eye-movement feedback during visual search is not helpful Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 44, pp. 1–8, 2017. @article{Drew2017, Searching for targets in the visual world, or visual search, is something we all do every day. We frequently make 'false-negative' errors, wherein we erroneously conclude a target was absent when one was, in fact, present. These sorts of errors can have tremendous costs, as when signs of cancers are missed in diagnostic radiology. Prior research has characterized the cause of many of these errors as being due to failure to completely search the area where targets may be present; indeed, roughly one-third of chest nodules missed in lung cancer screening are never fixated (Drew, Võ, Olwal, Jacobson, Seltzer and Wolfe, Journal of Vision 13:3, 2013). This suggests that observers do not have a good representation of what areas have and have not been searched prior to declaring an area target free. Therefore, in six experiments, we sought to examine the utility of reducing the uncertainty with respect to what areas had been examined via online eye-tracking feedback. We hypothesized that providing information about what areas had or had not been examined would lead to lower rates of false negatives or more efficient search, namely faster response times with no cost on target detection accuracy. Neither of these predictions held true. Over six experiments, online eye-tracking feedback did not yield any reliable performance benefits. |
João Valente Duarte; Gabriel Nascimento Costa; Ricardo Martins; Miguel Castelo-Branco Pivotal role of hMT+ in long-range disambiguation of interhemispheric bistable surface motion Journal Article In: Human Brain Mapping, vol. 38, no. 10, pp. 4882–4897, 2017. @article{Duarte2017, It remains an open question whether long-range disambiguation of ambiguous surface motion can be achieved in early visual cortex or instead in higher level regions, which concerns object/surface segmentation/integration mechanisms. We used a bistable moving stimulus that can be perceived as a pattern comprehending both visual hemi-fields moving coherently downward or as two widely segregated nonoverlapping component objects (in each visual hemi-field) moving separately inward. This paradigm requires long-range integration across the vertical meridian leading to interhemispheric binding. Our fMRI study (n = 30) revealed a close relation between activity in hMT+ and perceptual switches involving interhemispheric segregation/integration of motion signals, crucially under nonlocal conditions where components do not overlap and belong to distinct hemispheres. Higher signal changes were found in hMT+ in response to spatially segregated component (incoherent) percepts than to pattern (coherent) percepts. This did not occur in early visual cortex, unlike apparent motion, which does not entail surface segmentation. We also identified a role for top–down mechanisms in state transitions. Deconvolution analysis of switch-related changes revealed prefrontal, insula, and cingulate areas, with the right superior parietal lobule (SPL) being particularly involved. We observed that directed influences could emerge either from left or right hMT+ during bistable motion integration/segregation. SPL also exhibited significant directed functional connectivity with hMT+, during perceptual state maintenance (Granger causality analysis). 
Our results suggest that long-range interhemispheric binding of ambiguous motion representations mainly reflects bottom–up processes from hMT+ during perceptual state maintenance. In contrast, state transitions may be influenced by high-level regions such as the SPL. |
Felix Duecker; Teresa Schuhmann; Nina Bien; Christianne Jacobs; Alexander T. Sack In: Journal of Cognitive Neuroscience, vol. 29, no. 7, pp. 1267–1278, 2017. @article{Duecker2017, The concept of interhemispheric competition has been very influential in attention research, and the occurrence of biased attention due to an imbalance in posterior parietal cortex (PPC) is well documented. In this context, the vast majority of studies have assessed attentional performance with tasks that did not include an explicit experimental manipulation of attention, and, as a consequence, it remains largely unknown how these findings relate to core attentional constructs such as endogenous and exogenous control and spatial orienting and reorienting. We here addressed this open question by creating an imbalance between left and right PPC with transcranial direct current stimulation, resulting in right-hemispheric dominance, and assessed performance on three experimental paradigms that isolate distinct attentional processes. The comparison between active and sham transcranial direct current stimulation revealed a highly informative pattern of results with differential effects across tasks. Our results demonstrate the functional necessity of PPC for endogenous and exogenous attentional control and, importantly, link the concept of interhemispheric competition to core attentional processes, thus moving beyond the notion of biased attention after noninvasive brain stimulation over PPC. |
Laura Dugué; Alice M. Xue; Marisa Carrasco Distinct perceptual rhythms for feature and conjunction searches Journal Article In: Journal of Vision, vol. 17, no. 3, pp. 1–15, 2017. @article{Dugue2017, Feature and conjunction searches are widely used to study attentional deployment. However, the spatiotemporal behavior of attention integration in these tasks remains under debate. Are multiple search stimuli processed in parallel or sequentially? Does sampling of visual information and attentional deployment differ between these two types of search? If so, how? We used an innovative methodology to estimate the distribution of attention on a single-trial basis for feature and conjunction searches. Observers performed feature- and conjunction-search tasks. They had to detect and discriminate a tilted low-spatial-frequency grating among three low-spatial-frequency vertical gratings (feature search) or low-spatial-frequency vertical gratings and high-spatial-frequency tilted gratings (conjunction search). After a variable delay, two probes were flashed at random locations. Performance in reporting the probes was used to infer attentional deployment to those locations. By solving a second-degree equation, we determined the probability of probe report at the most (P1) and least (P2) attended locations on a given trial. Were P1 and P2 equal, we would conclude that attention had been uniformly distributed across all four locations. Otherwise, we would conclude that visual information sampling and attentional deployment had been nonuniformly distributed. Our results show that processing was nonuniformly distributed across the four locations in both searches, and was modulated periodically over time at ~5 Hz for the conjunction search and ~12 Hz for the feature search. We argue that the former corresponds to the periodicity of attentional deployment during the search, whereas the latter corresponds to ongoing sampling of visual information. Because different locations were not simultaneously processed, this study rules out a strict parallel model for both search types. |
Miguel P. Eckstein; Kathryn Koehler; Lauren E. Welbourne; Emre Akbas Humans, but not deep neural networks, often miss giant targets in scenes Journal Article In: Current Biology, vol. 27, no. 18, pp. 2827–2832, 2017. @article{Eckstein2017, Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [1, 2] to birds [3] to humans [4–12] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target's size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors. |
Grace Edwards; Céline Paeye; Philippe Marque; Rufin VanRullen; Patrick Cavanagh Predictive position computations mediated by parietal areas: TMS evidence Journal Article In: NeuroImage, vol. 153, pp. 49–57, 2017. @article{Edwards2017, When objects move or the eyes move, the visual system can predict the consequence and generate a percept of the target at its new position. This predictive localization may depend on eye movement control in the frontal eye fields (FEF) and the intraparietal sulcus (IPS) and on motion analysis in the medial temporal area (MT). Across two experiments we examined whether repetitive transcranial magnetic stimulation (rTMS) over right FEF, right IPS, right MT, and a control site, peripheral V1/V2, diminished participants' perception of two cases of predictive position perception: trans-saccadic fusion, and the flash grab illusion, both presented in the contralateral visual field. In trans-saccadic fusion trials, participants saccade toward a stimulus that is replaced with another stimulus during the saccade. Frequently, predictive position mechanisms lead to a fused percept of pre- and post-saccade stimuli (Paeye et al., 2017). We found that rTMS to IPS significantly decreased the frequency of perceiving trans-saccadic fusion within the first 10 min after stimulation. In the flash grab illusion, a target is flashed on a moving background leading to the percept that the target has shifted in the direction of the motion after the flash (Cavanagh and Anstis, 2013). In the first experiment, the reduction in the flash grab illusion after rTMS to IPS and FEF did not reach significance. In the second experiment, using a stronger version of the flash grab, the illusory shift did decrease significantly after rTMS to IPS although not after rTMS to FEF or to MT. These findings suggest that right IPS contributes to predictive position perception during saccades and motion processing in the contralateral visual field. |
Benedikt V. Ehinger; Katja I. Häuser; José P. Ossandón; Peter König Humans treat unreliable filled-in percepts as more real than veridical ones Journal Article In: eLife, vol. 6, pp. 1–17, 2017. @article{Ehinger2017, Humans often evaluate sensory signals according to their reliability for optimal decision-making. However, how do we evaluate percepts generated in the absence of direct input that are, therefore, completely unreliable? Here, we utilize the phenomenon of filling-in occurring at the physiological blind-spots to compare partially inferred and veridical percepts. Subjects chose between stimuli that elicit filling-in, and perceptually equivalent ones presented outside the blind-spots, looking for a Gabor stimulus without a small orthogonal inset. In ambiguous conditions, when the stimuli were physically identical and the inset was absent in both, subjects behaved opposite to optimal, preferring the blind-spot stimulus as the better example of a collinear stimulus, even though no relevant veridical information was available. Thus, a percept that is partially inferred is paradoxically considered more reliable than a percept based on external input. In other words: Humans treat filled-in inferred percepts as more real than veridical ones. |
Wolfgang Einhäuser; Philipp Methfessel; Alexandra Bendixen Newly acquired audio-visual associations bias perception in binocular rivalry Journal Article In: Vision Research, vol. 133, pp. 121–129, 2017. @article{Einhaeuser2017, When distinct stimuli are presented to the two eyes, their mental representations alternate in awareness. Here, such “binocular rivalry” was used to investigate whether audio-visual associations bias visual perception. To induce two arbitrary associations, each between a tone and a grating of a specific color and motion direction, observers were required to respond whenever this combination was presented, but not for other tone-grating combinations. After about 20 min of this induction phase, each of the gratings was presented to one eye to induce rivalry, while either of the two tones or no tone was played. Observers were asked to watch the rivaling stimuli and listen to the tones. The observer's dominant percept was assessed throughout by measuring the optokinetic nystagmus (OKN), whose slow phase follows the direction of the currently dominant grating. We found that perception in rivalry was affected by the concurrently played tone. Results suggest a bias towards the grating that had been associated with the concurrently presented tone and prolonged dominance durations for this grating compared to the other. Numerically, conditions without tone fell in-between for measures of bias and dominance duration. Our data show that a rapidly acquired arbitrary audio-visual association biases visual perception. Unlike previously reported cross-modal interactions in rivalry, this effect can neither be explained by a pure attentional (dual-task) effect, nor does it require a fixed physical or semantic relation between the auditory and visual stimulus. This suggests that audio-visual associations that are quickly formed by associative learning may affect visual representations directly. |
Wolfgang Einhäuser; Sabine Thomassen; Alexandra Bendixen Using binocular rivalry to tag foreground sounds: Towards an objective visual measure for auditory multistability Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 1–19, 2017. @article{Einhaeuser2017a, In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience ("no-report paradigms"). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (1008 Hz) or low-pitch (400 Hz) tones. Interstimulus intervals were either fixed per sequence (Experiments 1 and 2) or random with tones alternating (Experiment 3). A horizontally drifting grating was presented to each eye; to induce binocular rivalry, gratings had distinct colors and motion directions. To associate each grating with one tone sequence, a pattern on the grating jumped vertically whenever the respective tone occurred. We found that the direction of the optokinetic nystagmus (OKN)—induced by the visually dominant grating—could be used to decode the tone (high/low) that was perceived in the foreground well above chance. This OKN-based readout improved after observers had gained experience with the auditory task (Experiments 1 and 2) and for simpler auditory tasks (Experiment 3). We found no evidence that the visual stimulus affected auditory multistability. Although decoding performance is still far from perfect, our paradigm may eventually provide a continuous estimate of the currently dominant percept in auditory multistability. |
Marjolein Waal; Jason Farquhar; Luciano Fasotti; Peter Desain Preserved and attenuated electrophysiological correlates of visual spatial attention in elderly subjects Journal Article In: Behavioural Brain Research, vol. 317, pp. 415–423, 2017. @article{Waal2017, Healthy aging is associated with changes in many neurocognitive functions. While on the behavioral level, visual spatial attention capacities are relatively stable with increasing age, the underlying neural processes change. In this study, we investigated attention-related modulations of the stimulus-locked event-related potential (ERP) and occipital oscillations in the alpha band (8–14 Hz) in young and elderly participants. Both groups performed a visual attention task equally well and the ERP showed comparable attention-related modulations in both age groups. However, in elderly subjects, oscillations in the alpha band were massively reduced both during the task and in the resting state and the typical task-related lateralized pattern of alpha activity was not observed. These differences between young and elderly participants were observed on the group level as well as on the single trial level. The results indicate that younger and older adults use different neural strategies to reach the same performance in a covert visual spatial attention task. |
Freek van Ede; Marcel Niklaus; Anna C. Nobre Temporal expectations guide dynamic prioritization in visual working memory through attenuated α oscillations Journal Article In: Journal of Neuroscience, vol. 37, no. 2, pp. 437–445, 2017. @article{Ede2017, Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively "static," involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8–14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas. |
Anouk Mariette van Loon; Katya Olmos-Solis; Christian N. L. Olivers Subtle eye movement metrics reveal task-relevant representations prior to visual search Journal Article In: Journal of Vision, vol. 17, no. 6, pp. 13, 2017. @article{Loon2017, Visual search is thought to be guided by an active visual working memory (VWM) representation of the task-relevant features, referred to as the search template. In three experiments using a probe technique, we investigated which eye movement metrics reveal which search template is activated prior to the search, and distinguish it from future relevant or no longer relevant VWM content. Participants memorized a target color for a subsequent search task, while being instructed to keep central fixation. Before the search display appeared, we briefly presented two task-irrelevant colored probe stimuli to the left and right from fixation, one of which could match the current target template. In all three experiments, participants made both more and larger eye movements towards the probe matching the target color. The bias was predominantly expressed in microsaccades, 100–250 ms after probe onset. Experiment 2 used a retro-cue technique to show that these metrics distinguish between relevant and dropped representations. Finally, Experiment 3 used a sequential task paradigm, and showed that the same metrics also distinguish between current and prospective search templates. Taken together, we show how subtle eye movements track task-relevant representations for selective attention prior to visual search. |
Joanne C. Van Slooten; Sara Jahfari; Tomas Knapen; Jan Theeuwes Individual differences in eye blink rate predict both transient and tonic pupil responses during reversal learning Journal Article In: PLoS ONE, vol. 12, no. 9, pp. e0185665, 2017. @article{VanSlooten2017, The pupil response under constant illumination can be used as a marker of cognitive processes. In the past, pupillary responses have been studied in the context of arousal and decision-making. However, recent work involving Parkinson's patients suggested that pupillary responses are additionally affected by reward sensitivity. Here, we build on these findings by examining how pupil responses are modulated by reward and loss while participants (N = 30) performed a Pavlovian reversal learning task. In fast (transient) pupil responses, we observed arousal-based influences on pupil size both during the expectation of upcoming value and the evaluation of unexpected monetary outcomes. Importantly, after incorporating eye blink rate (EBR), a behavioral correlate of striatal dopamine levels, we observed that participants with lower EBR showed stronger pupil dilation during the expectation of upcoming reward. Subsequently, when reward expectations were violated, participants with lower EBR showed stronger pupil responses after experiencing unexpected loss. Across trials, the detection of a reward contingency reversal was reflected in a slow (tonic) dilatory pupil response observed already several trials prior to the behavioral report. Interestingly, EBR correlated positively with this tonic detection response, suggesting that variability in the arousal-based detection response may reflect individual differences in striatal dopaminergic tone. Our results provide evidence that a behavioral marker of baseline striatal dopamine level (EBR) can potentially be used to describe the differential effects of value-based learning in the arousal-based pupil response. |
Wieske van Zoest; Benedetta Heimler; Francesco Pavani The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories Journal Article In: Experimental Brain Research, vol. 235, pp. 181–191, 2017. @article{Zoest2017, The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small and these specific effects developed over time such that there was a clear differentiation between saccadic deviation based on apparent motion for long-latency saccades, but not short-latency saccades. The present results suggest that the influence on performance of apparent motion stimuli is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuous moving stimuli. |
Hildward Vandormael; Santiago Herce Castañón; Jan Balaguer; Vickie Li; Christopher Summerfield Robust sampling of decision information during perceptual choice Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 10, pp. 2771–2776, 2017. @article{Vandormael2017, Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer ("inlying" elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice ("robust averaging"). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information "robustly" with their eyes during perceptual decision-making. |
Gilles Vannuscorps; Alfonso Caramazza Typical predictive eye movements during action observation without effector-specific motor simulation Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 4, pp. 1152–1157, 2017. @article{Vannuscorps2017, When watching someone reaching to grasp an object, we typically gaze at the object before the agent's hand reaches it-that is, we make a ``predictive eye movement'' to the object. The received explanation is that predictive eye movements rely on a direct matching process, by which the observed action is mapped onto the motor representation of the same body movements in the observer's brain. In this article, we report evidence that calls for a reexamination of this account. We recorded the eye movements of an individual born without arms (D.C.) while he watched an actor reaching for one of two different-sized objects with a power grasp, a precision grasp, or a closed fist. D.C. showed typical predictive eye movements modulated by the actor's hand shape. This finding constitutes proof of concept that predictive eye movements during action observation can rely on visual and inferential processes, unaided by effector-specific motor simulation. |
Alejandra Vasquez-Rosati; Enzo P. Brunetti; Carmen Cordero; Pedro E. Maldonado Pupillary response to negative emotional stimuli is differentially affected in meditation practitioners Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 209, 2017. @article{VasquezRosati2017, Clinically, meditative practices have become increasingly relevant, decreasing anxiety in patients and increasing antibody production. However, few studies have examined the physiological correlates, or effects of the incorporation of meditative practices. Because pupillary reactivity is a marker for autonomic changes and emotional processing, we hypothesized that the pupillary responses of mindfulness meditation practitioners (MP) and subjects without such practices (NM) differ, reflecting different emotional processing. In a group of 11 MP and 9 NM, we recorded the pupil diameter using video-oculography while subjects explored images with emotional contents. Although both groups showed a similar pupillary response for positive and neutral images, negative images evoked a greater pupillary contraction and a weaker dilation in the MP group. Also, this group had faster physiological recovery to baseline levels. These results suggest that mindfulness meditation practices modulate the response of the autonomic nervous system, reflected in the pupillary response to negative images and faster physiological recovery to baseline levels, suggesting that pupillometry could be used to assess the potential health benefits of these practices in patients. |
Maryam Vaziri-Pashkam; Yaoda Xu Goal-directed visual processing differentially impacts human ventral and dorsal visual representations Journal Article In: Journal of Neuroscience, vol. 37, no. 36, pp. 8767–8782, 2017. @article{VaziriPashkam2017, Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. 
These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. |
Bram-Ernst Verhoef; John H. R. Maunsell Attention-related changes in correlated neuronal activity arise from normalization mechanisms Journal Article In: Nature Neuroscience, vol. 20, no. 7, pp. 969–977, 2017. @article{Verhoef2017, Attention is believed to enhance perception by altering the activity-level correlations between pairs of neurons. How attention changes neuronal activity correlations is unknown. Using multielectrodes in monkey visual cortex, we measured spike-count correlations when single or multiple stimuli were presented and when stimuli were attended or unattended. When stimuli were unattended, adding a suppressive, nonpreferred stimulus beside a preferred stimulus increased spike-count correlations between pairs of similarly tuned neurons but decreased spike-count correlations between pairs of oppositely tuned neurons. A stochastic normalization model containing populations of oppositely tuned, mutually suppressive neurons explains these changes and also explains why attention decreased or increased correlations: as an indirect consequence of attention-related changes in the inputs to normalization mechanisms. Our findings link normalization mechanisms to correlated neuronal activity and attention, showing that normalization mechanisms shape response correlations and that these correlations change when attention biases normalization mechanisms. |
Margarita Vinnikov; Robert S. Allison; Suzette Fernandes Gaze-contingent auditory displays for improved spatial attention in virtual reality Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 24, no. 3, pp. 1–38, 2017. @article{Vinnikov2017, Virtual reality simulations of group social interactions are important for many applications, including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments (VEs), and entertainment. In such scenarios, when compared to the real world, audio cues are often impoverished. As a result, users cannot rely on subtle spatial audio-visual cues that guide attention and enable effective social interactions in real-world situations. We explored whether gaze-contingent audio enhancement techniques driven by inferring audio-visual attention in virtual displays could be used to enable effective communication in cluttered audio VEs. In all of our experiments, we hypothesized that visual attention could be used as a tool to modulate the quality and intensity of sounds from multiple sources to efficiently and naturally select spatial sound sources. For this purpose, we built a gaze-contingent display (GCD) that allowed tracking of a user's gaze in real-time and modifying the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation with a base condition providing no attentional modulation of sound. The techniques were compared in terms of source recognition and preference in a set of user studies. Overall, we observed that users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention but not the elimination of competing sounds (partial rather than absolute selection) was most natural. In conclusion, audio GCDs offer potential for simulating rich, natural social, and other interactions in VEs. 
They should be considered for improving both performance and fidelity in applications related to social behaviour scenarios or when the user needs to work with multiple audio sources of information. |
Theresa Wildegger; Freek Ede; Mark W. Woolrich; Céline R. Gillebert; Anna C. Nobre Preparatory α-band oscillations reflect spatial gating independently of predictions regarding target identity Journal Article In: Journal of Neurophysiology, vol. 117, no. 3, pp. 1385–1394, 2017. @article{Wildegger2017, Preparatory modulations of cortical alpha-band oscillations are a reliable index of the voluntary allocation of covert spatial attention. It is currently unclear whether attentional cues containing information about a target's identity (such as its visual orientation), in addition to its location, might additionally shape preparatory alpha modulations. Here, we explore this question by directly comparing spatial and feature-based attention in the same visual detection task while recording brain activity using magneto-encephalography (MEG). At the behavioural level, preparatory feature-based and spatial attention cues both improved performance, and did so independently of each other. Using MEG, we replicated robust alpha lateralisation following spatial cues: in preparation for a visual target, alpha power decreased contralaterally, and increased ipsilaterally to the attended location. Critically, however, preparatory alpha lateralisation was not significantly modulated by predictions regarding target identity, as carried via the behaviourally effective feature-based attention cues. Furthermore, non-lateralised alpha power during the cue-target interval did not differentiate between uninformative cues and cues carrying feature-based predictions either. Based on these results we propose that preparatory alpha modulations play a role in the gating of information between spatially segregated cortical regions, and are therefore particularly well suited for spatial gating of information. |
Lauren H. Williams; Trafton Drew Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions? Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 1, pp. 12, 2017. @article{Williams2017, Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design in order to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. In half of the cases, search was interrupted by a series of true or false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye-tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions will be discussed in the context of these results. |
Niklas Wilming; Tim C. Kietzmann; Megan Jutras; Cheng Xue; Stefan Treue; Elizabeth A. Buffalo; Peter König Differential contribution of low- and high-level image content to eye movements in monkeys and humans Journal Article In: Cerebral Cortex, vol. 27, no. 1, pp. 279–293, 2017. @article{Wilming2017, Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. |
Niklas Wilming; Selim Onat; José P. Ossandón; Alper Açik; Tim C. Kietzmann Data descriptor: An extensive dataset of eye movements during viewing of complex images Journal Article In: Scientific Data, vol. 4, pp. 160126, 2017. @article{Wilming2017a, We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups. |
Julia A. Wolfson; Dan J. Graham; Sara N. Bleich Attention to physical activity–equivalent calorie information on nutrition facts labels: An eye-tracking investigation Journal Article In: Journal of Nutrition Education and Behavior, vol. 49, no. 1, pp. 35–42.e1, 2017. @article{Wolfson2017, Objective Investigate attention to Nutrition Facts Labels (NFLs) with numeric only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting Participants came to the Behavioral Medicine Lab at Colorado State University in spring, 2015. Participants The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s) Attention to and attitudes about activity-equivalent calorie information. Analysis Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Conclusions and Implications Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. |
Nicolas Wattiez; Charlotte Constans; Thomas Deffieux; Pierre M. Daye; Mickael Tanter; Jean-François Aubry; Pierre Pouget Transcranial ultrasonic stimulation modulates single-neuron discharge in macaques performing an antisaccade task Journal Article In: Brain Stimulation, vol. 10, no. 6, pp. 1024–1031, 2017. @article{Wattiez2017, Background Low intensity transcranial ultrasonic stimulation (TUS) has been demonstrated to non-invasively and transiently stimulate the nervous system. Although US neuromodulation has appeared robust in rodent studies, the effects of US in large mammals and humans have been modest at best. In addition, there is a lack of direct recordings from the stimulated neurons in response to US. Our study investigates the magnitude of the US effects on neuronal discharge in awake behaving monkeys and thus fills the void on both fronts. Objective/Hypothesis In this study, we demonstrate the feasibility of recording action potentials in the supplementary eye field (SEF) as TUS is applied simultaneously to the frontal eye field (FEF) in macaques performing an antisaccade task. Results We show that compared to a control stimulation in the visual cortex, SEF activity is significantly modulated shortly after TUS onset. Among all cell types 40% of neurons significantly changed their activity after TUS. Half of the neurons showed a transient increase of activity induced by TUS. Conclusion Our study demonstrates that the neuromodulatory effects of non-invasive focused ultrasound can be assessed in real time in awake behaving monkeys by recording discharge activity from a brain region reciprocally connected with the stimulated region. The study opens the door for further parametric studies for fine-tuning the ultrasonic parameters. The ultrasonic effect could indeed be quantified based on the direct measurement of the intensity of the modulation induced on a single neuron in a freely performing animal. 
The technique should be readily reproducible in other primate laboratories studying brain function, both for exploratory and therapeutic purposes and to facilitate the development of future clinical TUS devices. |
Tuesday M. Watts; Luke Holmes; Ritch C. Savin-Williams; Gerulf Rieger Pupil dilation to explicit and non-explicit sexual stimuli Journal Article In: Archives of Sexual Behavior, vol. 46, no. 1, pp. 155–165, 2017. @article{Watts2017, In the visual processing of sexual content, pupil dilation is an indicator of arousal that has been linked to observers' sexual orientation. This study investigated whether this measure can be extended to determine age-specific sexual interest. In two experiments, the pupillary responses of heterosexual adults to images of males and females of different ages were related to self-reported sexual interest, sexual appeal to the stimuli, and a child molestation proclivity scale. In both experiments, the pupils of male observers dilated to photographs of women but not men, children, or neutral stimuli. These pupillary responses corresponded with observers' self-reported sexual interests and their sexual appeal ratings of the stimuli. Female observers showed pupil dilation to photographs of men and women but not children. In women, pupillary responses also correlated poorly with sexual appeal ratings of the stimuli. These experiments provide initial evidence that eye-tracking could be used as a measure of sex-specific interest in male observers, and as an age-specific index in male and female observers. |
Matthew David Weaver; Clayton Hickey; Wieske Zoest The impact of salience and visual working memory on the monitoring and control of saccadic behavior: An eye-tracking and EEG study Journal Article In: Psychophysiology, vol. 54, no. 4, pp. 544–554, 2017. @article{Weaver2017, In a concurrent eye-tracking and EEG study, we investigated the impact of salience on the monitoring and control of eye movement behavior and the role of visual working memory (VWM) capacity in mediating this effect. Participants made eye movements to a unique line-segment target embedded in a search display also containing a unique distractor. Target and distractor salience was manipulated by varying degree of orientation offset from a homogenous background. VWM capacity was measured using a change-detection task. Results showed greater likelihood of incorrect saccades when the distractor was relatively more salient than when the target was salient. Misdirected saccades to salient distractors were strongly represented in the error-monitoring system by rapid and robust error-related negativity (ERN), which predicted a significant adjustment of oculomotor behavior. Misdirected saccades to less-salient distractors, while arguably representing larger errors, were not as well detected or utilized by the error/performance-monitoring system. This system was instead better engaged in tasks requiring greater cognitive control and by individuals with higher VWM capacity. Our findings show that relative salience of task-relevant and task-irrelevant stimuli can define situations where an increase in cognitive control is necessary, with individual differences in VWM capacity explaining significant variance in the degree of monitoring and control of goal-directed eye movement behavior. 
The present study supports a conflict-monitoring interpretation of the ERN, whereby the level of competition between different responses, and the stimuli that define these responses, was more important in the generation of an enhanced ERN than the error commission itself. |
Matthew David Weaver; Wieske Zoest; Clayton Hickey A temporal dependency account of attentional inhibition in oculomotor control Journal Article In: NeuroImage, vol. 147, pp. 880–894, 2017. @article{Weaver2017a, We used concurrent electroencephalogram (EEG) and eye tracking to investigate the role of covert attentional mechanisms in the control of oculomotor behavior. Human participants made speeded saccades to targets that were presented alongside salient distractors. By subsequently sorting trials based on whether the distractor was strongly represented or suppressed by the visual system – as evident in the accuracy (Exp. 1) or quality of the saccade (Exp. 2) – we could characterize and contrast pre-saccadic neural activity as a function of whether oculomotor control was established. Results show that saccadic behavior is strongly linked to the operation of attentional mechanisms in visual cortex. In Experiment 1, accurate saccades were preceded by attentional selection of the target – indexed by a target-elicited N2pc component – and by attentional suppression of the distractor – indexed by early and late distractor-elicited distractor positivity (Pd) components. In Experiment 2, the strength of distractor suppression predicted the degree to which the path of slower saccades would deviate away from the distractor en route to the target. However, results also demonstrated clear dissociations of covert and overt selective control, with saccadic latency in particular showing no relationship to the latency of covert selective mechanisms. Eye movements could thus be initiated prior to the onset of attentional ERP components, resulting in stimulus-driven behaviour. Taken together, the results indicate that attentional mechanisms play a role in determining saccadic behavior, but that saccade timing is not contingent on the deployment of attention. 
This creates a temporal dependency, whereby attention fosters oculomotor control only when attentional mechanisms are given sufficient opportunity to impact stimulus representations before an eye movement is executed. |
Alex L. White; Erik Runeson; John Palmer; Zachary R. Ernst; Geoffrey M. Boynton Evidence for unlimited capacity processing of simple features in visual cortex Journal Article In: Journal of Vision, vol. 17, no. 6, pp. 19, 2017. @article{White2017a, Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level-dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. |
Brian J. White; David J. Berg; Janis Y. Y. Kan; Robert A. Marino; Laurent Itti; Douglas P. Munoz Superior colliculus neurons encode a visual saliency map during free viewing of natural dynamic video Journal Article In: Nature Communications, vol. 8, pp. 14263, 2017. @article{White2017b, Models of visual attention postulate the existence of a saliency map whose function is to guide attention and gaze to the most conspicuous regions in a visual scene. Although cortical representations of saliency have been reported, there is mounting evidence for a subcortical saliency mechanism, which pre-dates the evolution of neocortex. Here, we conduct a strong test of the saliency hypothesis by comparing the output of a well-established computational saliency model with the activation of neurons in the primate superior colliculus (SC), a midbrain structure associated with attention and gaze, while monkeys watched video of natural scenes. We find that the activity of SC superficial visual-layer neurons (SCs), specifically, is well-predicted by the model. This saliency representation is unlikely to be inherited from fronto-parietal cortices, which do not project to SCs, but may be computed in SCs and relayed to other areas via tectothalamic pathways. |
Diana M. E. Torta; Emanuel N. Broeke; Lieve Filbrich; Benvenuto Jacob; Julien Lambert; André Mouraux Intense pain influences the cortical processing of visual stimuli projected onto the sensitized skin Journal Article In: Pain, vol. 158, no. 4, pp. 691–697, 2017. @article{Torta2017, Sensitization is a form of implicit learning produced by the exposure to a harmful stimulus. In humans and other mammals, sensitization after skin injury increases the responsiveness of peripheral nociceptors and enhances the synaptic transmission of nociceptive input in the central nervous system. Here, we show that sensitization-related changes in the central nervous system are not restricted to nociceptive pathways and, instead, also affect other sensory modalities, especially if that modality conveys information relevant for the sensitized body part. Specifically, we show that after sensitizing the forearm using high-frequency electrical stimulation (HFS) of the skin, visual stimuli projected onto the sensitized forearm elicit significantly enhanced brain responses. Whereas mechanical hyperalgesia was present both 20 and 45 minutes after HFS, the enhanced responsiveness to visual stimuli was present only 20 minutes after HFS. Taken together, our results indicate that sensitization involves both nociceptive-specific and multimodal mechanisms, having distinct time courses. |
Yuliy Tsank; Miguel P. Eckstein Domain specificity of oculomotor learning after changes in sensory processing Journal Article In: Journal of Neuroscience, vol. 37, no. 47, pp. 11469–11484, 2017. @article{Tsank2017, Humans visually process the world with varying spatial resolution and can program their eye movements optimally to maximize information acquisition for a variety of everyday tasks. Diseases such as macular degeneration can change visual sensory processing, introducing central vision loss (a scotoma). However, humans can learn to direct a new preferred retinal location to regions of interest for simple visual tasks. Whether such learned compensatory saccades are optimal and generalize to more complex tasks, which require integrating information across a large area of the visual field, is not well-understood. Here, we explore the possible effects of central vision loss on the optimal saccades during a face identification task, using a gaze-contingent simulated scotoma. We show that a new foveated ideal observer with a central scotoma correctly predicts that the human optimal point of fixation to identify faces shifts from just below the eyes to one that is at the tip of the nose and another at the top of the forehead. However, even after 5,000 trials, humans of both sexes surprisingly do not change their initial fixations to adapt to the new optimal fixation points to faces. In contrast, saccades do change for tasks such as object-following and to a lesser extent during search. Our findings argue against a central brain motor-compensatory mechanism that generalizes across tasks. They instead suggest task-specificity in the learning of oculomotor plans in response to changes in front-end sensory processing and the possibility of separate domain-specific representations of learned oculomotor plans in the brain. |
Courtney Turrin; Nicholas A. Fagan; Olga Dal Monte; Steve W. C. Chang Social resource foraging is guided by the principles of the Marginal Value Theorem Journal Article In: Scientific Reports, vol. 7, pp. 11274, 2017. @article{Turrin2017, Optimality principles guide how animals adapt to changing environments. During foraging for nonsocial resources such as food and water, species across taxa obey a strategy that maximizes resource harvest rate. However, it remains unknown whether foraging for social resources also obeys such a strategic principle. We investigated how primates forage for social information conveyed by conspecific facial expressions using the framework of optimal foraging theory. We found that the canonical principle of Marginal Value Theorem (MVT) also applies to social resources. Consistent with MVT, rhesus macaques (Macaca mulatta) spent more time foraging for social information when alternative sources of information were farther away compared to when they were closer by. A comparison of four models of patch-leaving behavior confirmed that the MVT framework provided the best fit to the observed foraging behavior. This analysis further demonstrated that patch-leaving decisions were not driven simply by the declining value of the images in the patch, but instead were dependent upon both the instantaneous social value intake rate and current time in the patch. |
Yoshiyuki Ueda; Yusuke Kamakura; Jun Saiki Eye movements converge on vanishing points during visual search Journal Article In: Japanese Psychological Research, vol. 59, no. 2, pp. 109–121, 2017. @article{Ueda2017, The vanishing point seems to be a useful cue for understanding scenes at a glance. The closer the objects are, the smaller their sizes become. Because the resolution of central vision is higher than that of peripheral vision, seeing a vanishing point enables individuals to perceive the whole scene. Here, we examined whether vanishing points attract eye movements during visual search. In Experiment 1, we conducted a free‐viewing task to examine whether vanishing points play a significant role. In Experiment 2, the participants searched for a Gabor patch that was embedded in manmade or natural scenes. In Experiment 3, to investigate the robustness of the vanishing point effect, visual search was conducted using simpler geometric backgrounds. We observed that eye movements converged around vanishing points, and that the first fixations are also located around them. These results suggest that vanishing points as well as salient locations can capture eye movements, and eye movements are guided by such environmental structures. |
Anne E. Urai; Anke Braun; Tobias H. Donner Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias Journal Article In: Nature Communications, vol. 8, pp. 14637, 2017. @article{Urai2017, While judging their sensory environments, decision-makers seem to use the uncertainty about their choices to guide adjustments of their subsequent behaviour. One possible source of these behavioural adjustments is arousal: decision uncertainty might drive the brain's arousal systems, which control global brain state and might thereby shape subsequent decision-making. Here, we measure pupil diameter, a proxy for central arousal state, in human observers performing a perceptual choice task of varying difficulty. Pupil dilation, after choice but before external feedback, reflects three hallmark signatures of decision uncertainty derived from a computational model. This increase in pupil-linked arousal boosts observers' tendency to alternate their choice on the subsequent trial. We conclude that decision uncertainty drives rapid changes in pupil-linked arousal state, which shape the serial correlation structure of ongoing choice behaviour. |
Israel Vaca-Palomares; Brian C. Coe; Donald C. Brien; Douglas P. Munoz; Juan Fernandez-Ruiz Voluntary saccade inhibition deficits correlate with extended white-matter cortico-basal atrophy in Huntington's disease Journal Article In: NeuroImage: Clinical, vol. 15, pp. 502–512, 2017. @article{VacaPalomares2017, The ability to inhibit automatic versus voluntary saccade commands in demanding situations can be impaired in neurodegenerative diseases such as Huntington's disease (HD). These deficits could result from disruptions in the interaction between basal ganglia and the saccade control system. To investigate voluntary oculomotor control deficits related to the cortico-basal circuitry, we evaluated early HD patients using an interleaved pro- and anti-saccade task that requires flexible executive control to generate either an automatic response (look at a peripheral visual stimulus) or a voluntary response (look away from the stimulus in the opposite direction). The impairments of HD patients in this task are mainly attributed to degeneration in the striatal medium spiny neurons leading to an over-activation of the indirect pathway through the basal ganglia. However, some studies have proposed that damage outside the indirect pathway also contributes to executive and saccade deficits. We used the interleaved pro- and anti-saccade task to study voluntary saccade inhibition deficits, and voxel-based morphometry and tract-based spatial statistics to map cortico-basal ganglia circuitry atrophy in HD. HD patients had voluntary saccade inhibition control deficits, including increased regular-latency anti-saccade errors and increased anticipatory saccades. These deficits correlated with white-matter atrophy in the inferior fronto-occipital fasciculus, anterior thalamic radiation, anterior corona radiata and superior longitudinal fasciculus. 
These findings suggest that cortico-basal ganglia white-matter atrophy in HD disrupts the normal connectivity in a network controlling voluntary saccade inhibitory behavior beyond the indirect pathway. This suggests that in vivo measures of white-matter atrophy can be a reliable marker of the progression of cognitive deficits in HD. |
Christian Valuch; Peter König; Ulrich Ansorge Memory-guided attention during active viewing of edited dynamic scenes Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 12, 2017. @article{Valuch2017, TV shows and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to what degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts. |
Mu Xia; Xueliu Li; Haiqing Zhong; Hong Li Fixation patterns of Chinese participants while identifying facial expressions on Chinese faces Journal Article In: Frontiers in Psychology, vol. 8, pp. 581, 2017. @article{Xia2017, Two experiments in this study were designed to explore a model of Chinese fixation with four types of native facial expressions—happy, peaceful, sad, and angry. In both experiments, participants performed an emotion recognition task while their behaviors and eye movements were recorded. Experiment 1 (24 participants, 12 men) demonstrated that both eye fixations and durations were lower for the upper part of the face than for the lower part of the face for all four types of facial expression. Experiment 2 (20 participants, 6 men) replicated this finding and excluded the disturbance of the fixation point. These results indicate that Chinese participants demonstrated a superiority effect for the lower part of the face while interpreting facial expressions, possibly due to the influence of eastern etiquette culture. |
Kitty Z. Xu; Brian A. Anderson; Erik E. Emeric; Anthony W. Sali; Veit Stuphorn; Steven Yantis; Susan M. Courtney Neural basis of cognitive control over movement inhibition: Human fMRI and primate electrophysiology evidence Journal Article In: Neuron, vol. 96, no. 6, pp. 1447–1458.e6, 2017. @article{Xu2017, Executive control involves the ability to flexibly inhibit or change an action when it is contextually inappropriate. Using the complementary techniques of human fMRI and monkey electrophysiology in a context-dependent stop signal task, we found a functional double dissociation between the right ventrolateral prefrontal cortex (rVLPFC) and the bilateral frontal eye field (FEF). Different regions of rVLPFC were associated with context-based signal meaning versus the intention to inhibit a response, while FEF activity corresponded to success or failure of the response inhibition regardless of the stimulus-response mapping or the context. These results were validated by electrophysiological recordings in rVLPFC and FEF from one monkey. Inhibition of a planned behavior is therefore likely not governed by a single brain system as had been previously proposed, but instead depends on two distinct neural processes involving different sub-regions of the rVLPFC and their interactions with other motor-related brain regions. Xu et al. present a rare combination of complementary evidence from human fMRI and primate neurophysiology, demonstrating that response inhibition is not directly accomplished by the rVLPFC, but instead requires multiple, distinct rVLPFC networks involving attention and contextual stimulus interpretation. |
Jacob L. Yates; Il Memming Park; Leor N. Katz; Jonathan W. Pillow; Alexander C. Huk Functional dissection of signal and noise in MT and LIP during decision-making Journal Article In: Nature Neuroscience, vol. 20, no. 9, pp. 1285–1292, 2017. @article{Yates2017, During perceptual decision-making, responses in the middle temporal (MT) and lateral intraparietal (LIP) areas appear to map onto theoretically defined quantities, with MT representing instantaneous motion evidence and LIP reflecting the accumulated evidence. However, several aspects of the transformation between the two areas have not been empirically tested. We therefore performed multistage systems identification analyses of the simultaneous activity of MT and LIP during individual decisions. We found that monkeys based their choices on evidence presented in early epochs of the motion stimulus and that substantial early weighting of motion was present in MT responses. LIP responses recapitulated MT early weighting and contained a choice-dependent buildup that was distinguishable from motion integration. Furthermore, trial-by-trial variability in LIP did not depend on MT activity. These results identify important deviations from idealizations of MT and LIP and motivate inquiry into sensorimotor computations that may intervene between MT and LIP. |
Takumi Yokosaka; Scinob Kuroki; Junji Watanabe; Shin'ya Nishida Linkage between free exploratory movements and subjective tactile ratings Journal Article In: IEEE Transactions on Haptics, vol. 10, no. 2, pp. 217–225, 2017. @article{Yokosaka2017, We actively move our hands and eyes when exploring the external world and gaining information about an object's attributes. Previous studies showing that how we touch might be related to how we feel led us to consider whether we could decode observers' subjective tactile experiences only by analyzing their exploratory movements, without explicitly asking how they perceived the materials. However, in those studies, explicit judgment tasks were performed about specific tactile attributes that were prearranged by the experimenters. Here, we systematically investigated whether exploratory movements can explain tactile ratings even when participants do not need to judge any tactile attributes. While measuring both hand and eye movements, we asked participants to touch materials freely without judging any specific tactile attributes (free-touch task) or to evaluate one of four tactile attributes (roughness, hardness, slipperiness, and temperature). We found that tactile ratings in the judgment tasks correlated with exploratory movements even in the free-touch task and that eye movements as well as hand movements correlated with tactile ratings. These results might open up the possibility of decoding tactile experiences by exploratory movements. |
Hongbo Yu; Yunyan Duan; Xiaolin Zhou Guilt in the eyes: Eye movement and physiological evidence for guilt-induced social avoidance Journal Article In: Journal of Experimental Social Psychology, vol. 71, no. March, pp. 128–137, 2017. @article{Yu2017, Guilt is widely acknowledged as an exemplary social emotion that is unpleasant but has positive interpersonal consequences. Previous empirical research focuses largely on documenting the behavioral consequences of guilt; less is known about the psychophysiology of experiencing guilt. Here we designed an interactive paradigm and asked participants to play multiple rounds of a dot-estimation task with two partners. Failure in the task, either due to the participant or due to the partner, would cause electric shocks to the partner. In Experiment 1, we asked the participant to watch video clips depicting the partner's facial expressions while the partner was receiving pain stimulation. Eye movement recording showed that the participant fixated less on the partner's eyes but more on the nose region in the participant-caused pain (high guilt) condition than in the partner-caused pain (low guilt) condition, an indication of social avoidance. In Experiment 2, we asked the participant to fixate on either the eye region (Eye Group) or the nose region (Nose Group) of the partner and recorded their skin conductance during the viewing. We found that the Eye Group exhibited a higher skin conductance response in the high guilt condition than in the low guilt condition and that such a difference was absent for the Nose Group, indicating that forced eye contact with the victim enhanced the emotional arousal of guilt. The life-like interactive paradigm is thus able to demonstrate the mutual dependence between eye contact and social emotions: eye contact both elicits and is regulated by emotional content in social interaction. |
Lili Yu; Qiaoming Zhang; Caspian Priest; Erik D. Reichle; Heather Sheridan Character-complexity effects in Chinese reading and visual search: A comparison and theoretical implications Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 140–151, 2017. @article{Yu2017b, Three eye-movement experiments were conducted to examine how the complexity of characters in Chinese words (i.e., number of strokes per character) influences their processing and eye-movement behaviour. In Experiment 1, English speakers with no significant knowledge of Chinese searched for specific low-, medium-, and high-complexity target characters in a multi-page narrative containing characters of varying complexity (3–16 strokes). Fixation durations and skipping rates were influenced by the visual complexity of both the target characters and the characters being searched even though participants had no knowledge of Chinese. In Experiment 2, native Chinese speakers performed the same character-search task, and a similar pattern of results was observed. Finally, in Experiment 3, a second sample of native Chinese speakers read the same text used in Experiments 1 and 2, with text characters again exhibiting complexity effects. These results collectively suggest that character-complexity effects on eye movements may not be due to lexical processing per se but may instead reflect whatever visual processing is required to know whether or not a character corresponds to an episodically represented target. The theoretical implications of this for our understanding of normal reading are discussed. |
Eckart Zimmermann; Ralph Weidner; Gereon R. Fink Spatiotopic updating of visual feature information Journal Article In: Journal of Vision, vol. 17, no. 12, pp. 6, 2017. @article{Zimmermann2017, Saccades shift the retina with high-speed motion. In order to compensate for the sudden displacement, the visuomotor system needs to combine saccade-related information and visual metrics. Many neurons in oculomotor but also in visual areas shift their receptive field shortly before the execution of a saccade (Duhamel, Colby, & Goldberg, 1992; Nakamura & Colby, 2002). These shifts supposedly enable the binding of information from before and after the saccade. It is a matter of current debate whether these shifts are merely location based (i.e., involve remapping of abstract spatial coordinates) or also comprise information about visual features. We have recently presented fMRI evidence for a feature-based remapping mechanism in visual areas V3, V4, and VO (Zimmermann, Weidner, Abdollahi, & Fink, 2016). In particular, we found fMRI adaptation in cortical regions representing a stimulus' retinotopic as well as its spatiotopic position. Here, we asked whether spatiotopic adaptation exists independently from retinotopic adaptation and which type of information is behaviorally more relevant after saccade execution. We first adapted at the saccade target location only and found a spatiotopic tilt aftereffect. Then, we simultaneously adapted both the fixation and the saccade target location but with opposite tilt orientations. As a result, adaptation from the fixation location was carried retinotopically to the saccade target position. The opposite tilt orientation at the retinotopic location altered the effects induced by spatiotopic adaptation. More precisely, it cancelled out spatiotopic adaptation at the saccade target location. We conclude that retinotopic and spatiotopic visual adaptation are independent effects. |
Joel L. Voss; Neal J. Cohen Hippocampal-cortical contributions to strategic exploration during perceptual discrimination Journal Article In: Hippocampus, vol. 27, no. 6, pp. 642–652, 2017. @article{Voss2017, The hippocampus is crucial for long-term memory; its involvement in short-term or immediate expressions of memory is more controversial. Rodent hippocampus has been implicated in an expression of memory that occurs on-line during exploration termed “vicarious trial-and-error” (VTE) behavior. VTE occurs when rodents iteratively explore options during perceptual discrimination or at choice points. It is strategic in that it accelerates learning and improves later memory. VTE has been associated with activity of rodent hippocampal neurons, and lesions of hippocampus disrupt VTE and associated learning and memory advantages. Analogous findings of VTE in humans would support the role of hippocampus in active use of short-term memory to guide strategic behavior. We therefore measured VTE using eye-movement tracking during perceptual discrimination and identified relevant neural correlates with fMRI. A difficult perceptual-discrimination task was used that required visual information to be maintained during a several-second trial, but with no long-term memory component. VTE accelerated discrimination. Neural correlates of VTE included robust activity of hippocampus and activity of a network of medial prefrontal and lateral parietal regions involved in memory-guided behavior. This VTE-related activity was distinct from activity associated with simply viewing visual stimuli and making eye movements during the discrimination task, which occurred in regions frequently associated with visual processing and eye-movement control. Subjects were mostly unaware of performing VTE, thus further distancing VTE from explicit long-term memory processing. 
These findings bridge the rodent and human literatures on neural substrates of memory-guided behavior, and provide further support for the role of hippocampus and a hippocampal-centered network of cortical regions in the immediate use of memory in on-line processing and the guidance of behavior. |
Basil Wahn; Supriya Murali; Scott Sinnett; Peter König Auditory stimulus detection partially depends on visuospatial attentional resources Journal Article In: i-Perception, vol. 8, no. 1, 2017. @article{Wahn2017, Humans' ability to detect relevant sensory information while being engaged in a demanding task is crucial in daily life. Yet, limited attentional resources restrict information processing. To date, it is still debated whether there are distinct pools of attentional resources for each sensory modality and to what extent the process of multisensory integration is dependent on attentional resources. We addressed these two questions using a dual task paradigm. Specifically, participants performed a multiple object tracking task and a detection task either separately or simultaneously. In the detection task, participants were required to detect visual, auditory, or audiovisual stimuli at varying stimulus intensities that were adjusted using a staircase procedure. We found that tasks significantly interfered. However, the interference was about 50% lower when tasks were performed in separate sensory modalities than in the same sensory modality, suggesting that attentional resources are partly shared. Moreover, we found that perceptual sensitivities were significantly improved for audiovisual stimuli relative to unisensory stimuli regardless of whether attentional resources were diverted to the multiple object tracking task or not. Overall, the present study supports the view that attentional resource allocation in multisensory processing is task-dependent and suggests that multisensory benefits are not dependent on attentional resources. |
Gabriel Wainstein; D. Rojas-Líbano; N. A. Crossley; X. Carrasco; F. Aboitiz; Tomás Ossandón Pupil size tracks attentional performance in attention-deficit/hyperactivity disorder Journal Article In: Scientific Reports, vol. 7, pp. 8228, 2017. @article{Wainstein2017, Attention-deficit/hyperactivity disorder (ADHD) diagnosis is based on reported symptoms, which carries the potential risk of over- or under-diagnosis. A biological marker that helps to objectively define the disorder, providing information about its pathophysiology, is needed. A promising marker of cognitive states in humans is pupil size, which reflects the activity of an ‘arousal' network, related to the norepinephrine system. We monitored pupil size from ADHD and control subjects, during a visuo-spatial working memory task. A sub group of ADHD children performed the task twice, with and without methylphenidate, a norepinephrine–dopamine reuptake inhibitor. Off-medication patients showed a decreased pupil diameter during the task. This difference was no longer present when patients were on-medication. Pupil size correlated with the subjects' performance and reaction time variability, two vastly studied indicators of attention. Furthermore, this effect was modulated by medication. Through pupil size, we provide evidence of an involvement of the noradrenergic system during an attentional task. Our results suggest that pupil size could serve as a biomarker in ADHD. |
Sonja Walcher; Christof Körner; Mathias Benedek Looking for ideas: Eye behavior during goal-directed internally focused cognition Journal Article In: Consciousness and Cognition, vol. 53, no. March, pp. 165–175, 2017. @article{Walcher2017, Humans have a highly developed visual system, yet we spend a high proportion of our time awake ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task. A letter-by-letter reading task served as external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e. searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. |
Sonja Walcher; Christof Körner; Mathias Benedek Data on eye behavior during idea generation and letter-by-letter reading Journal Article In: Data in Brief, vol. 15, pp. 18–24, 2017. @article{Walcher2017a, This article describes data from an idea generation task (alternate uses task; Guilford, 1967 [1]) and a letter-by-letter reading task under two background brightness conditions with healthy adults, as well as a baseline measurement and questionnaire data (SIPI, Huba et al., 1981 [2]; DDFS, Singer and Antrobus, 1972 [3]; RIBS, Runco et al., 2001 [4]). Data are hosted at the Open Science Framework (OSF): https://osf.io/fh66g/ (Walcher et al., 2017) [5]. There you will find eye tracking data, task performance data, questionnaire data, analysis scripts (in R; R Core Team, 2017 [6]), eye tracking paradigms (in Experiment Builder; SR Research Ltd. [7]), and graphs on pupil and angle of eye vergence dynamics. Data are interpreted and discussed in the article ‘Looking for ideas: Eye behavior during goal-directed internally focused cognition' (Walcher et al., 2017) [8]. |
Julian M. Wallace; Susana T. L. Chung; Bosco S. Tjan Object crowding in age-related macular degeneration Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 33, 2017. @article{Wallace2017, Crowding, the phenomenon of impeded object identification due to clutter, is believed to be a key limiting factor of form vision in the peripheral visual field. The present study provides a characterization of object crowding in age-related macular degeneration (AMD) measured at the participants' respective preferred retinal loci with binocular viewing. Crowding was also measured in young and age-matched controls at the same retinal locations, using a fixation-contingent display paradigm to allow unlimited stimulus duration. With objects, the critical spacing of crowding for AMD participants was not substantially different from controls. However, baseline contrast energy thresholds in the noncrowded condition were four times that of the controls. Crowding further exacerbated deficits in contrast sensitivity to three times the normal crowding-induced contrast energy threshold elevation. These findings indicate that contrast-sensitivity deficit is a major limiting factor of object recognition for individuals with AMD, in addition to crowding. Focusing on this more tractable deficit of AMD may lead to more effective remediation and technological assistance. |
Jiahui Wang; Pavlo D. Antonenko Instructor presence in instructional video: Effects on visual attention, recall, and perceived learning Journal Article In: Computers in Human Behavior, vol. 71, pp. 79–89, 2017. @article{Wang2017c, In an effort to enhance instruction and reach more students, educators design engaging online learning experiences, often in the form of online videos. While many instructional videos feature a picture-in-picture view of the instructor, it is not clear how instructor presence influences learners' visual attention and what it contributes to learning and affect. Given this knowledge gap, this study explored the impact of instructor presence on learning, visual attention, and perceived learning in mathematics instructional videos of varying content difficulty. Thirty-six participants each viewed two 10-min-long mathematics videos (easy and difficult topics), with the instructor either present or absent. Findings suggest that the instructor attracted considerable visual attention, particularly when learners viewed the video on the easy topic. Although no significant difference in learning transfer was found for either topic, participants' recall of information from the video was better for the easy topic when the instructor was present. Finally, instructor presence positively influenced participants' perceived learning and satisfaction for both topics and led to a lower level of self-reported mental effort for the difficult topic. |
Jianglan Wang; Jiao Zhao; Shoujing Wang; Rui Gong; Zhong Zheng; Longqian Liu Cognitive processing of orientation discrimination in anisometropic amblyopia Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0186221, 2017. @article{Wang2017f, Cognition is very important in our daily life; however, visual cognition is abnormal in amblyopia. Physiological changes in the brain during cognitive processing can be reflected in ERPs. The purpose of this study was therefore to investigate, using ERPs, the speed and the capacity of resource allocation in visual cognitive processing in an orientation discrimination task during monocular and binocular viewing conditions in amblyopia and normal controls, as well as between the corresponding eyes of the two groups. We also sought to investigate whether the speed and the capacity of resource allocation in visual cognitive processing vary with target stimuli at different spatial frequencies (3, 6 and 9 cpd) in amblyopia and normal controls as well as between the corresponding eyes of the two groups. Fifteen mild to moderate anisometropic amblyopes and ten normal controls were recruited. Three-stimulus oddball paradigms with three different spatial frequency orientation discrimination tasks were used in monocular and binocular conditions in amblyopes and normal controls to elicit event-related potentials (ERPs). Accuracy (ACC), reaction time (RT), the latency of novelty P300 and P3b, and the amplitude of novelty P300 and P3b were measured. Results showed that RT was longer in the amblyopic eye than in both eyes of amblyopes and the non-dominant eye in controls. Novelty P300 amplitude was largest in the amblyopic eye, followed by the fellow eye, and smallest in both eyes of amblyopes. Novelty P300 amplitude was larger in the amblyopic eye than the non-dominant eye and was larger in the fellow eye than the dominant eye. P3b latency was longer in the amblyopic eye than in the fellow eye, both eyes of amblyopes and the non-dominant eye of controls. 
P3b latency was not associated with RT in amblyopia. Neural responses of the amblyopic eye are abnormal at the middle and late stages of cognitive processing, indicating that the amblyopic eye needs to spend more time or integrate more resources to process the same visual task. The fellow eye and both eyes in amblyopia are slightly different from the dominant eye and both eyes in normal controls at the middle and late stages of cognitive processing. Meanwhile, the extent of the amblyopic eye's abnormality does not vary with the three different spatial frequencies used in our study. |
Shulin Yue; Zhenlan Jin; Chenggui Fan; Qian Zhang; Ling Li Interference between smooth pursuit and color working memory Journal Article In: Journal of Eye Movement Research, vol. 10, no. 3, pp. 1–10, 2017. @article{Yue2017, Spatial working memory (WM) and spatial attention are closely related, but the relationship between non-spatial WM and spatial attention still remains unclear. The present study aimed to investigate the interaction between color WM and smooth pursuit eye movements. A modified delayed-match-to-sample (DMS) paradigm was applied with 2 or 4 items presented in each visual field. Subjects memorized the colors of items in the cued visual field and smoothly moved their eyes towards or away from the memorized items during the retention interval, even though the colored items were no longer visible. WM performance decreased with higher load in general. More importantly, WM performance was better when subjects pursued towards rather than away from the cued visual field. Meanwhile, the pursuit gain decreased with higher load and was higher when pursuing away from the cued visual field. These results indicate that spatial attention, by guiding attention to the memorized items, benefits color WM. Therefore, we propose that a competition for attention resources exists between color WM and smooth pursuit eye movements. |
Steffi Zander; Stefanie Wetzel; Tim Kühl; Sven Bertel Underlying processes of an inverted personalization effect in multimedia learning - An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 8, pp. 2202, 2017. @article{Zander2017, One of the frequently examined design principles in multimedia learning is the personalization principle. Based on empirical evidence, this principle states that using personalised messages in multimedia learning is more beneficial than using formal language (e.g. using ‘you' instead of ‘the'). Although there is evidence that these slight changes in language style affect learning, motivation and perceived cognitive load, it remains unclear (1) whether the positive effects of personalised language transfer to all kinds of learning material content (e.g. specific, potentially aversive health issues) and (2) which processes (e.g. attention allocation) underlie the personalization effect. German university students (N = 37) learned symptoms and causes of cerebral haemorrhages with either a formal or a personalised version of the learning material. Analysis revealed results comparable to the few existing previous studies, indicating an inverted personalization effect for potentially aversive learning material. This effect was specifically revealed in decreased average fixation duration and number of fixations exclusively on the images in the personalised compared to the formal version. This result can be seen as an indicator of an inverted effect of personalization at the level of visual attention. |
Hang Zeng; Ralph Weidner; Gereon R. Fink; Qi Chen Neural correlates underlying the attentional spotlight in human parietal cortex independent of task difficulty Journal Article In: Human Brain Mapping, vol. 38, no. 10, pp. 4996–5018, 2017. @article{Zeng2017, Changes in the size of the attentional focus and task difficulty often co-vary. Nevertheless, the neural processes underlying the attentional spotlight process and task difficulty are likely to differ from each other. To differentiate between the two, we parametrically varied the size of the attentional focus in a novel behavioral paradigm while keeping visual processing difficulty either constant or not. A behavioral control experiment proved that the present behavioral paradigm could indeed effectively manipulate the size of the attentional focus per se, rather than affecting purely perceptual processes or surface processing. Imaging results showed that neural activity in a dorsal frontoparietal network, including right superior parietal cortex (SPL), was positively correlated with the size of the attentional spotlight, irrespective of whether task difficulty was constant or varied across different sizes of attentional focus. In contrast, neural activity in the ventral frontoparietal network, including the right inferior parietal cortex (IPL), was positively correlated with increasing task difficulty. Data suggest that sub-regions in parietal cortex are differentially involved in the attentional spotlight process and task difficulty: while SPL was involved in the attentional spotlight process independent of task difficulty, IPL was involved in the effect of task difficulty independent of the attentional spotlight process. |
Paul Zerr; Surya Gayet; Kees Mulder; Yaïr Pinto; Ilja Sligte; Stefan Van der Stigchel Remapping high-capacity, pre-attentive, fragile sensory memory Journal Article In: Scientific Reports, vol. 7, pp. 15940, 2017. @article{Zerr2017, Humans typically make several saccades per second. This provides a challenge for the visual system as locations are largely coded in retinotopic (eye-centered) coordinates. Spatial remapping, the updating of retinotopic location coordinates of items in visuospatial memory, is typically assumed to be limited to robust, capacity-limited and attention-demanding working memory (WM). Are pre-attentive, maskable, sensory memory representations (e.g. fragile memory, FM) also remapped? We directly compared trans-saccadic WM (tWM) and trans-saccadic FM (tFM) in a retro-cue change-detection paradigm. Participants memorized oriented rectangles, made a saccade and reported whether they saw a change in a subsequent display. On some trials a retro-cue indicated the to-be-tested item prior to probe onset. This allowed sensory memory items to be included in the memory capacity estimate. The observed retro-cue benefit demonstrates a tFM capacity considerably above tWM. This provides evidence that some, if not all, sensory memory was remapped to spatiotopic (world-centered, task-relevant) coordinates. In a second experiment, we show backward masks to be effective in retinotopic as well as spatiotopic coordinates, demonstrating that FM was indeed remapped to world-centered coordinates. Together this provides conclusive evidence that trans-saccadic spatial remapping is not limited to higher-level WM processes but also occurs for sensory memory representations. |
David Zeugin; Norhan Arfa; Michael Notter; Micah M. Murray; Silvio Ionta Implicit self-other discrimination affects the interplay between multisensory affordances of mental representations of faces Journal Article In: Behavioural Brain Research, vol. 333, pp. 282–285, 2017. @article{Zeugin2017, Face recognition is an apparently straightforward but, in fact, complex ability, encompassing the activation of at least visual and somatosensory representations. Understanding how identity shapes the interplay between these face-related affordances could clarify the mechanisms of self-other discrimination. To this aim, we exploited the so-called “face inversion effect” (FIE), a specific bias in the mental rotation of face images (of other people): relative to inanimate objects, face images require more time to be mentally rotated from an upside-down orientation. Via the FIE, which suggests the activation of somatosensory mechanisms, we assessed identity-related changes in the interplay between visual and somatosensory affordances between self- and other-face representations. Methodologically, to avoid the potential interference of the somatosensory feedback associated with musculoskeletal movements, we introduced the tracking of gaze direction to record participants' responses. Response times from twenty healthy participants showed a larger FIE for self- than for other-faces, suggesting that the impact of somatosensory affordances on the mental representation of faces varies according to identity. The present study lays the foundations of a quantifiable method to implicitly assess self-other discrimination, with possible translational benefits for early diagnosis of face processing disturbances (e.g. prosopagnosia), and for neurophysiological studies on self-other discrimination in ethological settings. |
Xuemeng Zhang; Shuaiyu Chen; Hong Chen; Yan Gu; Wenjian Xu General and food-specific inhibitory control as moderators of the effects of the impulsive systems on food choices Journal Article In: Frontiers in Psychology, vol. 8, pp. 802, 2017. @article{Zhang2017, The present study aimed to extend the application of the reflective-impulsive model to restrained eating and explore the effect of automatic attention (impulsive system) on food choices. Furthermore, we examined the moderating effects of general inhibitory control (G-IC) and food-specific inhibitory control (F-IC) on successful and unsuccessful restrained eaters (S-REs and US-REs, respectively). Automatic attention was measured using the EyeLink 1000, which tracked eye movements during the process of making food choices, and G-IC and F-IC were measured using the Stop-Signal Task. The results showed that food choices were related to automatic attention and that G-IC and F-IC moderated the predictive relationship between automatic attention and food choices. Furthermore, among S-REs, automatic attention to high caloric foods did not predict food choices, regardless of whether G-IC or F-IC was high or low. Whereas food choice was positively correlated with automatic attention among US-REs with poor F-IC, this pattern was not observed in those with poor G-IC. In conclusion, the S-REs had more effective self-management skills and their food choices were affected less by automatic attention and inhibitory control. Unsuccessful restrained eating was associated with poor F-IC (not G-IC) and greater automatic attention to high caloric foods. Thus, clinical interventions should focus on enhancing F-IC, not G-IC, and on reducing automatic attention to high caloric foods. |
Yan Zhang; Xiaoying Wang; Juan Wang; Lili Zhang; Yu Xiang Patterns of eye movements when observers judge female facial attractiveness Journal Article In: Frontiers in Psychology, vol. 8, pp. 1909, 2017. @article{Zhang2017a, The purpose of the present study is to explore the fixation patterns underlying explicit judgments of female facial attractiveness and to infer which features are used in those judgments. Facial attractiveness is of high importance for human interaction and social behavior. Behavioral studies on the perceptual cues for female facial attractiveness suggested three potentially important features: averageness, symmetry, and sexual dimorphism. However, none of these studies explained which regions of stimulus images influence observers' judgments. Therefore, the present research recorded the eye movements of 24 male and 19 female observers as they rated a set of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation is longer and more frequent on the noses of female faces than on their eyes and mouths (no difference exists between the eyes and the mouth); (2) the average pupil diameter at the nose region is bigger than that at the eyes and mouth (no difference exists between the eyes and the mouth); (3) male participants made significantly more fixations than female participants; and (4) observers first fixate on the eyes and mouth (no difference exists between the eyes and the mouth) before fixating on the nose area. In general, participants attend predominantly to the nose to form attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the area of the nose is vital in the judgment of facial attractiveness. This finding establishes a contribution of part-based processing to female facial attractiveness judgments, as revealed by eye tracking. |
Huixia Zhou; Sonja Rossi; Juan Li; Huanhuan Liu; Ran Chen; Baoguo Chen Effects of working memory capacity in processing wh-extractions: Eye-movement evidence from Chinese–English bilinguals Journal Article In: Journal of Research in Reading, vol. 40, no. 4, pp. 420–438, 2017. @article{Zhou2017a, By using the eye-tracking method, the present study explores whether working memory capacity assessed via the second language (L2) reading span (L2WMC) as well as the operational span task (OSPAN) affects the processing of subject-extraction and object-extraction in Chinese–English bilinguals. Results showed that L2WMC has no effects on the grammatical judgement accuracies, the first fixation duration, gaze duration, go-past times and total fixation duration of the critical regions in wh-extractions. In contrast, OSPAN influences the first fixation duration and go-past times of the critical regions in wh-extractions. Specifically, in region 1 (e.g., Who do you think loved the comedian [region 1] with [region 2] all his heart? [subject-extraction] versus Who do you think the comedian loved [region 1] with [region 2] all his heart? [object-extraction]), participants with high OSPAN were much slower than those with low OSPAN in their first fixation duration when reading subject-extractions, whereas there were no differences between participants with different OSPANs when reading object-extractions. In region 2, participants with high OSPAN were much faster than those with low OSPAN in their go-past times for object-extractions. These results indicate that individual differences in OSPAN, rather than in L2WMC, more strongly affect the processing of wh-extractions. Thus, OSPAN appears to be more suitable for exploring the influence of working memory on the processing of L2 sentences with complex syntax, at least for intermediately proficient bilinguals. Results of the study also provide further support for the Capacity Theory of Comprehension. |
Yang Zhou; Lixin Liang; Yujun Pan; Ning Qian; Mingsha Zhang Sites of overt and covert attention define simultaneous spatial reference centers for visuomotor response Journal Article In: Scientific Reports, vol. 7, pp. 46556, 2017. @article{Zhou2017c, The site of overt attention (fixation point) defines a spatial reference center that affects visuomotor response as indicated by the stimulus-response-compatibility (SRC) effect: When subjects press, e.g., a left key to report stimuli, their reaction time is shorter when stimuli appear to the left than to the right of the fixation. Covert attention to a peripheral site appears to define a similar reference center but previous studies did not control for confounding spatiotemporal factors or investigate the relationship between overt- and covert-attention-defined centers. Using an eye tracker to monitor fixation, we found an SRC effect relative to the site of covert attention induced by a flashed cue dot, and a concurrent reduction, but not elimination, of the overt-attention SRC effect. The two SRC effects jointly determined the overall motor reaction time. Since trials with different cue locations were randomly interleaved, the integration of the two reference centers must be updated online. When the cue was invalid and diminished covert attention, the covert-attention SRC effect disappeared and the overt-attention SRC effect retained full strength, excluding non-attention-based interpretations. We conclude that both covert- and overt-attention sites define visual reference centers that simultaneously contribute to motor response. |
Matthew R. Krause; Theodoros P. Zanos; Bennett A. Csorba; Praveen K. Pilly; Jaehoon Choe; Matthew E. Phillips; Abhishek Datta; Christopher C. Pack Transcranial direct current stimulation facilitates associative learning and alters functional connectivity in the primate brain Journal Article In: Current Biology, vol. 27, no. 20, pp. 3086–3096, 2017. @article{Krause2017, There has been growing interest in transcranial direct current stimulation (tDCS), a non-invasive technique purported to modulate neural activity via weak, externally applied electric fields. Although some promising preliminary data have been reported for applications ranging from stroke rehabilitation to cognitive enhancement, little is known about how tDCS affects the human brain, and some studies have concluded that it may have no effect at all. Here, we describe a macaque model of tDCS that allows us to simultaneously examine the effects of tDCS on brain activity and behavior. We find that applying tDCS to right prefrontal cortex improves monkeys' performance on an associative learning task. While firing rates do not change within the targeted area, tDCS does induce large low-frequency oscillations in the underlying tissue. These oscillations alter functional connectivity, both locally and between distant brain areas, and these long-range changes correlate with tDCS's effects on behavior. Together, these results are consistent with the idea that tDCS leads to widespread changes in brain activity and suggest that it may be a valuable method for cheaply and non-invasively altering functional connectivity in humans. Krause et al. test transcranial direct current stimulation (tDCS) in a realistic non-human primate model. Stimulation of prefrontal cortex (PFC) improved animals' associative learning while altering coherence between PFC and sensory areas. Their data suggest that tDCS may act by altering long-range connectivity between PFC and other brain areas. |
Mariska E. Kret; Carsten K. W. De Dreu Pupil-mimicry conditions trust in partners: Moderation by oxytocin and group membership Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 284, pp. 1–10, 2017. @article{Kret2017, Across species, oxytocin, an evolutionarily ancient neuropeptide, facilitates social communication by attuning individuals to conspecifics' social signals, fostering trust and bonding. The eyes have an important signalling function; and humans use their salient and communicative eyes to intentionally and unintentionally send social signals to others, by contracting the muscles around their eyes and pupils. In our earlier research, we observed that interaction partners with dilating pupils are trusted more than partners with constricting pupils. But over and beyond this effect, we found that the pupil sizes of partners synchronize and that when pupils synchronously dilate, trust is further boosted. Critically, this linkage between mimicry and trust was bound to interactions between ingroup members. The current study investigates whether these findings are modulated by oxytocin and sex of participant and partner. Using incentivized trust games with partners from ingroup and outgroup whose pupils dilated, remained static or constricted, this study replicates our earlier findings. It further reveals that (i) male participants withhold trust from partners with constricting pupils and extend trust to partners with dilating pupils, especially when given oxytocin rather than placebo; (ii) female participants trust partners with dilating pupils most, but this effect is blunted under oxytocin; (iii) under oxytocin rather than placebo, pupil dilation mimicry is weaker and pupil constriction mimicry stronger; and (iv) the link between pupil constriction mimicry and distrust observed under placebo disappears under oxytocin. We suggest that pupil-contingent trust is parochial and evolved in social species in and because of group life. 
|
Mariska E. Kret; Jeroen J. Stekelenburg; Beatrice de Gelder; Karin Roelofs From face to hand: Attentional bias towards expressive hands in social anxiety Journal Article In: Biological Psychology, vol. 122, pp. 42–50, 2017. @article{Kret2017a, The eye-region conveys important emotional information that we spontaneously attend to. Socially submissive individuals avoid others' gaze, which is regarded as avoidance of others' emotional face expressions. But this interpretation ignores the fact that there are other sources of emotional information besides the face. Here we investigate whether gaze-aversion is associated with increased attention to emotional signals from the hands. We used eye-tracking to compare the eye-fixations of pre-selected high and low socially anxious students as they labeled bodily expressions (Experiment 1), labeled bodily expressions with (non-)matching facial expressions (Experiment 2), or passively viewed them (Experiment 3). High compared to low socially anxious individuals attended more to hand-regions. Our findings demonstrate that socially anxious individuals do attend to emotions, albeit to different signals than the eyes and the face. Our findings call for a closer investigation of alternative viewing patterns explaining gaze-avoidance and underscore that other signals besides the eyes and face must be considered to reach conclusions about social anxiety. |
Magali Kreutzfeldt; Denise N. Stephan; Klaus Willmes; Iring Koch Modality-specific preparatory influences on the flexibility of cognitive control in task switching Journal Article In: Journal of Cognitive Psychology, vol. 29, no. 5, pp. 607–617, 2017. @article{Kreutzfeldt2017, In the current study, we addressed modality-specificity of the flexibility of cognitive control. We compared performance in single-task and mixed-tasks blocks between blocked auditory and visual stimuli assessing alternation costs (single vs. mixed). Mixed blocks comprised task switches only. The tasks consisted of numerical parity, magnitude, and distance judgments about numbers between one and nine without five. A cue indicated the relevant task. The cue–stimulus interval was varied (short vs. long interval) to examine preparation effects. The results indicated higher response times (RTs) and error rates (ERs) in mixed- vs. single-tasks blocks. The alternation costs in ERs were larger for auditory compared to visual stimulus presentation. Moreover, the reduction of RT alternation costs based on increased preparation time was more pronounced for the auditory modality compared to the visual modality. These results suggest a modality-specific influence on processes involved in maintaining and updating task sets in working memory. |
Philipp Kreyenmeier; Jolande Fooken; Miriam Spering Context effects on smooth pursuit and manual interception of a disappearing target Journal Article In: Journal of Neurophysiology, vol. 118, no. 1, pp. 404–415, 2017. @article{Kreyenmeier2017, In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object (“ball”) with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated “hit zone.” In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. |
Onkar Krishna; Kiyoharu Aizawa; Andrea Helo; Pia Rämä Gaze distribution analysis and saliency prediction across age groups Journal Article In: PLoS ONE, vol. 13, no. 2, pp. e0193149, 2017. @article{Krishna2017, Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults that, however, largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. Analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups. |
Anna B. Kuhns; Pascasie L. Dombert; Paola Mengotti; Gereon R. Fink; Simone Vossel Spatial attention, motor intention, and Bayesian cue predictability in the human brain Journal Article In: Journal of Neuroscience, vol. 37, no. 21, pp. 5334–5344, 2017. @article{Kuhns2017, Predictions about upcoming events influence how we perceive and respond to our environment. There is increasing evidence that predictions may be generated based upon previous observations following Bayesian principles, but little is known about the underlying cortical mechanisms and their specificity for different cognitive subsystems. The present study aimed at identifying common and distinct neural signatures of predictive processing in the spatial attentional and motor intentional system. Twenty-three female and male healthy human volunteers performed two probabilistic cueing tasks with either spatial or motor cues while lying in the fMRI scanner. In these tasks, the percentage of cue validity changed unpredictably over time. Trialwise estimates of cue predictability were derived from a Bayesian observer model of behavioral responses. These estimates were included as parametric regressors for analyzing the BOLD time series. Parametric effects of cue predictability in valid and invalid trials were considered to reflect belief updating by precision-weighted prediction errors. The brain areas exhibiting predictability-dependent effects dissociated between the spatial attention and motor intention task, with the right temporoparietal cortex being involved during spatial attention and the left angular gyrus and anterior cingulate cortex during motor intention. Connectivity analyses revealed that all three areas showed predictability-dependent coupling with the right hippocampus.
These results suggest that precision-weighted prediction errors of stimulus locations and motor responses are encoded in distinct brain regions, but that crosstalk with the hippocampus may be necessary to integrate new trialwise outcomes in both cognitive systems. |
Phillipp Kurtz; Katharine A. Shapcott; Jochen Kaiser; Joscha T. Schmiedt; Michael C. Schmid The influence of endogenous and exogenous spatial attention on decision confidence Journal Article In: Scientific Reports, vol. 7, pp. 6431, 2017. @article{Kurtz2017, Spatial attention allows us to make more accurate decisions about events in our environment. Decision confidence is thought to be intimately linked to the decision making process as confidence ratings are tightly coupled to decision accuracy. While both spatial attention and decision confidence have been subjected to extensive research, surprisingly little is known about the interaction between these two processes. Since attention increases performance it might be expected that confidence would also increase. However, two studies investigating the effects of endogenous attention on decision confidence found contradictory results. Here we investigated the effects of two distinct forms of spatial attention on decision confidence; endogenous attention and exogenous attention. We used an orientation-matching task, comparing the two attention conditions (endogenous and exogenous) to a control condition without directed attention. Participants performed better under both attention conditions than in the control condition. Higher confidence ratings than the control condition were found under endogenous attention but not under exogenous attention. This finding suggests that while attention can increase confidence ratings, it must be voluntarily deployed for this increase to take place. We discuss possible implications of this relative overconfidence found only during endogenous attention with respect to the theoretical background of decision confidence. |
Kaitlin E. W. Laidlaw; Alan Kingstone Fixations to the eyes aids in facial encoding; covertly attending to the eyes does not Journal Article In: Acta Psychologica, vol. 173, pp. 55–65, 2017. @article{Laidlaw2017, When looking at images of faces, people will often focus their fixations on the eyes. It has previously been demonstrated that the eyes convey important information that may improve later facial recognition. Whether this advantage requires that the eyes be fixated, or merely attended to covertly (i.e. while looking elsewhere), is unclear from previous work. While attending to the eyes covertly without fixating them may be sufficient, the act of using overt attention to fixate the eyes may improve the processing of important details used for later recognition. In the present study, participants were shown a series of faces and, in Experiment 1, asked to attend to them normally while avoiding looking at either the eyes or, as a control, the mouth (overt attentional avoidance condition); or in Experiment 2 fixate the center of the face while covertly attending to either the eyes or the mouth (covert attention condition). After the first phase, participants were asked to perform an old/new face recognition task. We demonstrate that a) when fixations to the eyes are avoided during initial viewing then subsequent face discrimination suffers, and b) covert attention to the eyes alone is insufficient to improve face discrimination performance. Together, these findings demonstrate that fixating the eyes provides an encoding advantage that is not availed by covert attention alone. |
Erika K. Hussey; J. Isaiah Harbison; Susan Teubner-Rhodes; Alan Mishler; Kayla Velnoskey; Jared M. Novick Memory and language improvements following cognitive control training Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 1, pp. 23–58, 2017. @article{Hussey2017, Cognitive control refers to adjusting thoughts and actions when confronted with conflict during information processing. We tested whether this ability is causally linked to performance on certain language and memory tasks by using cognitive control training to systematically modulate people's ability to resolve information-conflict across domains. Different groups of subjects trained on 1 of 3 minimally different versions of an n-back task: n-back-with-lures (High-Conflict), n-back-without-lures (Low-Conflict), or 3-back-without-lures (3-Back). Subjects completed a battery of recognition memory and language processing tasks that comprised both high- and low-conflict conditions before and after training. We compared the transfer profiles of (a) the High- versus Low-Conflict groups to test how conflict resolution training contributes to transfer effects, and (b) the 3-Back versus Low-Conflict groups to test for differences not involving cognitive control. High-Conflict training—but not Low-Conflict training—produced discernable benefits on several untrained transfer tasks, but only under selective conditions requiring cognitive control. This suggests that the conflict-focused intervention influenced functioning on ostensibly different outcome measures across memory and language domains. 3-Back training resulted in occasional improvements on the outcome measures, but these were not selective for conditions involving conflict resolution. We conclude that domain-general cognitive control mechanisms are plastic, at least temporarily, and may play a causal role in linguistic and nonlinguistic performance. |
John P. Hutson; Tim J. Smith; Joseph P. Magliano; Lester C. Loschky What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 46, pp. 1–30, 2017. @article{Hutson2017, Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. 
Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers' belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative. |
Duong Huynh; Srimant P. Tripathy; Harold E. Bedell; Haluk Öğmen The reference frame for encoding and retention of motion depends on stimulus set size Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 3, pp. 888–910, 2017. @article{Huynh2017, The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information. |
Guilhem Ibos; David J. Freedman Sequential sensory and decision processing in posterior parietal cortex Journal Article In: eLife, vol. 6, pp. 1–19, 2017. @article{Ibos2017, Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). |
Jaime S. Ide; Hsiang C. Tung; Cheng-Ta Yang; Yuan-Chi Tseng; Chiang-Shan R. Li In: Frontiers in Human Neuroscience, vol. 11, pp. 222, 2017. @article{Ide2017, Impulsivity is a personality trait of clinical importance. Extant research focuses on frontostriatal mechanisms of impulsivity and how executive functions are compromised in impulsive individuals. Imaging studies employing voxel based morphometry highlighted impulsivity-related changes in gray matter concentrations in a wide array of cerebral structures. In particular, whereas prefrontal cortical areas appear to show structural alterations in individuals with a neuropsychiatric condition, the findings are less than consistent in the healthy population. Here, in a sample (n = 113) of young adults assessed for Barratt impulsivity, we controlled for age, gender and alcohol use, and showed that higher impulsivity score is associated with increased gray matter volume (GMV) in bilateral medial parietal and occipital cortices known to represent the peripheral visual field. When impulsivity components were assessed, we observed that this increase in parieto-occipital cortical volume is correlated with inattention and non-planning but not motor subscore. In a separate behavioral experiment of 10 young adults, we demonstrated that impulsive individuals are more vulnerable to the influence of a distractor on target detection in an attention task. If replicated, these findings together suggest aberrant visual attention as a neural correlate of an impulsive personality trait in neurotypical individuals and need to be reconciled with the literature that focuses on frontal dysfunctions. |
Jessica L. Irons; Tamara Gradden; Angel Zhang; Xuming He; Nick Barnes; Adele F. Scott; Elinor McKone Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing Journal Article In: Vision Research, vol. 137, pp. 61–79, 2017. @article{Irons2017a, The visual prosthesis (or “bionic eye”) has become a reality but provides a low resolution view of the world. Simulating prosthetic vision in normal-vision observers, previous studies report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40 × 40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods to improve face identity recognition. Inspired by “bionic ear” improvements achieved by altering signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show caricaturing improved identity recognition in memory and/or perception (degree by which two faces look dissimilar) down to a resolution of 32 × 32 with 30% phosphene dropout. Findings imply caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants. |
Jessica L. Irons; Minjeong Jeon; Andrew B. Leber Pre-stimulus pupil dilation and the preparatory control of attention Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0188787, 2017. @article{Irons2017, Task preparation involves multiple component processes, including a general evaluative process that signals the need for adjustments in control, and the engagement of task-specific control settings. Here we examined the dynamics of these different mechanisms in preparing the attentional control system for visual search. We explored preparatory activity using pupil dilation, a well-established measure of task demands and effortful processing. In an initial exploratory experiment, participants were cued at the start of each trial to search for either a salient color singleton target (an easy search task) or a low-salience shape singleton target (a difficult search task). Pupil dilation was measured during the preparation period from cue onset to search display onset. Mean dilation was larger in preparation for the difficult shape target than the easy color target. In two additional experiments, we sought to vary effects of evaluative processing and task-specific preparation separately. Experiment 2 showed that when the color and shape search tasks were matched for difficulty, the shape target no longer evoked larger dilations, and the pattern of results was in fact reversed. In Experiment 3, we manipulated difficulty within a single feature dimension, and found that the difficult search task evoked larger dilations. These results suggest that pupil dilation reflects expectations of difficulty in preparation for a search task, consistent with the activity of an evaluative mechanism. We did not find consistent evidence for relationship between pupil dilation and search performance (accuracy and response timing), suggesting that pupil dilation during search preparation may not be strongly linked to ongoing task-specific preparation. |
Roxane J. Itier; Karly N. Neath-Tavares Effects of task demands on the early neural processing of fearful and happy facial expressions Journal Article In: Brain Research, vol. 1663, pp. 38–50, 2017. @article{Itier2017, Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination, and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the face nose was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. |
Miho Iwasaki; Kodai Tomita; Yasuki Noguchi Non-uniform transformation of subjective time during action preparation Journal Article In: Cognition, vol. 160, pp. 51–61, 2017. @article{Iwasaki2017, Although many studies have reported a distortion of subjective (internal) time during preparation and execution of actions, it is highly controversial whether actions cause a dilation or compression of time. In the present study, we tested the hypothesis that the previous controversy (dilation vs. compression) partly resulted from a mixture of two types of sensory inputs on which a time length was estimated; some studies asked subjects to measure the time of presentation for a single continuous stimulus (stimulus period, e.g. the duration of a long-lasting visual stimulus on a monitor) while others required estimation of a period without continuous stimulation (no-stimulus period, e.g. an inter-stimulus interval between two flashes). Results of our five experiments supported this hypothesis, showing that action preparation induced a dilation of a stimulus period, whereas a no-stimulus period was not subject to this dilation and could sometimes be compressed by action preparation. These results provide a new insight into the previous view assuming a uniform dilation or compression of subjective time by actions. Our findings on the distinction between stimulus and no-stimulus periods might also contribute to a resolution of the mixed results (action-induced dilation vs. compression) in the previous literature. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson In: Attention, Perception, and Psychophysics, vol. 79, no. 8, pp. 2338–2353, 2017. @article{Jabar2017, When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson Tuned by experience: How orientation probability modulates early perceptual processing Journal Article In: Vision Research, vol. 138, pp. 86–96, 2017. @article{Jabar2017a, Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive ‘P300' component which might be related to either surprise or decision-making. However, the early ‘C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. |
Min Suk Kang; Sori Kim; Kyoung-Min Lee Peripheral target identification performance modulates eye movements Journal Article In: Vision Research, vol. 133, pp. 81–86, 2017. @article{Kang2017a, We often shift our eyes to an interesting stimulus, but it is important to inhibit that eye movement in some environments (e.g., a no-look pass in basketball). Here, we investigated participants' ability to inhibit eye movements when they had to process a peripheral target with a requirement to maintain strict fixation. An array of eight letters composed of four characters was briefly presented and a directional cue was centrally presented to indicate the target location. The stimulus onset asynchrony (SOA) between the cue and the stimulus array was chosen from six values, consisting of pre-cue conditions (−400 and −200 ms), a simultaneous cue condition (0 ms), and post-cue conditions (200, 400, and 800 ms). We found the following: 1) participants shifted their eyes toward the cued location even though the stimulus array was absent at the onset of eye movements, but the eye movement amplitude was smaller than the actual location of the target; 2) eye movements occurred approximately 150 ms after the onset of the stimulus array in the pre-cue conditions and 250 ms after cue onset in the simultaneous and post-cue conditions; and 3) eye movement onsets were delayed and their amplitudes were smaller in correct trials than incorrect trials. These results indicate that the inhibitory process controlling eye movements also competes for cognitive resources like other cognitive processes. |
Yul H. R. Kang; Frederike H. Petzschner; Daniel M. Wolpert; Michael N. Shadlen Piercing of consciousness as a threshold-crossing operation Journal Article In: Current Biology, vol. 27, no. 15, pp. 1–11, 2017. @article{Kang2017, Many decisions arise through an accumulation of evidence to a terminating threshold. The process, termed bounded evidence accumulation (or drift diffusion), provides a unified account of decision speed and accuracy, and it is supported by neurophysiology in human and animal models. In many situations, a decision maker may not communicate a decision immediately and yet feel that at some point she had made up her mind. We hypothesized that this occurs when an accumulation of evidence reaches a termination threshold, registered, subjectively, as an “aha” moment. We asked human participants to make perceptual decisions about the net direction of dynamic random dot motion. The difficulty and viewing duration were controlled by the experimenter. After indicating their choice, participants adjusted the setting of a clock to the moment they felt they had reached a decision. The subjective decision times (tSDs) were faster on trials with stronger (easier) motion, and they were well fit by a bounded drift-diffusion model. The fits to the tSDs alone furnished parameters that fully predicted the choices (accuracy) of four of the five participants. The quality of the prediction provides compelling evidence that these subjective reports correspond to the terminating process of a decision rather than a post hoc inference or arbitrary report. Thus, conscious awareness of having reached a decision appears to arise when the brain's representation of accumulated evidence reaches a threshold or bound. We propose that such a mechanism might play a more widespread role in the “piercing of consciousness” by non-conscious thought processes. |
Maryam Kavyani; Alireza Farsi; Behrouz Abdoli; Raymond M. Klein Using the locus-of-slack logic to determine whether inhibition of return in a cue-target paradigm is delaying early or late stages of processing Journal Article In: Canadian Journal of Experimental Psychology, vol. 71, no. 1, pp. 63–70, 2017. @article{Kavyani2017, Inhibition of return (IOR) is a phenomenon characterized by slower responses to targets at cued locations relative to those at uncued locations. Based on the results of previous research, it has been suggested that IOR affects a process at the input end of the processing continuum when it is generated while the reflexive oculomotor system is suppressed (cf. Satel, Hilchey, Wang, Story, & Klein, 2013). To test this theory, we employed a modified psychological refractory period paradigm designed to elicit input IOR with visual stimuli, allowing us to use the locus-of-slack logic to determine whether an early or late stage of processing was inhibited by IOR. On each trial a visual cue was presented, followed by an auditory target (T1) and visual target (T2) separated by a target–target onset asynchrony (TTOA) of varying lengths (200 ms, 400 ms, or 800 ms). Participants (31 young adults) were instructed to ignore the cue and respond to the targets as quickly and accurately as possible. Eye tracking was used to ensure that participants actively suppressed eye movements during trials. As predicted, the inhibitory effect of the cue was observed at the longest TTOA but not when TTOAs were short, supporting our hypothesis that, when generated while the reflexive oculomotor system is suppressed, IOR affects processing before response selection. |
Carmen Keller; Alex Junghans In: Medical Decision Making, vol. 37, no. 8, pp. 942–954, 2017. @article{Keller2017, Background. Individuals with low numeracy have difficulties with understanding complex graphs. Combining the information-processing approach to numeracy with graph comprehension and information-reduction theories, we examined whether high numerates' better comprehension might be explained by their closer attention to task-relevant graphical elements, from which they would expect numerical information to understand the graph. Furthermore, we investigated whether participants could be trained in improving their attention to task-relevant information and graph comprehension. Design. In an eye-tracker experiment (N = 110) involving a sample from the general population, we presented participants with 2 hypothetical scenarios (stomach cancer, leukemia) showing survival curves for 2 treatments. In the training condition, participants received written instructions on how to read the graph. In the control condition, participants received another text. We tracked participants' eye movements while they answered 9 knowledge questions. The sum constituted graph comprehension. We analyzed visual attention to task-relevant graphical elements by using relative fixation durations and relative fixation counts. Results. The mediation analysis revealed a significant (P < 0.05) indirect effect of numeracy on graph comprehension through visual attention to task-relevant information, which did not differ between the 2 conditions. Training had a significant main effect on visual attention (P < 0.05) but not on graph comprehension (P < 0.07). Conclusions. Individuals with high numeracy have better graph comprehension due to their greater attention to task-relevant graphical elements than individuals with low numeracy. With appropriate instructions, both groups can be trained to improve their graph-processing efficiency. 
Future research should examine (e.g., motivational) mediators between visual attention and graph comprehension to develop appropriate instructions that also result in higher graph comprehension. |
Claire L. Kelly; Trevor J. Crawford; Emma Gowen; Kelly Richardson; Sandra I. Sünram-Lea A temporary deficiency in self-control: Can heightened motivation overcome this effect? Journal Article In: Psychophysiology, vol. 54, no. 5, pp. 773–779, 2017. @article{Kelly2017a, Self-control is important for everyday life and involves behavioral regulation. Self-control requires effort, and when completing two successive self-control tasks, there is typically a temporary drop in performance in the second task. High self-reported motivation and being made self-aware somewhat counteract this effect, with the result that performance in the second task is enhanced. The current study explored the relationship between self-awareness and motivation on sequential self-control task performance. Before employing self-control in an antisaccade task, participants initially applied self-control in an incongruent Stroop task or completed a control task. After the Stroop task, participants unscrambled sentences that primed self-awareness (each started with the word 'I') or unscrambled neutral sentences. Motivation was measured after the antisaccade task. Findings revealed that, after exerting self-control in the incongruent Stroop task, motivation predicted erroneous responses in the antisaccade task for those that unscrambled neutral sentences, and high motivation led to fewer errors. Those primed with self-awareness were somewhat more motivated overall, but motivation did not significantly predict antisaccade performance. Supporting the resource allocation account, if one was motivated, intrinsically or via the manipulation of self-awareness, resources were allocated to both tasks, leading to the successful completion of two sequential self-control tasks. |
Shah Khalid; Gernot Horstmann; Thomas Ditye; Ulrich Ansorge In: Psychological Research, vol. 81, no. 2, pp. 508–523, 2017. @article{Khalid2017, In the current study, we tested whether a fear advantage—rapid attraction of attention to fearful faces that is more stimulus-driven than to neutral faces—is emotion specific. We used a cueing task with face cues preceding targets. Cues were non-predictive of the target locations. In two experiments, we found enhanced cueing of saccades towards the targets with fearful face cues than with neutral face cues: Saccades towards targets were more efficient with cues and targets at the same position (under valid conditions) than at opposite positions (under invalid conditions), and this cueing effect was stronger with fearful than with neutral face cues. In addition, this cueing effect difference between fearful and neutral faces was absent with inverted faces as cues, indicating that the fear advantage is face-specific. We also show that emotion categorization of the face cues mirrored these effects: Participants were better at categorizing face cues as fearful or neutral with upright than with inverted faces (Experiment 1). Finally, in alternative blocks including disgusted faces instead of fearful faces, we found more similar cueing effects with disgusted faces and neutral faces, and with upright and inverted faces (Experiment 2). Jointly, these results demonstrate that the fear advantage is emotion-specific. Results are discussed in light of evolutionary explanations of the fear advantage. |
Sayed Hossein Khatoonabadi; Ivan V. Bajić; Yufeng Shan Compressed-domain visual saliency models: A comparative study Journal Article In: Multimedia Tools and Applications, vol. 76, no. 24, pp. 26297–26328, 2017. @article{Khatoonabadi2017, Computational modeling of visual saliency has become an important research problem in recent years, with applications in video quality estimation, video compression, object tracking, retargeting, summarization, and so on. While most visual saliency models for dynamic scenes operate on raw video, several models have been developed for use with compressed-domain information such as motion vectors and transform coefficients. This paper presents a comparative study of eleven such models as well as two high-performing pixel-domain saliency models on two eye-tracking datasets using several comparison metrics. The results indicate that highly accurate saliency estimation is possible based only on a partially decoded video bitstream. The strategies that have shown success in compressed-domain saliency modeling are highlighted, and certain challenges are identified as potential avenues for further improvement. |
Tim C. Kietzmann; Anna L. Gert; Frank Tong; Peter König Representational dynamics of facial viewpoint encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 637–651, 2017. @article{Kietzmann2017, Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity. |
Dongho Kim; Savannah Lokey; Sam Ling Elevated arousal levels enhance contrast perception Journal Article In: Journal of Vision, vol. 17, no. 2, pp. 1–10, 2017. @article{Kim2017a, Our state of arousal fluctuates from moment to moment—fluctuations that can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of decreased arousal state is an impaired ability to visually process our environment. |
Renske S. Hoedemaker; Peter C. Gordon The onset and time course of semantic priming during rapid recognition of visual words Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 5, pp. 881–902, 2017. @article{Hoedemaker2017a, In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. |
Alexandra Hoffmann; Ulrich Ettinger; Gustavo A. Reyes del Paso; Stefan Duschek Executive function and cardiac autonomic regulation in depressive disorders Journal Article In: Brain and Cognition, vol. 118, pp. 108–117, 2017. @article{Hoffmann2017, Executive function impairments have been frequently observed in depressive disorders. Moreover, reduced heart rate variability (HRV) has repeatedly been described, especially in the high frequency band (i.e., respiratory sinus arrhythmia, RSA), suggesting lower vagal cardiac outflow. The study tested the hypothesis of involvement of low vagal tone in executive dysfunction in depression. In addition to RSA, HRV in the low frequency (LF) band was assessed. In 36 patients with depression and 36 healthy subjects, electrocardiography recordings were accomplished at rest and during performance of five executive function tasks (number-letter task, n-back task, continuous performance test, flanker task, and antisaccade task). Patients displayed increased error rates and longer reaction times in the task-switching condition of the number-letter task, in addition to increased error rates in the n-back task and the final of two blocks of the antisaccade task. In patients, both HRV parameters were lower during all experimental phases. RSA correlated negatively with reaction time during task-switching. This finding confirms reduced performance across different executive functions in depression and suggests that, in addition to RSA, LF HRV is also diminished. However, the hypothesis of involvement of low parasympathetic tone in executive dysfunction related to depression received only limited support. |
Linus Holm; Olympia Karampela; Fredrik Ullén; Guy Madison Executive control and working memory are involved in sub-second repetitive motor timing Journal Article In: Experimental Brain Research, vol. 235, no. 3, pp. 787–798, 2017. @article{Holm2017, The nature of the relationship between timing and cognition remains poorly understood. Cognitive control is known to be involved in discrete timing tasks involving durations above 1 s, but has not yet been demonstrated for repetitive motor timing below 1 s. We examined the latter in two continuation tapping experiments, by varying the cognitive load in a concurrent task. In Experiment 1, participants repeated a fixed three finger sequence (low executive load) or a pseudorandom sequence (high load) with either 524-, 733-, 1024- or 1431-ms inter-onset intervals (IOIs). High load increased timing variability for 524 and 733-ms IOIs but not for the longer IOIs. Experiment 2 attempted to replicate this finding for a concurrent memory task. Participants retained three letters (low working memory load) or seven letters (high load) while producing intervals (524- and 733-ms IOIs) with a drum stick. High load increased timing variability for both IOIs. Taken together, the experiments demonstrate that cognitive control processes influence sub-second repetitive motor timing. |
Gernot Horstmann; Stefanie I. Becker; Daniel Ernst Dwelling, rescanning, and skipping of distractors explain search efficiency in difficult search better than guidance by the target Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 291–305, 2017. @article{Horstmann2017, Prominent models of overt and covert visual search focus on explaining search efficiency by visual guidance. That some searches are fast whereas others are slow is explained by the ability of the target to guide attention to the target's position. Comparably little attention is given to other variables that might also influence search efficiency, such as dwelling on distractors, skipping distractors, and revisiting distractors. Here, we examine the relative contributions of dwelling, skipping, rescanning, and the use of visual guidance, in explaining visual search times in general, and the similarity effect in particular. The hallmark of the similarity effect is more efficient search for a target that is dissimilar to the distractors compared to a target that is similar to the distractors. In the present experiment, participants have to find an emotional face target among nine neutral face non-targets. In different blocks, the target is either more or less similar to the non-targets. Eye-tracking is used to separately measure selection latency, dwelling on distractors, and skipping and revisiting of distractors. As expected, visual search times show a large similarity effect. Similarity also has strong effects on dwelling, skipping, and revisiting, but only weak effects on visual guidance. Regression analyses show that dwelling, skipping, and revisiting determine search times on trial level. The influence of dwelling and revisiting is stronger in target absent than in target present trials, whereas the opposite is true for skipping. The similarity effect is best explained by dwelling. Additionally, including a measure of guidance does not yield substantial benefits. 
In sum, results indicate that guidance by the target is not the sole principle behind fast search; rather, in slow search conditions distractors are skipped less often, visited more often, and dwelled on for longer. |
Michael C. Hout; Arryn Robbins; Hayward J. Godwin; Gemma Fitzsimmons; Collin Scarince Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles Journal Article In: Attention, Perception, and Psychophysics, vol. 79, pp. 1578–1592, 2017. @article{Hout2017, Unlike in laboratory visual search tasks—wherein participants are typically presented with a pictorial representation of the item they are asked to seek out—in real-world searches, the observer rarely has veridical knowledge of the visual features that define their target. During categorical search, observers look for any instance of a categorically defined target (e.g., helping a family member look for their mobile phone). In these circumstances, people may not have information about noncritical features (e.g., the phone's color), and must instead create a broad mental representation using the features that define (or are typical of) the category of objects they are seeking out (e.g., modern phones are typically rectangular and thin). In the current investigation (Experiment 1), using a categorical visual search task, we add to the body of evidence suggesting that categorical templates are effective enough to conduct efficient visual searches. When color information was available (Experiment 1a), attentional guidance, attention restriction, and object identification were enhanced when participants looked for categories with consistent features (e.g., ambulances) relative to categories with more variable features (e.g., sedans). When color information was removed (Experiment 1b), attention benefits disappeared, but object recognition was still better for feature-consistent target categories. In Experiment 2, we empirically validated the relative homogeneity of our societally important vehicle stimuli.
Taken together, our results are in line with a category-consistent view of categorical target templates (Yu, Maxfield, & Zelinsky in Psychological Science, 2016, doi:10.1177/0956797616640237), and suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification. |
Nicholas Huang; Mounya Elhilali Auditory salience using natural soundscapes Journal Article In: The Journal of the Acoustical Society of America, vol. 141, no. 3, pp. 2163–2176, 2017. @article{Huang2017a, Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience. |
Po Sheng Huang An exploratory study on remote associates problem solving: Evidence of eye movement indicators Journal Article In: Thinking Skills and Creativity, vol. 24, pp. 63–72, 2017. @article{Huang2017b, In recent years, remote associates problems have been widely used to measure creative processes. However, studies have rarely explored the processes involved in remote associates problem solving. The main purpose of this study was to record eye movements while participants solved twelve remote associates problems compiled by Huang (2014). The results show the following: (1) The mean fixation duration gradually increases throughout the problem-solving process, which indicates that more problem solvers encounter impasses over the course of problem solving. This result supports the “impasse encounter” phase of insight. (2) During the initial period of problem solving, individuals display more regression counts in the fixation region than in the key region, which supports the idea that the impasses are caused by an inappropriate initial representation. (3) During the middle period of the problem-solving process, the time individuals spend gazing at the key region increases, while the time that they spend gazing at the fixation region decreases. This pattern supports the “impasse resolution and insight” phase of insight. Finally, we compare the differences in eye movement between insight and remote associates problem solving. |