All EyeLink Eye Tracker Publications
All 14,000+ peer-reviewed EyeLink research publications up until 2025 (with some early 2026s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2015
Simona Amenta; Marco Marelli; Davide Crepaldi The fruitless effort of growing a fruitless tree: Early morpho-orthographic and morpho-semantic effects in sentence reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 5, pp. 1587–1596, 2015.
Abstract: In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way according to the sentence context to which they belong. This way, each target word was embedded in a sentence eliciting either its transparent or opaque interpretation. We analyzed whether the effect of stem frequency changes according to whether the (very same) word is read as a genuine derivation (transparent context) versus as a pseudoderived word (opaque context). Analysis of the first fixation durations revealed a stem-word frequency effect in both opaque and transparent contexts, thus showing that stems were accessed whether or not they contributed to word meaning, that is, word decomposition is indeed blind to semantics. However, while the stem-word frequency effect was facilitatory in the transparent context, it was inhibitory in the opaque context, thus showing an early involvement of semantic representations. This pattern of data is revealed by words with short suffixes. These results indicate that derived and pseudoderived words are segmented into their constituent morphemes also in natural reading; however, this blind-to-semantics process activates morpheme representations that are semantically connoted.
Claudia C. Gonzalez; Mark Mon-Williams; Melanie R. Burke Children and older adults exhibit distinct sub-optimal cost-benefit functions when preparing to move their eyes and hands Journal Article In: PLoS ONE, vol. 10, no. 2, pp. e0117783, 2015.
Abstract: Numerous activities require an individual to respond quickly to the correct stimulus. The provision of advance information allows response priming but heightened responses can cause errors (responding too early or reacting to the wrong stimulus). Thus, a balance is required between the online cognitive mechanisms (inhibitory and anticipatory) used to prepare and execute a motor response at the appropriate time. We investigated the use of advance information in 71 participants across four different age groups: (i) children, (ii) young adults, (iii) middle-aged adults, and (iv) older adults. We implemented 'cued' and 'non-cued' conditions to assess age-related changes in saccadic and touch responses to targets in three movement conditions: (a) Eyes only; (b) Hands only; (c) Eyes and Hand. Children made fewer saccade errors compared to young adults, but they also exhibited longer response times in cued versus non-cued conditions. In contrast, older adults showed faster responses in cued conditions but exhibited more errors. The results indicate that young adults (18–25 years) achieve an optimal balance between anticipation and execution. In contrast, children show benefits (few errors) and costs (slow responses) of good inhibition when preparing a motor response based on advance information; whilst older adults show the benefits and costs associated with a prospective response strategy (i.e., good anticipation).
Zhiya Liu; Xiaohong Song; Carol A. Seger; Peter J. Hills An eye-tracking study of multiple feature value category structure learning: The role of unique features Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0135729, 2015.
Abstract: We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.
James D. Retell; Dustin Venini; Stefanie I. Becker Oculomotor capture by new and unannounced color singletons during visual search Journal Article In: Attention, Perception, & Psychophysics, vol. 77, pp. 1529–1543, 2015.
Abstract: The surprise capture hypothesis states that a stimulus will capture attention to the extent that it is preattentively available and deviates from task-expectancies. Interestingly, it has been noted by Horstmann (Psychological Science, 13, 499–505, doi:10.1111/1467-9280.00488, 2002; Journal of Experimental Psychology: Human Perception and Performance, 31, 1039–1060, doi:10.1037/0096-1523.31.5.1039, 2005; Psychological Research, 70, 13–25, 2006) that the time course of capture by such classes of stimuli appears distinct from that of capture by expected stimuli. Specifically, attention shifts to an unexpected stimulus are delayed relative to an expected stimulus (delayed onset account). Across two experiments, we investigated this claim under conditions of unguided (Exp. 1) and guided (Exp. 2) search using eye movements as the primary index of attentional selection. In both experiments, we found strong evidence of surprise capture for the first presentation of an unannounced color singleton. However, in both experiments the pattern of eye movements was not consistent with a delayed onset account of attention capture. Rather, we observed costs associated with the unexpected stimulus only once the target had been selected. We propose an interference account of surprise capture to explain our data and argue that this account also can explain existing patterns of data in the literature.
Matthew J. Abbott; Adrian Staub The effect of plausibility on eye movements in reading: Testing E-Z Reader's null predictions Journal Article In: Journal of Memory and Language, vol. 85, pp. 76–87, 2015.
Abstract: The E-Z Reader 10 model of eye movements in reading (Reichle, Warren, & McConnell, 2009) posits that the process of word identification strictly precedes the process of integration of a word into its syntactic and semantic context. The present study reports a single large-scale (N=112) eyetracking experiment in which the frequency and plausibility of a target word in each sentence were factorially manipulated. The results were consistent with E-Z Reader's central predictions: frequency but not plausibility influenced the probability that the word was skipped over by the eyes rather than directly fixated, and the two variables had additive, not interactive, effects on all reading time measures. Evidence in favor of null effects and null interactions was obtained by computing Bayes factors, using the default priors and sampling methods for ANOVA models implemented by Rouder, Morey, Speckman, and Province (2012). The results suggest that though a word's plausibility may have a measurable influence as early as the first fixation duration on the target word, in fact plausibility may be influencing only a post-lexical processing stage, rather than lexical identification itself.
Jan Brascamp; Randolph Blake; Tomas Knapen Negligible fronto-parietal BOLD activity accompanying unreportable switches in bistable perception Journal Article In: Nature Neuroscience, vol. 18, no. 11, pp. 1672–1678, 2015.
Abstract: The human brain's executive systems have a vital role in deciding and selecting among actions. Selection among alternatives also occurs in the perceptual domain; for instance, when perception switches between interpretations during perceptual bistability. Whether executive systems also underlie this functionality remains debated, with known fronto-parietal concomitants of perceptual switches being variously interpreted as reflecting the switches' cause or as reflecting their consequences. We developed a procedure in which the two eyes receive different inputs and perception demonstrably switches between these inputs, yet the switches themselves are so inconspicuous as to become unreportable, minimizing their executive consequences. Fronto-parietal fMRI BOLD responses that accompanied perceptual switches were similarly minimized in this procedure, indicating that these reflect the switches' consequences rather than their cause. We conclude that perceptual switches do not always rely on executive brain areas and that processes responsible for selection among alternatives may operate outside the brain's executive systems.
Steve W. C. Chang; Nicholas A. Fagan; Koji Toda; Amanda V. Utevsky; John M. Pearson; Michael L. Platt Neural mechanisms of social decision-making in the primate amygdala Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 52, pp. 16012–16017, 2015.
Significance: Making social decisions requires evaluation of benefits and costs to self and others. Long associated with emotion and vigilance, neurons in primate amygdala also signal reward and punishment as well as information about the faces and eyes of others. Here we show that neurons in the basolateral amygdala signal the value of rewards for self and others when monkeys make social decisions. These value-mirroring neurons reflected monkeys' tendency to make prosocial decisions on a momentary as well as long-term basis. We also found that delivering the social peptide oxytocin into basolateral amygdala enhances both prosocial tendencies and attention to the recipients of prosocial decisions. Our findings endorse the amygdala as a critical neural nexus regulating social decisions.
Abstract: Social decisions require evaluation of costs and benefits to oneself and others. Long associated with emotion and vigilance, the amygdala has recently been implicated in both decision-making and social behavior. The amygdala signals reward and punishment, as well as facial expressions and the gaze of others. Amygdala damage impairs social interactions, and the social neuropeptide oxytocin (OT) influences human social decisions, in part, by altering amygdala function. Here we show in monkeys playing a modified dictator game, in which one individual can donate or withhold rewards from another, that basolateral amygdala (BLA) neurons signaled social preferences both across trials and across days. BLA neurons mirrored the value of rewards delivered to self and others when monkeys were free to choose but not when the computer made choices for them. We also found that focal infusion of OT unilaterally into BLA weakly but significantly increased both the frequency of prosocial decisions and attention to recipients for context-specific prosocial decisions, endorsing the hypothesis that OT regulates social behavior, in part, via amygdala neuromodulation. Our findings demonstrate both neurophysiological and neuroendocrinological connections between primate amygdala and social decisions.
Tessa Warren; Evelyn Milburn; Nikole D. Patson; Michael Walsh Dickey Comprehending the impossible: what role do selectional restriction violations play? Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 8, pp. 932–939, 2015.
Abstract: To elucidate how different kinds of knowledge are used during comprehension, readers' eye movements were monitored as they read sentences that were: plausible, impossible because of a selectional restriction violation (SRV), or impossible because of a violation of general world knowledge. Eye movements on the pre-critical, critical, and post-critical words evidenced disruption in the SRV condition compared to the other two conditions. These findings suggest that disruption associated with reading about impossible events is not directly determined by how impossible the event seems. Rather, the relationship between the verb and arguments in the sentence seems to matter. These findings are the strongest evidence to date that processing effects associated with selectional restrictions can dissociate from those associated with general world knowledge about events.
Yangqing Xu; Steven L. Franconeri Capacity for visual features in mental rotation Journal Article In: Psychological Science, vol. 26, no. 8, pp. 1241–1251, 2015.
Abstract: Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts.
Franziska Kretzschmar; Matthias Schlesewsky; Adrian Staub Dissociating word frequency and predictability effects in reading: Evidence from coregistration of eye movements and EEG Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1648–1662, 2015.
Abstract: Two very reliable influences on eye fixation durations in reading are word frequency, as measured by corpus counts, and word predictability, as measured by cloze norming. Several studies have reported strictly additive effects of these 2 variables. Predictability also reliably influences the amplitude of the N400 component in event-related potential studies. However, previous research suggests that while frequency affects the N400 in single-word tasks, it may have little or no effect on the N400 when a word is presented with a preceding sentence context. The present study assessed this apparent dissociation between the results from the 2 methods using a coregistration paradigm in which the frequency and predictability of a target word were manipulated while readers' eye movements and electroencephalograms were simultaneously recorded. We replicated the pattern of significant, and additive, effects of the 2 manipulations on eye fixation durations. We also replicated the predictability effect on the N400, time-locked to the onset of the reader's first fixation on the target word. However, there was no indication of a frequency effect in the electroencephalogram record. We suggest that this pattern has implications both for the interpretation of the N400 and for the interpretation of frequency and predictability effects in language comprehension.
Lalitta Suriya-Arunroj; Alexander Gail I plan therefore I choose: Free-choice bias due to prior action-probability but not action-value Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 315, 2015.
Abstract: According to an emerging view, decision-making and motor planning are tightly entangled at the level of neural processing. Choice is influenced not only by the values associated with different options, but is also biased by other factors. Here we test the hypothesis that preliminary action planning can induce choice biases gradually and independently of objective value when planning overlaps with one of the potential action alternatives. Subjects performed center-out reaches obeying either a clockwise or counterclockwise cue-response rule in two tasks. In the probabilistic task, a pre-cue indicated the probability of each of the two potential rules to become valid. When the subsequent rule-cue unambiguously indicated which of the pre-cued rules was actually valid (instructed trials), subjects responded faster to rules pre-cued with higher probability. When subjects were allowed to choose freely between two equally rewarded rules (choice trials) they chose the originally more likely rule more often and faster, despite the lack of an objective advantage in selecting this target. In the amount task, the pre-cue indicated the amount of potential reward associated with each rule. Subjects responded faster to rules pre-cued with higher reward amount in instructed trials of the amount task, equivalent to the more likely rule in the probabilistic task. Yet, in contrast, subjects showed hardly any choice bias and no increase in response speed in favor of the original high-reward target in the choice trials of the amount task. We conclude that free-choice behavior is robustly biased when predictability encourages the planning of one of the potential responses, while prior reward expectations without action planning do not induce such strong bias. Our results provide behavioral evidence for distinct contributions of expected value and action planning in decision-making and a tight interdependence of motor planning and action selection, supporting the idea that the underlying neural mechanisms overlap.
Helen E. Jones; Ian M. Andolina; Stewart D. Shipp; Daniel L. Adams; Javier Cudeiro; Thomas E. Salt; Adam M. Sillito Figure-ground modulation in awake primate thalamus Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 22, pp. 7085–7090, 2015.
Abstract: Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.
Sara Spotorno; George L. Malcolm; Benjamin W. Tatler Disentangling the effects of spatial inconsistency of targets and distractors when searching in realistic scenes Journal Article In: Journal of Vision, vol. 15, no. 2, pp. 1–21, 2015.
Abstract: Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation.
Basil Wahn; Peter König Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration Journal Article In: Frontiers in Psychology, vol. 6, pp. 1084, 2015.
Abstract: Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Basil Wahn; Peter König Vision and haptics share spatial attentional resources and visuotactile integration is not affected by high attentional load Journal Article In: Multisensory Research, vol. 28, no. 3-4, pp. 371–392, 2015.
Abstract: Human information processing is limited by attentional resources. Two questions that are discussed in multisensory research are (1) whether there are separate spatial attentional resources for each sensory modality and (2) whether multisensory integration is influenced by attentional load. We investigated these questions using a dual task paradigm: Participants performed two spatial tasks (a multiple object tracking ['MOT'] task and a localization ['LOC'] task) either separately (single task condition) or simultaneously (dual task condition). In the MOT task, participants visually tracked a small subset of several randomly moving objects. In the LOC task, participants either received visual, tactile, or redundant visual and tactile location cues. In the dual task condition, we found a substantial decrease in participants' performance and an increase in participants' mental effort (indicated by an increase in pupil size) relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of whether they received visual, tactile, or redundant multisensory (visual and tactile) location cues in the LOC task. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the tactile and visual modality. Also, we found that participants integrated redundant multisensory information optimally even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) spatial attentional resources for the tactile and visual modality overlap and that (2) the integration of spatial cues from these two modalities occurs at an early pre-attentive processing stage.
Dominik R. Bach; Nicholas Furl; Gareth Barnes; Raymond J. Dolan Sustained magnetic responses in temporal cortex reflect instantaneous significance of approaching and receding sounds Journal Article In: PLoS ONE, vol. 10, no. 7, pp. e0134060, 2015.
Abstract: Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate, in humans, the dynamic encoding of approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, where the signal unfolded in a manner reminiscent of evidence accumulation. This may help our understanding of how acoustic percepts are evaluated as behaviourally relevant, with our results highlighting a crucial role of cortical areas.
Tommy C. Blanchard; Benjamin Y. Hayden Monkeys are more patient in a foraging task than in a standard intertemporal choice task Journal Article In: PLoS ONE, vol. 10, no. 2, pp. e0117057, 2015.
Abstract: Studies of animal impulsivity generally find steep subjective devaluation, or discounting, of delayed rewards - often on the order of a 50% reduction in value in a few seconds. Because such steep discounting is highly disfavored in evolutionary models of time preference, we hypothesize that discounting tasks provide a poor measure of animals' true time preferences. One prediction of this hypothesis is that estimates of time preferences based on these tasks will lack external validity, i.e. fail to predict time preferences in other contexts. We examined choices made by four rhesus monkeys in a computerized patch-leaving foraging task interleaved with a standard intertemporal choice task. Monkeys were significantly more patient in the foraging task than in the intertemporal choice task. Patch-leaving behavior was well fit by parameter-free optimal foraging equations but poorly fit by the hyperbolic discount parameter obtained from the intertemporal choice task. Day-to-day variation in time preferences across the two tasks was uncorrelated with each other. These data are consistent with the conjecture that seemingly impulsive behavior in animals is an artifact of their difficulty understanding the structure of intertemporal choice tasks, and support the idea that animals are more efficient rate maximizers in the multi-second range than intertemporal choice tasks would suggest.
R. Becket Ebitz; Michael L. Platt Neuronal activity in primate dorsal anterior cingulate cortex signals task conflict and predicts adjustments in pupil-linked arousal Journal Article In: Neuron, vol. 85, no. 3, pp. 628–640, 2015.
Abstract: Whether driving a car, shopping for food, or paying attention in a classroom of boisterous teenagers, it's often hard to maintain focus on goals in the face of distraction. Brain imaging studies in humans implicate the dorsal anterior cingulate cortex (dACC) in regulating the conflict between goals and distractors. Here we show that single dACC neurons signal conflict between task goals and distractors in the rhesus macaque, particularly for biologically relevant social stimuli. For some neurons, task conflict signals predicted subsequent changes in pupil size (a peripheral index of arousal linked to noradrenergic tone) associated with reduced distractor interference. dACC neurons also responded to errors, and these signals predicted adjustments in pupil size. These findings provide the first neurophysiological endorsement of the hypothesis that dACC regulates conflict, in part, via modulation of pupil-linked processes such as arousal.
Michele Fornaciai; Paola Binda Effect of saccade automaticity on perisaccadic space compression Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 127, 2015.
Abstract: Briefly presented stimuli occurring just before or during a saccadic eye movement are mislocalized, leading to a compression of visual space toward the target of the saccade. In most cases this has been measured in subjects over-trained to perform a stereotyped and unnatural task where saccades are repeatedly driven to the same location, marked by a highly salient abrupt onset. Here, we asked to what extent the pattern of perisaccadic mislocalization depends on this specific context. We addressed this question by studying perisaccadic localization in a set of participants with no prior experience in eye-movement research, measuring localization performance as they practiced the saccade task. Localization was marginally affected by practice over the course of the experiment and it was indistinguishable from the performance of expert observers. The mislocalization also remained similar when the expert observers were tested in a condition leading to less stereotypical saccadic behavior, with no abrupt onset marking the saccade target location. These results indicate that perisaccadic compression is a robust behavior, insensitive to the specific paradigm used to drive saccades and to the level of practice with the saccade task.
J. D. Silvis; Katya Olmos-Solis; M. Donk The nature of the global effect beyond the first eye movement Journal Article In: Vision Research, vol. 108, pp. 20–32, 2015. @article{ssd15,When two or more visual objects appear in close proximity, the initial oculomotor response is systematically aimed at a location in between the objects, a phenomenon named the global effect. The global effect is known to arise when saccades are initiated relatively quickly, immediately after the presentation of a display, but it has also been shown that a global effect may occur much later in time, even for eye movements beyond the first. That is, when participants are searching for a complex target among complex distractor objects, it can take several eye movements to hit the target, and these eye movements mainly land at intermediate locations. It is debatable whether these findings are caused by the same mechanisms as those involved in the more typical global effect studies, studies in which much simpler search tasks are employed. In the current two experiments, we examined whether and under which circumstances a global effect can be found for a second oculomotor response in a search display containing two simple objects. Experiment 1 showed that the global effect only occurs when the presentation of the target and distractor objects is delayed, until after the first oculomotor response is initiated. Experiment 2 demonstrated that identity information, rather than spatial information, is crucial for the occurrence of the global effect. These results suggest that the global effect is not due to a failure to dissociate between the locations of multiple objects, but a failure to determine which one is the target. |
Wieske Zoest; Dirk Kerzel The effects of saliency on manual reach trajectories and reach target selection Journal Article In: Vision Research, vol. 113, pp. 179–187, 2015. @article{Zoest2015,Reaching trajectories curve toward salient distractors, reflecting the competing activation of reach plans toward target and distractor stimuli. We investigated whether the relative saliency of target and distractor influenced the curvature of the movement and the selection of the final endpoint of the reach. Participants were asked to reach a bar tilted to the right in a context of gray vertical bars. A bar tilted to the left served as distractor. Relative stimulus saliency was varied via color: either the distractor was red and the target was gray, or vice versa. Throughout, we observed that reach trajectories deviated toward the distractor. Surprisingly, relative saliency had no effect on the curvature of reach trajectories. Moreover, when we increased time pressure in separate experiments and analyzed the curvature as a function of reaction time, no influence of relative stimulus saliency was found, not even for the fastest reaction times. If anything, curvature decreased with strong time pressure. In contrast, reach target selection under strong time pressure was influenced by relative saliency: reaches with short reaction times were likely to go to the red distractor. The time course of reach target selection was comparable to saccadic target selection. Implications for the neural basis of trajectory deviations and target selection in manual and eye movements are discussed. |
Tommy C. Blanchard; Caleb E. Strait; Benjamin Y. Hayden Ramping ensemble activity in dorsal anterior cingulate neurons during persistent commitment to a decision Journal Article In: Journal of Neurophysiology, vol. 114, no. 4, pp. 2439–2449, 2015. @article{Blanchard2015a,We frequently need to commit to a choice to achieve our goals; however, the neural processes that keep us motivated in pursuit of delayed goals remain obscure. We examined ensemble responses of neurons in macaque dorsal anterior cingulate cortex (dACC), an area previously implicated in self-control and persistence, in a task that requires commitment to a choice to obtain a reward. After reward receipt, dACC neurons signaled reward amount with characteristic ensemble firing rate patterns; during the delay in anticipation of the reward, ensemble activity smoothly and gradually came to resemble the postreward pattern. On the subset of risky trials, in which a reward was anticipated with 50% certainty, ramping ensemble activity evolved to the pattern associated with the anticipated reward (and not with the anticipated loss) and then, on loss trials, took on an inverted form anticorrelated with the form associated with a win. These findings enrich our knowledge of reward processing in dACC and may have broader implications for our understanding of persistence and self-control. |
Michel Failing; Tom Nissens; Daniel Pearson; Mike Le Pelley; Jan Theeuwes Oculomotor capture by stimuli that signal the availability of reward Journal Article In: Journal of Neurophysiology, vol. 114, no. 4, pp. 2316–2327, 2015. @article{Failing2015,It is well known that eye movement patterns are influenced by both goal- and salience-driven factors. Recent studies, however, have demonstrated that objects that are nonsalient and task irrelevant can still capture our eyes if moving our eyes to those objects has previously produced reward. Here we demonstrate that training such an association between eye movements to an object and delivery of reward is not needed. Instead, an object that merely signals the availability of reward captures the eyes even when it is physically nonsalient and never relevant for the task. Furthermore, we show that oculomotor capture by reward is more reliably observed in saccades with short latencies. We conclude that a stimulus signaling high reward has the ability to capture the eyes independently of bottom-up physical salience or top-down task relevance and that the effect of reward affects early selection processes. |
William W. Sprague; Emily A. Cooper; Ivana Tošić; Martin S. Banks Stereopsis is adaptive for the natural environment Journal Article In: Science Advances, vol. 1, pp. e1400254, 2015. @article{Sprague2015,Humans and many animals have forward-facing eyes providing different views of the environment. Precise depth estimates can be derived from the resulting binocular disparities, but determining which parts of the two retinal images correspond to one another is computationally challenging. To aid the computation, the visual system focuses the search on a small range of disparities. We asked whether the disparities encountered in the natural environment match that range. We did this by simultaneously measuring binocular eye position and three-dimensional scene geometry during natural tasks. The natural distribution of disparities is indeed matched to the smaller range of correspondence search. Furthermore, the distribution explains the perception of some ambiguous stereograms. Finally, disparity preferences of macaque cortical neurons are consistent with the natural distribution. |
Jeroen D. Silvis; Artem V. Belopolsky; Jozua W. I. Murris; Mieke Donk The effects of feature-based priming and visual working memory on oculomotor capture Journal Article In: PLoS ONE, vol. 10, no. 11, pp. e0142696, 2015. @article{sbmd15,Recently, it has been demonstrated that objects held in working memory can influence rapid oculomotor selection. This has been taken as evidence that perceptual salience can be modified by active working memory representations. The goal of the present study was to examine whether these results could also be caused by feature-based priming. In two experiments, participants were asked to saccade to a target line segment of a certain orientation that was presented together with a to-be-ignored distractor. Both objects were given a task-irrelevant color that varied per trial. In a secondary task, a color had to be memorized, and that color could match the color of the target, match the color of the distractor, or match the color of none of the objects in the search task. The memory task was completed either after the search task (Experiment 1), or before it (Experiment 2). The results showed that in both experiments the memorized color biased oculomotor selection. Eye movements were more frequently drawn towards objects that matched the memorized color, irrespective of whether the memory task was completed after (Experiment 1) or before (Experiment 2) the search task. This bias was particularly prevalent in short-latency saccades. The results show that early oculomotor selection performance is not only affected by properties that are actively maintained in working memory but also by those previously memorized. Both working memory and feature priming can cause early biases in oculomotor selection. |
Petra Warschburger; Claudia Calvano; Eike M. Richter; Ralf Engbert Analysis of attentional bias towards attractive and unattractive body regions among overweight males and females: An eye-movement study Journal Article In: PLoS ONE, vol. 10, no. 10, pp. e0140813, 2015. @article{Warschburger2015,BACKGROUND: Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others' attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias. METHODS/DESIGN: We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants' own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires. DISCUSSION: The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one's own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. 
This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results. |
Mathias Abegg; Dario Pianezzi; Jason J. S. Barton A vertical asymmetry in saccades Journal Article In: Journal of Eye Movement Research, vol. 8, no. 5, pp. 1–10, 2015. @article{Abegg2015,Visual exploration of natural scenes imposes demands that differ between the upper and the lower visual hemifield. Yet little is known about how ocular motor performance is affected by the location of visual stimuli or the direction of a behavioural response. We compared saccadic latencies between upper and lower hemifield in a variety of conditions, including short-latency prosaccades, long-latency prosaccades, antisaccades, memory-guided saccades and saccades with increased attentional and selection demand. All saccade types, except memory guided saccades, had shorter latencies when saccades were directed towards the upper field as compared to downward saccades (p<0.05). This upper field reaction time advantage probably arises in ocular motor rather than visual processing. It may originate in structures involved in motor preparation rather than execution. |
Andreas Gartus; Nicolas Klemer; Helmut Leder The effects of visual context and individual differences on perception and evaluation of modern art and graffiti art Journal Article In: Acta Psychologica, vol. 156, pp. 64–76, 2015. @article{Gartus2015,Traditionally, artworks are seen as autonomous objects that stand (or should stand) on their own. However, at least since the emergence of Conceptual Art in the 1920s and Pop Art in the 1960s, art lacks any distinctive perceptual features that define it as such. Art, therefore, cannot be defined without reference to its context. Some studies have shown that context affects the evaluation of artworks, and that specific contexts (street for graffiti art, museum for modern art) elicit specific effects (Gartus & Leder, 2014). However, it is yet unclear how context changes perception and appreciation processes. In our study we measured eye-movements while participants (64 psychology undergraduates, 48% women) perceived and evaluated beauty, interest, emotional valence, as well as perceived style for modern art and graffiti art embedded into either museum or street contexts. For modern art, beauty and interest ratings were higher in a museum than in a street context, but context made no difference for the ratings of graffiti art. Importantly, we also found an interaction of context and individual interest in graffiti for beauty and interest ratings, as well as for number of fixations. Analyses of eye-movements also revealed that viewing times were in general significantly longer in museum than in street contexts. We conclude that context can have an important influence on aesthetic appreciation. However, some effects depend also on the style of the artworks and the individual art interests of the viewers. |
Tai-Hsiang Huang; Su-Ling Yeh; Yung-Hao Yang; Hsin-I Liao; Ya-Yeh Tsai; Pai-Ju Chang; Homer H. Chen Method and experiments of subliminal cueing for real-world images Journal Article In: Multimedia Tools and Applications, vol. 74, no. 22, pp. 10111–10135, 2015. @article{Huang2015,Unconscious attention shift triggered by a subliminal cue has been shown to be automatic; however, whether it can be brought into effect for images of real-world scenes remains to be investigated. We present a subliminal cueing method that flashes briefly a visual cue before presenting a real-world image to the viewer. The effectiveness of the method is verified by experiments using three types of cues (spatial cue, face cue, and object cue) of varied durations. Results show that depending on the cue type, the viewer's visual attention is directed to the cued visual hemifield or the cued location without engaging the viewer's awareness. The experiments demonstrate that a brief subliminal cue presented prior to the color image of a real-world complex scene can attract human visual attention. The method is useful for many applications that require efficient, unresisting attention shift to a target image area. |
Olave E. Krigolson; Cameron D. Hassall; Jason Satel; Raymond M. Klein The impact of cognitive load on reward evaluation Journal Article In: Brain Research, vol. 1627, pp. 225–232, 2015. @article{Krigolson2015,The neural systems that afford our ability to evaluate rewards and punishments are impacted by a variety of external factors. Here, we demonstrate that increased cognitive load reduces the functional efficacy of a reward processing system within the human medial–frontal cortex. In our paradigm, two groups of participants used performance feedback to estimate the exact duration of one second while electroencephalographic (EEG) data was recorded. Prior to performing the time estimation task, both groups were instructed to keep their eyes still and avoid blinking in line with well-established EEG protocol. However, during performance of the time-estimation task, one of the two groups was provided with trial-to-trial feedback about their performance on the time-estimation task and their eye movements to induce a higher level of cognitive load relative to participants in the other group who were solely provided with feedback about the accuracy of their temporal estimates. In line with previous work, we found that the higher level of cognitive load reduced the amplitude of the feedback-related negativity, a component of the human event-related brain potential associated with reward evaluation within the medial–frontal cortex. Importantly, our results provide further support that increased cognitive load reduces the functional efficacy of a neural system associated with reward processing. |
Phillip C. F Law; Bryan K. Paton; Jacqueline A. Riddiford; Caroline T. Gurvich; Trung T. Ngo; Steven M. Miller No relationship between binocular rivalry rate and eye-movement profiles in healthy individuals: A Bayes factor analysis Journal Article In: Perception, vol. 44, no. 5, pp. 643–661, 2015. @article{Law2015,Binocular rivalry (BR) is an intriguing phenomenon in which conflicting images are presented, one to each eye, resulting in perceptual alternations between each image. The rate of BR has been proposed as a potential endophenotype for bipolar disorder because (a) it is well established that this highly heritable psychiatric condition is associated with slower BR rate than in controls, and (b) an individual's BR rate is approximately 50% genetically determined. However, eye movements (EMs) could potentially account for the slow BR trait given EM anomalies are observed in psychiatric populations, and there has been report of an association between saccadic rate and BR rate in healthy individuals. Here, we sought to assess the relationship between BR rate and EMs in healthy individuals (N = 40, mean age = 34.4) using separate BR and EM tasks, with the latter measuring saccades during anticipatory, antisaccade, prosaccade, self-paced, free-viewing, and smooth-pursuit tasks. No correlation was found between BR rate and any EM measure for any BR task (p > .01) with substantial evidence favoring this lack of association (BF01 > 3). This finding is in contrast to previous data and has important implications for using BR rate as an endophenotype. If replicated in clinical psychiatric populations, EM interpretations of the slow BR trait can be excluded. |
Xiao-Qing Li; Hai-Yan Zhao; Yuan-Yuan Zheng; Yu-Fang Yang Two-stage interaction between word order and noun animacy during online thematic processing of sentences in Mandarin Chinese Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 5, pp. 555–573, 2015. @article{Li2015,How different sources of linguistic information are used during online language comprehension is a central question in psycholinguistic research. This study used eye-tracking and electrophysiological techniques to investigate how and when word order and noun animacy interact with each other during online thematic processing of Mandarin Chinese sentences. The initial argument in the sentence is animate or inanimate and the following verb disambiguates it as an agent or patient. The results at the verb revealed that, at the early processing stage, the patient-first sentences elicited longer gaze duration and larger N400 than the agent-first ones only when the initial argument was inanimate; however, at the late stage, the patient-first sentences elicited prolonged second-pass time and enhanced P600 only when the initial argument was animate. In addition, the brain oscillations at the verb also showed different patterns in the early and later window latencies. The present results suggested that the online thematic processing of Mandarin Chinese sentences involves not only universal processing strategies (subject-preference) but also language-specific strategies. That is, in Mandarin Chinese, noun animacy interacts with word order immediately during online sentence comprehension; the initial processing results can be overridden by additional interpretively relevant information types at a later stage. Those results provided important indications for language comprehension models. |
Signe Bray; Ramsha Almas; Aiden E. G. F. Arnold; Giuseppe Iaria; Glenda Macqueen Intraparietal sulcus activity and functional connectivity supporting spatial working memory manipulation Journal Article In: Cerebral Cortex, vol. 25, no. 5, pp. 1252–1264, 2015. @article{Bray2015,The intraparietal sulcus (IPS) is recruited during tasks requiring attention, maintenance and manipulation of information in working memory (WM). While WM tasks often show broad bilateral engagement along the IPS, topographic maps of contralateral (CL) visual space have been identified along the IPS, similar to retinotopic maps in visual cortex. In the present study, we asked how these visuotopic IPS regions are differentially involved in the maintenance and manipulation of spatial information in WM. Visuotopic mapping was performed in 26 participants to define regions of interest along the IPS, corresponding to previously described IPS0-4. In a separate task, we showed that while maintaining the location of a briefly flashed target in WM preferentially engaged CL IPS, manipulation of spatial information by mentally rotating the target around a circle engaged bilateral IPS, peaking in IPS1 in most participants. Functional connectivity analyses showed increased interaction between the IPS and prefrontal regions during manipulation, as well as interhemispheric interactions. Two control tasks demonstrated that covert attention shifts, and nonspatial manipulation (arithmetic), engaged patterns of IPS activation and connectivity that were distinct from WM manipulation. These findings add to our understanding of the role of IPS in spatial WM maintenance and manipulation. |
Xiaowei Li; Bin Hu; Tingting Xu; Ji Shen; Martyn Ratcliffe A study on EEG-based brain electrical source of mild depressed subjects Journal Article In: Computer Methods and Programs in Biomedicine, vol. 120, no. 3, pp. 135–141, 2015. @article{Li2015a,Background and objective: Several abnormal brain regions are known to be linked to depression, including amygdala, orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC) etc. The aim of this study is to apply EEG (electroencephalogram) data analysis to investigate, with respect to mild depression, whether there exists dysregulation in these brain regions. Methods: EEG sources were assessed from 9 healthy and 9 mildly depressed subjects who were classified according to the Beck Depression Inventory (BDI) criteria. t-Test was used to calculate the eye movement data and standardized low resolution tomography (sLORETA) was used to correlate EEG activity. Results: A comparison of eye movement data between the healthy and mildly depressed subjects showed that mildly depressed subjects spent more time viewing negative emotional faces. Comparison of the EEG from the two groups indicated higher theta activity in BA6 (Brodmann area) and higher alpha activity in BA38. Conclusions: EEG source location results suggested that temporal pole activity was dysregulated, and eye-movement data analysis showed that mildly depressed subjects paid much more attention to negative face expressions, which is also in accordance with the results of EEG source location. |
Paul Roux; Christine Passerieux; Franck Ramus An eye-tracking investigation of intentional motion perception in patients with schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 40, no. 2, pp. 118–125, 2015. @article{rpr15,BACKGROUND: Schizophrenia has been characterized by an impaired attribution of intentions in social interactions. However, it remains unclear to what extent poor performance may be due to low-level processes or to later, higher-level stages or to what extent the deficit reflects an over- (hypermentalization) or underattribution of intentions (hypomentalization). METHODS: We evaluated intentional motion perception using a chasing detection paradigm in individuals with schizophrenia or schizoaffective disorder and in healthy controls while eye movements were recorded. Smooth pursuit was measured as a control task. Eye-tracking was used to dissociate ocular from cognitive stages of processing. RESULTS: We included 27 patients with schizophrenia, 2 with schizoaffective disorder and 29 controls in our analysis. As a group, patients had lower sensitivity to the detection of chasing than controls, but showed no bias toward the "chasing present" response. Patients showed a slightly different visual exploration strategy, which affected their ocular sensitivity to chasing. They also showed a decreased cognitive sensitivity to chasing that was not explained by differences in smooth pursuit ability, in visual exploration strategy or in general cognitive abilities. LIMITATIONS: It is not clear whether the deficit in intentional motion detection demonstrated in this study might be explained by a general deficit in motion perception in individuals with schizophrenia or whether it is specific to the social domain. 
CONCLUSION: Participants with schizophrenia showed a hypomentalization deficit: they adopted suboptimal visual exploration strategies and had difficulties deciding whether a chase was present or not, even when their eye movement revealed that chasing information had been seen correctly. |
David J. Schaeffer; Lingxi Chi; Cynthia E. Krafft; Qingyang Li; Nicolette F. Schwarz; Jennifer E. Mcdowell Individual differences in working memory moderate the relationship between prosaccade latency and antisaccade error rate Journal Article In: Psychophysiology, vol. 52, no. 4, pp. 605–608, 2015. @article{Schaeffer2015,Cognitive control is required for flexible responses in changing environments and can be assessed by measuring antisaccade error rate. Considerable variance in antisaccade error rate is observed in healthy participants, which motivated the current study to explore the cognitive factors affecting antisaccade performance. Relationships exist between prosaccade latency and antisaccade error rate, with faster prosaccade latencies linked to more antisaccade errors. Individual differences in working memory also impact saccadic performance. The current study tested the relationships among prosaccade latency, antisaccade error rate, and working memory in 153 healthy participants. Correlation and multiple regression analyses demonstrated that prosaccade latency predicted antisaccade error rate, and working memory moderated this relationship. These results may help elucidate individual differences in cognitive control among healthy individuals. |
Mathias Klinghammer; Gunnar Blohm; Katja Fiehler Contextual factors determine the use of allocentric information for reaching in a naturalistic scene Journal Article In: Journal of Vision, vol. 15, no. 13, pp. 1–13, 2015. @article{Klinghammer2015,Numerous studies have demonstrated that humans incorporate allocentric information when reaching toward visual targets. So far, it is unclear how this information is integrated into the movement plan when multiple allocentric cues are available. In this study we investigated whether and how the extent of spatial changes and the task relevance of allocentric cues influence reach behavior. To this end, we conducted two experiments where we presented participants three-dimensional rendered images of a naturalistic breakfast scene on a computer screen. The breakfast scene included multiple objects (allocentric cues) with a subset of objects functioning as potential reach targets (i.e., they were task-relevant). Participants freely viewed the scene and after a short delay, the scene reappeared with one object missing (target) and other objects being shifted left- or rightwards. Afterwards, participants were asked to reach toward the target position on a gray screen while fixating the screen center. We found systematic deviations of reach endpoints in the direction of object shifts which varied with the number of objects shifted, but only if these objects served as potential reach targets. Our results suggest that the integration of allocentric information into the reach plan is determined by contextual factors, in particular by the extent of spatial cue changes and the task-relevance of allocentric cues. |
Francesc Llorens; Daniel Sanabria; Florentino Huertas; Enrique Molina; Simon J. Bennett Intense physical exercise reduces overt attentional capture Journal Article In: Journal of Sport and Exercise Psychology, vol. 37, no. 5, pp. 559–564, 2015. @article{Llorens2015,The abrupt onset of a visual stimulus typically results in overt attentional capture, which can be quantified by saccadic eye movements. Here, we tested whether attentional capture following onset of task-irrelevant visual stimuli (new object) is reduced after a bout of intense physical exercise. A group of participants performed a visual search task in two different activity conditions: rest, without any prior effort, and effort, immediately after an acute bout of intense exercise. The results showed that participants exhibited (1) slower reaction time of the first saccade toward the target when a new object was simultaneously presented in the visual field, but only in the rest activity condition, and (2) more saccades to the new object in the rest activity condition than in the effort activity condition. We suggest that immediately after an acute bout of effort, participants improved their ability to inhibit irrelevant (distracting) stimuli. |
Gonçalo Padrão; Borja Rodriguez-Herreros; Laura Pérez Zapata; Antoni Rodriguez-Fornells Exogenous capture of medial-frontal oscillatory mechanisms by unattended conflicting information Journal Article In: Neuropsychologia, vol. 75, pp. 458–468, 2015. @article{Padrao2015,A long-standing debate in psychology and cognitive neuroscience concerns the way in which unattended information is processed and influences goal-directed behavior. Although selective attention allows us to filter out task-irrelevant information, there is a substantial number of unattended, yet relevant, events that must be evaluated in a flexible manner so that appropriate behaviors can succeed. Here we inspected the extent to which unattended conflicting visual information, which cannot be consciously identified, influences behavior and activates medial prefrontal cortex (mPFC) mechanisms of action-monitoring and regulation, traditionally associated with conscious control processes. To that end, we performed two experiments using a novel variant of the Eriksen flanker task in which spatial attention was manipulated, preventing the conscious identification of unattended visual events. The first behavioral experiment was conducted to validate the efficacy of the novel paradigm. In the second experiment, we evaluated electrophysiological correlates of mPFC activity (a frontocentral negative ERP component and medial-frontal theta oscillations) in response to attended and unattended conflicting events. The results of both experiments demonstrated that attended and unattended conflicting stimuli altered subjects' behavior in a similar fashion, i.e. slowing down their reaction times and increasing their error rates. 
Importantly, the results of the EEG experiment showed that unattended conflicting stimuli, similarly to attended conflicting stimuli, led to an increase in theta-related frontocentral ERP activity and medial-frontal theta power, irrespective of the degree of conscious representation of the sources of conflict. This study provides evidence that medial-frontal theta oscillations represent a neural mechanism through which the mPFC may suppress and regulate potentially inappropriate actions that are automatically triggered by conflicting environmental stimuli to which we are oblivious. |
Andreas Sprenger; Frederik D. Weber; Bjoern Machner; Silke Talamo; Sabine Scheffelmeier; Judith Bethke; Christoph Helmchen; Steffen Gais; Hubert Kimmig; Jan Born Deprivation and recovery of sleep in succession enhances reflexive motor behavior Journal Article In: Cerebral Cortex, vol. 25, no. 11, pp. 4610–4618, 2015. @article{Sprenger2015,Sleep deprivation impairs inhibitory control over reflexive behavior, and this impairment is commonly assumed to dissipate after recovery sleep. Contrary to this belief, here we show that fast reflexive behaviors, when practiced during sleep deprivation, are consolidated across recovery sleep and, thereby, become preserved. As a model for the study of sleep effects on prefrontal cortex-mediated inhibitory control in humans, we examined reflexive saccadic eye movements (express saccades), as well as speeded 2-choice finger motor responses. Different groups of subjects were trained on a standard prosaccade gap paradigm before periods of nocturnal sleep and sleep deprivation. Saccade performance was retested in the next morning and again 24 h later. The rate of express saccades was not affected by sleep after training, but slightly increased after sleep deprivation. Surprisingly, this increase augmented even further after recovery sleep and was still present 4 weeks later. Additional experiments revealed that the short testing after sleep deprivation was sufficient to increase express saccades across recovery sleep. An increase in speeded responses across recovery sleep was likewise found for finger motor responses. Our findings indicate that recovery sleep can consolidate motor disinhibition for behaviors practiced during prior sleep deprivation, thereby persistently enhancing response automatization. |
Suzanne R. Jongman; Antje S. Meyer; Ardi Roelofs The role of sustained attention in the production of conjoined noun phrases: An individual differences study Journal Article In: PLoS ONE, vol. 10, no. 9, pp. e0137557, 2015. @article{Jongman2015a,It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed. |
Naotoshi Abekawa; Hiroaki Gomi Online gain update for manual following response accompanied by gaze shift during arm reaching Journal Article In: Journal of Neurophysiology, vol. 113, no. 4, pp. 1206–1216, 2015. @article{Abekawa2015,To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination. |
Patrick H. Cox; Maximilian Riesenhuber There is a "U" in clutter: Evidence for robust sparse codes underlying clutter tolerance in human vision Journal Article In: Journal of Neuroscience, vol. 35, no. 42, pp. 14148–14159, 2015. @article{Cox2015,The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. 
This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter. SIGNIFICANCE STATEMENT: The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter. |
Samanthi C. Goonetilleke; Leor N. Katz; Daniel K. Wood; Chao Gu; Alexander C. Huk; Brian D. Corneil In: Journal of Neurophysiology, vol. 114, no. 2, pp. 902–913, 2015. @article{Goonetilleke2015,Recent studies have described a phenomenon wherein the onset of a peripheral visual stimulus elicits short-latency (<100 ms) stimulus-locked recruitment (SLR) of neck muscles in nonhuman primates (NHPs), well before any saccadic gaze shift. The SLR is thought to arise from visual responses within the intermediate layers of the superior colliculus (SCi), hence neck muscle recordings may reflect presaccadic activity within the SCi, even in humans. We obtained bilateral intramuscular recordings from splenius capitis (SPL, an ipsilateral head-turning muscle) from 28 human subjects performing leftward or rightward visually guided eye-head gaze shifts. Evidence of an SLR was obtained in 16/55 (29%) of samples; we also observed examples where the SLR was present only unilaterally. We compared these human results with those recorded from a sample of eight NHPs from which recordings of both SPL and deeper suboccipital muscles were available. Using the same criteria, evidence of an SLR was obtained in 8/14 (57%) of SPL recordings, but in 26/29 (90%) of recordings from suboccipital muscles. Thus, both species-specific and muscle-specific factors contribute to the low SLR prevalence in human SPL. Regardless of the presence of the SLR, neck muscle activity in both human SPL and in NHPs became predictive of the reaction time of the ensuing saccadic gaze shift ~70 ms after target appearance; such pregaze recruitment likely reflects developing SCi activity, even if the tectoreticulospinal pathway does not reliably relay visually related activity to SPL in humans. |
Taylor R. Hayes; Alexander A. Petrov; Per B. Sederberg Do we really become smarter when our fluid-intelligence test scores improve? Journal Article In: Intelligence, vol. 48, pp. 1–14, 2015. @article{Hayes2015,Recent reports of training-induced gains on fluid intelligence tests have fueled an explosion of interest in cognitive training-now a billion-dollar industry. The interpretation of these results is questionable because score gains can be dominated by factors that play marginal roles in the scores themselves, and because intelligence gain is not the only possible explanation for the observed control-adjusted far transfer across tasks. Here we present novel evidence that the test score gains used to measure the efficacy of cognitive training may reflect strategy refinement instead of intelligence gains. A novel scanpath analysis of eye movement data from 35 participants solving Raven's Advanced Progressive Matrices on two separate sessions indicated that one-third of the variance of score gains could be attributed to test-taking strategy alone, as revealed by characteristic changes in eye-fixation patterns. When the strategic contaminant was partialled out, the residual score gains were no longer significant. These results are compatible with established theories of skill acquisition suggesting that procedural knowledge tacitly acquired during training can later be utilized at posttest. Our novel method and result both underline a reason to be wary of purported intelligence gains, but also provide a way forward for testing for them in the future. |
Suzanne R. Jongman; Ardi Roelofs; Antje S. Meyer Sustained attention in language production: An individual differences investigation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 4, pp. 710–730, 2015. @article{Jongman2015,Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources. |
Matúš Šimkovic; Birgit Träuble Pursuit tracks chase: Exploring the role of eye movements in the detection of chasing Journal Article In: PeerJ, vol. 3, pp. 1–36, 2015. @article{st2015,We explore the role of eye movements in a chase detection task. Unlike previous studies, which focused on overall performance as indicated by response speed and chase detection accuracy, we decompose the search process into gaze events such as smooth eye movements and use a data-driven approach to separately describe these gaze events. We measured eye movements of four human subjects engaged in a chase detection task displayed on a computer screen. The subjects were asked to detect two chasing rings among twelve other randomly moving rings. Using principal component analysis and support vector machines, we looked at the template and classification images that describe various stages of the detection process. We showed that the subjects mostly search for pairs of rings that move one after another in the same direction with a distance of 3.5–3.8 degrees. To find such pairs, the subjects first looked for regions with a high ring density and then pursued the rings in this region. Most of these groups consisted of two rings. Three subjects preferred to pursue the pair as a single object, while the remaining subject pursued the group by alternating the gaze between the two individual rings. In the discussion, we argue that subjects do not compare the movement of the pursued pair to a singular preformed template that describes a chasing motion. Rather, subjects bring certain hypotheses about what motion may qualify as a chase and then, through feedback, they learn to look for a motion pattern that maximizes their performance. |
Stefanie I. Becker; Amanda J. Lewis Oculomotor capture by irrelevant onsets with and without color contrast Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 60–71, 2015. @article{Becker2015,It is widely known that irrelevant onsets (i.e., items appearing in previously empty locations) can automatically capture attention and attract our gaze. Some studies have shown that onset capture is stronger when the onset distractor matches the target feature, indicating that onset capture can be modulated by feature-based (top-down) tuning to the target. However, it is less clear whether and to what extent the perceptual saliency of the distractor can further modulate this effect. This study examined the effects of target similarity, competition between target and distractor, and bottom-up color contrast on the ability of an onset distractor to capture the gaze, by varying the color (contrast) and stimulus-onset asynchrony of the onset distractor. The results clearly show that competition and feature-based attention modulate capture by the irrelevant onset to a large extent, whereas bottom-up color contrasts do not modulate onset capture. These results indicate the need to revise current accounts of gaze control. |
Xingshan Li; Pingping Liu; Keith Rayner Saccade target selection in Chinese reading Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 2, pp. 524–530, 2015. @article{Li2015b,In Chinese reading, there are no spaces to mark the word boundaries, so Chinese readers cannot target their saccades to the center of a word. In this study, we investigated how Chinese readers decide where to move their eyes during reading. To do so, we introduced a variant of the boundary paradigm in which only the target stimulus remained on the screen, displayed at the saccade landing site, after the participant's eyes crossed an invisible boundary. We found that when the saccade target was a word, reaction times in a lexical decision task were shorter when the saccade landing position was closer to the end of that word. These results are consistent with the predictions of a processing-based strategy to determine where to move the eyes. Specifically, this hypothesis assumes that Chinese readers estimate how much information is processed in parafoveal vision and saccade to a location that will carry novel information. |
Scott N. J. Watamaniuk; Stephen J. Heinen Allocation of attention during pursuit of large objects is no different than during fixation Journal Article In: Journal of Vision, vol. 15, no. 9, pp. 9, 2015. @article{Watamaniuk2015,Attention allocation during pursuit of a spot is usually characterized as asymmetric with more attention placed ahead of the target than behind it. However, attention is symmetrically allocated across larger pursuit stimuli. An unresolved issue is how tightly attention is constrained on large stimuli during pursuit. Although some work shows it is tightly locked to the fovea, other work shows it is allocated flexibly. To investigate this, we had observers perform a character identification task on large pursuit stimuli composed of arrays of five, nine, or 15 characters spaced between 0.6° and 4.0° apart. Initially, the characters were identical, but at a random time, they all changed briefly, rendering one of them unique. Observers identified the unique character. Consistent with previous literature, attention appeared narrow and symmetric around the pursuit target for tightly spaced (0.6°) characters. Increasing spacing dramatically expanded the attention scope, presumably by mitigating crowding. However, when we controlled for crowding, performance was limited by set size, suffering more for eccentric targets. Interestingly, the same limitations on attention allocation were observed with stationary and pursued stimuli, evidence that attention operates similarly during fixation and pursuit of a stimulus that extends into the periphery. The results suggest that attention is flexibly allocated during pursuit, but performance is limited by crowding and set size. In addition, performing the identification task did not hurt pursuit performance, further evidence that pursuit of large stimuli is relatively inattentive. |
Jingjing Zhao; Yonghui Wang; Donglai Liu; Liang Zhao; Peng Liu In: Attention, Perception, & Psychophysics, vol. 77, no. 7, pp. 2284–2292, 2015. @article{Zhao2015,It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects. |
Jaana Simola; Kevin Le Fevre; Jari Torniainen; Thierry Baccino Affective processing in natural scene viewing: Valence and arousal interactions in eye-fixation-related potentials Journal Article In: NeuroImage, vol. 106, pp. 21–33, 2015. @article{sfk,Attention is drawn to emotionally salient stimuli. The present study investigates processing of emotionally salient regions during free viewing of emotional scenes that were categorized according to the two-dimensional model comprising valence (unpleasant, pleasant) and arousal (high, low). Recent studies have reported interactions between these dimensions, indicative of stimulus-evoked approach or withdrawal tendencies. We addressed the valence and arousal effects when emotional items were embedded in complex real-world scenes by analyzing both eye movement behavior and eye-fixation-related potentials (EFRPs) time-locked to the critical event of fixating the emotionally salient items for the first time. Both data sets showed an interaction between the valence and arousal dimensions. First, the fixation rates and gaze durations on emotionally salient regions were enhanced for unpleasant versus pleasant images in the high arousal condition. In the low arousal condition, both measures were enhanced for pleasant versus unpleasant images. Second, the EFRP results at 140–170 ms [P2] over the central site showed stronger responses for high versus low arousing images in the unpleasant condition. In addition, the parietal LPP responses at 400–500 ms post-fixation were enhanced for stimuli reflecting congruent stimulus dimensions, that is, stronger responses for high versus low arousing images in the unpleasant condition and stronger responses for low versus high arousing images in the pleasant condition. The present findings support the interactive two-dimensional approach, according to which the integration of valence and arousal recruits brain regions associated with action tendencies of approach or withdrawal. |
M. Isabel Vanegas; Annabelle Blangero; Simon P. Kelly Electrophysiological indices of surround suppression in humans Journal Article In: Journal of Neurophysiology, vol. 113, no. 4, pp. 1100–1109, 2015. @article{Vanegas2015,Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. 
This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. |
Miguel P. Eckstein; Wade Schoonveld; Sheng Zhang; Stephen C. Mack; Emre Akbas Optimal and human eye movements to clustered low value cues to increase decision rewards during search Journal Article In: Vision Research, vol. 113, pp. 137–154, 2015. @article{Eckstein2015,Rewards have important influences on the motor planning of primates and the firing of neurons coding visual information and action. When eye movements to a target are differentially rewarded across locations, primates execute saccades towards the possible target location with the highest expected value, a product of sensory evidence and potentially earned reward (saccade to maximum expected value model, sMEV). Yet, in the natural world eye movements are not directly rewarded. Their role is to gather information to support subsequent rewarded search decisions and actions. Less is known about the effects of decision rewards on saccades. We show that when varying the decision rewards across cued locations following visual search, humans can plan their eye movements to increase decision rewards. Critically, we report a scenario for which five of seven tested humans do not preferentially deploy saccades to the possible target location with the highest reward, a strategy which is optimal when rewarding eye movements. Instead, these humans make saccades towards lower value but clustered locations when this strategy optimizes decision rewards consistent with the preferences of an ideal Bayesian reward searcher that takes into account the visibility of the target across eccentricities. The ideal reward searcher can be approximated with a sMEV model with pooling of rewards from spatially clustered locations. We also find observers with systematic departures from the optimal strategy and inter-observer variability of eye movement plans. 
These deviations often reflect a multiplicity of fixation strategies that lead to near-optimal decision rewards but, for some observers, they reflect suboptimal choices in eye movement planning. |
Kaitlin Falkauskas; Victor Kuperman When experience meets language statistics: Individual variability in processing English compound words Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1607–1627, 2015. @article{Falkauskas2015,Statistical patterns of language use demonstrably affect language comprehension and language production. This study set out to determine whether the variable amount of exposure to such patterns leads to individual differences in reading behavior as measured via eye-movements. Previous studies have demonstrated that more proficient readers are less influenced by distributional biases in language (e.g., frequency, predictability, transitional probability) than poor readers. We hypothesized that a probabilistic bias that is characteristic of written but not spoken language would preferentially affect readers with greater exposure to printed materials in general and to the specific pattern engendering the bias. Readers of varying reading experience were presented with sentences including English compound words that can occur in 2 spelling formats with differing probabilities: concatenated (windowsill, used 40% of the time) or spaced (window sill, 60%). Linear mixed effects multiple regression models fitted to the eye-movement measures showed that the probabilistic bias toward the presented spelling had a stronger facilitatory effect on compounds that occurred more frequently (in any spelling) or belonged to larger morphological families, and on readers with higher scores on a test of exposure-to-print. Thus, the amount of support toward the compound's spelling is effectively exploited when reading, but only when the spelling patterns are entrenched in an individual's mental lexicon via overall exposure to print and to compounds with alternating spelling. 
We argue that research on the interplay of language use and structure is incomplete without proper characterization of how particular individuals, with varying levels of experience and skill, learn these language structures. |
Gregory J. DiGirolamo; David Smelson; Nathan Guevremont Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues Journal Article In: Addictive Behaviors, vol. 47, pp. 86–90, 2015. @article{DiGirolamo2015,Introduction: Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Methods: Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. Results: CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Conclusion: Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. 
Hence, it may be useful to identify high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. |
Ouazna Habchi; Elodie Rey; Romain Mathieu; Christian Urquizar; Alessandro Farnè; Denis Pélisson Deployment of spatial attention without moving the eyes is boosted by oculomotor adaptation Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 426, 2015. @article{Habchi2015,Vertebrates developed sophisticated solutions to select environmental visual information, being capable of moving attention without moving the eyes. A large body of behavioral and neuroimaging studies indicate a tight coupling between eye movements and spatial attention. The nature of this link, however, remains highly debated. Here, we demonstrate that deployment of human covert attention, measured in stationary eye conditions, can be boosted across space by changing the size of ocular saccades to a single position via a specific adaptation paradigm. These findings indicate that spatial attention is more widely affected by oculomotor plasticity than previously thought. |
Philippe Chassy; Trym A. E. Lindell; Jessica A. Jones; Galina V. Paramei A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study Journal Article In: Perception, vol. 44, no. 8-9, pp. 1085–1097, 2015. @article{Chassy2015,Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best-fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. |
Jason Haberman; Timothy F. Brady; George A. Alvarez Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 2, pp. 432–446, 2015. @article{Haberman2015,Ensemble perception, including the ability to “see the average” from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space. |
Tommaso Mastropasqua; Jessica Galliussi; David Pascucci; Massimo Turatto Location transfer of perceptual learning: Passive stimulation and double training Journal Article In: Vision Research, vol. 108, pp. 93–102, 2015. @article{Mastropasqua2015,Specificity has always been considered one of the hallmarks of perceptual learning, suggesting that performance improvement would reflect changes at early stages of visual analyses (e.g., V1). More recently, however, this view has been challenged by studies documenting complete transfer of learning among different spatial locations or stimulus orientations when a double-training procedure is adopted. Here, we further investigate the conditions under which transfer of visual perceptual learning takes place, confirming that the passive stimulation at the transfer location seems to be insufficient to overcome learning specificity. By contrast, learning transfer is complete when performing a secondary task at the transfer location. Interestingly, (i) transfer emerges when the primary and secondary tasks are intermingled on a trial-by-trial basis, and (ii) the effects of learning generalization appear to be reciprocal, namely the primary task also serves to enable transfer of the secondary task. However, if the secondary task is not performed for a sufficient number of trials, then transfer is not enabled. Overall, the results lend support to the recent view that task-relevant perceptual learning may involve high-level stages of visual analyses. |
Chris R. Sims The cost of misremembering: Inferring the loss function in visual working memory Journal Article In: Journal of Vision, vol. 15, no. 3, pp. 1–27, 2015. @article{s15,Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. |
Tommaso Mastropasqua; Peter U. Tse; Massimo Turatto Learning of monocular information facilitates breakthrough to awareness during interocular suppression Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 3, pp. 790–803, 2015. @article{Mastropasqua2015a,Continuous flash suppression (CFS) is a potent method of inducing binocular rivalry, wherein a rapid succession of high-contrast images presented to one eye effectively blocks from awareness a low-contrast image presented to the other eye. Here we addressed whether the contents of the suppressed image can break through to awareness with extended CFS exposure. On 2/3 of the trials, we presented a faint bar (the target) to the nondominant eye while a high-contrast flickering Mondrian (the mask) was displayed to the dominant eye. Participants were first asked to report whether the target had broken through the CFS mask. Furthermore, on target-present trials, the participants were then asked to guess whether the target had appeared above or below the fixation point. In Experiment 1, the target was presented with a fixed orientation for four blocks of trials, whereas in the fifth block, the target could also have the orthogonal orientation. In Experiment 2, the target was always presented with a fixed orientation, but in the fifth block, unbeknownst to participants, the target and the mask were swapped across the eyes. We found that awareness of the target rapidly improved with training in both experiments. However, whereas Experiment 1 revealed that the improvement largely generalized across stimulus orientations, Experiment 2 showed that the effect of practice was eye-specific. The results suggest that increased breakthrough with training was due to a monocular form of learning. Finally, a control experiment was conducted to exclude the possibility that the monocular learning we reported could have been due to sensory adaptation caused by the masks. |
Yoshiko Yabe; Melvyn A. Goodale Time flies when we intend to act: Temporal distortion in a Go/No-Go task Journal Article In: Journal of Neuroscience, vol. 35, no. 12, pp. 5023–5029, 2015. @article{Yabe2015,Although many of our actions are triggered by sensory events, almost nothing is known about our perception of the timing of those sensory events. Here we show that, when people react to a sudden visual stimulus that triggers an action, that stimulus is perceived to occur later than an identical stimulus that does not trigger an action. In our experiments, participants fixated the center of a clock face with a rotating second hand. When the clock changed color, they were required to make a motor response and then to report the position of the second hand at the moment the clock changed color. In Experiment 1, in which participants made a target-directed saccade, the color change was perceived to occur 59 ms later than when they maintained fixation. In Experiment 2, in which we used a go/no-go paradigm, this temporal distortion was observed even when participants were required to cancel a prepared saccade. Finally, in Experiment 3, the same distortion in perceived time was observed for both go and no-go trials in a manual task in which no eye movements were required. These results suggest that, when a visual stimulus triggers an action, it is perceived to occur significantly later than an identical stimulus unrelated to action. Moreover, this temporal distortion appears to be related not to the execution of the action (or its effect) but rather to the programming of the action. In short, there seems to be a temporal binding between a triggering event and the triggered action. |
Patrick Loesche; Jennifer Wiley; Marcus Hasselhorn How knowing the rules affects solving the Raven Advanced Progressive Matrices Test Journal Article In: Intelligence, vol. 48, pp. 58–75, 2015. @article{Loesche2015,The solution process underlying the Raven Advanced Progressive Matrices (RAPM) has been conceptualized to consist of two subprocesses: rule induction and goal management. Past research has also found a strong relation between measures of working memory capacity and performance on RAPM. The present research attempted to test whether the goal management subprocess is responsible for the relation between working memory capacity and RAPM, using a paradigm where the rules necessary to solve the problems were given to subjects, assuming that it would render rule induction unnecessary. Three experiments revealed that working memory capacity was still strongly related to RAPM performance in the given-rules condition, while in two experiments the correlation in the given-rules condition was significantly higher than in the no-rules condition. Experiment 4 revealed that giving the rules affected problem solving behavior. Evidence from eye tracking protocols suggested that participants in the given-rules condition were more likely to approach the problems with a constructive matching strategy. Two possible mechanisms are discussed that could both explain why providing participants with the rules might increase the relation between working memory capacity and RAPM performance. |
Aiga Švede; Elīna Treija; Wolfgang Jaschinski; Gunta Krūmiņa Monocular versus binocular calibrations in evaluating fixation disparity with a video-based eye-tracker Journal Article In: Perception, vol. 44, no. 8-9, pp. 1110–1128, 2015. @article{Svede2015,When measuring fixation disparity (an oculomotor vergence error), the question arises as to whether a monocular or binocular calibration is more precise and physiologically more appropriate. In monocular calibrations, a single eye fixates on a calibration target that is taken as having been projected onto the center of the fovea; the corresponding vergence state represents the heterophoria (the resting vergence position), which has no effect on the calibration procedure. In binocular calibrations, a vergence error may be present and may affect the subsequent measurement of the fixation disparity during binocular recordings. This study includes a test of the precision of both monocular and binocular calibrations and an evaluation of the impact of the calibration procedure on the measurement of fixation disparity during a dot scanning task. Our results show that 11 participants (out of 19) each exhibited a significant difference in fixation disparity with the two types of calibration procedures. In addition, the fixation disparity was more strongly affected by heterophoria under monocular calibration than under binocular calibration. This serves as additional evidence showing that the monocular calibration produces a physiologically more plausible fixation disparity and seems to be more appropriate for studying the full extent of fixation disparity. |
Niels A. Kloosterman; Thomas Meindertsma; Arjan Hillebrand; Bob W. Dijk; Victor A. F. Lamme; Tobias H. Donner Top-down modulation in human visual cortex predicts the stability of a perceptual illusion Journal Article In: Journal of Neurophysiology, vol. 113, no. 4, pp. 1063–1076, 2015. @article{Kloosterman2015,Conscious perception sometimes fluctuates strongly, even when the sensory input is constant. For example, in motion-induced blindness (MIB), a salient visual target surrounded by a moving pattern suddenly disappears from perception, only to reappear after some variable time. Whereas such changes of perception result from fluctuations of neural activity, mounting evidence suggests that the perceptual changes, in turn, may also cause modulations of activity in several brain areas, including visual cortex. In this study, we asked whether these latter modulations might affect the subsequent dynamics of perception. We used magnetoencephalography (MEG) to measure modulations in cortical population activity during MIB. We observed a transient, retinotopically widespread modulation of beta (12-30 Hz)-frequency power over visual cortex that was closely linked to the time of subjects' behavioral report of the target disappearance. This beta modulation was a top-down signal, decoupled from both the physical stimulus properties and the motor response but contingent on the behavioral relevance of the perceptual change. Critically, the modulation amplitude predicted the duration of the subsequent target disappearance. We propose that the transformation of the perceptual change into a report triggers a top-down mechanism that stabilizes the newly selected perceptual interpretation. |
Hassan Zanganeh Momtaz; Mohammad Reza Daliri Differences of eye movement pattern in natural and man-made scenes and image categorization with the help of these patterns Journal Article In: Journal of Integrative Neuroscience, vol. 14, no. 3, pp. 1–18, 2015. @article{Momtaz2015,In this paper, we investigated the parameters related to eye movement patterns of individuals while viewing images that consist of natural and man-made scenes. These parameters are as follows: number of fixations and saccades, fixation duration, saccade amplitude, and distribution of fixation locations. We explored the way in which individuals look at images of different semantic categories, and used this information for automatic image classification. We showed that the eye movements and the contents of eye fixation locations of observers differ for images of different semantic categories. These differences were used effectively in automatic image categorization. Another goal of this study was to determine whether the image patches at fixation points carry sufficient information for image categorization. To achieve this goal, a number of patches of different sizes from two different image categories were extracted. These patches, which were selected at the locations of eye fixation points, were used to form a feature vector based on the K-means clustering algorithm. Then, different statistical classifiers were trained for categorization. The results showed that it is possible to predict the image category by using the feature vectors derived from the image patches. We found significant differences in the parameters of eye movement patterns between the two image categories (averaged across subjects). We could categorize images by using these parameters as features. The results also showed that it is possible to predict the image category by using image patches around the subjects' fixation points. |
Annie Roy-Charland; Melanie Perron; Jessica Boulard; Justin Chamberland; Nichola Hoffman If I point, do they look?: The impact of attention-orientation strategies on text exploration during shared book reading Journal Article In: Reading and Writing, vol. 28, no. 9, pp. 1285–1305, 2015. @article{rpbch15,The current study examined the effects of pointing to the words and using highlighted text by examining eye movements when children in preschool, Grade 1, and Grade 2 were read storybooks of two levels of difficulty. For all children, pointing to and highlighting the text increased the amount of time and number of fixations on the printed text relative to when there was no intervention. Furthermore, with difficult text, an increased amount of time and number of fixations was observed when the text was pointed to than when it was highlighted. For preschoolers, even with the increased attention on the text from pointing to and highlighting the words, the fixations did not match the narration. First and second graders, with the difficult book, made more matching fixations both when the printed text was pointed to and highlighted than when no intervention was done. Additionally, more matching fixations were made when the printed text was highlighted than when pointed to. Future research is required to examine the effects of attention-orienting strategies on reading related outcomes. |
Anna Wilschut; Jan Theeuwes; Christian N. L. Olivers Nonspecific competition underlies transient attention Journal Article In: Psychological Research, vol. 79, no. 5, pp. 844–860, 2015. @article{Wilschut2015,Cueing a target by abrupt visual stimuli enhances its perception in a rapid but short-lived fashion, an effect known as transient attention. Our recent study showed that when targets are cued at a constant, central location, the emergence of the transient performance pattern was dependent on the presence of competing distractors, whereas targets presented in isolation were enhanced in a sustained manner (Wilschut et al., PLoS ONE, 6:e27661, 2011). The current study examined in more detail whether the transience depends on the specific nature of the competition. We first replicated and extended the competition-dependent transient pattern for peripheral and variable target locations. We then investigated the role of feature similarity, compatibility, and proximity. Both competition by feature similarity and compatibility between the target and distractors were found to impair performance, but effects were additive with the effects of the cueing interval and did not change the transient performance function. Varying the spatial distance between target and distractors yielded mixed evidence, but here too a transient pattern could be observed for targets flanked by both close and far distractors. The results thus show that the presence or absence of competition determines whether attention appears transient or sustained, while the specific nature of the competition (in terms of location or feature) affects selection independent of time. |
Pierce Edmiston; Gary Lupyan What makes words special? Words as unmotivated cues Journal Article In: Cognition, vol. 143, pp. 93–100, 2015. @article{Edmiston2015,Verbal labels, such as the words "dog" and "guitar," activate conceptual knowledge more effectively than corresponding environmental sounds, such as a dog bark or a guitar strum, even though both are unambiguous cues to the categories of dogs and guitars (Lupyan & Thompson-Schill, 2012). We hypothesize that this advantage of labels emerges because word-forms, unlike other cues, do not vary in a motivated way with their referent. The sound of a guitar cannot help but inform a listener of the type of guitar making it (electric, acoustic, etc.). The word "guitar," on the other hand, can leave the type of guitar unspecified. We argue that, as a result, labels gain the ability to cue a more abstract mental representation, promoting efficient processing of category members. In contrast, environmental sounds activate representations that are more tightly linked to the specific cause of the sound. Our results show that upon hearing environmental sounds such as a dog bark or guitar strum, people cannot help but activate a particular instance of a category, in a particular state, at a particular time, as measured by patterns of response times on cue-picture matching tasks (Exps. 1-2) and eye movements in a task where the cues are task-irrelevant (Exp. 3). In comparison, labels activate concepts in a more abstract, decontextualized way, a difference that we argue can be explained by labels acting as "unmotivated cues". |
Niels A. Kloosterman; Thomas Meindertsma; Anouk Mariette Loon; Victor A. F. Lamme; Yoram S. Bonneh; Tobias H. Donner Pupil size tracks perceptual content and surprise Journal Article In: European Journal of Neuroscience, vol. 41, no. 8, pp. 1068–1078, 2015. @article{Kloosterman2015a,Changes in pupil size at constant light levels reflect the activity of neuromodulatory brainstem centers that control global brain state. These endogenously driven pupil dynamics can be synchronized with cognitive acts. For example, the pupil dilates during the spontaneous switches of perception of a constant sensory input in bistable perceptual illusions. It is unknown whether this pupil dilation only indicates the occurrence of perceptual switches, or also their content. Here, we measured pupil diameter in human subjects reporting the subjective disappearance and re-appearance of a physically constant visual target surrounded by a moving pattern ('motion-induced blindness' illusion). We show that the pupil dilates during the perceptual switches in the illusion and a stimulus-evoked 'replay' of that illusion. Critically, the switch-related pupil dilation encodes perceptual content, with larger amplitude for disappearance than re-appearance. This difference in pupil response amplitude enables prediction of the type of report (disappearance vs. re-appearance) on individual switches (receiver-operating characteristic: 61%). The amplitude difference is independent of the relative durations of target-visible and target-invisible intervals and subjects' overt behavioral report of the perceptual switches. Further, we show that pupil dilation during the replay also scales with the level of surprise about the timing of switches, but there is no evidence for an interaction between the effects of surprise and perceptual content on the pupil response. Taken together, our results suggest that pupil-linked brain systems track both the content of, and surprise about, perceptual events. |
W. Joseph MacInnes; Hannah M. Krüger; Amelia R. Hunt Just passing through? Inhibition of return in saccadic sequences Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 2, pp. 402–416, 2015. @article{MacInnes2015,Responses tend to be slower to previously fixated spatial locations, an effect known as "inhibition of return" (IOR). Saccades cannot be assumed to be independent, however, and saccade sequences programmed in parallel differ from independent eye movements. We measured the speed of both saccadic and manual responses to probes appearing in previously fixated locations when those locations were fixated as part of either parallel or independent saccade sequences. Saccadic IOR was observed in independent but not parallel saccade sequences, while manual IOR was present in both parallel and independent sequence types. Saccadic IOR was also short-lived, and dissipated with delays of more than ∼1500 ms between the intermediate fixation and the probe onset. The results confirm that the characteristics of IOR depend critically on the response modality used for measuring it, with saccadic and manual responses giving rise to motor and attentional forms of IOR, respectively. Saccadic IOR is relatively short-lived and is not observed at intermediate locations of parallel saccade sequences, while attentional IOR is long-lasting and consistent for all sequence types. |
Harold E. Bedell; John Siderov; Monika A. Formankiewicz; Sarah J. Waugh; Senay Aydin Evidence for an eye-movement contribution to normal foveal crowding Journal Article In: Optometry and Vision Science, vol. 92, no. 2, pp. 237–245, 2015. @article{Bedell2015,Purpose. Along with contour interaction, inaccurate and imprecise eye movements and attention have been suggested to contribute to poorer acuity for "crowded" versus uncrowded targets. To investigate the role of eye movements in foveal crowding, we compared percent correct letter identification for short and long lines of near-threshold letters with different separations. Methods. Five normal observers read short (4 to 6 letters) and long (10 to 12 letters) lines of near-threshold, Sloan letters with edge-to-edge letter separations of 0.5, 1, and 2 letter spaces. Percent correct letter identification for the 2 to 4 interior letters in short strings and the 8 to 10 interior letters in long strings was compared with a no-crowding condition. Results. Letter identification was significantly worse than the no-crowding condition for long letter strings with a separation of 1 letter space and for both long and short letter strings with a separation of 0.5 letter spaces. Observers more often reported the incorrect number of letters in long than in short letter strings, even for a separation of 2 letter spaces. Similar results were obtained during straight-ahead gaze and while viewing in 30 to 40 degrees left gaze, where two of the five observers exhibited an increase in horizontal fixational instability. Conclusions. We argue that lower percent correct letter identification and more frequent errors in reporting the number of letters in long compared with short letter strings reflect an eye-movement contribution to foveal crowding. |
Alasdair D. F. Clarke; Micha Elsner; Hannah Rohde Giving good directions: Order of mention reflects visual salience Journal Article In: Frontiers in Psychology, vol. 6, pp. 1793, 2015. @article{Clarke2015,In complex stimuli, there are many different possible ways to refer to a specified target. Previous studies have shown that when people are faced with such a task, the content of their referring expression reflects visual properties such as size, salience and clutter. Here, we extend these findings and present evidence that (i) the influence of visual perception on sentence construction goes beyond content selection and in part determines the order in which different objects are mentioned and (ii) order of mention influences comprehension. Study 1 (a corpus study of reference productions) shows that when a speaker uses a relational description to mention a salient object, that object is treated as being in the common ground and is more likely to be mentioned first. Study 2 (a visual search study) asks participants to listen to referring expressions and find the specified target; in keeping with the above result, we find that search for easy-to-find targets is faster when the target is mentioned first, while search for harder-to-find targets is facilitated by mentioning the target later, after a landmark in a relational description. Our findings show that seemingly low-level and disparate mental “modules” like perception and sentence planning interact at a high level and in task-dependent ways. |
Annie Roy-Charland; Melanie Perron; Cheryl Young; Jessica Boulard; Justin A. Chamberland The confusion of fear and surprise: A developmental study of the perceptual-attentional limitation hypothesis using eye movements Journal Article In: The Journal of Genetic Psychology, vol. 176, no. 5, pp. 281–298, 2015. @article{rpybc15,The goal of the present study was to test the Perceptual-Attentional Limitation Hypothesis in children and adults by manipulating the distinctiveness between expressions and recording eye movements. Children 3-5 and 9-11 years old as well as adults were presented pairs of expressions and required to identify a target emotion. Children 3-5 years old were less accurate than those 9-11 years old and adults. All children viewed pictures longer than adults but did not spend more time attending to the relevant cues. For all participants, accuracy for the recognition of fear was lower than for surprise when the distinctive cue was in the brow only. They also took longer and spent more time in both the mouth and brow zones than when a cue was in the mouth or in both areas. Adults and children 9-11 years old made more comparisons between the expressions when fear comprised a single distinctive cue in the brow than when the distinctive cue was in the mouth only or when both cues were present. Children 3-5 years old made more comparisons in the brow-only condition than in the both-cues condition. The results of the present study extend the Perceptual-Attentional Limitation Hypothesis, showing the importance of both decoder and stimulus characteristics, and an interaction between the two. |
Hayley Crawford; Joanna Moss; Joseph P. McCleery; Giles M. Anderson; Chris Oliver Face scanning and spontaneous emotion preference in Cornelia de Lange syndrome and Rubinstein-Taybi syndrome Journal Article In: Journal of Neurodevelopmental Disorders, vol. 7, no. 1, pp. 1–12, 2015. @article{Crawford2015a,BACKGROUND: Existing literature suggests differences in face scanning in individuals with different socio-behavioural characteristics. Cornelia de Lange syndrome (CdLS) and Rubinstein-Taybi syndrome (RTS) are two genetically defined neurodevelopmental disorders with unique profiles of social behaviour. METHODS: Here, we examine eye gaze to the eye and mouth regions of neutrally expressive faces, as well as the spontaneous visual preference for happy and disgusted facial expressions compared to neutral faces, in individuals with CdLS versus RTS. RESULTS: Results indicate that the amount of time spent looking at the eye and mouth regions of faces was similar in 15 individuals with CdLS and 17 individuals with RTS. Both participant groups also showed a similar pattern of spontaneous visual preference for emotions. CONCLUSIONS: These results provide insight into two rare, genetically defined neurodevelopmental disorders that have been reported to exhibit contrasting socio-behavioural characteristics and suggest that differences in social behaviour may not be sufficient to predict attention to the eye region of faces. These results also suggest that differences in the social behaviours of these two groups may be cognitively mediated rather than subcortically mediated. |
Florian Hintz; Antje S. Meyer Prediction and production of simple mathematical equations: Evidence from visual world eye-tracking Journal Article In: PLoS ONE, vol. 10, no. 7, pp. e0130766, 2015. @article{Hintz2015,The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and, in particular, that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch-speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person. |
Florian Loffing; Florian Sölter; Norbert Hagemann; Bernd Strauss Accuracy of outcome anticipation, but not gaze behavior, differs against left- and right-handed penalties in team-handball goalkeeping Journal Article In: Frontiers in Psychology, vol. 6, pp. 1820, 2015. @article{Loffing2015a,Low perceptual familiarity with relatively rarer left-handed as opposed to more common right-handed individuals may result in athletes' poorer ability to anticipate the former's action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy during confrontation with left-handed individuals. To exemplify, observers may not mirror their gaze when viewing left- vs. right-handed actions but preferentially fixate on an opponent's right body side, irrespective of an opponent's handedness, owing to the predominant exposure to right-handed actions. So far empirical verification of such assumption, however, is lacking. Here we report on an experiment where team-handball goalkeepers' and non-goalkeepers' gaze behavior was recorded while they predicted throw direction of left- and right-handed 7-m penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time-course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers' handedness. Findings suggest that inferior anticipation of left-handed compared to right-handed individuals' action intentions may not be associated with misalignment in gaze behavior. Rather, albeit looking similarly, accuracy differences could be due to observers' differential ability of picking up and interpreting the visual information provided by left- vs. right-handed movements. |
Magdalena Chechlacz; Glyn W. Humphreys; Stamatios N. Sotiropoulos; Christopher Kennard; Dario Cazzoli Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation Journal Article In: Journal of Neuroscience, vol. 35, no. 46, pp. 15353–15368, 2015. @article{Chechlacz2015,Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention. |
N. Kloth; Lisa N. Jefferies; Gillian Rhodes Gaze direction affects the magnitude of face identity aftereffects Journal Article In: Journal of Vision, vol. 15, no. 2, pp. 1–12, 2015. @article{Kloth2015,The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly. |
Florian Loffing; Ricarda Stern; Norbert Hagemann Pattern-induced expectation bias in visual anticipation of action outcomes Journal Article In: Acta Psychologica, vol. 161, pp. 45–53, 2015. @article{Loffing2015,When anticipating an opponent's action intention, athletes may rely on both kinematic and contextual cues. Here we show that patterns of previous action outcomes (i.e., a contextual cue) bias visual anticipation of action outcome in subsequent trials. In two video-based experiments, skilled players and novices were presented with volleyball attacks stopping 360 ms (Exp. 1) or 280 ms (Exp. 2) before an attacker's hand-ball-contact and they were asked to predict the type of attack (smash or lob). Attacks were presented block-wise with six attacks per block. The fifth trial served as target trial where we presented identical attacks to control kinematic cues. We varied the outcomes of the preceding four attacks under three conditions: lobs only, smashes only or an alternating pattern of attack outcomes. In Exp. 1, skilled players but not novices were less accurate and responded later in target trials that were incongruent vs. congruent with preceding patterns. In Exp. 2, where the task was easier, another group of novices demonstrated a similar congruence effect for accuracy but not response time. Collectively, findings indicate that participants tended to preferentially expect the continuation of an attack pattern, while possibly attaching less importance to kinematic cues. Thus, overreliance on pattern continuation may be detrimental to anticipation in situations in which an action's outcome does not correspond to the pattern. From a methodological viewpoint, comparison of novices' performance in Exp. 1 and 2 suggests that task difficulty may be critical as to whether contextual cue effects can be identified in novices. |
David R. Painter; Paul E. Dux; Jason B. Mattingley Causal involvement of visual area MT in global feature-based enhancement but not contingent attentional capture Journal Article In: NeuroImage, vol. 118, pp. 90–102, 2015. @article{Painter2015,When visual attention is set for a particular target feature, such as color or shape, neural responses to that feature are enhanced across the visual field. This global feature-based enhancement is hypothesized to underlie the contingent attentional capture effect, in which task-irrelevant items with the target feature capture spatial attention. In humans, however, different cortical regions have been implicated in global feature-based enhancement and contingent capture. Here, we applied intermittent theta-burst stimulation (iTBS) to assess the causal roles of two regions of extrastriate cortex - right area MT and the right temporoparietal junction (TPJ) - in both global feature-based enhancement and contingent capture. We recorded cortical activity using EEG while participants monitored centrally for targets defined by color and ignored peripheral checkerboards that matched the distractor or target color. In central vision, targets were preceded by colored cues designed to capture attention. Stimuli flickered at unique frequencies, evoking distinct cortical oscillations. Analyses of these oscillations and behavioral performance revealed contingent capture in central vision and global feature-based enhancement in the periphery. Stimulation of right area MT selectively increased global feature-based enhancement, but did not influence contingent attentional capture. By contrast, stimulation of the right TPJ left both processes unaffected. Our results reveal a causal role for the right area MT in feature-based attention, and suggest that global feature-based enhancement does not underlie the contingent capture effect. |
Eli Brenner; Jeroen B. J. Smeets How moving backgrounds influence interception Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0119903, 2015. @article{Brenner2015,Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. On-going arm movements are also influenced by motion in the surrounding. Fast responses to motion in the surrounding could help cope with moving obstacles and with the consequences of changes in one's eye orientation and vantage point. To further evaluate how motion in the surrounding influences interceptive movements we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surrounding, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation with other information about self-motion. |
Jedediah M. Singer; Joseph R. Madsen; William S. Anderson; Gabriel Kreiman Sensitivity to timing and order in human visual cortex Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1656–1669, 2015. @article{smak15,Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. |
S. Gareth Edwards; Lisa J. Stephenson; Mario Dalmaso; Andrew P. Bayliss Social orienting in gaze leading: A mechanism for shared attention Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 282, no. 1812, pp. 1–8, 2015. @article{Edwards2015,Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to 'gaze following', attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that 'follows' the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish 'shared attention' and maintain the ongoing interaction. |
Kate M. Thompson; Tracy L. Taylor Memory instruction interacts with both visual and motoric inhibition of return Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 3, pp. 804–818, 2015. @article{Thompson2015,In the item-method directed forgetting paradigm, the magnitude of inhibition of return (IOR) is larger after an instruction to forget (F) than after an instruction to remember (R). In the present experiments, we further investigated this increased magnitude of IOR after F than after R memory instructions, to determine whether this F > R IOR pattern occurs only for the motoric form of IOR, as predicted, or also for the visual form. In three experiments, words were presented in one of two peripheral locations, followed by either an F or an R memory instruction. Then, a target appeared either at the same location as the previous word or at the other location. In Experiment 1, participants maintained fixation throughout the trial until the target appeared, at which point they made a saccade to the target. In Experiment 2, they maintained fixation throughout the entire trial and made a manual localization response to the target. The F > R IOR difference in reaction times occurred for both the saccadic and manual responses, suggesting that memory instructions modify both motoric and visual forms of IOR. In Experiment 3, participants made a perceptual discrimination response to report the identity of a target while the eyes remained fixed. The F > R IOR difference also occurred for these manual discrimination responses, increasing our confidence that memory instructions modify the visual form of IOR. We relate our findings to postulated differences in attentional withdrawal following F and R instructions and consider the implications of the findings for successful forgetting. |
Tao He; Yun Ding; Zhiguo Wang Environment- and eye-centered inhibitory cueing effects are both observed after a methodological confound is eliminated Journal Article In: Scientific Reports, vol. 5, pp. 16586, 2015. @article{He2015,Inhibition of return (IOR), typically explored in cueing paradigms, is a performance cost associated with previously attended locations and has been suggested as a crucial attentional mechanism that biases orientation towards novelty. In their seminal IOR paper, Posner and Cohen (1984) showed that IOR is coded in spatiotopic or environment-centered coordinates. Recent studies, however, have consistently reported IOR effects in both spatiotopic and retinotopic (eye-centered) coordinates. One overlooked methodological confound of all previous studies is that the spatial gradient of IOR is not considered when selecting the baseline for estimating IOR effects. This methodological issue makes it difficult to tell if the IOR effects reported in previous studies were coded in retinotopic or spatiotopic coordinates, or in both. The present study addresses this issue by incorporating no-cue trials into a modified cueing paradigm in which the cue and target are always separated by a gaze shift. The results revealed that a) IOR is indeed coded in both spatiotopic and retinotopic coordinates, and b) the methodology of previous work may have underestimated spatiotopic and retinotopic IOR effects. |
Inbal Itzhak; Shari R. Baum Misleading bias-driven expectations in referential processing and the facilitative role of contrastive accent Journal Article In: Journal of Psycholinguistic Research, vol. 44, no. 5, pp. 623–650, 2015. @article{Itzhak2015,Probabilistic preferences are often facilitative in language processing and may assist in discourse prediction. However, occasionally these sources of information may lead to inaccurate expectations. The current study investigated a test case of this scenario. An eye-tracking experiment examined the interpretation of ambiguous personal pronouns in the context of implicit causality biases. We tested whether reference resolution may be facilitated online by contrastive accent in cases of a bias-inconsistent referent. Implicit causality biases directed looks to the biased noun phrase; however, when the name of the bias-inconsistent antecedent was accented (e.g., JOHN envied Bill because he [Formula: see text]), this tendency was modulated. Contrastive accent seems to dampen the occasionally confusing prediction of implicit causality biases in referential processing. This demonstrates one way in which the spoken language comprehension system copes with occasional misguidance of otherwise helpful probabilistic information. |
You Li; Lei Mo; Qi Chen Differential contribution of velocity and distance to time estimation during self-initiated time-to-collision judgment Journal Article In: Neuropsychologia, vol. 73, pp. 35–47, 2015. @article{Li2015c,To successfully intercept/avoid a moving object, the human brain needs to precisely estimate the time-to-collision (TTC) of the object. In real life, time estimation is determined conjointly by the velocity and the distance of a moving object. However, surprisingly little is known concerning whether and how the velocity and the distance dimensions contribute differentially to time estimation. In this fMRI study, we demonstrated that variations of velocity evoked substantially different behavioral and neural responses than distance during self-initiated TTC judgments. Behaviorally, the velocity dimension induced a stronger time dilation effect than the distance dimension, in that participants' responses were significantly more delayed by increasing velocity than by decreasing distance, even with the theoretical TTC being equated between the two conditions. Neurally, activity in the dorsal fronto-parietal TTC network was parametrically modulated by variations in TTC irrespective of whether the variations in TTC were caused by velocity or distance. Importantly, even with spatial distance being equated, increasing velocity induced illusory perception of a longer spatial trajectory in early visual cortex. Moreover, as velocity increased, the early visual cortex showed enhanced connectivity with the TTC network. Our results thus implied that with increasing velocity, TTC judgments depended increasingly on the velocity-induced illusory distance information from early visual cortex and were eventually tampered with. |
Andrew K. Mackenzie; Julie M. Harris Eye movements and hazard perception in active and passive driving Journal Article In: Visual Cognition, vol. 23, no. 6, pp. 736–757, 2015. @article{Mackenzie2015,Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attention systems than simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment. |
J. Suzanne Singh; Michelle C. Capozzoli; Michael D. Dodd; Debra A. Hope The effects of social anxiety and state anxiety on visual attention: Testing the vigilance–avoidance hypothesis Journal Article In: Cognitive Behaviour Therapy, vol. 44, no. 5, pp. 377–388, 2015. @article{scdh15,A growing theoretical and research literature suggests that trait and state social anxiety can predict attentional patterns in the presence of emotional stimuli. The current study adds to this literature by examining the effects of state anxiety on visual attention and testing the vigilance–avoidance hypothesis, using a method of continuous visual attentional assessment. Participants were 91 undergraduate college students with high or low trait fear of negative evaluation (FNE), a core aspect of social anxiety, who were randomly assigned to either a high or low state anxiety condition. Participants engaged in a free view task in which pairs of emotional facial stimuli were presented and eye movements were continuously monitored. Overall, participants with high FNE avoided angry stimuli and participants with high state anxiety attended to positive stimuli. Participants with high state anxiety and high FNE were avoidant of angry faces, whereas participants with low state anxiety and low FNE exhibited a bias toward angry faces. The study provided partial support for the vigilance–avoidance hypothesis. The findings add to the mixed results in the literature that suggest that both positive and negative emotional stimuli may be important in understanding the complex attention patterns associated with social anxiety. Clinical implications and suggestions for future research are discussed. |
B. P. Geelen; Alexander H. Wertheim The prevalence effect in lateral masking and its relevance for visual search Journal Article In: Experimental Brain Research, vol. 233, no. 4, pp. 1119–1124, 2015. @article{Geelen2015,In stimulus displays with or without a single target amid 1,644 identical distractors, target prevalence was varied between 20, 50 and 80 %. Maximum gaze deviation was measured to determine the strength of lateral masking in these arrays. The results show that lateral masking was strongest in the 20 % prevalence condition, which differed significantly from both the 50 and 80 % prevalence conditions. No difference was observed between the latter two. This pattern of results corresponds to that found in the literature on the prevalence effect in visual search (stronger lateral masking corresponding to longer search times). The data add to similar findings reported earlier (Wertheim et al. in Exp Brain Res, 170:387-402, 2006), according to which the effects of many well-known factors in visual search correspond to those on lateral masking. These were the effects of set size, disjunctions versus conjunctions, display area, distractor density, the asymmetry effect (Q vs. O's) and viewing distance. The present data, taken together with those earlier findings, may lend credence to a causal hypothesis that lateral masking could be a more important mechanism in visual search than usually assumed. |
Stefan Huber; Sonja Cornelsen; Korbinian Moeller; Hans-Christoph Nuerk Toward a model framework of generalized parallel componential processing of multi-symbol numbers Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 3, pp. 732–745, 2015. @article{Huber2015,In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., −97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., −53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. |
Wei He; Jon Brock; Blake W. Johnson Face processing in the brains of pre-school aged children measured with MEG Journal Article In: NeuroImage, vol. 106, pp. 317–327, 2015. @article{He2015a,There are two competing theories concerning the development of face perception: a late maturation account and an early maturation account. Magnetoencephalography (MEG) neuroimaging holds promise for adjudicating between the two opposing accounts by providing objective neurophysiological measures of face processing, with sufficient temporal resolution to isolate face-specific brain responses from those associated with other sensory, cognitive and motor processes. The current study used a customized child MEG system to measure M100 and M170 brain responses in 15 children aged three to six years while they viewed faces, cars and their phase-scrambled counterparts. Compared to adults tested using the same stimuli in a conventional MEG system, children showed significantly larger and later M100 responses. Children's M170 responses, derived by subtracting the responses to phase-scrambled images from the corresponding images (faces or cars) were delayed in latency but otherwise resembled the adult M170. This component has not been obtained in previous studies of young children tested using conventional adult MEG systems. However children did show a markedly reduced M170 response to cars in comparison to adults. This may reflect children's lack of expertise with cars relative to faces. Taken together, these data are in accord with recent behavioural and neuroimaging data that support early maturation of the basic face processing functions. |
Beth A. Stankevich; Joy J. Geng The modulation of reward priority by top-down knowledge Journal Article In: Visual Cognition, vol. 23, no. 1-2, pp. 206–228, 2015. @article{Stankevich2015,Reward-associated features capture attention automatically and continue to do so even when the reward contingencies are removed. This profile has led to the hypothesis that rewards belong to a separate class of attentional biases that is neither typically top-down nor bottom-up. The goal of these experiments was to understand the degree to which top-down knowledge can modulate value-driven attentional capture within (a) the timecourse of a single trial and (b) when the reward contingencies change explicitly over trials. The results suggested that top-down knowledge does not affect the size of value-driven attentional capture within a single trial. There were clear top-down modulations in the magnitude of value-driven capture when reward contingencies explicitly changed, but the original reward associations continued to have a persistent bias on attention. These results contribute to a growing body of evidence that reward associations bias attention through mechanisms separate from other top-down and bottom-up attentional biases. |
Sebastiaan Mathôt; Jean-Baptiste Melmi; Eric Castet Intrasaccadic perception triggers pupillary constriction Journal Article In: PeerJ, vol. 3, pp. 1–16, 2015. @article{Mathot2015,It is commonly believed that vision is impaired during saccadic eye movements. However, here we report that some visual stimuli are clearly visible during saccades, and trigger a constriction of the eye's pupil. Participants viewed sinusoid gratings that changed polarity 150 times per second (every 6.67 ms). At this rate of flicker, the gratings were perceived as homogeneous surfaces while participants fixated. However, the flickering gratings contained ambiguous motion: rightward and leftward motion for vertical gratings; upward and downward motion for horizontal gratings. When participants made a saccade perpendicular to the gratings' orientation (e.g., a leftward saccade for a vertical grating), the eye's peak velocity matched the gratings' motion. As a result, the retinal image was approximately stable for a brief moment during the saccade, and this gave rise to an intrasaccadic percept: A normally invisible stimulus became visible when eye velocity was maximal. Our results confirm and extend previous studies by demonstrating intrasaccadic perception using a reflexive measure (pupillometry) that does not rely on subjective report. Our results further show that intrasaccadic perception affects all stages of visual processing, from the pupillary response to visual awareness. |
Adam Palanica; Roxane J. Itier Eye gaze and head orientation modulate the inhibition of return for faces Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 8, pp. 2589–2600, 2015. @article{Palanica2015,The present study used an inhibition of return (IOR) spatial cueing paradigm to examine how gaze direction and head orientation modulate attention capture for human faces. Target response time (RT) was measured after the presentation of a peripheral cue, which was either a face (with front-facing or averted gaze, in either frontal head view or averted head view) or a house (control). Participants fixated on a centered cross at all times and responded via button press to a peripheral target after a variable stimulus onset asynchrony (SOA) from the stimulus cue. At the shortest SOA (150 ms), RTs were shorter for faces than houses, independent of an IOR response, suggesting a cue-based RT advantage elicited by faces. At the longest SOA (2,400 ms), a larger IOR magnitude was found for faces compared to houses. Both the cue-based RT advantage and later IOR responses were modulated by gaze-head congruency; these effects were strongest for frontal gaze faces in frontal head view, and for averted gaze faces in averted head view. Importantly, participants were not given any specific information regarding the stimuli, nor were they told the true purpose of the study. These findings indicate that the congruent combination of head and gaze direction influence the exogenous attention capture of faces during inhibition of return. |
George Wallis; Mark G. Stokes; Craig Arnold; Anna C. Nobre Reward boosts working memory encoding over a brief temporal window Journal Article In: Visual Cognition, vol. 23, no. 1-2, pp. 291–312, 2015. @article{Wallis2015,Selection mechanisms for working memory (WM) are ordinarily studied by explicitly cueing a subset of memory items. However, we might also expect the reward associations of stimuli we encounter to modulate their probability of being represented in WM. Theoretical and computational models explicitly predict that reward value should determine which items will be gated into WM. For example, a model by Braver and colleagues in which phasic dopamine signalling gates WM updating predicts a temporally-specific but not item-specific reward-driven boost to encoding. In contrast, Hazy and colleagues invoke reinforcement learning in cortico-striatal loops and predict an item-wise reward-driven encoding bias. Furthermore, a body of prior work has demonstrated that reward-associated items can capture attention, and it has been shown that attentional capture biases WM encoding. We directly investigated the relationship between reward history and WM encoding. In our first experiment, we found an encoding benefit associated with reward-associated items, but the benefit generalized to all items in the memory array. In a second experiment this effect was shown to be highly temporally specific. We speculate that in real-world contexts in which the environment is sampled sequentially with saccades/shifts in attention, this mechanism could effectively mediate an item-wise encoding bias, because encoding boosts would occur when rewarded items were fixated. |
Wei He; Marta I. Garrido; Paul F. Sowman; Jon Brock; Blake W. Johnson Development of effective connectivity in the core network for face perception Journal Article In: Human Brain Mapping, vol. 36, no. 6, pp. 2161–2173, 2015. @article{He2015b,This study measured effective connectivity within the core face network in young children using a paediatric magnetoencephalograph (MEG). Dynamic causal modeling (DCM) of brain responses was performed in a group of adults (N = 14) and a group of young children aged from 3 to 6 years (N = 15). Three candidate DCM models were tested, and the fits of the MEG data to the three models were compared at both individual and group levels. The results show that the connectivity structure of the core face network differs significantly between adults and children. Further, the relative strengths of face network connections were differentially modulated by experimental conditions in the two groups. These results support the interpretation that the core face network undergoes significant structural configuration and functional specialization between four years of age and adulthood. |
Diane E. MacKenzie; David A. Westwood Investigating visual attention during scene perception of safe and unsafe occupational performance Journal Article In: Canadian Journal of Occupational Therapy, vol. 82, no. 4, pp. 224–234, 2015. @article{MacKenzie2015b,Background. Occupational therapists routinely use observation for evaluation, intervention planning, and prediction of a client's occupational performance and/or safety within the environment. Perception of safety contributes to the decision-making process for discharge or placement recommendations. Purpose. The purpose of this study was to determine if differences exist in safety ratings and eye movements between occupational therapists and nontrained matched individuals while viewing domain-specific versus non-domain-specific images. Method. Ten licensed occupational therapists and 10 age-, gender-, and education level–matched participants completed this eye-tracking study. Findings. For all image exposure durations, occupational therapists had more polarized safety ratings for stroke-related image content but little evidence of differences in eye movements between groups. Eye movement group differences did not emerge in the regions of interest identified by an independent expert panel. Implications. The results point to a complex relationship between decision making and observational behaviour in occupational assessment and highlight the need to look beyond image features. |
