All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2017 |
R. Becket Ebitz; Tirin Moore Selective modulation of the pupil light reflex by microstimulation of prefrontal cortex Journal Article In: Journal of Neuroscience, vol. 37, no. 19, pp. 5008–5018, 2017. @article{Ebitz2017, The prefrontal cortex (PFC) is thought to flexibly regulate sensorimotor responses, perhaps through modulating activity in other circuits. However, the scope of that control remains unknown: it remains unclear whether the PFC can modulate basic reflexes. One canonical example of a central reflex is the pupil light reflex (PLR): the automatic constriction of the pupil in response to luminance increments. Unlike pupil size, which depends on the interaction of multiple physiological and neuromodulatory influences, the PLR reflects the action of a simple brainstem circuit. However, emerging behavioral evidence suggests that the PLR may be modulated by cognitive processes. Although the neural basis of these modulations remains unknown, one possible source is the PFC, particularly the frontal eye field (FEF), an area of the PFC implicated in the control of attention. We show that microstimulation of the rhesus macaque FEF alters the magnitude of the PLR in a spatially specific manner. FEF microstimulation enhanced the PLR to probes presented within the stimulated visual field, but suppressed the PLR to probes at nonoverlapping locations. The spatial specificity of this effect parallels the effect of FEF stimulation on attention and suggests that FEF is capable of modulating visuomotor transformations performed at a lower level than was previously known. These results provide evidence of the selective regulation of a basic brainstem reflex by the PFC. |
Miguel P. Eckstein; Kathryn Koehler; Lauren E. Welbourne; Emre Akbas Humans, but not deep neural networks, often miss giant targets in scenes Journal Article In: Current Biology, vol. 27, no. 18, pp. 2827–2832, 2017. @article{Eckstein2017, Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [1, 2] to birds [3] to humans [4–12] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target's size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors. Eckstein et al. show that during visual search, humans, but not deep neural networks, often miss targets that have an atypical size relative to the surrounding objects in the scene. The authors suggest that this is not a human malfunction but a useful brain strategy to rapidly discount distractors during visual search. |
Anouk J. de Brouwer; Tayler Jarvis; Jason P. Gallivan; J. Randall Flanagan Parallel specification of visuomotor feedback gains during bimanual reaching to independent goals Journal Article In: eNeuro, vol. 4, no. 2, pp. 1–12, 2017. @article{Brouwer2017, During goal-directed reaching, rapid visuomotor feedback processes enable the human motor system to quickly correct for errors in the trajectory of the hand that arise from motor noise and, in some cases, external perturbations. To date, these visuomotor responses, the gain of which is sensitive to features of the task and environment, have primarily been examined in the context of unimanual reaching movements toward a single target. However, many natural tasks involve moving both hands together, often to separate targets, such that errors can occur in parallel and at different spatial locations. Here, we examined the resource capacity of automatic visuomotor corrective mechanisms by comparing feedback gains during bimanual reaches, toward two targets, to feedback gains during unimanual reaches toward single targets. To investigate the sensitivity of the feedback gains and their relation to visual-spatial processing, we manipulated the widths of the targets and participants' gaze location. We found that the gain of corrective responses to cursor displacements, while strongly modulated by target width and gaze position, was only slightly reduced during bimanual control. Our results show that automatic visuomotor corrective mechanisms can efficiently operate in parallel across multiple spatial locations. |
Alex de Carvalho; Isabelle Dautriche; Isabelle Lin; Anne Christophe Phrasal prosody constrains syntactic analysis in toddlers Journal Article In: Cognition, vol. 163, pp. 67–79, 2017. @article{Carvalho2017, This study examined whether phrasal prosody can impact toddlers' syntactic analysis. French noun-verb homophones were used to create locally ambiguous test sentences (e.g., using the homophone as a noun: [le bébé souris] [a bien mangé] - [the baby mouse] [ate well] or using it as a verb: [le bébé] [sourit à sa maman] - [the baby] [smiles to his mother], where brackets indicate prosodic phrase boundaries). Although both sentences start with the same words (le-bebe-/suʁi/), they can be disambiguated by the prosodic boundary that either directly precedes the critical word /suʁi/ when it is a verb, or directly follows it when it is a noun. Across two experiments using an intermodal preferential looking procedure, 28-month-olds (Exp. 1 and 2) and 20-month-olds (Exp. 2) listened to the beginnings of these test sentences while watching two images displayed side-by-side on a TV-screen: one associated with the noun interpretation of the ambiguous word (e.g., a mouse) and the other with the verb interpretation (e.g., a baby smiling). The results show that upon hearing the first words of these sentences, toddlers were able to correctly exploit prosodic information to access the syntactic structure of sentences, which in turn helped them to determine the syntactic category of the ambiguous word and to correctly identify its intended meaning: participants switched their eye-gaze toward the correct image based on the prosodic condition in which they heard the ambiguous target word.
This provides evidence that during the first steps of language acquisition, toddlers are already able to exploit the prosodic structure of sentences to recover their syntactic structure and predict the syntactic category of upcoming words, an ability which would be extremely useful to discover the meaning of novel words. |
Jan Willem de Gee; Olympia Colizoli; Niels A. Kloosterman; Tomas Knapen; Sander Nieuwenhuis; Tobias H. Donner Dynamic modulation of decision biases by brainstem arousal systems Journal Article In: eLife, vol. 6, pp. 1–36, 2017. @article{Gee2017, Decision-makers often arrive at different choices when faced with repeated presentations of the same evidence. Variability of behavior is commonly attributed to noise in the brain's decision-making machinery. We hypothesized that phasic responses of brainstem arousal systems are a significant source of this variability. We tracked pupil responses (a proxy of phasic arousal) during sensory-motor decisions in humans, across different sensory modalities and task protocols. Large pupil responses generally predicted a reduction in decision bias. Using fMRI, we showed that the pupil-linked bias reduction was (i) accompanied by a modulation of choice-encoding pattern signals in parietal and prefrontal cortex and (ii) predicted by phasic, pupil-linked responses of a number of neuromodulatory brainstem centers involved in the control of cortical arousal state, including the noradrenergic locus coeruleus. We conclude that phasic arousal suppresses decision bias on a trial-by-trial basis, thus accounting for a significant component of the variability of choice behavior. |
Floor Groot; Falk Huettig; Christian N. L. Olivers Language-induced visual and semantic biases in visual search are subject to task requirements Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 225–240, 2017. @article{Groot2017, Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant there was still evidence for visual and semantic biases, but these biases were substantially weaker and similar in strength and temporal dynamics without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements. |
Christopher A. Dean; Jorge R. Valdés Kroff Cross-linguistic orthographic effects in late Spanish/English bilinguals Journal Article In: Languages, vol. 2, pp. 24, 2017. @article{Dean2017, Through the use of the visual world paradigm and eye tracking, we investigate how orthographic–phonological mappings in bilinguals promote interference during spoken language comprehension. Eighteen English-dominant bilinguals and 13 Spanish-dominant bilinguals viewed 4-picture visual displays while listening to Spanish-only auditory sentences (e.g., El detective busca su banco ‘The detective is looking for his bench') in order to select a target image. Stimuli included two types of trials that represent potential conflict in bilinguals: b-v trials, e.g., banco-vaso ‘bench-glass', representing homophonous phonemes with distinct graphemic representations in Spanish, and j-h trials, e.g., juego-huevo ‘game-egg', representing interlingual homophonous phonemes with distinct graphemic representations. Data were collected on accuracy, reaction time (RT), and mean proportion of target fixation. Reaction time results indicate that Spanish-dominant speakers were slower when the competitor was present in b-v trials, though no effects were observed for English-dominant speakers. Eye-tracking results indicate a lack of competition effects in either set of trials for English-dominant speakers, but lower proportional target fixations for Spanish-dominant speakers in both sets of trials when an orthographic/phonological distractor was present. These results suggest that Spanish-dominant bilinguals may be influenced by the orthographic mappings of their less-dominant L2 English, providing new insight into the nature of the interaction between the orthography and phonology in bilingual speakers. |
Filip Děchtěrenko; Jiří Lukavský; Kenneth Holmqvist Flipping the stimulus: Effects on scanpath coherence? Journal Article In: Behavior Research Methods, vol. 49, no. 1, pp. 382–393, 2017. @article{Dechterenko2017, In experiments investigating dynamic tasks, it is often useful to examine eye movement scan patterns. We can present trials repeatedly and compute within-subjects/conditions similarity in order to distinguish between signal and noise in gaze data. To avoid obvious repetitions of trials, filler trials must be added to the experimental protocol, resulting in long experiments. Alternatively, trials can be modified to reduce the chances that the participant will notice the repetition, while avoiding significant changes in the scan patterns. In tasks in which the stimuli can be geometrically transformed without any loss of meaning, flipping the stimuli around either of the axes represents a candidate modification. In this study, we examined whether flipping of stimulus object trajectories around the x- and y-axes resulted in comparable scan patterns in a multiple object tracking task. We developed two new strategies for the statistical comparison of similarity between two groups of scan patterns, and then tested those strategies on artificial data. Our results suggest that although the scan patterns in flipped trials differ significantly from those in the original trials, this difference is small (as little as a 13% increase of overall distance). Therefore, researchers could use geometric transformations to test more complex hypotheses regarding scan pattern coherence while retaining the same duration for experiments. |
Gayle DeDe Effects of lexical variables on silent reading comprehension in individuals with aphasia: Evidence from eye tracking Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 60, pp. 2589–2620, 2017. @article{DeDe2017, Purpose: Previous eye-tracking research has suggested that individuals with aphasia (IWA) do not assign syntactic structure on their first pass through a sentence during silent reading comprehension. The purpose of the present study was to investigate the time course with which lexical variables affect silent reading comprehension in IWA. Three lexical variables were investigated: word frequency, word class, and word length. Methods: IWA and control participants without brain damage participated in the experiment. Participants read sentences while a camera tracked their eye movements. Results: IWA showed effects of word class, word length, and word frequency that were similar to or greater than those observed in controls. Conclusions: IWA showed sensitivity to lexical variables on the first pass through the sentence. The results are consistent with the view that IWA focus on lexical access on their first pass through a sentence and then work to build syntactic structure on subsequent passes. In addition, IWA showed very long rereading times and low skipping rates overall, which may contribute to some of the group differences in reading comprehension. |
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco In: Journal of Neurophysiology, vol. 118, no. 3, pp. 1809–1823, 2017. @article{DelleMonache2017, The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with same timing as experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected differently the interceptive timing. TPJ stimulation affected preferentially responses to 1-g motion, hMT/V5+ all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses, being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. |
Francesca Delogu; Matthew W. Crocker; Heiner Drenhaus Teasing apart coercion and surprisal: Evidence from eye-movements and ERPs Journal Article In: Cognition, vol. 161, pp. 46–59, 2017. @article{Delogu2017, Previous behavioral and electrophysiological studies have presented evidence suggesting that coercion expressions (e.g., began the book) are more difficult to process than control expressions like read the book. While this processing cost has been attributed to a specific coercion operation for recovering an event-sense of the complement (e.g., began reading the book), an alternative view based on the Surprisal Theory of language processing would attribute the cost to the relative unpredictability of the complement noun in the coercion compared to the control condition, with no need to postulate coercion-specific mechanisms. In two experiments, monitoring eye-tracking and event-related potentials (ERPs), respectively, we sought to determine whether there is any evidence for coercion-specific processing cost above-and-beyond the difficulty predicted by surprisal, by contrasting coercing and control expressions with a further control condition in which the predictability of the complement noun was similar to that in the coercion condition (e.g., bought the book). While the eye-tracking study showed significant effects of surprisal and a marginal effect of coercion on late reading measures, the ERP study clearly supported the surprisal account. Overall, our findings suggest that the coercion cost largely reflects the surprisal of the complement noun with coercion specific operations possibly influencing later processing stages. |
Heather J. Ferguson; Ian Apperly; James E. Cane Eye tracking reveals the cost of switching between self and other perspectives in a visual perspective-taking task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 8, pp. 1646–1660, 2017. @article{Ferguson2017, Previous studies have shown that while people can rapidly and accurately compute their own and other people's visual perspectives, they experience difficulty ignoring the irrelevant perspective when the two perspectives differ. We used the ‘avatar' perspective-taking task to examine the mechanisms that underlie these egocentric (i.e. interference from their own perspective) and altercentric (i.e. interference from the other person's perspective) tendencies. Participants were eye-tracked as they verified the number of discs in a visual scene according to either their own or an on-screen avatar's perspective. Crucially, in some trials the two perspectives were inconsistent (i.e. each saw a different number of discs), while in others they were consistent. To examine the effect of perspective switching, performance was compared for trials that were preceded with the same versus different perspective cue. We found that altercentric interference can be reduced or eliminated when participants stick with their own perspective across consecutive trials. Our eye-tracking analyses revealed distinct fixation patterns for self and other perspective-taking, suggesting that consistency effects in this paradigm are driven by implicit mentalising of what others can see, and not automatic directional cues from the avatar. |
Evelyn C. Ferstl; Laura Israel; Lisa Putzar Humor facilitates text comprehension: Evidence from eye movements Journal Article In: Discourse Processes, vol. 54, no. 4, pp. 259–284, 2017. @article{Ferstl2017, One crucial property of verbal jokes is that the punchline usually contains an incongruency that has to be resolved by updating the situation model representation. In the standard pragmatic model, these processes are considered to require cognitive effort. However, only few studies compared jokes to texts requiring a situation model revision without being funny. In the present study participants' eye movements were recorded while they read short texts falling into four categories: jokes, texts that made a revision of the situation model necessary without being funny (revision texts), and two types of control texts. Jokes were read faster and elicited fewer regressive eye movements than the other text categories. Women were more sensitive to revision and inference demands of nonhumorous texts than men, and this was particularly the case when the instructions required a meta-linguistic evaluation. In contrast to the predictions of the two-stage model of pragmatics, humor appreciation facilitated text comprehension, and this effect was more pronounced for men than for women. |
Ruth Filik; Emily Brightman; Chloe Gathercole; Hartmut Leuthold The emotional impact of verbal irony: Eye-tracking evidence for a two-stage process Journal Article In: Journal of Memory and Language, vol. 93, pp. 193–202, 2017. @article{Filik2017, In this paper we investigate the socio-emotional functions of verbal irony. Specifically, we use eye-tracking while reading to assess moment-to-moment processing of a character's emotional response to ironic versus literal criticism. In Experiment 1, participants read stories describing a character being upset following criticism from another character. Results showed that participants initially more easily integrated a hurt response following ironic criticism; but later found it easier to integrate a hurt response following literal criticism. In Experiment 2, characters were instead described as having an amused response, which participants ultimately integrated more easily following ironic criticism. From this we propose a two-stage process of emotional responding to irony: While readers may initially expect a character to be more hurt by ironic than literal criticism, they ultimately rationalize ironic criticism as being less hurtful, and more amusing. |
Nonie J. Finlayson; Julie D. Golomb 2D location biases depth-from-disparity judgments but not vice versa Journal Article In: Visual Cognition, vol. 25, no. 9-10, pp. 841–852, 2017. @article{Finlayson2017a, Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias, where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object's 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing. |
Nonie J. Finlayson; Xiaoli Zhang; Julie D. Golomb Differential patterns of 2D location versus depth decoding along the visual hierarchy Journal Article In: NeuroImage, vol. 147, pp. 507–516, 2017. @article{Finlayson2017, Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. |
Eugen Fischer; Paul E. Engelhardt Stereotypical inferences: Philosophical relevance and psycholinguistic toolkit Journal Article In: Ratio, vol. 30, no. 4, pp. 411–442, 2017. @article{Fischer2017, Stereotypes shape inferences in philosophical thought, political discourse, and everyday life. These inferences are routinely made when thinkers engage in language comprehension or production: We make them whenever we hear, read, or formulate stories, reports, philosophical case-descriptions, or premises of arguments – on virtually any topic. These inferences are largely automatic: largely unconscious, non-intentional, and effortless. Accordingly, they shape our thought in ways we can properly understand only by complementing traditional forms of philosophical analysis with experimental methods from psycholinguistics. This paper seeks, first, to bring out the wider philosophical relevance of stereotypical inference, well beyond familiar topics like gender and race. Second, we wish to provide (experimental) philosophers with a toolkit to experimentally study these ubiquitous inferences and what intuitions they may generate. This paper explains what stereotypes are (Section 1), and why they matter to current and traditional concerns in philosophy – experimental, analytic, and applied (Section 2). It then assembles a psycholinguistic toolkit and demonstrates through two studies (Sections 3–4) how questionnaire-based measures (plausibility-ratings) can potentially be combined with process measures (reaction times and pupillometry) to garner evidence for specific stereotypical inferences and study when they ‘go through' and influence our thinking. |
Daniel Fiset; Caroline Blais; Jessica Royer; Anne-Raphaëlle Richoz; Gabrielle Dugas; Roberto Caldara Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 8, pp. 1334–1341, 2017. @article{Fiset2017, Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. |
Geoffrey Fisher An attentional drift diffusion model over binary-attribute choice Journal Article In: Cognition, vol. 168, pp. 34–45, 2017. @article{Fisher2017, In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. |
Aleya Flechsenhar; Matthias Gamer Top-down influence on gaze patterns in the presence of social features Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0183799, 2017. @article{Flechsenhar2017, Visual saliency maps reflecting locations that stand out from the background in terms of their low-level physical features have proven to be very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e. human faces or bodies) would be processed preferentially over non-social features (i.e. objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was regarded nevertheless. More so, after completing the task requirements, viewing behavior reverted back to that of free-viewing with heavy prioritization of social features. Additionally, initial eye movements reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models. |
Francesca Foppolo; Marco Marelli No delay for some inferences Journal Article In: Journal of Semantics, vol. 34, no. 4, pp. 659–681, 2017. @article{Foppolo2017, We present an eye-tracking study on the incremental derivation of the some-but-not-all scalar implicature (SI) associated with the scalar quantifier some. This question has been the matter of a vivid debate, both in linguistics and in psycholinguistics (Chemla & Singh 2014a,b). Experimentally, it was addressed by means of eye tracking, and different results were obtained: while Huang & Snedeker (2009) found evidence for a delay of some with respect to all, Grodner et al. (2010) argued for a rapid integration of pragmatic some. More recently, Breheny et al. (2013a,b) raised some criticism of the paradigm employed in those studies and contributed a looking-while-listening task showing incremental derivation of the scalar inference. We first raise some methodological questions, arguing that the paradigm used in previous studies was not apt to distinguish whether a scalar inference was derived or not, for different reasons. By means of a novel visual-world eye-tracking experiment in which we exploit the notion of focus in the activation of scalar alternatives, we show new evidence for the incremental derivation of the pragmatic some-but-not-all interpretation of some. We interpret these results within a grammatical approach to SIs (Chierchia et al. 2012; Chierchia 2013), according to which, when scalar alternatives are active, the SI is factored in locally and incrementally during the online processing of the scalar quantifier. |
H. Devillez; Anne Guérin-Dugué; N. Guyader How a distractor influences fixations during the exploration of natural scenes Journal Article In: Journal of Eye Movement Research, vol. 10, no. 2, pp. 1–13, 2017. @article{Devillez2017, The distractor effect is a well-established means of studying different aspects of fixation programming during the exploration of visual scenes. In this study, we present a task-irrelevant distractor to participants during the free exploration of natural scenes. We investigate the control and programming of fixations by analyzing fixation durations and locations, and the link between the two. We also propose a simple mixture model evaluated using the Expectation-Maximization algorithm to test the distractor effect on fixation locations, including fixations which did not land on the distractor. The model allows us to quantify the influence of a visual distractor on fixation location relative to scene saliency for all fixations, at distractor onset and during all subsequent exploration. The distractor effect is not limited to the current fixation; it continues to influence fixations during subsequent exploration. An abrupt change in the stimulus not only increases the duration of the current fixation, but also influences the location of the fixation which occurs immediately afterwards and, to some extent depending on the duration of the change, the duration and location of any subsequent fixations. Overall, results from the eye movement analysis and the statistical model suggest that fixation durations and locations are both controlled by direct and indirect mechanisms. |
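The mixture model in the abstract above is fit with the Expectation-Maximization algorithm. As a toy illustration of how EM alternates between computing component responsibilities (E-step) and re-estimating parameters (M-step) — a one-dimensional two-Gaussian sketch, not the authors' actual fixation-location model:

```python
import math

def em_two_gaussians(xs, iters=200):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization.

    A toy analogue of modelling fixation locations as a mixture of, e.g., a
    distractor-driven and a scene-driven component. Returns
    (pi, mu1, sd1, mu2, sd2): the mixing weight of component 1 and the
    parameters of both components.
    """
    # crude initialization from the data spread
    mu1, mu2 = min(xs), max(xs)
    sd1 = sd2 = (max(xs) - min(xs)) / 4 or 1.0
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each sample
        resp = []
        for x in xs:
            p1 = pi * math.exp(-0.5 * ((x - mu1) / sd1) ** 2) / sd1
            p2 = (1 - pi) * math.exp(-0.5 * ((x - mu2) / sd2) ** 2) / sd2
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate mixing weight, means, and standard deviations
        n1 = sum(resp)
        n2 = len(xs) - n1
        pi = n1 / len(xs)
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / n2
        sd1 = max(math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / n1), 1e-6)
        sd2 = max(math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, xs)) / n2), 1e-6)
    return pi, mu1, sd1, mu2, sd2
```

In a fixation-location application the same E/M alternation would run over 2-D positions, with one component centered on the distractor and another following scene saliency; the 1-D version above only shows the mechanics.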
Christel Devue; Gina M. Grimshaw Faces are special, but facial expressions aren't: Insights from an oculomotor capture paradigm Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 5, pp. 1438–1452, 2017. @article{Devue2017, We compared the ability of angry and neutral faces to drive oculomotor behaviour as a test of the widespread claim that emotional information is automatically prioritized when competing for attention. Participants were required to make a saccade to a colour singleton; photos of angry or neutral faces appeared amongst other objects within the array, and were completely irrelevant for the task. Eye-tracking measures indicate that faces drive oculomotor behaviour in a bottom-up fashion; however, angry faces are no more likely to capture the eyes than neutral faces are. Saccade latencies suggest that capture occurs via reflexive saccades and that the outcome of competition between salient items (colour singletons and faces) may be subject to fluctuations in attentional control. Indeed, although angry and neutral faces captured the eyes reflexively on a portion of trials, participants successfully maintained goal-relevant oculomotor behaviour on a majority of trials. We outline potential cognitive and brain mechanisms underlying oculomotor capture by faces. |
Nicholas K. DeWind; Jiyun Peng; Andrew Luo; Elizabeth M. Brannon; Michael L. Platt Pharmacological inactivation does not support a unique causal role for intraparietal sulcus in the discrimination of visual number Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0188820, 2017. @article{DeWind2017, The "number sense" describes the intuitive ability to quantify without counting. Single neuron recordings in non-human primates and functional imaging in humans suggest the intraparietal sulcus is an important neuroanatomical locus of numerical estimation. Other lines of inquiry implicate the IPS in numerous other functions, including attention and decision making. Here we provide a direct test of whether IPS has functional specificity for numerosity judgments. We used muscimol to reversibly and independently inactivate the ventral and lateral intraparietal areas in two monkeys performing a numerical discrimination task and a color discrimination task, roughly equilibrated for difficulty. Inactivation of either area caused parallel impairments in both tasks and no evidence of a selective deficit in numerical processing. These findings do not support a causal role for the IPS in numerical discrimination, except insofar as it also has a role in the discrimination of color. We discuss our findings in light of several alternative hypotheses of IPS function, including a role in orienting responses, a general cognitive role in attention and decision making processes and a more specific role in ordinal comparison that encompasses both number and color judgments. |
Nathaniel T. Diede; Julie M. Bugg Cognitive effort is modulated outside of the explicit awareness of conflict frequency: Evidence from pupillometry Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 5, pp. 824–835, 2017. @article{Diede2017, Classic theories of cognitive control conceptualized controlled processes as slow, strategic, and willful, with automatic processes being fast and effortless. The context-specific proportion compatibility (CSPC) effect, the reduction in the compatibility effect in a context (e.g., location) associated with a high relative to low likelihood of conflict, challenged classic theories by demonstrating fast and flexible control that appears to operate outside of conscious awareness. Two theoretical questions yet to be addressed are whether the CSPC effect is accompanied by context-dependent variation in effort, and whether the exertion of effort depends on explicit awareness of context-specific task demands. To address these questions, pupil diameter was measured during a CSPC paradigm. Stimuli were randomly presented in either a mostly compatible location or a mostly incompatible location. Replicating prior research, the CSPC effect was found. The novel finding was that pupil diameter was greater in the mostly incompatible location compared to the mostly compatible location, despite participants' lack of awareness of context-specific task demands. Additionally, this difference occurred regardless of trial type or a preceding switch in location. These patterns support the view that context (location) dictates selection of optimal attentional settings in the CSPC paradigm, and varying levels of effort and performance accompany these settings. Theoretically, these patterns imply that cognitive control may operate fast, flexibly, and outside of awareness, but not effortlessly. |
Kevin C. Dieter; Jocelyn L. Sy; Randolph Blake Individual differences in sensory eye dominance reflected in the dynamics of binocular rivalry Journal Article In: Vision Research, vol. 141, pp. 40–50, 2017. @article{Dieter2017, Normal binocular vision emerges from the combination of neural signals arising within separate monocular pathways. It is natural to wonder whether both eyes contribute equally to the unified cyclopean impression we ordinarily experience. Binocular rivalry, which occurs when the inputs to the two eyes are markedly different, affords a useful means for quantifying the balance of influence exerted by the eyes (called sensory eye dominance, SED) and for relating that degree of balance to other aspects of binocular visual function. However, the precise ways in which binocular rivalry dynamics change when the eyes are unbalanced remain uncharted. Relying on widespread individual variability in the relative predominance of the two eyes as demonstrated in previous studies, we found that an observer's overall tendency to see one eye more than the other was driven both by differences in the relative duration and frequency of instances of that eye's perceptual dominance. Specifically, larger imbalances between the eyes were associated with longer and more frequent periods of exclusive dominance for the stronger eye. Increases in occurrences of dominant eye percepts were mediated in part by a tendency to experience “return transitions” to the predominant eye – that is, observers often experienced sequential exclusive percepts of the dominant eye's image with an intervening mixed percept. Together, these results indicate that the often-observed imbalances between the eyes during binocular rivalry reflect true differences in sensory processing, a finding that has implications for our understanding of the mechanisms underlying binocular vision in general. |
Aster Dijkgraaf; Robert J. Hartsuiker; Wouter Duyck Predicting upcoming information in native-language and non-native-language auditory word recognition Journal Article In: Bilingualism: Language and Cognition, vol. 20, no. 5, pp. 917–930, 2017. @article{Dijkgraaf2017, Monolingual listeners continuously predict upcoming information. Here, we tested whether predictive language processing occurs to the same extent when bilinguals listen to their native language vs. a non-native language. Additionally, we tested whether bilinguals use prediction to the same extent as monolinguals. Dutch-English bilinguals and English monolinguals listened to constraining and neutral sentences in Dutch (bilinguals only) and in English, and viewed target and distractor pictures on a display while their eye movements were measured. There was a bias of fixations towards the target object in the constraining condition, relative to the neutral condition, before information from the target word could affect fixations. This prediction effect occurred to the same extent in native processing by bilinguals and monolinguals, but also in non-native processing. This indicates that unbalanced, proficient bilinguals can quickly use semantic information during listening to predict upcoming referents to the same extent in both of their languages. |
Barbara Dillenburger; Michael Morgan Saccades to explicit and virtual features in the Poggendorff figure show perceptual biases Journal Article In: i-Perception, vol. 8, no. 2, pp. 1–21, 2017. @article{Dillenburger2017, Human participants made saccadic eye movements to various features in a modified vertical Poggendorff figure, to measure errors in the location of key geometrical features. In one task, subjects (n = 8) made saccades to the vertex of the oblique T-intersection between a diagonal pointer and a vertical line. Results showed both a small tendency to shift the saccade toward the interior of the angle, and a larger bias in the direction of a shorter saccade path to the landing line. In a different kind of task (visual extrapolation), the same subjects fixated the tip of a 45° pointer and made a saccade to the implicit point of intersection between pointer and a distant vertical line. Results showed large errors in the saccade landing positions and the saccade polar angle, in the direction predicted from the perceptual Poggendorff bias. Further experiments manipulated the position of the fixation point relative to the implicit target, such that the Poggendorff bias would be in the opposite direction from a bias toward taking the shortest path to the landing line. The bias was still significant. We conclude that the Poggendorff bias in eye movements is in part due to the mislocation of visible target features but also to biases in planning a saccade to a virtual target across a gap. The latter kind of error comprises both a tendency to take the shortest path to the landing line, and a perceptual error that overestimates the vector component orthogonal to the gap. |
Brian W. Dillon; Charles Clifton; Shayne Sloggett; Lyn Frazier Appositives and their aftermath: Interference depends on at-issue vs. not-at-issue status Journal Article In: Journal of Memory and Language, vol. 96, pp. 93–109, 2017. @article{Dillon2017, Much research has explored the degree to which not-at-issue content is interpreted independently of at-issue content, or the main assertion of a sentence (AnderBois, Brasoveanu, & Henderson, 2011; Harris & Potts, 2009; Potts, 2005; Schlenker, 2010; Tonhauser, 2011; a.o.). Building on this work, psycholinguistic research has explored the hypothesis that not-at-issue content, such as appositive relative clauses, is treated distinctly from at-issue content in online processing (Dillon, Clifton, & Frazier, 2014; Syrett & Koev, 2015). In the present paper, we explore the way in which appositive relative clauses interact with their host sentences in the course of incremental sentence comprehension. In an offline acceptability judgment, we find that appositive relative clauses contribute significantly less processing difficulty when they intervene between a filler and its gap than do superficially similar restrictive relative clauses. Results from two eye-tracking-while-reading studies suggest that recently processed restrictive relative clauses interfere to a greater degree with processes of integrating the filler at its gap site than do appositive relative clauses. Our findings suggest that the degree of interference observed during sentence processing may depend on the discourse status of potentially interfering constituents. We propose that this arises because the syntactic form of not-at-issue content is rendered relatively unavailable once it has been processed. |
Giorgia D'Innocenzo; Claudia C. Gonzalez; Alexander V. Nowicky; A. Mark Williams; Daniel T. Bishop Motor resonance during action observation is gaze-contingent: A TMS study Journal Article In: Neuropsychologia, vol. 103, pp. 77–86, 2017. @article{DInnocenzo2017, When we observe others performing an action, visual input to our mirror neuron system is reflected in the facilitation of primary motor cortex (M1), a phenomenon known as ‘motor resonance'. However, it is unclear whether this motor resonance is contingent upon our point-of-gaze. In order to address this issue, we collected gaze data from participants as they viewed an intransitive action – thumb abduction/adduction – under four conditions: with natural gaze behaviour (free viewing) and with their gaze fixated on each of three predetermined loci at various distances from the prime mover. In a control condition, participants viewed little finger movements, also with a fixated gaze. Transcranial magnetic stimulation (TMS) was delivered to M1 and motor evoked potentials (MEPs) were recorded from the right abductor pollicis brevis (APB) and right abductor digiti minimi (ADM). Results showed that, relative to a free viewing condition, a fixated point-of-gaze which maximized transfoveal motion facilitated MEPs in APB. Moreover, during free viewing, saccade amplitudes and APB MEP amplitudes were negatively correlated. These findings indicate that motor resonance is contingent on the observer's gaze behaviour and that, for simple movements, action observation effects may be enhanced by employing a fixed point-of-gaze. |
Nicholas E. DiQuattro; Joy J. Geng Presaccadic target competition attenuates distraction Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 4, pp. 1087–1096, 2017. @article{DiQuattro2017, Although it is well known that salient nontargets can capture attention despite being task irrelevant, several studies have reported short fixation dwell times, suggesting the presence of an attentional mechanism to “rapidly reject” dissimilar distractors. Rapid rejection has been hypothesized to depend on the strong mismatch between distractor features and the target template, but it is unknown whether the presence of strong feature mismatch is sufficient, or if the presence of a target at a competing location is also necessary. Here, we investigated this question by first replicating the finding of rapid rejection for dissimilar distractors in the presence of a concurrent target (Experiment 1); manipulating the onset of the target stimulus relative to the distractor (Experiment 2); and using a saccade-contingent display to delay the target onset until after the first saccade was initiated. The results demonstrate that the speed of distractor rejection depends on the presence of target competition prior to the initiation of the first saccade, and not after the saccade. This suggests that stimulus competition for covert attention sets a “saccade priority map” that unfolds over time, resulting in faster corrective saccades to an anticipated object with higher top-down attentional priority. |
Alice Doherty; Kathy Conklin How gender-expectancy affects the processing of “them” Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 4, pp. 718–735, 2017. @article{Doherty2017, How sensitive is pronoun processing to expectancies based on real-world knowledge and language usage? The current study links research on the integration of gender stereotypes and number-mismatch to explore this question. It focuses on the use of them to refer to antecedents of different levels of gender-expectancy (low-cyclist, high-mechanic, known-spokeswoman). In a rating task, them is considered increasingly unnatural with greater gender-expectancy. However, participants might not be able to differentiate high-expectancy and gender-known antecedents online because they initially search for plural antecedents (e.g., Sanford & Filik), and they make all-or-nothing gender inferences. An eye-tracking study reveals early differences in the processing of them with antecedents of high gender-expectancy compared with gender-known antecedents. This suggests that participants have rapid access to the expected gender of the antecedent and the level of that expectancy. |
Marjorie Dole; David Méary; Olivier Pascalis Modifications of visual field asymmetries for face categorization in early deaf adults: A study with chimeric faces Journal Article In: Frontiers in Psychology, vol. 8, pp. 30, 2017. @article{Dole2017, Right hemisphere lateralization for face processing is well documented in typical populations. At the behavioral level, this right hemisphere bias is often related to a left visual field (LVF) bias. A conventional means to study this phenomenon consists of using chimeric faces that are composed of the left and right parts of two faces. In this paradigm, participants generally use the left part of the chimeric face, mostly processed through the right optic tract, to determine its identity, gender or age. To assess the impact of early auditory deprivation on face processing abilities, we tested the LVF bias in a group of early deaf participants and hearing controls. In two experiments, deaf and hearing participants performed a gender categorization task with chimeric and normal average faces. Over the two experiments the results confirmed the presence of a LVF bias in participants, which was less frequent in deaf participants. This result suggested modifications of hemispheric lateralization for face processing in deaf participants. In Experiment 2 we also recorded eye movements to examine whether the LVF bias could be related to face scanning behavior. In this second study, participants performed a similar task while we recorded eye movements using an eye-tracking system. Using areas-of-interest analysis, we observed that the proportion of fixations on the mouth, relative to the other areas, was increased in deaf participants in comparison with the hearing group. This was associated with a decrease in the proportion of fixations on the eyes. In addition, these measures were correlated with the LVF bias, suggesting a relationship between the LVF bias and the patterns of facial exploration. Taken together, these results suggest that early auditory deprivation results in plasticity phenomena affecting the perception of static faces through modifications of hemispheric lateralization and of gaze behavior. |
Ewa Domaradzka; Maksymilian Bielecki Deadly attraction - attentional bias toward preferred cigarette brand in smokers Journal Article In: Frontiers in Psychology, vol. 8, pp. 1365, 2017. @article{Domaradzka2017, Numerous studies have shown that biases in visual attention might be evoked by affective and personally relevant stimuli, for example addiction-related objects. Despite the fact that addiction is often linked to specific products and systematic purchase behaviors, no studies focused directly on the existence of bias evoked by brands. Smokers are characterized by high levels of brand loyalty and everyday contact with cigarette packaging. Using the incentive-salience mechanism as a theoretical framework, we hypothesized that this group might exhibit a bias toward the preferred cigarette brand. In our study, a group of smokers (N = 40) performed a dot probe task while their eye movements were recorded. In every trial a pair of pictures was presented – each of them showed a single cigarette pack. The visual properties of stimuli were carefully controlled, so branding information was the key factor affecting subjects' reactions. For each participant, we compared gaze behavior related to the preferred vs. other brands. The analyses revealed no attentional bias in the early, orienting phase of the stimulus processing and strong differences in maintenance and disengagement. Participants spent more time looking at the preferred cigarettes, and saccades starting at the preferred brand location had longer latencies. In sum, our data show that attentional bias toward brands might be found in situations not involving choice or decision making. These results provide important insights into the mechanisms of formation and maintenance of attentional biases to stimuli of personal relevance and might serve as a first step toward developing new attitude measurement techniques. |
2016 |
Sujaya Neupane; Daniel Guitton; Christopher C. Pack Two distinct types of remapping in primate cortical area V4 Journal Article In: Nature Communications, vol. 7, pp. 10402, 2016. @article{Neupane2016a, Visual neurons typically receive information from a limited portion of the retina, and such receptive fields are a key organizing principle for much of visual cortex. At the same time, there is strong evidence that receptive fields transiently shift around the time of saccades. The nature of the shift is controversial: Previous studies have found shifts consistent with a role for perceptual constancy; other studies suggest a role in the allocation of spatial attention. Here we present evidence that both the previously documented functions exist in individual neurons in primate cortical area V4. Remapping associated with perceptual constancy occurs for saccades in all directions, while attentional shifts mainly occur for neurons with receptive fields in the same hemifield as the saccade end point. The latter are relatively sluggish and can be observed even during saccade planning. Overall these results suggest a complex interplay of visual and extraretinal influences during the execution of saccades. |
Jolande Fooken; Sang-Hoon Yeo; Dinesh K. Pai; Miriam Spering Eye movement accuracy determines natural interception strategies Journal Article In: Journal of Vision, vol. 16, no. 14, pp. 1–15, 2016. @article{Fooken2016, Eye movements aid visual perception and guide actions such as reaching or grasping. Most previous work on eye-hand coordination has focused on saccadic eye movements. Here we show that smooth pursuit eye movement accuracy strongly predicts both interception accuracy and the strategy used to intercept a moving object. We developed a naturalistic task in which participants (n = 42 varsity baseball players) intercepted a moving dot (a "2D fly ball") with their index finger in a designated "hit zone." Participants were instructed to track the ball with their eyes, but were only shown its initial launch (100-300 ms). Better smooth pursuit resulted in more accurate interceptions and determined the strategy used for interception, i.e., whether interception was early or late in the hit zone. Even though early and late interceptors showed equally accurate interceptions, they may have relied on distinct tactics: early interceptors used cognitive heuristics, whereas late interceptors' performance was best predicted by pursuit accuracy. Late interception may be beneficial in real-world tasks as it provides more time for decision and adjustment. Supporting this view, baseball players who were more senior were more likely to be late interceptors. Our findings suggest that interception strategies are optimally adapted to the proficiency of the pursuit system. |
Jaap Munneke; Artem V. Belopolsky; Jan Theeuwes Distractors associated with reward break through the focus of attention Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 7, pp. 2213–2225, 2016. @article{Munneke2016, In the present study, we investigated the conditions in which rewarded distractors have the ability to capture attention, even when attention is directed toward the target location. Experiment 1 showed that when the probability of obtaining reward was high, all salient distractors captured attention, even when they were not associated with reward. This effect may have been caused by participants suboptimally using the 100%-valid endogenous location cue. Experiment 2 confirmed this result by showing that salient distractors did not capture attention in a block in which no reward was expected. In Experiment 3, the probability of the presence of a distractor was high, but it only signaled reward availability on a low number of trials. The results showed that those very infrequent distractors that signaled reward captured attention, whereas the distractors (both frequent and infrequent ones) not associated with reward were simply ignored. The latter experiment indicates that even when attention is directed to a location in space, stimuli associated with reward break through the focus of attention, but equally salient stimuli not associated with reward do not. |
K. B. Pedersen; A. K. Sjølie; A. H. Vestergaard; S. Andréasson; F. Møller In: Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 254, no. 10, pp. 1897–1908, 2016. @article{Pedersen2016, Purpose: To quantify fixation stability in patients with neovascular age-related macular degeneration (nAMD) at baseline, 3 and 6 months after anti-vascular endothelial growth factor (anti-VEGF) treatment, and furthermore to assess the implications of an unsteady fixation for multifocal electroretinography (mfERG) measurements. Methods: Fifty eyes of 50 nAMD patients receiving intravitreal anti-VEGF treatment with either bevacizumab or ranibizumab and eight eyes of eight control subjects were included. Fixation stability measurements were performed with the EyeLink eye-tracking system, and the retinal area in degrees² (deg²) containing the 68% most frequently used fixation points (RAF68) was calculated. MfERG P1 amplitude and implicit time were analyzed in six concentric rings and as a summed response. Patients were examined at baseline, 3 and 6 months. Four different mfERG recordings were performed for the control subjects to mimic an involuntary unstable fixation: normal central fixation, and 2.4°, 4.8°, and 7.1° fixation instability. Results: For control subjects, a fixation instability of 2.4° (corresponding to the central hexagon) did not reduce mfERG ring amplitudes significantly, whereas 4.8° and 7.1° fixation instability reduced the amplitudes significantly in rings 1 and 2 (p < 0.001), as well as in the peripheral rings in the 7.1° instability condition (p < 0.001). Fixation stability improved non-significantly for patients at 3 and 6 months. The size of the retinal area of fixation was negatively correlated with visual acuity (VA) at baseline, 3 and 6 months (r = −0.65, −0.60, and −0.66, respectively; p < 0.001) and with mfERG amplitudes of the three innermost rings (rbaseline = −0.29 |
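The RAF68 measure in the abstract above quantifies fixation stability as the retinal area containing 68% of fixation points. A closely related and widely used stability metric is the bivariate contour ellipse area (BCEA); the sketch below computes it under a bivariate-normal assumption and is offered only as an illustration of this family of metrics, not as the authors' exact RAF68 procedure:

```python
import math

def bcea(xs, ys, p=0.68):
    """Bivariate contour ellipse area for gaze samples (xs, ys).

    Returns the area (in input units squared, e.g. deg^2) of the ellipse
    expected to contain proportion p of the samples, assuming they are
    bivariate normal. Related to, but not identical to, the RAF68 measure
    described in the abstract.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    # Pearson correlation between horizontal and vertical gaze position
    rho = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)) / (sx * sy)
    k = -math.log(1.0 - p)  # chi-square scaling giving coverage p
    return 2.0 * math.pi * k * sx * sy * math.sqrt(1.0 - rho ** 2)
```

Because the area grows with the product of the horizontal and vertical dispersions, doubling gaze scatter in both axes quadruples the reported area, which matches the intuition that a less stable fixation occupies a larger retinal region.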
Mara Otten; Daniel Schreij; Sander A. Los The interplay of goal-driven and stimulus-driven influences on spatial orienting Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1642–1654, 2016. @article{Otten2016, Search for a target stimulus among distractors is subject to both goal-driven and stimulus-driven influences. Variables that selectively modify these influences have shown strong interaction effects on saccade trajectories toward the target, suggesting the involvement of a shared spatial orienting mechanism. However, subsequent manual response times (RTs) have revealed additive effects, suggesting that different mechanisms are involved. In the present study, we tested the hypothesis that an interaction for RTs is obscured by preceding multisaccade trajectories, promoted by the continuous presence of distractors in the display. In two experiments, we compared a condition in which distractors were removed soon after the presentation of the search display to a standard condition in which distractors were not removed. The results showed additive goal-driven and stimulus-driven effects on RTs in the standard condition, but an interaction when distractors were removed. These findings support the view that both variables influence a shared spatial orienting mechanism. |
Stephen M. Lee; Alicia Peltsch; Maureen Kilmade; Donald C. Brien; Brian C. Coe; Ingrid S. Johnsrude; Douglas P. Munoz Neural correlates of predictive saccades Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 8, pp. 1210–1227, 2016. @article{Lee2016, Every day we generate motor responses that are timed with external cues. This phenomenon of sensorimotor synchronization has been simplified and studied extensively using finger tapping sequences that are executed in synchrony with auditory stimuli. The predictive saccade paradigm closely resembles the finger tapping task. In this paradigm, participants follow a visual target that “steps” between two fixed locations on a visual screen at predictable ISIs. Eventually, the time from target appearance to saccade initiation (i.e., saccadic RT) becomes predictive with values nearing 0 msec. Unlike the finger tapping literature, neural control of predictive behavior described within the eye movement literature has not been well established and is inconsistent, especially between neuroimaging and patient lesion studies. To resolve these discrepancies, we used fMRI to investigate the neural correlates of predictive saccades by contrasting brain areas involved with behavior generated from the predictive saccade task with behavior generated from a reactive saccade task (saccades are generated toward targets that are unpredictably timed). We observed striking differences in neural recruitment between reactive and predictive conditions: Reactive saccades recruited oculomotor structures, as predicted, whereas predictive saccades recruited brain structures that support timing in motor responses, such as the crus I of the cerebellum, and structures commonly associated with the default mode network. Therefore, our results were more consistent with those found in the finger tapping literature. |
Cécile Eymond; Patrick Cavanagh; Thérèse Collins Feature-based attention across saccades and immediate postsaccadic selection Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 5, pp. 1293–1301, 2016. @article{Eymond2016, Before each eye movement, attentional resources are drawn to the saccade goal. This saccade-related attention is known to be spatial in nature, and in this study we asked whether it also evokes any feature selectivity that is maintained across the saccade. After a saccade toward a colored target, participants performed a postsaccadic feature search on an array displayed at landing. The saccade target either had the same color as the search target in the postsaccadic array (congruent trials) or a different color (incongruent or neutral trials). Our results show that the color of the saccade target did not prime the subsequent feature search. This suggests that "landmark search", the process of searching for the saccade target once the eye lands (Deubel in Visual Cognition, 11, 173-202, 2004), may not involve the attentional mechanisms that underlie feature search. We also analyzed intertrial effects and observed priming of pop-out (Maljkovic & Nakayama in Memory & Cognition, 22, 657-672, 1994) for the postsaccadic feature search: the detection of the color singleton became faster when its color was repeated on successive trials. However, search performance revealed no effect of congruency between the saccade and search targets, either within or across trials, suggesting that the priming of pop-out is specific to target repetitions within the same task and is not seen for repetitions across tasks. Our results support a dissociation between feature-based attention and the attentional mechanisms associated with eye movement programming. |
John-Ross Rizzo; Todd E. Hudson; Weiwei Dai; Ninad Desai; Arash Yousefi; Dhaval Palsana; Ivan Selesnick; Laura J. Balcer; Steven L. Galetta; Janet C. Rucker Objectifying eye movements during rapid number naming: Methodology for assessment of normative data for the King-Devick test Journal Article In: Journal of the Neurological Sciences, vol. 362, pp. 232–239, 2016. @article{Rizzo2016a, Objective: Concussion is a major public health problem and considerable efforts are focused on sideline-based diagnostic testing to guide return-to-play decision-making and clinical care. The King-Devick (K-D) test, a sensitive sideline performance measure for concussion detection, reveals slowed reading times in acutely concussed subjects, as compared to healthy controls; however, the normal behavior of eye movements during the task and deficits underlying the slowing have not been defined. Methods: Twelve healthy control subjects underwent quantitative eye tracking during digitized K-D testing. Results: The total K-D reading time was 51.24 (± 9.7) seconds. A total of 145 saccades (± 15) per subject were generated, with average peak velocity 299.5°/s and average amplitude 8.2°. The average inter-saccadic interval was 248.4 ms. Task-specific horizontal and oblique saccades per subject numbered, respectively, 102 (± 10) and 17 (± 4). Subjects with the fewest saccades tended to blink more, resulting in a larger amount of missing data, whereas subjects with the most saccades tended to make extra saccades during line transitions. Conclusions: Establishment of normal and objective ocular motor behavior during the K-D test is a critical first step towards defining the range of deficits underlying abnormal testing in concussion. Further, it sets the groundwork for exploration of K-D correlations with cognitive dysfunction and saccadic paradigms that may reflect specific neuroanatomic deficits in the concussed brain. |
Jean-Baptiste Bernard; Carlos Aguilar; Eric Castet In: PLoS ONE, vol. 11, no. 4, pp. e0152506, 2016. @article{Bernard2016b, Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). |
Megan H. Papesh; Stephen D. Goldinger; Michael C. Hout Eye movements reveal fast, voice-specific priming Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 3, pp. 314–337, 2016. @article{Papesh2016, In spoken word perception, voice specificity effects are well-documented: When people hear repeated words in some task, performance is generally better when repeated items are presented in their originally heard voices, relative to changed voices. A key theoretical question about voice specificity effects concerns their time-course: Some studies suggest that episodic traces exert their influence late in lexical processing (the time-course hypothesis; McLennan & Luce, 2005), whereas others suggest that episodic traces influence immediate, online processing. We report 2 eye-tracking studies investigating the time-course of voice-specific priming within and across cognitive tasks. In Experiment 1, participants performed modified lexical decision or semantic classification to words spoken by 4 speakers. The tasks required participants to click a red "x" or a blue "+" located randomly within separate visual half-fields, necessitating trial-by-trial visual search with consistent half-field response mapping. After a break, participants completed a second block with new and repeated items, half spoken in changed voices. Voice effects were robust very early, appearing in saccade initiation times. Experiment 2 replicated this pattern while changing tasks across blocks, ruling out a response priming account. In the General Discussion, we address the time-course hypothesis, focusing on the challenge it presents for empirical disconfirmation, and highlighting the broad importance of indexical effects, beyond studies of priming. |
Nayoung So; Veit Stuphorn Supplementary eye field encodes confidence in decisions under risk Journal Article In: Cerebral Cortex, vol. 26, no. 2, pp. 764–782, 2016. @article{So2016, Choices are made with varying degrees of confidence, a cognitive signal representing the subjective belief in the optimality of the choice. Confidence has been mostly studied in the context of perceptual judgments, in which choice accuracy can be measured using objective criteria. Here, we study confidence in subjective value-based decisions. We recorded in the supplementary eye field (SEF) of monkeys performing a gambling task, where they had to use subjective criteria for placing bets. We found neural signals in the SEF that explicitly represent choice confidence independent from reward expectation. This confidence signal appeared after the choice and diminished before the choice outcome. Most of this neuronal activity was negatively correlated with confidence, and was strongest in trials on which the monkey spontaneously withdrew his choice. Such confidence-related activity indicates that the SEF not only guides saccade selection, but also evaluates the likelihood that the choice was optimal. This internal evaluation influences decisions concerning the willingness to bear later costs that follow from the choice or to avoid them. More generally, our findings indicate that choice confidence is an integral component of all forms of decision-making, whether they are based on perceptual evidence or on value estimations. |
Gustav Kuhn; Ronald A. Rensink The Vanishing Ball Illusion: A new perspective on the perception of dynamic events Journal Article In: Cognition, vol. 148, pp. 64–70, 2016. @article{Kuhn2016, Our perceptual experience is largely based on prediction, and as such can be influenced by knowledge of forthcoming events. This susceptibility is commonly exploited by magicians. In the Vanishing Ball Illusion, for example, a magician tosses a ball in the air a few times and then pretends to throw the ball again, whilst secretly concealing it in his hand. Most people claim to see the ball moving upwards and then vanishing, even though it did not leave the magician's hand (Kuhn & Land, 2006; Triplett, 1900). But what exactly can such illusions tell us? We investigated here whether seeing a real action before the pretend one was necessary for the Vanishing Ball Illusion. Participants either saw a real action immediately before the fake one, or only a fake action. Nearly one third of participants experienced the illusion with the fake action alone, while seeing the real action beforehand enhanced this effect even further. Our results therefore suggest that perceptual experience relies both on long-term knowledge of what an action should look like, as well as exemplars from the immediate past. In addition, whilst there was a forward displacement of perceived location in perceptual experience, this was not found for oculomotor responses, consistent with the proposal that two separate systems are involved in visual perception. |
Samantha W. Michalka; Maya L. Rosen; Lingqiang Kong; Barbara G. Shinn-Cunningham; David C. Somers Auditory spatial coding flexibly recruits anterior, but not posterior, visuotopic parietal cortex Journal Article In: Cerebral Cortex, vol. 26, no. 3, pp. 1302–1308, 2016. @article{Michalka2016, Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. |
Julie Mercier; Irina Pivneva; Debra Titone The role of prior language context on bilingual spoken word processing: Evidence from the visual world task Journal Article In: Bilingualism: Language and Cognition, vol. 19, no. 2, pp. 376–399, 2016. @article{Mercier2016, We investigated whether speaking in one language affects cross- and within-language activation when subsequently switching to a task performed in the same or different language. English-French bilinguals (L1 English |
Andrew Isaac Meso; Anna Montagnini; Jason Bell; Guillaume S. Masson Looking for symmetry: Fixational eye movements are biased by image mirror symmetry Journal Article In: Journal of Neurophysiology, vol. 116, pp. 1250–1260, 2016. @article{Meso2016, Humans are highly sensitive to symmetry. During scene exploration, the area of the retina with dense light receptor coverage acquires most information from relevant locations determined by gaze fixation. We characterised patterns of fixational eye movements made by observers staring at synthetic scenes either freely (i.e., free exploration) or during a symmetry orientation discrimination task (i.e., active exploration). Stimuli could be mirror-symmetric or not. Both free and active exploration generated more saccades parallel to the axis of symmetry than along other orientations. Most saccades were small (<2°), leaving the fovea within a 4° radius of fixation. The analysis of saccade dynamics showed that the observed parallel orientation selectivity emerged within 500 ms of stimulus onset and persisted throughout the trials under both viewing conditions. Symmetry strongly distorted existing anisotropies in gaze direction in a seemingly automatic process. We argue that this bias serves a functional role in which adjusted scene sampling enhances and maintains sustained sensitivity to local spatial correlations arising from symmetry. |
Andrew Isaac Meso; James Rankin; Olivier Faugeras; Pierre Kornprobst; Guillaume S. Masson The relative contribution of noise and adaptation to competition during tri-stable motion perception Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–24, 2016. @article{Meso2016a, Animals exploit antagonistic interactions for sensory processing and these can cause oscillations between competing states. Ambiguous sensory inputs yield such perceptual multistability. Despite numerous empirical studies using binocular rivalry or plaid pattern motion, the driving mechanisms behind the spontaneous transitions between alternatives remain unclear. In the current work, we used a tristable barber pole motion stimulus, combining empirical and modeling approaches to elucidate the contributions of noise and adaptation to the underlying competition. We first robustly characterized the coupling between perceptual reports of transitions and continuously recorded eye direction, identifying a critical window of 480 ms before button presses, within which both measures were most strongly correlated. Second, we identified a novel nonmonotonic relationship between stimulus contrast and average perceptual switching rate, with an initially rising rate before a gentle reduction at higher contrasts. A neural fields model of the underlying dynamics, introduced in previous theoretical work and incorporating noise and adaptation mechanisms, was adapted, extended, and empirically validated. Noise and adaptation contributions were confirmed to dominate at the lower and higher contrasts, respectively. Model simulations, with two free parameters controlling adaptation dynamics and direction thresholds, captured the measured mean transition rates for participants. We verified the shift from noise-dominated toward adaptation-driven behavior in both the eye direction distributions and intertransition duration statistics. This work combines modeling and empirical evidence to demonstrate the signal-strength-dependent interplay between noise and adaptation during tristability. We propose that the findings generalize beyond the barber pole stimulus case to ambiguous perception in continuous feature spaces. |
Inga Meyhöfer; Katja Bertsch; Moritz Esser; Ulrich Ettinger Variance in saccadic eye movements reflects stable traits Journal Article In: Psychophysiology, vol. 53, no. 4, pp. 566–578, 2016. @article{Meyhoefer2016, Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance are high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person by situation interactions have not been investigated. To do so, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of variance of a single measurement was explained by trans-situationally stable person effects, while situational aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes. |
Audrey L. Michal; David Uttal; Priti Shah; Steven L. Franconeri Visual routines for extracting magnitude relations Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1802–1809, 2016. @article{Michal2016, Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure how college students and young children (6- and 8-year-olds) extract such relations from graphs. Participants compared relational statements ("Are there more blueberries than oranges?") with simple graphs, and two systematic patterns emerged: eye movements that followed the verbal order of the question (inspecting the "blueberry" value first) versus those that followed a left-first bias (regardless of the left value's identity). Question-order patterns led to substantially faster responses and increased in prevalence with age, whereas the left-first pattern led to far slower responses and was the dominant strategy for younger children. We argue that the optimal way to verify a verbally expressed relation's consistency with a visualization is for the eyes to mimic the verbal ordering, but that this strategy requires executive control and coordination with language. |
Thomas Miconi; Laura Groomes; Gabriel Kreiman There's Waldo! A normalization model of visual search predicts single-trial human fixations in an object search task Journal Article In: Cerebral Cortex, vol. 26, no. 7, pp. 3064–3082, 2016. @article{Miconi2016, When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. |
Evelyn Milburn; Tessa Warren; Michael Walsh Dickey World knowledge affects prediction as quickly as selectional restrictions: Evidence from the visual world paradigm Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 4, pp. 536–548, 2016. @article{Milburn2016, There has been considerable debate regarding whether linguistic knowledge and world knowledge are separable and used differently during processing (Hagoort et al., 2004; Matsuki et al., 2011; Paczynski et al., 2012; Warren et al., 2007). Previous investigations into this question have provided mixed evidence as to whether violations of selectional restrictions are detected earlier than violations of world knowledge. We report a visual world eye-tracking study comparing the timing of facilitation contributed by selectional restrictions vs. world knowledge. College-aged adults (n = 36) viewed photographs of natural scenes while listening to sentences. Participants anticipated upcoming direct objects similarly regardless of whether facilitation was provided by only world knowledge or a combination of selectional restrictions and world knowledge. These results suggest that selectional restrictions are not available earlier in comprehension than world knowledge. |
Ravi D. Mill; Akira R. O'Connor; Ian G. Dobbins Pupil dilation during recognition memory: Isolating unexpected recognition from judgment uncertainty Journal Article In: Cognition, vol. 154, pp. 81–94, 2016. @article{Mill2016, Optimally discriminating familiar from novel stimuli demands a decision-making process informed by prior expectations. Here we demonstrate that pupillary dilation (PD) responses during recognition memory decisions are modulated by expectations, and more specifically, that pupil dilation increases for unexpected compared to expected recognition. Furthermore, multi-level modeling demonstrated that the time course of the dilation during each individual trial contains separable early and late dilation components, with the early amplitude capturing unexpected recognition, and the later trailing slope reflecting general judgment uncertainty or effort. This is the first demonstration that the early dilation response during recognition is dependent upon observer expectations and that separate recognition expectation and judgment uncertainty components are present in the dilation time course of every trial. The findings provide novel insights into adaptive memory-linked orienting mechanisms as well as the general cognitive underpinnings of the pupillary index of autonomic nervous system activity. |
Mark Mills; Olivia Wieda; Scott F. Stoltenberg; Michael D. Dodd Emotion moderates the association between HTR2A (rs6313) genotype and antisaccade latency Journal Article In: Experimental Brain Research, vol. 234, no. 9, pp. 2653–2665, 2016. @article{Mills2016, The serotonin system is heavily involved in cognitive and emotional control processes. Previous work has typically investigated this system's role in control processes separately for cognitive and emotional domains, yet it has become clear the two are linked. The present study, therefore, examined whether variation in a serotonin receptor gene (HTR2A, rs6313) moderated effects of emotion on inhibitory control. An emotional antisaccade task was used in which participants looked toward (prosaccade) or away (antisaccade) from a target presented to the left or right of a happy, angry, or neutral face. Overall, antisaccade latencies were slower for rs6313 C allele homozygotes than T allele carriers, with no effect of genotype on prosaccade latencies. Thus, C allele homozygotes showed relatively weak inhibitory control but intact reflexive control. Importantly, the emotional stimulus was either present during target presentation (overlap trials) or absent (gap trials). The gap effect (slowed latency in overlap versus gap trials) in antisaccade trials was larger with angry versus neutral faces in C allele homozygotes. This impairing effect of negative valence on inhibitory control was larger in C allele homozygotes than T allele carriers, suggesting that angry faces disrupted/competed with the control processes needed to generate an antisaccade to a greater degree in these individuals. The genotype difference in the negative valence effect on antisaccade latency was attenuated when trial N-1 was an antisaccade, indicating top-down regulation of emotional influence. This effect was reduced in C/C versus T/_ individuals, suggesting a weaker capacity to downregulate emotional processing of task-irrelevant stimuli. |
Wendy Ming; Dimitrios J. Palidis; Miriam Spering; Martin J. McKeown Visual contrast sensitivity in early-stage Parkinson's disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 13, pp. 5696–5704, 2016. @article{Ming2016, Purpose: Visual impairments are frequent in Parkinson's disease (PD) and impact normal functioning in daily activities. Visual contrast sensitivity is a powerful nonmotor sign for discriminating PD patients from controls. However, it is usually assessed with static visual stimuli. Here we examined the interaction between perception and eye movements in static and dynamic contrast sensitivity tasks in a cohort of mildly impaired, early-stage PD patients. Methods: Patients (n = 13) and healthy age-matched controls (n = 12) viewed stimuli of various spatial frequencies (0-8 cyc/deg) and speeds (0°/s, 10°/s, 30°/s) on a computer monitor. Detection thresholds were determined by asking participants to adjust luminance contrast until they could just barely see the stimulus. Eye position was recorded with a video-based eye tracker. Results: Patients' static contrast sensitivity was impaired in the intermediate spatial-frequency range and this impairment correlated with fixational instability. However, dynamic contrast sensitivity and patients' smooth pursuit were relatively normal. An independent component analysis revealed contrast sensitivity profiles differentiating patients and controls. Conclusions: Our study simultaneously assesses perceptual contrast sensitivity and eye movements in PD, revealing a possible link between fixational instability and perceptual deficits. Spatiotemporal contrast sensitivity profiles may represent an easily measurable metric as a component of a broader combined biometric for nonmotor features observed in PD. |
Alexandra S. Mueller; Esther G. González; Chris McNorgan; Martin J. Steinbach; Brian Timney Effects of vertical direction and aperture size on the perception of visual acceleration Journal Article In: Perception, vol. 45, no. 6, pp. 670–683, 2016. @article{Mueller2016a, It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction and vertical extent over which an observer can track the moving stimulus. |
Stefanie Mueller; Katja Fiehler Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement Journal Article In: Neuropsychologia, vol. 87, pp. 63–73, 2016. @article{Mueller2016, Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed two conditions in which the target hand remained either stationary at the target location (stationary condition) or was actively moved to the target location, received a touch, and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition, while body- and gaze-centered coding contributed equally strongly in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets. |
Kinan Muhammed; Sanjay G. Manohar; Michael Ben Yehuda; Trevor T. J. Chong; George Tofaris; Graham Lennox; Marko Bogdanovic; Michele Hu; Masud Husain Reward sensitivity deficits modulated by dopamine are associated with apathy in Parkinson's disease Journal Article In: Brain, vol. 139, no. 10, pp. 2706–2721, 2016. @article{Muhammed2016, Apathy is a debilitating and under-recognized condition that has a significant impact in many neurodegenerative disorders. In Parkinson's disease, it is now known to contribute to worse outcomes and a reduced quality of life for patients and carers, adding to health costs and extending disease burden. However, despite its clinical importance, there remains limited understanding of mechanisms underlying apathy. Here we investigated if insensitivity to reward might be a contributory factor and examined how this relates to severity of clinical symptoms. To do this we created novel ocular measures that indexed motivation level using pupillary and saccadic response to monetary incentives, allowing reward sensitivity to be evaluated objectively. This approach was tested in 40 patients with Parkinson's disease, 31 elderly age-matched control participants and 20 young healthy volunteers. Thirty patients were examined ON and OFF their dopaminergic medication in two counterbalanced sessions, so that the effect of dopamine on reward sensitivity could be assessed. Pupillary dilation to increasing levels of monetary reward on offer provided quantifiable metrics of motivation in healthy subjects as well as patients. Moreover, pupillary reward sensitivity declined with age. In Parkinson's disease, reduced pupillary modulation by incentives was predictive of apathy severity, and independent of motor impairment and autonomic dysfunction as assessed using overnight heart rate variability measures. Reward sensitivity was further modulated by dopaminergic state, with blunted sensitivity when patients were OFF dopaminergic drugs, both in pupillary response and saccadic peak velocity response to reward. These findings suggest that reward insensitivity may be a contributory mechanism to apathy and provide potential new clinical measures for improved diagnosis and monitoring of apathy. |
Manon Mulckhuyse; Edwin S. Dalmaijer Distracted by danger: Temporal and spatial dynamics of visual selection in the presence of threat Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 16, no. 2, pp. 315–324, 2016. @article{Mulckhuyse2016, Threatening stimuli are known to influence attentional and visual processes in order to prioritize selection. For example, previous research showed faster detection of threatening relative to nonthreatening stimuli. This has led to the proposal that threatening stimuli are prioritized automatically via a rapid subcortical route. However, in most studies, the threatening stimulus is always to some extent task relevant. Therefore, it is still unclear if threatening stimuli are automatically prioritized by the visual system. We used the additional singleton paradigm with task-irrelevant fear-conditioned distractors (CS+ and CS-) and indexed the time course of eye movement behavior. The results demonstrate automatic prioritization of threat. First, mean latency of saccades directed to the neutral target was increased in the presence of a threatening (CS+) relative to a nonthreatening distractor (CS-), indicating exogenous attentional capture and delayed disengagement of covert attention. Second, more error saccades were directed to the threatening than to the nonthreatening distractor, indicating a modulation of automatically driven saccades. Nevertheless, cumulative distributions of the saccade latencies showed no modulation of threat for the fastest goal-driven saccades, and threat did not affect the latency of the error saccades to the distractors. Together these results suggest that threatening stimuli are automatically prioritized in attentional and visual selection but not via faster processing. Rather, we suggest that prioritization results from an enhanced representation of the threatening stimulus in the oculomotor system, which drives attentional and visual selection. The current findings are interpreted in terms of a neurobiological model of saccade programming. |
Iris Mulders; Kriszta Szendroi Early association of prosodic focus with alleen 'only': Evidence from eye movements in the visual-world paradigm Journal Article In: Frontiers in Psychology, vol. 7, pp. 150, 2016. @article{Mulders2016, In three visual-world eye tracking studies, we investigated the processing of sentences containing the focus-sensitive operator alleen 'only' and different pitch accents, such as the Dutch Ik heb alleen SELDERIJ aan de brandweerman gegeven 'I only gave CELERY to the fireman' versus Ik heb alleen selderij aan de BRANDWEERMAN gegeven 'I only gave celery to the FIREMAN'. Dutch, like English, allows accent shift to express different focus possibilities. Participants judged whether these utterances match different pictures: in Experiment 1 the Early Stress utterance matched the picture, in Experiment 2 both the Early and Late Stress utterance did, and in Experiment 3 neither did. We found that eye-gaze patterns already start to diverge across the conditions as the indirect object is being heard. Our data also indicate that participants perform anticipatory eye movements based on the presence of prosodic focus during auditory sentence processing. Our investigation is the first to report the effect of varied prosodic accent placement on different arguments in sentences with a semantic operator, alleen 'only', on the time course of looks in the visual world paradigm. Using an operator in the visual world paradigm allowed us to confirm that prosodic focus information immediately gets integrated into the semantic parse of the proposition. Our study thus provides further evidence for fast, incremental prosodic focus processing in natural language. |
Jana Annina Müller; Dorothea Wendt; Birger Kollmeier; Thomas Brand Comparing eye tracking with electrooculography for measuring individual sentence comprehension duration Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0164627, 2016. @article{Mueller2016b, The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set with a different group of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. |
Aidan P. Murphy; David A. Leopold; Glyn W. Humphreys; Andrew E. Welchman Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 371, pp. 1–17, 2016. @article{Murphy2016, The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. |
Peter R. Murphy; Evert Boonstra; Sander Nieuwenhuis Global gain modulation generates time-dependent urgency during perceptual choice in humans Journal Article In: Nature Communications, vol. 7, pp. 13526, 2016. @article{Murphy2016a, Decision-makers must often balance the desire to accumulate information with the costs of protracted deliberation. Optimal, reward-maximizing decision-making can require dynamic adjustment of this speed/accuracy trade-off over the course of a single decision. However, it is unclear whether humans are capable of such time-dependent adjustments. Here, we identify several signatures of time-dependency in human perceptual decision-making and highlight their possible neural source. Behavioural and model-based analyses reveal that subjects respond to deadline-induced speed pressure by lowering their criterion on accumulated perceptual evidence as the deadline approaches. In the brain, this effect is reflected in evidence-independent urgency that pushes decision-related motor preparation signals closer to a fixed threshold. Moreover, we show that global modulation of neural gain, as indexed by task-related fluctuations in pupil diameter, is a plausible biophysical mechanism for the generation of this urgency. These findings establish context-sensitive time-dependency as a critical feature of human decision-making. |
Andriy Myachykov; Rob Ellis; Angelo Cangelosi; Martin H. Fischer Ocular drift along the mental number line Journal Article In: Psychological Research, vol. 80, no. 3, pp. 379–388, 2016. @article{Myachykov2016, We examined the spontaneous association between numbers and space by documenting attention deployment and the time course of associated spatial-numerical mapping with and without overt oculomotor responses. In Experiment 1, participants maintained central fixation while listening to number names. In Experiment 2, they made horizontal target-directed saccades following auditory number presentation. In both experiments, we continuously measured spontaneous ocular drift in horizontal space during and after number presentation. Experiment 2 also measured visual-probe-directed saccades following number presentation. Reliable ocular drift congruent with a horizontal mental number line emerged during and after number presentation in both experiments. Our results provide new evidence for the implicit and automatic nature of the oculomotor resonance effect associated with the horizontal spatial-numerical mapping mechanism. |
Malik M. Naeem Mannan; Shinjung Kim; Myung Yung Jeong; M. Ahmad Kamran Hybrid EEG—Eye tracker: Automatic identification and removal of eye movement and blink artifacts from electroencephalographic signal Journal Article In: Sensors, vol. 16, pp. 241, 2016. @article{NaeemMannan2016, Contamination of eye movement and blink artifacts in Electroencephalogram (EEG) recording makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal activity related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data. |
Irene Ablinger; Ralph Radach Diverging receptive and expressive word processing mechanisms in a deep dyslexic reader Journal Article In: Neuropsychologia, vol. 81, pp. 12–21, 2016. @article{Ablinger2016, We report on KJ, a patient with acquired dyslexia due to cerebral artery infarction. He represents an unusually clear case of an "output" deep dyslexic reader, with a distinct pattern of pure semantic reading. According to current neuropsychological models of reading, the severity of this condition is directly related to the degree of impairment in semantic and phonological representations and the resulting imbalance in the interaction between the two word processing pathways. The present work sought to examine whether an innovative eye movement supported intervention combining lexical and segmental therapy would strengthen phonological processing and lead to an attenuation of the extreme semantic over-involvement in KJ's word identification process. Reading performance was assessed before (T1), between (T2), and after (T3) therapy using both analyses of linguistic errors and word viewing patterns. Therapy resulted in improved reading aloud accuracy along with a change in error distribution that suggested a return to more sequential reading. Interestingly, this was in contrast to the dynamics of moment-to-moment word processing, as eye movement analyses still suggested a predominantly holistic strategy, even at T3. So, in addition to documenting the success of the therapeutic intervention, our results call for a theoretically important conclusion: Real-time letter and word recognition routines should be considered separately from properties of the verbal output. Combining both perspectives may provide a promising strategy for future assessment and therapy evaluation. |
Arman Abrahamyan; Laura Luz Silva; Steven C. Dakin; Matteo Carandini; Justin L. Gardner Adaptable history biases in human perceptual decisions Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 25, pp. E3548–E3557, 2016. @article{Abrahamyan2016, When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics. |
Leah Acker; Erica N. Pino; Edward S. Boyden; Robert Desimone FEF inactivation with improved optogenetic methods Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 46, pp. E7297–E7306, 2016. @article{Acker2016, Optogenetic methods have been highly effective for suppressing neural activity and modulating behavior in rodents, but effects have been much smaller in primates, which have much larger brains. Here, we present a suite of technologies to use optogenetics effectively in primates and apply these tools to a classic question in oculomotor control. First, we measured light absorption and heat propagation in vivo, optimized the conditions for using the red-light-shifted halorhodopsin Jaws in primates, and developed a large-volume illuminator to maximize light delivery with minimal heating and tissue displacement. Together, these advances allowed for nearly universal neuronal inactivation across more than 10 mm³ of the cortex. Using these tools, we demonstrated large behavioral changes (i.e., up to several fold increases in error rate) with relatively low light power densities (≤100 mW/mm²) in the frontal eye field (FEF). Pharmacological inactivation studies have shown that the FEF is critical for executing saccades to remembered locations. FEF neurons increase their firing rate during the three epochs of the memory-guided saccade task: visual stimulus presentation, the delay interval, and motor preparation. It is unclear from earlier work, however, whether FEF activity during each epoch is necessary for memory-guided saccade execution. By harnessing the temporal specificity of optogenetics, we found that FEF contributes to memory-guided eye movements during every epoch of the memory-guided saccade task (the visual, delay, and motor periods). |
Hamed Zivari Adab; Rufin Vogels Perturbation of posterior inferior temporal cortical activity impairs coarse orientation discrimination Journal Article In: Cerebral Cortex, vol. 26, no. 9, pp. 3814–3827, 2016. @article{Adab2016, It is reasonable to assume that the discrimination of simple visual stimuli depends on the activity of early visual cortical neurons, because simple visual features are supposedly coded in these areas whereas more complex features are coded in late visual areas. Recently, we showed that training monkeys in a coarse orientation discrimination task modified the response properties of single neurons in the posterior inferior temporal (PIT) cortex, a late visual area. Here, we examined the contribution of PIT to coarse orientation discrimination using causal perturbation methods. Electrical stimulation (ES) of PIT with currents of at least 100 µA impaired coarse orientation discrimination in monkeys. The performance deterioration did not exclusively reflect a general impairment to perform a difficult perceptual task. However, high-current (650 µA) but not low-current (100 µA) ES also impaired fine color discrimination. ES of temporal regions dorsal or anterior to PIT produced less impairment of coarse orientation discrimination than ES of PIT. Injections of the GABA agonist muscimol into PIT also impaired performance. These data suggest that the late cortical area PIT is part of the network that supports coarse orientation discrimination of a simple grating stimulus, at least after extensive training in this task at threshold. |
Rick A. Adams; Markus Bauer; Dimitris Pinotsis; Karl J. Friston Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG Journal Article In: Neuroimage, vol. 132, pp. 175–189, 2016. @article{Adams2016, This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision – inferred by our behavioural DCM – correlate with the increase in gain in V1, across subjects. This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia. |
Zaeinab Afsari; José P. Ossandón; Peter Konig The dynamic effect of reading direction habit on spatial asymmetry of image perception Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–21, 2016. @article{Afsari2016, Exploration of images after stimulus onset is initially biased to the left. Here, we studied the causes of such an asymmetry and investigated effects of reading habits, text primes, and priming by systematically biased eye movements on this spatial bias in visual exploration. Bilinguals first read text primes with right-to-left (RTL) or left-to-right (LTR) reading directions and subsequently explored natural images. In Experiment 1, native RTL speakers showed a leftward free-viewing shift after reading LTR primes but a weaker rightward bias after reading RTL primes. This demonstrates that reading direction dynamically influences the spatial bias. However, native LTR speakers who learned an RTL language late in life showed a leftward bias after reading either LTR or RTL primes, which suggests the role of habit formation in the production of the spatial bias. In Experiment 2, LTR bilinguals showed a slightly enhanced leftward bias after reading LTR text primes in their second language. This might contribute to the differences of native RTL and LTR speakers observed in Experiment 1. In Experiment 3, LTR bilinguals read normal (LTR, habitual reading) and mirrored left-to-right (mLTR, nonhabitual reading) texts. We observed a strong leftward bias in both cases, indicating that the bias direction is influenced by habitual reading direction and is not secondary to the actual reading direction. This is confirmed in Experiment 4, in which LTR participants were asked to follow RTL and LTR moving dots in prior image presentation and showed no change in the normal spatial bias. In conclusion, the horizontal bias is a dynamic property and is modulated by habitual reading direction. |
Mehmet N. Ağaoğlu; Susana T. L. Chung Can (should) theories of crowding be unified? Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–22, 2016. @article{Agaoglu2016, Objects in clutter are difficult to recognize, a phenomenon known as crowding. There is little consensus on the underlying mechanisms of crowding, and a large number of models have been proposed. There have also been attempts at unifying the explanations of crowding under a single model, such as the weighted feature model of Harrison and Bex (2015) and the texture synthesis model of Rosenholtz and colleagues (Balas, Nakano, & Rosenholtz, 2009; Keshvari & Rosenholtz, 2016). The goal of this work was to test various models of crowding and to assess whether a unifying account can be developed. Adopting Harrison and Bex's (2015) experimental paradigm, we asked observers to report the orientation of two concentric C-stimuli. Contrary to the predictions of their model, observers' recognition accuracy was worse for the inner C-stimulus. In addition, we demonstrated that the stimulus paradigm used by Harrison and Bex has a crucial confounding factor, eccentricity, which limits its usage to a very narrow range of stimulus parameters. Nevertheless, reporting the orientations of both C-stimuli in this paradigm proved very useful in pitting different crowding models against each other. Specifically, we tested deterministic and probabilistic versions of averaging, substitution, and attentional resolution models as well as the texture synthesis model. None of the models alone was able to explain the entire set of data. Based on these findings, we discuss whether the explanations of crowding can (should) be unified. |
Mehmet N. Ağaoğlu; Aaron M. Clarke; Michael H. Herzog; Haluk Ögmen Motion-based nearest vector metric for reference frame selection in the perception of motion Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–16, 2016. @article{Agaoglu2016a, We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects. |
Mehmet N. Ağaoğlu; Haluk Öğmen; Susana T. L. Chung Unmasking saccadic uncrowding Journal Article In: Vision Research, vol. 127, pp. 152–164, 2016. @article{Agaoglu2016b, Stimuli that are briefly presented around the time of saccades are often perceived with spatiotemporal distortions. These distortions do not always have deleterious effects on the visibility and identification of a stimulus. Recent studies reported that when a stimulus is the target of an intended saccade, it is released from both masking and crowding. Here, we investigated pre-saccadic changes in single and crowded letter recognition performance in the absence (Experiment 1) and the presence (Experiment 2) of backward masks to determine the extent to which saccadic “uncrowding” and “unmasking” mechanisms are similar. Our results show that pre-saccadic improvements in letter recognition performance are mostly due to the presence of masks and/or stimulus transients which occur after the target is presented. More importantly, we did not find any decrease in crowding strength before impending saccades. A simplified version of a dual-channel neural model, originally proposed to explain masking phenomena, with several saccadic add-on mechanisms, could account for our results in Experiment 1. However, this model falls short in explaining how saccades drastically reduced the effect of backward masking (Experiment 2). The addition of a remapping mechanism that alters the relative spatial positions of stimuli was needed to fully account for the improvements observed when backward masks followed the letter stimuli. Taken together, our results (i) are inconsistent with saccadic uncrowding, (ii) strongly support saccadic unmasking, and (iii) suggest that pre-saccadic letter recognition is modulated by multiple perisaccadic mechanisms with different time courses. |
Jordi Aguila; Javier Cudeiro; Casto Rivadulla Effects of static magnetic fields on the visual cortex: Reversible visual deficits and reduction of neuronal activity Journal Article In: Cerebral Cortex, vol. 26, no. 2, pp. 628–638, 2016. @article{Aguila2016, Noninvasive brain stimulation techniques have been successfully used to modulate brain activity, have become a highly useful tool in basic and clinical research and, recently, have attracted increased attention due to their putative use as a method for neuro-enhancement. In this scenario, transcranial static magnetic stimulation (SMS) of moderate strength might represent an affordable, simple, and complementary method to other procedures, such as Transcranial Magnetic Stimulation or direct current stimulation, but its mechanisms and effects are not thoroughly understood. In this study, we show that static magnetic fields applied to visual cortex of awake primates cause reversible deficits in a visual detection task. Complementary experiments in anesthetized cats show that the visual deficits are a consequence of a strong reduction in neural activity. These results demonstrate that SMS is able to effectively modulate neuronal activity and could be considered a tool for purposes ranging from experimental studies to clinical applications. |
Başak Akdoğan; Fuat Balcı; Hedderik Rijn Temporal expectation indexed by pupillary response Journal Article In: Timing & Time Perception, vol. 4, no. 4, pp. 354–370, 2016. @article{Akdogan2016, Forming temporal expectations plays an instrumental role for the optimization of behavior and allocation of attentional resources. Although the effects of temporal expectations on visual attention are well-established, the question of whether temporal predictions modulate the behavioral outputs of the autonomic nervous system such as the pupillary response remains unanswered. Therefore, this study aimed to obtain an online measure of pupil size while human participants were asked to differentiate between visual targets presented after varying time intervals since trial onset. Specifically, we manipulated temporal predictability in the presentation of target stimuli consisting of letters which appeared after either a short or long delay duration (1.5 vs. 3 s) in the majority of trials (75%) within different test blocks. In the remaining trials (25%), no target stimulus was present to investigate the trajectory of preparatory pupillary response under a low level of temporal uncertainty. The results revealed that the rate of preparatory pupillary response was contingent upon the time of target appearance such that pupils dilated at a higher rate when the targets were expected to appear after a shorter as compared to a longer delay period irrespective of target presence. The finding that pupil size can track temporal regularities and exhibit differential preparatory response between different delay conditions points to the existence of a distributed neural network subserving temporal information processing which is crucial for cognitive functioning and goal-directed behavior. |
Nadia Alahyane; Christelle Lemoine-Lardennois; Coline Tailhefer; Thérèse Collins; Jacqueline Fagard; Karine Doré-Mazars Development and learning of saccadic eye movements in 7- to 42-month-old children Journal Article In: Journal of Vision, vol. 16, no. 1, pp. 1–12, 2016. @article{Alahyane2016, From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance. |
Andrea Alamia; Alexandre Zénon Statistical regularities attract attention when task-relevant Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 42, 2016. @article{Alamia2016, Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e. the effect of color predictability on reaction times (RT), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the 2 colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. 
In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task. |
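As a rough illustration of the attention index described in the Alamia and Zénon abstract above (not the authors' implementation): gaze is assigned to whichever moving dot whose kinematics lie closest to the eye's in Mahalanobis distance. All variable names and the covariance input are illustrative assumptions.

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between two 1-D feature vectors given a covariance."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def attended_dot(eye_feat, dot_feats, cov):
    """Return the index of the dot whose kinematics are closest to the eye's.

    eye_feat:  feature vector for the eye (e.g., position, velocity, acceleration).
    dot_feats: list of feature vectors, one per moving dot.
    cov:       feature covariance estimated from the recording (illustrative).
    """
    dists = [mahalanobis(eye_feat, f, cov) for f in dot_feats]
    return int(np.argmin(dists))
```

Comparing these distances sample by sample yields a continuous estimate of which stimulus is being covertly or overtly tracked.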
Andrey R. Nikolaev; Radha Nila Meghanathan; Cees Leeuwen Combining EEG and eye movement recording in free viewing: Pitfalls and possibilities Journal Article In: Brain and Cognition, vol. 107, pp. 55–83, 2016. @article{Nikolaev2016, Co-registration of EEG and eye movement has promise for investigating perceptual processes in free viewing conditions, provided certain methodological challenges can be addressed. Most of these arise from the self-paced character of eye movements in free viewing conditions. Successive eye movements occur within short time intervals. Their evoked activity is likely to distort the EEG signal during fixation. Due to the non-uniform distribution of fixation durations, these distortions are systematic, survive across-trials averaging, and can become a source of confounding. We illustrate this problem with effects of sequential eye movements on the evoked potentials and time-frequency components of EEG and propose a solution based on matching of eye movement characteristics between experimental conditions. The proposal leads to a discussion of which eye movement characteristics are to be matched, depending on the EEG activity of interest. We also compare segmentation of EEG into saccade-related epochs relative to saccade and fixation onsets and discuss the problem of baseline selection and its solution. Further recommendations are given for implementing EEG-eye movement co-registration in free viewing conditions. By resolving some of the methodological problems involved, we aim to facilitate the transition from the traditional stimulus-response paradigm to the study of visual perception in more naturalistic conditions. |
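One concrete way to realize the condition matching that Nikolaev et al. propose is to pair fixations across conditions by duration and discard unmatched ones before averaging. The greedy pairing below is a minimal sketch of that idea, not the authors' procedure; the tolerance value is an assumption.

```python
def match_durations(cond_a, cond_b, tolerance=20.0):
    """Greedily pair fixations across two conditions by duration (ms).

    Returns index pairs (i, j) with |cond_a[i] - cond_b[j]| <= tolerance;
    each fixation is used at most once. Unmatched fixations would be
    dropped before epoch averaging. The tolerance is illustrative.
    """
    pairs = []
    used_b = set()
    # Pair the longest fixations first: they are the hardest to match.
    for i in sorted(range(len(cond_a)), key=lambda k: -cond_a[k]):
        best_j, best_diff = None, tolerance
        for j, dur_b in enumerate(cond_b):
            if j in used_b:
                continue
            diff = abs(cond_a[i] - dur_b)
            if diff <= best_diff:
                best_j, best_diff = j, diff
        if best_j is not None:
            used_b.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

After matching, the fixation-duration distributions of the two conditions are comparable, so duration-linked distortions of the fixation-related EEG cancel in the contrast.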
Jessie S. Nixon; Jacolien Rij; Peggy Mok; R. Harald Baayen; Yiya Chen The temporal dynamics of perceptual uncertainty: Eye movement evidence from Cantonese segment and tone perception Journal Article In: Journal of Memory and Language, vol. 90, pp. 103–125, 2016. @article{Nixon2016, Two visual world eyetracking experiments investigated how acoustic cue value and statistical variance affect perceptual uncertainty during Cantonese consonant (Experiment 1) and tone perception (Experiment 2). Participants heard low- or high-variance acoustic stimuli. Euclidean distance of fixations from target and competitor pictures over time was analysed using Generalised Additive Mixed Modelling. Distance of fixations from target and competitor pictures varied as a function of acoustic cue, providing evidence for gradient, nonlinear sensitivity to cue values. Moreover, cue value effects significantly interacted with statistical variance, indicating that the cue distribution directly affects perceptual uncertainty. Interestingly, the time course of effects differed between target distance and competitor distance models. The pattern of effects over time suggests a global strategy in response to the level of uncertainty: as uncertainty increases, verification looks increase accordingly. Low variance generally creates less uncertainty, but can lead to greater uncertainty in the face of unexpected speech tokens. |
Anna Nowakowska; Alasdair D. F. Clarke; Arash Sahraie; Amelia R. Hunt Inefficient search strategies in simulated hemianopia Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 11, pp. 1858–1872, 2016. @article{Nowakowska2016, We investigated whether healthy participants can spontaneously adopt effective eye movement strategies to compensate for information loss similar to that experienced by patients with damage to visual cortex (hemianopia). Visual information in 1 hemifield was removed or degraded while participants searched for an emotional face among neutral faces or a line tilted 45° to the right among lines of varying degree of tilt. A bias to direct saccades toward the sighted field was observed across all 4 experiments. The proportion of saccades directed toward the "blind" field increased with the amount of information available in that field, suggesting fixations are driven toward salient visual stimuli rather than toward locations that maximize information gain. In Experiments 1 and 2, the sighted-field bias had a minimal impact on search efficiency, because the target was difficult to find. However, the sighted-field bias persisted even when the target was visually distinct from the distractors and could easily be detected in the periphery (Experiments 3 and 4). This surprisingly inefficient search behavior suggests that eye movements are biased to salient visual stimuli even when it comes at a clear cost to search efficiency, and efficient strategies to compensate for visual deficits are not spontaneously adopted by healthy participants. |
Nazbanou Nozari; Daniel Mirman; Sharon L. Thompson-Schill The ventrolateral prefrontal cortex facilitates processing of sentential context to locate referents Journal Article In: Brain and Language, vol. 157-158, pp. 1–13, 2016. @article{Nozari2016, Left ventrolateral prefrontal cortex (VLPFC) has been implicated in both integration and conflict resolution in sentence comprehension. Most evidence in favor of the integration account comes from processing ambiguous or anomalous sentences, which also poses a demand for conflict resolution. In two eye-tracking experiments we studied the role of VLPFC in integration when demands for conflict resolution were minimal. Two closely-matched groups of individuals with chronic post-stroke aphasia were tested: the Anterior group had damage to left VLPFC, whereas the Posterior group had left temporo-parietal damage. In Experiment 1 a semantic cue (e.g., "She will eat the apple") uniquely marked the target (apple) among three distractors that were incompatible with the verb. In Experiment 2 phonological cues (e.g., "She will see an eagle."/"She will see a bear.") uniquely marked the target among three distractors whose onsets were incompatible with the cue (e.g., all consonants when the target started with a vowel). In both experiments, control conditions had a similar format, but contained no semantic or phonological contextual information useful for target integration (e.g., the verb "see", and the determiner "the"). All individuals in the Anterior group were slower in using both types of contextual information to locate the target than were individuals in the Posterior group. These results suggest a role for VLPFC in integration beyond conflict resolution. We discuss a framework that accommodates both integration and conflict resolution. |
Nazbanou Nozari; John C. Trueswell; Sharon L. Thompson-Schill In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1942–1953, 2016. @article{Nozari2016a, During sentence comprehension, real-time identification of a referent is driven both by local, context-independent lexical information and by more global sentential information related to the meaning of the utterance as a whole. This paper investigates the cognitive factors that limit the consideration of referents that are supported by local lexical information but not supported by more global sentential information. In an eye-tracking paradigm, participants heard sentences like "She will eat the red pear" while viewing four black-and-white (colorless) line-drawings. In the experimental condition, the display contained a "local attractor" (e.g., a heart), which was locally compatible with the adjective but incompatible with the context ("eat"). In the control condition, the local attractor was replaced by a picture which was incompatible with the adjective (e.g., "igloo"). A second factor manipulated contextual constraint, by using either a constraining verb (e.g., "eat"), or a non-constraining one (e.g., "see"). Results showed consideration of the local attractor, the magnitude of which was modulated by verb constraint, but also by each subject's cognitive control abilities, as measured in a separate Flanker task run on the same subjects. The findings are compatible with a processing model in which the interplay between local attraction, context, and domain-general control mechanisms determines the consideration of possible referents. |
Antje Nuthmann; George L. Malcolm Eye guidance during real-world scene search: The role color plays in central and peripheral vision Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–16, 2016. @article{Nuthmann2016, How does the availability of color across the visual field facilitate gaze during real-world search? To answer this question, the presence of color in central or peripheral vision was manipulated using a 5° gaze-contingent window that followed participants' gaze. Accordingly, scenes were presented in full color (C), grey in central vision and colored in peripheral vision (G-C), colored in central vision and grey in peripheral vision (C-G), and in grey (G). The color conditions were crossed with a manipulation of the search cue: the search object was cued either with a word label or a picture of the target. Across color conditions, search was faster during target-template-guided search. Search time costs were observed in the C-G and G conditions, highlighting the importance of color in peripheral vision. In addition, a gaze-data based decomposition of search time revealed color-mediated effects on specific sub-processes of search. When color was not available in peripheral vision, it took longer to initiate search, and to locate the search object in the scene. When color was not available in central vision, however, the process of verifying the identity of the target was prolonged. In conclusion, color information in peripheral vision facilitates saccade target selection. |
Antje Nuthmann; Françoise Vitu; Ralf Engbert; Reinhold Kliegl No evidence for a saccadic range effect for visually guided and memory-guided saccades in simple saccade-targeting tasks Journal Article In: PLoS ONE, vol. 11, no. 9, pp. e0162449, 2016. @article{Nuthmann2016a, Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. |
Marcus Nyström; Dan Witzner Hansen; Richard Andersson; Ignace T. C. Hooge Why have microsaccades become larger? Investigating eye deformations and detection algorithms Journal Article In: Vision Research, vol. 118, pp. 17–24, 2016. @article{Nystroem2016, The reported size of microsaccades is considerably larger today compared to the initial era of microsaccade studies during the 1950s and 1960s. We investigate whether this increase in size is related to the fact that the eye-trackers of today measure different ocular structures than the older techniques, and that the movements of these structures may differ during a microsaccade. In addition, we explore the impact such differences have on subsequent analyzes of the eye-tracker signals. In Experiment I, the movement of the pupil as well as the first and fourth Purkinje reflections were extracted from series of eye images recorded during a fixation task. Results show that the different ocular structures produce different microsaccade signatures. In Experiment II, we found that microsaccade amplitudes computed with a common detection algorithm were larger compared to those reported by two human experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason to why the reported size of microsaccades has increased is due to the larger overshoots produced by the modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades. |
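The "common detection algorithm" discussed by Nyström et al. is a velocity-threshold method in the style of Engbert and Kliegl, which flags samples whose 2-D velocity leaves an ellipse scaled from a median-based noise estimate. The sketch below is a simplified illustration, not the exact algorithm from the paper; the threshold multiplier, minimum duration, and two-point velocity estimate are assumptions, and (as the abstract notes) such detectors include any overshoot in the reported amplitude.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style sketch).

    x, y: gaze position traces in degrees; fs: sampling rate in Hz.
    lam:  threshold multiplier on a median-based velocity-spread estimate.
    Returns a list of (onset, offset) sample indices.
    """
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    # Robust (median-based) estimate of velocity spread per axis.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    # Elliptic test: samples inside the ellipse count as fixational noise.
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(crit) - start >= min_samples:
        events.append((start, len(crit) - 1))
    return events
```

Because the offset is set where velocity re-enters the ellipse, any pupil-based overshoot lengthens the detected event and inflates its amplitude, which is the discrepancy the paper analyzes.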
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Look into my eyes: Investigating joint attention using interactive eye-tracking and fMRI in a developmental sample Journal Article In: NeuroImage, vol. 130, pp. 248–260, 2016. @article{Oberwelland2016, Joint attention, the shared attentional focus of at least two people on a third significant object, is one of the earliest steps in social development and an essential aspect of reciprocal interaction. However, the neural basis of joint attention (JA) in the course of development is completely unknown. The present study made use of an interactive eye-tracking paradigm in order to examine the developmental trajectories of JA and the influence of a familiar interaction partner during the social encounter. Our results show that across children and adolescents JA elicits a similar network of "social brain" areas as well as attention and motor control associated areas as in adults. While other-initiated JA particularly recruited visual, attention and social processing areas, self-initiated JA specifically activated areas related to social cognition, decision-making, emotions and motivational/reward processes highlighting the rewarding character of self-initiated JA. Activation was further enhanced during self-initiated JA with a familiar interaction partner. With respect to developmental effects, activation of the precuneus declined from childhood to adolescence and additionally shifted from a general involvement in JA towards a more specific involvement for self-initiated JA. Similarly, the temporoparietal junction (TPJ) was broadly involved in JA in children and more specialized for self-initiated JA in adolescents. 
Taken together, this study provides first-time data on the developmental trajectories of JA and the effect of a familiar interaction partner incorporating the interactive character of JA, its reciprocity and motivational aspects. |
Emily R. Oby; Sagi Perel; Patrick T. Sadtler; Douglas A. Ruff; Jessica L. Mischel; David F. Montez; Marlene R. Cohen; Aaron P. Batista; Steven M. Chase Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters Journal Article In: Journal of Neural Engineering, vol. 13, no. 3, pp. 1–15, 2016. @article{Oby2016, OBJECTIVE: A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). APPROACH: We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. MAIN RESULTS: The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. 
SIGNIFICANCE: How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue. |
Bartholomäus Odoj; Daniela Balslev Role of oculoproprioception in coding the locus of attention Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 3, pp. 517–528, 2016. @article{Odoj2016, The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention. |
Sven Ohl; Reinhold Kliegl Revealing the time course of signals influencing the generation of secondary saccades using Aalen's additive hazards model Journal Article In: Vision Research, vol. 124, pp. 52–58, 2016. @article{Ohl2016, Saccadic eye movements are frequently followed by smaller secondary saccades which are generally assumed to correct for the error in primary saccade landing position. However, secondary saccades can also occur after accurate primary saccades and they are often as small as microsaccades, therefore raising the need to further scrutinize the processes involved in secondary saccade generation. Following up a previous study, we analyzed secondary saccades using rate analysis which allows us to quantify experimental effects as shifts in distributions, therefore going beyond comparisons of mean differences. We use Aalen's additive hazards model to delineate the time course of key influences on the secondary saccade rate. In addition to the established effect of primary saccade error, we observed a time-varying influence of under- vs. overshooting - with a higher risk of generating secondary saccades following undershoots. Moreover, increasing target eccentricity influenced the programming of secondary saccades, therefore demonstrating that error-unrelated variables co-determine secondary saccade programs. Our results provide new insights into the generative mechanisms of small saccades during postsaccadic fixation that need to be accounted for by secondary saccade models. |
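Aalen's additive hazards model, which Ohl and Kliegl use to trace time-varying influences on secondary-saccade rate, estimates cumulative regression functions B(t) by accumulating a least-squares increment at each event time. The pure-numpy sketch below implements that classic estimator in minimal form (no penalization or variance estimates); it is an illustration of the model family, not the authors' analysis code.

```python
import numpy as np

def aalen_additive(times, events, X):
    """Aalen's additive hazards model: least-squares estimator of the
    cumulative regression functions B(t).

    times:  (n,) event/censoring times
    events: (n,) 1 = event occurred (e.g., secondary saccade), 0 = censored
    X:      (n, p) covariates (e.g., saccade error, target eccentricity)
    Returns (event_times, B) where B[k] holds the cumulative coefficients
    (intercept first) up to the k-th event time. Stops when the at-risk
    design matrix loses full rank, where the estimator is undefined.
    """
    n, p = X.shape
    design = np.hstack([np.ones((n, 1)), X])      # intercept + covariates
    cum = np.zeros(p + 1)
    event_times, B = [], []
    for idx in np.argsort(times):
        if not events[idx]:
            continue                              # censored: no increment
        at_risk = times >= times[idx]             # still under observation
        Xr = design[at_risk]
        if np.linalg.matrix_rank(Xr) < p + 1:
            break
        y = (np.flatnonzero(at_risk) == idx).astype(float)
        cum = cum + np.linalg.lstsq(Xr, y, rcond=None)[0]
        event_times.append(times[idx])
        B.append(cum.copy())
    return np.array(event_times), np.array(B)
```

Plotting each column of B against the event times shows when a covariate starts (or stops) influencing the hazard, which is the time-course information the paper exploits.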
Sven Ohl; Christian Wohltat; Reinhold Kliegl; Olga Pollatos; Ralf Engbert Microsaccades are coupled to heartbeat Journal Article In: Journal of Neuroscience, vol. 36, no. 4, pp. 1237–1241, 2016. @article{Ohl2016a, During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements; i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning. |
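The coupling reported by Ohl et al. rests on expressing each microsaccade onset as a relative phase within its enclosing R-R interval. A minimal sketch of that phase computation (function and argument names are illustrative):

```python
import numpy as np

def rr_phase(onsets, r_peaks):
    """Relative phase (0..1) of each event within its enclosing R-R interval.

    onsets:  event times, e.g., microsaccade onsets (s)
    r_peaks: sorted ECG R-peak times (s)
    Events before the first or after the last R peak are dropped.
    Phase 0 corresponds to an R peak; values near 0 are early cardiac phase.
    """
    phases = []
    for t in onsets:
        k = np.searchsorted(r_peaks, t, side='right') - 1
        if k < 0 or k >= len(r_peaks) - 1:
            continue                      # no enclosing R-R interval
        r0, r1 = r_peaks[k], r_peaks[k + 1]
        phases.append((t - r0) / (r1 - r0))
    return np.array(phases)
```

Binning these phases and testing for non-uniformity (e.g., an excess of onsets at early phases) is the kind of analysis the abstract describes.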
Lauri Oksama; Jukka Hyönä Position tracking and identity tracking are separate systems: Evidence from eye movements Journal Article In: Cognition, vol. 146, no. 393-409, pp. 1–16, 2016. @article{Oksama2016, How do we track multiple moving objects in our visual environment? Some investigators argue that tracking is based on a parallel mechanism (e.g., Cavanagh & Alvarez, 2005; Pylyshyn, 1989), others argue that tracking contains a serial component (e.g. Holcombe & Chen, 2013; Oksama & Hyönä, 2008). In the present study, we put previous theories into a direct test by registering observers' eye movements when they tracked identical moving targets (the MOT task) or when they tracked distinct object identities (the MIT task). The eye movement technique is a useful tool to study whether overt focal attention is exploited during tracking. We found a qualitative difference between these tasks in terms of eye movements. When the participants tracked only position information (MOT), the observers had a clear preference for keeping their eyes fixed for a rather long time on the same screen position. In contrast, active eye behavior was observed when the observers tracked the identities of moving objects (MIT). The participants updated over four target identities with overt attention shifts. These data suggest that there are two separate systems involved in multiple object tracking. The position tracking system keeps track of the positions of the moving targets in parallel without the need of overt attention shifts in the form of eye movements. On the other hand, the identity tracking system maintains identity-location bindings in a serial fashion by utilizing overt attention shifts. |
Henri Olkoniemi; Henri Ranta; Johanna K. Kaakinen Individual differences in the processing of written sarcasm and metaphor: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 3, pp. 433–450, 2016. @article{Olkoniemi2016, The present study examined individual differences in the processing of different forms of figurative language. Sixty participants read sarcastic, metaphorical, and literal sentences embedded in story contexts while their eye movements were recorded, and responded to a text memory and an inference question after each story. Individual differences in working memory capacity (WMC), need for cognition (NFC), and cognitive-affective processing were measured. The results showed that the processing of metaphors was characterized by slow-down during first-pass reading of the utterances, whereas sarcasm produced mainly delayed effects in the eye movement records. Sarcastic utterances were also harder to comprehend than literal or metaphorical utterances as indicated by poorer performance in responses to inference questions. Individual differences in general cognitive factors (WMC and NFC) were related to the processing of metaphors, whereas individual differences in both general cognitive factors (WMC) as well as processing of emotional information were related to the processing of sarcasm. The results indicate that different forms of figurative language pose different cognitive demands to the reader, and show that reader characteristics play a prominent role in figurative language comprehension. |
Rosanna K. Olsen; Vinoja Sebanayagam; Yunjo Lee; Morris Moscovitch; Cheryl L. Grady; R. Shayna Rosenbaum; Jennifer D. Ryan The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia Journal Article In: Cortex, vol. 85, pp. 182–193, 2016. @article{Olsen2016, There is consistent agreement regarding the positive relationship between cumulative eye movement sampling and subsequent recognition, but the role of the hippocampus in this sampling behavior is currently unknown. It is also unclear whether the eye movement repetition effect, i.e., fewer fixations to repeated, compared to novel, stimuli, depends on explicit recognition and/or an intact hippocampal system. We investigated the relationship between cumulative sampling, the eye movement repetition effect, subsequent memory, and the hippocampal system. Eye movements were monitored in a developmental amnesic case (H.C.), whose hippocampal system is compromised, and in a group of typically developing participants while they studied single faces across multiple blocks. The faces were studied from the same viewpoint or different viewpoints and were subsequently tested with the same or different viewpoint. Our previous work suggested that hippocampal representations support explicit recognition for information that changes viewpoint across repetitions (Olsen et al., 2015). Here, examination of eye movements during encoding indicated that greater cumulative sampling was associated with better memory among controls. Increased sampling, however, was not associated with better explicit memory in H.C., suggesting that increased sampling only improves memory when the hippocampal system is intact. The magnitude of the repetition effect was not correlated with cumulative sampling, nor was it related reliably to subsequent recognition. 
These findings indicate that eye movements collect information that can be used to strengthen memory representations that are later available for conscious remembering, whereas eye movement repetition effects reflect a processing change due to experience that does not necessarily reflect a memory representation that is available for conscious appraisal. Lastly, H.C. demonstrated a repetition effect for fixed viewpoint faces but not for variable viewpoint faces, which suggests that repetition effects are differentially supported by neocortical and hippocampal systems, depending upon the representational nature of the underlying memory trace. |
Isabel Orenes; Linda M. Moxey; Christoph Scheepers; Carlos Santamaría Negation in context: Evidence from the visual world paradigm Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 6, pp. 1082–1092, 2016. @article{Orenes2016, Literature assumes that negation is more difficult to understand than affirmation, but this might depend on the pragmatic context. The goal of this paper is to show that pragmatic knowledge modulates the unfolding processing of negation due to the previous activation of the negated situation. To test this, we used the visual world paradigm. In this task, we presented affirmative (e.g., her dad was rich) and negative sentences (e.g., her dad was not poor) while viewing two images of the affirmed and denied entities. The critical sentence in each item was preceded by one of three types of contexts: an inconsistent context (e.g., She supposed that her dad had little savings) that activates the negated situation (a poor man), a consistent context (e.g., She supposed that her dad had enough savings) that activates the actual situation (a rich man), or a neutral context (e.g., her dad lived on the other side of town) that activates neither of the two models previously suggested. The results corroborated our hypothesis. Pragmatics is implicated in the unfolding processing of negation. We found an increase in fixations on the target compared to the baseline for negative sentences at 800 ms in the neutral context, 600 ms in the inconsistent context, and 1450 ms in the consistent context. Thus, when the negated situation has been previously introduced via an inconsistent context, negation is facilitated. |
Jacob L. Orquin; Nathaniel J. S. Ashby; Alasdair D. F. Clarke Areas of interest as a signal detection problem in behavioral eye-tracking research Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 103–115, 2016. @article{Orquin2016, Decision researchers frequently analyze attention to individual objects to test hypotheses about underlying cognitive processes. Generally, fixations are assigned to objects using a method known as area of interest (AOI). Ideally, an AOI includes all fixations belonging to an object while fixations to other objects are excluded. Unfortunately, due to measurement inaccuracy and insufficient distance between objects, the distributions of fixations to objects may overlap, resulting in a signal detection problem. If the AOI is to include all fixations to an object, it will also likely include fixations belonging to other objects (false positives). In a survey, we find that many researchers report testing multiple AOI sizes when performing analyses, presumably trying to balance the proportion of true and false positive fixations. To test whether AOI size influences the measurement of object attention and conclusions drawn about cognitive processes, we reanalyze four published studies and conduct a fifth tailored to our purpose. We find that in studies in which we expected overlapping fixation distributions, analyses benefited from smaller AOI sizes (0° visual angle margin). In studies where we expected no overlap, analyses benefited from larger AOI sizes (>.5° visual angle margins). We conclude with a guideline for the use of AOIs in behavioral eye-tracking research. |
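Orquin et al. frame AOI assignment as a signal detection problem: growing an AOI's margin captures more of an object's true fixations but also admits fixations that belong to neighboring objects. The sketch below shows the mechanics of margin-based assignment; the AOI representation and names are illustrative, not from the paper.

```python
def in_aoi(fix, box, margin=0.0):
    """True if fixation (x, y) falls inside box (x0, y0, x1, y1) grown by
    `margin` on every side (all in degrees of visual angle)."""
    x, y = fix
    x0, y0, x1, y1 = box
    return (x0 - margin <= x <= x1 + margin) and (y0 - margin <= y <= y1 + margin)

def assign_fixations(fixations, aois, margin=0.0):
    """Map each fixation to the first AOI it falls in, or None.

    aois: dict name -> (x0, y0, x1, y1). With overlapping fixation
    distributions, a larger margin raises hits but also false positives;
    re-running the analysis across margins makes that trade-off explicit.
    """
    return [next((name for name, box in aois.items()
                  if in_aoi(f, box, margin)), None)
            for f in fixations]
```

Running `assign_fixations` at several margins and comparing the resulting dwell measures is a direct way to check whether a study's conclusions are robust to the AOI-size choice the paper surveys.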
Meghan B. Mitchell; Steven D. Shirk; Donald G. McLaren; Jessica S. Dodd; Ali Ezzati; Brandon A. Ally; Alireza Atri Recognition of faces and names: Multimodal physiological correlates of memory and executive function Journal Article In: Brain Imaging and Behavior, vol. 10, no. 2, pp. 408–423, 2016. @article{Mitchell2016, We sought to characterize electrophysiological, eye-tracking and behavioral correlates of face-name recognition memory in healthy younger adults using high-density electroencephalography (EEG), infrared eye-tracking (ET), and neuropsychological measures. Twenty-one participants first studied 40 face-name (FN) pairs; 20 were presented four times (4R) and 20 were shown once (1R). Recognition memory was assessed by asking participants to make old/new judgments for 80 FN pairs, of which half were previously studied items and half were novel FN pairs (N). Simultaneous EEG and ET recordings were collected during recognition trials. Event-related potentials (ERPs) for correctly identified FN pairs were compared across the three item types, revealing classic ERP old/new effects including 1) relative positivity (1R > N) bi-frontally from 300 to 500 ms, reflecting enhanced familiarity, 2) relative positivity (4R > 1R and 4R > N) in parietal areas from 500 to 800 ms, reflecting enhanced recollection, and 3) late frontal effects (1R > N) from 1000 to 1800 ms in right frontal areas, reflecting post-retrieval monitoring. ET analysis also revealed significant differences in eye movements across conditions. Exploration of cross-modality relationships suggested associations between memory and executive function measures and the three ERP effects. Executive function measures were associated with several indicators of saccadic eye movements and fixations, which were also associated with all three ERP effects. This novel characterization of face-name recognition memory performance using simultaneous EEG and ET reproduced classic ERP and ET effects, supports the construct validity of the multimodal FN paradigm, and holds promise as an integrative tool to probe brain networks supporting memory and executive functioning. |
Aleksandra Mitrovic; Pablo P. L. Tinio; Helmut Leder In: Frontiers in Human Neuroscience, vol. 10, pp. 122, 2016. @article{Mitrovic2016, One of the key behavioral effects of attractiveness is increased visual attention to attractive people. This effect is often explained in terms of evolutionary adaptations, such as attractiveness being an indicator of good health. Other factors could influence this effect. In the present study, we explored the modulating role of sexual orientation on the effects of attractiveness on exploratory visual behavior. Heterosexual and homosexual men and women viewed natural-looking scenes that depicted either two women or two men who varied systematically in levels of attractiveness (based on a pre-study). Participants' eye movements and attractiveness ratings toward the faces of the depicted people were recorded. The results showed that although attractiveness had the largest influence on participants' behaviors, participants' sexual orientations strongly modulated the effects. With the exception of homosexual women, all participant groups looked longer and more often at attractive faces that corresponded with their sexual orientations. Interestingly, heterosexual and homosexual men and homosexual women looked longer and more often at the less attractive face of their non-preferred sex than at the less attractive face of their preferred sex, evidence that less attractive faces of the preferred sex might have an aversive character. These findings provide evidence for the important role that sexual orientation plays in guiding visual exploratory behavior and evaluations of the attractiveness of others. |