EyeLink Cognition Publications
All EyeLink cognition and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as visual search, scene perception, face processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception article, please email us!
2018 |
Mikhail Y. Pokhoday; Christoph Scheepers; Yury Y. Shtyrov; Andriy Myachykov Motor (but not auditory) attention affects syntactic choice Journal Article In: PLoS ONE, vol. 13, no. 4, pp. e0195547, 2018. @article{Pokhoday2018, Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or may be subject to cross-modal interaction. The current study addressed this issue. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality dependent and appear to be more prominent in the visuomotor domain than in the auditory domain. |
Barbara Poletti; Laura Carelli; Andrea Faini; Federica Solca; Paolo Meriggi; Annalisa Lafronza; Luciana Ciringione; Elisa Pedroli; Nicola Ticozzi; Andrea Ciammola; Pietro Cipresso; Giuseppe Riva; Vincenzo Silani The Arrows and Colors Cognitive Test (ACCT): A new verbal-motor free cognitive measure for executive functions in ALS Journal Article In: PLoS ONE, vol. 13, no. 8, pp. e0200953, 2018. @article{Poletti2018, Background and objective: The presence of executive deficits in patients with Amyotrophic Lateral Sclerosis is well established, even if standardized measures are difficult to obtain due to the progressive physical disability of the patients. We present clinical data concerning a newly developed measure of cognitive flexibility, administered by means of Eye-Tracking (ET) technology in order to bypass verbal-motor limitations. Methods: 21 ALS patients and 21 age- and education-matched healthy subjects participated in an ET-based cognitive assessment, including a newly developed test of cognitive flexibility (Arrows and Colors Cognitive Test–ACCT) and other oculomotor-driven measures of cognitive functions. A standard screening of frontal and working memory abilities and global cognitive efficiency was administered to all subjects, in addition to a psychological self-rated assessment. For ALS patients, a clinical examination was also performed. Results: ACCT successfully discriminated between patients and healthy controls, mainly concerning execution times obtained at different subtests. A qualitative analysis performed on error distributions in patients highlighted a lower prevalence of perseverative errors with respect to other types of errors. Correlations between ACCT and other ET-based frontal-executive measures were significant and involved different frontal sub-domains. Limited correlations were observed between ACCT and standard ‘paper and pencil' cognitive tests.
Conclusions: The newly developed ET-based measure of cognitive flexibility could be a useful tool to detect slight frontal impairments in non-demented ALS patients by bypassing verbal-motor limitations through the oculomotor-driven administration. The findings reported in the present study represent the first contribution towards the development of a full verbal-motor free executive test for ALS patients. |
Alexandra Papadopoulos; Francesco Sforazzini; Gary F. Egan; Sharna D. Jamadar Functional subdivisions within the human intraparietal sulcus are involved in visuospatial transformation in a non-context-dependent manner Journal Article In: Human Brain Mapping, vol. 39, no. 1, pp. 354–368, 2018. @article{Papadopoulos2018, Object-based visuospatial transformation is important for the ability to interact with the world and the people and objects within it. In this preliminary investigation, we hypothesized that object-based visuospatial transformation is a unitary process invoked regardless of current context and is localized to the intraparietal sulcus. Participants (n = 14) performed both antisaccade and mental rotation tasks while scanned using fMRI. A statistical conjunction confirmed that both tasks activated the intraparietal sulcus. Statistical parametric anatomical mapping determined that the statistical conjunction was localized to intraparietal sulcus subregions hIP2 and hIP3. A Gaussian naive Bayes classifier confirmed that the conjunction in region hIP3 was indistinguishable between tasks. The results provide evidence that object-based visuospatial transformation is a domain-general process that is invoked regardless of current context. Our results are consistent with the modular model of the posterior parietal cortex and the distinct cytoarchitectonic, structural, and functional connectivity profiles of the subregions in the intraparietal sulcus. |
Karisa B. Parkington; Roxane J. Itier One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context Journal Article In: Cortex, vol. 109, pp. 35–49, 2018. @article{Parkington2018, The N170 event-related potential component is an early marker of face perception that is particularly sensitive to isolated eye regions and to eye fixations within a face. Here, this eye sensitivity was tested further by measuring the N170 to isolated facial features and to the same features fixated within a face, using a gaze-contingent procedure. The neural response to single isolated eyes and eye regions (two eyes) was also compared. Pixel intensity and contrast were controlled at the global (image) and local (featural) levels. Consistent with previous findings, larger N170 amplitudes were elicited when the left or right eye was fixated within a face, compared to the mouth or nose, demonstrating that the N170 eye sensitivity reflects higher-order perceptual processes and not merely low-level perceptual effects. The N170 was also largest and most delayed for isolated features, compared to equivalent fixations within a face. Specifically, mouth fixation yielded the largest amplitude difference, and nose fixation yielded the largest latency difference between these two contexts, suggesting the N170 may reflect a complex interplay between holistic and featural processes. Critically, eye regions elicited consistently larger and shorter N170 responses compared to single eyes, with enhanced responses for contralateral eye content, irrespective of eye or nasion fixation. These results confirm the importance of the eyes in early face perception, and provide novel evidence of an increased sensitivity to the presence of two symmetric eyes compared to only one eye, consistent with a neural eye region detector rather than an eye detector per se. |
Alexander Pastukhov; Johanna Prasch; Claus-Christian Carbon Out of sight, out of mind: Occlusion and eye closure destabilize moving bistable structure-from-motion displays Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 5, pp. 1193–1204, 2018. @article{Pastukhov2018, Our brain constantly tries to anticipate the future by using a variety of memory mechanisms. Interestingly, studies using the intermittent presentation of multistable displays have shown little perceptual persistence for interruptions longer than a few hundred milliseconds. Here we examined whether we can facilitate the perceptual stability of bistable displays following a period of invisibility by employing a physically plausible and ecologically valid occlusion event sequence, as opposed to the typical intermittent presentation, with sudden onsets and offsets. To this end, we presented a bistable rotating structure-from-motion display that was moving along a linear horizontal trajectory on the screen and either was temporarily occluded by another object (a cardboard strip in Exp. 1, a computer-generated image in Exp. 2) or became invisible due to eye closure (Exp. 3). We report that a bistable rotation direction reliably persisted following occlusion or interruption only (1) if the pre- and postinterruption locations overlapped spatially (an occluder with apertures in Exp. 2 or brief, spontaneous blinks in Exp. 3) or (2) if an object's size allowed for the efficient grouping of dots on both sides of the occluding object (large objects in Exp. 1). In contrast, we observed no persistence whenever the pre- and postinterruption locations were nonoverlapping (large solid occluding objects in Exps. 1 and 2 and long, prompted blinks in Exp. 3). We report that the bistable rotation direction of a moving object persisted only for spatially overlapping neural representations, and that persistence was not facilitated by a physically plausible and ecologically valid occlusion event. |
Sonja Perkovic; Nicola J. Bown; Gulbanu Kaptan Systematicity of Search Index: A new measure for exploring information search patterns Journal Article In: Journal of Behavioral Decision Making, vol. 31, no. 5, pp. 673–685, 2018. @article{Perkovic2018, Many studies on information search in multi-attribute decision making rely on the analysis of transitions from 1 piece of information to the next. One challenge is to categorize information search that includes an equal amount of alternative- and attribute-wise transitions. We propose a measure, the Systematicity of Search Index (SSI), for exploring information search based on sequences of either alternative- or attribute-wise transitions. The SSI explores information search in terms of systematicity, or the proportion of nonrandom search, that is, search that is alternative- or attribute-wise, corrected for chance. Our experiment confirms the validity of the SSI and shows that the SSI can shed light on processes not captured by measures analysing single transitions, such as Payne's Search Index. |
Andrew S. Persichetti; Daniel D. Dilks Dissociable neural systems for recognizing places and navigating through them Journal Article In: Journal of Neuroscience, vol. 38, no. 48, pp. 10295–10304, 2018. @article{Persichetti2018, When entering an environment, we can use the present visual information from the scene to either recognize the kind of place it is (e.g., a kitchen or a bedroom) or navigate through it. Here we directly test the hypothesis that these two processes, what we call “scene categorization” and “visually-guided navigation”, are supported by dissociable neural systems. Specifically, we manipulated task demands by asking human participants (male and female) to perform a scene categorization, visually-guided navigation, and baseline task on images of scenes, and measured both the average univariate responses and multivariate spatial pattern of responses within two scene-selective cortical regions, the parahippocampal place area (PPA) and occipital place area (OPA), hypothesized to be separably involved in scene categorization and visually-guided navigation, respectively. As predicted, in the univariate analysis, PPA responded significantly more during the categorization task than during both the navigation and baseline tasks, whereas OPA showed the complete opposite pattern. Similarly, in the multivariate analysis, a linear support vector machine achieved above-chance classification for the categorization task, but not the navigation task in PPA. By contrast, above-chance classification was achieved for both the navigation and categorization tasks in OPA. However, above-chance classification for both tasks was also found in early visual cortex and hence not specific to OPA, suggesting that the spatial patterns of responses in OPA are merely inherited from early vision, and thus may be epiphenomenal to behavior. Together, these results are evidence for dissociable neural systems involved in recognizing places and navigating through them. |
Brónagh McCoy; Jan Theeuwes Overt and covert attention to location-based reward Journal Article In: Vision Research, vol. 142, pp. 27–39, 2018. @article{McCoy2018, Recent research on the impact of location-based reward on attentional orienting has indicated that reward factors play an influential role in spatial priority maps. The current study investigated whether and how reward associations based on spatial location translate from overt eye movements to covert attention. If reward associations can be tied to locations in space, and if overt and covert attention rely on similar overlapping neuronal populations, then both overt and covert attentional measures should display similar spatial-based reward learning. Our results suggest that location- and reward-based changes in one attentional domain do not lead to similar changes in the other. Specifically, although we found similar improvements at differentially rewarded locations during overt attentional learning, this translated to the least improvement at a highly rewarded location during covert attention. We interpret this as the result of an increased motivational link between the high reward location and the trained eye movement response acquired during learning, leading to a relative slowing during covert attention when the eyes remained fixated and the saccade response was suppressed. In a second experiment participants were not required to keep fixated during the covert attention task and we no longer observed relative slowing at the high reward location. Furthermore, the second experiment revealed no covert spatial priority of rewarded locations. We conclude that the transfer of location-based reward associations is intimately linked with the reward-modulated motor response employed during learning, and alternative attentional and task contexts may interfere with learned spatial priorities. |
Sarah D. McCrackin; Roxane J. Itier Is it about me? Time-course of self-relevance and valence effects on the perception of neutral faces with direct and averted gaze Journal Article In: Biological Psychology, vol. 135, pp. 47–64, 2018. @article{McCrackin2018, Most face processing research has investigated how we perceive faces presented by themselves, but we view faces everyday within a rich social context. Recent ERP research has demonstrated that context cues, including self-relevance and valence, impact electrocortical and emotional responses to neutral faces. However, the time-course of these effects is still unclear, and it is unknown whether these effects interact with the face gaze direction, a cue that inherently contains self-referential information and triggers emotional responses. We primed direct and averted gaze neutral faces (gaze manipulation) with contextual sentences that contained positive or negative opinions (valence manipulation) about the participants or someone else (self-relevance manipulation). In each trial, participants rated how positive or negative, and how affectively aroused, the face made them feel. Eye-tracking ensured sentence reading and face fixation while ERPs were recorded to face presentations. Faces put into self-relevant contexts were more arousing than those in other-relevant contexts, and elicited ERP differences from 150 to 750 ms post-face, encompassing EPN and LPP components. Self-relevance interacted with valence at both the behavioural and ERP level starting 150 ms post-face. Finally, faces put into positive, self-referential contexts elicited different N170 ERP amplitudes depending on gaze direction. Behaviourally, direct gaze elicited more positive valence ratings than averted gaze during positive, self-referential contexts. Thus, self-relevance and valence contextual cues impact visual perception of neutral faces and interact with gaze direction during the earliest stages of face processing. 
The results highlight the importance of studying face processing within contexts mimicking the complexities of real world interactions. |
Sarah D. McCrackin; Roxane J. Itier Both fearful and happy expressions interact with gaze direction by 200 ms SOA to speed attention orienting Journal Article In: Visual Cognition, vol. 26, no. 4, pp. 231–252, 2018. @article{McCrackin2018a, Attention orienting towards a gazed-at location is fundamental to social attention. Whether gaze cues can interact with emotional expressions other than those signalling environmental threat to modulate this gaze cueing, and whether this integration changes over time, remains unclear. With four experiments we demonstrate that, when perceived motion inherent to dynamic displays is controlled for, gaze cueing is enhanced by both fearful and happy faces compared to neutral faces. This enhancement is seen with stimulus-onset asynchronies ranging from 200–700 ms. Thus, gaze cueing can be reliably modulated by positive expressions, albeit to a smaller degree than fearful ones, and this gaze–emotion integration impacts behaviour as early as 200 ms post-cue onset. |
Gerald P. McDonnell; Mark Mills; Jordan E. Marshall; Joshua E. Zosky; Michael D. Dodd You detect while I search: Examining visual search efficiency in a joint search task Journal Article In: Visual Cognition, vol. 26, no. 2, pp. 71–88, 2018. @article{McDonnell2018, Numerous factors impact attentional allocation, with behaviour being strongly influenced by the interaction between individual intent and our visual environment. Traditionally, visual search efficiency has been studied under solo search conditions. Here, we propose a novel joint search paradigm where one individual controls the visual input available to another individual via a gaze contingent window (e.g., Participant 1 controls the window with their eye movements and Participant 2 – in an adjoining room – sees only stimuli that Participant 1 is fixating and responds to the target accordingly). Pairs of participants completed three blocks of a detection task that required them to: (1) search and detect the target individually, (2) search the display while their partner performed the detection task, or (3) detect while their partner searched. Search was most accurate when the person detecting was doing so for the second time while the person controlling the visual input was doing so for the first time, even when compared to participants with advanced solo or joint task experience (Experiments 2 and 3). Through surrendering control of one's search strategy, we posit that there is a benefit of a reduced working memory load for the detector resulting in more accurate search. This paradigm creates a counterintuitive speed/accuracy trade-off which combines the heightened ability that comes from task experience (discrimination task) with the slower performance times associated with a novel task (the initial search) to create a potentially more efficient method of visual search. |
Daniel S. McGrath; Amadeus Meitner; Christopher R. Sears The specificity of attentional biases by type of gambling: An eye-tracking study Journal Article In: PLoS ONE, vol. 13, no. 1, pp. e0190614, 2018. @article{McGrath2018, A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers. |
Mel McKendrick; Stephen H. Butler; Madeleine A. Grealy Socio-cognitive load and social anxiety in an emotional anti-saccade task Journal Article In: PLoS ONE, vol. 13, no. 5, pp. e0197749, 2018. @article{McKendrick2018, The anti-saccade task has been used to measure attentional control related to general anxiety but less so with social anxiety specifically. Previous research has not been conclusive in suggesting that social anxiety may lead to difficulties in inhibiting faces. It is possible that static face paradigms do not convey a sufficient social threat to elicit an inhibitory response in socially anxious individuals. The aim of the current study was twofold. We investigated the effect of social anxiety on performance in an anti-saccade task with neutral or emotional faces preceded either by a social stressor (Experiment 1), or valenced sentence primes designed to increase the social salience of the task (Experiment 2). Our results indicated that latencies were significantly longer for happy than angry faces. Additionally, and surprisingly, high anxious participants made more erroneous anti-saccades to neutral than angry and happy faces, whilst the low anxious groups exhibited a trend in the opposite direction. Results are consistent with a general approach-avoidance response for positive and threatening social information. However, increased socio-cognitive load may alter attentional control, with high anxious individuals avoiding emotional faces but finding it more difficult to inhibit ambiguous faces. The effects of social sentence primes on attention appear to be subtle but suggest that the anti-saccade task will only elicit socially relevant responses where the paradigm is more ecologically valid. |
David Méary; Carole Jaggie; Olivier Pascalis Multisensory representation of gender in infants: An eye-tracking study Journal Article In: Language Learning, vol. 68, pp. 14–30, 2018. @article{Meary2018, Visual and auditory information jointly contribute to face categorization processes in humans, and gender is a socially relevant multisensory category specified by faces and voices that is detected early in infancy. We used an eye tracker to study how gender coherence in audio and visual modalities influence face scanning in 9- to 12-month-old infants and in adults. While viewing dynamic faces, infants attended to a speaker's mouth region to a greater extent than adults, regardless of speech, which was mostly due to an increase in mean fixation durations. However, the time course of attending to eye and mouth regions showed similarities in adults and infants. Face-voice congruence for gender appeared to have little effect on measures of face scanning. Overall, results suggested that 9- to 12-month-old infants give more weight to the processing of a speaker's mouth compared to adults but that infants already have an adult-like face-scanning strategy. |
Anna Meehan; Robert Price; Susan Mirow; James M. Walker; Lindell K. Weaver; Steffanie H. Wilson; William W. Orrison; Christopher S. Williams; Jigar B. Patel; Susan Churchill; Kayla Deru; Anne S. Lindblad Comprehensive evaluation of healthy volunteers using multi-modality brain injury assessments: An exploratory, observational study Journal Article In: Frontiers in Neurology, vol. 9, pp. 1030, 2018. @article{Meehan2018, Introduction: Even though mild traumatic brain injury is common and can result in persistent symptoms, traditional measurement tools can be insensitive in detecting functional deficits after injury. Some newer assessments do not have well-established norms, and little is known about how these measures perform over time or how cross-domain assessments correlate with one another. We conducted an exploratory study to measure the distribution, stability, and correlation of results from assessments used in mild traumatic brain injury in healthy, community-dwelling adults. Materials and Methods: In this prospective cohort study, healthy adult men and women without a history of brain injury underwent a comprehensive brain injury evaluation that included self-report questionnaires and neurological, electroencephalography, sleep, audiology/vestibular, autonomic, visual, neuroimaging, and laboratory testing. Most testing was performed at 3 intervals over 6 months. Results: The study enrolled 83 participants, and 75 were included in the primary analysis. Mean age was 38 years, 58 were male, and 53 were civilians. Participants did not endorse symptoms of post-concussive syndrome, PTSD, or depression. Abnormal neurological examination findings were rare, and 6 had generalized slowing on electroencephalography. Actigraphy and sleep diary showed good sleep maintenance efficiency, but 21 reported poor sleep quality. Heart rate variability was most stable over time in the sleep segment.
Dynavision performance was normal, but 41 participants had abnormal ocular torsion. On eye tracking, circular, horizontal ramp, and reading tasks were more likely to be abnormal than other tasks. Most participants had normal hearing, videonystagmography, and rotational chair testing, but computerized dynamic posturography was abnormal in up to 21% of participants. Twenty-two participants had greater than expected white matter changes for age by MRI. Most abnormal findings were dispersed across the population, though a few participants had clusters of abnormalities. Conclusions: Despite our efforts to enroll normal, healthy volunteers, abnormalities on some measures were surprisingly common. |
Alberto Megias; Iga Rzeszewska; Luis Aguado; Andrés Catena Influence of cross-ethnic social experience on face recognition accuracy and the visual perceptual strategies involved Journal Article In: International Journal of Intercultural Relations, vol. 65, pp. 42–50, 2018. @article{Megias2018, The cross-ethnic effect (in the literature, usually termed the cross-race effect) is defined as the greater difficulty in recognizing faces of other ethnicities compared with faces of one's own. The aims of the present research were: 1) to test the hypothesis that the cross-ethnic effect is due to a lack of contact with the other ethnicity; 2) to study possible differences in the perceptual mechanisms employed in face recognition as a function of the degree of contact between ethnicities, which may be the basis of the cross-ethnic effect. We compared two ethnic groups with a high degree of contact, but different identities and cultural values: Andalusian Gypsies and Andalusian Caucasians. Both groups had to recognize a set of East Asian, Caucasian, and Gypsy faces while eye movements were monitored. In accordance with the contact hypothesis, our results revealed no differences between Gypsy and Caucasian observers in face recognition success. However, East Asian faces were more poorly recognized than Gypsy and Caucasian faces by both observer groups. With respect to the perceptual strategies, despite achieving similar face recognition performance, Caucasian and Gypsy observers employed different visual exploration strategies. Gypsies focused their attention on the eyes, while Caucasians fixated more on the nose than Gypsies. Our results support the contact hypothesis as an explanation for the cross-ethnic effect, and show how cultural factors imply differences in perceptual strategies even between close ethnic groups. |
Andrew Melnik; Felix Schüler; Constantin A. Rothkopf; Peter König The world as an external memory: The price of saccades in a sensorimotor task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 12, pp. 253, 2018. @article{Melnik2018, Theories of embodied cognition postulate that the world can serve as an external memory. This implies that instead of storing visual information in working memory, the information may be equally retrieved by appropriate eye movements. Given this assumption, the question arises of how we balance the effort of memorization with the effort of visually sampling our environment. We analyzed eye-tracking data in a sensorimotor task where participants had to produce a copy of a LEGO®-blocks model displayed on a computer screen. In the unconstrained condition, the model appeared immediately after eye fixation on the model. In the constrained condition, we introduced a 0.7 s delay before uncovering the model. The model disappeared as soon as participants made a saccade outside of the model area. To successfully copy a model of 8 blocks, participants made saccades to the model area on average 7.9 times in the unconstrained condition and 5.6 times in the constrained condition. However, the mean duration of a trial was 2.1 s (9%) longer in the constrained condition even when taking into account the delayed visibility of the model. In this study, we found evidence for a shift in subjects' behavior towards memorization when a price was introduced for a certain type of saccade. However, the response is not adaptive; it is maladaptive, as memorization leads to longer overall performance time. |
Paola Mengotti; Frank Boers; Pascasie L. Dombert; Gereon R. Fink; Simone Vossel Integrating modality-specific expectancies for the deployment of spatial attention Journal Article In: Scientific Reports, vol. 8, pp. 1210, 2018. @article{Mengotti2018, The deployment of spatial attention is highly sensitive to stimulus predictability. Despite evidence for strong crossmodal links in spatial attentional systems, it remains to be elucidated how concurrent but divergent predictions for targets in different sensory modalities are integrated. In a series of behavioral studies, we investigated the processing of modality-specific expectancies using a multimodal cueing paradigm in which auditory cues predicted the location of visual or tactile targets with modality-specific cue predictability. The cue predictability for visual and tactile targets was manipulated independently. A Bayesian ideal observer model with a weighting factor was applied to trial-wise individual response speed to investigate how the two probabilistic contexts are integrated. Results showed that the degree of integration depended on the level of predictability and on the divergence of the modality-specific probabilistic contexts (Experiments 1–2). However, when the two probabilistic contexts were matched in their level of predictability and were highly divergent (Experiment 3), higher separate processing was favored, especially when visual targets were processed. These findings suggest that modality-specific predictions are flexibly integrated according to their reliability, supporting the hypothesis of separate modality-specific attentional systems that are however linked to guarantee an efficient deployment of spatial attention across the senses. |
Céline Paeye; Thérèse Collins; Patrick Cavanagh; Arvid Herwig Calibration of peripheral perception of shape with and without saccadic eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 3, pp. 723–737, 2018. @article{Paeye2018, The cortical representations of a visual object differ radically across saccades. Several studies claim that the visual system adapts the peripheral percept to better match the subsequent foveal view. Recently, Herwig, Weiß, and Schneider (2015, Annals of the New York Academy of Sciences, 1339(1), 97–105) found that the perception of shape demonstrates a saccade-dependent learning effect. Here, we ask whether this learning actually requires saccades. We replicated Herwig et al.'s (2015) study and introduced a fixation condition. In a learning phase, participants were exposed to objects whose shape systematically changed during a saccade, or during a displacement from peripheral to foveal vision (without a saccade). In a subsequent test, objects were perceived as less (more) curved if they previously changed from more circular (triangular) in the periphery to more triangular (circular) in the fovea. Importantly, this pattern was seen both with and without saccades. We then tested whether a variable delay between the presentations of the peripheral and foveal objects would affect their association—hypothetically weakening it at longer delays. Again, we found that shape judgments depended on the changes experienced during the learning phase and that they were similar in both the saccade and fixation conditions. Surprisingly, they were not affected by the delay between the peripheral and foveal presentations over the range we tested. These results suggest that a general associative process, independent of saccade execution, contributes to the perception of shape across viewpoints. |
Christian H. Poth; Werner X. Schneider Attentional competition across saccadic eye movements Journal Article In: Acta Psychologica, vol. 190, pp. 27–37, 2018. @article{Poth2018, Human behavior is guided by visual object recognition. For being recognized, objects compete for limited attentional processing resources. The more objects compete, the lower is performance in recognizing each individual object. Here, we ask whether this competition is confined to eye fixations, periods of relatively stable gaze, or whether it extends from one fixation to the next, across saccadic eye movements. Participants made saccades to a peripheral saccade target. After the saccade, a letter was briefly presented within the saccade target and terminated by a mask. Object recognition of the letter was assessed as participants' report. Critically, either no, two, or four additional non-target objects appeared before the saccade. In Experiment 1, presaccadic non-targets were task-irrelevant and had no effects on postsaccadic object recognition. In Experiment 2, presaccadic non-targets were task-relevant and, here, postsaccadic object recognition deteriorated with increasing number of presaccadic non-targets. As suggested by Experiment 3 and a mathematical model, this effect was due to a slowing down but also a delayed start of visual processing after the saccade. Together, our findings show that objects compete for recognition across saccades, but only if they are task-relevant. This reveals an attentional mechanism of task-driven object recognition that is interlaced with active saccade-mediated vision. |
Diane Poulin-Dubois; Paul D. Hastings; Sabrina S. Chiarella; Elena Geangu; Petra Hauf; Alexa Ruel; Aaron P. Johnson The eyes know it: Toddlers' visual scanning of sad faces is predicted by their theory of mind skills Journal Article In: PLoS ONE, vol. 13, no. 12, pp. e0208524, 2018. @article{PoulinDubois2018, The current research explored toddlers' gaze fixation during a scene showing a person expressing sadness after a ball is stolen from her. The relation between the duration of gaze fixation on different parts of the person's sad face (e.g., eyes, mouth) and theory of mind skills was examined. Eye tracking data indicated that before the actor experienced the negative event, toddlers divided their fixation equally between the actor's happy face and other distracting objects, but looked longer at the face after the ball was stolen and she expressed sadness. The strongest predictor of increased focus on the sad face versus other elements of the scene was toddlers' ability to predict others' emotional reactions when outcomes fulfilled (happiness) or failed to fulfill (sadness) desires, whereas toddlers' visual perspective-taking skills predicted their more specific focusing on the actor's eyes and, for boys only, mouth. Furthermore, gender differences emerged in toddlers' fixation on parts of the scene. Taken together, these findings suggest that top-down processes are involved in the scanning of emotional facial expressions in toddlers. |
Céline Pozniak; Barbara Hemforth; Christoph Scheepers Cross-domain priming from mathematics to relative-clause attachment: A visual-world study in French Journal Article In: Frontiers in Psychology, vol. 9, pp. 2056, 2018. @article{Pozniak2018, Human language processing must rely on a certain degree of abstraction, as we can produce and understand sentences that we have never produced or heard before. One way to establish syntactic abstraction is by investigating structural priming. Structural priming has been shown to be effective within a cognitive domain, in the present case, the linguistic domain. But does priming also work across different domains? In line with previous experiments, we investigated cross-domain structural priming from mathematical expressions to linguistic structures with respect to relative clause attachment in French (e.g., la fille du professeur qui habitait à Paris/the daughter of the teacher who lived in Paris). Testing priming in French is particularly interesting because it will extend earlier results established for English to a language where the baseline for relative clause attachment preferences is different from English: in English, relative clauses (RCs) tend to be attached to the local noun phrase (low attachment) while in French there is a preference for high attachment of relative clauses to the first noun phrase (NP). Moreover, in contrast to earlier studies, we applied an online technique (visual-world eye-tracking). Our results confirm cross-domain priming from mathematics to linguistic structures in French. Most interestingly, different from less mathematically adept participants, we found that in mathematically skilled participants, the effect emerged very early on (at the beginning of the relative clause in the speech stream) and is also present later (at the end of the relative clause). 
In line with previous findings, our experiment suggests that mathematics and language share aspects of syntactic structure at a very high-level of abstraction. |
Daniel Preciado; Jan Theeuwes To look or not to look? Reward, selection history, and oculomotor guidance Journal Article In: Journal of Neurophysiology, vol. 120, no. 4, pp. 1740–1752, 2018. @article{Preciado2018, The current eye-tracking study examined the influence of reward on oculomotor performance, and the extent to which learned stimulus-reward associations interacted with voluntary oculomotor control with a modified paradigm based on the classical antisaccade task. Participants were shown two equally salient stimuli simultaneously: a gray and a colored circle, and they were instructed to make a fast saccade to one of them. During the first phase of the experiment, participants made a fast saccade toward the colored stimulus, and their performance determined a (cash) bonus. During the second, participants made a saccade toward the gray stimulus, with no rewards available. On each trial, one of three colors was presented, each associated with high, low or no reward during the first phase. Results from the first phase showed improved accuracy and shorter saccade latencies on high-reward trials, while those from the second replicated well-known effects typical of the antisaccade task, namely, decreased accuracy and increased latency during phase II, despite the absence of abrupt asymmetric onsets. Crucially, performance differences between phases revealed longer latencies and less accurate saccades during the second phase for high-reward trials, compared with the low- and no-reward trials. Further analyses indicated that oculomotor capture by reward signals is mainly found for saccades with short latencies, while this automatic capture can be overridden through voluntary control with longer ones. These results highlight the natural flexibility and adaptability of the attentional system, and the role of reward in modulating this plasticity. |
Eliska Prochazkova; Luisa Prochazkova; Michael Rojek Giffin; H. Steven Scholte; Carsten K. W. De Dreu; Mariska E. Kret Pupil mimicry promotes trust through the theory-of-mind network Journal Article In: Proceedings of the National Academy of Sciences, vol. 115, no. 31, pp. E7265–E7274, 2018. @article{Prochazkova2018, The human eye can provide powerful insights into the emotions and intentions of others; however, how pupillary changes influence observers' behavior remains largely unknown. The present fMRI–pupillometry study revealed that when the pupils of interacting partners synchronously dilate, trust is promoted, which suggests that pupil mimicry affiliates people. Here we provide evidence that pupil mimicry modulates trust decisions through the activation of the theory-of-mind network (precuneus, temporo-parietal junction, superior temporal sulcus, and medial prefrontal cortex). This network was recruited during pupil-dilation mimicry compared with interactions without mimicry or compared with pupil-constriction mimicry. Furthermore, the level of theory-of-mind engagement was proportional to an individual's susceptibility to pupil-dilation mimicry. These data reveal a fundamental mechanism by which an individual's pupils trigger neurophysiological responses within an observer: when interacting partners synchronously dilate their pupils, humans come to feel reflections of the inner states of others, which fosters trust formation. |
Sébastien Puma; Nadine Matton; Pierre-Vincent Paubel; Éric Raufaste; Radouane El-Yagoubi Using theta and alpha band power to assess cognitive workload in multitasking environments Journal Article In: International Journal of Psychophysiology, vol. 123, pp. 111–120, 2018. @article{Puma2018, Cognitive workload is of central importance in the fields of human factors and ergonomics. A reliable measurement of cognitive workload could allow for improvements in human machine interface designs and increase safety in several domains. At present, numerous studies have used electroencephalography (EEG) to assess cognitive workload, reporting the rise in cognitive workload to be associated with increases in theta band power and decreases in alpha band power. However, results have been inconsistent, with some failing to reach the required level of significance. We hypothesized that the lack of consistency could be related to individual differences in task performance and/or to the small sample sizes in most EEG studies. In the present study we used EEG to assess the increase in cognitive workload occurring in a multitasking environment while taking into account differences in performance. Twenty participants completed a task commonly used in airline pilot recruitment, which included an increasing number of concurrent sub-tasks to be processed, from one to four. Subjective ratings, performance scores, pupil size and EEG signals were recorded. Results showed that increases in EEG alpha and theta band power reflected increases in the involvement of cognitive resources for the completion of one to three subtasks in a multitasking environment. These values reached a ceiling when performances dropped. Consistent differences in levels of alpha and theta band power were associated with levels of task performance: the highest performance was related to the lowest band power. |
Michael Puntiroli; Dirk Kerzel; Sabine Born Placeholder objects shape spatial attention effects before eye movements Journal Article In: Journal of Vision, vol. 18, no. 6, pp. 1–20, 2018. @article{Puntiroli2018, In the time leading up to a saccade, the saccade target is perceptually enhanced compared to other objects in the visual field. This enhancement is attributed to a shift of spatial attention toward the target. We examined whether the presence of visual objects is critical for the perceptual enhancement at the saccade target to occur. We hypothesized that attention may need an object to focus on in order to be effective. We conducted four experiments using a dual-task design, where participants performed eye movements either to a location demarked by a placeholder or to an empty screen location where no object was displayed. At the same time, they discriminated a probe flashed at the location targeted by the eye movement or at one of two control locations. A strong perceptual advantage at the saccade target location was observed only when placeholders were displayed at the time of probe presentation. The complete absence of placeholders (Experiment 1), the presence of placeholders before but not during probe presentation (Experiment 3), and the presence of objects only around the saccade target (Experiments 3 and 4) led to a strong reduction in the saccade-target benefit. We conclude that placeholders may indeed be necessary to observe presaccadic enhancement at the saccade target. However, this is not because placeholders provide an object to focus attention on, but rather because they produce a masking (or crowding) effect. This detrimental effect is overcome by the presaccadic shift of attention, resulting in heightened perception only at the saccade target object. |
Ana Radonjić; Stacey Aston; Avery Krieger; David H. Brainard; Anya C. Hurlbert Illumination discrimination in the absence of a fixed surface-reflectance layout Journal Article In: Journal of Vision, vol. 18, no. 5, pp. 1–27, 2018. @article{Radonjic2018, Previous studies have shown that humans can discriminate spectral changes in illumination and that this sensitivity depends both on the chromatic direction of the illumination change and on the ensemble of surfaces in the scene. These studies, however, always used stimulus scenes with a fixed surface-reflectance layout. Here we compared illumination discrimination for scenes in which the surface reflectance layout remains fixed (fixed-surfaces condition) to those in which surface reflectances were shuffled randomly across scenes, but with the mean scene reflectance held approximately constant (shuffled-surfaces condition). Illumination discrimination thresholds in the fixed-surfaces condition were commensurate with previous reports. Thresholds in the shuffled-surfaces condition, however, were considerably elevated. Nonetheless, performance in the shuffled-surfaces condition exceeded that attainable through random guessing. Analysis of eye fixations revealed that in the fixed-surfaces condition, low illumination discrimination thresholds (across observers) were predicted by low overall fixation spread and high consistency of fixation location and fixated surface reflectances across trial intervals. Performance in the shuffled-surfaces condition was not systematically related to any of the eye-fixation characteristics we examined for that condition, but was correlated with performance in the fixed-surfaces condition. |
Masih Rahmati; Golbarg T. Saber; Clayton E. Curtis Population dynamics of early visual cortex during working memory Journal Article In: Journal of Cognitive Neuroscience, vol. 30, no. 2, pp. 219–233, 2018. @article{Rahmati2018, Although the content of working memory (WM) can be decoded from the spatial patterns of brain activity in early visual cortex, how populations encode WM representations remains unclear. Here, we address this limitation by using a model-based approach that reconstructs the feature encoded by population activity measured with fMRI. Using this approach, we could successfully reconstruct the locations of memory-guided saccade goals based on the pattern of activity in visual cortex during a memory delay. We could reconstruct the saccade goal even when we dissociated the visual stimulus from the saccade goal using a memory-guided antisaccade procedure. By comparing the spatiotemporal population dynamics, we find that the representations in visual cortex are stable but can also evolve from a representation of a remembered visual stimulus to a prospective goal. Moreover, because the representation of the antisaccade goal cannot be the result of bottom–up visual stimulation, it must be evoked by top–down signals presumably originating from frontal and/or parietal cortex. Indeed, we find that trial-by-trial fluctuations in delay period activity in frontal and parietal cortex correlate with the precision with which our model reconstructed the maintained saccade goal based on the pattern of activity in visual cortex. Therefore, the population dynamics in visual cortex encode WM representations, and these representations can be sculpted by top–down signals from frontal and parietal cortex. |
Meike Ramon; Nayla Sokhn; Junpeng Lao; Roberto Caldara Decisional space determines saccadic reaction times in healthy observers and acquired prosopagnosia Journal Article In: Cognitive Neuropsychology, vol. 35, no. 5-6, pp. 304–313, 2018. @article{Ramon2018a, Determining the familiarity and identity of a face have been considered as independent processes. Covert face recognition in cases of acquired prosopagnosia, as well as rapid detection of familiarity, have been taken to support this view. We tested P.S., a well-described case of acquired prosopagnosia, and two healthy controls (her sister and daughter) in two saccadic reaction time (SRT) experiments. Stimuli depicted their family members and well-matched unfamiliar distractors in the context of binary gender or familiarity decisions. Observers' minimum SRTs were estimated with Bayesian approaches. For gender decisions, P.S. and her daughter achieved sufficient performance, but displayed different SRT distributions. For familiarity decisions, her daughter exhibited above chance level performance and minimum SRTs corresponding to those reported previously in healthy observers, while P.S. performed at chance. These findings extend previous observations, indicating that decisional space determines performance in both the intact and impaired face processing system. |
Meike Ramon; Nayla Sokhn; Junpeng Lao; Roberto Caldara Decisional space modulates saccadic reaction times towards personally familiar faces in healthy observers and acquired prosopagnosia Journal Article In: Journal of Vision, vol. 18, no. 10, pp. 1097, 2018. @article{Ramon2018, Determining the familiarity and identity of a face have been considered as independent processes. Covert face recognition in cases of acquired prosopagnosia, as well as rapid detection of familiarity, have been taken to support this view. We tested P.S., a well-described case of acquired prosopagnosia, and two healthy controls (her sister and daughter) in two saccadic reaction time (SRT) experiments. Stimuli depicted their family members and well-matched unfamiliar distractors in the context of binary gender or familiarity decisions. Observers' minimum SRTs were estimated with Bayesian approaches. For gender decisions, P.S. and her daughter achieved sufficient performance, but displayed different SRT distributions. For familiarity decisions, her daughter exhibited above chance level performance and minimum SRTs corresponding to those reported previously in healthy observers, while P.S. performed at chance. These findings extend previous observations, indicating that decisional space determines performance in both the intact and impaired face processing system. |
Roger Ratcliff Decision making on spatially continuous scales Journal Article In: Psychological Review, vol. 125, no. 6, pp. 888–935, 2018. @article{Ratcliff2018, A new diffusion model of decision making in continuous space is presented and tested. The model is a sequential sampling model in which both spatially continuously distributed evidence and noise are accumulated up to a decision criterion (a 1 dimensional [1D] line or a 2 dimensional [2D] plane). There are two major advances represented in this research. The first is to use spatially continuously distributed Gaussian noise in the decision process (Gaussian process or Gaussian random field noise) which allows the model to represent truly spatially continuous processes. The second is a series of experiments that collect data from a variety of tasks and response modes to provide the basis for testing the model. The model accounts for the distributions of responses over position and response time distributions for the choices. The model applies to tasks in which the stimulus and the response coincide (moving eyes or fingers to brightened areas in a field of pixels) and ones in which they do not (color, motion, and direction identification). The model also applies to tasks in which the response is made with eye movements, finger movements, or mouse movements. This modeling offers a wide potential scope of applications including application to any device or scale in which responses are made on a 1D continuous scale or in a 2D spatial field. |
Chi-Wen Liang Attentional control deficits in social anxiety: Investigating inhibition and shifting functions using a mixed antisaccade paradigm Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 60, pp. 46–52, 2018. @article{Liang2018, Background and objectives: Attentional control has recently been assumed to play a critical role in the generation and maintenance of threat-related attentional bias and social anxiety. The present study aimed to investigate whether socially anxious (SA) individuals show impairments in attentional control functions, particularly in inhibition and shifting. Methods: Forty-two SA and 41 non-anxious (NA) participants completed a mixed antisaccade task, a variant of the antisaccade task that is used to investigate inhibition as well as shifting functions. Results: The results showed that, overall, SA participants had longer antisaccade latencies than NA participants, but the two groups did not differ in their antisaccade error rates. Moreover, in the single-task block, SA participants had longer latencies than NA participants for antisaccade but not prosaccade trials. In the mixed-task block, the SA participants had longer latencies than the NA participants for both task types. The two groups did not differ in their latency switch costs in the mixed-task blocks. Limitations: First, this study was conducted using a non-clinical sample of undergraduate students. Second, the antisaccade task measures primarily oculomotor inhibition. Third, this study did not include the measure of state anxiety to rule out the effects of state anxiety on the present findings. Conclusions: This study suggests that SA individuals demonstrate diminished efficiency of inhibition function but show no significant impairment of shifting function. However, in the mixed-task condition, SA individuals may exhibit an overall reduction in processing efficiency due to the higher task difficulty. |
Lindsey Lilienthal; Joel Myerson; Richard A. Abrams; Sandra Hale Effects of environmental support on overt and covert visuospatial rehearsal Journal Article In: Memory, vol. 26, no. 8, pp. 1042–1052, 2018. @article{Lilienthal2018, People can rehearse to-be-remembered locations either overtly, using eye movements, or covertly, using only shifts of spatial attention. The present study examined whether the effectiveness of these two strategies depends on environmental support for rehearsal. In Experiment 1, when environmental support (i.e., the array of possible locations) was present and participants could engage in overt rehearsal during retention intervals, longer intervals resulted in larger spans, whereas in Experiment 2, when support was present but participants could only engage in covert rehearsal, longer intervals resulted in smaller spans. When environmental support was absent, however, longer retention intervals resulted in smaller memory spans regardless of which rehearsal strategies were available. In Experiment 3, analyses of participants' eye movements revealed that the presence of support increased participants' fixations of to-be-remembered target locations more than fixations of non-targets, and that this was associated with better memory performance. Further, although the total time fixating targets increased, individual target fixations were actually briefer. Taken together, the present findings suggest that in the presence of environmental support, overt rehearsal is more effective than covert rehearsal at maintaining to-be-remembered locations in working memory, and that having more time for overt rehearsal can actually increase visuospatial memory spans. |
Alfred Lim; Vivian Eng; Steve M. J. Janssen; Jason Satel Sensory adaptation and inhibition of return: Dissociating multiple inhibitory cueing effects Journal Article In: Experimental Brain Research, vol. 236, no. 5, pp. 1369–1382, 2018. @article{Lim2018, Inhibition of return (IOR) refers to an increase in reaction times to targets that appeared at a previously cued location relative to an uncued location, often investigated using a spatial cueing paradigm. Despite numerous studies that have examined many aspects of IOR, the neurophysiological mechanisms underlying IOR are still in dispute. The objective of the current research is to investigate the plausible mechanisms by manipulating the cue and target types between central and peripheral stimuli in a traditional cue–target paradigm with saccadic responses to targets. In peripheral-cueing conditions, we observed inhibitory cueing effects across all cue–target onset asynchronies (CTOAs) with peripheral targets, but IOR was smaller and arose later with central targets. No inhibition was observed in central-cueing conditions at any CTOAs. Empirical data were simulated using a two-dimensional dynamic neural field model. Our results and simulations support previous work demonstrating that, at short CTOAs, behavioral inhibition is only observed with repeated stimulation—an effect of sensory adaptation. With longer CTOAs, IOR is observed regardless of target type when peripheral cueing is used. Our findings suggest that behaviorally exhibited inhibitory cueing effects can be attributed to multiple mechanisms, including both attenuation of visual stimulation and local inhibition in the superior colliculus. |
John J. H. Lin; Sunny S. J. Lin Integrating eye trackers with handwriting tablets to discover difficulties of solving geometry problems Journal Article In: British Journal of Educational Technology, vol. 49, no. 1, pp. 17–29, 2018. @article{Lin2018c, To deepen our understanding of those aspects of problems that cause the most difficulty for solvers, this study integrated eye-tracking with handwriting devices to investigate problem solvers' online processes while solving geometry problems. We are interested in whether the difference between successful and unsuccessful solvers can be identified by employing eye-tracking and handwriting. Sixty-two high school students were required to complete a series of geometry problems using pen tablets. Responses, including eye-movement measures, handwritten and drawn traces, perceived cognitive load, and questionnaires concerning the source of difficulties, were collected. The results suggested that the technique could enhance methods to diagnose difficulties by differentiating between successful and unsuccessful solvers. We considered that mental rotation could be a primary obstacle in the integrating stage of diagram comprehension. The technique can be extensively applied in various instructional scenarios. Educational implications for problem solving are discussed. |
Ebony R. Lindor; Nicole J. Rinehart; Joanne Fielding Superior visual search and crowding abilities are not characteristic of all individuals on the autism spectrum Journal Article In: Journal of Autism and Developmental Disorders, vol. 48, pp. 3499–3512, 2018. @article{Lindor2018, Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature. |
Jiachen Liu; Yifeng Zhou; Tzvetomir Tzvetanov Globally normal bistable motion perception of Anisometropic amblyopes may profit from an unusual coding mechanism Journal Article In: Frontiers in Neuroscience, vol. 12, pp. 391, 2018. @article{Liu2018h, Anisometropic amblyopia is a neurodevelopmental disorder of the visual system. There is evidence that the neural deficits spread across visual areas, from the primary cortex up to higher brain areas, including motion coding structures such as MT. Here, we used bistable plaid motion to investigate changes in the underlying mechanisms of motion integration and segmentation and, thus, help us to unravel in more detail deficits in the amblyopic visual motion system. Our results showed that 1) amblyopes globally exhibited normal bistable perception in all viewing conditions compared to the control group and 2) decreased contrast led to a stronger increase in percept switches and decreased percept durations in the control group, while the amblyopic group exhibited no such changes. There were few differences in outcomes dependent upon the use of the weak eye, the strong eye, or both eyes for viewing the stimuli, but this was a general effect present across all subjects, not specific to the amblyopic group. To understand the role of noise and adaptation in such cases of bistable perception, we analysed predictions from a model and found that contrast does indeed affect percept switches and durations as observed in the control group, in line with the hypothesis that lower stimulus contrast enhances internal noise effects. The combination of experimental and computational results presented here suggests a different motion coding mechanism in the amblyopic visual system, with relatively little effect of stimulus contrast on amblyopes' bistable motion perception. |
Liu D. Liu; Kenneth D. Miller; Christopher C. Pack A unifying motif for spatial and directional surround suppression Journal Article In: Journal of Neuroscience, vol. 38, no. 4, pp. 989–999, 2018. @article{Liu2018c, In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. While the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. |
Yanping Liu; Erik D. Reichle Using eye movements to understand the leakage of information during Chinese reading Journal Article In: Psychonomic Bulletin & Review, vol. 25, no. 6, pp. 2323–2329, 2018. @article{Liu2018e, How is attention allocated during reading? The present eye-movement experiment used a paradigm developed by Liu and Reichle (Psychological Science, 29, 278–287, 2018) to examine object-based attention during reading: Participants were instructed to read one of two spatially overlapping sentences containing colocated target/distractor words of varying frequency. Although target-word frequency modulated fixation-duration measures on the target word, the distractor-word frequency also had a smaller, independent effect. Survival analyses indicate that the distractor-word effect occurred later than the target-word effect, suggesting that subtle orthographic cues were noticed either later or occasionally, thereby modulating decisions about when to move the eyes. The theoretical ramifications of this "leakage" of information are discussed with respect to the general question of attention allocation during reading and possible differences between the reading of Chinese versus English. |
Yanping Liu; Erik D. Reichle Eye-movement evidence for object-based attention in Chinese reading Journal Article In: Psychological Science, vol. 29, no. 2, pp. 278–287, 2018. @article{Liu2018f, Is attention allocated to only one word or to multiple words at any given time during reading? The experiments reported here addressed this question using a novel paradigm inspired by classic findings on object-based attention. In Experiment 1, participants (N = 18) made lexical decisions about one of two spatially colocated Chinese words or nonwords. Our main finding was that only the attended word's frequency influenced response times and accuracy. In Experiment 2, participants (N = 30) read target words embedded in two spatially colocated Chinese sentences. Our key finding here was that only target-word frequencies influenced looking times and fixation positions. These results support the hypothesis that words are attended in a strictly serial (and perhaps object-based) manner during reading. The theoretical implications of this conclusion are discussed in relation to models of eye-movement control during reading and the conceptualization of words as visual objects. |
Zhenguang Liu; Zepeng Wang; Yiyang Yao; Luming Zhang; Ling Shao Deep active learning with contaminated tags for image aesthetics assessment Journal Article In: IEEE Transactions on Image Processing, pp. 1–14, 2018. @article{Liu2018i, Image aesthetic quality assessment has become an indispensable technique that facilitates a variety of image applications, e.g., photo retargeting and non-realistic rendering. Conventional approaches suffer from the following limitations: 1) the inefficiency of semantically describing images due to the inherent tag noise and incompletion, 2) the difficulty of accurately reflecting how humans actively perceive various regions inside each image, and 3) the challenge of incorporating the aesthetic experiences of multiple users. To solve these problems, we propose a novel semi-supervised deep active learning (SDAL) algorithm, which discovers how humans perceive semantically important regions from a large quantity of images partially assigned with contaminated tags. More specifically, as humans usually attend to the foreground objects before understanding them, we extract a succinct set of BING (binarized normed gradients) [60]-based object patches from each image. To simulate human visual perception, we propose SDAL, which hierarchically learns the human gaze shifting path (GSP) by sequentially linking semantically important object patches from each scene. Notably, SDAL unifies the discovery of semantically important regions and deep GSP feature learning into a principled framework, wherein only a small proportion of tagged images are adopted. Moreover, based on the sparsity penalty, SDAL can optimally abandon the noisy or redundant low-level image features. Finally, by leveraging the deeply-learned GSP features, a probabilistic model is developed for image aesthetics assessment, where the experience of multiple professional photographers can be encoded. 
In addition, auxiliary quality-related features can be conveniently integrated into our probabilistic model. Comprehensive experiments on a series of benchmark image sets have demonstrated the superiority of our method. As a byproduct, eye tracking experiments have shown that GSPs generated by our SDAL are about 93% consistent with real human gaze shifting paths. |
Zhong-Xu Liu; Kelly Shen; Rosanna K. Olsen; Jennifer D. Ryan Age-related changes in the relationship between visual exploration and hippocampal activity Journal Article In: Neuropsychologia, vol. 119, pp. 81–91, 2018. @article{Liu2018g, Deciphering the mechanisms underlying age-related memory declines remains an important goal in cognitive neuroscience. Recently, we observed that visual sampling behavior predicted activity within the hippocampus, a region critical for memory. In younger adults, increases in the number of gaze fixations were associated with increases in hippocampal activity (Liu et al., 2017). This finding suggests a close coupling between the oculomotor and memory system. However, the extent to which this coupling is altered with aging has not been investigated. In this study, we gave older adults the same face processing task used in Liu et al. (2017) and compared their visual exploration behavior and neural activation in the hippocampus and the fusiform face area (FFA) to those of younger adults. Compared to younger adults, older adults showed an increase in visual exploration as indexed by the number of gaze fixations. However, the relationship between visual exploration and neural responses in the hippocampus and FFA was weaker than that of younger adults. Older adults also showed weaker responses to novel faces and a smaller repetition suppression effect in the hippocampus and FFA compared to younger adults. Altogether, this study provides novel evidence that the capacity to bind visually sampled information, in real-time, into coherent representations along the ventral visual stream and the medial temporal lobe declines with aging. |
Nicole D. Karpinsky; Eric T. Chancey; Dakota B. Palmer; Yusuke Yamani Automation trust and attention allocation in multitasking workspace Journal Article In: Applied Ergonomics, vol. 70, pp. 194–201, 2018. @article{Karpinsky2018, Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. |
Norbert Kathmann; Julia Klawohn; Leonhard Lennertz; Ulrich Ettinger; Anja Riesel; Christian Kaufmann; Inga Meyhöfer; Michael Wagner; Rosa Grützmann; Katharina Bey; Stephan Heinzel Impaired antisaccades in obsessive-compulsive disorder: Evidence from meta-analysis and a large empirical study Journal Article In: Frontiers in Psychiatry, vol. 9, pp. 284, 2018. @article{Kathmann2018, Increasing evidence indicates that patients with obsessive-compulsive disorder (OCD) exhibit alterations in fronto-striatal circuitry. Performance deficits in the antisaccade task would support this model, but results from previous small-scale studies have been inconclusive, as either increased error rates, prolonged antisaccade latencies, both, or neither have been reported in OCD patients. In order to address this issue, we investigated antisaccade performance in a large sample of OCD patients (n = 169) and matched control subjects (n = 183). As impaired antisaccade performance constitutes a potential endophenotype of OCD, unaffected first-degree relatives of OCD patients (n = 100) were assessed as well. Furthermore, we conducted a quantitative meta-analysis to integrate our data with previous findings. In the empirical study, OCD patients exhibited significantly increased antisaccade latencies, intra-subject variability (ISV) of antisaccade latencies, and antisaccade error rates. The latter effect was driven by errors with express latency (80–130 ms), as patients did not differ significantly from controls with regard to regular errors (>130 ms). Notably, unaffected relatives of OCD patients showed elevated antisaccade express error rates and increased ISV of antisaccade latencies as well. Antisaccade performance was not associated with state anxiety within groups. Among relatives, however, we observed a significant correlation between antisaccade error rate and harm avoidance. 
Medication status of OCD patients, symptom severity, depressive comorbidity, comorbid anxiety disorders, and OCD symptom dimensions did not significantly affect antisaccade performance. Meta-analysis of 10 previous studies and the present empirical study yielded a medium-sized effect (SMD = 0.48, p < 0.001) for higher error rates in OCD patients, while the effect for latencies did not reach significance owing to strong heterogeneity (SMD = 0.51 |
Janne Kauttonen; Yevhen Hlushchuk; Iiro P. Jääskeläinen; Pia Tikka Brain mechanisms underlying cue-based memorizing during free viewing of movie Memento Journal Article In: NeuroImage, vol. 172, pp. 313–325, 2018. @article{Kauttonen2018, How does the human brain recall and connect relevant memories with unfolding events? To study this, we presented 25 healthy subjects, during functional magnetic resonance imaging, the movie ‘Memento' (director C. Nolan). In this movie, scenes are presented in chronologically reverse order with certain scenes briefly overlapping previously presented scenes. Such overlapping “key-frames” serve as effective memory cues for the viewers, prompting recall of relevant memories of the previously seen scene and connecting them with the concurrent scene. We hypothesized that these repeating key-frames serve as immediate recall cues and would facilitate reconstruction of the story piece-by-piece. The chronological version of Memento, shown in a separate experiment for another group of subjects, served as a control condition. Using a multivariate event-related pattern analysis method and representational similarity analysis, focal fingerprint patterns of hemodynamic activity were found to emerge during presentation of key-frame scenes. This effect was present in a higher-order cortical network with regions including precuneus, angular gyrus, cingulate gyrus, as well as lateral, superior, and middle frontal gyri within frontal poles. This network was right hemispheric dominant. These distributed patterns of brain activity appear to underlie the ability to recall relevant memories and connect them with ongoing events, i.e., “what goes with what” in a complex story. Given the real-life likeness of cinematic experience, these results provide new insight into how the human brain recalls, given proper cues, relevant memories to facilitate understanding and prediction of everyday life events. |
Katsuhisa Kawaguchi; Stephane Clery; Paria Pourriahi; Lenka Seillier; Ralf M. Haefner; Hendrikje Nienborg Differentiating between models of perceptual decision making using pupil size inferred confidence Journal Article In: Journal of Neuroscience, vol. 38, no. 41, pp. 8874–8888, 2018. @article{Kawaguchi2018, During perceptual decisions, subjects often rely more strongly on early, rather than late, sensory evidence, even in tasks when both are equally informative about the correct decision. This early psychophysical weighting has been explained by an integration-to-bound decision process, in which the stimulus is ignored after the accumulated evidence reaches a certain bound, or confidence level. Here, we derive predictions about how the average temporal weighting of the evidence depends on a subject's decision confidence in this model. To test these predictions empirically, we devised a method to infer decision confidence from pupil size in 2 male monkeys performing a disparity discrimination task. Our animals' data confirmed the integration-to-bound predictions, with different internal decision bounds and different levels of correlation between pupil size and decision confidence accounting for differences between animals. However, the data were less compatible with two alternative accounts for early psychophysical weighting: attractor dynamics either within the decision area or due to feedback to sensory areas, or a feedforward account due to neuronal response adaptation. This approach also opens the door to using confidence more broadly when studying the neural basis of decision making. |
Devin H. Kehoe; Selvi Aybulut; Mazyar Fallah Higher order, multifeatural object encoding by the oculomotor system Journal Article In: Journal of Neurophysiology, vol. 120, no. 6, pp. 3042–3062, 2018. @article{Kehoe2018a, Previous behavioral and physiological research has demonstrated that as the behavioral relevance of potential saccade goals increases, they elicit more competition during target selection processing, as evidenced by increased saccade curvature and neural activity. However, these effects have only been demonstrated for lower order feature singletons, and it remains unclear whether more complicated featural differences between higher order objects also elicit vector modulation. Therefore, we measured the curvature of human saccades elicited by distractors bilaterally flanking a target during a visual search saccade task and systematically varied subsets of features shared between the two distractors and the target, referred to as objective similarity (OS). Our results demonstrate that saccades deviated away from the distractor highest in OS to the target and that there was a linear relationship between the magnitude of saccade deviation and the number of feature differences between the most similar distractor and the target. Furthermore, an analysis of curvature over the time course of the saccade demonstrated that curvature only occurred in the first 20–30 ms of the movement. Given the multifeatural complexity of the novel stimuli, these results suggest that saccadic target selection processing involves dynamically reweighting vector representations for movement planning to several possible targets based on their behavioral relevance. |
Ladislav Kesner; Dominika Grygarová; Iveta Fajnerová; Jiří Lukavský; Tereza Nekovářová; Jaroslav Tintěra; Yuliya Zaytseva; Jiří Horáček Perception of direct vs. averted gaze in portrait paintings: An fMRI and eye-tracking study Journal Article In: Brain and Cognition, vol. 125, pp. 88–99, 2018. @article{Kesner2018, In this study, we use separate eye-tracking measurements and functional magnetic resonance imaging to investigate the neuronal and behavioral response to painted portraits with direct versus averted gaze. We further explored modulatory effects of several painting characteristics (premodern vs modern period, influence of style and pictorial context). In the fMRI experiment, we show that the direct versus averted gaze elicited increased activation in the lingual and inferior occipital gyri and the fusiform face area, as well as in several areas involved in attentional and social cognitive processes, especially the theory of mind: angular gyrus/temporo-parietal junction, inferior frontal gyrus and dorsolateral prefrontal cortex. The additional eye-tracking experiment showed that participants spent more time viewing the portrait's eyes and mouth when the portrait's gaze was directed towards the observer. These results suggest that static and, in some cases, highly stylized depictions of human beings in artistic portraits elicit brain activation commensurate with the experience of being observed by a watchful intelligent being. They thus involve observers in implicit inferences of the painted subject's mental states and emotions. We further confirm the substantial influence of representational medium on brain activity. |
Markku Kilpeläinen; Mark A. Georgeson Luminance gradient at object borders communicates object location to the human oculomotor system Journal Article In: Scientific Reports, vol. 8, pp. 1593, 2018. @article{Kilpelaeinen2018, The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, on both the neural and the perceptual level, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradients at the square's edges. |
Esther S. Kim; Salima Suleman; Tammy Hopper Cognitive effort during a short-term memory (STM) task in individuals with aphasia Journal Article In: Journal of Neurolinguistics, vol. 48, pp. 190–198, 2018. @article{Kim2018b, People with aphasia (PWA) have been shown to demonstrate limited short-term memory (STM) span capacity, but little is known about the degree of cognitive effort PWA expend when completing STM tasks. For decades, researchers have used task-evoked pupillary responses (TEPRs) to infer cognitive effort; pupil size increases as the difficulty of a task increases. The purpose of this study was to examine TEPRs while PWA and healthy control participants completed a STM picture span task. Sixteen PWA and 16 demographically matched control participants completed paper-based and computer-based versions of a picture span task. Measures of pupil size were collected using an eye-tracking system during the computer-based task. Both PWA and control participants demonstrated increased pupil size as STM demands increased. The two groups did not differ in pupil size across different span levels; however, PWA performed significantly poorer than matched controls in terms of behavioural accuracy scores. This suggests that although PWA exerted similar amounts of effort to control participants as task demands increased, they did not show a corresponding increase in accuracy. These data provide support for the feasibility of using TEPRs to investigate cognitive effort in PWA. In conjunction with behavioural performance measures, measures of cognitive effort may provide a holistic picture of cognitive and linguistic functioning in PWA. |
Kinam Kim; Minsung Kim Effects of task demand and familiarity with scenes in visuospatial representations on the perception and processing of spatial information Journal Article In: Journal of Geography, vol. 117, no. 5, pp. 193–204, 2018. @article{Kim2018, This study examined the effects of task demand and familiarity on students' perception and processing of spatial information upon viewing visuospatial representations. Participants in South Korea were told that they would travel through an area, either drawing a map or observing the scenery depicted in photographs. The level of familiarity in the photographs was manipulated in three categories: neighborhood, Seoul (capital city of South Korea), and foreign cities. In two experiments, this study investigated students' eye movements, memory, and response sensitivity and bias. The results indicate that the participants in the map-drawing condition focused on structural information, such as routes, and that their memory of the scenes was more accurate. Moreover, the map-drawing group students were more sensitive and prudent in their responses. The increased level of familiarity also made students focus on structural information. The findings provide useful strategies for geography educators to use visuospatial representations. |
Timothy L. Hodgson; Frouke Hermens; Kyla Pennington; Jade S. Pickering; Gemma Ezard; Richard Clarke; Jagdish Sharma; Adrian M. Owen Eye movements in the “Morris Maze” spatial working memory task reveal deficits in strategic planning Journal Article In: Journal of Cognitive Neuroscience, vol. 31, no. 4, pp. 497–509, 2018. @article{Hodgson2018, Analysis of eye movements can provide insights into processes underlying performance of cognitive tasks. We recorded eye movements in healthy participants and people with idiopathic Parkinson disease during a token foraging task based on the spatial working memory component of the widely used Cambridge Neuropsychological Test Automated Battery. Participants selected boxes (using a mouse click) to reveal hidden tokens. Tokens were never hidden under a box where one had been found before, such that memory had to be used to guide box selections. A key measure of performance in the task is between search errors (BSEs) in which a box where a token has been found is selected again. Eye movements were found to be most commonly directed toward the next box to be clicked on, but fixations also occurred at rates higher than expected by chance on boxes farther ahead or back along the search path. Looking ahead and looking back in this way was found to correlate negatively with BSEs and was significantly reduced in patients with Parkinson disease. Re-fixating boxes where tokens had already been found correlated with BSEs and the severity of Parkinson disease symptoms. It is concluded that eye movements can provide an index of cognitive planning in the task. Refixations on locations where a token has been found may also provide a sensitive indicator of visuospatial memory integrity. Eye movement measures derived from the spatial working memory task may prove useful in the assessment of executive functions as well as neurological and psychiatric diseases in the future. |
Qiaoli Huang; Jianrong Jia; Qiming Han; Huan Luo Fast-backward replay of sequentially memorized items in humans Journal Article In: eLife, vol. 7, pp. 1–21, 2018. @article{Huang2018, Storing temporal sequences of events (i.e., sequence memory) is fundamental to many cognitive functions. However, it is unknown how the sequence order information is maintained and represented in working memory and its behavioral significance, particularly in human subjects. We recorded electroencephalography (EEG) in combination with a temporal response function (TRF) method to dissociate item-specific neuronal reactivations. We demonstrate that serially remembered items are successively reactivated during memory retention. The sequential replay displays two interesting properties compared to the actual sequence. First, the item-by-item reactivation is compressed within a 200–400 ms window, suggesting that external events are associated within a plasticity-relevant window to facilitate memory consolidation. Second, the replay is in a temporally reversed order and is strongly related to the recency effect in behavior. This fast-backward replay, previously revealed in rat hippocampus and demonstrated here in human cortical activities, might constitute a general neural mechanism for sequence memory and learning. |
Clare Kirtley How images draw the eye: An eye-tracking study of composition Journal Article In: Empirical Studies of the Arts, vol. 36, no. 1, pp. 41–70, 2018. @article{Kirtley2018, In his instructional art book, Andrew Loomis provides images and corresponding diagrams that indicate how the composition of the image should guide the viewer's eye. Using these images, we examined whether participants would follow the suggested cues. Participants' eyes were tracked as they viewed the images, allowing us to take measures of where they entered and exited the image, whether they attended to the focal part of the image, and what path they followed between these components. These measures could then be compared with Loomis' suggestions, to determine if the elements did indeed have the proposed influence. While viewers were attracted to the focal points, and spent the most time examining these, they did not use the entry and exit points marked by Loomis, and the suggested viewing paths were not closely followed. It appears that Loomis' suggested elements of composition do not strongly influence viewers' eye movements. |
Nicholas J. Kleene; Melchi M. Michel The capacity of trans-saccadic memory in visual search Journal Article In: Psychological Review, vol. 125, no. 3, pp. 391–408, 2018. @article{Kleene2018, Maintaining a continuous, stable perception of the visual world relies on the ability to integrate information from previous fixations with the current one. An essential component of this integration is trans-saccadic memory (TSM), memory for information across saccades. TSM capacity may play a limiting role in tasks requiring efficient trans-saccadic integration, such as multiple-fixation visual search tasks. We estimated TSM capacity and investigated its relationship to visual short-term memory (VSTM) using two visual search tasks, one in which participants maintained fixation while saccades were simulated and another where participants made a sequence of actual saccades. We derived a memory-limited ideal observer model to estimate lower-bounds on memory capacities from human search performance. Analysis of the single-fixation search task resulted in capacity estimates (4–8 bits) consistent with those reported for traditional VSTM tasks. However, analysis of the multiple-fixation search task resulted in capacity estimates (15–32 bits) significantly larger than those measured for VSTM. Our results suggest that TSM plays an important role in visual search tasks, that the effective capacity of TSM is greater than or equal to that of VSTM, and that the TSM capacity of human observers significantly limits performance in multiple-fixation visual search tasks. |
Barrie P. Klein; Susan F. Pas; Alessio Fracasso; Serge O. Dumoulin; Jelle A. Dijk; Chris L. E. Paffen Cortical depth dependent population receptive field attraction by spatial attention in human V1 Journal Article In: NeuroImage, vol. 176, pp. 301–312, 2018. @article{Klein2018, Visual spatial attention concentrates neural resources at the attended location. Recently, we demonstrated that voluntary spatial attention attracts population receptive fields (pRFs) toward its location throughout the visual hierarchy. Theoretically, both a feed forward or feedback mechanism could underlie pRF attraction in a given cortical area. Here, we use sub-millimeter ultra-high field functional MRI to measure pRF attraction across cortical depth and assess the contribution of feed forward and feedback signals to pRF attraction. In line with previous findings, we find consistent attraction of pRFs with voluntary spatial attention in V1. When assessed as a function of cortical depth, we find pRF attraction in every cortical portion (deep, center and superficial), although the attraction is strongest in deep cortical portions (near the gray-white matter boundary). Following the organization of feed forward and feedback processing across V1, we speculate that a mixture of feed forward and feedback processing underlies pRF attraction in V1. Specifically, we propose that feedback processing contributes to the pRF attraction in deep cortical portions. |
Jonas Knöll; Jonathan W. Pillow; Alexander C. Huk Lawful tracking of visual motion in humans, macaques, and marmosets in a naturalistic, continuous, and untrained behavioral context Journal Article In: Proceedings of the National Academy of Sciences, vol. 115, no. 44, pp. E10486–E10494, 2018. @article{Knoell2018, Much study of the visual system has focused on how humans and monkeys integrate moving stimuli over space and time. Such assessments of spatiotemporal integration provide fundamental grounding for the interpretation of neurophysiological data, as well as how the resulting neural signals support perceptual decisions and behavior. However, the insights supported by classical characterizations of integration performed in humans and rhesus monkeys are potentially limited with respect to both generality and detail: Standard tasks require extensive amounts of training, involve abstract stimulus–response mappings, and depend on combining data across many trials and/or sessions. It is thus of concern that the integration observed in classical tasks involves the recruitment of brain circuits that might not normally subsume natural behaviors, and that quantitative analyses have limited power for characterizing single-trial or single-session processes. Here we bridge these gaps by showing that three primate species (humans, macaques, and marmosets) track the focus of expansion of an optic flow field continuously and without substantial training. This flow-tracking behavior was volitional and reflected substantial temporal integration. Most strikingly, gaze patterns exhibited lawful and nuanced dependencies on random perturbations in the stimulus, such that repetitions of identical flow movies elicited remarkably similar eye movements over long and continuous time periods. 
These results demonstrate the generality of spatiotemporal integration in natural vision, and offer a means for studying integration outside of artificial tasks while maintaining lawful and highly reliable behavior. |
Stephan Koenig; Metin Uengoer; Harald Lachnit Pupil dilation indicates the coding of past prediction errors: Evidence for attentional learning theory Journal Article In: Psychophysiology, vol. 55, no. 4, pp. e13020, 2018. @article{Koenig2018, The attentional learning theory of Pearce and Hall (1980) predicts more attention to uncertain cues that have caused a high prediction error in the past. We examined how the cue-elicited pupil dilation during associative learning was linked to such error-driven attentional processes. In three experiments, participants were trained to acquire associations between different cues and their appetitive (Experiment 1), motor (Experiment 2), or aversive (Experiment 3) outcomes. All experiments were designed to examine differences in the processing of continuously reinforced cues (consistently followed by the outcome) versus partially reinforced, uncertain cues (randomly followed by the outcome). We measured the pupil dilation elicited by the cues in anticipation of the outcome and analyzed how this conditioned pupil response changed over the course of learning. In all experiments, changes in pupil size complied with the same basic pattern: During early learning, consistently reinforced cues elicited greater pupil dilation than uncertain, randomly reinforced cues, but this effect gradually reversed to yield a greater pupil dilation for uncertain cues toward the end of learning. The pattern of data accords with the changes in prediction error and error-driven attention formalized by the Pearce-Hall theory. |
Catarina C. Kordsachia; Izelle Labuschagne; Julie C. Stout Visual scanning of the eye region of human faces predicts emotion recognition performance in Huntington's disease Journal Article In: Neuropsychology, vol. 32, no. 3, pp. 356–365, 2018. @article{Kordsachia2018, Objective: Previous research has consistently shown that the ability to recognize emotions from facial expressions is impaired in Huntington's disease (HD). The aim of this study was to examine whether people with the gene expansion for HD visually scan the most emotionally informative features of human faces less than unaffected individuals, and whether altered visual scanning predicts emotion recognition in HD beyond general disease-related decline. Method: We recorded eye movements of 25 participants either in the late premanifest or early stage of HD and 25 age-matched healthy control participants during a face-viewing task. The task involved the viewing of pictures depicting human faces with angry, disgusted, fearful, happy, and neutral expressions, and evaluating each face on a valence rating scale. For data analysis, we defined 2 regions of interest (ROIs) on each picture, including an eye-ROI and a nose/mouth-ROI. Emotion recognition abilities were measured using an established emotion-recognition task and general disease-related decline was measured using the UHDRS motor score. Results: Compared to the control participants, the HD participants spent less time looking at the ROIs relative to the total time spent looking at the pictures (partial η2 = 0.10), and made fewer fixations on the ROIs (partial η2 = 0.16). Furthermore, visual scanning of the eye-ROI, but not the nose/mouth-ROI, predicted emotion recognition performance in the HD group, over and beyond general disease-related decline. 
Conclusion: The emotion recognition deficit in HD may partly be explained by general disease-related decline in cognition and motor functioning and partly by a social-emotional deficit, which is reflected in reduced eye-viewing. |
Christof Körner; Margit Höfler; Anja Ischebeck; Iain D. Gilchrist The consequence of a limited-capacity short-term memory on repeated visual search Journal Article In: Visual Cognition, vol. 26, no. 7, pp. 552–562, 2018. @article{Koerner2018, When participants search the same letter display repeatedly for different targets we might expect performance to improve on each subsequent search as they memorize characteristics of the display. However, here we find that search performance improved from a first search to a second search but not for a third search of the same display. This is predicted by a simple model that supports search with only a limited capacity short-term memory for items in the display. To support this model we show that a short-term memory recency effect is present in both the second and the third search. The magnitude of these effects is the same in both searches and as a result there is no additional benefit from the second to the third search. |
Paweł Korpal; Katarzyna Stachowiak-Szymczak The whole picture: Processing of numbers and their context in simultaneous interpreting Journal Article In: Poznan Studies in Contemporary Linguistics, vol. 54, no. 3, pp. 335–354, 2018. @article{Korpal2018, This paper presents an eye-tracking study in which number processing in simultaneous interpreting was investigated. Interpreting accuracy and eye behaviour were studied together to unveil the processing and rendering of numbers by interpreting trainees (N = 22) and professional interpreters (N = 26). While professional interpreters rendered numerals and the context in which they appeared with better accuracy, there was also a positive correlation between number interpreting accuracy and context interpreting accuracy. Our results indicate that interpreting arithmetic values of numerals is more cognitively demanding than interpreting their context, which is reflected in longer mean fixation duration on numbers than on the elements they referred to. Further research is needed to investigate numerical data processing in other tasks, involving other language pairs and interpreting directionality. The study outcomes may be a useful contribution to research on the cognitive aspects of simultaneous interpreting, numerical data processing, as well as interpreter training. |
N. Kovácsová; C. D. D. Cabrall; S. J. Antonisse; T. Haan; R. Namen; J. L. Nooren; R. Schreurs; M. P. Hagenzieker; Joost C. F. Winter Cyclists' eye movements and crossing judgments at uncontrolled intersections: An eye-tracking study using animated video clips Journal Article In: Accident Analysis and Prevention, vol. 120, no. May, pp. 270–280, 2018. @article{Kovacsova2018, Research indicates that crashes between a cyclist and a car often occur even when the cyclist must have seen the approaching car, suggesting the importance of hazard anticipation skills. This study aimed to analyze cyclists' eye movements and crossing judgments while approaching an intersection at different speeds. Thirty-six participants watched animated video clips with a car approaching an uncontrolled four-way intersection and continuously indicated whether they would cross the intersection first. We varied (1) car approach scenario (passing, colliding, stopping), (2) traffic complexity (one or two approaching cars), and (3) cyclist's approach speed (15, 25, or 35 km/h). Results showed that participants looked at the approaching car when it was relevant to the task of crossing the intersection and posed an imminent hazard, and they directed less attention to the car after it had stopped or passed the intersection. Traffic complexity resulted in divided attention between the two cars, but participants retained most visual attention to the car that came from the right and had right of way. Effects of cycling speed on cyclists' gaze behavior and crossing judgments were small to moderate. In conclusion, cyclists' visual focus and crossing judgments are governed by situational factors (i.e., objects with priority and future collision potential), whereas cycling speed does not have substantial effects on eye movements and crossing judgments. |
Jerrold Jeyachandra; Yoongoo Nam; YoungWook Kim; Gunnar Blohm; Aarlenne Zein Khan Transsaccadic memory of multiple spatially variant and invariant object features Journal Article In: Journal of Vision, vol. 18, no. 1, pp. 1–14, 2018. @article{Jeyachandra2018, Transsaccadic memory is a process by which remembered object information is updated across a saccade. To date, studies on transsaccadic memory have used simple stimuli—that is, a single dot or feature of an object. It remains unknown how transsaccadic memory occurs for more realistic, complex objects with multiple features. An object's location is a central feature for transsaccadic updating, as it is spatially variant, but other features such as size are spatially invariant. How these spatially variant and invariant features of an object are remembered and updated across saccades is not well understood. Here we tested how well 14 participants remembered either three different features together (location, orientation, and size) or a single feature at a time of a bar, either while fixating or with an intervening saccade. We found that the intervening saccade influenced memory of all three features, with consistent biases of the remembered location, orientation, and size that were dependent on the direction of the saccade. These biases were similar whether participants remembered either a single feature or multiple features and were not observed with increased memory load (single vs. multiple features during fixation trials), confirming that these effects were specific to the saccade updating mechanisms. We conclude that multiple features of an object are updated together across eye movements, supporting the notion that spatially invariant features of an object are bound to their location in memory. |
Michael Jigo; Marisa Carrasco Attention alters spatial resolution by modulating second-order processing Journal Article In: Journal of Vision, vol. 18, no. 7, pp. 1–12, 2018. @article{Jigo2018a, Endogenous and exogenous visuospatial attention both alter spatial resolution, but they operate via distinct mechanisms. In texture segmentation tasks, exogenous attention inflexibly increases resolution even when detrimental for the task at hand and does so by modulating second-order processing. Endogenous attention is more flexible and modulates resolution to benefit performance according to task demands, but it is unknown whether it also operates at the second-order level. To answer this question, we measured performance on a second-order texture segmentation task while independently manipulating endogenous and exogenous attention. Observers discriminated a second-order texture target at several eccentricities. We found that endogenous attention improved performance uniformly across eccentricity, suggesting a flexible mechanism that can increase or decrease resolution based on task demands. In contrast, exogenous attention improved performance in the periphery but impaired it at central retinal locations, consistent with an inflexible resolution enhancement. Our results reveal that endogenous and exogenous attention both alter spatial resolution by differentially modulating second-order processing. |
Michael Jigo; Mengyuan Gong; Taosheng Liu Neural determinants of task performance during feature-based attention in human cortex Journal Article In: eNeuro, vol. 5, no. 1, pp. 1–15, 2018. @article{Jigo2018, Studies of feature-based attention have associated activity in a dorsal frontoparietal network with putative attentional priority signals. Yet, how this neural activity mediates attentional selection and whether it guides behavior are fundamental questions that require investigation. We reasoned that endogenous fluctuations in the quality of attentional priority should influence task performance. Human subjects detected a speed increment while viewing clockwise (CW) or counterclockwise (CCW) motion (baseline task) or while attending to either direction amid distracters (attention task). In an fMRI experiment, direction-specific neural pattern similarity between the baseline task and the attention task revealed a higher level of similarity for correct than incorrect trials in frontoparietal regions. Using transcranial magnetic stimulation (TMS), we disrupted posterior parietal cortex (PPC) and found a selective deficit in the attention task, but not in the baseline task, demonstrating the necessity of this cortical area during feature-based attention. These results reveal that frontoparietal areas maintain attentional priority that facilitates successful behavioral selection. |
Xinhong Jin; Yahong Jin; Shi Zhou; Shun-nan Yang; Shuzhi Chang; Hui Li Attentional biases toward body images in males at high risk of muscle dysmorphia Journal Article In: PeerJ, vol. 6, pp. 1–17, 2018. @article{Jin2018b, Objective. Although research on muscle dysmorphia (MD), a body dysmorphic disorder subtype, has recently increased, the causes and mechanisms underlying this disorder remain unclear. Results from studies examining disorders associated with body image suggest the involvement of self-schema in biasing attention toward specific body information. The present study examined whether individuals at higher risk of MD also display attentional biases toward specific types of body images. Methods. The validated Chinese version of the Muscle Appearance Satisfaction Scale was used to distinguish men at higher and lower risk of MD. Sixty-five adult Chinese men at higher (HRMD, n = 33) and lower risk of MD (LRMD |
Zhenlan Jin; Shulin Yue; Junjun Zhang; Ling Li Task-irrelevant emotional faces impair response adjustments in a double-step saccade task Journal Article In: Cognition and Emotion, vol. 32, no. 6, pp. 1347–1354, 2018. @article{Jin2018, Cognitive control enables us to adjust behaviours according to task demands, and emotion influences cognitive control. We examined how task-irrelevant emotional stimuli impact the ability to inhibit a prepared response and then programme another appropriate response. In the study, either a single target or two sequential targets appeared after emotional face images (positive, neutral, and negative). Subjects were required to freely view the emotional faces and make a saccade quickly upon target onset, but inhibit their initial saccades and redirect gaze to the second target if it appeared. We found that subjects were less successful at inhibiting their initial saccades as the inter-target delay increased. Emotional faces further reduced their inhibition ability with a longer delay, but not with a shorter delay. When subjects failed to inhibit the initial saccade, the longer delay produced a longer intersaccadic interval. In particular, positive faces lengthened the intersaccadic interval with a longer delay. These results show that the mere presence of emotional stimuli influences gaze redirection by impairing the ability to cancel a prepared saccade and delaying the programming of a corrective saccade. Therefore, we propose that the modulation of response adjustment exerted by emotional faces is related to the stage of initial response programming. |
W. Joseph MacInnes; Amelia R. Hunt; Alasdair D. F. Clarke; Michael D. Dodd A generative model of cognitive state from task and eye movements Journal Article In: Cognitive Computation, vol. 10, no. 5, pp. 703–717, 2018. @article{JosephMacInnes2018, The early eye tracking studies of Yarbus provided descriptive evidence that an observer's task influences patterns of eye movements, leading to the tantalizing prospect that an observer's intentions could be inferred from their saccade behavior. We investigate the predictive value of task and eye movement properties by creating a computational cognitive model of saccade selection based on instructed task and internal cognitive state using a Dynamic Bayesian Network (DBN). Understanding how humans generate saccades under different conditions and cognitive sets links recent work on salience models of low-level vision with higher level cognitive goals. This model provides a Bayesian, cognitive approach to top-down transitions in attentional set in pre-frontal areas along with vector-based saccade generation from the superior colliculus. Our approach is to begin with eye movement data that has previously been shown to differ across task. We first present an analysis of the extent to which individual saccadic features are diagnostic of an observer's task. Second, we use those features to infer an underlying cognitive state that potentially differs from the instructed task. Finally, we demonstrate how changes of cognitive state over time can be incorporated into a generative model of eye movement vectors without resorting to an external decision homunculus. Internal cognitive state frees the model from the assumption that instructed task is the only factor influencing observers' saccadic behavior. While the inclusion of hidden temporal state does not improve the classification accuracy of the model, it does allow accurate prediction of saccadic sequence results observed in search paradigms. 
Given the generative nature of this model, it is capable of saccadic simulation in real time. We demonstrated that the properties from its generated saccadic vectors closely match those of human observers given a particular task and cognitive state. Many current models of vision focus entirely on bottom-up salience to produce estimates of spatial “areas of interest” within a visual scene. While a few recent models do add top-down knowledge and task information, we believe our contribution is important in three key ways. First, we incorporate task as learned attentional sets that are capable of self-transition given only information available to the visual system. This matches influential theories of bias signals by Miller and Cohen (Annu Rev Neurosci 24:167–202, 2001) and implements selection of state without simply shifting the decision to an external homunculus. Second, our model is generative and capable of predicting sequence artifacts in saccade generation like those found in visual search. Third, our model generates relative saccadic vector information as opposed to absolute spatial coordinates. This matches more closely the internal saccadic representations as they are generated in the superior colliculus. |
Caroline Junge; Rianne Rooijen; Maartje E. J. Raijmakers Distributional information shapes infants' categorization of objects Journal Article In: Infancy, vol. 23, no. 6, pp. 917–926, 2018. @article{Junge2018, While distributional learning has been successfully demonstrated for auditory categorization, this study tests whether this mechanism also applies to object categorization: Ten-month-olds (n = 38) were familiarized with either a unimodal or bimodal distribution of a visual continuum. Using automatic eye tracking, we assessed categorization through the alternating/nonalternating paradigm. For infants in the bimodal condition, their average dwell time was larger for alternating trials than for nonalternating trials, while infants in the unimodal condition initially looked equally long at both types of trials. This group difference suggests that the shape of frequency distribution bears on the number of categories that infants construct from a continuum. Later in test, all infants show this alternating preference. We conclude that categorization is a flexible process, continuously adjusting itself to additional input. |
Juan E. Kamienkowski; Alexander Varatharajah; Mariano Sigman; Matias J. Ison Parsing a mental program: Fixation-related brain signatures of unitary operations and routines in natural visual search Journal Article In: NeuroImage, vol. 183, pp. 73–86, 2018. @article{Kamienkowski2018a, Visual search involves a sequence or routine of unitary operations (i.e. fixations) embedded in a larger mental global program. The process can indeed be seen as a program based on a while loop (while the target is not found), a conditional construct (whether the target is matched or not based on specific recognition algorithms) and a decision making step to determine the position of the next searched location based on existent evidence. Recent developments in our ability to co-register brain scalp potentials (EEG) during free eye movements has allowed investigating brain responses related to fixations (fixation-Related Potentials; fERPs), including the identification of sensory and cognitive local EEG components linked to individual fixations. However, the way in which the mental program guiding the search unfolds has not yet been investigated. We performed an EEG and eye tracking co-registration experiment in which participants searched for a target face in natural images of crowds. Here we show how unitary steps of the program are encoded by specific local target detection signatures and how the positioning of each unitary operation within the global search program can be pinpointed by changes in the EEG signal amplitude as well as the signal power in different frequency bands. By simultaneously studying brain signatures of unitary operations and those occurring during the sequence of fixations, our study sheds light into how local and global properties are combined in implementing visual routines in natural tasks. |
Pavel Kozik; Laura G. Tateosian; Christopher G. Healey; James T. Enns Impressionism-inspired data visualizations are both functional and liked Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 13, no. 3, pp. 266–276, 2018. @article{Kozik2018, Creating data visualizations that are functional and aesthetically pleasing is important yet difficult. Here we ask whether creating visualizations using the painterly techniques of impressionist-era artists may help. In two experiments we rendered weather data from the Intergovernmental Panel on Climate Change into a common visualization style, glyph, and impressionism-inspired painting styles, sculptural, containment, and impasto. Experiment 1 tested participants' recognition memory for these visualizations and found that impasto, a style resembling paintings like Starry Night (1889) by Vincent van Gogh, was comparable with glyphs and superior to the other impressionist styles. Experiment 2 tested participants' ability to report the prevalence of the color blue (representative of a single weather condition) within each visualization, and here impasto was superior to glyphs and the other impressionist styles. Questionnaires administered at study completion revealed that styles participants liked had higher task performance relative to less liked styles. Incidental eye tracking in both studies also found impressionist visualizations elicited greater visual exploration than glyphs. These results offer a proof-of-concept that the painterly techniques of impressionism, and particularly those of the impasto style, can create visualizations that are functional, liked, and encourage visual exploration. |
Kristina Krasich; Robert McManus; Stephen Hutt; Myrthe Faber; Sidney K. D'Mello; James Brockmole Gaze-based signatures of mind wandering during real-world scene processing Journal Article In: Journal of Experimental Psychology: General, vol. 147, no. 8, pp. 1111–1124, 2018. @article{Krasich2018, Physiological limitations on the visual system require gaze to move from location to location to extract the most relevant information within a scene. Therefore, gaze provides a real-time index of the information-processing priorities of the visual system. We investigated gaze allocation during mind wandering (MW), a state where cognitive priorities shift from processing task-relevant external stimuli (i.e., the visual world) to task-irrelevant internal thoughts. In both a main study and a replication, we recorded the eye movements of college-aged adults who studied images of urban scenes and responded to pseudorandom thought probes on whether they were mind wandering or attentively viewing at the time of the probe. Probe-caught MW was associated with fewer and longer fixations, greater fixation dispersion, and more frequent eyeblinks (only observed in the main study) relative to periods of attentive scene viewing. These findings demonstrate that gaze indices typically considered to represent greater engagement with scene processing (e.g., longer fixations) can also indicate MW. In this way, the current work exhibits a need for empirical investigations and computational models of gaze control to account for MW for a more accurate representation of the moment-to-moment information-processing priorities of the visual system. |
Krzysztof Krejtz; Andrew T. Duchowski; Anna Niedzielska; Cezary Biele; Izabela Krejtz Eye tracking cognitive load using pupil diameter and microsaccades with fixed gaze Journal Article In: PLoS ONE, vol. 13, no. 9, pp. e0203629, 2018. @article{Krejtz2018, Pupil diameter and microsaccades are captured by an eye tracker and compared for their suitability as indicators of cognitive load (as beset by task difficulty). Specifically, two metrics are tested in response to task difficulty: (1) the change in pupil diameter with respect to inter- or intra-trial baseline, and (2) the rate and magnitude of microsaccades. Participants performed easy and difficult mental arithmetic tasks while fixating a central target. Inter-trial change in pupil diameter and microsaccade magnitude appear to adequately discriminate task difficulty, and hence cognitive load, if the implied causality can be assumed. This paper's contribution corroborates previous work concerning microsaccade magnitude and extends this work by directly comparing microsaccade metrics to pupillometric measures. To our knowledge this is the first study to compare the reliability and sensitivity of task-evoked pupillary and microsaccadic measures of cognitive load. |
Gustav Kuhn; Robert Teszka Don't get misdirected! Differences in overt and covert attentional inhibition between children and adults Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 3, pp. 688–694, 2018. @article{Kuhn2018, Previous research has revealed marked improvements in cognitive control between the age of 10 years and adulthood. The aim of the current study was to explore differences in attentional control between adults and children within a natural context, namely whilst they were watching a magic trick. We measured participants' eye movements whilst they watched a misdirection trick in which attentional misdirection was used to prevent observers from noticing a salient visual event. Half of our participants failed to detect this event even though it took place in full view. Children below the age of 10 were significantly less likely to notice the event than the adults and were also more reliably overtly misdirected (i.e., where they looked). Our results illustrate that within a more naturalistic context children are significantly more distracted than adults, and this distraction can have major implications on their visual awareness. |
Gustav Kuhn; Ieva Vacaityte; Antonia D. C. D'Souza; Abbie C. Millett; Geoff G. Cole Mental states modulate gaze following, but not automatically Journal Article In: Cognition, vol. 180, pp. 1–9, 2018. @article{Kuhn2018a, A number of authors have suggested that the computation of another person's visual perspective occurs automatically. In the current work we examined whether perspective-taking is indeed automatic or more likely to be due to mechanisms associated with conscious control. Participants viewed everyday scenes in which a single human model looked towards a target object. Importantly, the model's view of the object was either visible or occluded by a physical barrier (e.g., briefcase). Results showed that when observers were given five seconds to freely view the scenes, eye movements were faster to fixate the object when the model could see it compared to when it was occluded. By contrast, when observers were required to rapidly discriminate a target superimposed upon the same object no such visibility effect occurred. We also employed the barrier procedure together with the most recent method (i.e., the ambiguous number paradigm) to have been employed in assessing the perspective-taking theory. Results showed that the model's gaze facilitated responses even when this agent could not see the critical stimuli. We argue that although humans do take into account the perspective of other people this does not occur automatically. |
Katarzyna Kurcyus; Efsun Annac; Nina M. Hanning; Ashley D. Harris; Georg Oeltzschner; Richard Edden; Valentin Riedl Opposite dynamics of GABA and glutamate levels in the occipital cortex during visual processing Journal Article In: Journal of Neuroscience, vol. 38, no. 46, pp. 9967–9976, 2018. @article{Kurcyus2018, Magnetic resonance spectroscopy (MRS) measures the two most common inhibitory and excitatory neurotransmitters, GABA and glutamate, in the human brain. However, the role of MRS-derived GABA and glutamate signals in relation to system-level neural signaling and behavior is not fully understood. In this study, we investigated levels of GABA and glutamate in the visual cortex of healthy human participants (both genders) in three functional states with increasing visual input. Compared with a baseline state of eyes closed, GABA levels decreased after opening the eyes in darkness and Glx levels remained stable during eyes open but increased with visual stimulation. In relevant states, GABA and Glx correlated with the amplitude of fMRI signal fluctuations. Furthermore, visual discriminatory performance correlated with the level of GABA, but not Glx. Our study suggests that differences in brain states can be detected through the contrasting dynamics of GABA and Glx, which has implications in interpreting MRS measurements. |
Archonteia Kyroudi; Kristoffer Petersson; Mahmut Ozsahin; Jean Bourhis; François Bochud; Raphaël Moeckli Analysis of the treatment plan evaluation process in radiotherapy through eye tracking Journal Article In: Zeitschrift fur Medizinische Physik, vol. 28, no. 4, pp. 318–324, 2018. @article{Kyroudi2018, Background and purpose: Treatment plan evaluation is a clinical decision-making problem that involves visual search and analysis in a contextually rich environment, including delineated structures and isodose lines superposed on CT data. It is a two-step process that includes visual analysis and clinical reasoning. In this work, we used eye tracking methods to gain more knowledge about the treatment plan evaluation process in radiation therapy. Materials and methods: Dose distributions on a single transverse slice of ten prostate cancer treatment plans were presented to eight decision makers. Their eye movements and fixations were recorded with an EyeLink1000 remote eye-tracker. Total evaluation time, dwell time, number and duration of fixations on pre-segmented areas of interest were measured. Results: The main structures receiving more and longer fixations (PTV, rectum, bladder) correspond to the main trade-offs evaluated in a typical prostate plan. Radiation oncologists made more fixations on the main structures compared to the medical physicists. Radiation oncologists fixated longer on the rectum when visited for the first time, while medical physicists fixated longer on the bladder. Conclusion: Our results quantify differences in the visual evaluation patterns between radiation oncologists and medical physicists, which indicate differences in their decision making strategies. |
Oryah C. Lancry-Dayan; Tal Nahari; Gershon Ben-Shakhar; Yoni Pertzov Do you know him? Gaze dynamics toward familiar faces on a concealed information test Journal Article In: Journal of Applied Research in Memory and Cognition, vol. 7, no. 2, pp. 291–302, 2018. @article{LancryDayan2018, Can gaze position reveal concealed knowledge? During visual processing, gaze allocation is influenced not only by features of the visual input, but also by previous exposure to objects. However, the dynamics of gaze allocation toward personally familiar items remains unclear, especially in the context of revealing concealed familiarity. When memorizing four pictures of faces on a short-term memory task, participants' gaze was initially directed toward a personally familiar face, followed by a strong avoidance from it. This avoidance was evident even when participants were instructed to conceal their familiarity and direct their gaze equally to all faces. On the other hand, participants were partially able to control the initial preference to fixate on the familiar face. By exploiting these patterns, a machine learning classification algorithm and signal detection analysis revealed impressive detection efficiency estimates, suggesting practical applications of recent theoretical insights from the domains of eye tracking and memory. |
Stephanie J. Larcombe; Christopher Kennard; Holly Bridge Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning Journal Article In: Human Brain Mapping, vol. 39, no. 1, pp. 145–156, 2018. @article{Larcombe2018, Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. |
Stephanie J. Larcombe; Yuliya Kulyomina; Nikoleta Antonova; Sara Ajina; Charlotte J. Stagg; Philip L. Clatworthy; Holly Bridge Visual training in hemianopia alters neural activity in the absence of behavioural improvement: A pilot study Journal Article In: Ophthalmic and Physiological Optics, vol. 38, no. 5, pp. 538–549, 2018. @article{Larcombe2018a, BACKGROUND: Damage to the primary visual cortex (V1) due to stroke often results in permanent loss of sight affecting one side of the visual field (homonymous hemianopia). Some rehabilitation approaches have shown improvement in visual performance in the blind region, but require a significant time investment. METHODS: Seven patients with cortical damage performed 400 trials of a motion direction discrimination task daily for 5 days. Three patients received anodal transcranial direct current stimulation (tDCS) during training, three received sham stimulation and one had no stimulation. Each patient had an assessment of visual performance and a functional magnetic resonance imaging (fMRI) scan before and after training to measure changes in visual performance and cortical activity. RESULTS: No patients showed improvement in visual function due to the training protocol, and application of tDCS had no effect on visual performance. However, following training, the neural response in motion area hMT+ to a moving stimulus was altered. When the stimulus was presented to the sighted hemifield, activity decreased in hMT+ of the damaged hemisphere. There was no change in hMT+ response when the stimulus was presented to the impaired hemifield. There was a decrease in activity in the inferior precuneus after training when the stimulus was presented to either the impaired or sighted hemifield. Preliminary analysis of tDCS data suggested that anodal tDCS interacted with the delivered training, modulating the neural response in hMT+ in the healthy side of the brain. 
CONCLUSION: Training can affect the neural responses in hMT+ even in the absence of change in visual performance. |
Rebecca K. Lawrence; Mark Edwards; Stephanie C. Goodhew Changes in the spatial spread of attention with ageing Journal Article In: Acta Psychologica, vol. 188, pp. 188–199, 2018. @article{Lawrence2018, Spatial attention is a necessary cognitive process, allowing for the direction of limited capacity resources to varying locations in the visual field for improved visual processing. Thus, understanding how ageing influences these processes is vital. The current study explored the relationship between the spatial spread of attention and healthy ageing using an inhibition of return task to tap visual attention processing. This task allowed us to measure the spatial distribution of inhibition, and thus acted as a marker for attentional spread. Past research has indicated minimal age differences in inhibitory spread. However, these studies used placeholder stimuli, which may have restricted the range over which age differences could be reliably measured. To address this, in Experiment One, we measured the relationship between the spatial spread of inhibition and healthy ageing using a method which did not employ placeholders. In contrast to past research, an age difference in inhibitory spread was observed, where in comparison to younger adults, older adults exhibited a relatively restricted spread of attention. Experiment Two then confirmed these findings, by directly comparing inhibitory spread for placeholder present and placeholder absent conditions, across younger and older adults. Again, it was found that age differences in inhibitory spread emerged, but only in the placeholder absent condition. Possible reasons for the observed age differences in attention are discussed. |
Campbell Le Heron; Sanjay G. Manohar; Olivia Plant; Kinan Muhammed; Ludovica Griffanti; Andrea Nemeth; Gwenaëlle Douaud; Hugh S. Markus; Masud Husain Dysfunctional effort-based decision-making underlies apathy in genetic cerebral small vessel disease Journal Article In: Brain, vol. 141, pp. 3193–3210, 2018. @article{LeHeron2018, Apathy is a syndrome of reduced motivation that commonly occurs in patients with cerebral small vessel disease, including those with the early onset form, CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy). The cognitive mechanisms underlying apathy are poorly understood and treatment options are limited. We hypothesized that disrupted effort-based decision-making, the cognitive process by which potential rewards and the effort cost required to obtain them are integrated to drive behaviour, might underlie the apathetic syndrome. Nineteen patients with a genetic diagnosis of CADASIL, as a model of ‘pure' vascular cognitive impairment, and 19 matched controls were assessed using two different behavioural paradigms and MRI. On a decision-making task, participants decided whether to accept or reject sequential offers of monetary reward in return for exerting physical effort via handheld dynamometers. Six levels of reward and six levels of effort were manipulated independently so offers spanned the full range of possible combinations. Choice, decision time and force metrics were recorded. Each participant's effort and reward sensitivity was estimated using a computational model of choice. On a separate eye movement paradigm, physiological reward sensitivity was indexed by measuring pupillary dilatation to increasing monetary incentives. This metric was related to apathy status and compared to the behavioural metric of reward sensitivity on the decision-making task. 
Finally, high quality diffusion imaging and tract-based spatial statistics were used to determine whether tracts linking brain regions implicated in effort-based decision-making were disrupted in apathetic patients. Overall, apathetic patients with CADASIL rejected significantly more offers on the decision-making task, due to reduced reward sensitivity rather than effort hypersensitivity. Apathy was also associated with blunted pupillary responses to incentives. Furthermore, these independent behavioural and physiological markers of reward sensitivity were significantly correlated. Non-apathetic patients with CADASIL did not differ from controls on either task, whilst actual motor performance of apathetic patients in both tasks was also normal. Apathy was specifically associated with reduced fractional anisotropy within tracts connecting regions previously associated with effort-based decision-making. These findings demonstrate behavioural, physiological and anatomical evidence that dysfunctional effort-based decision-making underlies apathy in patients with CADASIL, a model disorder for sporadic small vessel disease. Reduced incentivization by rewards rather than hypersensitivity to effort costs drives this altered pattern of behaviour. The study provides empirical evidence of a cognitive mechanism for apathy in cerebral small vessel disease, and identifies a promising therapeutic target for interventions to improve this debilitating condition. |
Rosanna G. Lea; Pamela Qualter; Sarah K. Davis; Juan Carlos Pérez-González; Munirah Bangee Trait emotional intelligence and attentional bias for positive emotion: An eye tracking study Journal Article In: Personality and Individual Differences, vol. 128, pp. 88–93, 2018. @article{Lea2018, Emotional intelligence (EI) may promote wellbeing through facilitation of adaptive attentional processing patterns. In the current study, a total of 54 adults (43 females, mean age = 25 years |
Matthew L. Leavitt; Florian Pieper; Adam J. Sachs; Julio C. Martinez-Trujillo A quadrantic bias in prefrontal representation of visual-mnemonic space Journal Article In: Cerebral Cortex, vol. 28, no. 7, pp. 2405–2421, 2018. @article{Leavitt2018, Single neurons in primate dorsolateral prefrontal cortex (dLPFC) are known to encode working memory (WM) representations of visual space. Psychophysical studies have shown that the horizontal and vertical meridians of the visual field can bias spatial information maintained in WM. However, most studies and models have tacitly assumed that dLPFC neurons represent mnemonic space homogenously. The anatomical organization of these representations has also eluded clear parametric description. We investigated these issues by recording from neuronal ensembles in macaque dLPFC with microelectrode arrays while subjects performed an oculomotor delayed-response task. We found that spatial WM representations in macaque dLPFC are biased by the vertical and horizontal meridians of the visual field, dividing mnemonic space into quadrants. This bias is reflected in single neuron firing rates, neuronal ensemble representations, the spike count correlation structure, and eye movement patterns. We also found that dLPFC representations of mnemonic space cluster anatomically in a nonretinotopic manner that partially reflects the organization of visual space. These results provide an explanation for known WM biases, and reveal novel principles of WM representation in prefrontal neuronal ensembles and across the cortical surface, as well as the need to reconceptualize models of WM to accommodate the observed representational biases. |
Benjamin D. Lester; Shaun P. Vecera Active listening delays attentional disengagement and saccadic eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 25, no. 3, pp. 1021–1027, 2018. @article{Lester2018, Successful goal-directed visual behavior depends on efficient disengagement of attention. Attention must be withdrawn from its current focus before being redeployed to a new object or internal process. Previous research has demonstrated that occupying cognitive processes with a secondary cellular phone conversation impairs attentional functioning and driving behavior. For example, attentional processing is significantly impacted by concurrent cell phone use, resulting in decreased explicit memory for on-road information. Here, we examined the impact of a critical component of cell-phone use—active listening—on the effectiveness of attentional disengagement. In the gap task—a saccadic manipulation of attentional disengagement—we measured saccade latencies while participants performed a secondary active listening task. Saccadic latencies significantly increased under an active listening load only when attention needed to be disengaged, indicating that active listening delays a disengagement operation. Simple dual-task interference did not account for the observed results. Rather, active cognitive engagement is required for measurable disengagement slowing to be observed. These results have implications for investigations of attention, gaze behavior, and distracted driving. Secondary tasks such as active listening or cell-phone conversations can have wide-ranging impacts on cognitive functioning, potentially impairing relatively elementary operations of attentional function, including disengagement. |
Aaron J. Levi; Jacob L. Yates; Alexander C. Huk; Leor N. Katz Strategic and dynamic temporal weighting for perceptual decisions in humans and macaques Journal Article In: eNeuro, vol. 5, no. 5, pp. 1–15, 2018. @article{Levi2018, Perceptual decision-making is often modeled as the accumulation of sensory evidence over time. Recent studies using psychophysical reverse correlation have shown that even though the sensory evidence is stationary over time, subjects may exhibit a time-varying weighting strategy, weighting some stimulus epochs more heavily than others. While previous work has explained time-varying weighting as a consequence of static decision mechanisms (e.g., decision bound or leak), here we show that time-varying weighting can reflect strategic adaptation to stimulus statistics, and thus can readily take a number of forms. We characterized the temporal weighting strategies of humans and macaques performing a motion discrimination task in which the amount of information carried by the motion stimulus was manipulated over time. Both species could adapt their temporal weighting strategy to match the time-varying statistics of the sensory stimulus. When early stimulus epochs had higher mean motion strength than late, subjects adopted a pronounced early weighting strategy, where early information was weighted more heavily in guiding perceptual decisions. When the mean motion strength was greater in later stimulus epochs, in contrast, subjects shifted to a marked late weighting strategy. These results demonstrate that perceptual decisions involve a temporally flexible weighting process in both humans and monkeys, and introduce a paradigm with which to manipulate sensory weighting in decision-making tasks. |
Daniel T. Levin; Adriane E. Seiffert; Sun-Joo Cho; Kelly E. Carter Are failures to look, to represent, or to learn associated with change blindness during screen-capture video learning? Journal Article In: Cognitive Research: Principles and Implications, vol. 3, pp. 1–12, 2018. @article{Levin2018, Although phenomena such as change blindness and inattentional blindness are robust, it is not entirely clear how these failures of visual awareness are related to failures to attend to visual information, to represent it, and to ultimately learn in visual environments. On some views, failures of visual awareness such as change blindness underestimate the true extent of otherwise rich visual representations. This might occur if people did represent the changing features but failed to compare them across views. In contrast, other approaches emphasize visual representations that are created only when they are functional. On this view, change blindness may be associated with poor representations of the changing properties. It is possible to compromise and propose that representational richness varies across contexts, but then it becomes important to detail relationships among attention, awareness, and learning in specific, but applicable, settings. We therefore assessed these relationships in an important visual setting: screen-captured instructional videos. In two experiments, we tested the degree to which attention (as measured by gaze) predicts change detection, and whether change detection is associated with visual representations and content learning. We observed that attention sometimes predicted change detection, and that change detection was associated with representations of attended objects. However, there was no relationship between change detection and learning. |
Jie Li; Lauri Oksama; Jukka Hyönä Close coupling between eye movements and serial attentional refreshing during multiple-identity tracking Journal Article In: Journal of Cognitive Psychology, vol. 30, no. 5-6, pp. 609–626, 2018. @article{Li2018a, Multiple-identity tracking (MIT) is a dynamic task in which observers track multiple moving objects of distinct identities and then report the location of each target object. The present study examined participants' eye movements during MIT in order to investigate the relationship between eye movements and attentional performance during the task. The results showed that fixations were predominantly directed to individual targets during tracking. Fixations that successfully landed on targets dwelled for longer durations; otherwise, they were terminated quickly. As the attentional demands for processing the targets increased, fixations landed on the targets more frequently while fixations outside targets decreased both in number and duration. The attentionally more demanding targets were fixated more frequently than the attentionally less demanding ones. The most recently fixated target was tracked with higher performance, while the tracking accuracy for the more previously fixated targets gradually decreased. Taken together, the results indicate that fixations are tightly coupled with attention during MIT, switching serially from target to target for refreshing each object representation to facilitate the tracking of identities and locations of multiple targets. |
Matthew G. Huebner; Jo-Anne LeFevre Selection of procedures in mental subtraction: Use of eye movements as a window on arithmetic processing Journal Article In: Canadian Journal of Experimental Psychology, vol. 72, no. 3, pp. 171–182, 2018. @article{Huebner2018, Adults who use mental procedures other than direct retrieval to solve simple arithmetic problems typically make more errors and respond more slowly than individuals who rely on retrieval. The present study examined how this extra time was distributed across problem components when adults (n = 40) solved small (e.g., 5 – 2) and large (e.g., 17 – 9) subtraction problems. Two performance groups (i.e., retrievers and procedure users) were created based on a 2-group cluster analysis using statistics derived from the ex-Gaussian model of reaction time (RT) distributions (i.e., μ and τ) for both small and large problems. Cluster results differentiated individuals based on the frequency with which they used retrieval versus procedural strategies, supporting the view that differences in mu and tau reflected differences in choice of strategies used. Patterns of eye movements over time were also dramatically different across clusters, and provide strong support for the view that individuals were using different mental procedures to solve these problems. We conclude that eye-movement patterns can be used to distinguish fluent individuals who readily use retrieval from those who rely more on procedural strategies, even if traditional self-report methods are unavailable. |
Lynn Huestegge; Tristan Herbert Pötzsch Integration processes during frequency graph comprehension: Performance and eye movements while processing tree maps versus pie charts Journal Article In: Applied Cognitive Psychology, vol. 32, no. 2, pp. 200–216, 2018. @article{Huestegge2018, Frequency graph types differ in how data are translated into visual representations. We compared 2 visualization methods, a traditional circular representation (pie chart) and a rectangular representation (constant column width tree map), which were hypothesized to differ regarding the cognitive ease of visual comparison processes. Performance was evaluated in tasks involving proportion and comparison judgments under both highly controlled and more realistic circumstances. The results showed performance benefits (in terms of reduced response times or error rates) for rectangular representations. Additional eye movement analyses revealed that this benefit was mainly due to a facilitation of scanning the graph for relevant information. The results suggest that facilitating comparison processes by representing the critical variable in less complex visual dimensions (i.e., straight length with constant orientation instead of surface area or curved length) eventually enhances the efficiency of integration processes during graph comprehension. |
Anna E. Hughes Dissociation between perception and smooth pursuit eye movements in speed judgments of moving Gabor targets Journal Article In: Journal of Vision, vol. 18, no. 4, pp. 1–19, 2018. @article{Hughes2018, The relationship between eye movements and subjective perception is still relatively poorly understood. In this study, participants tracked the movement of a Gabor patch and made perceptual judgments of its speed using a two-interval forced choice task. The Gabor patch could either have a static carrier or a carrier moving in the same or opposite direction as the overall envelope motion. We found that smooth pursuit speed was strongly affected by the internal motion of the Gabor carrier, with faster smooth pursuit being made to targets with internal motion in the same direction as overall motion compared to targets with internal motion in the opposite direction. However, we found that there were only small and highly variable differences in the perceptual speed judgments made simultaneously, and that these perceptual and smooth pursuit measures did not significantly correlate with each other. This contrasts with the number of catch-up saccades (saccades made in the direction of overall target motion), which was significantly correlated with the simultaneous perceptual judgments. There was also a significant correlation between perceptual judgments and the difference between the target and eye position immediately before a saccade. These results suggest that it is possible to see dissociations between vision and action in this task, and that the specific type of visual action studied may determine the relationship with perception. |
Stefan Huijser; Marieke K. van Vugt; Niels A. Taatgen The wandering self: Tracking distracting self-generated thought in a cognitively demanding context Journal Article In: Consciousness and Cognition, vol. 58, pp. 170–185, 2018. @article{Huijser2018, We investigated how self-referential processing (SRP) affected self-generated thought in a complex working memory (CWM) task to test the predictions of a computational cognitive model. This model described self-generated thought as resulting from competition between task- and distracting processes, and predicted that self-generated thought interferes with rehearsal, reducing memory performance. SRP was hypothesized to influence this goal competition process by encouraging distracting self-generated thinking. We used a spatial CWM task to examine if SRP instigated such thoughts, and employed eye-tracking to examine rehearsal interference in eye movements and self-generated thinking in pupil size. The results showed that SRP was associated with lower performance and higher rates of self-generated thought. Self-generated thought was associated with less rehearsal, and we observed a smaller pupil size for mind wandering. We conclude that SRP can instigate self-generated thought and that goal competition provides a likely explanation for how self-generated thought arises in a demanding task. |
Rosalind Hutchings; Romina Palermo; Jason Bruggemann; John R. Hodges; Olivier Piguet; Fiona Kumfor Looking but not seeing: Increased eye fixations in behavioural-variant frontotemporal dementia Journal Article In: Cortex, vol. 103, pp. 71–80, 2018. @article{Hutchings2018, Face processing plays a central role in human communication, with the eye region a particularly important cue for discriminating emotions. Indeed, reduced attention to the eyes has been argued to underlie social deficits in a number of clinical populations. Despite well-established impairments in facial affect recognition in behavioural-variant frontotemporal dementia, whether these patients also have perturbed facial scanning is yet to be investigated. The current study employed eye tracking to record visual scanning of faces in 20 behavioural-variant frontotemporal dementia patients and 21 controls. Remarkably, behavioural-variant frontotemporal dementia patients displayed more fixations to the eyes of emotional faces, compared to controls. Neural regions associated with fixations to the eyes included the left inferior frontal gyrus, right cerebellum and middle temporal gyrus. Our study is the first to show such compensatory functions in behavioural-variant frontotemporal dementia and suggests a feedback-style network, including anterior and posterior brain regions, is involved in early face processing. |
John P. Hutson; Joseph P. Magliano; Lester C. Loschky Understanding moment-to-moment processing of visual narratives Journal Article In: Cognitive Science, vol. 42, no. 8, pp. 2999–3033, 2018. @article{Hutson2018, What role do moment-to-moment comprehension processes play in visual attentional selection in picture stories? The current work uniquely tested the role of bridging inference generation processes on eye movements while participants viewed picture stories. Specific components of the Scene Perception and Event Comprehension Theory (SPECT) were tested. Bridging inference generation was induced by manipulating the presence of highly inferable actions embedded in picture stories. When inferable actions are missing, participants have increased viewing times for the immediately following critical image (Magliano, Larson, Higgs, & Loschky, 2016). This study used eye-tracking to test competing hypotheses about the increased viewing time: (a) Computational Load: inference generation processes increase overall computational load, producing longer fixation durations; (b) Visual Search: inference generation processes guide eye-movements to pick up inference-relevant information, producing more fixations. Participants had similar fixation durations, but they made more fixations while generating inferences, with that process starting from the fifth fixation. A follow-up hypothesis predicted that when generating inferences, participants fixate scene regions important for generating the inference. A separate group of participants rated the inferential-relevance of regions in the critical images, and results showed that these inferentially relevant regions predicted differences in other viewers' eye movements. Thus, viewers' event models in working memory affect visual attentional selection while viewing visual narratives. |
Masato Inoue; Shigeru Kitazawa Motor error in parietal area 5 and target error in area 7 drive distinctive adaptation in reaching Journal Article In: Current Biology, vol. 28, pp. 2250–2262, 2018. @article{Inoue2018, Errors in reaching drive trial-by-trial adaptation to compensate for the error. Parietal association areas are implicated in error coding, but whether the parietal error signals directly drive adaptation remains unknown. We first examined the activity of neurons in areas 5 and 7 while two monkeys performed rapid target reaching to clarify whether and how the parietal error signals drive adaptation in reaching. We introduced random errors using a motor-driven prism device to augment random motor errors in reaching. Neurons in both regions encoded information on the target position prior to reaching and information on the motor error after reaching. However, post-movement microstimulation caused trial-by-trial adaptation to cancel the motor error only when it was delivered to area 5. By contrast, stimulation to area 7 caused trial-by-trial adaptation so that the reaching endpoint was adjusted toward the target position. We further hypothesized that area 7 would encode target error that is caused by a target jump during the reach, and our results support this hypothesis. Area 7 neurons encoded target error information, but area 5 neurons did not encode this information. These results suggest that area 5 provides signals for adapting to motor errors and that area 7 provides signals to adapt to target errors. |
Jessica L. Irons; Andrew B. Leber Characterizing individual variation in the strategic use of attentional control Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 10, pp. 1637–1654, 2018. @article{Irons2018, Goal-directed attentional control can substantially aid visual search, but only if it is recruited in an effective manner. Previously we found that strategies chosen to control attention vary considerably across individuals, and we proposed that effort avoidance may lead some individuals to choose suboptimal strategies. Here we present a more thorough analysis of individual differences in attentional control strategies. We used the adaptive choice visual search, which provides a method to quantify an individual's attentional control strategy in a dynamically changing, unconstrained environment. We found that individuals' strategy choices are highly reliable across sessions, suggesting that attentional control strategies are stable and trait-like. In Experiment 2, we explored the extent to which strategy use was related to subjective evaluations of effort and performance. Results showed that the extent to which individuals found the optimal strategy to be effortful and effective predicted their likelihood of making optimal choices on a subsequent choice block. These results provide the first evidence for a relationship between effort and strategic attentional control, and they highlight the important and often neglected role of strategy in understanding attentional control. |
Eve A. Isham; Cong-huy Le; Arne D. Ekstrom Rightward and leftward biases in temporal reproduction of objects represented in central and peripheral spaces Journal Article In: Neurobiology of Learning and Memory, vol. 153, pp. 71–78, 2018. @article{Isham2018, The basis for how we represent temporal intervals in memory remains unclear. One proposal, the mental time line theory (MTL), posits that our representation of temporal duration depends on a horizontal mental time line, thus suggesting that the representation of time has an underlying spatial component. Recent work suggests that the MTL is a learned strategy, prompting new questions of when and why MTL is used to represent temporal duration, and whether time is always represented spatially. The current study examines the hypothesis that the MTL may be a time processing strategy specific to centrally-located stimuli. In two experiments (visual eccentricity and prismatic adaptation procedures), we investigated the magnitude of the rightward bias, an index of the MTL, in central and peripheral space. When participants performed a supra-second temporal interval reproduction task, we observed a rightward bias only in central vision (within 3° visual angle), but not in the peripheral space (approximately 6–8° visual angle). Instead, in the periphery, we observed a leftward bias. The results suggest that the MTL may be a learned strategy specific to central space and that strategies for temporal interval estimation that do not depend on MTL may exist for stimuli perceived peripherally. |
Leyla Isik; Jedediah M. Singer; Joseph R. Madsen; Nancy Kanwisher; Gabriel Kreiman What is changing when: Decoding visual information in movies from human intracranial recordings Journal Article In: NeuroImage, vol. 180, pp. 147–159, 2018. @article{Isik2018, The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision. |
Roxane J. Itier; Frank F. Preston Increased early sensitivity to eyes in mouthless faces: In support of the LIFTED model of early face processing Journal Article In: Brain Topography, vol. 31, no. 6, pp. 972–984, 2018. @article{Itier2018, The N170 ERP component is a central neural marker of early face perception usually thought to reflect holistic processing. However, it is also highly sensitive to eyes presented in isolation and to fixation on the eyes within a full face. The lateral inhibition face template and eye detector (LIFTED) model (Nemrodov et al. in NeuroImage 97:81–94, 2014) integrates these views by proposing a neural inhibition mechanism that perceptually glues features into a whole, in parallel to the activity of an eye detector that accounts for the eye sensitivity. The LIFTED model was derived from a large number of results obtained with intact and eyeless faces presented upright and inverted. The present study provided a control condition to the original design by replacing eyeless with mouthless faces, thereby enabling testing of specific predictions derived from the model. Using the same gaze-contingent approach, we replicated the N170 eye sensitivity regardless of face orientation. Furthermore, when eyes were fixated in upright faces, the N170 was larger for mouthless compared to intact faces, while inverted mouthless faces elicited smaller amplitude than intact inverted faces when fixation was on the mouth and nose. The results are largely in line with the LIFTED model, in particular with the idea of an inhibition mechanism involved in holistic processing of upright faces and the lack of such inhibition in processing inverted faces. Some modifications to the original model are also proposed based on these results. |
Aine Ito; Martin Corley; Martin J. Pickering A cognitive load delays predictive eye movements similarly during L1 and L2 comprehension Journal Article In: Bilingualism: Language and Cognition, vol. 21, no. 2, pp. 251–264, 2018. @article{Ito2018, We used the visual world eye-tracking paradigm to investigate the effects of cognitive load on predictive eye movements in L1 (Experiment 1) and L2 (Experiment 2) speakers. Participants listened to sentences whose verb was predictive or non-predictive towards one of four objects they were viewing. They then clicked on a mentioned object. Half the participants additionally performed a working memory task of remembering words. Both L1 and L2 speakers looked more at the target object predictively in predictable than in non-predictable sentences when they performed the listen-and-click task only. However, this predictability effect was delayed in those who performed the concurrent memory task. This pattern of results was similar in L1 and L2 speakers. L1 and L2 speakers make predictions, but cognitive resources are required for making predictive eye movements. The findings are compatible with the claim that L2 speakers use the same mechanisms as L1 speakers to make predictions. |