EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2019 |
Matteo De Tommaso; Massimo Turatto Learning to ignore salient distractors: Attentional set and habituation Journal Article In: Visual Cognition, vol. 27, no. 3-4, pp. 214–226, 2019. @article{DeTommaso2019, Attentional capture by salient distractors can be attenuated by the target search strategy (feature-search mode vs. singleton-detection mode) adopted, as well as by learning processes concerning the distractor features. Hence, two different models, one based on the task-relevant and one on the task-irrelevant information, would interact in the control of attention. Here, we investigated whether the search mode used to locate the target can affect the possibility of rejecting salient distractors on the basis of habituation mechanisms. Our results showed that when a feature-search mode was used, capture by a uniquely-coloured distractor was progressively reduced with practice, a result consistent with the habituation of capture phenomenon (Experiment 1). Conversely, when a singleton-detection mode was used, habituation of capture was prevented (Experiment 2), both when the distractor feature remained constant (Experiment 3) and when a prolonged exposure to the distractor was allowed (Experiment 4). We propose that when the templates for the task-relevant (i.e., the target) and the task-irrelevant (i.e., the distractor) information overlap, the former prevails in the control of attention and prevents habituation of capture from taking place. |
J. C. F. Winter; Y. B. Eisma; C. D. D. Cabrall; P. A. Hancock; N. A. Stanton Situation awareness based on eye movements in relation to the task environment Journal Article In: Cognition, Technology and Work, vol. 21, no. 1, pp. 99–111, 2019. @article{Winter2019, The topic of situation awareness has received continuing interest over the last decades. Freeze-probe methods, such as the Situation Awareness Global Assessment Technique (SAGAT), are commonly employed for measuring situation awareness. The aim of this paper was to review validity issues of the SAGAT and examine whether eye movements are a promising alternative for measuring situation awareness. First, we outlined six problems of freeze-probe methods, such as the fact that freeze-probe methods rely on what the operator has been able to remember and then explicitly recall. We propose an operationalization of situation awareness based on the eye movements of the person in relation to their task environment to circumvent shortfalls of memory mediation and task interruption. Next, we analyzed experimental data in which participants (N = 86) were tasked to observe a display of six dials for about 10 min, and press the space bar if a dial pointer crossed a threshold value. Every 90 s, the screen was blanked and participants had to report the state of the dials on a paper sheet. We assessed correlations of participants' task performance (% of threshold crossings detected) with visual sampling scores (% of dials glanced at during threshold crossings) and freeze-probe scores. Results showed that the visual-sampling score correlated with task performance at the threshold-crossing level (r = 0.31) and at the individual level (r = 0.78). Freeze-probe scores were low and showed weak associations with task performance. 
We conclude that the outlined limitations of the SAGAT impede measurement of situation awareness, which can be computed more effectively from eye movement measurements in relation to the state of the task environment. The present findings have practical value, as advances in eye-tracking cameras and ubiquitous computing lessen the need for interruptive tests such as SAGAT. Eye-based situation awareness is a predictor of performance, with the advantage that it is applicable through real-time feedback technologies. |
Jérémy Decroix; Solène Kalénine What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 2400–2409, 2019. @article{Decroix2019, The recognition of others' object-directed actions is known to involve the decoding of both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the goal of the actor remains debated. In order to provide experimental evidence on this issue, the present study aimed at investigating whether visual attention would be preferentially captured by visual kinematics or by action goal information when processing others' actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with a similar goal. The subsequent withdrawal of visual attention from the action distractor with a similar goal suggests a later attentional capture by the action distractor with a similar grip. Overall, results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor's goal before verifying this prediction using the visual kinematics of the action. |
J. A. Del Punta; G. Gasaneo; L. U. Ancarani Generalized Sturmian Functions used for a discrete wavelet construction Journal Article In: Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, vol. 7, no. 5-6, pp. 541–549, 2019. @article{DelPunta2019, In this paper we discuss the implementation of scaling functions and wavelets based on Generalized Sturmian Functions (GSF). When dealing with finite dimensional spaces, the completeness relation of the GSF basis generates localized functions in two coordinates. By fixing one of the coordinates on particular points one defines the corresponding scaling functions. Wavelets are defined in a similar way by fixing, on a complementary set of functions, one of the coordinates on another set of particular points. This procedure allows for a multiscale decomposition of any signal. When the chosen points are the zeros of the (N+1)th GSF, scaling functions generate a (N+1)-dimensional space and happen to be orthogonal. We use GSF associated with the classical damped harmonic oscillator to build scaling functions and wavelets, and apply them to represent eye-tracking signals. |
Tao Deng; Hongmei Yan; Long Qin; Thuyen Ngo; B. S. Manjunath How do drivers allocate their potential attention? Driving fixation prediction via convolutional neural networks Journal Article In: IEEE Transactions on Intelligent Transportation Systems, pp. 1–9, 2019. @article{Deng2019, The traffic driving environment is a complex and dynamic changing scene in which drivers have to pay close attention to salient and important targets or regions for safe driving. Modeling drivers' eye movements and attention allocation in traffic driving can also help guide unmanned intelligent vehicles. However, until now, few studies have modeled drivers' true fixations and allocations while driving. To this end, we collect an eye tracking dataset from a total of 28 experienced drivers viewing 16 traffic driving videos. Based on the multiple drivers' attention allocation dataset, we propose a convolutional-deconvolutional neural network (CDNN) to predict the drivers' eye fixations. The experimental results indicate that the proposed CDNN outperforms the state-of-the-art saliency models and predicts drivers' attentional locations more accurately. The proposed CDNN can predict the major fixation location and shows excellent detection of secondary but important information or regions that cannot be ignored during driving if they exist. Compared with the present object detection models in autonomous and assisted driving systems, our human-like driving model does not detect all of the objects appearing in the driving scenes, but it provides the most relevant regions or targets, which can largely reduce the interference of irrelevant scene information. |
Rachel N. Denison; Shlomit Yuval-Greenberg; Marisa Carrasco Directing voluntary temporal attention increases fixational stability Journal Article In: Journal of Neuroscience, vol. 39, no. 2, pp. 353–363, 2019. @article{Denison2019, Our visual input is constantly changing, but not all moments are equally relevant. Visual temporal attention, the prioritization of visual information at specific points in time, increases perceptual sensitivity at behaviorally relevant times. The dynamic processes underlying this increase are unclear. During fixation, humans make small eye movements called microsaccades, and inhibiting microsaccades improves perception of brief stimuli. Here, we investigated whether temporal attention changes the pattern of microsaccades in anticipation of brief stimuli. Human observers (female and male) judged stimuli presented within a short sequence. Observers were given either an informative precue to attend to one of the stimuli, which was likely to be probed, or an uninformative (neutral) precue. We found strong microsaccadic inhibition before the stimulus sequence, likely due to its predictable onset. Critically, this anticipatory inhibition was stronger when the first target in the sequence (T1) was precued (task-relevant) than when the precue was uninformative. Moreover, the timing of the last microsaccade before T1 and the first microsaccade after T1 shifted such that both occurred earlier when T1 was precued than when the precue was uninformative. Finally, the timing of the nearest pre- and post-T1 microsaccades affected task performance. Directing voluntary temporal attention therefore affects microsaccades, helping to stabilize fixation at the most relevant moments over and above the effect of predictability. Just as saccading to a relevant stimulus can be an overt correlate of the allocation of spatial attention, precisely timed gaze stabilization can be an overt correlate of the allocation of temporal attention. |
Christ Devia; Rocio Mayol-Troncoso; Javiera Parrini; Gricel Orellana; Aida Ruiz; Pedro E. Maldonado; Jose Ignacio Egaña EEG classification during scene free-viewing for schizophrenia detection Journal Article In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 6, pp. 1193–1199, 2019. @article{Devia2019, Currently, the diagnosis of schizophrenia is made solely based on interviews and behavioral observations by a trained psychiatrist. Technologies such as electroencephalography (EEG) are used for differential diagnosis and not to support the psychiatrist's positive diagnosis. Here, we show the potential of EEG recordings as biomarkers of the schizophrenia syndrome. We recorded EEG while schizophrenia patients freely viewed natural scenes, and we analyzed the average EEG activity locked to the image onset. We found significant differences between patients and healthy controls in occipital areas approximately 500 ms after image onset. These differences were used to train a classifier to discriminate the schizophrenia patients from the controls. The best classifier had 81% sensitivity for the detection of patients and specificity of 59% for the detection of controls, with an overall accuracy of 71%. These results indicate that EEG signals from a free-viewing paradigm discriminate patients from healthy controls and have the potential to become a tool for the psychiatrist to support the positive diagnosis of schizophrenia. |
Valeria Di Caro; Jan Theeuwes; Chiara Della Libera Suppression history of distractor location biases attentional and oculomotor control Journal Article In: Visual Cognition, vol. 27, no. 2, pp. 142–157, 2019. @article{DiCaro2019, Past selection experience greatly affects the deployment of attention such that targets are more readily selected if their features or locations were more frequently selected in the past. Crucially, recent studies have shown similar experience-dependent effects also for salient task irrelevant stimuli: distractors exerted less interference if they appeared at a location where they were presented more often, relative to other possible locations. Here we investigated the effects of such suppression history on the immediate behavioural correlates of attentional deployment, i.e., eye movements. Participants were instructed to make saccadic eye movements to a target stimulus, while ignoring a highly distracting irrelevant visual onset appearing abruptly on the screen in a proportion of trials. Crucially, this irrelevant onset occurred more frequently in two locations on the visual display and our results showed that, relative to distractors elsewhere, onsets presented at these locations became easier to ignore, giving rise to reduced oculomotor capture. Consistent with the notion that experience can alter attentional deployment towards spatial locations, these findings indicate that, through learning, the priority of high-frequency locations becomes suppressed, attenuating the intrinsic saliency of distractors appearing therein. Traces left by individual events of attentional suppression decrease the processing priority of coordinates within topographic maps of the visual space. |
David Dignath; Oliver Herbort; Aleksandra Pieczykolan; Lynn Huestegge; Andrea Kiesel Flexible coupling of covert spatial attention and motor planning based on learned spatial contingencies Journal Article In: Psychological Research, vol. 83, no. 3, pp. 476–484, 2019. @article{Dignath2019, The present study tested whether the coupling of covert attentional shifts and motor planning of pointing movements can be modulated by learning. Participants performed two tasks. As a primary movement task, they executed a pointing movement to a movement target (MT) location. As a secondary visual attention task, they identified a discrimination target (DT) that was presented shortly before initiation of the pointing movement. These DTs occurred either at the same location as the MT or at a different location. A common finding in such settings is enhanced visual target identification when the locations of the MT and DT coincide. However, it is not known which factors govern the flexibility of spatial attention–action coupling. Here, we tested the influence of previously learned spatial contingencies between MT and DT on the coupling of covert attention and motor planning. These contingencies were manipulated in three groups (always same locations, always opposite locations, non-contingent locations) in a training session. Results indicated that in a subsequent test phase, previously learned contingencies enhanced visual identification accordingly, even when targets for the movement task and the visual task were presented at opposite sides. These results corroborate previous findings of a rather flexible interaction of attention and motor planning, and demonstrate how one can learn to control attention by means of motor planning. |
Aster Dijkgraaf; Robert J. Hartsuiker; Wouter Duyck Prediction and integration of semantics during L2 and L1 listening Journal Article In: Language, Cognition and Neuroscience, vol. 34, no. 7, pp. 881–900, 2019. @article{Dijkgraaf2019, Using the visual world paradigm, we tested whether Dutch-English bilinguals predict upcoming semantic information in auditory sentence comprehension to the same extent in their native (L1) and second language (L2). Participants listened to sentences in L1 and L2 while their eye-movements were measured. A display containing a picture of either a target word or a semantic competitor, and three unrelated objects was shown before the onset of the auditory target word in the sentence. There were more fixations on the target and competitor pictures relative to the unrelated pictures in both languages, before hearing the target word could affect fixations. Also, competitors that were more strongly semantically related attracted more fixations. This relatedness effect was stronger, and it started earlier in the L1 than in the L2. These results suggest that bilinguals predict semantics in the L2, but the spread of semantic activation during prediction is slower and weaker than in the L1. |
Xiaomao Ding; Ana Radonjić; Nicolas P. Cottaris; Haomiao Jiang; Brian A. Wandell; David H. Brainard Computational-observer analysis of illumination discrimination Journal Article In: Journal of Vision, vol. 19, no. 7, pp. 1–16, 2019. @article{Ding2019, The spectral properties of the ambient illumination provide useful information about time of day and weather. We study the perceptual representation of illumination by analyzing measurements of how well people discriminate between illuminations across scene configurations. More specifically, we compare human performance to a computational-observer analysis that evaluates the information available in the isomerizations of cone photopigment in a model human photoreceptor mosaic. The performance of such an observer is limited by the Poisson variability of the number of isomerizations in each cone. The overall level of Poisson-limited computational-observer sensitivity exceeded that of human observers. This was modeled by increasing the amount of noise in the number of isomerizations of each cone. The additional noise brought the overall level of performance of the computational observer into the same range as that of human observers, allowing us to compare the pattern of sensitivity across stimulus manipulations. Key patterns of human performance were not accounted for by the computational observer. In particular, neither the elevation of illumination-discrimination thresholds for illuminant changes in a blue color direction (when thresholds are expressed in CIELUV DE units), nor the effects of varying the ensemble of surfaces in the scenes being viewed, could be accounted for by variation in the information available in the cone isomerizations. |
Benjamin Tari; Matthew Heath Pro- and antisaccade task-switching: response suppression—and not vector inversion—contributes to a task-set inertia Journal Article In: Experimental Brain Research, vol. 237, no. 12, pp. 3475–3484, 2019. @article{Tari2019a, Alternating between different tasks represents an executive function essential to activities of daily living. In the oculomotor literature, reaction times (RT) for a ‘standard' and stimulus-driven (SD) prosaccade (i.e., saccade to target at target onset) are increased when preceded by a ‘non-standard' antisaccade (i.e., saccade mirror-symmetrical to target at target onset), whereas the converse switch does not elicit an RT cost. The prosaccade switch-cost has been attributed to lingering neural activity—or task-set inertia—related to the antisaccade executive demands of response suppression and vector inversion. It is, however, unclear whether response suppression and/or vector inversion contribute to the prosaccade switch-cost. Experiment 1 of the present work had participants alternate (i.e., AABB paradigm) between minimally delayed (MD) pro- and antisaccades. MD saccades require a response after target extinction and necessitate response suppression for both pro- and antisaccades—a paradigm providing a framework to determine whether vector inversion contributes to a task-set inertia. In Experiment 2, participants alternated between SD pro- and MD antisaccades—a paradigm designed to determine if a switch-cost is selectively imparted when an SD and standard response is preceded by a non-standard response. Experiment 1 showed that RTs for MD pro- and antisaccades were refractory to the preceding trial-type; that is, vector inversion did not engender a switch-cost. Experiment 2 indicated that RTs for SD prosaccades were increased when preceded by an MD antisaccade. Accordingly, response suppression engenders a task-set inertia but only for a subsequent stimulus-driven and standard response (i.e., SD prosaccade). 
Such a result is in line with the view that response suppression is a hallmark feature of executive function. |
Antonia F. Ten Brink; Tanja C. W. Nijboer; Jasper H. Fabius; Stefan Van der Stigchel No direction specific costs in trans-saccadic memory Journal Article In: Neuropsychologia, vol. 125, pp. 23–29, 2019. @article{TenBrink2019, Even though we frequently execute saccades, we perceive the external world as coherent and stable. An important mechanism of trans-saccadic perception is spatial remapping: the process of updating information across eye movements. Previous studies have indicated a right hemispheric dominance for spatial remapping, which has been proposed to translate into enhanced trans-saccadic memory for locations that are remapped into the right compared to the left hemisphere in healthy participants. Previous study designs suffered from several limitations, however (i.e. multiple eye movements had to be made instead of one, fixations were not controlled for, and ceiling effects were likely present). We therefore compared accuracy of trans-saccadic memory for central items after leftward versus rightward eye movements and, secondarily, for items that were remapped within the left versus right visual field. Participants memorized the location of a briefly presented item, made one saccade, and subsequently decided in what direction the item had shifted. We used a staircase to adjust task difficulty. Bayesian repeated measures ANOVAs were used to compare between left versus right eye movements and items in the left versus right visual field. We found most evidence against directional differences in trans-saccadic memory (BF10 = 0.23). We found some evidence suggestive of enhanced trans-saccadic memory for items that were remapped within the left compared to the right visual field (BF10 = 4.00). The latter result could be explained by a leftward spatial attention bias. As such, the hypothesized right hemispheric dominance for spatial remapping does not result in asymmetric trans-saccadic memory capacities in healthy participants. |
Anne Marie Ternes; Meaghan Clough; Paige Foletta; Owen B. White; Joanne Fielding Executive control deficits correlate with reduced frontal white matter volume in multiple sclerosis Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 41, no. 7, pp. 723–729, 2019. @article{Ternes2019, Introduction: Executive control deficits are frequently reported in patients with multiple sclerosis (MS). We have previously proposed that in the context of competing automatic and volitional processes, such deficits may in part reflect poor resolution of response conflict. This study aimed to investigate the neuropathological underpinnings of executive control deficits in MS, focusing on the frontostriatal system proposed to mediate executive control. Method: Forty-one MS patients and 25 healthy controls completed measures of executive control that have previously been used to characterize deficit in MS: antisaccade and endogenously cued saccade paradigms, and the Stroop color and word test. Relationships between task performance and volumetric measures of frontal white matter, frontal gray matter, striatum, and pallidum were investigated. Results: MS participants performed significantly more poorly on the Stroop and antisaccade tasks than controls. For MS patients, higher erroneous responding on the antisaccade task was related to reduced frontal white matter volume. Conclusion: These findings suggest that loss of frontal white matter may underlie executive control deficits in MS, and provide information that may inform the development of targeted cognitive training strategies in MS. |
Anne Marie Ternes; Meaghan Clough; Paige Foletta; Owen B. White; Joanne Fielding Characterization of inhibitory failure in Multiple Sclerosis: Evidence of impaired conflict resolution Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 41, no. 3, pp. 320–329, 2019. @article{Ternes2019a, Introduction: Inhibitory control deficits are frequently reported in Multiple Sclerosis (MS), although it is unclear whether these deficits represent a global or process-specific failure. Notably, most models of inhibitory control recognize at least two dissociable processes, the most consistent being: (a) the inhibition of a dominant response: response suppression, and (b) the inhibition of a dominant response and initiation of a nondominant response: executive control. This study aimed to ascertain the processes underlying inhibitory failure in MS. Method: Twenty-three MS patients and 25 healthy controls completed a battery of commonly used inhibitory tasks, with measures from each task entered into a principal components analysis with orthogonal (varimax) rotation. Results: As anticipated, two components emerged, with tasks evaluating response suppression (stop signal, go/no go) loading on a common component, and tasks evaluating executive control (Stroop, antisaccade, endogenously-cued saccade) loading on a separate common component. Composite scores were generated for each component and compared between groups. Unlike response suppression scores, executive control scores were significantly poorer for MS patients. Conclusions: Inhibitory control deficits in MS may reflect poor resolution in the context of competing processes, rather than difficulty in preventing the execution of an inappropriate response. |
Leah N. Tobin; Amy H. Barron; Christopher R. Sears; Kristin M. Ranson Greater body appreciation moderates the association between maladaptive attentional biases and body dissatisfaction in undergraduate women Journal Article In: Journal of Experimental Psychopathology, pp. 1–15, 2019. @article{Tobin2019, Attentional biases for weight-related information are thought to contribute to the maintenance of body dissatisfaction and eating disorders. Women with greater body appreciation may pay less attention to thin-ideal cues if body appreciation protects them from negative effects of thin-ideal media, and if so, they may be less susceptible to development of maladaptive attentional biases. The present study used eye-gaze tracking to measure attention to weight-related words/images in 167 body-dissatisfied undergraduate women (aged 17-39 years) to examine the associations among body dissatisfaction, body appreciation, and attentional biases. Participants viewed displays of thin-related, fat-related, and neutral words/images while their eye fixations were tracked over 8-s intervals. We hypothesized body appreciation (as measured by the Body Appreciation Scale) would moderate the documented association between body dissatisfaction and attentional biases for thin-related information only, such that as body appreciation increased, the strength of the relationship between body dissatisfaction and attentional biases would decrease. Results indicated that body appreciation moderated the association between body dissatisfaction and attentional biases for thin-related words only. With low body appreciation, body dissatisfaction was positively associated with attention to thin-related words. With high body appreciation, there was an inverse association between body dissatisfaction and attention to thin-related words. Results suggest that body appreciation may be an effective prevention target for reducing maladaptive attentional biases. |
Jane E. Raymond; Scott P. Jones Strategic eye movements are used to support object authentication Journal Article In: Scientific Reports, vol. 9, pp. 2424, 2019. @article{Raymond2019, Authentication is an important cognitive process used to determine whether one's initial identification of an object is corroborated by additional sensory information. Although authentication is critical for safe interaction with many objects, including food, websites, and valuable documents, the visual orienting strategies used to garner additional sensory data to support authentication remain poorly understood. When reliable visual cues to counterfeit cannot be anticipated, distributing fixations widely across an object's surface might be useful. However, strategic fixation of specific object-defining attributes would be more efficient and should lead to better authentication performance. To investigate, we monitored eye movements during a repetitive banknote authentication task involving genuine and counterfeit banknotes. Although fixations were distributed widely across the note prior to authentication decisions, preference for hard-to-mimic areas and avoidance of easily mimicked areas was evident. However, there was a strong tendency to initially fixate the banknote's portrait, and only thereafter did eye movement control appear to be more strategic. Those who directed a greater proportion of fixations at hard-to-mimic areas and resisted more easily mimicked areas performed better on the authenticity task. The tendency to deploy strategic fixation improved with experience, suggesting that authentication benefits from precise visual orienting and refined categorisation criteria. |
Daniele Re; Maya Inbar; Craig G. Richter; Ayelet N. Landau Feature-based attention samples stimuli rhythmically Journal Article In: Current Biology, vol. 29, no. 4, pp. 693–699, 2019. @article{Re2019, Attention supports the allocation of resources to relevant locations and objects in a scene. Under most conditions, several stimuli compete for neural representation. Attention biases neural representation toward the response associated with the attended object [1, 2]. Therefore, an attended stimulus enjoys a neural response that resembles the response to that stimulus in isolation. Factors that determine and generate attentional bias have been researched, ranging from endogenously controlled processes to exogenous capture of attention [1–4]. Recent studies investigate the temporal structure governing attention. When participants monitor a single location, visual-target detection depends on the phase of an ~8-Hz brain rhythm [5, 6]. When two locations are monitored, performance fluctuates at 4 Hz for each location [7, 8]. The hypothesis is that 4-Hz sampling for two locations may reflect a common sampler that operates at 8 Hz globally, which is divided between relevant locations [5–7, 9]. The present study targets two properties of this phenomenon, called rhythmic-attentional sampling: first, sampling is typically described for selection over different locations. We examined whether rhythmic sampling is limited to selection over space or whether it extends to feature-based attention. Second, we examined whether sampling at 4 Hz results from the division of an 8-Hz rhythm over two objects. We found that two overlapping objects defined by features are sampled at ~4 Hz per object. In addition, performance on a single object fluctuated at 8 Hz. Rhythmic sampling of features did not result from temporal structure in eye movements. |
Peng Ren; Armando Barreto; Xiaole Ma; Shengnan Liu; Min Zhang; Ying Wang; Yeyun Dong; Dezhong Yao Dynamics of blink and non-blink cyclicity for affective assessment: A case study for stress identification Journal Article In: IEEE Transactions on Affective Computing, pp. 1–12, 2019. @article{Ren2019, Previous studies have shown that eye activities, including blinks, can indicate the psychological state of an individual. However, almost all previous studies analyzing blinks merely concentrated on traditional descriptive statistics, which are unable to reflect their dynamic processes. Furthermore, the states of non-blink (opening the eyes) and blink alternate with each other, forming a physiological cycle. If we only investigate blinks alone, it may be inadequate to describe how blinking works. Therefore, we attempted to recognize the affective state (“relaxation” vs. “stress”) of an individual through the dynamics of blink and non-blink cyclicity (BNBC), as one example, to illustrate this method. First, the “Stroop Test” was employed for emotion elicitation. Then, features were extracted from a categorical time series (0: non-blink; 1: blink), which was recorded by the eye-tracking system. Finally, the areas under the receiver operating characteristic curve (AUC) values were obtained via eight commonly used classifiers. The results show that, compared with the traditional approaches for blink analysis, BNBC exhibits more compelling proficiency to detect stress. In sum, BNBC can be considered a new type of psychophysiological measure, which could be widely applied in psychology, medicine, and engineering. |
Peng Ren; Xiaole Ma; Wenjia Lai; Min Zhang; Shengnan Liu; Ying Wang; Min Li; Dan Ma; Yeyun Dong; Yongsheng He; Xiaolei Xu Comparison of the use of blink rate and blink rate variability for mental state recognition Journal Article In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 5, pp. 867–875, 2019. @article{Ren2019a, Recent research has unearthed that blink rate variability (BRV) can be employed as a psychophysiological measure. However, its efficiency for mental state recognition (MSR) has not been investigated yet. Because BRV can indicate dynamics inherent in eye blinks, we conjectured that BRV might exhibit stronger abilities for the MSR if compared with blink rate (BR), known as the leading indicator derived from eye blinks for MSR. Therefore, in this paper, we attempted to differentiate between high and low cognitive loads of an individual through the analyses of BR and BRV, respectively, which could be viewed as a preliminary study for comparing their MSR abilities. First, an n-back experiment was performed to collect data. Then, in order to characterize the phenomenon of BRV, the features were extracted from its time and frequency domains, respectively. Finally, the area under the curve (AUC) values of BRV and BR for MSR were estimated by the ten commonly used classifiers, respectively. The results indicated that BRV achieves significantly higher AUC values than BR, which suggests its strong potentiality for MSR. In sum, the BRV may prove to be a promising method for the MSR, which should be considered in the future. |
Zahra Rezvani; Ali Katanforoush; Richard Van Wezel; Hamidreza Pouretemad Arbitrary eye movement strategies in global-local processing experiments Journal Article In: Journal of Neurodevelopmental Cognition, vol. 2, pp. 98–109, 2019. @article{Rezvani2019, Perceptual organization is one of the most hotly debated issues in visual perception. Human adults, in normal conditions, process global features faster than local details, an effect called “Global Precedence”. Research has shown that as stimulus eccentricity gets more distant from the fovea, perceptual decisions about local details become more delayed. This even happens when the gaze is fixated on the center of the field of view and the stimulus location is manually adjusted. The present study aims to explore eye movement strategies during the processing of global and local features, when the gaze point is not restricted to a particular fixation point. Fourteen participants were asked to respond to Matching and Similarity Judgment tasks. The data was recorded using EYELINKII™, with a sampling frequency of 1000 Hz. The Global Precedence Effect (GPE) was observed in the two tasks. Additionally, a higher average of “arbitrary eccentricity” in global trials was observed as compared to local trials. Arbitrary eccentricity was referred to as the eccentricity individuals unconsciously choose to perceive the stimuli. Furthermore, the number of fixations was significantly greater in local trials. From our findings, we speculate that in daily life we can perceive the world globally with peripheral vision, without always needing eye movements, and only decide to focus foveally when selectively attending to local details seems necessary. |
Reuben Rideaux; William J. Harrison Border ownership-dependent tilt aftereffect for shape defined by binocular disparity and motion parallax Journal Article In: Journal of Neurophysiology, vol. 121, no. 5, pp. 1917–1923, 2019. @article{Rideaux2019, Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviors is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border ownership cells (Zhou H, Friedman HS, von der Heydt R. J Neurosci 20: 6594-6611, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioral correlate supporting the existence of these cells in humans was demonstrated with two-dimensional luminance-defined objects (von der Heydt R, Macuda T, Qiu FT. J Opt Soc Am A Opt Image Sci Vis 22: 2222-2229, 2005). However, objects in our natural visual environments are often signaled by complex cues, such as motion and binocular disparity. Thus for border ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measured in humans (of both sexes) border ownership-dependent tilt aftereffects after adaptation to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Furthermore, we find that the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff HH, Mallot HA. J Opt Soc Am A 5: 1749-1758, 1988). 
These results suggest that border ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments. |
Arryn Robbins; Michael C. Hout Scene priming provides clues about target appearance that improve attentional guidance during categorical search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, pp. 1–11, 2019. @article{Robbins2019, During categorical search (e.g., “look for a dog”), observers have broad information about their intended target, but no specific details about the target's precise appearance. Research suggests that mental representations used to guide attention during categorical search (or search templates) comprise typical or category consistent features. Unlike laboratory settings, real world search is not conducted in isolation; yet to be understood is how context shapes categorical search templates. Here, participants searched for category items after viewing a contextual scene prime. Response times were consistently faster to context-congruent targets, even though searchers had no incentive to intentionally use the scene to shape their template. Eye movements revealed enhanced attentional guidance during congruent searches, suggesting that context allows searchers to develop more useful templates. Thus, contextual primes may trigger scene-specific schemas that activate object features in memory that can then be used to guide attention. |
Mark J. Roberts; Gesa Lange; Tracey Van Der Veen; Eric Lowet; Peter De Weerd The attentional blink is related to the microsaccade rate signature Journal Article In: Cerebral Cortex, vol. 29, no. 12, pp. 5190–5203, 2019. @article{Roberts2019, The reduced detectability of a target T2 following discrimination of a preceding target T1 in the attentional blink (AB) paradigm is classically interpreted as a consequence of reduced attention to T2 due to attentional allocation to T1. Here, we investigated whether AB was related to changes in microsaccade rate (MSR). We found a pronounced MSR signature following T1 onset, characterized by MSR suppression from 200 to 328 ms and enhancement from 380 to 568 ms. Across participants, the magnitude of the MSR suppression correlated with the AB effect such that low T2 detectability corresponded to reduced MSR. However, in the same task, T1 error trials coincided with the presence of microsaccades. We discuss this apparent paradox in terms of known neurophysiological correlates of microsaccades, whereby cortical excitability is suppressed both during the microsaccade and during MSR suppression, in accordance with poor T1 performance when microsaccades occurred and poor T2 performance when they were absent. Our data suggest a novel low-level mechanism contributing to AB characterized by reduced MSR, thought to cause suppressed visual cortex excitability. This opens the question of whether attention mediates T2 performance suppression independently from MSR, and if not, how attention interacts with MSR to produce the T2 performance suppression. |
Christiane S. Rohr; Dennis Dimond; Manuela Schuetze; Ivy Y. K. Cho; Limor Lichtenstein-Vidne; Hadas Okon-Singer; Deborah Dewey; Signe Bray Girls' attentive traits associate with cerebellar to dorsal attention and default mode network connectivity Journal Article In: Neuropsychologia, vol. 127, pp. 84–92, 2019. @article{Rohr2019, Attention traits are a cornerstone to the healthy development of children's performance in the classroom, their interactions with peers, and in predicting future success and problems. The cerebellum is increasingly appreciated as a region involved in complex cognition and behavior, and moreover makes important connections to key brain networks known to support attention: the dorsal attention and default mode networks (DAN; DMN). The cerebellum has also been implicated in childhood disorders affecting attention, namely autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), suggesting that attention networks extending to the cerebellum may be important to consider in relation to attentive traits. Yet, direct investigations into the association between cerebellar FC and attentive traits are lacking. Therefore, in this study we examined attentive traits, assessed using parent reports of ADHD and ASD symptoms, in a community sample of 52 girls aged 4–7 years, i.e. around the time of school entry, and their association with cerebellar connections with the DAN and DMN. We found that cortico-cerebellar functional connectivity (FC) jointly and differentially correlated with attentive traits, through a combination of weaker and stronger FC across anterior and posterior DAN and DMN nodes. These findings suggest that cortico-cerebellar integration may play an important role in the manifestation of attentive traits. |
Gabor Stefanics; Klaas Enno Stephan; Jakob Heinzle Feature-specific prediction errors for visual mismatch Journal Article In: NeuroImage, vol. 196, pp. 142–151, 2019. @article{Stefanics2019, Predictive coding (PC) theory posits that our brain employs a predictive model of the environment to infer the causes of its sensory inputs. A fundamental but untested prediction of this theory is that the same stimulus should elicit distinct precision weighted prediction errors (pwPEs) when different (feature-specific) predictions are violated, even in the absence of attention. Here, we tested this hypothesis using functional magnetic resonance imaging (fMRI) and a multi-feature roving visual mismatch paradigm where rare changes in either color (red, green), or emotional expression (happy, fearful) of faces elicited pwPE responses in human participants. Using a computational model of learning and inference, we simulated pwPE and prediction trajectories of a Bayes-optimal observer and used these to analyze changes in blood oxygen level dependent (BOLD) responses to changes in color and emotional expression of faces while participants engaged in a distractor task. Controlling for visual attention by eye-tracking, we found pwPE responses to unexpected color changes in the fusiform gyrus. Conversely, unexpected changes of facial emotions elicited pwPE responses in cortico-thalamo-cerebellar structures associated with emotion and theory of mind processing. Predictions pertaining to emotions activated fusiform, occipital and temporal areas. Our results are consistent with a general role of PC across perception, from low-level to complex and socially relevant object features, and suggest that monitoring of the social environment occurs continuously and automatically, even in the absence of attention. |
Marianna Stella; Paul E. Engelhardt Syntactic ambiguity resolution in dyslexia: An examination of cognitive factors underlying eye movement differences and comprehension failures Journal Article In: Dyslexia, vol. 25, no. 2, pp. 115–141, 2019. @article{Stella2019, This study examined eye movements and comprehension of temporary syntactic ambiguities in individuals with dyslexia, as few studies have focused on sentence-level comprehension in dyslexia. We tested 50 participants with dyslexia and 50 typically developing controls, in order to investigate (a) whether dyslexics have difficulty revising temporary syntactic misinterpretations and (b) underlying cognitive factors (i.e., working memory and processing speed) associated with eye movement differences and comprehension failures. In the sentence comprehension task, participants read subordinate-main structures that were either ambiguous or unambiguous, and we also manipulated the type of verb contained in the subordinate clause (i.e., reflexive or optionally transitive). Results showed a main effect of group on comprehension, in which individuals with dyslexia showed poorer comprehension than typically developing readers. In addition, participants with dyslexia showed longer total reading times on the disambiguating region of syntactically ambiguous sentences. With respect to cognitive factors, working memory was more associated with group differences than was processing speed. Conclusions focus on sentence-level syntactic processing issues in dyslexia (a previously under-researched area) and the relationship between online and offline measures of syntactic ambiguity resolution. |
Kevin G. Stephenson; Steven G. Luke; Mikle South Separate contributions of autistic traits and anxious apprehension, but not alexithymia, to emotion processing in faces Journal Article In: Autism, pp. 1–13, 2019. @article{Stephenson2019, Reduced eye fixation has been commonly reported in autistic samples but may be at least partially explained by alexithymia (i.e., difficulty understanding and describing one's emotional state). Because anxiety is often elevated in autism, and emotion-processing differences have also been observed in anxious samples, anxiety traits may also influence emotion processing within autism. This study tested the contribution of dimensional traits of autism, anxious apprehension, and alexithymia in mediating eye fixation during face processing. Participants included 105 adults from three samples: autistic adults (AS; n = 30), adults with clinically elevated anxiety and no autism (HI-ANX; n = 29), and neurotypical adults without elevated anxiety (NT; n = 46). Experiment 1 used an emotion identification task with dynamic stimuli, while Experiment 2 used a static luminance change detection task with emotional- and neutral-expression static photos. The emotions of interest were joy, anger, and fear. Dimensional mixed-effects models showed that autism traits, but not alexithymia, predicted reduced eye fixation across both tasks. Anxious apprehension was negatively related to response time in Experiment 1 and positively related to eye fixation in Experiment 2. Attentional avoidance of negative stimuli occurred at lower levels of autism traits and higher levels of worry traits. The results highlight the contribution of autism traits to emotional processing and suggest additional effects of worry-related traits. |
Ryan A. Stevenson; Aviva Philipp-Muller; Naomi Hazlett; Ze Y. Wang; Jessica Luk; Jong Lee; Karen R. Black; Lok-Kin Yeung; Fakhri Shafai; Magali Segers; Susanne Feber; Morgan D. Barense Conjunctive visual processing appears abnormal in Autism Journal Article In: Frontiers in Psychology, vol. 9, pp. 2668, 2019. @article{Stevenson2019, Face processing in autism spectrum disorder (ASD) is thought to be atypical, but it is unclear whether differences in visual conjunctive processing are specific to faces. To address this, we adapted a previously established eye-tracking paradigm which modulates the need for conjunctive processing by varying the degree of feature ambiguity in faces and objects. Typically-developed (TD) participants showed a canonical pattern of conjunctive processing: High-ambiguity objects were processed more conjunctively than low-ambiguity objects, and faces were processed in an equally conjunctive manner regardless of ambiguity level. In contrast, autistic individuals did not show differences in conjunctive processing based on stimulus category, providing evidence that atypical visual conjunctive processing in ASD is the result of a domain general lack of perceptual specialization. |
Gregory P. Strauss; Eric Granholm; Jason L. Holden; Ivan Ruiz; James M. Gold; Deanna L. Kelly; Robert W. Buchanan The effects of combined oxytocin and cognitive behavioral social skills training on social cognition in schizophrenia Journal Article In: Psychological Medicine, vol. 49, no. 10, pp. 1731–1739, 2019. @article{Strauss2019, Background: Individuals with schizophrenia have deficits in social cognition that are associated with poor functional outcome. Unfortunately, current treatments result in only modest improvement in social cognition. Oxytocin, a neuropeptide with pro-social effects, has significant benefits for social cognition in the general population. However, studies examining the efficacy of oxytocin in schizophrenia have yielded inconsistent results. One reason for inconsistency may be that oxytocin has typically not been combined with psychosocial interventions. It may be necessary for individuals with schizophrenia to receive concurrent psychosocial treatment while taking oxytocin to have the context needed to make gains in social cognitive skills. Methods: The current study tested this hypothesis in a 24-week (48 session) double-blind, placebo-controlled trial that combined oxytocin and Cognitive-Behavioral Social Skills Training (CBSST), which included elements from Social Cognition and Interaction Training (SCIT). Participants included 62 outpatients diagnosed with schizophrenia (placebo n = 31; oxytocin n = 31) who received 36 IU BID, with supervised administration 45 min prior to sessions on CBSST group therapy days. 
Participants completed a battery of measures administered at 0, 12, and 24 weeks that assessed social cognition. Results: CBSST generally failed to enhance social cognition from baseline to end of study, and there was no additive benefit of oxytocin beyond the effects of CBSST alone. Conclusions: Findings suggest that combined CBSST and oxytocin had minimal benefit for social cognition, adding to the growing literature indicating null effects of oxytocin in multi-dose trials. Methodological and biological factors may contribute to inconsistent results across studies. |
Michael J. Stroud; Tamaryn Menneer; Elina Kaplan; Kyle R. Cave; Nick Donnelly We can guide search by a set of colors, but are reluctant to do it Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 2, pp. 377–406, 2019. @article{Stroud2019, For some real-world color searches, the target colors are not precisely known, and any item within a range of color values should be attended. Thus, a target representation that captures multiple similar colors would be advantageous. If such a multicolor search is possible, then search for two targets (e.g., Stroud, Menneer, Cave, and Donnelly, Journal of Experimental Psychology: Human Perception and Performance, 38(1): 113-122, 2012) might be guided by a target representation that included the target colors as well as the continuum of colors that fall between the targets within a contiguous region in color space. Results from Stroud, Menneer, Cave, and Donnelly, Journal of Experimental Psychology: Human Perception and Performance, 38(1): 113-122, (2012) suggest otherwise, however. The current set of experiments show that guidance for a set of colors that are all from a single region of color space can be reasonably effective if targets are depicted as specific discrete colors. Specifically, Experiments 1–3 demonstrate that a search can be guided by four and even eight colors given the appropriate conditions. However, Experiment 5 gives evidence that guidance is sometimes sensitive to how informative the target preview is to search. Experiments 6 and 7 show that a stimulus showing a continuous range of target colors is not translated into a search target representation. Thus, search can be guided by multiple discrete colors that are from a single region in color space, but this approach was not adopted in a search for two targets with intervening distractor colors. |
Jacob L. Stubbs; Sherryse L. Corrow; Benjamin R. Kiang; Jeffrey C. Corrow; Hadley L. Pearce; Alex Y. Cheng; Jason J. S. Barton; William J. Panenka In: Scientific Reports, vol. 9, pp. 291, 2019. @article{Stubbs2019, Smooth pursuit eye movements have been investigated as a diagnostic tool for mild traumatic brain injury (mTBI). However, the degree to which smooth pursuit differentiates mTBI patients from healthy controls (i.e. its diagnostic performance) is only moderate. Our goal was to establish if simultaneous performance of smooth pursuit and a working memory task increased the diagnostic performance of pursuit metrics following mTBI. We integrated an n-back task with two levels of working memory load into a pursuit target, and tested single- and dual-task pursuit in mTBI patients and healthy controls. We assessed pursuit using measures of velocity accuracy, positional accuracy and positional variability. The mTBI group had higher pursuit variability than the control group in all conditions. Performing a concurrent 1-back task decreased pursuit variability for both the mTBI and control groups. Performing a concurrent 2-back task produced differential effects between the groups: Pursuit variability was significantly decreased in the control group, but not in the mTBI group. Diagnostic indices were improved when pursuit was combined with the 2-back task, and increased by 20% for the most sensitive variable. Smooth pursuit with simultaneous working memory load may be a superior diagnostic tool for mTBI than measuring smooth pursuit alone. |
Marta Suárez-Pinilla; Kyriacos Nikiforou; Zafeirios Fountas; Anil K. Seth; Warrick Roseboom Perceptual content, not physiological signals, determines perceived duration when viewing dynamic, natural scenes Journal Article In: Collabra: Psychology, vol. 5, no. 1, pp. 1–16, 2019. @article{SuarezPinilla2019, The neural basis of time perception remains unknown. A prominent account is the pacemaker-accumulator model, wherein regular ticks of some physiological or neural pacemaker are read out as time. Putative candidates for the pacemaker have been suggested in physiological processes (heartbeat), or dopaminergic mid-brain neurons, whose activity has been associated with spontaneous blinking. However, such proposals have difficulty accounting for observations that time perception varies systematically with perceptual content. We examined physiological influences on human duration estimates for naturalistic videos between 1–64 seconds using cardiac and eye recordings. Duration estimates were biased by the amount of change in scene content. Contrary to previous claims, heart rate, and blinking were not related to duration estimates. Our results support a recent proposal that tracking change in perceptual classification networks provides a basis for human time perception, and suggest that previous assertions of the importance of physiological factors should be tempered. |
Xiao Sun; Luming Zhang; Zepeng Wang; Jie Chang; Yiyang Yao; Ping Li; Roger Zimmermann Scene categorization using deeply learned gaze shifting kernel Journal Article In: IEEE Transactions on Cybernetics, vol. 49, no. 6, pp. 2156–2166, 2019. @article{Sun2019, Accurately recognizing sophisticated sceneries from a rich variety of semantic categories is an indispensable component in many intelligent systems, e.g., scene parsing, video surveillance, and autonomous driving. Recently, a large number of deep architectures for scene categorization have emerged, wherein promising performance has been achieved. However, these models cannot explicitly encode human visual perception toward different sceneries, i.e., the sequence in which humans allocate their gaze. To solve this problem, we propose a deep gaze shifting kernel to distinguish sceneries from different categories. Specifically, we first project regions from each scenery into the so-called perceptual space, which is established by combining color, texture, and semantic features. Then, a novel non-negative matrix factorization algorithm is developed that decomposes the regions' feature matrix into the product of the basis matrix and the sparse codes. The sparse codes indicate the saliency level of different regions. In this way, the gaze shifting path from each scenery is derived and an aggregation-based convolutional neural network is designed accordingly to learn its deep representation. Finally, the deep representations of gaze shifting paths from all the scene images are incorporated into an image kernel, which is further fed into a kernel SVM for scene categorization. Comprehensive experiments on six scenery data sets have demonstrated the superiority of our method over a series of shallow/deep recognition models. Besides, eye tracking experiments have shown that our predicted gaze shifting paths are 94.6% consistent with the real human gaze allocations. |
David W. Sutterer; Joshua J. Foster; Kirsten C. S. Adam; Edward K. Vogel; Edward Awh Item-specific delay activity demonstrates concurrent storage of multiple active neural representations in working memory Journal Article In: PLoS Biology, vol. 17, no. 4, pp. e3000239, 2019. @article{Sutterer2019, Persistent neural activity that encodes online mental representations plays a central role in working memory (WM). However, there has been debate regarding the number of items that can be concurrently represented in this active neural state, which is often called the “focus of attention.” Some models propose a strict single-item limit, such that just 1 item can be neurally active at once while other items are relegated to an activity-silent state. Although past studies have decoded multiple items stored in WM, these studies cannot rule out a switching account in which only a single item is actively represented at a time. Here, we directly tested whether multiple representations can be held concurrently in an active state. We tracked spatial representations in WM using alpha-band (8–12 Hz) activity, which encodes spatial positions held in WM. Human observers remembered 1 or 2 positions over a short delay while we recorded electroencephalography (EEG) data. Using a spatial encoding model, we reconstructed active stimulus-specific representations (channel-tuning functions [CTFs]) from the scalp distribution of alpha-band power. Consistent with past work, we found that the selectivity of spatial CTFs was lower when 2 items were stored than when 1 item was stored. Critically, data-driven simulations revealed that the selectivity of spatial representations in the two-item condition could not be explained by models that propose that only a single item can exist in an active state at once. Thus, our findings demonstrate that multiple items can be concurrently represented in an active neural state. |
David W. Sutterer; Joshua J. Foster; John T. Serences; Edward K. Vogel; Edward Awh Alpha-band oscillations track the retrieval of precise spatial representations from long-term memory Journal Article In: Journal of Neurophysiology, vol. 122, no. 2, pp. 539–551, 2019. @article{Sutterer2019a, A hallmark of episodic memory is the phenomenon of mentally reexperiencing the details of past events, and a well-established concept is that the neuronal activity that mediates encoding is reinstated at retrieval. Evidence for reinstatement has come from multiple modalities, including functional magnetic resonance imaging and electroencephalography (EEG). These EEG studies have shed light on the time course of reinstatement but have been limited to distinguishing between a few categories. The goal of this work was to use recently developed experimental and technical approaches, namely continuous report tasks and inverted encoding models, to determine which frequencies of oscillatory brain activity support the retrieval of precise spatial memories. In experiment 1, we establish that an inverted encoding model applied to multivariate alpha topography tracks the retrieval of precise spatial memories. In experiment 2, we demonstrate that the frequencies and patterns of multivariate activity at study are similar to the frequencies and patterns observed during retrieval. These findings highlight the broad potential for using encoding models to characterize long-term memory retrieval. NEW & NOTEWORTHY Previous EEG work has shown that category-level information observed during encoding is recapitulated during memory retrieval, but studies with this time-resolved method have not demonstrated the reinstatement of feature-specific patterns of neural activity during retrieval. Here we show that EEG alpha-band activity tracks the retrieval of spatial representations from long-term memory. 
Moreover, we find considerable overlap between the frequencies and patterns of activity that track spatial memories during initial study and at retrieval. |
Gil Suzin; Ramit Ravona-Springer; Elissa L. Ash; Eddy J. Davelaar; Marius Usher Differences in semantic memory encoding strategies in young, healthy old and MCI patients Journal Article In: Frontiers in Aging Neuroscience, vol. 11, pp. 306, 2019. @article{Suzin2019, Associative processes, such as the encoding of associations between words in a list, can enhance episodic memory performance and are thought to deteriorate with age. Here, we examine the nature of age-related deficits in the encoding of associations, by using a free recall paradigm with visual arrays of objects. Fifty-five participants (26 young students; 20 cognitive healthy older adults; nine patients with Mild Cognitive Impairment, MCI) were shown multiple slides (experimental trials), each containing an array of nine common objects for recall. Most of the arrays contained three objects from three semantic categories, each. In the remaining arrays, the nine objects were unrelated. Eye fixations were also monitored during the viewing of the arrays, in a subset of the participants. While for young participants the immediate recall was higher for the semantically related arrays, this effect was diminished in healthy elderly and totally absent in MCI patients. Furthermore, only in the young group did the sequence of eye fixations show a semantic scanning pattern during encoding, even when the related objects were non-adjacent in the array. Healthy elderly and MCI patients were not influenced by the semantic relatedness of items during the array encoding, to the same extent as young subjects, as observed by a lack of (or reduced) semantic scanning. The results support a version of the encoding of the association aging-deficit hypothesis. |
Yuta Suzuki; Tetsuto Minami; Shigeki Nakauchi Pupil constriction in the glare illusion modulates the steady-state visual evoked potentials Journal Article In: Neuroscience, vol. 416, pp. 221–228, 2019. @article{Suzuki2019, The glare illusion enhances the perceived brightness of a central white area surrounded by a luminance gradient, without any actual change in light intensity. In this study, we measured brightness judgments and neurophysiological responses (electroencephalography (EEG) and pupil size) for several luminance contrast patterns of the glare illusion, to address the question of whether the illusory brightness change in the glare illusion is processed in the early visual cortex. We hypothesized that if the illusory brightness enhancement is created in the early stages of visual processing, the neural response should resemble the response to an actual change in light intensity. To test this, we observed the sustained visual cortical response of steady-state visual evoked potentials (SSVEPs) while participants watched flickering dots displayed in the central white area of glare-illusion stimuli of varied luminance contrast and of a control stimulus (no-glare condition). We found that the SSVEP amplitude was lower in the glare illusion than in the control condition, especially under high luminance contrast conditions. Furthermore, a probable mechanism for the reduced SSVEP amplitude at high luminance contrasts of the glare illusion is the greater pupil constriction, which decreases the amount of light entering the eye. Thus, the brightness enhancement in the glare illusion is already represented at the primary stage of visual processing, linked to the larger pupil constriction. |
Warrick Roseboom; Zafeirios Fountas; Kyriacos Nikiforou; David Bhowmik; Murray Shanahan; Anil K. Seth Activity in perceptual classification networks as a basis for human subjective time perception Journal Article In: Nature Communications, vol. 10, pp. 267, 2019. @article{Roseboom2019, Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including differentiating between scenes of walking around a busy city or sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience. |
Gal Rosenzweig; Yoram S. Bonneh Familiarity revealed by involuntary eye movements on the fringe of awareness Journal Article In: Scientific Reports, vol. 9, pp. 3029, 2019. @article{Rosenzweig2019, Involuntary eye movements during fixation of gaze are typically transiently inhibited following stimulus onset. This oculomotor inhibition (OMI), which includes microsaccades and spontaneous eye blinks, is modulated by stimulus saliency and anticipation, but it is currently unknown whether it is sensitive to familiarity. To investigate this, we measured the OMI while observers passively viewed a slideshow of one familiar and seven unfamiliar facial images presented briefly at 1 Hz in random order. Since the initial experiments indicated that OMI was occasionally insensitive to familiarity when the facial images were highly visible, and to prevent top-down strategies and potential biases, we limited visibility by backward masking, making the faces barely visible or at the fringe of awareness. Under these conditions, we found prolonged inhibition of both microsaccades and eye-blinks, as well as earlier onset of microsaccade inhibition with familiarity. These findings demonstrate, for the first time, the sensitivity of OMI to familiarity. Because this is based on involuntary eye movements and can be measured on the fringe of awareness and in passive viewing, our results provide direct evidence that OMI can be used as a novel physiological measure for studying hidden memories, with potential implications for health, legal, and security purposes. |
Lara Rösler; Matthias Gamer Freezing of gaze during action preparation under threat imminence Journal Article In: Scientific Reports, vol. 9, pp. 17215, 2019. @article{Roesler2019, When confronted with threatening stimuli, animals typically respond with freezing behavior characterized by reduced movement and heart rate deceleration. Freezing-like responses during threat anticipation have also been observed in humans and are associated with anxiety. Yet recent evidence suggests that freezing does not necessarily reflect helpless immobility but can also aid the preparation of a threat escape. To investigate which further behavioral responses human freezing encompasses, we presented 50 young adults (10 male) with aversive stimuli that could sometimes be avoided while measuring gaze, cardiovascular and electrodermal activity. In trials in which the threat could be escaped, participants displayed reduced heart rate, increased electrodermal activity and reduced visual exploration. Furthermore, heart rate deceleration and restricted visual exploration predicted the speed of flight responses. These results provide evidence for freezing behavior in measures of visual exploration and suggest that such responding is adaptive in preparing the subsequent escape of approaching threats. |
Lara Rösler; Marius Rubo; Matthias Gamer Artificial faces predict gaze allocation in complex dynamic scenes Journal Article In: Frontiers in Psychology, vol. 10, pp. 2877, 2019. @article{Roesler2019a, Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. GLMMs suggested substantial effects for both real and artificial faces in all conditions, though with relative differences in predictive power: artificial faces were less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes. |
Lars O. M. Rothkegel; Heiko H. Schütt; Hans A. Trukenbrod; Felix A. Wichmann; Ralf Engbert Searchers adjust their eye-movement dynamics to target characteristics in natural scenes Journal Article In: Scientific Reports, vol. 9, pp. 1635, 2019. @article{Rothkegel2019, When searching a target in a natural scene, it has been shown that both the target's visual properties and similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low-spatial frequencies are visible farther into the periphery than high-spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism. |
Douglas A. Ruff; Marlene R. Cohen Simultaneous multi-area recordings suggest that attention improves performance by reshaping stimulus representations Journal Article In: Nature Neuroscience, vol. 22, pp. 1669–1676, 2019. @article{Ruff2019, Visual attention dramatically improves individuals' ability to see and modulates the responses of neurons in every known visual and oculomotor area, but whether such modulations can account for perceptual improvements is unclear. We measured the relationship between populations of visual neurons, oculomotor neurons and behavior during detection and discrimination tasks. We found that neither of the two prominent hypothesized neuronal mechanisms underlying attention (which concern changes in information coding and the way sensory information is read out) provide a satisfying account of the observed behavioral improvements. Instead, our results are more consistent with the hypothesis that attention reshapes the representation of attended stimuli to more effectively influence behavior. Our results suggest a path toward understanding the neural underpinnings of perception and cognition in health and disease by analyzing neuronal responses in ways that are constrained by behavior and interactions between brain areas. |
Koen Rummens; Bilge Sayim Disrupting uniformity: Feature contrasts that reduce crowding interfere with peripheral word recognition Journal Article In: Vision Research, vol. 161, pp. 25–35, 2019. @article{Rummens2019, Peripheral word recognition is impaired by crowding, the harmful influence of surrounding objects (flankers) on target identification. Crowding is usually weaker when the target and the flankers differ (for example in color). Here, we investigated whether reducing crowding at syllable boundaries improved peripheral word recognition. In Experiment 1, a target letter was flanked by single letters to the left and right and presented at 8° in the lower visual field. Target and flankers were either the same or different in regard to contrast polarity, color, luminance, and combined color/luminance. Crowding was reduced when the target differed from the flankers in contrast polarity, but not in any of the other conditions. Using the same color and luminance values as in Experiment 1, we measured recognition performance (speed and accuracy) for uniform (e.g., all letters black), congruent (e.g., alternating black and white syllables), and incongruent (e.g., alternating black and white non-syllables) words in Experiment 2. Participants verbally reported the target word, briefly displayed at 8° in the lower visual field. Congruent and incongruent words were recognized more slowly than uniform words in the opposite contrast polarity condition, but not in the other conditions. Our results show that the same feature contrast between the target and the flankers that reduced crowding impaired peripheral word recognition when applied to syllables and non-syllabic word parts. We suggest that a potential advantage of reduced crowding at syllable boundaries in word recognition is counteracted by the disruption of word uniformity. |
N. C. C. Russell; S. G. Luke; R. A. Lundwall; M. South Not so fast: Autistic traits and anxious apprehension in real-world visual search scenarios Journal Article In: Journal of Autism and Developmental Disorders, vol. 49, pp. 1795–1806, 2019. @article{Russell2019, Autistic individuals have shown superior performance in simple, albeit difficult, visual search tasks. We compared eye movements and behavioral markers across two visual search tasks based on real-world scenes in young adults. Context-aided search increased speed and accuracy for all groups. Autistic adults (n = 29) were on average consistently slower and less accurate than a non-anxious neurotypical comparison group (n = 48), but similar to neurotypical adults with elevated anxious apprehension (n = 26). Dimensional analyses suggest that autism traits, not anxious apprehension, are most associated with search efficiency for naturalistic stimuli, suggesting that autistic individuals can effectively integrate contextual information to aid visual search, but that advantages in less visually complex tasks, reported in previous studies, may not transfer to situations involving real-world scenes. |
Nathan Ryckman; Martina Bandzo; Yichen Qian; Anthony J. Lambert Sub-threshold cuing: Saccadic responses to low-contrast, peripheral, transient visual landmark cues Journal Article In: Consciousness and Cognition, vol. 74, pp. 1–14, 2019. @article{Ryckman2019, Dorsal stream visual encoding was studied in three experiments, by examining effects of peripheral landmark cues on eye movements. Stimulus features and task structure were tailored to physiological and functional characterisations of the dorsal visual stream. Sub-discriminable peripheral stimuli served as landmark cue stimuli. In Experiments 1 and 2, orienting behaviour in response to cues and targets differed for participants with relatively low and relatively high peripheral contrast thresholds. In Experiment 1, low, but not high-threshold participants oriented towards landmark cues that could not be discriminated consciously. However, in Experiment 3, high-, but not low-threshold participants oriented towards near threshold cues. Hence, under appropriate conditions both groups of participants oriented in response to brief, low-contrast, peripheral information. We propose that landmark cueing may provide a useful tool for measuring individual differences in dorsal stream processing and dynamic aspects of visual functioning and awareness. |
Amirsaman Sajad; David C. Godlove; Jeffrey D. Schall Cortical microcircuitry of performance monitoring Journal Article In: Nature Neuroscience, vol. 22, pp. 265–274, 2019. @article{Sajad2019, The medial frontal cortex enables performance monitoring, indexed by the error-related negativity (ERN) and manifested by performance adaptations. We recorded electroencephalogram over and neural spiking across all layers of the supplementary eye field, an agranular cortical area, in monkeys performing a saccade-countermanding (stop signal) task. Neurons signaling error production, feedback predicting reward gain or loss, and delivery of fluid reward had different spike widths and were concentrated differently across layers. Neurons signaling error or loss of reward were more common in layers 2 and 3 (L2/3), whereas neurons signaling gain of reward were more common in layers 5 and 6 (L5/6). Variation of error- and reinforcement-related spike rates in L2/3 but not L5/6 predicted response time adaptation. Variation in error-related spike rate in L2/3 but not L5/6 predicted ERN magnitude. These findings reveal novel features of cortical microcircuitry supporting performance monitoring and confirm one cortical source of the ERN. |
Emilio Salinas; Benjamin R. Steinberg; Lauren A. Sussman; Sophia M. Fry; Christopher K. Hauser; Denise D. Anderson; Terrence R. Stanford Voluntary and involuntary contributions to perceptually guided saccadic choices resolved with millisecond precision Journal Article In: eLife, vol. 8, pp. 1–22, 2019. @article{Salinas2019, In the antisaccade task, which is considered a sensitive assay of cognitive function, a salient visual cue appears and the participant must look away from it. This requires sensory, motor-planning, and cognitive neural mechanisms, but what are their unique contributions to performance, and when exactly are they engaged? Here, by manipulating task urgency, we generate a psychophysical curve that tracks the evolution of the saccadic choice process with millisecond precision, and resolve the distinct contributions of reflexive (exogenous) and voluntary (endogenous) perceptual mechanisms to antisaccade performance over time. Both progress extremely rapidly, the former driving the eyes toward the cue early on (~100 ms after cue onset) and the latter directing them away from the cue ~40 ms later. The behavioral and modeling results provide a detailed, dynamical characterization of attentional and oculomotor capture that is not only qualitatively consistent across participants, but also indicative of their individual perceptual capacities. |
Viljami R. Salmela; Kaisu Ölander; Ilkka Muukkonen; Paul M. Bays Recall of facial expressions and simple orientations reveals competition for resources at multiple levels of the visual hierarchy Journal Article In: Journal of Vision, vol. 19, no. 3, pp. 1–13, 2019. @article{Salmela2019, Many studies of visual working memory have tested humans' ability to reproduce primary visual features of simple objects, such as the orientation of a grating or the hue of a color patch, following a delay. A consistent finding of such studies is that precision of responses declines as the number of items in memory increases. Here we compared visual working memory for primary features and high-level objects. We presented participants with memory arrays consisting of oriented gratings, facial expressions, or a mixture of both. Precision of reproduction for all facial expressions declined steadily as the memory load was increased from one to five faces. For primary features, this decline and the specific distributions of error observed have been parsimoniously explained in terms of neural population codes. We adapted the population coding model for circular variables to the non-circular and bounded parameter space used for expression estimation. Total population activity was held constant according to the principle of normalization and the intensity of expression was decoded by drawing samples from the Bayesian posterior distribution. The model fit the data well, showing that principles of population coding can be applied to model memory representations at multiple levels of the visual hierarchy. When both gratings and faces had to be remembered, an asymmetry was observed. Increasing the number of faces decreased precision of orientation recall, but increasing the number of gratings did not affect recall of expression, suggesting that memorizing faces involves the automatic encoding of low-level features, in addition to higher-level expression information. |
Jason M. Samonds; Veronica Choi; Nicholas J. Priebe Mice discriminate stereoscopic surfaces without fixating in depth Journal Article In: Journal of Neuroscience, vol. 39, no. 41, pp. 8024–8037, 2019. @article{Samonds2019, Stereopsis is a ubiquitous feature of primate mammalian vision, but little is known about if and how rodents such as mice use stereoscopic vision. We used random dot stereograms to test for stereopsis in male and female mice, and they were able to discriminate near from far surfaces over a range of disparities, with diminishing performance for small and large binocular disparities. Based on two-photon measurements of disparity tuning, the range of disparities represented in the visual cortex aligns with the behavior and covers a broad range of disparities. When we examined their binocular eye movements, we found that, unlike primates, mice did not systematically vary relative eye positions or use vergence eye movements when presented with different disparities. Nonetheless, the representation of disparity tuning was wide enough to capture stereoscopic information over a range of potential vergence angles. Although mice share fundamental characteristics of stereoscopic vision with primates and carnivores, their lack of disparity-dependent vergence eye movements and wide neuronal representation suggests that they may use a distinct strategy for stereopsis. |
Morteza Sarafyazd; Mehrdad Jazayeri Hierarchical reasoning by neural circuits in the frontal cortex Journal Article In: Science, vol. 364, pp. 1–11, 2019. @article{Sarafyazd2019, Humans process information hierarchically. In the presence of hierarchies, sources of failures are ambiguous. Humans resolve this ambiguity by assessing their confidence after one or more attempts. To understand the neural basis of this reasoning strategy, we recorded from dorsomedial frontal cortex (DMFC) and anterior cingulate cortex (ACC) of monkeys in a task in which negative outcomes were caused either by misjudging the stimulus or by a covert switch between two stimulus-response contingency rules. We found that both areas harbored a representation of evidence supporting a rule switch. Additional perturbation experiments revealed that ACC functioned downstream of DMFC and was directly and specifically involved in inferring covert rule switches. These results reveal the computational principles of hierarchical reasoning, as implemented by cortical circuits. |
Sarah E. Schwettmann; Joshua B. Tenenbaum; Nancy Kanwisher Invariant representations of mass in the human brain Journal Article In: eLife, vol. 8, pp. 1–26, 2019. @article{Schwettmann2019, An intuitive understanding of physical objects and events is critical for successfully interacting with the world. Does the brain achieve this understanding by running simulations in a mental physics engine, which represents variables such as force and mass, or by analyzing patterns of motion without encoding underlying physical quantities? To investigate, we scanned participants with fMRI while they viewed videos of objects interacting in scenarios indicating their mass. Decoding analyses in brain regions previously implicated in intuitive physical inference revealed mass representations that generalized across variations in scenario, material, friction, and motion energy. These invariant representations were found during tasks without action planning, and tasks focusing on an orthogonal dimension (object color). Our results support an account of physical reasoning where abstract physical variables serve as inputs to a forward model of dynamics, akin to a physics engine, in parietal and frontal cortex. |
Hannah Scott; Jonathan P. Batten; Gustav Kuhn Why are you looking at me? It's because I'm talking, but mostly because I'm staring or not doing much Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 1, pp. 109–118, 2019. @article{Scott2019, Our attention is particularly driven toward faces, especially the eyes, and there is much debate over the factors that modulate this social attentional orienting. Most of the previous research has presented faces in isolation, and we tried to address this shortcoming by measuring people's eye movements whilst they observe more naturalistic and varied social interactions. Participants' eye movements were monitored whilst they watched three different types of social interactions (monologue, manual activity, active attentional misdirection), which were either accompanied by the corresponding audio as speech or by silence. Our results showed that (1) participants spent more time looking at the face when the person was giving a monologue, than when he/she was carrying out manual activities, and in the latter case they spent more time fixating on the person's hands. (2) Hearing speech significantly increases the amount of time participants spent looking at the face (this effect was relatively small), although this was not accounted for by any increase in mouth-oriented gaze. (3) Participants spent significantly more time fixating on the face when direct eye contact was established, and this drive to establish eye contact was significantly stronger in the manual activities than during the monologue. These results highlight people's strategic top-down control over when they attend to faces and the eyes, and support the view that we use our eyes to signal non-verbal information. |
Coltan Scrivner; Kyoung Whan Choe; Joseph Henry; Muxuan Lyu; Dario Maestripieri; Marc G. Berman Violence reduces attention to faces and draws attention to points of contact Journal Article In: Scientific Reports, vol. 9, pp. 17779, 2019. @article{Scrivner2019, Although violence is a frequently researched topic, little is known about how different social features influence information gathering from violent interactions. Regions of an interaction that provide contextual information should receive more attention. We predicted the most informative features of a violent social interaction would be faces, points of contact, and objects being held. To test this, we tracked the eyes of 90 participants as they viewed images of social interactions that varied with respect to violence. When viewing violent interactions, participants attended significantly less to faces and significantly more to points of contact. Moreover, first-fixation analysis suggests that some of these biases are present from the beginning of scene-viewing. These findings are the first to demonstrate the visual relevance of faces and contact points in gathering information from violent social interactions. These results also question the attentional dominance of faces in active social scenes, highlighting the importance of using a variety of stimuli and contexts in social cognition research. |
Christopher Sears; Leanne Quigley; Amanda Fernandez; Kristin Newman; Keith Dobson The reliability of attentional biases for emotional images measured using a free-viewing eye-tracking paradigm Journal Article In: Behavior Research Methods, vol. 51, no. 6, pp. 2748–2760, 2019. @article{Sears2019, Cognitive theories of anxiety disorders and depression posit that attentional biases play a role in the development, maintenance, and recurrence of these disorders. Several paradigms have been used to examine attentional biases in anxiety and depression, but information on the reliability of different attentional bias indices is limited. In this study we examined the internal consistency and 6-month test–retest reliability of attentional bias indices derived from a free-viewing eye-tracking paradigm. Participants completed two versions of an eye-tracking task—one that used naturalistic images as stimuli, and one that used face images. In both tasks, participants viewed displays of four images, each display consisting of one threat image, one sad image, one positive/happy image, and one neutral image. The internal consistency of the fixation indices (dwell time and number of fixations) for threat, sad, and positive images over the full 8-s display was moderate to excellent. When the 8-s display was divided into 2-s intervals, the dwell times for the 0- to 2-s and 2- to 4-s intervals showed lower reliability, particularly for the face images. The attentional bias indices for the naturalistic images showed adequate to good stability over the test–retest period, whereas the test–retest reliability estimates for the face images were in the low to moderate range. The implications of these results for attentional bias research are discussed. |
Ehsan Sedaghat-Nejad; David J. Herzfeld; Reza Shadmehr Reward prediction error modulates saccade vigor Journal Article In: Journal of Neuroscience, vol. 39, no. 25, pp. 5010–5017, 2019. @article{SedaghatNejad2019a, Movement vigor, defined as the reciprocal of the latency from availability of reward to its acquisition, changes with reward magnitude: Movements exhibit shorter reaction time and increased velocity when they are directed toward more rewarding stimuli. This invigoration may be due to release of dopamine before movement onset, which has been shown to be modulated by events that signal reward prediction error (RPE). Here, we generated an RPE event in the milliseconds before movement onset and tested whether there was a relationship between RPE and vigor. Human subjects (both sexes) made saccades toward an image. During execution of the primary saccade, we probabilistically changed the position and content of that image, encouraging a secondary saccade. On some trials, the content of the secondary image was more valuable than the first image, resulting in a positive RPE (+RPE) event that preceded the secondary saccade. On other trials, this content was less valuable (−RPE event). We found that reaction time of the secondary saccade was affected in an orderly fashion by the magnitude and direction of the preceding RPE event: The most vigorous saccades followed the largest +RPE, whereas the least vigorous saccades followed the largest −RPE. Presence of the secondary saccade indicated that the primary saccade had experienced a movement error, inducing trial-to-trial adaptation. However, this learning from movement error was not modulated by the RPE event. The data suggest that RPE events, which are thought to transiently alter the release of dopamine, modulate the vigor of the ensuing movement. |
Tatjana Seizova-Cajic; Nika Adamian; Marianne Duyck; Patrick Cavanagh Motion-induced scotoma Journal Article In: Perception, vol. 48, no. 2, pp. 115–137, 2019. @article{SeizovaCajic2019, We investigated artificial scotomas created when a moving object instantaneously crossed a gap, jumping ahead and continuing its otherwise smooth motion. Gaps of up to 5.1 degrees of visual angle, presented at 18° eccentricity, either closed completely or appeared much shorter than when the same gap was crossed by two-point apparent motion, or crossed more slowly, mimicking occlusion. Prolonged exposure to motion trajectories with a gap in most cases led to further shrinking of the gap. The same gap-shrinking effect has previously been observed in touch. In both sensory modalities, it implicates facilitation among codirectional local motion detectors and motion neurons with receptive fields larger than the gap. Unlike stimuli that simply deprive a receptor surface of input, suggesting it is insentient, our motion pattern skips a section in a manner that suggests a portion of the receptor surface has been excised, and the remaining portions stitched back together. This makes it a potentially useful tool in the experimental study of plasticity in sensory maps. |
M. Senoussi; James C. Moreland; Niko A. Busch; Laura Dugué Attention explores space periodically at the theta frequency Journal Article In: Journal of Vision, vol. 19, no. 5, pp. 1–17, 2019. @article{Senoussi2019, Voluntary attention is at the core of a wide variety of cognitive functions. Attention can be oriented to and sustained at a location or reoriented in space to allow processing at other locations—critical in an ever-changing environment. Numerous studies have investigated attentional orienting in time and space, but little is known about the spatiotemporal dynamics of attentional reorienting. Here we explicitly manipulated attentional reorienting using a cuing procedure in a two-alternative forced-choice orientation-discrimination task. We interrogated attentional distribution by flashing two probe stimuli with various delays between the precue and target stimuli. Then we used the probabilities that both probes and neither probe were correctly reported to solve a second-degree equation, which estimates the report probability at each probe location. We demonstrated that attention reorients periodically at ~4 Hz (theta) between the two stimulus locations. We further characterized the processing dynamics at each stimulus location, and demonstrated that attention samples each location periodically at ~11 Hz (alpha). Finally, simulations support our findings and show that this method is sufficiently powered, making it a valuable tool for studying the spatiotemporal dynamics of attention. |
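The second-degree equation mentioned in the Senoussi et al. abstract can be sketched as follows: if the two probe locations are reported independently with probabilities p1 and p2, the observed probabilities of reporting both probes and neither probe fix their sum and product, so p1 and p2 are the two roots of a quadratic. This is an illustrative reconstruction under an independence assumption; the function name and exact formulation are ours, not the authors'.

```python
import math

def probe_report_probabilities(p_both, p_neither):
    # Assuming independent reports: p1 * p2 = p_both and
    # (1 - p1) * (1 - p2) = p_neither, hence p1 + p2 = 1 + p_both - p_neither.
    # p1 and p2 are then the roots of x^2 - s*x + p_both = 0.
    s = 1 + p_both - p_neither
    disc = s * s - 4 * p_both
    if disc < 0:
        raise ValueError("probabilities inconsistent with independent reports")
    root = math.sqrt(disc)
    return (s + root) / 2, (s - root) / 2

# Example: p1 = 0.8 and p2 = 0.3 give P(both) = 0.24, P(neither) = 0.14
p1, p2 = probe_report_probabilities(0.24, 0.14)  # ≈ (0.8, 0.3)
```

Repeating such an estimate at each precue-target delay would yield per-location report probabilities whose oscillation over time can then be analyzed for periodicity.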
Neda Shahidi; Ariana R. Andrei; Ming Hu; Valentin Dragoi High-order coordination of cortical spiking activity modulates perceptual accuracy Journal Article In: Nature Neuroscience, vol. 22, pp. 1148–1158, 2019. @article{Shahidi2019, The accurate relay of electrical signals within cortical networks is key to perception and cognitive function. Theoretically, it has long been proposed that temporal coordination of neuronal spiking activity controls signal transmission and behavior. However, whether and how temporally precise neuronal coordination in population activity influences perception are unknown. Here, we recorded populations of neurons in early and mid-level visual cortex (areas V1 and V4) simultaneously to discover that the precise temporal coordination between the spiking activity of three or more cells carries information about visual perception in the absence of firing rate modulation. The accuracy of perceptual responses correlated with high-order spiking coordination within V4, but not V1, and with feedforward coordination between V1 and V4. These results indicate that while visual stimuli are encoded in the discharge rates of neurons, perceptual accuracy is related to temporally precise spiking coordination within and between cortical networks. |
Signy Sheldon; Kelly Cool; Nadim El-Asmar The processes involved in mentally constructing event- and scene-based autobiographical representations Journal Article In: Journal of Cognitive Psychology, vol. 31, pp. 261–275, 2019. @article{Sheldon2019, Autobiographical experiences can be mentally constructed as generalised events or as spatial scenes. We investigated the commonalities and distinctions in using episodic and visual imagery processes to imagine autobiographical scenarios as events or scenes. Participants described personal scenarios framed as future events or spatial scenes. We analyzed the number and type of episodic details within the descriptions. To measure imagery processing, we monitored eye movements and examined the impact of viewing an imagery-disrupting stimulus (Dynamic Visual Noise; DVN) while these descriptions were made. We found that events were described with more generalised details and scenes with more perceptual details. DVN reduced the number of episodic details generated for all descriptions, and eye fixation rates negatively correlated with the number of these details that were generated. This suggests that different content is used to imagine event- or scene-based experiences and that imagery contributes similarly to the episodic specificity of these imaginations. |
Risako Shirai; Hayaki Banno; Hirokazu Ogawa Trypophobic images induce oculomotor capture and inhibition Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 420–432, 2019. @article{Shirai2019, It is known that unpleasant images capture our attention. However, the causes of the emotions evoked by these images can vary. Trypophobia is the fear of clustered objects. A recent study claimed that this phobia is elicited by the specific power spectrum of such images. In the present study, we measured saccade trajectories to examine how trypophobic images possessing a characteristic power spectrum affect visual attention. The participants' task was to make a saccade in the direction indicated by a cue. Four irrelevant images with different emotional content were presented as peripheral distractors at cue–image onset asynchronies of 0 ms, 150 ms, and 450 ms. The irrelevant images consisted of trypophobic, fearful, or neutral scenes. The presence of saccade trajectory deviations induced by trypophobic images suggests that intact trypophobic images oriented attention to their location. Moreover, when the images were phase scrambled, the saccade curved away from the trypophobic images, suggesting that trypophobic power spectra also triggered attentional capture, which was weak and then led to inhibition. These findings suggest that not only the power spectral characteristics but also the gist of a trypophobic image affect attentional deployment. |
Shirin Vafaei Shooshtari; Jamal Esmaily Sadrabadi; Zahra Azizi; Reza Ebrahimpour Confidence representation of perceptual decision by EEG and eye data in a random dot motion task Journal Article In: Neuroscience, vol. 406, pp. 510–527, 2019. @article{Shooshtari2019, The confidence of a decision can be considered the internal estimate of decision accuracy. This variable has been studied extensively using different types of recorded data, such as behavioral, electroencephalography (EEG), eye, and electrophysiology data. Although the value of the reported confidence is considered one of the most important parameters in decision making, the confidence-reporting phase might be considered a restrictive element in investigating the decision process. Thus, decision confidence should be extracted by means of other available types of information. Here, we propose eight confidence-related properties in EEG and eye data that are significantly descriptive of the defined confidence levels in a random dot motion (RDM) task. Indeed, our proposed EEG and eye-data properties are capable of recognizing more than nine distinct levels of confidence. Among our proposed features, the latency of the maximum pupil diameter during stimulus presentation was the most strongly associated with the confidence levels. Through a time-dependent analysis of these features, we identified the time interval of 500–600 ms after stimulus onset as an important window for correlating features with the confidence levels. |
Johanna Elisa Silberg; Ioannis Agtzidis; Mikhail Startsev; Teresa Fasshauer; Karen Silling; Andreas Sprenger; Michael Dorr; Rebekka Lencer Free visual exploration of natural movies in schizophrenia Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 269, no. 4, pp. 407–418, 2019. @article{Silberg2019, Background: Eye tracking dysfunction (ETD) observed with standard pursuit stimuli represents a well-established biomarker for schizophrenia. How ETD may manifest during free visual exploration of real-life movies is unclear. Methods: Eye movements were recorded (EyeLink®1000) while 26 schizophrenia patients and 25 healthy age-matched controls freely explored nine uncut movies and nine pictures of real-life situations for 20 s each. Subsequently, participants were shown still shots of these scenes to decide whether they had explored them as movies or pictures. Participants were additionally assessed on standard eye-tracking tasks. Results: Patients made smaller saccades (movies (p = 0.003), pictures (p = 0.002)) and had a stronger central bias (movies and pictures (p < 0.001)) than controls. In movies, patients' exploration behavior was less driven by image-defined, bottom-up stimulus saliency than controls (p < 0.05). Proportions of pursuit tracking on movies differed between groups depending on the individual movie (group*movie p = 0.011, movie p < 0.001). Eye velocity on standard pursuit stimuli was reduced in patients (p = 0.029) but did not correlate with pursuit behavior on movies. Additionally, patients obtained lower rates of correctly identifying still shots as movies or pictures (p = 0.046). Conclusion: Our results suggest a restricted centrally focused visual exploration behavior in patients not only on pictures, but also on movies of real-life scenes. 
While ETD observed in the laboratory cannot be directly transferred to natural viewing conditions, these alterations support a model of impairments in motion information processing in patients, resulting in a reduced ability to perceive moving objects and less saliency-driven exploration behavior, presumably contributing to alterations in the perception of the natural environment. |
Čeněk Šašinka; Zdeněk Stachoň; Petr Kubíček; Sascha Tamm; Aleš Matas; Markéta Kukaňová The impact of global/local bias on task-solving in map-related tasks employing extrinsic and intrinsic visualization of risk uncertainty maps Journal Article In: The Cartographic Journal, vol. 56, no. 2, pp. 175–191, 2019. @article{Sasinka2019, The form of visual representation affects both the way in which the visual representation is processed and the effectiveness of this processing. Different forms of visual representation may require the employment of different cognitive strategies in order to solve a particular task; at the same time, the different representations vary as to the extent to which they correspond with an individual's preferred cognitive style. The present study employed a Navon-type task to learn about the occurrence of global/local bias. The research was based on close interdisciplinary cooperation between psychology and cartography. Several different types of tasks were designed involving avalanche hazard maps with intrinsic/extrinsic visual representations, each of them employing different types of graphic variables representing the level of avalanche hazard and avalanche hazard uncertainty. The research sample consisted of two groups of participants, each of which was provided with a different form of visual representation of identical geographical data, such that the representations could be regarded as ‘informationally equivalent'. The first phase of the research consisted of two correlation studies, the first involving subjects with a high degree of map literacy (students of cartography) (intrinsic method: N = 35; extrinsic method: N = 37). The second study was performed after the results of the first study were analyzed. 
The second group of participants consisted of subjects with a low expected degree of map literacy (students of psychology; intrinsic method: N = 35; extrinsic method: N = 27). The first study revealed a statistically significant moderate correlation between the students' response times in extrinsic visualization tasks and their response times in a global subtest (r = 0.384, p < 0.05); likewise, a statistically significant moderate correlation was found between the students' response times in intrinsic visualization tasks and their response times in the local subtest (r = 0.387, p < 0.05). At the same time, no correlation was found between the students' performance in the local subtest and their performance in extrinsic visualization tasks, or between their scores in the global subtest and their performance in intrinsic visualization tasks. The second correlation study did not confirm the results of the first correlation study (intrinsic visualization/‘small figures test': r = 0.221; extrinsic visualization/‘large figures test': r = 0.135). The first phase of the research, where the data was subjected to statistical analysis, was followed by a comparative eye-tracking study, whose aim was to provide more detailed insight into the cognitive strategies employed when solving map-related tasks. More specifically, the eye-tracking study was expected to be able to detect possible differences between the cognitive patterns employed when solving extrinsic as opposed to intrinsic visualization tasks. The results of an exploratory eye-tracking data analysis support the hypothesis of different strategies of visual information processing being used in reaction to different types of visualization. |
Bilge Sayim; Henry Taylor Letters lost: Capturing appearance in crowded peripheral vision reveals a new kind of masking Journal Article In: Psychological Science, vol. 30, no. 7, pp. 1082–1086, 2019. @article{Sayim2019, Peripheral vision is strongly limited by crowding, the deleterious influence of flanking items on target perception. Distinguishing what is seen from what is merely inferred in crowding is difficult because task demands and prior knowledge may influence observers' reports. Here, we used a standard identification task in which participants were susceptible to these influences, and to minimize them, we used a free-report-and-drawing paradigm. Three letters were presented in the periphery. In Experiment 1, 10 participants were asked to identify the central target letter. In Experiment 2, 25 participants freely named and drew what they saw. When three identical letters were presented, performance was almost perfect in Experiment 1, but it was very poor in Experiment 2, in which most participants reported only two letters. Our study reveals limitations of standard crowding paradigms and uncovers a hitherto unrecognized effect that we call redundancy masking. |
Lukas F. Schaeffner; Andrew E. Welchman The mixed-polarity benefit of stereopsis arises in early visual cortex Journal Article In: Journal of Vision, vol. 19, no. 2, pp. 1–14, 2019. @article{Schaeffner2019, Depth perception is better when observers view stimuli containing a mixture of bright and dark visual features. It is currently unclear where in the visual system sensory processing benefits from the availability of different contrast polarity. To address this question, we applied transcranial magnetic stimulation to the visual cortex to modulate normal neural activity during processing of single- or mixed-polarity random-dot stereograms. In line with previous work, participants gave significantly better depth judgments for mixed-polarity stimuli. Stimulation of early visual cortex (V1/V2) significantly increased this benefit for mixed-polarity stimuli, and it did not affect performance for single-polarity stimuli. Stimulation of disparity-responsive areas V3a and LO had no effect on perception. Our findings show that disparity processing in early visual cortex gives rise to the mixed-polarity benefit. This is consistent with computational models of stereopsis at the level of V1 that produce a mixed-polarity benefit. |
Kimberly B. Schauder; Woon Ju Park; Yuliy Tsank; Miguel P. Eckstein; Duje Tadin; Loisa Bennetto Initial eye gaze to faces and its functional consequence on face identification abilities in autism spectrum disorder Journal Article In: Journal of Neurodevelopmental Disorders, vol. 11, no. 1, pp. 1–20, 2019. @article{Schauder2019, Background: Autism spectrum disorder (ASD) is a neurodevelopmental disorder defined and diagnosed by core deficits in social communication and the presence of restricted and repetitive behaviors. Research on face processing suggests deficits in this domain in ASD but includes many mixed findings regarding the nature and extent of these differences. The first eye movement to a face has been shown to be highly informative and sufficient to achieve high performance in face identification in neurotypical adults. The current study focused on this critical moment shown to be essential in the process of face identification. Methods: We applied an established eye-tracking and face identification paradigm to comprehensively characterize the initial eye movement to a face and test its functional consequence on face identification performance in adolescents with and without ASD (n = 21 per group), and in neurotypical adults. Specifically, we presented a series of faces and measured the landing location of the first saccade to each face, while simultaneously measuring their face identification abilities. Then, individuals were guided to look at specific locations on the face, and we measured how face identification performance varied as a function of that location. Adolescent participants also completed a more traditional measure of face identification which allowed us to more fully characterize face identification abilities in ASD. 
Results: Our results indicate that the location of the initial look to faces and face identification performance for briefly presented faces are intact in ASD, ruling out the possibility that deficits in face perception, at least in adolescents with ASD, begin with the initial eye movement to the face. However, individuals with ASD showed impairments on the more traditional measure of face identification. Conclusion: Together, the observed dissociation between initial, rapid face perception processes, and other measures of face perception offers new insights and hypotheses related to the timing and perceptual complexity of face processing and how these specific aspects of face identification may be disrupted in ASD. |
Sebastian Schindler; Maximilian Bruchmann; Florian Bublatzky; Thomas Straube Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations Journal Article In: Social Cognitive and Affective Neuroscience, vol. 14, no. 5, pp. 493–503, 2019. @article{Schindler2019, In neuroscientific studies, the naturalness of face presentation differs; a third of published studies makes use of close-up full-coloured faces, a third uses close-up grey-scaled faces and another third employs cutout grey-scaled faces. Whether and how these methodological choices affect emotion-sensitive components of the event-related brain potentials (ERPs) is yet unclear. Therefore, this pre-registered study examined ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions, while attention was directed to no-face oddballs. Results revealed no interaction of face naturalness and emotion for any ERP component, but showed large main effects for both factors. Specifically, fearful faces and decreasing face naturalness elicited substantially enlarged N170 and early posterior negativity amplitudes, and lower face naturalness also resulted in a larger P1. This pattern reversed for the LPP, showing linear increases in LPP amplitudes with increasing naturalness. We observed no interaction of emotion with face naturalness, which suggests that face naturalness and emotion are decoded in parallel at these early stages. Researchers interested in strong modulations of early components should make use of cutout grey-scaled faces, while those interested in a pronounced late positivity should use close-up coloured faces. |
Tobias Schoeberl; Ulrich Ansorge The impact of temporal contingencies between cue and target onset on spatial attentional capture by subliminal onset cues Journal Article In: Psychological Research, vol. 83, no. 7, pp. 1416–1425, 2019. @article{Schoeberl2019, Prior research suggested that attentional capture by subliminal abrupt onset cues is stimulus driven. In these studies, reactions were faster when a searched-for target appeared at the location of a preceding abrupt onset cue than when the same target appeared at a location away from the cue (cueing effect), although the earlier onset of the cue was subliminal: it appeared as one of three horizontally aligned placeholders with a lead time too short to be noticed by the participants. Because the cueing effects seemed to be independent of top–down search settings for target features, the effect was attributed to stimulus-driven attentional capture. However, prior studies did not investigate whether participants experienced the cues as useful temporal warning signals and, therefore, attended to the cues in a top–down way. Here, we tested the extent to which search settings based on temporal contingencies between cue and target onset could be responsible for spatial cueing effects. Cueing effects were replicated, and we showed that removing temporal contingencies between cue and target onset did not diminish the cueing effects (Experiments 1 and 2). Neither presenting the cues in the majority of trials after target onset (Experiment 1) nor presenting cue and target unrelated to one another (Experiment 2) led to a significant reduction of the spatial cueing effects. Results thus support the hypothesis that the subliminal cues captured attention in a stimulus-driven way. |
Martin Schoemann; Michael Schulte-Mecklenbeck; Frank Renkewitz; Stefan Scherbaum Forward inference in risky choice: Mapping gaze and decision processes Journal Article In: Journal of Behavioral Decision Making, vol. 32, no. 5, pp. 521–535, 2019. @article{Schoemann2019, The study of cognitive processes is built on a close mapping between three components: overt gaze behavior, overt choice, and covert processes. To validate this overt–covert mapping in the domain of decision-making, we collected eye-movement data during decisions between risky gamble problems. Applying a forward inference paradigm, participants were instructed to use specific decision strategies to solve those gamble problems (maximizing expected values or applying different choice heuristics) during which gaze behavior was recorded. We revealed differences between overt behavior, as indicated by eye movements, and covert decision processes, instructed by the experimenter. However, our results show that the overt–covert mapping is for some eye-movement measures not as close as expected by current decision theory, and hence question reverse inference as being prone to fallacies due to a violation of its prerequisite, that is, a close overt–covert mapping. We propose a framework to rehabilitate reverse inference. |
Daniel E. Schoth; Jun Wu; Jin Zhang; Xiaoying Guo; Christina Liossi Eye-movement behaviours when viewing real-world pain-related images Journal Article In: European Journal of Pain, vol. 23, pp. 945–956, 2019. @article{Schoth2019, Background: Pain‐related cues are evolutionarily primed to capture attention, although evidence of attentional biases towards pain‐related information is mixed in healthy individuals. The present study explores whether healthy individuals show significantly different eye‐movement behaviours when viewing real‐world pain‐related scenes compared to neutral scenes. The effect of manipulating via written information the threat value of the pain‐related scenes on eye‐movement behaviours was also assessed. Methods: Participants were randomized to threatening (n = 28) and non‐threatening (n = 27) information conditions. All completed a free‐viewing task with real‐world pain‐related and neutral images while their eye movements were recorded. Results: Participants made significantly fewer fixations of significantly longer duration when viewing pain‐related images compared to neutral images. No significant differences were found between threatening and non‐threatening information groups in their pattern of eye movements. Conclusions: This study shows that healthy individuals demonstrate attentional biases to pain‐related real‐world complex images compared to neutral images. Future research is needed to establish the implications of these biases, particularly in the context of acute pain, on the onset and/or subsequent maintenance of chronic pain conditions. Significance: Healthy individuals show different eye‐movement behaviours when viewing pain‐related scenes than neutral scenes, supporting evolutionary accounts of pain. Implications for the onset and/or maintenance of chronic pain need to be explored. |
Volkhard Schroth; Roland Joos; Ewald Alshuth; Wolfgang Jaschinski Effects of aligning prisms on the objective and subjective fixation disparity in far distance Journal Article In: Journal of Eye Movement Research, vol. 12, no. 4, pp. 1–12, 2019. @article{Schroth2019, Fixation disparity (FD) refers to a suboptimal condition of binocular vision. The oculomotor aspect of FD refers to a misadjustment in the vergence angle between the two visual axes that is measured in research with eye trackers (objective fixation disparity, oFD). The sensory aspect is psychophysically tested using dichoptic nonius lines (subjective fixation disparity, sFD). Some optometrists use nonius tests to determine prisms for constant wear, aiming to align the eyes. However, they do not (yet) use eye trackers. We investigate the effect of aligning prisms on oFD and sFD for a 60 sec exposure to prisms determined with the clinically established Cross test in far-distance vision. Without prisms, both types of FD were correlated with the aligning prism, while with prisms the FD was close to zero (these analyses included all base-in and base-out cases). The effect of base-in prisms on oFD was proportional to the amount of the aligning prism for the present 60 sec exposure, similar to that for the 2–5 sec exposure in Schmid et al. (2018). Thus, within 1 minute of prism exposure, no substantial vergence adaptation seems to occur under the present test conditions. Further studies may investigate intra-individual responses to different exposure times of aligning prisms in both prism directions. |
Rebekka S. Schubert; Maarten L. Jung; Jens R. Helmert; Boris M. Velichkovsky; Sebastian Pannasch Size matters: How reaching and vergence movements are influenced by the familiar size of stereoscopically presented objects Journal Article In: PLoS ONE, vol. 14, no. 11, pp. e0225311, 2019. @article{Schubert2019, The knowledge about the usual size of objects—familiar size—is known to be taken into account for distance perception. The influence of familiar size on action programming is less clear and has not yet been tested with regard to vergence eye movements. In two experiments, we stereoscopically presented everyday objects, such as a credit card or a package of paper tissues, and varied the distance as specified by binocular disparity and the distance as specified by familiar size. Participants had to fixate the shown object and subsequently reach towards it either with open or with closed eyes. When binocular disparity and familiar size were in conflict, reaching movements revealed a combination of the two depth cues with individually different weights. The influence of familiar size was larger when no visual feedback was available during the reaching movement. Vergence movements closely followed binocular disparity and were largely unaffected by familiar size. In sum, the results suggest that in this experimental setting familiar size is taken into account for programming and executing reaching movements, while vergence movements are primarily based on binocular disparity. |
Teresa Schuhmann; Selma K. Kemmerer; Felix Duecker; Tom A. Graaf; Sanne Oever; Peter Weerd; Alexander T. Sack Left parietal tACS at alpha frequency induces a shift of visuospatial attention Journal Article In: PLoS ONE, vol. 14, no. 11, pp. e0217729, 2019. @article{Schuhmann2019, Background Voluntary shifts of visuospatial attention are associated with a lateralization of parieto-occipital alpha power (7–13 Hz), i.e. higher power in the hemisphere ipsilateral and lower power contralateral to the locus of attention. Recent noninvasive neuromodulation studies demonstrated that alpha power can be experimentally increased using transcranial alternating current stimulation (tACS). Objective/Hypothesis We hypothesized that tACS at alpha frequency over the left parietal cortex induces shifts of attention to the left hemifield. However, spatial attention shifts not only occur voluntarily (endogenous/top-down), but also stimulus-driven (exogenous/bottom-up). To study the task-specificity of the potential effects of tACS on attentional processes, we administered three conceptually different spatial attention tasks. Methods 36 healthy volunteers were recruited from an academic environment. In two separate sessions, we applied either high-density tACS at 10 Hz, or sham tACS, for 35–40 minutes to their left parietal cortex. We systematically compared performance on endogenous attention, exogenous attention, and stimulus detection tasks. Results In the endogenous attention task, a greater leftward bias in reaction times was induced during left parietal 10 Hz tACS as compared to sham. There were no stimulation effects in either the exogenous attention or the stimulus detection task. Conclusion The study demonstrates that high-density tACS at 10 Hz can be used to modulate visuospatial attention performance. The tACS effect is task-specific, indicating that not all forms of attention are equally susceptible to the stimulation. |
Heiko H. Schütt; Lars O. M. Rothkegel; Hans A. Trukenbrod; Ralf Engbert; Felix A. Wichmann Disentangling top-down vs. bottom-up and low-level vs. high-level influences on eye movements over time Journal Article In: Journal of Vision, vol. 19, no. 3, pp. 1–23, 2019. @article{Schuett2019, Bottom-up and top-down, as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analysing their influence over time. For this purpose we develop a saliency model which is based on the internal representation of a recent early spatial vision model to measure the low-level bottom-up factor. To measure the influence of high-level bottom-up features, we use a recent DNN-based saliency model. To account for top-down influences, we evaluate the models on two large datasets with different tasks: first, a memorisation task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterised by a gradual broadening of the fixation density, and a steady state which is reached after roughly 10 fixations. Saccade target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search dataset, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level bottom-up factors are mainly limited to the generation of the first saccade. 
All saccades are better explained when high-level features are considered, and later this high-level bottom-up control can be overruled by top-down influences. |
Max K. Smith; Satoru Suzuki; Marcia F. Grabowecky Exogenous Covert Orientation of Attention to the Center of Mass Journal Article In: Journal of Vision, vol. 19, no. 10, pp. 264c, 2019. @article{Smith2019, Anne Treisman's scientific career included broad-ranging contributions that advanced our understanding of the attentional mechanisms that people rely on to make sense of the world. In this paper, we describe results from a visual-search paradigm first developed by Grabowecky and Treisman (Grabowecky, 1992). Their design exploited known feature-search asymmetries (Treisman & Gormican, 1988) to investigate the role of a center of mass (CoM) mechanism in determining the initial locus of visual-spatial attention in visual search. The original experiment supported the hypothesis that CoM influences initial orienting of visual-spatial attention, as targets near the CoM of a multi-element array were detected more quickly than targets distant from the CoM. These findings were replicated in a follow-up experiment using a different feature-search asymmetry, with eye-tracking added to verify central fixation. We also investigated whether CoM had any influence on pop-out search, and found no evidence that it does. Surprisingly, the effect of position of the search array on the CoM suggested that CoM may be computed independently for elements contained within each visual hemifield. Whereas our work on CoM with Treisman was initiated within an earlier theoretical context, the present results are also compatible with contemporary theoretical advances; both the early results and the new results can be integrated within current ways of thinking about attention and pre-attentive mechanisms. |
Stephanie M. Smith; Ian Krajbich Gaze amplifies value in decision making Journal Article In: Psychological Science, vol. 30, no. 1, pp. 116–128, 2019. @article{Smith2019b, When making decisions, people tend to choose the option they have looked at more. An unanswered question is how attention influences the choice process: whether it amplifies the subjective value of the looked-at option or instead adds a constant, value-independent bias. To address this, we examined choice data from six eye-tracking studies (Ns = 39, 44, 44, 36, 20, and 45, respectively) to characterize the interaction between value and gaze in the choice process. We found that the summed values of the options influenced response times in every data set and the gaze-choice correlation in most data sets, in line with an amplifying role of attention in the choice process. Our results suggest that this amplifying effect is more pronounced in tasks using large sets of familiar stimuli, compared with tasks using small sets of learned stimuli. |
Stephanie M. Smith; Ian Krajbich Gaze-informed modeling of preference learning and prediction Journal Article In: Journal of Neuroscience, Psychology, and Economics, vol. 12, no. 3-4, pp. 143–158, 2019. @article{Smith2019a, Learning other people's preferences is a basic skill required to function effectively in society. However, the process underlying this behavior has been left largely unstudied. Here we aimed to characterize this process, using eye-tracking and computational modeling to study people while they estimated another person's film preferences. In the first half of the study, subjects received immediate feedback after their guess, whereas in the second half, subjects were presented with four random first-half outcomes to aid them with their current estimation. From a variety of learning models, we identified two that best fit subjects' behavior and eye movements: k-nearest neighbor and beauty contest. These results indicate that although some people attempt to form a high-dimensional representation of other people's preferences, others simply go with the average opinion. These strategies can be distinguished by looking at a person's eye movements. The results also demonstrate subjects' ability to appropriately weight feedback in their estimates. |
Alessandra S. Souza; Stefan Czoschke; Elke B. Lange Gaze-based and attention-based rehearsal in spatial working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 46, no. 5, pp. 1–25, 2019. @article{Souza2019, How do we maintain information about spatial configurations in mind? Many working memory (WM) models assume that rehearsal processes are used to counteract forgetting in WM. Here, we investigated the contributions of gaze-based and attention-based rehearsal for protecting spatial representations from time-based forgetting. Participants memorized 6 locations selected from a grid of 30 scattered dots. Memory was tested after 1.5 or 4.5 s, and this interval was either blank or the grid remained onscreen (which is assumed to provide rehearsal support). In 2 experiments, we monitored eye movements during the retention phase, or asked participants to fixate the screen center. In 3 subsequent experiments, we tested spatial WM under dual-task conditions inhibiting shifts of visuospatial attention or central attention to the memoranda. Memory was better and more resistant to time-based forgetting in the grid than in the blank condition. Recording of fixations showed more frequent and efficient gaze-based rehearsal in the presence of the grid. Fixations toward distractor locations occurred at a similar frequency in the blank and grid conditions and did not predict incorrect recalls. Inhibition of eye movements or shifts of visuospatial attention impaired memory overall but changed neither the grid benefit nor the rate of time-based forgetting. In contrast, distracting central attention increased time-based forgetting regardless of grid presence. These results indicate that (a) the grid benefit is only partially explained by rehearsal; (b) gaze errors (i.e., distractor fixations) do not lead to more forgetting; and (c) the maintenance of spatial representations over time depends on central processing. |
Michael J. Spilka; Daniel J. Pittman; Signe L. Bray; Vina M. Goghari Manipulating visual scanpaths during facial emotion perception modulates functional brain activation in schizophrenia patients and controls Journal Article In: Journal of Abnormal Psychology, vol. 128, no. 8, pp. 855–866, 2019. @article{Spilka2019, Individuals with schizophrenia exhibit deficits in facial emotion processing, which have been associated with abnormalities in visual gaze behavior and functional brain activation. However, the relationship between gaze behavior and brain activation in schizophrenia remains unexamined. Studies in healthy individuals and other clinical samples indicate a relationship between gaze behavior and functional activation in brain regions implicated in facial emotion processing deficits in schizophrenia (e.g., fusiform gyrus), prompting the question of whether a similar relationship exists in schizophrenia. This study examined whether manipulating visual scanpaths during facial emotion perception would modulate functional brain activation in a sample of 23 schizophrenia patients and 26 community controls. Participants underwent functional magnetic resonance imaging (fMRI) while viewing pictures of emotional faces. During the typical viewing condition, a fixation cue directed participants' gaze primarily to the eyes and mouth, whereas during the atypical viewing condition gaze was directed to peripheral features. Both viewing conditions elicited a robust response throughout face-processing regions. Typical viewing led to greater activation in visual association cortex including the right inferior occipital gyrus/occipital face area, whereas atypical viewing elicited greater activation in primary visual cortex and regions involved in attentional control. There were no between-groups activation differences in response to faces or interaction between group and gaze manipulation.
The results indicate that gaze behavior modulates functional activation in early face-processing regions in individuals with and without schizophrenia, suggesting that abnormal gaze behavior in schizophrenia may contribute to activation abnormalities during facial emotion perception. |
Lisa Stacchi; Meike Ramon; Junpeng Lao; Roberto Caldara Neural representations of faces are tuned to eye movements Journal Article In: Journal of Neuroscience, vol. 39, no. 21, pp. 4113–4123, 2019. @article{Stacchi2019, Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this aim, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG that was recorded while they fixated different facial information. We found that foveation of facial features fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs mouth lookers), and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal face representations. |
Tereza Stárková; Jiří Lukavský; Ondřej Javora; Cyril Brom Anthropomorphisms in multimedia learning: Attract attention but do not enhance learning? Journal Article In: Journal of Computer Assisted Learning, vol. 35, no. 4, pp. 555–568, 2019. @article{Starkova2019, Anthropomorphizing graphical elements in multimedia learning materials improves learning outcomes. The reasons for enhanced learning are unclear. We extended a seminal anthropomorphism study in order to examine whether the effect of anthropomorphisms on learning outcomes, both immediate and delayed, is caused by the anthropomorphized elements' effects on attention distribution or by elevated positive affective–motivational states. The study had a partial 3 × 2 design (the materials' graphics: schematic vs. black-and-white anthropomorphisms vs. colourful anthropomorphisms × eye tracker: present vs. absent). The participants were university students (N = 181). Unexpectedly, we found no significant effect of anthropomorphisms on learning outcomes. Anthropomorphisms significantly affected attention distribution during initial fixations but not overall. A modest effect on enjoyment was found, but no such effect was detected for flow or generalized positive affect. We also found that the eye tracker's mere presence had slight adverse effects on learners, but these effects did not compromise learning. |
Mikhail Startsev; Ioannis Agtzidis; Michael Dorr Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes Journal Article In: Journal of Vision, vol. 19, no. 14, pp. 1–25, 2019. @article{Startsev2019, Eye movements are fundamental to our visual experience of the real world, and smooth pursuit eye movements play an important role because of the dynamic nature of our environment. Static images, however, do not induce this class of eye movements, and commonly used synthetic moving stimuli lack ecological validity because of their low scene complexity compared to the real world. Traditionally, ground truth data for pursuit analyses with naturalistic stimuli are obtained via laborious hand-labelling. Therefore, previous studies typically remained small in scale. We here present the first large-scale quantitative characterization of human smooth pursuit. In order to achieve this, we first provide a methodological framework for such analyses by collecting a large set of manual annotations for eye movements in dynamic scenes and by examining the bias and variance of human annotators. To enable further research on even larger future data sets, we also describe, improve, and thoroughly analyze a novel algorithm to automatically classify eye movements. Our approach incorporates unsupervised learning techniques and thus demonstrates improved performance with the addition of unlabelled data. The code and data related to our manual and automated eye movement annotation are publicly available via https://web.gin.g-node.org/ioannis.agtzidis/gazecom_annotations/. |
Seolmin Kim; Jeongjun Park; Joonyeol Lee Effect of prior direction expectation on the accuracy and precision of smooth pursuit eye movements Journal Article In: Frontiers in Systems Neuroscience, vol. 13, pp. 71, 2019. @article{Kim2019d, The integration of sensory with top–down cognitive signals for generating appropriate sensory–motor behaviors is an important issue in understanding the brain's information processes. Recent studies have demonstrated that the interplay between sensory and high-level signals in oculomotor behavior could be explained by Bayesian inference. Specifically, prior knowledge for motion speed introduces a bias in the speed of smooth pursuit eye movements. The other important prediction of Bayesian inference is variability reduction by prior expectation; however, there is insufficient evidence in oculomotor behaviors to support this prediction. In the present study, we trained monkeys to switch the prior expectation about motion direction and independently controlled the strength of the motion stimulus. Under identical sensory stimulus conditions, we tested whether prior knowledge about the motion direction reduced the variability of open-loop smooth pursuit eye movements. We observed a significant reduction when the prior expectation was strong; this was consistent with the prediction of Bayesian inference. Taking advantage of the open-loop smooth pursuit, we investigated the temporal dynamics of the effect of the prior on pursuit direction bias and variability. This analysis demonstrated that the strength of the sensory evidence depended not only on the strength of the sensory stimulus but also on the time required for the pursuit system to form a neural sensory representation. Finally, we demonstrated that the changes in variability and directional bias induced by prior knowledge were quantitatively explained by the Bayesian observer model. |
Youngsook Kim; Taiseok Chang; Inchon Park Visual scanning behavior and attention strategies for shooting among expert versus collegiate Korean archers Journal Article In: Perceptual and Motor Skills, vol. 126, no. 3, pp. 530–545, 2019. @article{Kim2019f, This study analyzed differences in visual scanning behavior and resistance to distractions between Olympic and collegiate archers. The experiment required the participants to watch a test film comprising six stages corresponding to the phases of an archery performance. The recording emulated the archer's point of view. During initial phases of shooting, Olympic archers demonstrated more frequent and longer fixations than did their collegiate counterparts, whereas during the later phases of shooting, the groups' visual scanning patterns did not differ significantly. In a second experiment within this study, auditory and visual distractors led Olympic archers to exhibit fewer fixations of longer duration and less eye movement, regardless of the type of distraction. Thus, in each experiment, Korean national-team archers modified their attentional strategies more efficiently than collegiate archers, expanding and narrowing their focused attention based on task demands. These findings provide fundamental information on the nature of expert shooters' visual scanning patterns and have implications for developing training protocols for aspiring athletes. |
Maedbh King; Carlos R. Hernandez-Castillo; Russell A. Poldrack; Richard B. Ivry; Jörn Diedrichsen Functional boundaries in the human cerebellum revealed by a multi-domain task battery Journal Article In: Nature Neuroscience, vol. 22, pp. 1371–1378, 2019. @article{King2019, There is compelling evidence that the human cerebellum is engaged in a wide array of motor and cognitive tasks. A fundamental question centers on whether the cerebellum is organized into distinct functional subregions. To address this question, we employed a rich task battery designed to tap into a broad range of cognitive processes. During four functional MRI sessions, participants performed a battery of 26 diverse tasks comprising 47 unique conditions. Using the data from this multi-domain task battery, we derived a comprehensive functional parcellation of the cerebellar cortex and evaluated it by predicting functional boundaries in a novel set of tasks. The new parcellation successfully identified distinct functional subregions, providing significant improvements over existing parcellations derived from task-free data. Lobular boundaries, commonly used to summarize functional data, did not coincide with functional subdivisions. The new parcellation provides a functional atlas to guide future neuroimaging studies. |
Raymond M. Klein; Mathew Reichertz; John Christie; Jack Wong; Bryan Maycock On the roles of central and peripheral vision in the extraction of material and form from a scene Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 5, pp. 1209–1219, 2019. @article{Klein2019, Conventional wisdom tells us that the appreciation of local (detail) and global (form and spatial relations) information from a scene is preferentially processed by central and peripheral vision, respectively. Using an eye monitor with high spatial and temporal precision, we sought to provide direct evidence for this idea by controlling whether carefully designed hierarchical scenes were viewed only with central vision (the periphery was masked), only with peripheral vision (the central region was masked), or with full vision. The scenes consisted of a neutral form (a D shape) composed of target circles or squares, or a target circle or square composed of neutral material (Ds). The task was for the participant to determine as quickly as possible whether the scene contained circle(s) or square(s). Increasing the size of the masked region had deleterious effects on performance. This deleterious effect was greater for the extraction of form information when the periphery was masked, and greater for the extraction of material information when central vision was masked, thus providing direct evidence for conventional ideas about the processing predilections of central and peripheral vision. |
Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle Does direction matter? Linguistic asymmetries reflected in visual attention Journal Article In: Cognition, vol. 185, pp. 91–120, 2019. @article{Kluth2019, Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language. More specifically, it was assumed that people move their attention from the reference object to the located object. However, recent theoretical and empirical findings challenge the directionality of this attentional shift. In this article, we present the results of an empirical study based on predictions generated by computational cognitive models implementing different directionalities of attention. Moreover, we thoroughly analyze the computational models. While our results do not favor any of the implemented directionalities of attention, we found that two unknown sources of geometric information affect spatial language understanding. We provide modifications to the computational models that substantially improve their performance on empirical data. |
Shaojun Kong; Zhenfang Huang; Noel Scott; Zi'ang Zhang; Zhixiang Shen Web advertisement effectiveness evaluation: Attention and memory Journal Article In: Journal of Vacation Marketing, vol. 25, no. 1, pp. 130–146, 2019. @article{Kong2019, Tourist marketers rely heavily on using visual stimuli in their advertising to attract attention and improve awareness and interest of their experience. This study used eye-tracking and self-reported recall methods to investigate online tourism advertisement effectiveness based on the hierarchy of effects model. A within-subjects experimental design (n = 30) was used to examine mock advertisements (stimuli) containing various combinations of image, text and product price. Results show that the advertisement containing both image and price was least effective, while the stimuli with text and price were most effective in capturing the respondent's attention. Advertising consisting of image, text and price generated the best recall. There were significant differences in results based on gender, task and experience. |
Oscar Kovacs; Irina M. Harris The role of location in visual feature binding Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 1551–1563, 2019. @article{Kovacs2019, Location appears to play a vital role in binding discretely processed visual features into coherent objects. Consequently, it has been proposed that objects are represented for cognition by their spatiotemporal location, with other visual features attached to this location index. On this theory, the visual features of an object are only connected via mutual location; direct binding cannot occur. Despite supporting evidence, some argue that direct binding does take over according to task demands and when representing familiar objects. The current study was developed to evaluate these claims, using a brief memory task to test for contingencies between features under different circumstances. Participants were shown a sequence of three items in different colours and locations, and then asked for the colour and/or location of one of them. The stimuli could either be abstract shapes, or familiar objects. Results indicated that location is necessary for binding regardless of the type of stimulus and task demands, supporting the proposed structure. A follow-up experiment assessed an alternate explanation for the apparent importance of location in binding; eye movements may automatically capture location information, making it impossible to ignore and suggesting a contingency that is not representative of cognitive processes. Participants were required to maintain fixation on half of the trials, with an eye tracker for confirmation. Results indicated that the importance of location in binding cannot be attributed to eye movements. Overall, the findings of this study support the claim that location is essential for visual feature binding, due to the structure of object representations. |
Natalie G. Koval Testing the deficient processing account of the spacing effect in second language vocabulary learning: Evidence from eye tracking Journal Article In: Applied Psycholinguistics, vol. 40, no. 5, pp. 1103–1139, 2019. @article{Koval2019, The spacing effect refers to the learning benefit that comes from separating repeated study of target items by time or by other items. A prominent proposed explanation for this effect states that repeated exposures that occur closely together may not engage full attentional processing due to residual activation of the previous exposure and also, in an intentional learning context, due to a sense of familiarity that may result in strategic allocation of less study time to an item in massed repetitions. The present study used eye-tracking methodology to investigate the effects of temporal distribution of repeated exposures to novel second language words on attentional processing and learning of these words under intentional learning instructions. Adult native speakers of English read Finnish words embedded in English sentence contexts under massed and spaced conditions. The results showed that (a) massed repeated exposures received less attentional processing than spaced repeated exposures; (b) target words were better remembered in the spaced condition; and (c) attention was a significant mediator of the obtained spacing effect, in line with the predictions of the deficient processing account of the spacing effect. Implications for vocabulary learning are discussed. |
E. Kaplan; Alexandra Jesse Fixating the eyes of a speaker provides sufficient visual information to modulate early auditory processing Journal Article In: Biological Psychology, vol. 146, pp. 1–9, 2019. @article{Kaplan2019, In face-to-face conversations, when listeners process and combine information obtained from hearing and seeing a speaker, they mostly look at the eyes rather than at the more informative mouth region. Measuring event-related potentials, we tested whether fixating the speaker's eyes is sufficient for gathering enough visual speech information to modulate early auditory processing, or whether covert attention to the speaker's mouth is needed. Results showed that when listeners fixated the eye region of the speaker, the amplitudes of the auditory evoked N1 and P2 were reduced when listeners heard and saw the speaker than when they only heard her. These cross-modal interactions also occurred when, in addition, attention was restricted to the speaker's eye region. Fixating the speaker's eyes thus provides listeners with sufficient visual information to facilitate early auditory processing. The spread of covert attention to the mouth area is not needed to observe audiovisual interactions. |
Anton S. Kaplanyan; Anton Sochenov; Thomas Leimkühler; Mikhail Okunev; Todd Goodall; Gizem Rufo DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos Journal Article In: ACM Transactions on Graphics, vol. 38, no. 6, pp. 1–13, 2019. @article{Kaplanyan2019, In order to provide an immersive visual experience, modern displays require head mounting, high image resolution, low latency, as well as high refresh rate. This poses a challenging computational problem. On the other hand, the human visual system can consume only a tiny fraction of this video stream due to the drastic acuity loss in the peripheral vision. Foveated rendering and compression can save computations by reducing the image quality in the peripheral vision. However, this can cause noticeable artifacts in the periphery, or, if done conservatively, would provide only modest savings. In this work, we explore a novel foveated reconstruction method that employs the recent advances in generative adversarial neural networks. We reconstruct a plausible peripheral video from a small fraction of pixels provided every frame. The reconstruction is done by finding the closest matching video to this sparse input stream of pixels on the learned manifold of natural videos. Our method is more efficient than the state-of-the-art foveated rendering, while providing the visual experience with no noticeable quality degradation. We conducted a user study to validate our reconstruction method and compare it against existing foveated rendering and video compression techniques. Our method is fast enough to drive gaze-contingent head-mounted displays in real time on modern hardware. We plan to publish the trained network to establish a new quality bar for foveated rendering and compression as well as encourage follow-up research. |
Kohitij Kar; Jonas Kubilius; Kailyn Schmidt; Elias B. Issa; James J. DiCarlo Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior Journal Article In: Nature Neuroscience, vol. 22, pp. 974–983, 2019. @article{Kar2019, Non-recurrent deep convolutional neural networks (CNNs) are currently the best at modeling core object recognition, a behavior that is supported by the densely recurrent primate ventral stream, culminating in the inferior temporal (IT) cortex. If recurrence is critical to this behavior, then primates should outperform feedforward-only deep CNNs for images that require additional recurrent processing beyond the feedforward IT response. Here we first used behavioral methods to discover hundreds of these 'challenge' images. Second, using large-scale electrophysiology, we observed that behaviorally sufficient object identity solutions emerged ~30 ms later in the IT cortex for challenge images compared with primate performance-matched 'control' images. Third, these behaviorally critical late-phase IT response patterns were poorly predicted by feedforward deep CNN activations. Notably, very-deep CNNs and shallower recurrent CNNs better predicted these late IT responses, suggesting that there is a functional equivalence between additional nonlinear transformations and recurrence. Beyond arguing that recurrent circuits are critical for rapid object identification, our results provide strong constraints for future recurrent model development. |
Eeva-Leena Kataja; Linnea Karlsson; Christine E. Parsons; Juho Pelto; Henri Pesonen; Tuomo Häikiö; Jukka Hyönä; Saara Nolvi; Riikka Korja; Hasse Karlsson Maternal pre- and postnatal anxiety symptoms and infant attention disengagement from emotional faces Journal Article In: Journal of Affective Disorders, vol. 243, pp. 280–289, 2019. @article{Kataja2019, Background: Biases in socio-emotional attention may be early markers of risk for self-regulation difficulties and mental illness. We examined the associations between maternal pre- and postnatal anxiety symptoms and infant attention patterns to faces, with particular focus on attentional biases to threat, across male and female infants. Methods: A general population, Caucasian sample of eight-month-old infants (N = 362) were tested using eye-tracking and an attention disengagement (overlap) paradigm, with happy, fearful, neutral, and phase-scrambled faces and distractors. Maternal self-reported anxiety symptoms were assessed with the Symptom Checklist-90/anxiety subscale at five time points between gestational week 14 and 6 months postpartum. Results: Probability of disengagement was lowest for fearful faces in the whole sample. Maternal pre- but not postnatal anxiety symptoms associated with higher threat bias in infants, and the relation between maternal anxiety symptoms in early pregnancy and higher threat bias in infants remained significant after controlling for maternal postnatal symptoms. Maternal postnatal anxiety symptoms, in turn, associated with higher overall probability of disengagement from faces to distractors, but the effects varied by child sex. Limitations: The small number of mothers suffering from very severe symptoms. No control for the comorbidity of depressive symptoms. Conclusions: Maternal prenatal anxiety symptoms associate with infant's heightened attention bias for threat.
Maternal postnatal anxiety symptoms, in turn, associate with infant's overall disengagement probability differently for boys and girls. Boys may show enhanced vigilance for distractors, except when viewing fearful faces, and girls enhanced vigilance for all socio-emotional stimuli. Long-term implications of these findings remain to be explored. |
Jolie R. Keemink; Maryam J. Keshavarzi-Pour; David J. Kelly Infants' responses to interactive gaze-contingent faces in a novel and naturalistic eye-tracking paradigm Journal Article In: Developmental Psychology, vol. 55, no. 7, pp. 1362–1371, 2019. @article{Keemink2019, Face scanning is an important skill that takes place in a highly interactive context embedded within social interaction. However, previous research has studied face scanning using noninteractive stimuli. We aimed to study face scanning and social interaction in infancy in a more ecologically valid way by providing infants with a naturalistic and socially engaging experience. We developed a novel gaze-contingent eye-tracking paradigm in which infants could interact with face stimuli. Responses (socially engaging/socially disengaging) from faces were contingent on infants' eye movements. We collected eye-tracking and behavioral data of 162 (79 male, 83 female) 6-, 9- and 12-month-old infants. All infants showed a clear preference for looking at the eyes relative to the mouth. Contingency was learned implicitly, and infants were more likely to show behavioral responses (e.g., smiling, pointing) when receiving socially engaging responses. Infants' responses were also more often congruent with the actors' responses. Additionally, our large sample allowed us to look at the ranges of behavior on our task, and we identified a small number of infants who displayed deviant behaviors. We discuss these findings in relation to data collected from a small sample (N = 11) of infants considered to be at-risk for autism spectrum disorders. Our results demonstrate the versatility of the gaze-contingency eye-tracking paradigm, allowing for a more nuanced and complex investigation of face scanning as it happens in real-life interaction. As we provide additional measures of contingency learning and reciprocity, our task holds the potential to investigate atypical neurodevelopment within the first year of life. |
Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saida Bouakaz Saliency-based framework for facial expression recognition Journal Article In: Frontiers of Computer Science, vol. 13, no. 1, pp. 183–198, 2019. @article{Khan2019, This article proposes a novel framework for the recognition of six universal facial expressions. The framework is based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern. First, saliency maps are obtained using the state-of-the-art saliency detection algorithm “frequency-tuned salient region detection”. The idea is to use saliency maps to determine appropriate weights or values for the extracted features (i.e., brightness and entropy). We have performed a visual experiment to validate the performance of the saliency detection algorithm against the human visual system. Eye movements of 15 subjects were recorded using an eye-tracker in free-viewing conditions while they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment demonstrated that the obtained saliency maps are consistent with the data on human fixations. Finally, the performance of the proposed framework is demonstrated via satisfactory classification results achieved with the Cohn-Kanade database, FG-NET FEED database, and Dartmouth database of children's faces. |
Haena Kim; Brian A. Anderson Dissociable components of experience-driven attention Journal Article In: Current Biology, vol. 29, no. 5, pp. 841–845, 2019. @article{Kim2019a, What we pay attention to is influenced by current task goals (goal-directed attention) [1, 2], the physical salience of stimuli (stimulus-driven attention) [3, 4, 5], and selection history [6, 7, 8, 9, 10, 11, 12]. This third construct, which encompasses reward learning, aversive conditioning, and repetitive orienting behavior [12, 13, 14, 15, 16, 17, 18], is often characterized as a unitary mechanism of control that can be contrasted with the other two [12, 13, 14]. Here, we present evidence that two different learning processes underlie the influence of selection history on attention, with dissociable consequences for orienting behavior. Human observers performed an antisaccade task in which they were paid for shifting their gaze in the direction opposite one of two color-defined targets. Strikingly, such training resulted in a bias to do the opposite of what observers were motivated and paid to do, with associative learning facilitating orienting toward reward cues. On the other hand, repetitive orienting away from a target produced a bias to repeat this behavior even when it conflicted with current goals, reflecting instrumental conditioning of the orienting response. Our findings challenge the idea that selection history reflects a common mechanism of learning-dependent priority and instead suggest multiple distinct routes by which learning history shapes orienting behavior. We also provide direct evidence for the idea that value-based attention is approach oriented, which limits the effectiveness of attentional bias modification techniques that utilize incentive structures. |
Minah Kim; Tak Hyung Lee; Jung-Seok Choi; Yoo Bin Kwak; Wu Jeong Hwang; Taekwan Kim; Ji Yoon Lee; Bo Mi Kim; Jun Soo Kwon; Yoo Bin Kwak Dysfunctional attentional bias and inhibitory control during anti-saccade task in patients with internet gaming disorder: An eye tracking study Journal Article In: Progress in Neuropsychopharmacology and Biological Psychiatry, vol. 95, pp. 1–7, 2019. @article{Kim2019b, Background: Although internet gaming disorder (IGD) is considered an addictive disorder, evidence of the neurobiological underpinnings of IGD as an addictive disorder is currently lacking. We investigated whether attentional bias toward game-related stimuli was altered in IGD patients using an eye-tracking method during an anti-saccade task. Methods: Twenty-three IGD patients and 27 healthy control (HC) subjects participated in the anti-saccade task with game-related, neutral, and scrambled images during eye tracking. Participants rated subjective scores of valence, arousal, and craving for each image stimulus after finishing eye tracking. Mixed design analysis of variance was performed to compare the differences between eye movement latency and error rate in the pro-saccade and anti-saccade conditions according to image type across the IGD and HC groups. Results: In the anti-saccade task, the IGD group exhibited higher error rates in the case of game-related images than in neutral or scrambled images. However, ratings on valence, arousal, and craving did not vary among image types. The error rates of the HCs did not vary across image types, but higher arousal/craving and lower valence were reported with respect to the game-related images. Conclusions: Increased error rate during anti-saccade tasks with game-related stimuli in IGD may be due to disabilities in goal-directed behavior or inhibitory control, as observed in other addictive disorders. 
These findings suggest that attentional bias toward game-related stimuli can be a sensitive biological marker of IGD as an addictive disorder. |