EyeLink Cognition Publications
All EyeLink cognition and perception research publications, up until 2023 (with some early 2024s), are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception article, please email us!
2016 |
Wayne E. Mackey; Orrin Devinsky; Werner K. Doyle; Michael R. Meager; Clayton E. Curtis Human dorsolateral prefrontal cortex is not necessary for spatial working memory Journal Article In: Journal of Neuroscience, vol. 36, no. 10, pp. 2847–2856, 2016. @article{Mackey2016a, A dominant theory, based on electrophysiological and lesion evidence from nonhuman primate studies, posits that the dorsolateral prefrontal cortex (dlPFC) stores and maintains working memory (WM) representations. Yet, neuroimaging studies have consistently failed to translate these results to humans; these studies normally find that neural activity persists in the human precentral sulcus (PCS) during WM delays. Here, we attempt to resolve this discrepancy. To test the degree to which dlPFC is necessary for WM, we compared the performance of patients with dlPFC lesions and neurologically healthy controls on a memory-guided saccade task that was used in the monkey studies to measure spatial WM. We found that dlPFC damage only impairs the accuracy of memory-guided saccades if the damage impacts the PCS; lesions to dorsolateral dlPFC that spare the PCS have no effect on WM. These results identify the necessary subregion of the frontal cortex for WM and specify how this influential animal model of human cognition must be revised. |
Qian Li; Zhuowei Joy Huang; Kiel Christianson Visual attention toward tourism photographs with text: An eye-tracking study Journal Article In: Tourism Management, vol. 54, pp. 243–258, 2016. @article{Li2016b, This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language. |
Hsin-I Liao; Shunsuke Kidani; Makoto Yoneya; Makio Kashino; Shigeto Furukawa Correspondences among pupillary dilation response, subjective salience of sounds, and loudness Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 412–425, 2016. @article{Liao2016, A pupillary dilation response is known to be evoked by salient deviant or contrast auditory stimuli, but so far a direct link between it and subjective salience has been lacking. In two experiments, participants listened to various environmental sounds while their pupillary responses were recorded. In separate sessions, participants performed subjective pairwise-comparison tasks on the sounds with respect to their salience, loudness, vigorousness, preference, beauty, annoyance, and hardness. The pairwise-comparison data were converted to ratings on the Thurstone scale. The results showed a close link between subjective judgments of salience and loudness. The pupil dilated in response to the sound presentations, regardless of sound type. Most importantly, this pupillary dilation response to an auditory stimulus positively correlated with the subjective salience, as well as the loudness, of the sounds (Exp. 1). When the loudnesses of the sounds were identical, the pupil responses to each sound were similar and were not correlated with the subjective judgments of salience or loudness (Exp. 2). This finding was further confirmed by analyses based on individual stimulus pairs and participants. In Experiment 3, when salience and loudness were manipulated by systematically changing the sound pressure level and acoustic characteristics, the pupillary dilation response reflected the changes in both manipulated factors. A regression analysis showed a nearly perfect linear correlation between the pupillary dilation response and loudness. The overall results suggest that the pupillary dilation response reflects the subjective salience of sounds, which is defined, or is heavily influenced, by loudness. |
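The Liao et al. abstract above mentions converting pairwise-comparison judgments into ratings on the Thurstone scale. As a rough illustration of that step only, here is a minimal sketch of Thurstone Case V scaling in Python; the win-count matrix and clipping limits are made-up placeholders, not the authors' data or code.

```python
# Minimal sketch of Thurstone Case V scaling from pairwise-comparison counts.
# wins[i, j] = how often sound i was judged more salient than sound j
# (hypothetical counts for three sounds, 10 comparisons per pair).
import numpy as np
from scipy.stats import norm

wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
n_per_pair = wins + wins.T                          # total judgments per pair
with np.errstate(divide="ignore", invalid="ignore"):
    p = np.where(n_per_pair > 0, wins / n_per_pair, 0.5)
p = np.clip(p, 0.01, 0.99)                          # avoid infinite z-scores
z = norm.ppf(p)                                     # probit transform
scale = z.mean(axis=1)                              # Case V scale value per sound
print(scale - scale.min())                          # anchor the lowest item at 0
```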
Hsin-I Liao; Makoto Yoneya; Shunsuke Kidani; Makio Kashino; Shigeto Furukawa Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention Journal Article In: Frontiers in Neuroscience, vol. 10, pp. 43, 2016. @article{Liao2016a, A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. |
Ping-I Lin; Cheng-Da Hsieh; Chi-Hung Juan; Md Monir Hossain; Craig A. Erickson; Yang-Han Lee; Mu-Chun Su Predicting aggressive tendencies by visual attention bias associated with hostile emotions Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0149487, 2016. @article{Lin2016, The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited in the eye-tracking study that measured various components in social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that the eye gaze duration on hostile characters was significantly inversely correlated with the AQ score and less eye contact with an angry face. The eye gaze duration on hostile object was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies. |
Yu-Tzu Lin; Cheng-Chih Wu; Ting-Yun Hou; Yu-Chih Lin; Fang-Ying Yang; Chia-Hu Chang Tracking students' cognitive processes during program debugging-an eye-movement approach Journal Article In: IEEE Transactions on Education, vol. 59, no. 3, pp. 175–186, 2016. @article{Lin2016a, This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C programs. The path of students' gaze while following program codes was subjected to sequential analysis to reveal significant sequences of areas examined. These significant gaze path sequences were then compared to those of students with different debugging performances. The results show that, when debugging, high-performance students traced programs in a more logical manner, whereas low-performance students tended to stick to a line-by-line sequence and were unable to quickly derive the program's higher-level logic. Low-performance students also often jumped directly to certain suspected statements to find bugs, without following the program's logic. They also often needed to trace back to prior statements to recall information, and spent more time on manual computation. Based on the research results, adaptive instructional strategies and materials can be developed for students of different performance levels, to improve associated cognitive activities during debugging, which can foster learning during debugging and programming. |
Damien Litchfield; Tim Donovan Worth a quick look? Initial scene previews can guide eye movements as a function of domain-specific expertise but can also have unforeseen costs Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 7, pp. 982–994, 2016. @article{Litchfield2016, Rapid scene recognition is a global visual process we can all exploit to guide search. This ability is thought to underpin expertise in medical image perception yet there is no direct evidence that isolates the expertise-specific contribution of processing scene previews on subsequent eye movement performance. We used the flash-preview moving window paradigm (Castelhano & Henderson, 2007) to investigate this issue. Expert radiologists and novice observers underwent 2 experiments whereby participants viewed a 250-ms scene preview or a mask before searching for a target. Observers looked for everyday objects from real-world scenes (Experiment 1), and searched for lung nodules from medical images (Experiment 2). Both expertise groups exploited the brief preview of the upcoming scene to more efficiently guide windowed search in Experiment 1, but there was only a weak effect of domain-specific expertise in Experiment 2, with experts showing small improvements in search metrics with scene previews. Expert diagnostic performance was better than novices in all conditions but was not contingent on seeing the scene preview, and scene preview actually impaired novice diagnostic performance. Experiment 3 required novice and experienced observers to search for a variety of abnormalities from different medical images. Rather than maximizing the expertise-specific advantage of processing scene previews, both novices and experienced radiographers were worse at detecting abnormalities with scene previews. We discuss how restricting access to the initial glimpse can be compensated for by subsequent search and discovery processing, but there can still be costs in integrating a fleeting glimpse of a medical scene. |
Liu D. Liu; Ralf M. Haefner; Christopher C. Pack A neural basis for the spatial suppression of visual motion perception Journal Article In: eLife, vol. 5, pp. 1–20, 2016. @article{Liu2016c, In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. |
Rong Liu; MiYoung Kwon Integrating oculomotor and perceptual training to induce a pseudofovea: A model system for studying central vision loss Journal Article In: Journal of Vision, vol. 16, no. 6, pp. 1–21, 2016. @article{Liu2016b, People with a central scotoma often adopt an eccentric retinal location (Preferred Retinal Locus, PRL) for fixation. Here, we proposed a novel training paradigm as a model system to study the nature of the PRL formation and its impacts on visual function. The training paradigm was designed to effectively induce a PRL at any intended retinal location by integrating oculomotor control and pattern recognition. Using a gaze-contingent display, a simulated central scotoma was induced in eight normally sighted subjects. A subject's entire peripheral visual field was blurred, except for a small circular aperture with location randomly assigned to each subject (to the left, right, above, or below the scotoma). Under this viewing condition, subjects performed a demanding oculomotor and visual recognition task. Various visual functions were tested before and after training at both PRL and nonPRL locations. After 6-10 hr of the training, all subjects formed their PRL within the clear window. Both oculomotor control and visual recognition performance significantly improved. Moreover, there was considerable improvement at PRL location in high-level function, such as trigram letter-recognition, reading, and spatial attention, but not in low-level function, such as acuity and contrast sensitivity. Our results demonstrated that within a relatively short time, a PRL could be induced at any intended retinal location in normally-sighted subjects with a simulated scotoma. Our training paradigm might not only hold promise as a model system to study the dynamic nature of the PRL formation, but also serve as a rehabilitation regimen for individuals with central vision loss. |
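Liu and Kwon's paradigm depends on a gaze-contingent display that blurs the periphery, masks a central region at the current gaze position, and leaves a clear aperture at the intended PRL location. The sketch below illustrates only the per-frame compositing logic of such a display; the function name, radii, and aperture offset are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of gaze-contingent masking: blur everything, simulate a central
# scotoma at gaze, and leave a clear aperture offset to one side of the scotoma.
# Display refresh and eye-tracker sampling are assumed to happen elsewhere.
import numpy as np

def compose_frame(image, blurred, gaze_xy, scotoma_r=60,
                  aperture_offset=(120, 0), aperture_r=40):
    """Return the frame for this refresh, given gaze in pixel coordinates."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    frame = blurred.copy()                                   # peripheral blur
    ap_x, ap_y = gx + aperture_offset[0], gy + aperture_offset[1]
    aperture = (xx - ap_x) ** 2 + (yy - ap_y) ** 2 < aperture_r ** 2
    frame[aperture] = image[aperture]                        # clear viewing window
    scotoma = (xx - gx) ** 2 + (yy - gy) ** 2 < scotoma_r ** 2
    frame[scotoma] = 127                                     # simulated central scotoma
    return frame
```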
Taosheng Liu Neural representation of object-specific attentional priority Journal Article In: NeuroImage, vol. 129, pp. 15–24, 2016. @article{Liu2016a, Humans can flexibly select locations, features, or objects in a visual scene for prioritized processing. Although it is relatively straightforward to manipulate location- and feature-based attention, it is difficult to isolate object-based selection. Because objects are always composed of features, studies of object-based selection can often be interpreted as the selection of a combination of locations and features. Here we examined the neural representation of attentional priority in a paradigm that isolated object-based selection. Participants viewed two superimposed gratings that continuously changed their color, orientation, and spatial frequency, such that the gratings traversed the same exact feature values within a trial. Participants were cued at the beginning of each trial to attend to one or the other grating to detect a brief luminance increment, while their brain activity was measured with fMRI. Using multi-voxel pattern analysis, we were able to decode the attended grating in a set of frontoparietal areas, including anterior intraparietal sulcus (IPS), frontal eye field (FEF), and inferior frontal junction (IFJ). Thus, a perceptually varying object can be represented by patterned neural activity in these frontoparietal areas. We suggest that these areas can encode attentional priority for abstract, high-level objects independent of their locations and features. |
Xin Liu; Tong Chen; Guoqiang Xie; Guangyuan Liu Contact-free cognitive load recognition based on eye movement Journal Article In: Journal of Electrical and Computer Engineering, pp. 1–8, 2016. @article{Liu2016e, Cognitive overload not only contributes to physical and mental disease, but also affects work efficiency and safety. Hence, measuring cognitive load has been an important part of cognitive load theory. In this paper, we propose a method to identify the state of cognitive load by using eye movement data in a noncontact manner. We designed a visual experiment to elicit high and low cognitive load states in two lighting environments of different intensity and recorded eye movement data throughout the process. Twelve salient eye-movement features were selected using statistical tests. Algorithms for processing some of the features are proposed to increase the recognition rate. Finally, a support vector machine (SVM) was used to classify high and low cognitive load. The experimental results show that the method achieves 90.25% accuracy in the light-controlled condition. |
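Liu et al. classify high versus low cognitive load from eye-movement features with an SVM. A minimal, hypothetical version of such a pipeline (standardized features plus an RBF-kernel SVM evaluated with cross-validation) might look like the following; the feature matrix is random placeholder data, not the study's features or accuracy.

```python
# Minimal sketch: classify high vs. low cognitive load from a trials x 12 matrix
# of eye-movement features (e.g., fixation duration, saccade amplitude, pupil size).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 12))          # placeholder eye-movement features
y = rng.integers(0, 2, size=80)        # 0 = low load, 1 = high load

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```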
Matthew F. Peterson; Jing Lin; Ian Zaun; Nancy Kanwisher Individual differences in face-looking behavior generalize from the lab to the world Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–18, 2016. @article{Peterson2016, Recent laboratory studies have found large, stable individual differences in the location people first fixate when identifying faces, ranging from the brows to the mouth. Importantly, this variation is strongly associated with differences in fixation-specific identification performance such that an individual's recognition ability is maximized when looking at their preferred location (Mehoudar, Arizpe, Baker, & Yovel, 2014; Peterson & Eckstein, 2013). This finding suggests that face representations are retinotopic and individuals enact gaze strategies that optimize identification, yet the extent to which this behavior reflects real-world gaze behavior is unknown. Here, we used mobile eye-trackers to test whether individual differences in face-gaze generalize from lab to real-world vision. In-lab fixations were measured with a speeded face identification task, while real-world behavior was measured as subjects freely walked around the MIT campus. We found a strong correlation between the patterns of individual differences in face-gaze in the laboratory and real-world settings. Our findings support the hypothesis that individuals optimize real-world face identification by consistently fixating the same location and thus strongly constraining the space of retinotopic input. The methods developed for this study entailed collecting a large set of high-definition, wide field-of-view natural videos from head-mounted cameras and the viewer's fixation position, allowing us to characterize subject's actually-experienced real-world retinotopic images. These images enable us to ask how vision is optimized not just for the statistics of the “natural images” found in web databases, but of the truly natural, retinotopic images that have landed on actual human retinae during real-world experience. |
Judith Peth; Kristina Suchotzki; Matthias Gamer Influence of countermeasures on the validity of the Concealed Information Test Journal Article In: Psychophysiology, vol. 53, no. 9, pp. 1429–1440, 2016. @article{Peth2016, The Concealed Information Test (CIT) is a psychophysiological technique that allows for detecting crime-related knowledge. Usually, autonomic response measures are used for this purpose, but ocular measures have also been proposed recently. Prior studies reported heterogeneous results for the usage of countermeasures (CM) to corrupt the CIT's validity, depending on the CM technique and the dependent measure. The current study systematically compared the application of physical and mental CM on autonomic and ocular measures during the CIT. Sixty participants committed a mock crime and were assigned to one of three guilty conditions: standard guilty (without CM), physical CM, or mental CM. An additional group of 20 innocents was investigated with the same CIT to calculate validity estimates. Electrodermal responses were more vulnerable for CM usage compared to heart rate and respiration, and physical CM were more effective than mental CM. Independent of CM usage, a combined score of autonomic responses enabled a valid differentiation between guilty and innocent examinees. Fixations and blinks also allowed for detecting crime-related knowledge, but these measures were more affected by CM application than autonomic responses. The current study delivered further evidence that CM differentially impact physiological and ocular responses in the CIT. Whereas individual data channels were strongly affected by CM usage, a combination of different response measures yielded a relatively stable differentiation of guilty and innocent examinees when mental CM were used. These findings are especially relevant for field applications and might inspire future studies to detect or prevent CM usage in CIT examinations. |
Christina U. Pfeuffer; Andrea Kiesel; Lynn Huestegge A look into the future: Spontaneous anticipatory saccades reflect processes of anticipatory action control Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 11, pp. 1530–1547, 2016. @article{Pfeuffer2016, According to ideomotor theory, human action control uses anticipations of one's own actions' future consequences, that is, action effect anticipations, as a means of triggering actions that will produce desired outcomes (e.g., Hommel, Müsseler, Aschersleben, & Prinz, 2001). Using the response-effect compatibility paradigm (Kunde, 2001), we demonstrate that the anticipation of one's own manual actions' future consequences not only triggers appropriate (i.e., instructed) actions, but simultaneously induces spontaneous (uninstructed) anticipatory saccades to the location of future action consequences. In contrast to behavioral response-effect compatibility effects that have been linked to processes of action selection and action planning, our results suggest that these anticipatory saccades serve the function of outcome evaluation, that is, the comparison of expected/intended and observed action outcomes. Overall, our results demonstrate the informational value of additionally analyzing uninstructed behavioral components complementary to instructed responses and allow us to specify essential mechanisms of the complex interplay between the manual and oculomotor control system in goal-directed action control. |
Laura Piccardi; Maria De Luca; Raffaella Nori; Liana Palermo; Fabiana Iachini; Cecilia Guariglia Navigational style influences eye movement pattern during exploration and learning of an environmental map Journal Article In: Frontiers in Behavioral Neuroscience, vol. 10, pp. 140, 2016. @article{Piccardi2016, During navigation people may adopt three different spatial styles (i.e., Landmark, Route, and Survey). Landmark style (LS) people are able to recall familiar landmarks but cannot combine them with directional information; Route style (RS) people connect landmarks to each other using egocentric information about direction; Survey style (SS) people use a map-like representation of the environment. SS individuals generally navigate better than LS and RS people. Fifty-one college students (20 LS; 17 RS, and 14 SS) took part in the experiment. The spatial cognitive style (SCS) was assessed by means of the SCS test; participants then had to learn a schematic map of a city, and after 5 min had to recall the path depicted on it. During the learning and delayed recall phases, eye-movements were recorded. Our intent was to investigate whether there is a peculiar way to explore an environmental map related to the individual's spatial style. Results support the presence of differences in the strategy used by the three spatial styles for learning the path and its delayed recall. Specifically, LS individuals produced a greater number of fixations of short duration, while the opposite eye movement pattern characterized SS individuals. Moreover, SS individuals showed a more spread and comprehensive explorative pattern of the map, while LS individuals focused their exploration on the path and related targets. RS individuals showed a pattern of exploration at a level of proficiency between LS and SS individuals. We discuss the clinical and anatomical implications of our data. |
Jordan E. Pierce; Jennifer E. McDowell Effects of preparation time and trial type probability on performance of anti- and pro-saccades Journal Article In: Acta Psychologica, vol. 164, pp. 188–194, 2016. @article{Pierce2016, Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. |
Alessandro Piras; Milena Raffi; Michela Persiani; Monica Perazzolo; Salvatore Squatrito Effect of heading perception on microsaccade dynamics Journal Article In: Behavioural Brain Research, vol. 312, pp. 246–252, 2016. @article{Piras2016a, The present study shows the relationship between microsaccades and heading perception. Recent research demonstrates that microsaccades during fixation are necessary to overcome loss of vision due to continuous stimulation of the retinal receptors, even at the potential cost of a decrease in visual acuity. The goal of oculomotor fixational mechanisms might be not retinal stabilization, but controlled image motion adjusted to be optimal for visual processing. Thus, patterns of microsaccades may be exploited to help to understand the oculomotor system, aspects of visual perception, and the dynamics of visual attention. We presented an expansion optic flow in which the dot speed simulated a heading directed to the left or to the right of the subject, who had to signal the perceived heading by making a saccade toward the perceived direction. We recorded microsaccades during the optic flow stimulation to investigate their characteristics before and after the response. The time spent on heading perception was similar between right and left direction, and response latency was shorter during correct than incorrect responses. Furthermore, we observed that correct heading perception is associated with longer, larger and faster microsaccade characteristics. The time-course of microsaccade rate shows a modulation across the perception process similar to that seen for other local perception tasks, while the main direction is oriented toward the opposite side with respect to the perceived heading. Microsaccades enhance visual perception and, therefore, represent a fundamental motor process, with a specific effect for the build-up of global visual perception of space. |
Ivo D. Popivanov; Philippe G. Schyns; Rufin Vogels Stimulus features coded by single neurons of a macaque body category selective patch Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 17, pp. E2450–E2459, 2016. @article{Popivanov2016, Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category). |
Nicholas L. Port; Jane Trimberger; Steve Hitzeman; Bryan Redick; Stephen Beckerman Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles Journal Article In: Vision Research, vol. 118, pp. 144–157, 2016. @article{Port2016, Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the life span. Visual search ability, as per many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. |
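Port et al. detect microsaccades with a modification of the Engbert-Mergenthaler velocity-threshold algorithm. The sketch below shows the generic velocity-threshold idea only (median-based velocity spread, an elliptic threshold, and a minimum duration); the smoothing, lambda, and duration settings are common defaults, not the authors' modification.

```python
# Minimal sketch of a velocity-threshold microsaccade detector on one eye's
# x/y position trace (in degrees), sampled at fs Hz.
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_samples=6):
    vx = np.gradient(x) * fs                     # horizontal velocity, deg/s
    vy = np.gradient(y) * fs                     # vertical velocity, deg/s
    # median-based velocity spread, robust to the saccades themselves
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1   # elliptic threshold
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))    # (onset, offset) sample indices
            start = None
    return events
```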
Daniel R. McCloy; Eric D. Larson; Bonnie K. Lau; Adrian K. C. Lee Temporal alignment of pupillary response with stimulus events via deconvolution Journal Article In: The Journal of the Acoustical Society of America, vol. 139, no. 3, pp. EL57–EL62, 2016. @article{McCloy2016, Analysis of pupil dilation has been used as an index of attentional effort in the auditory domain. Previous work has modeled the pupillary response to attentional effort as a linear time-invariant system with a characteristic impulse response, and used deconvolution to estimate the attentional effort that gives rise to changes in pupil size. Here it is argued that one parameter of the impulse response (the latency of response maximum, t(max)) has been mis-estimated in the literature; a different estimate is presented, and it is shown how deconvolution with this value of t(max) yields more intuitively plausible and informative results. |
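McCloy et al. model the pupillary response as a linear time-invariant system and recover the underlying effort signal by deconvolution, arguing that the latency parameter t(max) of the impulse response has been mis-estimated. The sketch below shows the general approach: build an impulse response of the commonly cited form t^n * exp(-n*t/t_max) and invert it by regularized least squares. The n and t_max values here are conventional defaults, not the re-estimated latency proposed in the paper.

```python
# Minimal sketch of pupil-response deconvolution under an LTI assumption.
import numpy as np

def pupil_irf(fs=60.0, dur=4.0, n=10.1, t_max=0.93):
    """Pupil impulse response h(t) = t**n * exp(-n*t/t_max), peak-normalized."""
    t = np.arange(0, dur, 1.0 / fs)
    h = t ** n * np.exp(-n * t / t_max)
    return h / h.max()

def deconvolve(pupil, fs=60.0, ridge=1e-3):
    """Recover an 'effort' input from a pupil trace by Tikhonov-regularized lstsq."""
    h = pupil_irf(fs)
    m, k = len(pupil), len(h)
    C = np.zeros((m, m))                  # convolution matrix: pupil ~ C @ effort
    for j in range(m):
        C[j:j + k, j] = h[: min(k, m - j)]
    return np.linalg.solve(C.T @ C + ridge * np.eye(m), C.T @ np.asarray(pupil))
```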
Vincent B. McGinty; Antonio Rangel; William T. Newsome Orbitofrontal cortex value signals depend on fixation location during free viewing Journal Article In: Neuron, vol. 90, no. 6, pp. 1299–1311, 2016. @article{McGinty2016, In the natural world, monkeys and humans judge the economic value of numerous competing stimuli by moving their gaze from one object to another, in a rapid series of eye movements. This suggests that the primate brain processes value serially, and that value-coding neurons may be modulated by changes in gaze. To test this hypothesis, we presented monkeys with value-associated visual cues and took the unusual step of allowing unrestricted free viewing while we recorded neurons in the orbitofrontal cortex (OFC). By leveraging natural gaze patterns, we found that a large proportion of OFC cells encode gaze location and, that in some cells, value coding is amplified when subjects fixate near the cue. These findings provide the first cellular-level mechanism for previously documented behavioral effects of gaze on valuation and suggest a major role for gaze in neural mechanisms of valuation and decision-making under ecologically realistic conditions. |
Mel McKendrick; Stephen H. Butler; Madeleine A. Grealy The effect of self-referential expectation on emotional face processing Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0155576, 2016. @article{McKendrick2016, The role of self-relevance has been somewhat neglected in static face processing paradigms but may be important in understanding how emotional faces impact on attention, cognition and affect. The aim of the current study was to investigate the effect of self-relevant primes on processing emotional composite faces. Sentence primes created an expectation of the emotion of the face before sad, happy, neutral or composite face photos were viewed. Eye movements were recorded and subsequent responses measured the cognitive and affective impact of the emotion expressed. Results indicated that primes did not guide attention, but impacted on judgments of valence intensity and self-esteem ratings. Negative self-relevant primes led to the most negative self-esteem ratings, although the effect of the prime was qualified by salient facial features. Self-relevant expectations about the emotion of a face and subsequent attention to a face that is congruent with these expectations strengthened the affective impact of viewing the face. |
Catherine M. McMahon; Isabelle Boisvert; Peter Lissa; Louise Granger; Ronny Ibrahim; Chi Yhun Lo; Kelly Miles; Petra L. Graham Monitoring alpha oscillations and pupil dilation across a performance-intensity function Journal Article In: Frontiers in Psychology, vol. 7, pp. 745, 2016. @article{McMahon2016, Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging for those with hearing loss. However, while alpha power (8-12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they assess the same cognitive processes involved, or other sensory and/or neurophysiological mechanisms that are associated with the task. Therefore, the aim of this study is to compare alpha power and pupil dilation during a sentence recognition task in 15 randomized levels of noise (-7dB to +7dB SNR) using highly intelligible (16 channel vocoded) and moderately intelligible (6 channel vocoded) speech. Twenty young normal hearing adults participated in the study; however, due to extraneous noise, data from 16 (10 females, 6 males; aged 19-28 years) was used in the EEG analysis and 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between channel vocoding for both the alpha power and the pupil size changes. While both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. The results of this study suggest that these measures may encode different processes involved in speech perception, which show similar trends for highly intelligible speech, but diverge for more spectrally degraded speech. The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting. |
Tobias Meilinger; Katsumi Watanabe Multiple strategies for spatial integration of 2D layouts within working memory Journal Article In: PLoS ONE, vol. 11, no. 4, pp. e0154088, 2016. @article{Meilinger2016, Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and reacted on the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized the integration within the reference frame of the initial presentation, which was updated later, and from where participants acted respectively. Participants also heavily relied on layout intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions. |
Martin Meißner; Andres Musalem; Joel C. Huber Eye tracking reveals processes that enable conjoint choices to become increasingly efficient with practice Journal Article In: Journal of Marketing Research, vol. 53, no. 1, pp. 1–17, 2016. @article{Meissner2016, Choice-based conjoint is a popular way to characterize consumers' choices. Three eye-tracking studies reveal decision processes in conjoint choices that take less time and are more accurate with practice. We observe two simplification processes that are associated with greater speed and reliability. Alternative focus gradually shifts attention towards options that represent promising choices, while attribute focus directs attention to important attributes that are most likely to alter or confirm a decision. Alternative and attribute focus increase in intensity with practice. In terms of biases, we detect a small but consistent focus on positive aspects of the item chosen and negative aspects of the items not chosen. We also show that incidental exposures arising from the alternative first examined or from a central horizontal location increase attention but have a much more modest and often insignificant impact on conjoint choices. Overall, conjoint choice is revealed to be a process that is largely formed by goal-driven values that respondents bring to the task, one that is relatively free of distorting effects from task layout or random exposures. |
Céline Paeye; Alexander C. Schütz; Karl R. Gegenfurtner Visual reinforcement shapes eye movements in visual search Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–15, 2016. @article{Paeye2016, We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task. |
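Paeye et al. control reinforcement with a gaze-contingent schedule in which the target is presented more often after saccades into a specific quadrant or direction. A toy version of such a schedule is sketched below; the quadrant test and the probabilities are illustrative assumptions, not the experimental parameters.

```python
# Minimal sketch of a gaze-contingent reinforcement schedule: a saccade landing in
# a designated quadrant makes the search target appear with higher probability.
import random

def quadrant(dx, dy):
    """Quadrant of a saccade vector relative to screen centre."""
    return ("right" if dx >= 0 else "left") + "-" + ("up" if dy >= 0 else "down")

def target_is_shown(saccade_dx, saccade_dy, reinforced="right-up",
                    p_reinforced=0.8, p_other=0.2):
    p = p_reinforced if quadrant(saccade_dx, saccade_dy) == reinforced else p_other
    return random.random() < p
```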
Jennifer Malsert; Didier Grandjean Mixed saccadic paradigm releases top-down emotional interference in antisaccade and prosaccade trials Journal Article In: Experimental Brain Research, vol. 234, no. 10, pp. 2915–2922, 2016. @article{Malsert2016, Saccadic movements are well known to involve specific top-down or bottom-up processes depending on the task and paradigm characteristics. For example, after the Gap bottom-up effect, it has been shown that an Instruction effect, i.e., asking participants to identify a peripheral target instead of simply looking toward it, reduces latencies in prosaccade (PS) but not in antisaccade (AS) tasks. Nevertheless, in a mixed task comprising AS, PS, and no-saccade trials, such differences vanished. Thus, it has been suggested that a top-down effect could be dependent on tonic or phasic neuronal activation and that only the tonic frontal activation could enable interferences with other cortical regions involved. In this study, we tested the interference of emotional information with saccadic performance depending on the cognitive cost of the task. We used emotional facial expression cues in block and mixed paradigms. Using a generalized linear mixed model for the analysis, we found a main effect of the paradigm, with task and emotional effects only in the mixed saccadic task, which could suggest a top-down effect of emotional information processing over the regions involved in saccadic performance. Moreover, we demonstrated that prosaccade latencies are significantly reduced by emotion, while antisaccade latencies are significantly increased, suggesting a disinhibition of reflexive saccades. |
Ran Manor; Liran Mishali; Amir B. Geva Multimodal neural network for rapid serial visual presentation brain computer interface Journal Article In: Frontiers in Computational Neuroscience, vol. 10, pp. 130, 2016. @article{Manor2016, Brain computer interfaces allow users to perform various tasks using only the electrical activity of the brain. BCI applications often present the user with a set of stimuli and record the corresponding electrical response. The BCI algorithm will then have to decode the acquired brain response and perform the desired task. In rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. In this work, we suggest a multimodal neural network for RSVP tasks. The network operates on the brain response and on the initiating stimulus simultaneously, providing more information for the BCI application. We present two variants of the multimodal network: a supervised model, for the case when the targets are known in advance, and a semi-supervised model for when the targets are unknown. We test the neural networks with an RSVP experiment on satellite imagery carried out with two subjects. The multimodal networks achieve a significant performance improvement in classification metrics. We visualize what the networks have learned and discuss the advantages of using neural network models for BCI applications. |
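Manor et al. describe a multimodal network that takes both the EEG response and the initiating image as input. The sketch below shows one generic way to express that two-branch idea in PyTorch; the architecture, layer sizes, and input shapes are arbitrary placeholders, not the published model.

```python
# Minimal sketch of a two-branch "multimodal" classifier: one branch for the EEG
# epoch, one for the initiating image, fused before a target/non-target decision.
import torch
import torch.nn as nn

class MultimodalRSVPNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=128):
        super().__init__()
        self.eeg = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(), nn.Linear(16 * 8, 64), nn.ReLU())
        self.img = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 64), nn.ReLU())
        self.head = nn.Linear(128, 2)            # target vs. standard

    def forward(self, eeg, image):
        return self.head(torch.cat([self.eeg(eeg), self.img(image)], dim=1))

# eeg batch: (B, channels, samples); image batch: (B, 1, H, W)
logits = MultimodalRSVPNet()(torch.randn(4, 64, 128), torch.randn(4, 1, 64, 64))
```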
Jonathan J. Marotta; Timothy J. Graham Cluttered environments: Differential effects of obstacle position on grasp and gaze locations Journal Article In: Canadian Journal of Experimental Psychology, vol. 70, no. 3, pp. 242–247, 2016. @article{Marotta2016, Previous research has investigated the effects of nontarget objects (NTOs) on reach trajectories, but their effects on eye-hand coordination remain to be determined. The current investigation utilized an eye-hand coordination paradigm, where a reaching and grasping task was performed in the presence of an NTO positioned exclusively in the right or left workspace of each right-handed participant. NTOs varied in their closeness to the subject and reach-path, between the starting location of the hand and the target-object of the reach. A control condition, where only the target was present, was also included. When an NTO was presented on the right (ipsilateral to the reaching hand), it pushed the final grasp and gaze locations on the target, shifting them to the left, away from the "obstacle." The impact of the ipsilateral NTO was increased as it was moved into positions closer to the participant that were of greater obstruction to the hand and arm. In contrast, when the NTO was contralateral, the risk of collision was low and participants developed a set reach plan that was repeated nearly identically for each contralateral NTO position. Our findings also indicate that the "invasiveness" of the NTO positions had a greater effect on grasp than it did on gaze position, demonstrating how the arrangement of clutter in an environment can differentially affect gaze and grasp when reaching for an object. |
Jun Maruta; Eva M. Palacios; Robert D. Zimmerman; Jamshid Ghajar; Pratik Mukherjee Chronic post-concussion neurocognitive deficits. I. Relationship with white matter integrity Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 35, 2016. @article{Maruta2016, We previously identified visual tracking deficits and associated degradation of integrity in specific white matter tracts as characteristics of concussion. We re-explored these characteristics in adult patients with persistent post-concussive symptoms using independent new data acquired during 2009–2012. Thirty-two patients and 126 normal controls underwent cognitive assessments and MR-DTI. After data collection, a subset of control subjects was selected to be individually paired with patients based on gender and age. We identified patients' cognitive deficits through pairwise comparisons between patients and matched control subjects. Within the remaining 94 normal subjects, we identified white matter tracts whose integrity correlated with metrics that indicated performance degradation in patients. We then tested for reduced integrity in these white matter tracts in patients relative to matched controls. Most patients showed no abnormality in MR images unlike the previous study. Patients' visual tracking was generally normal. Patients' response times in an attention task were slowed, but could not be explained as reduced integrity of white matter tracts relating to normal response timing. In the present patient cohort, we did not observe behavioral or anatomical deficits that we previously identified as characteristic of concussion. The recent cohort likely represented those with milder injury compared to the earlier cohort. The discrepancy may be explained by a change in the patient recruitment pool circa 2007 associated with an increase in public awareness of concussion. |
Jun Maruta; Lisa A. Spielman; Brett B. Yarusi; Yushi Wang; Jonathan M. Silver; Jamshid Ghajar Chronic post-concussion neurocognitive deficits. II. Relationship with persistent symptoms Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 45, 2016. @article{Maruta2016a, Individuals who sustain a concussion may continue to experience problems long after their injury. However, it has been postulated in the literature that the relationship between a concussive injury and persistent complaints attributed to it is mediated largely by the development of symptoms associated with posttraumatic stress disorder (PTSD) and depression. We sought to characterize cognitive deficits of adult patients who had persistent symptoms after a concussion and determine whether the original injury retains associations with these deficits after accounting for the developed symptoms that overlap with PTSD and depression. We compared the results of neurocognitive testing from 33 patients of both genders aged 18-55 at 3 months to 5 years post-injury with those from 140 control subjects. Statistical comparisons revealed that patients generally produced accurate responses on reaction time-based tests, but with reduced efficiency. On visual tracking, patients increased gaze position error variability following an attention demanding task, an effect that may reflect greater fatigability. When neurocognitive performance was examined in the context of demographic- and symptom-related variables, the original injury retained associations with reduced performance at a statistically significant level. For some patients, reduced cognitive efficiency and fatigability may represent key elements of interference when interacting with the environment, leading to varied paths of recovery after a concussion. Poor recovery may be better understood when these deficits are taken into consideration. |
Sebastiaan Mathôt; Jean-Baptiste Melmi; Lotje Linden; Stefan Van Der Stigchel The mind-writing pupil: A human-computer interface based on decoding of covert attention through pupillometry Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0148805, 2016. @article{Mathot2016, We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compared to a dark, stimulus. In our method, participants covertly attend to one of several letters with oscillating brightness. Pupil size reflects the brightness of the selected letter, which allows us, with high accuracy and in real time, to determine which letter the participant intends to select. The performance of our method is comparable to the best covert-attention brain-computer interfaces to date, and has several advantages: no movement other than pupil-size change is required; no physical contact is required (i.e. no electrodes); it is easy to use; and it is reliable. Potential applications include: communication with totally locked-in patients, training of sustained attention, and ultra-secure password input. |
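Mathôt et al. decode the attended letter from the way pupil size tracks the brightness oscillation of the covertly attended item. The sketch below illustrates one simple correlation-based decoder consistent with that description; the oscillation frequency, phases, and pupil lag are assumed values, not the published method's parameters.

```python
# Minimal sketch: each selectable letter oscillates in brightness with its own
# phase; the attended letter is the one whose brightness trace best predicts
# (inversely) the recorded pupil-size trace, after a fixed pupillary lag.
import numpy as np

fs, dur, lag = 60.0, 10.0, 0.6                       # Hz, seconds, pupil latency (s)
t = np.arange(0, dur, 1.0 / fs)
phases = {"A": 0.0, "B": np.pi / 2, "C": np.pi, "D": 3 * np.pi / 2}
brightness = {k: 0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t + p) for k, p in phases.items()}

def decode_attended(pupil_trace):
    shift = int(lag * fs)
    scores = {}
    for letter, b in brightness.items():
        # pupil constricts when the attended letter is bright -> negative correlation
        r = np.corrcoef(b[:-shift], pupil_trace[shift:])[0, 1]
        scores[letter] = -r
    return max(scores, key=scores.get)
```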
Nadine Matton; Pierre-Vincent Paubel; Julien Cegarra; Éric Raufaste Differences in multitask resource reallocation after change in task values Journal Article In: Human Factors, vol. 58, no. 8, pp. 1128–1142, 2016. @article{Matton2016, Objective: The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. Background: When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy. Method: We adopted a multitask environment used for pilot selection with a change in emphasis on two out of four subtasks while all subtasks had to be maintained over a minimum performance. A laboratory eye-tracking study contrasted 20 recently selected pilot students considered as experienced with this task and 15 university students considered as novices. Results: When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks falling below minimum performance. On the contrary, experienced people delayed the processing of one low-value subtask but managed to optimize global performance. Conclusion: In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. Application: During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. |
Christian H. Poth; Werner X. Schneider Breaking object correspondence across saccades impairs object recognition: The role of color and luminance Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–12, 2016. @article{Poth2016, Rapid saccadic eye movements bring the foveal region of the eye's retina onto objects for high-acuity vision. Saccades change the location and resolution of objects' retinal images. To perceive objects as visually stable across saccades, correspondence between the objects before and after the saccade must be established. We have previously shown that breaking object correspondence across the saccade causes a decrement in object recognition (Poth, Herwig, & Schneider, 2015). Color and luminance can establish object correspondence, but it is unknown how these surface features contribute to transsaccadic visual processing. Here, we investigated whether changing the surface features color-and-luminance and color alone across saccades impairs postsaccadic object recognition. Participants made saccades to peripheral objects, which either maintained or changed their surface features across the saccade. After the saccade, participants briefly viewed a letter within the saccade target object (terminated by a pattern mask). Postsaccadic object recognition was assessed as participants' accuracy in reporting the letter. Experiment A used the colors green and red with different luminances as surface features, Experiment B blue and yellow with approximately the same luminances. Changing the surface features across the saccade deteriorated postsaccadic object recognition in both experiments. These findings reveal a link between object recognition and object correspondence relying on the surface features colors and luminance, which is currently not addressed in theories of transsaccadic perception. We interpret the findings within a recent theory ascribing this link to visual attention (Schneider, 2013). |
Georgie Powell; Zoe Meredith; Rebecca McMillin; Tom C. A. Freeman Bayesian models of individual differences: Combining autistic traits and sensory thresholds to predict motion perception Journal Article In: Psychological Science, vol. 27, no. 12, pp. 1562–1572, 2016. @article{Powell2016, According to Bayesian models, perception and cognition depend on the optimal combination of noisy incoming evidence with prior knowledge of the world. Individual differences in perception should therefore be jointly determined by a person's sensitivity to incoming evidence and his or her prior expectations. It has been proposed that individuals with autism have flatter prior distributions than do nonautistic individuals, which suggests that prior variance is linked to the degree of autistic traits in the general population. We tested this idea by studying how perceived speed changes during pursuit eye movement and at low contrast. We found that individual differences in these two motion phenomena were predicted by differences in thresholds and autistic traits when combined in a quantitative Bayesian model. Our findings therefore support the flatter-prior hypothesis and suggest that individual differences in prior expectations are more systematic than previously thought. In order to be revealed, however, individual differences in sensitivity must also be taken into account. |
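Powell et al. combine sensory thresholds and autistic traits in a quantitative Bayesian model of perceived speed. The sketch below shows only the textbook Gaussian prior-likelihood combination that such models build on (a "flatter" prior corresponds to a larger prior standard deviation); the numbers are illustrative, not the paper's fitted parameters.

```python
# Minimal sketch of Bayesian cue combination for perceived speed: the posterior
# mean is a reliability-weighted average of the sensory estimate and a slow-speed
# prior, where reliability is the inverse variance of each source.
def perceived_speed(sensory_speed, sensory_sd, prior_mean=0.0, prior_sd=2.0):
    w_sense = 1.0 / sensory_sd ** 2
    w_prior = 1.0 / prior_sd ** 2
    return (w_sense * sensory_speed + w_prior * prior_mean) / (w_sense + w_prior)

# A noisier sensory estimate (e.g., at low contrast) is pulled harder toward the prior.
print(perceived_speed(10.0, sensory_sd=1.0))   # reliable evidence: close to 10
print(perceived_speed(10.0, sensory_sd=4.0))   # noisy evidence: markedly slower
```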
Georgie Powell; Petroc Sumner; James J. Harrison; Aline Bompas Interaction between contours and eye movements in the perception of afterimages: A test of the signal ambiguity theory Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–11, 2016. @article{Powell2016a, An intriguing property of afterimages is that conscious experience can be strong, weak, or absent following identical stimulus adaptation. Previously we suggested that postadaptation retinal signals are inherently ambiguous, and therefore the perception they evoke is strongly influenced by cues that increase or decrease the likelihood that they represent real objects (the signal ambiguity theory). Here we provide a more definitive test of this theory using two cues previously found to influence afterimage perception in opposite ways and plausibly at separate loci of action. However, by manipulating both cues simultaneously, we found that their effects interacted, consistent with the idea that they affect the same process of object interpretation rather than being independent influences. These findings bring contextual influences on afterimages into more general theories of cue combination, and we suggest that afterimage perception should be considered alongside other areas of vision science where cues are found to interact in their influence on perception. |
Rebecca B. Price; Inez M. Greven; Greg J. Siegle; Ernst H. W. Koster; Rudi De Raedt A Novel Attention Training Paradigm Based on Operant Conditioning of Eye Gaze: Preliminary Findings Journal Article In: Emotion, vol. 16, no. 1, pp. 110–116, 2016. @article{Price2016, |
Maria Solé Puig; Josep Marco Pallarés; Laura Perez Zapata; Laura Puigcerver; Josep Cañete; Hans Supèr Attentional selection accompanied by eye vergence as revealed by event-related brain potentials Journal Article In: PLoS ONE, vol. 11, no. 12, pp. e0167646, 2016. @article{Puig2016, Neural mechanisms of attention allow selective sensory information processing. Top-down deployment of visual-spatial attention is conveyed by cortical feedback connections from frontal regions to lower sensory areas modulating late stimulus responses. A recent study reported the occurrence of small eye vergence during orienting of top-down attention. Here we assessed a possible link between vergence and attention by comparing visual event-related potentials (vERPs) elicited by a cue stimulus that induced a shift of attention towards the target location with the vERPs elicited by a no-cue stimulus that did not trigger orienting of attention. The results replicate the findings of eye vergence responses during orienting of attention and show that the strength and timing of eye vergence coincide with the onset and strength of the vERPs when subjects oriented attention. Our findings therefore support the idea that eye vergence relates to, and possibly has a role in, attentional selection. |
Mary Vining Radomski; Mattie Anheluk; M. Penny Bartzen; Joette Zola In: American Journal of Occupational Therapy, vol. 70, no. 3, pp. 1–9, 2016. @article{Radomski2016, OBJECTIVE: To determine the effectiveness of interventions addressing cognitive impairments to improve occupational performance for people with traumatic brain injury. METHOD: A total of 37 studies met inclusion criteria: 9 Level I systematic reviews, 14 Level I studies, 5 Level II studies, and 9 Level III studies. RESULTS: Strong evidence supports use of direct attention training, dual-task training, and strategy training to optimize executive functioning, encoding, and use of memory compensations, including assistive technology. However, in most studies, occupational performance was a secondary outcome, if it was evaluated at all. CONCLUSION: Although evidence supports many intervention approaches used by occupational therapy practitioners to address cognitive impairments of adults with traumatic brain injury, more studies are needed in which occupational performance is the primary outcome of cognitive intervention. |
Ana Radonjić; David H. Brainard The nature of instructional effects in color constancy Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 6, pp. 847–865, 2016. @article{Radonjic2016, The instructions subjects receive can have a large effect on experimentally measured color constancy, but the nature of these effects and how their existence should inform our understanding of color perception remains unclear. We used a factorial design to measure how instructional effects on constancy vary with experimental task and stimulus set. In each of 2 experiments, we employed both a classic adjustment-based asymmetric matching task and a novel color selection task. Four groups of naive subjects were instructed to make adjustments/selections based on (a) color (neutral instructions); (b) the light reaching the eye (physical spectrum instructions); (c) the actual surface reflectance of an object (objective reflectance instructions); or (d) the apparent surface reflectance of an object (apparent reflectance instructions). Across the 2 experiments we varied the naturalness of the stimuli. We find clear interactions between instructions, task, and stimuli. With simplified stimuli (Experiment 1), instructional effects were large and the data revealed 2 instruction-dependent patterns. In 1 (neutral and physical spectrum instructions) constancy was low, intersubject variability was also low, and adjustment-based and selection-based constancy were in agreement. In the other (reflectance instructions) constancy was high, intersubject variability was large, adjustment-based constancy deviated from selection-based constancy and for some subjects selection-based constancy increased across sessions. Similar patterns held for naturalistic stimuli (Experiment 2), although instructional effects were smaller. We interpret these 2 patterns as signatures of distinct task strategies—1 is perceptual, with judgments based primarily on the perceptual representation of color; the other involves explicit instruction-driven reasoning. |
Jenni Radun; Mikko Nuutinen; Tuomas Leisti; Jukka Häkkinen Individual differences in image-quality estimations: Estimation rules and viewing strategies Journal Article In: ACM Transactions on Applied Perception, vol. 13, no. 3, pp. 1–22, 2016. @article{Radun2016, Subjective image-quality estimation with high-quality images is often a preference-estimation task. Preferences are subjective, and individual differences exist. Individual differences are also seen in the eye movements of people. A task's subjectivity can result from people using different rules as a basis for their estimation. Using two studies, we investigated whether different preference-estimation rules are related to individual differences in viewing behaviour by examining the process of preference estimation of high-quality images. The estimation rules were measured from free subjective reports on important quality-related attributes (Study 1) and from estimations of the attributes' importance in preference estimation (Study 2). The free reports showed that the observers used both feature-based image-quality attributes (e.g., sharpness, illumination) and abstract attributes, which include an interpretation of the image features (e.g., atmosphere and naturalness). In addition, the observers were classified into three viewing-strategy groups differing in fixation durations in both studies. These groups also used different estimation rules. In both studies, the group with medium-length fixations differed in their estimation rules from the other groups. In Study 1, the observers in this group used more abstract attributes than those in the other groups; in Study 2, they considered atmosphere to be a more important image feature. The study shows that individual differences in a quality-estimation task are related to both estimation rules and viewing strategies, and that the difference is related to the level of abstraction of the estimations. |
Pavan Ramkumar; Bruce C. Hansen; Sebastian Pannasch; Lester C. Loschky Visual information representation and rapid-scene categorization are simultaneous across cortex: An MEG study Journal Article In: NeuroImage, vol. 134, pp. 295–304, 2016. @article{Ramkumar2016, Perceiving the visual world around us requires the brain to represent the features of stimuli and to categorize the stimulus based on these features. Incorrect categorization can result either from errors in visual representation or from errors in processes that lead to categorical choice. To understand the temporal relationship between the neural signatures of such systematic errors, we recorded whole-scalp magnetoencephalography (MEG) data from human subjects performing a rapid-scene categorization task. We built scene category decoders based on (1) spatiotemporally resolved neural activity, (2) spatial envelope (SpEn) image features, and (3) behavioral responses. Using confusion matrices, we tracked how well the pattern of errors from neural decoders could be explained by SpEn decoders and behavioral errors, over time and across cortical areas. Across the visual cortex and the medial temporal lobe, we found that both SpEn and behavioral errors explained unique variance in the errors of neural decoders. Critically, these effects were nearly simultaneous, and most prominent between 100 and 250 ms after stimulus onset. Thus, during rapid-scene categorization, neural processes that ultimately result in behavioral categorization are simultaneous and co-localized with neural processes underlying visual information representation. |
Meike Ramon; Sébastien Miellet; Anna M. Dzieciol; Boris Nikolai Konrad; Martin Dresler; Roberto Caldara Super-memorizers are not super-recognizers Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0150972, 2016. @article{Ramon2016, Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory. |
Matthew T. Rätsep; Andrew F. Hickman; Brandon Maser; Jessica Pudwell; Graeme N. Smith; Donald Brien; Patrick W. Stroman; Michael A. Adams; James N. Reynolds; B. Anne Croy; Angelina Paolozza Impact of preeclampsia on cognitive function in the offspring Journal Article In: Behavioural Brain Research, vol. 302, pp. 175–181, 2016. @article{Raetsep2016, Preeclampsia (PE) is a significant clinical disorder occurring in 3-5% of all human pregnancies. Offspring of PE pregnancies (PE-F1s) are reported to exhibit greater cognitive impairment than offspring from uncomplicated pregnancies. Previous studies of PE-F1 cognitive ability used tests with bias that do not assess specific cognitive domains. To improve cognitive impairment classification in PE-F1s we used standardized clinical psychometric testing and eye-tracking studies of saccadic eye movements. PE-F1s (n = 10) and sex/age-matched control participants (n = 41 for psychometrics; n = 59 for eye-tracking) were recruited from the PE-NET study or extracted from the NeuroDevNet study databases. Participants completed a selected array of psychometric tests which assessed executive function, working memory, attention, inhibition, visuospatial processing, reading, and math skills. Eye-tracking studies included the prosaccade, antisaccade, and memory-guided tasks. Psychometric testing revealed an impairment in working memory among PE-F1s. Eye-tracking studies revealed numerous impairments among PE-F1s, including additional saccades required to reach the target, poor endpoint accuracy, and slower reaction times. However, PE-F1s made faster saccades than controls, and fewer sequence errors in the memory-guided task. Our study provides a comprehensive assessment of cognitive function among PE-F1s. The development of PE may be seen as an early predictor of reduced cognitive function in children, specifically in working memory and oculomotor control. Future studies should be extended to larger study populations, and this approach may be valuable for early studies of children born to pregnancies complicated by other disorders, such as gestational diabetes or intrauterine growth restriction. |
Jelmer P. De Vries; Stefan Van der Stigchel; Ignace T. C. Hooge; Frans A. J. Verstraten Revisiting the global effect and inhibition of return Journal Article In: Experimental Brain Research, vol. 234, no. 10, pp. 2999–3009, 2016. @article{DeVries2016b, Saccades toward previously cued locations have longer latencies than saccades toward other locations, a phenomenon known as inhibition of return (IOR). Watanabe (Exp Brain Res 138:330–342. doi:10.1007/s002210100709, 2001) combined IOR with the global effect (where saccade landing points fall in between neighboring objects) to investigate whether IOR can also have a spatial component. When one of two neighboring targets was cued, there was a clear bias away from the cued location. In a condition where both targets were cued, it appeared that the global effect magnitude was similar to the condition without any cues. However, as the latencies in the double cue condition were shorter compared to the no cue condition, it is still an open question whether these results are representative for IOR. Considering the double cue condition can provide valuable insight into the interaction of the mechanisms underlying the two phenomena, here, we revisit this condition in an adapted paradigm. Our paradigm does result in longer latencies for the cued locations, and we find that the magnitude of the global effect is reduced significantly. Unexpectedly, this holds even when only including saccades with the same latencies for both conditions. Thus, the increased latencies associated with IOR cannot directly explain the reduction in global effect. The global effect reduction can likely best be seen as either a result of short-term depression of exogenous visual signals or a result of IOR established at the center of gravity of cues. |
Vera Demberg; Asad Sayeed The frequency of rapid pupil dilations as a measure of linguistic processing difficulty Journal Article In: PLoS ONE, vol. 11, no. 1, pp. e0146194, 2016. @article{Demberg2016, While it has long been known that the pupil reacts to cognitive load, pupil size has received little attention in cognitive research because of its long latency and the difficulty of separating effects of cognitive load from the light reflex or effects due to eye movements. A novel measure, the Index of Cognitive Activity (ICA), relates cognitive effort to the frequency of small rapid dilations of the pupil. We report here on a total of seven experiments which test whether the ICA reliably indexes linguistically induced cognitive load: three experiments in reading (a manipulation of grammatical gender match / mismatch, an experiment of semantic fit, and an experiment comparing locally ambiguous subject versus object relative clauses, all in German), three dual-task experiments with simultaneous driving and spoken language comprehension (using the same manipulations as in the single-task reading experiments), and a visual world experiment comparing the processing of causal versus concessive discourse markers. These experiments are the first to investigate the effect and time course of the ICA in language processing. All of our experiments support the idea that the ICA indexes linguistic processing difficulty. The effects of our linguistic manipulations on the ICA are consistent for reading and auditory presentation. Furthermore, our experiments show that the ICA allows for usage within a multi-task paradigm. Its robustness with respect to eye movements means that it is a valid measure of processing difficulty for usage within the visual world paradigm, which will allow researchers to assess both visual attention and processing difficulty at the same time, using an eye-tracker. We argue that the ICA is indicative of activity in the locus caeruleus area of the brain stem, which has recently also been linked to P600 effects observed in psycholinguistic EEG experiments. |
Shujie Deng; Jian Chang; Julie A. Kirkby; Jian J. Zhang Gaze–mouse coordinated movements and dependency with coordination demands in tracing Journal Article In: Behaviour & Information Technology, vol. 35, no. 8, pp. 665–679, 2016. @article{Deng2016, Eye movements have been shown to lead hand movements in tracing tasks where subjects have to move their fingers along a predefined trace. The question remained, whether the leading relationship was similar when tracing with a pointing device, such as a mouse; more importantly, whether tasks that required more or less gaze–mouse coordination would introduce variation in this pattern of behaviour, in terms of both spatial and temporal leading of gaze position to mouse movement. A three-level gaze–mouse coordination demand paradigm was developed to address these questions. A substantial dataset of 1350 trials was collected and analysed. The linear correlation of gaze–mouse movements, the statistical distribution of the lead time, as well as the lead distance between gaze and mouse cursor positions were all considered, and we proposed a new method to quantify lead time in gaze–mouse coordination. The results supported and extended previous empirical findings that gaze often led mouse movements. We found that the gaze–mouse coordination demands of the task were positively correlated to the gaze lead, both spatially and temporally. However, the mouse movements were synchronised with or led gaze in the simple straight line condition, which demanded the least gaze–mouse coordination. |
Tao Deng; Kaifu Yang; Yongjie Li; Hongmei Yan Where does the driver look? Top-down-based saliency detection in a traffic driving environment Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2051–2062, 2016. @article{Deng2016a, A traffic driving environment is a complex and dynamically changing scene. When driving, drivers always allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and prior areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could be useful in supporting autonomous driving, traffic sign detection, driving training, car collision warning, and other tasks. Recently, advances in visual attention models have provided substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, there exists no computational framework that can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, when viewing 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We proposed that the vanishing point of the road can be regarded as valuable top-down guidance in a traffic saliency detection model. Subsequently, we built a framework of a classic bottom-up and top-down combined traffic saliency detection model. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment. |
Adele Diederich; Hans Colonius; Farid I. Kandil Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response: A TWIN analysis Journal Article In: Experimental Brain Research, vol. 234, no. 7, pp. 2059–2076, 2016. @article{Diederich2016, Saccadic reaction times from a focused-attention task with a visual target and an acoustic nontarget support the hypothesis that the amount of saccadic facilitation in the presence of a nontarget increases with the prior knowledge of alignment with the target across different blocks of trials. The time-window-of-integration model can account for the size of the effect by having window size depend on the prior knowledge of alignment. Some efforts to identify the neural correlates of the effect are discussed. |
Pia Dietze; Eric D. Knowles Social class and the motivational relevance of other human beings: Evidence from visual attention Journal Article In: Psychological Science, vol. 27, no. 11, pp. 1517–1527, 2016. @article{Dietze2016, We theorize that people's social class affects their appraisals of others' motivational relevance—the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In Study 1, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. In Studies 2a and 2b, participants' eye movements were tracked while they viewed street scenes; higher class was associated with reduced attention to people in the images. In Study 3, a change-detection procedure assessed the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed. |
Gregory J. Digirolamo; Neha Patel; Clare L. Blaukopf Arousal facilitates involuntary eye movements Journal Article In: Experimental Brain Research, vol. 234, pp. 1967–1976, 2016. @article{Digirolamo2016, Attention plays a critical role in action selection. However, the role of attention in eye movements is complicated because these movements can be either voluntary or involuntary and, in some circumstances (antisaccades), these two actions compete with each other for execution. But attending to the location of an impending eye movement is only one facet of attention that may play a role in eye movement selection. In two experiments, we investigated the effect of arousal on voluntary eye movements (antisaccades) and involuntary eye movements (prosaccadic errors) in an antisaccade task. Arousal, as caused by brief loud sounds and indexed by changes in pupil diameter, had a facilitatory effect on involuntary eye movements. Involuntary eye movements were both significantly more likely to be executed and significantly faster under arousal conditions (Experiments 1 and 2), and the influence of arousal had a specific time course (Experiment 2). Arousal, one form of attention, can produce significant costs for human movement selection, as potent but unplanned actions are benefited more than planned ones. |
Gregory J. DiGirolamo; Ellen J. Sophis; Jennifer L. Daffron; Gerardo Gonzalez; Mauricio Romero-Gonzalez; Sean A. Gillespie Breakdowns of eye movement control toward smoking cues in young adult light smokers Journal Article In: Addictive Behaviors, vol. 52, pp. 98–102, 2016. @article{DiGirolamo2016b, Background: Many studies suggest that dependent smokers have a preference or attentional bias toward smoking cues. The purpose of this study was to test the ability of infrequent non-dependent light smokers to control their eye movements by looking away from smoking cues. Poor control in the lightest of smokers would suggest nicotine cue-elicited behavior occurring even prior to nicotine dependency as measured by daily smoking. Methods: 17 infrequent non-dependent light smokers and 17 lifetime non-smokers performed an antisaccade task (look away from a suddenly appearing cue) on smoking, alcohol, neutral, and dot cues. Results: The light smokers, who were confirmed light smokers and non-dependent (mean Fagerström Dependency Score = 0.35), were significantly worse at controlling their eye movements to smoking cues relative to both neutral cues (p<.04) and alcohol cues (p<.02). Light smokers made significantly more errors to smoking cues than non-smokers (p<.004). Conclusions: These data suggest that prior to developing clinical symptoms of severe dependence or progressing to heavier smoking (e.g., daily smoking), the lightest of smokers are showing a specific deficit in control of nicotine cue-elicited behavior. |
Yun Ding; Tao He; Jason Satel; Zhiguo Wang Inhibitory cueing effects following manual and saccadic responses to arrow cues Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 4, pp. 1020–1029, 2016. @article{Ding2016, With two cueing tasks, in the present study we examined output-based inhibitory cueing effects (ICEs) with manual responses to arrow targets following manual or saccadic responses to arrow cues. In all experiments, ICEs were observed when manual localization responses were required to both the cues and targets, but only when the cue-target onset asynchrony (CTOA) was 2,000 ms or longer. In contrast, when saccadic responses were made in response to the cues, ICEs were only observed with CTOAs of 2,000 ms or less—and only when an auditory cue-back signal was used. The present study also showed that the magnitude of ICEs following saccadic responses to arrow cues decreased with time, much like traditional inhibition-of-return effects. The magnitude of ICEs following manual responses to arrow cues, however, appeared later in time and showed no sign of decreasing even 3 s after cue onset. These findings suggest that ICEs linked to skeletomotor activation do exist and that the ICEs evoked by oculomotor activation can carry over to the skeletomotor system. |
Yun Ding; Jing Zhao; Tao He; Yufei Tan; Lingshuang Zheng; Zhiguo Wang Selective impairments in covert shifts of attention in Chinese dyslexic children Journal Article In: Dyslexia, vol. 22, no. 4, pp. 362–378, 2016. @article{Ding2016a, Reading depends heavily on the efficient shift of attention. Mounting evidence has suggested that dyslexics have deficits in covert attentional shift. However, it remains unclear whether dyslexics also have deficits in overt attentional shift. With the majority of relevant studies carried out in alphabetic writing systems, it is also unknown whether the attentional deficits observed in dyslexics are restricted to a particular writing system. The present study examined inhibition of return (IOR)—a major driving force of attentional shifts—in dyslexic children learning to read a logographic script (i.e., Chinese). Robust IOR effects were observed in both covert and overt attentional tasks in two groups of typically developing children, who were age- or reading ability-matched to the dyslexic children. In contrast, the dyslexic children showed IOR in the overt but not in the covert attentional task. We conclude that covert attentional shift is selectively impaired in dyslexic children. This impairment is not restricted to alphabetic writing systems, and it could be a significant contributor to the difficulties encountered by children learning to read. |
Pascasie L. Dombert; Gereon R. Fink; Simone Vossel The impact of probabilistic feature cueing depends on the level of cue abstraction Journal Article In: Experimental Brain Research, vol. 234, no. 3, pp. 685–694, 2016. @article{Dombert2016, Allocation of attentional resources rests on predictions about the likelihood of events. While this effect has been extensively studied in the spatial attention domain where the location of a target stimulus is pre-cued, less is known about the cueing of stimulus features such as the color of a behaviorally relevant target. Moreover, there is disagreement about which types of color cues are effective for biasing attention. Here we investigated the effects of probabilistic context (percentage of cue validity, %CV) for different levels of cue abstraction to elucidate how feature-based search information is processed and used to direct attention. The color of a target was cued by presenting the perceptual color, the color word, or two-letter abbreviations. %CV, i.e., the probability that the cue indicated the color correctly, changed unpredictably between 50, 70, and 90%. Response times (RTs) for valid and invalid trials in each %CV condition were recorded in 60 datasets and analyzed with analyses of variance. The results showed that all cues were associated with comparable RT costs after invalid cueing. The modulation of RT costs by probabilities, however, depended upon level of cue abstraction and time on task: While a strong, immediate impact of %CV was found for two-letter cueing, the effect was solely observed in the second half of the experiment for perceptual and word cues. These results demonstrate that probabilistic feature-based information is processed differently for different levels of cue abstraction. Moreover, the modulatory effect of the environmental statistics differentially depends on the time on task for different feature cues. |
Pascasie L. Dombert; Anna B. Kuhns; Paola Mengotti; Gereon R. Fink; Simone Vossel Functional mechanisms of probabilistic inference in feature- and space-based attentional systems Journal Article In: NeuroImage, vol. 142, pp. 553–564, 2016. @article{Dombert2016a, Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. |
Helen E. Clark; John A. Perrone; Robert B. Isler; Samuel G. Charlton The role of eye movements in the size-speed illusion of approaching trains Journal Article In: Accident Analysis and Prevention, vol. 86, pp. 146–154, 2016. @article{Clark2016, Recent research on the perceived speed of large moving objects, compared to smaller moving objects, has revealed the presence of a size-speed illusion. This illusion, where a large object seems to be moving more slowly than a small object travelling at the same speed, may account for collisions between motor cars and trains at level crossings, which is a serious safety issue in New Zealand and worldwide. One possible reason for the perceived size-speed difference may be related to the movement of our eyes when we track moving vehicles. In order to investigate this, we tested observers' relative speed perception of moving objects (both abstract and more detailed objects) moving in depth towards the observer, presented on a computer display while eye movements were recorded with an eye tracker. Experiment 1 first confirmed the size-speed illusion when the observers were situated further away (18 or 36 m) from the simulated rail crossing or intersection. It also revealed that the eye movement behaviour of our participants was different when they judged the speeds of the small and large objects; eye fixations were localised around the visual centroid of longer objects and hence were further from the front of the moving large objects than the smaller ones. Experiment 2 found that manipulating eye movements could reduce the magnitude of the illusion. When observers tracked targets (dots) that were placed at corresponding locations at the front of the small object and the long object respectively, they perceived the speeds of the two objects as equal. When target dots were placed closer to the visual centroid, observers perceived the larger object to be moving more slowly. These results demonstrate that there is a close relationship between eye movement behaviour and our perceived judgement of an approaching train's speed. |
Alasdair D. F. Clarke; Patrick Green; Mike J. Chantler; Amelia R. Hunt Human search for a target on a textured background is consistent with a stochastic model Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–16, 2016. @article{Clarke2016a, Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks. |
Alasdair D. F. Clarke; Amelia R. Hunt Failure of intuition when choosing whether to invest in a single goal or split resources between two goals Journal Article In: Psychological Science, vol. 27, no. 1, pp. 64–74, 2016. @article{Clarke2016, In a series of related experiments, we asked people to choose whether to split their attention between two equally likely potential tasks or to prioritize one task at the expense of the other. In such a choice, when the tasks are easy, the best strategy is to prepare for both of them. As difficulty increases beyond the point at which people can perform both tasks accurately, they should switch strategy and focus on one task at the expense of the other. Across three very different tasks (target detection, throwing, and memory), none of the participants switched their strategy at the correct point. Moreover, the majority consistently failed to modify their strategy in response to changes in task difficulty. This failure may have been related to uncertainty about their own ability, because in a version of the experiment in which there was no uncertainty, participants uniformly switched at an optimal point. |
Claudia Classen; Armin Kibele Action induction by visual perception of rotational motion Journal Article In: Psychological Research, vol. 80, no. 5, pp. 785–804, 2016. @article{Classen2016, A basic process in the planning of everyday actions involves the integration of visually perceived movement characteristics. Such processes of information integration often occur automatically. The aim of the present study was to examine whether the visual perception of spatial characteristics of a rotational motion (rotation direction) can induce a spatially compatible action. Four reaction time experiments were conducted to analyze the effect of perceiving task-irrelevant rotational motions of simple geometric figures, as well as of gymnasts on a horizontal bar, while responding to color changes in these objects. The results show that participants react faster when the directional information of a rotational motion is compatible with the spatial characteristics of an intended action. The degree of complexity of the perceived event does not play a role in this effect. The spatial features of the biological motion used were salient enough to elicit a motion-based Simon effect. However, in the cognitive processing of the visual stimulus, the critical criterion is not the direction of rotation, but rather the relative direction of motion (direction of motion above or below the center of rotation). Nevertheless, this conclusion must be qualified, since it is only fully supported by the response behavior of the female participants. |
Virginia Clinton; Kinga Morsanyi; Martha W. Alibali; Mitchell J. Nathan Learning about probability from text and tables: Do color coding and labeling through an interactive-user interface help? Journal Article In: Applied Cognitive Psychology, vol. 30, no. 3, pp. 440–453, 2016. @article{Clinton2016, Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly assigned to learn about probabilistic reasoning from one of 4 computer-based lessons generated from a 2 (color coding/no color coding) by 2 (labeling/no labeling) between-subjects design. Learners added the labels or color coding at their own pace by clicking buttons in a computer-based lesson. Participants' eye movements were recorded while viewing the lesson. Labeling was beneficial for learning, but color coding was not. In addition, labeling, but not color coding, increased attention to important information in the table and time with the lesson. Both labeling and color coding increased looks between the text and corresponding information in the table. The findings provide support for the multimedia principle, and they suggest that providing labeling enhances learning about probabilistic reasoning from text and tables. |
Moreno I. Coco; Frank Keller; George L. Malcolm Anticipation in real-world scenes: The role of visual context and visual memory Journal Article In: Cognitive Science, vol. 40, no. 8, pp. 1995–2024, 2016. @article{Coco2016, The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. |
Thérèse Collins The spatiotopic representation of visual objects across time Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1531–1537, 2016. @article{Collins2016, Each eye movement introduces changes in the retinal location of objects. How a stable spatiotopic representation emerges from such variable input is an important question for the study of vision. Researchers have classically probed human observers' performance in a task requiring a location judgment about an object presented at different locations across a saccade. Correct performance on this task requires realigning or remapping retinal locations to compensate for the saccade. A recent study showed that performance improved with longer presaccadic viewing time, suggesting that accurate spatiotopic representations take time to build up. The first goal of the study was to replicate that finding. Two experiments, one an exact replication and the second a modified version, failed to replicate improved performance with longer presaccadic viewing time. The second goal of this study was to examine the role of attention in constructing spatiotopic representations, as theoretical and neurophysiological accounts of remapping have proposed that only attended targets are remapped. A third experiment thus manipulated attention with a spatial cueing paradigm and compared transsaccadic location performance of attended versus unattended targets. No difference in spatiotopic performance was found between attended and unattended targets. Although only negative results are reported, they might nevertheless suggest that spatiotopic representations are relatively stable over time. |
Jason C. Coronel; Kara D. Federmeier In: Communication Research, vol. 43, no. 7, pp. 922–944, 2016. @article{Coronel2016, Gender-based political stereotypes pervade the media environment in the United States, and this may cause voters to automatically activate these stereotypes while evaluating politicians. In the research reported here, we investigate whether voters are able to reduce the automatic activation of unwanted stereotypes and how political sophistication influences this capacity. The current experiment uses self-reports to measure controlled stereotyping, and we develop a new eye movement metric to measure automatic stereotyping. We find that political sophisticates are more effective than novices at reducing unwanted gender-based political stereotypes. This study has two main implications for communication research. First, the results suggest that the effects of gender-based automatic stereotyping—induced by the information environment—on political judgments may not be as powerful as some of the current literature portrays them to be. Second, this study adds eye movements to the arsenal of tools available to communication scholars interested in measuring covert forms of stereotyping. |
Antoine Coutrot; Nicola Binetti; Charlotte Harrison; Isabelle Mareschal; Alan Johnston Face exploration dynamics differentiate men and women Journal Article In: Journal of Vision, vol. 16, no. 14, pp. 1–19, 2016. @article{Coutrot2016, The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern, highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimuli effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models from computer vision to clinical psychology. |
Hayley Crawford; Joanna Moss; Chris Oliver; Natasha Elliott; Giles M. Anderson; Joseph P. McCleery Visual preference for social stimuli in individuals with autism or neurodevelopmental disorders: An eye-tracking study Journal Article In: Molecular Autism, vol. 7, no. 1, pp. 1–12, 2016. @article{Crawford2016, Background: Recent research has identified differences in relative attention to competing social versus non-social video stimuli in individuals with autism spectrum disorder (ASD). Whether attentional allocation is influenced by the potential threat of stimuli has yet to be investigated. This is manipulated in the current study by the extent to which the stimuli are moving towards or moving past the viewer. Furthermore, little is known about whether such differences exist across other neurodevelopmental disorders. This study aims to determine if adolescents with ASD demonstrate differences in attentional allocation to competing pairs of social and non-social video stimuli, where the actor or object either moves towards or moves past the viewer, in comparison to individuals without ASD, and to determine if individuals with three genetic syndromes associated with differing social phenotypes demonstrate differences in attentional allocation to the same stimuli. Methods: In study 1, adolescents with ASD and control participants were presented with social and non-social video stimuli in two formats (moving towards or moving past the viewer) whilst their eye movements were recorded. This paradigm was then employed with groups of individuals with fragile X, Cornelia de Lange, and Rubinstein-Taybi syndromes who were matched with one another on chronological age, global adaptive behaviour, and verbal adaptive behaviour (study 2). Results: Adolescents with ASD demonstrated reduced looking-time to social versus non-social videos only when stimuli were moving towards them. Individuals in the three genetic syndrome groups showed similar looking-time but differences in fixation latency for social stimuli moving towards them. Across both studies, we observed within- and between-group differences in attention to social stimuli that were moving towards versus moving past the viewer. Conclusions: Taken together, these results provide strong evidence to suggest differential visual attention to competing social versus non-social video stimuli in populations with clinically relevant, genetically mediated differences in socio-behavioural phenotypes. |
Frédéric Crevecoeur; Douglas P. Munoz; Stephen H. Scott Dynamic multisensory integration: Somatosensory speed trumps visual accuracy during feedback control Journal Article In: Journal of Neuroscience, vol. 36, no. 33, pp. 8598–8611, 2016. @article{Crevecoeur2016, Recent advances in movement neuroscience have consistently highlighted that the nervous system performs sophisticated feedback control over very short time scales (100ms for upper limb). These observations raise the important question of how the nervous system processes multiple sources of sensory feedback in such short time intervals, given that temporal delays across sensory systems such as vision and proprioception differ by tens of milliseconds. Here we show that during feedback control, healthy humans use dynamic estimates of hand motion that rely almost exclusively on limb afferent feedback even when visual information about limb motion is available. We demonstrate that such reliance on the fastest sensory signal during movement is compatible with dynamic Bayesian estimation. These results suggest that the nervous system considers not only sensory variances but also temporal delays to perform optimal multisensory integration and feedback control in real-time. |
Deborah A. Cronin; James R. Brockmole Evaluating the influence of a fixated object's spatio-temporal properties on gaze control Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 4, pp. 996–1003, 2016. @article{Cronin2016, Despite recent progress in understanding the factors that determine where an observer will eventually look in a scene, we know very little about what determines how an observer decides where he or she will look next. We investigated the potential roles of object-level representations in the direction of subsequent shifts of gaze. In five experiments, we considered whether a fixated object's spatial orientation, implied motion, and perceived animacy affect gaze direction when shifting overt attention to another object. Eye movements directed away from a fixated object were biased in the direction it faced. This effect was not modified by implying a particular direction of inanimate or animate motion. Together, these results suggest that decisions regarding where one should look next are in part determined by the spatial, but not by the implied temporal, properties of the object at the current locus of fixation. |
Evan T. Curtis; Matthew G. Huebner; Jo-Anne LeFevre The relationship between problem size and fixation patterns during addition, subtraction, multiplication, and division Journal Article In: Journal of Numerical Cognition, vol. 2, no. 2, pp. 91–115, 2016. @article{Curtis2016, Eye-tracking methods have only rarely been used to examine the online cognitive processing that occurs during mental arithmetic on simple arithmetic problems, that is, addition and multiplication problems with single-digit operands (e.g., operands 2 through 9; 2 + 3, 6 x 8) and the inverse subtraction and division problems (e.g., 5 – 3; 48 ÷ 6). Participants (N = 109) solved arithmetic problems from one of the four operations while their eye movements were recorded. We found three unique fixation patterns. During addition and multiplication, participants allocated half of their fixations to the operator and one-quarter to each operand, independent of problem size. The pattern was similar on small subtraction and division problems. However, on large subtraction problems, fixations were distributed approximately evenly across the three stimulus components. On large division problems, over half of the fixations occurred on the left operand, with the rest distributed between the operation sign and the right operand. We discuss the relations between these eye tracking patterns and other research on the differences in processing across arithmetic operations. |
Mario Dalmaso; S. Gareth Edwards; Andrew P. Bayliss Re-encountering individuals who previously engaged in joint gaze modulates subsequent gaze cueing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 2, pp. 271–284, 2016. @article{Dalmaso2016, We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants' eye movement target. Two faces always engaged in joint gaze with the participant, whereas 2 other faces never engaged in joint gaze. Then, we assessed standard gaze cueing in response to these faces to ascertain the effect of these prior interactions on subsequent social attention episodes. In Experiment 1, the face's eyes moved before the participant's target appeared, meaning that the participant always gaze-followed 2 faces and never gaze-followed 2 other faces. We found that this prior experience modulated the timecourse of subsequent gaze cueing. In Experiments 2a/b, the participant looked at the target first, then was either followed (i.e., the participant initiated joint gaze), or was not followed. These participants then showed an overall decrement of gaze cueing with individuals who had previously followed participants' eyes (Experiment 2a), an effect that was associated with autism spectrum quotient scores and modulated perceived trustworthiness of the faces (Experiment 2b). Experiment 3 demonstrated that these modulations are unlikely to be because of the association of different levels of task difficulty with particular faces. These findings suggest that establishing joint gaze with others influences subsequent social attention processes that are generally thought to be relatively insensitive to learning from prior episodes. |
Archy O. Berker; Robb B. Rutledge; Christoph Mathys; Louise Marshall; Gemma F. Cross; Raymond J. Dolan; Sven Bestmann Computations of uncertainty mediate acute stress responses in humans Journal Article In: Nature Communications, vol. 7, pp. 10996, 2016. @article{Berker2016, The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function. |
Archy O. Berker; Margot Tirole; Robb B. Rutledge; Gemma F. Cross; Raymond J. Dolan; Sven Bestmann Acute stress selectively impairs learning to act Journal Article In: Scientific Reports, vol. 6, pp. 29816, 2016. @article{Berker2016a, Stress interferes with instrumental learning. However, choice is also influenced by non-instrumental factors, most strikingly by biases arising from Pavlovian associations that facilitate action in pursuit of rewards and inaction in the face of punishment. Whether stress impacts on instrumental learning via these Pavlovian associations is unknown. Here, in a task where valence (reward or punishment) and action (go or no-go) were orthogonalised, we asked whether the impact of stress on learning was action or valence specific. We exposed 60 human participants either to stress (socially-evaluated cold pressor test) or a control condition (room temperature water). We contrasted two hypotheses: that stress would lead to a non-selective increase in the expression of Pavlovian biases; or that stress, as an aversive state, might specifically impact action production due to the Pavlovian linkage between inaction and aversive states. We found support for the second of these hypotheses. Stress specifically impaired learning to produce an action, irrespective of the valence of the outcome, an effect consistent with a Pavlovian linkage between punishment and inaction. This deficit in action-learning was also reflected in pupillary responses; stressed individuals showed attenuated pupillary responses to action, hinting at a noradrenergic contribution to impaired action-learning under stress. |
Anouk J. de Brouwer; Eli Brenner; Jeroen B. J. Smeets Keeping a target in memory does not increase the effect of the Müller-Lyer illusion on saccades Journal Article In: Experimental Brain Research, vol. 234, no. 4, pp. 977–983, 2016. @article{Brouwer2016a, The effects of visual contextual illusions on motor behaviour vary largely between experimental conditions. Whereas it has often been reported that the effects of illusions on pointing and grasping are largest when the movement is performed some time after the stimulus has disappeared, the effect of a delay has hardly been studied for saccadic eye movements. In this experiment, participants viewed a briefly presented Müller-Lyer illusion with a target at its endpoint and made a saccade to the remembered position of this target after a delay of 0, 0.6, 1.2 or 1.8 s. We found that horizontal saccade amplitudes were shorter for the perceptually shorter than for the perceptually longer configuration of the illusion. Most importantly, although the delay clearly affected saccade amplitude, resulting in shorter saccades for longer delays, the illusion effect did not depend on the duration of the delay. We argue that visually guided and memory-guided saccades are likely based on a common visual representation. |
Olivier de Condappa; Jan M. Wiener Human place and response learning: navigation strategy selection, pupil size and gaze behavior Journal Article In: Psychological Research, vol. 80, no. 1, pp. 82–93, 2016. @article{Condappa2016, In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge. |
Julian De Freitas; Nicholas E. Myers; Anna C. Nobre Tracking the changing feature of a moving object Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–21, 2016. @article{DeFreitas2016, The mind can track not only the changing locations of moving objects, but also their changing features, which are often meaningful for guiding action. How does the mind track such features? Using a task in which observers tracked the changing orientation of a rolling wheel's spoke, we found that this ability is enabled by a highly feature-specific process which continuously tracks the orientation feature itself—even during occlusion, when the feature is completely invisible. This suggests that the mental representation of a changing orientation feature and its moving object are continuously transformed and updated, akin to studies showing continuous tracking of an object's boundaries alone. We also found a systematic error in performance, whereby the orientation was reliably perceived to be further ahead than it truly was. This effect appears to occur because during occlusion the mental representation of the feature is transformed beyond the veridical position, perhaps in order to conservatively anticipate future feature states. |
Floor Groot; Falk Huettig; Christian N. L. Olivers Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search Journal Article In: Visual Cognition, vol. 24, no. 3, pp. 226–245, 2016. @article{Groot2016a, When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than looking at unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present. |
Jeanne Bovet; Junpeng Lao; Océane Bartholomée; Roberto Caldara; Michel Raymond Mapping female bodily features of attractiveness Journal Article In: Scientific Reports, vol. 6, pp. 18551, 2016. @article{Bovet2016, "Beauty is bought by judgment of the eye" (Shakespeare, Love's Labour's Lost), but the bodily features governing this critical biological choice are still debated. Eye movement studies have demonstrated that males sample coarse body regions expanding from the face, the breasts and the midriff, while making female attractiveness judgements with natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, thus the visual information actually used by men is still unknown. We thus used a parametric gaze-contingent design while males rated attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were only visible through restricted apertures, fixations strongly shifted to the hips, to potentially extract hip-width and curvature, then the breast and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection. |
Jeffrey S. Bowers; Ivan I. Vankov; Casimir J. H. Ludwig The visual system supports online translation invariance for object identification Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 432–438, 2016. @article{Bowers2016, The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision. |
Johannes Brand; Marco Piccirelli; Marie Claude Hepp-Reymond; Manfred Morari; Lars Michels; Kynan Eng Virtual hand feedback reduces reaction time in an interactive finger reaching task Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0154807, 2016. @article{Brand2016, Computer interaction via visually guided hand or finger movements is a ubiquitous part of daily computer usage in work or gaming. Surprisingly, however, little is known about the performance effects of using virtual limb representations versus simpler cursors. In this study 26 healthy right-handed adults performed cued index finger flexion-extension movements towards an on-screen target while wearing a data glove. They received each of four different types of real-time visual feedback: a simple circular cursor, a point light pattern indicating finger joint positions, a cartoon hand and a fully shaded virtual hand. We found that participants initiated the movements faster when receiving feedback in the form of a hand than when receiving circular cursor or point light feedback. This overall difference was robust for three out of four hand versus circle pairwise comparisons. The faster movement initiation for hand feedback was accompanied by a larger movement amplitude and a larger movement error. We suggest that the observed effect may be related to priming of hand information during action perception and execution affecting motor planning and execution. The results may have applications in the use of body representations in virtual reality applications. |
Doris I. Braun; Karl R. Gegenfurtner Dynamics of oculomotor direction discrimination Journal Article In: Journal of Vision, vol. 16, no. 13, pp. 1–26, 2016. @article{Braun2016, Successful foveation of a dynamic target depends on good predictions of its movement direction and speed. We measured and compared the temporal dynamics of directional precision of both saccades and smooth pursuit and their interactions. We also compared the directional precision of both eye movements to psychophysical direction discrimination thresholds. Directional thresholds of pure pursuit responses improved rapidly and reached asymptotic values of 1.5°-3° within 300 ms after target motion onset, both for trained and untrained observers and irrespective of the speed of the stimuli. Psychophysical thresholds were in the same range. Directional thresholds for saccades in the ramp paradigm were just slightly higher, but these occurred significantly earlier in time at around 200 ms after target motion onset. At the equivalent time during pure pursuit initiation, thresholds were typically higher by 2°-3°. The rise in directional precision (or decrease in thresholds) over time was more pronounced for trials with longer latencies. As a result, precision depended mainly on time since stimulus motion onset rather than pursuit onset. Directional precision for saccades to static targets was slightly better than to moving targets, at even shorter latencies. We conclude that directional precision is higher for the saccadic system at saccade onset than for the pursuit system, presumably due to additional position signals that are not available to the pursuit system at that point in time. The pursuit response improves rapidly due to refined sensory processing and motor planning. The combination of initial saccades and pursuit to track moving targets is a good strategy for the oculomotor system to reduce directional errors during the phase of initiation. The target speed had very little effect on the directional precision of both eye movements. |
Scott L. Brincat; Earl K. Miller Prefrontal cortex networks shift from external to internal modes during learning Journal Article In: Journal of Neuroscience, vol. 36, no. 37, pp. 9739–9754, 2016. @article{Brincat2016, As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information change as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC), regions critical for sensory associations, of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. |
James A. Brissenden; Emily J. Levin; David E. Osher; Mark A. Halko; David C. Somers Functional evidence for a cerebellar node of the dorsal attention network Journal Article In: Journal of Neuroscience, vol. 36, no. 22, pp. 6083–6096, 2016. @article{Brissenden2016, The "dorsal attention network" or "frontoparietal network" refers to a network of cortical regions that support sustained attention and working memory. Recent work has demonstrated that cortical nodes of the dorsal attention network possess intrinsic functional connections with a region in ventral cerebellum, in the vicinity of lobules VII/VIII. Here, we performed a series of task-based and resting-state fMRI experiments to investigate cerebellar participation in the dorsal attention network in humans. We observed that visual working memory and visual attention tasks robustly recruit cerebellar lobules VIIb and VIIIa, in addition to canonical cortical dorsal attention network regions. Across the cerebellum, resting-state functional connectivity with the cortical dorsal attention network strongly predicted the level of activation produced by attention and working memory tasks. Critically, cerebellar voxels that were most strongly connected with the dorsal attention network selectively exhibited load-dependent activity, a hallmark of the neural structures that support visual working memory. Finally, we examined intrinsic functional connectivity between task-responsive portions of cerebellar lobules VIIb/VIIIa and cortex. Cerebellum-to-cortex functional connectivity strongly predicted the pattern of cortical activation during task performance. Moreover, resting-state connectivity patterns revealed that cerebellar lobules VIIb/VIIIa group with cortical nodes of the dorsal attention network. This evidence leads us to conclude that the conceptualization of the dorsal attention network should be expanded to include cerebellar lobules VIIb/VIIIa. |
Andreas Brocher; Tim Graf Pupil old/new effects reflect stimulus encoding and decoding in short-term memory Journal Article In: Psychophysiology, vol. 53, no. 12, pp. 1823–1835, 2016. @article{Brocher2016, We conducted five pupil old/new experiments to examine whether pupil old/new effects can be linked to familiarity and/or recollection processes of recognition memory. In Experiments 1–3, we elicited robust pupil old/new effects for legal words and pseudowords (Experiment 1), positive and negative words (Experiment 2), and low-frequency and high-frequency words (Experiment 3). Importantly, unlike for old/new effects in ERPs, we failed to find any effects of long-term memory representations on pupil old/new effects. In Experiment 4, using the words and pseudowords from Experiment 1, participants made lexical decisions instead of old/new decisions. Pupil old/new effects were restricted to legal words. Additionally requiring participants to make speeded responses (Experiment 5) led to a complete absence of old/new effects. Taken together, these data suggest that pupil old/new effects do not map onto familiarity and recollection processes of recognition memory. They rather seem to reflect strength of memory traces in short-term memory, with little influence of long-term memory representations. Crucially, weakening the memory trace through manipulations in the experimental task significantly reduces pupil old/new effects. |
Simona Buetti; Alejandro Lleras Distractibility is a function of engagement, not task difficulty: Evidence from a new oculomotor capture paradigm Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 10, pp. 1382–1405, 2016. @article{Buetti2016, It has been shown that when humans require a brief moment of concentration or mental effort, they tend to avert their gaze away from the attended location (or even blink). Similarly, participants tend to miss unexpected events when they are highly focused on a task. We present an engagement theory of distractibility that is meant to capture the relationship between participants' engagement in a task and reduction in sensitivity to new sensory events in a broad range of situations. In a series of experiments, we asked participants to perform different cognitive tasks of varying degrees of difficulty while we measured spontaneous oculomotor capture by new images that were completely unrelated to the participants' task. The images appeared while participants were cognitively engaged in the task. Our results showed that increased cognitive engagement produced decreased sensitivity to visual events. We propose that individual differences in intrinsic motivation play a large role in determining sensitivity to task unrelated events. In addition, our results also indicate that changes in task difficulty on a trial-to-trial basis do not generate trial-by-trial differences in oculomotor capture. Importantly, we believe our framework provides us with a promising way of extending laboratory findings to many real world situations. |
Anke Cajar; Ralf Engbert; Jochen Laubrock Spatial frequency processing in the central and peripheral visual field during scene viewing Journal Article In: Vision Research, vol. 127, pp. 186–197, 2016. @article{Cajar2016a, Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial-frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control. |
Anke Cajar; Paul Schneeweiss; Ralf Engbert; Jochen Laubrock Coupling of attention and saccades when viewing scenes with central and peripheral degradation Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–19, 2016. @article{Cajar2016, Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations. |
Damien Camors; Yves Trotter; Pierre Pouget; Sophie Gilardeau; Jean-Baptiste Durand Visual straight-ahead preference in saccadic eye movements Journal Article In: Scientific Reports, vol. 6, pp. 23124, 2016. @article{Camors2016, Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades. |
Florence Campana; Ignacio Rebollo; Anne E. Urai; Valentin Wyart; Catherine Tallon-Baudry Conscious vision proceeds from global to local content in goal-directed tasks and spontaneous vision Journal Article In: Journal of Neuroscience, vol. 36, no. 19, pp. 5200–5213, 2016. @article{Campana2016, The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. |
Maria Nella Carminati; Pia Knoeferle Priming younger and older adults' sentence comprehension: Insights from dynamic emotional facial expressions and pupil size measures Journal Article In: The Open Psychology Journal, vol. 9, no. 1, pp. 129–148, 2016. @article{Carminati2016, Background: Prior visual-world research has demonstrated that emotional priming of spoken sentence processing is rapidly modulated by age. Older and younger participants saw two photographs of a positive and of a negative event side-by-side and listened to a spoken sentence about one of these events. Older adults' fixations to the mentioned (positive) event were enhanced when the still photograph of a previously-inspected positive-valence speaker face was (vs. wasn't) emotionally congruent with the event/sentence. By contrast, the younger adults exhibited such an enhancement with negative stimuli only. Objective: The first aim of the current study was to assess the replicability of these findings with dynamic face stimuli (unfolding from neutral to happy or sad). A second goal was to assess a key prediction made by socio-emotional selectivity theory, viz. that the positivity effect (a preference for positive information) displayed by older adults involves cognitive effort. Method: We conducted an eye-tracking visual-world experiment. Results: Most priming and age effects, including the positivity effects, replicated. However, against our expectations, the positive gaze preference in older adults did not co-vary with a standard measure of cognitive effort (increased pupil dilation). Instead, pupil size was significantly bigger when (both younger and older) adults processed negative than positive stimuli. Conclusion: These findings are in line with previous research on the relationship between positive gaze preferences and pupil dilation. We discuss both theoretical and methodological implications of these results. |
Monica S. Castelhano; Richelle L. Witherspoon How you use it matters: Object function guides attention during visual search in scenes Journal Article In: Psychological Science, vol. 27, no. 5, pp. 606–621, 2016. @article{Castelhano2016, How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object's function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects' functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology. |
Raymond Chang; Alexis T. Baria; Matthew W. Flounders; Biyu J. He Unconsciously elicited perceptual prior Journal Article In: Neuroscience of Consciousness, vol. 2016, no. 1, pp. niw008, 2016. @article{Chang2016, Increasing evidence over the past decade suggests that vision is not simply a passive, feed-forward process in which cortical areas relay progressively more abstract information to those higher up in the visual hierarchy, but rather an inferential process with top-down processes actively guiding and shaping perception. However, one major question that persists is whether such processes can be influenced by unconsciously perceived stimuli. Recent psychophysics and neuroimaging studies have revealed that while consciously perceived stimuli elicit stronger responses in higher visual and frontoparietal areas than those that fail to reach conscious awareness, the latter can still drive high-level brain and behavioral responses. We investigated whether unconscious processing of a masked natural image could facilitate subsequent conscious recognition of its degraded counterpart (a black-and-white "Mooney" image) presented many seconds later. We found that this is indeed the case, suggesting that conscious vision may be influenced by priors established by unconscious processing of a fleeting image. |
Cheng Chen; Kaibin Jin; Yehua Li; Hongmei Yan The attentional dependence of emotion cognition is variable with the competing task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 10, pp. 219, 2016. @article{Chen2016, The relationship between emotion and attention has fascinated researchers for decades. Many previous studies have used eye-tracking, ERP, MEG, and fMRI to explore this issue but have reached different conclusions: some researchers hold that emotion cognition is an automatic process and independent of attention, while others believe that emotion cognition is modulated by attentional resources and is a type of controlled processing. The present research aimed to investigate this controversy, and we hypothesized that the attentional dependence of emotion cognition is variable with the competing task. Eye-tracking technology and a dual-task paradigm were adopted, and subjects' attention was manipulated to fixate at the central task to investigate whether subjects could detect the emotional faces presented in the peripheral area with a decrease or near-absence of attention. The results revealed that when the peripheral task was emotional face discrimination but the central attention-demanding task was different, subjects performed well in the peripheral task, which means that emotional information can be processed in parallel with other stimuli, and there may be a specific channel in the human brain for processing emotional information. However, when the central and peripheral tasks were both emotional face discrimination, subjects could not perform well in the peripheral task, indicating that the processing of emotional information required attentional resources and that it is a type of controlled processing. Therefore, we concluded that the attentional dependence of emotion cognition varied with the competing task. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 1, pp. 18–29, 2016. @article{Chen2016c, Several studies indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a byproduct of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable both in terms of onset time and in terms of kinematics, pursuit was not identical to the one produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP), and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Role of motor execution in the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 6, pp. 2586–2593, 2016. @article{Chen2016d, When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving anticipation could come from motor commands during finger motor execution or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation if it moved in the same direction as the hand compared with the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating that there is a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands. |
Ming Chen; Peichao Li; Shude Zhu; Chao Han; Haoran Xu; Yang Fang; Jiaming Hu; Anna W. Roe; Haidong D. Lu An orientation map for motion boundaries in macaque V2 Journal Article In: Cerebral Cortex, vol. 26, no. 1, pp. 279–287, 2016. @article{Chen2016e, The ability to extract the shape of moving objects is fundamental to visual perception. However, where such computations are processed in the visual system is unknown. To address this question, we used intrinsic signal optical imaging in awake monkeys to examine cortical response to perceptual contours defined by motion contrast (motion boundaries, MBs). We found that MB stimuli elicit a robust orientation response in area V2. Orientation maps derived from subtraction of orthogonal MB stimuli aligned well with the orientation maps obtained with luminance gratings (LGs). In contrast, area V1 responded well to LGs, but exhibited a much weaker orientation response to MBs. We further show that V2 direction domains respond to motion contrast, which is required in the detection of MB in V2. These results suggest that V2 represents MB information, an important prerequisite for shape recognition and figure-ground segregation. |
Stephanie Y. Chen; Brian H. Ross; Gregory L. Murphy Eyetracking reveals multiple-category use in induction Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 7, pp. 1050–1067, 2016. @article{Chen2016b, Category information is used to predict properties of new category members. When categorization is uncertain, people often rely on only one, most likely category to make predictions. Yet studies of perception and action often conclude that people combine multiple sources of information near-optimally. We present a perception-action analog of category-based induction using eye movements as a measure of prediction. The categories were objects of different shapes that moved in various directions. Experiment 1 found that people integrated information across categories in predicting object motion. The results of Experiment 2 suggest that the integration of information found in Experiment 1 was not a result of explicit strategies. Experiment 3 tested the role of explicit categorization, finding that making a categorization judgment, even an uncertain one, stopped people from using multiple categories in our eye-movement task. Experiment 4 found that induction was indeed based on category-level predictions rather than associations between object properties and directions. |
Hak Soo Choi; Shinjung Kim; Donghoon Lee; Chang Seok Kim; Myung Yung Jeong Synchronized tracking of brain cognitive processing using EEG and vision signals Journal Article In: Applied Spectroscopy Reviews, vol. 51, no. 7-9, pp. 592–602, 2016. @article{Choi2016, Many efforts have been made to understand the neural mechanisms of the human brain. However, visualization of human brain processing has been a main challenge in the field. It is still largely unknown how the human brain allocates attention to target objects while excluding unrelated information in a complex visual environment. Using simultaneous electroencephalogram and eye tracking measurements, in this study, we analyzed two brain regions separately to detect the brain wave activity during visual information processing. We observed an activation difference between sensory (P100) and cognitive (P300) processing, and the behavioral response was improved by providing valid cue-target location information. Furthermore, neural processing was evaluated according to the specific area of brain activation and eye movements during cognitive processing. Our results demonstrate the correlation between behavior performance and visual stimuli and suggest an advantage of combined paradigms for efficient visual information processing. |
Heeyoung Choo; Dirk B. Walther Contour junctions underlie neural representations of scene categories in high-level human visual cortex Journal Article In: NeuroImage, vol. 135, pp. 32–44, 2016. @article{Choo2016, Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? We here show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA) and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas and contour junctions in high-level scene-selective brain regions. |
Jean-Baptiste Bernard; Susana T. L. Chung The role of external features in face recognition with central vision loss Journal Article In: Optometry and Vision Science, vol. 93, no. 5, pp. 510–520, 2016. @article{Bernard2016a, PURPOSE: We evaluated how the performance of recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. METHODS: In experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (experiment 2) and for hybrid images where the internal and external features came from two different source images (experiment 3) for five observers with central vision loss and four age-matched control observers. RESULTS: When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8 ± 3.3% correct) than for images containing only internal features (35.8 ± 15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4 ± 17.8%) than to the internal features (9.3 ± 4.9%), whereas control observers did not show the same bias toward responding to the external features. CONCLUSIONS: Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images. |
Federica Bianchi; Sébastien Santurette; Dorothea Wendt; Torsten Dau Pitch discrimination in musicians and non-musicians: Effects of harmonic resolvability and processing effort Journal Article In: JARO - Journal of the Association for Research in Otolaryngology, vol. 17, no. 1, pp. 69–79, 2016. @article{Bianchi2016, Musicians typically show enhanced pitch discrimination abilities compared to non-musicians. The present study investigated this perceptual enhancement behaviorally and objectively for resolved and unresolved complex tones to clarify whether the enhanced performance in musicians can be ascribed to increased peripheral frequency selectivity and/or to a different processing effort in performing the task. In a first experiment, pitch discrimination thresholds were obtained for harmonic complex tones with fundamental frequencies (F0s) between 100 and 500 Hz, filtered in either a low- or a high-frequency region, leading to variations in the resolvability of audible harmonics. The results showed that pitch discrimination performance in musicians was enhanced for resolved and unresolved complexes to a similar extent. Additionally, the harmonics became resolved at a similar F0 in musicians and non-musicians, suggesting similar peripheral frequency selectivity in the two groups of listeners. In a follow-up experiment, listeners' pupil dilations were measured as an indicator of the required effort in performing the same pitch discrimination task for conditions of varying resolvability and task difficulty. Pupillometry responses indicated a lower processing effort in the musicians versus the non-musicians, although the processing demand imposed by the pitch discrimination task was individually adjusted according to the behavioral thresholds. Overall, these findings indicate that the enhanced pitch discrimination abilities in musicians are unlikely to be related to higher peripheral frequency selectivity and may suggest an enhanced pitch representation at more central stages of the auditory system in musically trained listeners. |