All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (with some from early 2025) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2011 |
Andreas Hartwig; Emma Gowen; W. Neil Charman; Hema Radhakrishnan Working distance and eye and head movements during near work in myopes and non-myopes Journal Article In: Clinical and Experimental Optometry, vol. 94, no. 6, pp. 536–544, 2011. @article{Hartwig2011a, PURPOSE: Reasons for the development and progression of myopia remain unclear. Some studies show a high prevalence of myopia in certain occupational groups. This might imply that certain head and eye movements lead to ocular elongation, perhaps as a result of forces from the extraocular muscles, lids or other structures. The present study aims to analyse head and eye movements in myopes and non-myopes for near-vision tasks. METHODS: The study analysed head and eye movements in a cohort of 14 myopic and 16 non-myopic young adults. Eye and head movements were monitored by an eye tracker and a motion sensor while the subjects performed three near tasks, which included reading on a screen, reading a book and writing. Horizontal eye and head movements were measured in terms of angular amplitudes. Vertical eye and head movements were analysed in terms of the range of the whole movement during the recording. Values were also assessed as a ratio based on the width of the printed text, which changed between participants due to individual working distances. RESULTS: Horizontal eye and head movements were significantly different among the three tasks (p = 0.03 and p = 0.014, for eye and head movements, respectively, repeated measures ANOVA). Horizontal and vertical eye and head movements did not differ significantly between myopes and non-myopes. As expected, eye movements preponderated over head movements for all tasks and in both meridians. A positive correlation was found between mean spherical equivalent and the working distance for reading a book (r = 0.41; p = 0.025). 
CONCLUSIONS: The results show a similar pattern of eye movements in all participating subjects, although the amplitude of these movements varied considerably between individuals. It is likely that some individuals, when exposed to certain occupational tasks, might show different eye and head movement patterns. |
Ben M. Harvey; Serge O. Dumoulin The relationship between cortical magnification factor and population receptive field size in human visual cortex: Constancies in cortical architecture Journal Article In: Journal of Neuroscience, vol. 31, no. 38, pp. 13604–13612, 2011. @article{Harvey2011, Receptive field (RF) sizes and cortical magnification factor (CMF) are fundamental organization properties of the visual cortex. At increasing visual eccentricity, RF sizes increase and CMF decreases. A relationship between RF size and CMF suggests constancies in cortical architecture, as their product, the cortical representation of an RF (point image), may be constant. Previous animal neurophysiology studies of this question yield conflicting results. Here, we use fMRI to determine the relationship between the population RF (pRF) and CMF in humans. In average and individual data, the product of CMF and pRF size, the population point image, is near constant, decreasing slightly with eccentricity in V1. Interhemisphere and subject variations in CMF, pRF size, and V1 surface area are correlated, and the population point image varies less than these properties. These results suggest a V1 cortical processing architecture of approximately constant size between humans. Up the visual hierarchy, to V2, V3, hV4, and LO1, the population point image decreases with eccentricity, and both the absolute values and rate of change increase. PRF sizes increase between visual areas and with eccentricity, but when expressed in V1 cortical surface area (i.e., corticocortical pRFs), they are constant across eccentricity in V2/V3. Thus, V2/V3, and to some degree hV4, sample from a constant extent of V1. This may explain population point image changes in later areas. Consequently, the constant factor determining pRF size may not be the relationship to the local CMF, but rather pRF sizes and CMFs in visual areas from which the pRF samples. |
Katharina Havermann; Eckart Zimmermann; Markus Lappe Eye position effects in saccadic adaptation Journal Article In: Journal of Neurophysiology, vol. 106, no. 5, pp. 2536–2545, 2011. @article{Havermann2011, Saccades are used by the visual system to explore visual space with the high accuracy of the fovea. The visual error after the saccade is used to adapt the control of subsequent eye movements of the same amplitude and direction in order to keep saccades accurate. Saccadic adaptation is thus specific to saccade amplitude and direction. In the present study we show that saccadic adaptation is also specific to the initial position of the eye in the orbit. This is useful, because saccades are normally accompanied by head movements and the control of combined head and eye movements depends on eye position. Many parts of the saccadic system contain eye position information. Using the intrasaccadic target step paradigm, we adaptively reduced the amplitude of reactive saccades to a suddenly appearing target at a selective position of the eyes in the orbitae and tested the resulting amplitude changes for the same saccade vector at other starting positions. For central adaptation positions the saccade amplitude reduction transferred completely to eccentric starting positions. However, for adaptation at eccentric starting positions, there was a reduced transfer to saccades from central starting positions or from eccentric starting positions in the opposite hemifield. Thus eye position information modifies the transfer of saccadic amplitude changes in the adaptation of reactive saccades. A gain field mechanism may explain the eye position dependence found. |
Benjamin Y. Hayden; Sarah R. Heilbronner; John M. Pearson; Michael L. Platt Surprise signals in anterior cingulate cortex: Neuronal encoding of unsigned reward prediction errors driving adjustment in behavior Journal Article In: Journal of Neuroscience, vol. 31, no. 11, pp. 4178–4187, 2011. @article{Hayden2011, In attentional models of learning, associations between actions and subsequent rewards are stronger when outcomes are surprising, regardless of their valence. Despite the behavioral evidence that surprising outcomes drive learning, neural correlates of unsigned reward prediction errors remain elusive. Here we show that in a probabilistic choice task, trial-to-trial variations in preference track outcome surprisingness. Concordant with this behavioral pattern, responses of neurons in macaque (Macaca mulatta) dorsal anterior cingulate cortex (dACC) to both large and small rewards were enhanced when the outcome was surprising. Moreover, when, on some trials, probabilities were hidden, neuronal responses to rewards were reduced, consistent with the idea that the absence of clear expectations diminishes surprise. These patterns are inconsistent with the idea that dACC neurons track signed errors in reward prediction, as dopamine neurons do. Our results also indicate that dACC neurons do not signal conflict. In the context of other studies of dACC function, these results suggest a link between reward-related modulations in dACC activity and attention and motor control processes involved in behavioral adjustment. More speculatively, these data point to a harmonious integration between reward and learning accounts of ACC function on one hand, and attention and cognitive control accounts on the other. |
Benjamin Y. Hayden; John M. Pearson; Michael L. Platt Neuronal basis of sequential foraging decisions in a patchy environment Journal Article In: Nature Neuroscience, vol. 14, no. 7, pp. 933–939, 2011. @article{Hayden2011a, Deciding when to leave a depleting resource to exploit another is a fundamental problem for all decision makers. The neuronal mechanisms mediating patch-leaving decisions remain unknown. We found that neurons in primate (Macaca mulatta) dorsal anterior cingulate cortex, an area that is linked to reward monitoring and executive control, encode a decision variable signaling the relative value of leaving a depleting resource for a new one. Neurons fired during each sequential decision to stay in a patch and, for each travel time, these responses reached a fixed threshold for patch-leaving. Longer travel times reduced the gain of neural responses for choosing to stay in a patch and increased the firing rate threshold mandating patch-leaving. These modulations more closely matched behavioral decisions than any single task variable. These findings portend an understanding of the neural basis of foraging decisions and endorse the unification of theoretical and experimental work in ecology and neuroscience. |
Taylor R. Hayes; Alexander A. Petrov; Per B. Sederberg A novel method for analyzing sequential eye movements reveals strategic influence on Raven's Advanced Progressive Matrices Journal Article In: Journal of Vision, vol. 11, no. 10, pp. 1–11, 2011. @article{Hayes2011, Eye movements are an important data source in vision science. However, the vast majority of eye movement studies ignore sequential information in the data and utilize only first-order statistics. Here, we present a novel application of a temporal-difference learning algorithm to construct a scanpath successor representation (SR; P. Dayan, 1993) that captures statistical regularities in temporally extended eye movement sequences. We demonstrate the effectiveness of the scanpath SR on eye movement data from participants solving items from Raven's Advanced Progressive Matrices Test. Analysis of the SRs revealed individual differences in scanning patterns captured by two principal components that predicted individual Raven scores much better than existing methods. These scanpath SR components were highly interpretable and provided new insight into the role of strategic processing on the Raven test. The success of the scanpath SR in terms of prediction and interpretability suggests that this method could prove useful in a much broader context. |
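The scanpath successor representation described above is built with a standard temporal-difference update (Dayan, 1993). A minimal sketch, assuming fixations have already been coded as indices into a small set of areas of interest; the function name, AOI coding, and the learning parameters alpha and gamma are illustrative, not taken from the paper:

```python
import numpy as np

def scanpath_sr(scanpath, n_aois, alpha=0.1, gamma=0.9):
    """Build a successor representation M from a fixation sequence using the
    TD update M[s] += alpha * (onehot(s) + gamma * M[s_next] - M[s])."""
    M = np.zeros((n_aois, n_aois))
    for s, s_next in zip(scanpath[:-1], scanpath[1:]):
        onehot = np.zeros(n_aois)
        onehot[s] = 1.0
        # Move row s toward the discounted expected future occupancy of AOIs
        M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M
```

Each row of M then summarizes which AOIs tend to follow a given AOI over a temporally extended horizon, rather than only the next fixation; the abstract describes reducing such representations to principal components to compare individuals.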
Becky Heaver; Samuel B. Hutton Keeping an eye on the truth? Pupil size changes associated with recognition memory Journal Article In: Memory, vol. 19, no. 4, pp. 398–405, 2011. @article{Heaver2011, During recognition memory tests participants' pupils dilate more when they view old items compared to novel items. We sought to replicate this "pupil old/new effect" and to determine its relationship to participants' responses. We compared changes in pupil size during recognition when participants were given standard recognition memory instructions, instructions to feign amnesia, and instructions to report all items as new. Participants' pupils dilated more to old items compared to new items under all three instruction conditions. This finding suggests that the increase in pupil size that occurs when participants encounter previously studied items is not under conscious control. Given that pupil size can be reliably and simply measured, the pupil old/new effect may have potential in clinical settings as a means for determining whether patients are feigning memory loss. |
Masahiro Hirai; Daniel R. Saunders; Nikolaus F. Troje Allocation of attention to biological motion: Local motion dominates global shape Journal Article In: Journal of Vision, vol. 11, no. 3, pp. 1–11, 2011. @article{Hirai2011, Directional information can be retrieved from a point-light walker (PLW) in two different ways: either from recovering the global shape of the articulated body or from signals in the local motion of individual dots. Here, we introduce a voluntary eye movement task to assess how the direction of a centrally presented, task-irrelevant PLW affects the onset latency and accuracy of saccades to peripheral targets. We then use this paradigm to design experiments to study which aspects of biological motion-the global form mediated by the motion of the walker or the local movements of critical features-drive the observed attentional effects. Putting the two cues into conflict, we show that saccade latency and accuracy were affected by the local motion of the dots representing the walker's feet-but only if they retain their familiar, predictable location within the display. |
Margit Höfler; Iain D. Gilchrist; Christof Körner Inhibition of return functions within but not across searches Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 5, pp. 1385–1397, 2011. @article{Hoefler2011, Inhibition of return (IOR) facilitates visual search by discouraging the reinspection of recently processed items. We investigated whether IOR operates across two consecutive searches of the same display for different targets. In Experiment 1, we demonstrated that IOR is present within each of the two searches. In Experiment 2, we found no evidence for IOR across searches. In Experiment 3, we showed that IOR is present across the two searches when the first search is interrupted, suggesting that the completion of the search is what causes the resetting of IOR. We concluded that IOR is a partially flexible process that can be reset when the task completes, but not necessarily when it changes. When resetting occurs, this flexibility ensures that the inhibition of previously visited locations does not interfere with the new search. |
L. Elliot Hong; Gunvant K. Thaker; Robert P. McMahon; Ann Summerfelt; Jill RachBeisel; Rebecca L. Fuller; Ikwunga Wonodi; Robert W. Buchanan; Carol Myers; Stephen J. Heishman; Jeff Yang; Adrienne Nye Effects of moderate-dose treatment with varenicline on neurobiological and cognitive biomarkers in smokers and nonsmokers with schizophrenia or schizoaffective disorder Journal Article In: Archives of General Psychiatry, vol. 68, no. 12, pp. 1195–1206, 2011. @article{Hong2011, CONTEXT: The administration of nicotine transiently improves many neurobiological and cognitive functions in schizophrenia and schizoaffective disorder. It is not yet clear which nicotinic acetylcholine receptor (nAChR) subtype or subtypes are responsible for these seemingly pervasive nicotinic effects in schizophrenia and schizoaffective disorder. OBJECTIVE: Because α4β2 is a key nAChR subtype for nicotinic actions, we investigated the effect of varenicline tartrate, a relatively specific α4β2 partial agonist and antagonist, on key biomarkers that are associated with schizophrenia and are previously shown to be responsive to nicotinic challenge in humans. DESIGN: A double-blind, parallel, randomized, placebo-controlled trial of patients with schizophrenia or schizoaffective disorder to examine the effects of varenicline on biomarkers at 2 weeks (short-term treatment) and 8 weeks (long-term treatment), using a slow titration and moderate dosing strategy for retaining α4β2-specific effects while minimizing adverse effects. SETTING: Outpatient clinics. PARTICIPANTS: A total of 69 smoking and nonsmoking patients; 64 patients completed week 2, and 59 patients completed week 8. INTERVENTION: Varenicline. MAIN OUTCOME MEASURES: Prepulse inhibition, sensory gating, antisaccade, spatial working memory, eye tracking, processing speed, and sustained attention. 
RESULTS: A moderate dose of varenicline (1) significantly reduced the P50 sensory gating deficit in nonsmokers after long-term treatment (P = .006), (2) reduced startle reactivity (P = .02) regardless of baseline smoking status, and (3) improved executive function by reducing the antisaccadic error rate (P = .03) regardless of smoking status. A moderate dose of varenicline had no significant effect on spatial working memory, predictive and maintenance pursuit measures, processing speed, or sustained attention by Conners' Continuous Performance Test. Clinically, there was no evidence of exacerbation of psychiatric symptoms, psychosis, depression, or suicidality using a gradual titration (1-mg daily dose). CONCLUSIONS: Moderate-dose treatment with varenicline has a unique treatment profile on core schizophrenia-related biomarkers. Further development is warranted for specific nAChR compounds and dosing and duration strategies to target subgroups of schizophrenic patients with specific biological deficits. |
Ningdong Li; Xiajuan Wang; Yuchuan Wang; Liming Wang; Ming Ying; Ruifang Han; Yuyan Liu; Kanxing Zhao Investigation of the gene mutations in two Chinese families with X-linked infantile nystagmus Journal Article In: Molecular Vision, vol. 17, pp. 461–468, 2011. @article{Li2011, Purpose: To identify the gene mutations causing X-linked infantile nystagmus in two Chinese families (NYS003 and NYS008), of which the NYS003 family was assigned to the FERM domain–containing 7 (FRMD7) gene linked region in our previous study, with no mutations found by direct sequencing. Methods: Two microsatellites, DXS1047 and DXS1001, were amplified by PCR for the linkage study in the NYS008 family. FRMD7 was sequenced and mutations were analyzed. Multiplex ligation-dependent probe amplification (MLPA) was used to detect FRMD7 mutations in the NYS003 family. Results: The NYS008 family yielded a maximum logarithm of odds (LOD) score of 1.91 at θ=0 with DXS1001. FRMD7 sequencing showed a nucleotide change of c.623A>G in exon 7 of the patients' FRMD7 gene, which was predicted to result in an H208R amino acid change. This novel mutation was absent in 100 normal Han Chinese controls. No FRMD7 gene mutations were detected by MLPA in the NYS003 family. Conclusions: We identified a novel mutation, c.623A>G (p.H208R), in a Han Chinese family with infantile nystagmus. This mutation expands the mutation spectrum of FRMD7 and contributes to research on the molecular pathogenesis of FRMD7. |
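The LOD score reported above is the standard two-point linkage statistic: the log10 ratio of the likelihood of the observed meioses at recombination fraction θ to the likelihood under no linkage (θ = 0.5). A minimal sketch for phase-known, fully informative meioses; the recombinant/non-recombinant counts used in the test are illustrative, not from the study, and real pedigree likelihoods are more involved:

```python
import math

def lod_score(n_nonrecomb, n_recomb, theta):
    """Two-point LOD score: log10 of L(theta) / L(theta = 0.5)."""
    if theta == 0.0 and n_recomb > 0:
        return float("-inf")  # a single recombinant rules out theta = 0
    n = n_nonrecomb + n_recomb
    likelihood_linked = (1 - theta) ** n_nonrecomb * theta ** n_recomb
    return math.log10(likelihood_linked / 0.5 ** n)
```

With no recombinants, each informative meiosis adds log10(2) ≈ 0.30 to the LOD at θ = 0, so a maximum LOD of 1.91 at θ = 0, as in the NYS008 family, corresponds to on the order of six informative meioses.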
Xingshan Li; Pingping Liu; Keith Rayner Eye movement guidance in Chinese reading: Is there a preferred viewing location? Journal Article In: Vision Research, vol. 51, pp. 1146–1156, 2011. @article{Li2011a, In this study, we examined eye movement guidance in Chinese reading. We embedded either a 2-character word or a 4-character word in the same sentence frame and observed the eye movements of Chinese readers as they read these sentences. We found that, when all saccades into the target words were considered, readers' eyes tended to land near the beginning of the word. However, we also found that Chinese readers' eyes landed at the center of words when they made only a single fixation on a word, and at the beginning of a word when they made more than one fixation on it. Nevertheless, simulations that we carried out suggest that these findings cannot be taken to unambiguously argue for word-based saccade targeting in Chinese reading. We discuss alternative accounts of eye guidance in Chinese reading and suggest that eye movement target planning for Chinese readers might involve a combination of character-based and word-based targeting contingent on word segmentation processes. |
I. Fan Lin; Andrei Gorea Location and identity memory of saccade targets Journal Article In: Vision Research, vol. 51, no. 3, pp. 323–332, 2011. @article{Lin2011, While the memory of objects' identity and of their spatiotopic location may sustain transsaccadic spatial constancy, the memory of their retinotopic location may hamper it. Is it then true that saccades perturb retinotopic but not spatiotopic memory? We address this issue by assessing localization performances of the last and of the penultimate saccade target in a series of 2-6 saccades. Upon fixation, nine letter-pairs, eight black and one white, were displayed at 3° eccentricity around fixation within a 20°×20° grey frame, and subjects were instructed to saccade to the white letter-pair; the cycle was then repeated. Identical conditions were run with the eyes maintaining fixation throughout the trial but with the grey frame moving so as to mimic its retinal displacement when the eyes moved. At the end of a trial, subjects reported the identity and/or the location of the target in either retinotopic (relative to the current fixation dot) or frame-based (relative to the grey frame) coordinates; in the context of this study, "frame-based" and "spatiotopic" are equivalent terms and are used interchangeably. Saccades degraded the target's retinotopic location memory but not its frame-based location or its identity memory. Results are compatible with the notion that spatiotopic representation takes over from retinotopic representation during eye movements, thereby contributing to the stability of the visual world as its projection jumps on our retina from saccade to saccade. |
Mauro Marchitto; Leandro Luigi Di Stasi; José J. Cañas Ocular movements under taskload manipulations: Influence of geometry on saccades in air traffic control simulated tasks Journal Article In: Human Factors and Ergonomics in Manufacturing & Service Industries, vol. 19, no. 6, pp. 1–13, 2011. @article{Marchitto2011, Traffic geometry is a factor that contributes to cognitive complexity in air traffic control. In conflict-detection tasks, geometry can affect the attentional effort needed to correctly perceive and interpret the situation; online measures of situational workload are therefore highly desirable. In this study, we explored whether saccadic movements vary with changes in geometry. We created simple scenarios with two aircraft and simulated a conflict-detection task. Independent variables were the conflict angle and the distance to the convergence point. We hypothesized lower saccadic peak velocity (and longer duration) with increasing complexity, that is, for larger conflict angles and for different distances to the convergence point. Response times varied with task complexity as expected. Concerning saccades, peak velocity decreased (with a related increase in duration) as geometry complexity increased, but only for large saccades (>15°). The data therefore suggest that geometry influences "reaching" saccades rather than "fixation" saccades. |
Andrea E. Martin; Brian McElree Direct-access retrieval during sentence comprehension: Evidence from Sluicing Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 327–343, 2011. @article{Martin2011, Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb-phrase ellipsis. We present speed-accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables. |
Jessica Maryott; Abigail L. Noyce; Robert Sekuler Eye movements and imitation learning: Intentional disruption of expectation Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–16, 2011. @article{Maryott2011, Over repeated viewings of motion along a quasi-random path, the ability to reproduce that path from memory improves. To assess the role of expectations and sequence context in such learning, subjects' eye movements were measured while trajectories were viewed for subsequent reproduction. As a sequence of motions was repeated, subjects' eye movements became anticipatory, leading the stimulus's motions. To investigate how prediction errors affected eye movements and imitation learning, we injected an occasional deviant motion into a well-learned stimulus sequence, violating subjects' expectation about the motion that would be seen. This unexpected direction of motion in the stimulus sequence did not impair reproduction of the sequence. The externally induced prediction errors promoted one-shot learning: During the very next stimulus presentation, their eye movements showed that subjects now expected the new sequence item to reappear. A second experiment showed that an associative mismatch can facilitate accurate reproduction of an unexpected stimulus. After a deviant sequence item was presented, imitation accuracy for sequences that contained the deviant direction of motion was reduced relative to sequences that restored the original direction of motion. These findings demonstrate that in the context of a familiar sequence, unexpected events can play an important role in learning the sequence. |
Chia-Lun Liu; Philip Tseng; Hui-Yan Chiau; Wei-Kuang Liang; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan The location probability effects of saccade reaction times are modulated in the frontal eye fields but not in the supplementary eye field Journal Article In: Cerebral Cortex, vol. 21, no. 6, pp. 1416–1425, 2011. @article{Liu2011c, The visual system constantly utilizes regularities that are embedded in the environment and by doing so reduces the computational burden of processing visual information. Recent findings have demonstrated that probabilistic information can override attentional effects, such as the cost of making an eye movement away from a visual target (antisaccade cost). The neural substrates of such probability effects have been associated with activity in the superior colliculus (SC). Given the immense reciprocal connections to SC, it is plausible that this modulation originates from higher oculomotor regions, such as the frontal eye field (FEF) and the supplementary eye field (SEF). To test this possibility, the present study employed theta burst transcranial magnetic stimulation (TMS) to selectively interfere with FEF and SEF activity. We found that TMS disrupted the effect of location probability when TMS was applied over FEF. This was not observed in the SEF TMS condition. Together, these 2 experiments suggest that the FEF plays a critical role not only in initiating saccades but also in modulating the effects of location probability on saccade production. |
Donglai Liu; Yonghui Wang; Xiaolin Zhou Lexical- and perceptual-based object effects in the two-rectangle cueing paradigm Journal Article In: Acta Psychologica, vol. 138, no. 3, pp. 397–404, 2011. @article{Liu2011d, Previous studies demonstrate that attentional selection can be object-based, in which the object is defined in terms of Gestalt principles or lexical organizations. Here we investigate how attentional selection functions when the two types of objects are manipulated jointly. Experiment 1 replicated Li and Logan (2008) by showing that attentional shift between two Chinese characters is more efficient when they form a compound word than when they form a nonword. Experiment 2A presented characters either alone or within rectangles (Egly, Driver, & Rafal, 1994) and the characters in a rectangle formed either a word or a nonword. Experiment 2B differed from Experiment 2A in that the two characters forming a word were in different rectangles. Experiment 3A presented the two characters of a word either within a rectangle or in different rectangles. Experiment 3B used the same design as Experiment 3A but presented stimuli of different types in random orders, rather than in blocks as in Experiments 2A, 2B and 3A. In blocked presentation, detection responses to the target color on a character were faster when this character and the cue character formed a word than when they did not, and the size of this lexical-based object effect did not vary according to whether the two characters were presented alone or within or between rectangles. In random presentation, however, the lexical-based object effect was diminished when the two characters of a word were presented in different rectangles. Overall, these findings suggest that the processes that constrain attention deployment over conjoined objects can be strategically adjusted. |
Haoxue Liu; Guangming Ding; Weihua Zhao; Hui Wang; Kaizheng Liu; Ludan Shi Variation of drivers' visual features in long-tunnel entrance section on expressway Journal Article In: Journal of Transportation Safety and Security, vol. 3, no. 1, pp. 27–37, 2011. @article{Liu2011e, To help avoid traffic accidents in long-tunnel entrance sections, the authors studied the variation of drivers' visual features in real-road experiments on an expressway. Drivers' visual feature parameters were recorded in real time with an EyeLink eye-tracking system during the driving test. Mathematical models of drivers' fixation duration, number of fixations, and saccade amplitude at the tunnel entrance were established using a back-propagation (BP) neural network simulation. Results showed that fixation duration increased gradually as the vehicle moved closer to the tunnel entrance, whereas the number of fixations and saccade amplitude decreased. Meanwhile, drivers' fixations shifted from straight ahead to the right side, increasing the number of fixations on the right. After drivers entered the tunnel, fixation duration first decreased and then increased, while the number of fixations and saccade amplitude kept increasing. |
Taosheng Liu; Luke Hospadaruk; David C. Zhu; Justin L. Gardner Feature-specific attentional priority signals in human cortex Journal Article In: Journal of Neuroscience, vol. 31, no. 12, pp. 4484–4495, 2011. @article{Liu2011f, Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. Although the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors) and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained functional magnetic resonance imaging response for the attention task compared with a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multivoxel pattern analysis, we were able to decode the attended feature in both early visual areas (primary visual cortex to human motion complex hMT+) and frontal and parietal areas (e.g., intraparietal sulcus areas IPS1-IPS4 and frontal eye fields) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting that different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our finding suggests that, rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different nonspatial features. |
Taosheng Liu; Youyang Hou Global feature-based attention to orientation Journal Article In: Journal of Vision, vol. 11, no. 10, pp. 1–8, 2011. @article{Liu2011a, Selective attention to motion direction can modulate the strength of direction-selective sensory responses regardless of their spatial locations. Although such spatially global modulation is thought to be a general property of feature-based attention, few studies have examined visual features other than motion. Here, we used an adaptation protocol combined with attentional instructions to assess whether attention to orientation, a prominent feature in early visual processing, also exhibits such spatially global modulation. We adapted observers to an orientation by cuing them to attend to that orientation in a compound grating presented at a peripheral location. We then assessed the size of the tilt aftereffect at three locations that were never stimulated by the adapter. Attending to orientation produced a tilt aftereffect at these locations, indicating that attention modulated orientation-selective mechanisms in locations remote from the adapter. Furthermore, there was no difference in the magnitude of the tilt aftereffect for test stimuli located at different distances and hemifields relative to the adapter. These results suggest that attention to orientation spreads uniformly across the visual field. Thus, spatially global modulation seems to be a general property of feature-based attention, and it provides a flexible mechanism to modulate feature salience across the visual field. |
Taosheng Liu; Irida Mance Constant spread of feature-based attention across the visual field Journal Article In: Vision Research, vol. 51, no. 1, pp. 26–33, 2011. @article{Liu2011b, Attending to a feature in one location can produce feature-specific modulation in a different location. This global feature-based attention effect has been demonstrated using two stimulus locations. Although the spread of feature-based attention is presumed to be constant across spatial locations, it has not been tested empirically. We examined the spread of feature-based attention by measuring attentional modulation of the motion aftereffect (MAE) at remote locations. Observers attended to one of two directions in a compound motion stimulus (adapter) and performed a speed-increment task. MAE was measured via a speed nulling procedure for a test stimulus at different distances from the adapter. In Experiment 1, the adapter was at fixation, while the test stimulus was located at different eccentricities. We also measured the magnitude of baseline MAE for each location in two control conditions that did not require feature-based selection necessitated by a compound stimulus. In Experiment 2, the adapter and test stimuli were all located in the periphery at the same eccentricity. Our results showed that attention induced MAE spread completely across the visual field, indicating a genuine global effect. These results add to our understanding of the deployment of feature-based attention and provide empirical constraints on theories of visual attention. |
Taosheng Liu; Timothy J. Pleskac Neural correlates of evidence accumulation in a perceptual decision task Journal Article In: Journal of Neurophysiology, vol. 106, no. 5, pp. 2383–2398, 2011. @article{Liu2011, Sequential sampling models provide a useful framework for understanding human decision making. A key component of these models is an evidence accumulation process in which information is accrued over time to a threshold, at which point a choice is made. Previous neurophysiological studies on perceptual decision making have suggested accumulation occurs only in sensorimotor areas involved in making the action for the choice. Here we investigated the neural correlates of evidence accumulation in the human brain using functional magnetic resonance imaging (fMRI) while manipulating the quality of sensory evidence, the response modality, and the foreknowledge of the response modality. We trained subjects to perform a random dot motion direction discrimination task by either moving their eyes or pressing buttons to make their responses. In addition, they were cued about the response modality either in advance of the stimulus or after a delay. We isolated fMRI responses for perceptual decisions in both independently defined sensorimotor areas and task-defined nonsensorimotor areas. We found neural signatures of evidence accumulation, a higher fMRI response on low coherence trials than high coherence trials, primarily in saccade-related sensorimotor areas (frontal eye field and intraparietal sulcus) and nonsensorimotor areas in anterior insula and inferior frontal sulcus. Critically, such neural signatures did not depend on response modality or foreknowledge. These results help establish human brain areas involved in evidence accumulation and suggest that the neural mechanism for evidence accumulation is not specific to effectors. Instead, the neural system might accumulate evidence for particular stimulus features relevant to a perceptual task. |
Eric Lambert; Denis Alamargot; Denis Larocque; Gilles Caporossi Dynamics of the spelling process during a copy task: Effects of regularity and frequency Journal Article In: Canadian Journal of Experimental Psychology, vol. 65, no. 3, pp. 141–150, 2011. @article{Lambert2011, This study investigated the time course of spelling, and its influence on graphomotor execution, in a successive word copy task. According to the cascade model, these two processes may be engaged either sequentially or in parallel, depending on the cognitive demands of spelling. In this experiment, adults were asked to copy a series of words varying in frequency and spelling regularity. A combined analysis of eye and pen movements revealed periods where spelling occurred in parallel with graphomotor execution, but concerned different processing units. The extent of this parallel processing depended on the words' orthographic characteristics. Results also highlighted the specificity of word recognition for copying purposes compared with recognition for reading tasks. The results confirm the validity of the cascade model and clarify the nature of the dependence between spelling and graphomotor processes. |
Martijn J. M. Lamers; Ardi Roelofs Attention and gaze shifting in dual-task and go/no-go performance with vocal responding Journal Article In: Acta Psychologica, vol. 137, no. 3, pp. 261–268, 2011. @article{Lamers2011, Evidence from go/no-go performance on the Eriksen flanker task with manual responding suggests that individuals gaze at stimuli just as long as needed to identify them (e.g., Sanders, 1998). In contrast, evidence from dual-task performance with vocal responding suggests that gaze shifts occur after response selection (e.g., Roelofs, 2008a). This difference in results may be due to the nature of the task situation (go/no-go vs. dual task) or the response modality (manual vs. vocal). We examined this by having participants vocally respond to congruent and incongruent flanker stimuli and shift gaze to left- or right-pointing arrows. The arrows required a manual response (dual task) or determined whether the vocal response to the flanker stimuli had to be given or not (go/no-go). Vocal response and gaze shift latencies were longer on incongruent than congruent trials in both dual-task and go/no-go performance. The flanker effect was also present in the manual response latencies in dual-task performance. Ex-Gaussian analyses revealed that the flanker effect on the gaze shifts consisted of a shift of the entire latency distribution. These results suggest that gaze shifts occur after response selection in both dual-task and go/no-go performance with vocal responding. |
Wolf Gero Lange; Kathrin Heuer; Oliver Langner; Ger P. J. Keijsers; Eni S. Becker; Mike Rinck Face value: Eye movements and the evaluation of facial crowds in social anxiety Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 42, no. 3, pp. 355–363, 2011. @article{Lange2011, Scientific evidence is equivocal on whether Social Anxiety Disorder (SAD) is characterized by a biased negative evaluation of (grouped) facial expressions, even though it is assumed that such a bias plays a crucial role in the maintenance of the disorder. To shed light on the underlying mechanisms of face evaluation in social anxiety, the eye movements of 22 highly socially anxious (SAs) and 21 non-anxious controls (NACs) were recorded while they rated the degree of friendliness of neutral-angry and smiling-angry face combinations. While the Crowd Rating Task data showed no significant differences between SAs and NACs, the resultant eye-movement patterns revealed that SAs, compared to NACs, looked away faster when the face first fixated was angry. Additionally, in SAs the proportion of fixated angry faces was significantly higher than for other expressions. Independent of social anxiety, these fixated angry faces were the best predictor of subsequent affect ratings for either group. Angry faces influence attentional processes such as eye movements in SAs and by doing so reflect biased evaluations. As these processes do not correlate with explicit ratings of faces, however, it remains unclear at what point implicit attentional behaviors lead to anxiety-prone behaviors and the maintenance of SAD. The relevance of these findings is discussed in the light of the current theories. |
Georgia Laretzaki; Sotiris Plainis; Ioannis Vrettos; Anna Chrisoulakis; Ioannis G. Pallikaris; Panos Bitsios Threat and trait anxiety affect stability of gaze fixation Journal Article In: Biological Psychology, vol. 86, no. 3, pp. 330–336, 2011. @article{Laretzaki2011, Threat accelerates early visual information processing, as shown by shorter P100 latencies of pattern Visual Evoked Potentials in subjects with low trait anxiety, but the opposite is true for high anxious subjects. We sought to determine if, and how, threat and trait anxiety interact to affect stability of gaze fixation. We used video oculography to record gaze position in the presence and in the absence of a fixational stimulus, in a safe and a verbal threat condition in subjects characterised for their trait anxiety. Trait anxiety significantly predicted fixational instability in the threat condition. An extreme tertile analysis revealed that fixation was less stable in the high anxiety group, especially under threat or in the absence of a stimulus. The effects of anxiety extend to perceptual and sensorimotor processes. These results have implications for the understanding of individual differences in oculomotor planning and visually guided behavior. |
Louisa Lavergne; Dorine Vergilino-Perez; Christelle Lemoine; Thérèse Collins; Karine Doré-Mazars Exploring and targeting saccades dissociated by saccadic adaptation Journal Article In: Brain Research, vol. 1415, pp. 47–55, 2011. @article{Lavergne2011, Saccadic adaptation maintains saccade accuracy and has been studied with targeting saccades, i.e. saccades that bring the gaze to a target, with the classical intra-saccadic step procedure in which the target systematically jumps to a new position during saccade execution. Post-saccadic visual feedback about the error between target position and the saccade landing position is crucial to establish and maintain adaptation. However, recent research focusing on two-saccade sequences has shown that exploring saccades, i.e. saccades that explore an object, resists this classical intra-saccadic step procedure but can be adapted by systematically changing the main parameter used for their coding: stimulus size. Here, we adapted an exploring saccade and a targeting saccade in two separate experiments, using the appropriate adaptation procedure, and we tested whether the adaptation induced on one saccade type transferred to the other. We showed that whereas classical targeting saccade adaptation does not transfer to exploring saccades, the reciprocal transfer (i.e., from exploring to targeting saccades) occurred when targeting saccades aimed for a spatially extended stimulus, but not when they aimed for an isolated target. These results show that, in addition to position errors, size errors can drive adaptation, and confirm that exploring vs. targeting a stimulus leads to two different motor planning modes. |
Helmut Leder; Michael Forster; Gernot Gerger The glasses stereotype revisited: Effects of eyeglasses on perception, recognition, and impression of faces Journal Article In: Swiss Journal of Psychology, vol. 70, no. 4, pp. 211–222, 2011. @article{Leder2011, In face perception, besides physiognomic changes, accessories like eyeglasses can influence facial appearance. According to a stereotype, people who wear glasses are more intelligent, but less attractive. In a series of four experiments, we showed how full-rim and rimless glasses, differing with respect to the amount of face they cover, affect face perception, recognition, distinctiveness, and the attribution of stereotypes. Eyeglasses generally directed observers' gaze to the eye regions; rimless glasses made faces appear less distinctive and resulted in reduced distinctiveness in matching and in recognition tasks. Moreover, the stereotype was confirmed but depended on the kind of glasses—rimless glasses yielded an increase in perceived trustworthiness, but not a decrease in attractiveness. Thus, glasses affect how we perceive the faces of the people wearing them and, in accordance with an old stereotype, they can lower how attractive, but increase how intelligent and trustworthy people wearing them appear. These effects depend on the kind of glasses worn. |
Eun Ju Lee; Gusang Kwon; Aekyoung Lee; Jamshid Ghajar; Minah Suh Individual differences in working memory capacity determine the effects of oculomotor task load on concurrent word recall performance Journal Article In: Brain Research, vol. 1399, pp. 59–65, 2011. @article{Lee2011, In this study, the interaction between individual differences in working memory capacity, which were assessed by the Korean version of the California Verbal Learning Test (K-CVLT), and the effects of oculomotor task load on word recall performance are examined in a dual-task experiment. We hypothesized that varying levels of oculomotor task load should result in different demands on cognitive resources. The verbal working memory task used in this study involved a brief exposure to seven words to be remembered, followed by a 30-second delay during which the subject carried out an oculomotor task. Then, memory performance was assessed by having the subjects recall as many words as possible. Forty healthy normal subjects with no vision-related problems carried out four separate dual-tasks over four consecutive days of participation, wherein word recall performances were tested under unpredictable random SPEM (smooth pursuit eye movement), predictive SPEM, fixation, and eyes-closed conditions. The word recall performance of subjects with low K-CVLT scores was significantly enhanced under predictive SPEM conditions as opposed to the fixation and eyes-closed conditions, but performance was reduced under the random SPEM condition, thus reflecting an inverted-U relationship between the oculomotor task load and word recall performance. Subjects with high K-CVLT scores evidenced steady word recall performances, regardless of the type of oculomotor task performed. The concurrent oculomotor performance measured by velocity error did not differ significantly among the K-CVLT groups. However, the high-scoring subjects evidenced smaller phase errors under predictive SPEM conditions than did the low-scoring subjects; this suggests that different resource allocation strategies may be adopted, depending on individuals' working memory capacity. |
Jiyeon Lee; Cynthia K. Thompson Real-time production of unergative and unaccusative sentences in normal and agrammatic speakers: An eyetracking study Journal Article In: Aphasiology, vol. 25, no. 6-7, pp. 813–825, 2011. @article{Lee2011a, Background: Speakers with agrammatic aphasia have greater difficulty producing unaccusative (float) compared to unergative (bark) verbs (Kegl, 1995; Lee & Thompson, 2004; Thompson, 2003), putatively because the former involve movement of the theme to the subject position from the post-verbal position, and are therefore more complex than the latter (Burzio, 1986; Perlmutter, 1978). However, it is unclear if and how sentence production processes are affected by the linguistic distinction between these two types of verbs in normal and impaired speakers. Aims: This study examined real-time production of sentences with unergative (the black dog is barking) vs unaccusative (the black tube is floating) verbs in healthy young speakers and individuals with agrammatic aphasia, using eyetracking. Methods & Procedures: Participants' eye movements and speech were recorded while they produced a sentence using computer displayed written stimuli (e.g., black, dog, is barking). Outcomes & Results: Both groups of speakers produced numerically fewer unaccusative sentences than unergative sentences. However, the eye movement data revealed significant differences in fixations between the adjective (black) vs the noun (tube) when producing unaccusatives, but not when producing unergatives for both groups. Interestingly, whereas healthy speakers showed this difference during speech, speakers with agrammatism showed this difference prior to speech onset. Conclusions: These findings suggest that the human sentence production system differentially processes unaccusatives vs unergatives. This distinction is preserved in individuals with agrammatism; however, the time course of sentence planning appears to differ from healthy speakers (Lee & Thompson, 2010). |
Antonio F. Macedo; Michael D. Crossland; Gary S. Rubin Investigating unstable fixation in patients with macular disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 3, pp. 1275–1280, 2011. @article{Macedo2011, PURPOSE. To assess the effect on visual acuity of compensating fixation instability by controlling retinal image motion in people with macular disease. METHODS. Ten patients with macular disease participated in this study. Crowded and noncrowded visual acuity were measured using an eye tracking system to compensate for fixation instability. Four conditions, corresponding to four levels of retinal image motion, were tested: no compensation (normal motion), partial compensation (reduced motion), total compensation (no motion), and overcompensation (increased motion). Fixation stability and the number of preferred retinal loci were also measured. RESULTS. Modulating retinal image motion had the same effect on crowded and noncrowded visual acuity (P = 0.601). When fixation instability was overcompensated, acuity worsened by 0.1 logMAR units (P < 0.001) compared with baseline (no compensation) and remained equal to baseline for all other conditions. CONCLUSIONS. In people with macular disease, retinal image motion caused by fixation instability does not reduce either crowded or noncrowded visual acuity. Acuity declines when fixation instability is overcompensated, showing limited tolerance to increased retinal image motion. The results provide evidence that fixation instability does not improve visual acuity and may be a consequence of poor oculomotor control. |
Alexander Maier; Christopher J. Aura; David A. Leopold Infragranular sources of sustained local field potential responses in macaque primary visual cortex Journal Article In: Journal of Neuroscience, vol. 31, no. 6, pp. 1971–1980, 2011. @article{Maier2011, A local field potential (LFP) response can be measured throughout the visual cortex in response to the abrupt appearance of a visual stimulus. Averaging LFP responses to many stimulus presentations isolates transient, phase-locked components of the response that are consistent from trial to trial. However, stimulus responses are also composed of sustained components, which differ in their phase from trial to trial and therefore must be evaluated using other methods, such as computing the power of the response of each trial before averaging. Here, we investigate the basis of phase-locked and non-phase-locked LFP responses in the primary visual cortex of the macaque monkey using a novel variant of current source density (CSD) analysis. We applied a linear array of electrode contacts spanning the thickness of the cortex to measure the LFP and compute band-limited CSD power to identify the laminar sites of persistent current exchange that may be the basis of sustained visual LFP responses. In agreement with previous studies, we found a short-latency phase-locked current sink, thought to correspond to thalamocortical input to layer 4C. In addition, we found a prominent non-phase-locked component of the CSD that persisted as long as the stimulus was physically present. The latter was relatively broadband, lasted throughout the stimulus presentation, and was centered ∼500 μm deeper than the initial current sink. These findings demonstrate a fundamental difference in the neural mechanisms underlying the initial and sustained processing of simple visual stimuli in the V1 microcircuit. |
Femke Maij; Eli Brenner; Jeroen B. J. Smeets Peri-saccadic mislocalization is not influenced by the predictability of the saccade target location Journal Article In: Vision Research, vol. 51, no. 1, pp. 154–159, 2011. @article{Maij2011, Flashes presented around the time of a saccade are often mislocalized. The precise pattern of mislocalization is influenced by many factors. Here we study one such factor: the predictability of the saccade target's location. The experiment examines two conditions. In the first, the subject makes the same horizontal rightward saccade to the same target location over and over again. In the second, the subject makes saccades to a target that jumps in unpredictable radial directions. A dot is flashed in the vicinity of the saccade target near the time of saccade onset. Subjects are asked to localize the flash by touching its location on the screen. Although various saccade parameters differed, the errors that subjects made were very similar in both conditions. We conclude that the pattern of mislocalization does not depend on the predictability of the location of the saccade target. |
Femke Maij; Eli Brenner; Jeroen B. J. Smeets Temporal uncertainty separates flashes from their background during saccades Journal Article In: Journal of Neuroscience, vol. 31, no. 10, pp. 3708–3711, 2011. @article{Maij2011a, It is known that spatial localization of flashed objects fails around the time of rapid eye movements (saccades). This mislocalization is often interpreted in terms of a combination of shifts and deformations of the brain's representation of space to account for the eye movement. Such temporary remapping of positions in space should affect all elements in a scene, leaving ordinal relationships between positions intact. We performed an experiment in which we presented flashes on a background with red and green regions to human subjects. We found that flashes that were presented on the green part of the background around the time of a saccade were readily reported to have been presented on the red part of the background and vice versa. This is inconsistent with the notion of a temporary shift and deformation of perceived space. To explain our results, we present a model that illustrates how temporal uncertainty could give rise to the observed spatial mislocalization. The model combines uncertainty about the time of the flash with a bias to localize targets where one is looking. It reproduced the pattern of mislocalization very accurately, showing that perisaccadic mislocalization can best be explained in terms of temporal uncertainty about the moment of the flash. |
Alexis D. J. Makin; Rochelle Ackerley; Kelly S. Wild; Ellen Poliakoff; Emma Gowen; Wael El-Deredy Coherent illusory contours reduce microsaccade frequency Journal Article In: Neuropsychologia, vol. 49, no. 9, pp. 2798–2801, 2011. @article{Makin2011, Synchronized high-frequency gamma band oscillations (30-100 Hz) are thought to mediate the binding of single visual features into whole-object representations. For example, induced gamma band oscillations (iGBRs) have been recorded ∼280 ms after the onset of a coherent Kanizsa triangle, but not after an incoherent equivalent shape. However, several recent studies have provided evidence that the EEG-recorded iGBR may be a by-product of small saccadic eye movements (microsaccades). Considering these two previous findings, one would hypothesize that there should be more microsaccades following the onset of a coherent Kanizsa triangle. However, we found that microsaccade rebound rate was significantly higher after an incoherent triangle was presented. This result suggests that microsaccades are not a reliable indicator of perceptual binding, and, more importantly, implies that the iGBR cannot be universally produced by ocular artefacts. |
Carly J. Leonard; Steven J. Luck The role of magnocellular signals in oculomotor attentional capture Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–12, 2011. @article{Leonard2011, While it is known that salient distractors often capture covert and overt attention, it is unclear whether salience signals that stem from magnocellular visual input have a more dominant role in oculomotor capture than those that result from parvocellular input. Because of the direct anatomical connections between the magnocellular pathway and the superior colliculus, salience signals generated from the magnocellular pathway may produce greater oculomotor capture than those from the parvocellular pathway, which could be potentially harder to overcome with "top-down," goal-directed guidance. Although previous research has addressed this with regard to magnocellular transients, in the current research, we investigated whether a static singleton distractor defined along a dimension visible to the magnocellular pathway would also produce enhanced oculomotor capture. In two experiments, we addressed this possibility by comparing a parvo-biased singleton condition, in which the distractor was defined by isoluminant chromatic color contrast, with a magno + parvo singleton condition, in which the distractor also differed in luminance from the surrounding objects. In both experiments, magno + parvo singletons elicited faster eye movements than parvo-only singletons, presumably reflecting faster information transmission in the magnocellular pathway, but magno + parvo singletons were not significantly more likely to produce oculomotor capture. Thus, although magnocellular salience signals are available more rapidly, they have no sizable advantage over parvocellular salience signals in controlling oculomotor orienting when all stimuli have a common onset. |
Benjamin D. Lester; Paul Dassonville Attentional control settings modulate susceptibility to the induced Roelofs effect Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 5, pp. 1398–1406, 2011. @article{Lester2011, When a visible frame is offset laterally from an observer's objective midline, the subjective midline is pulled toward the frame's center, causing the frame and any enclosed targets to be misperceived as being shifted somewhat in the opposite direction. This illusion, the Roelofs effect, is driven by environmental (bottom-up) visual cues, but whether it can be modulated by top-down (e.g., task-relevant) information is unknown. Here, we used an attentional manipulation (i.e., the color-contingency effect) to test whether attentional filtering can modulate the magnitude of the illusion. When observers were required to report the location of a colored target, presented within an array of differently colored distractors, there was a greater effect of the illusion when the Roelofs-inducing frame was the same color as the target. These results indicate that feature-based attentional processes can modulate the impact of contextual information on an observer's perception of space. |
Casimir J. H. Ludwig; J. Rhys Davies Estimating the growth of internal evidence guiding perceptual decisions Journal Article In: Cognitive Psychology, vol. 63, no. 2, pp. 61–92, 2011. @article{Ludwig2011, Perceptual decision-making is thought to involve a gradual accrual of noisy evidence. Temporal integration of the evidence reduces the relative contribution of dynamic internal noise to the decision variable, thereby boosting its signal-to-noise ratio. We aimed to estimate the internal evidence guiding perceptual decisions over time, using a novel combination of external noise and the response signal methods. Observers performed orientation discrimination of patterns presented in external noise. We varied the contrast of the patterns and the delay at which observers were forced to signal their decision. Each test stimulus (patterns and noise sample) was presented twice. Across two experiments we varied the availability of the visual stimulus for processing. Observer model analyses of discrimination accuracy and response consistency to two passes of the same stimulus suggested that there was very little growth in the internal evidence. The improvement in accuracy over time characterised by the speed-accuracy trade-off function predominantly reflected a decreasing proportion of non-visual decisions, or pure guesses. There was no advantage to having the visual patterns visible for longer than 80 ms, indicating that only the visual information in a short window after display onset was used to drive the decisions. The remarkable constancy of the internal evidence over time suggests that temporal integration of the sensory information was very limited. Alternatively, more extended integration of the evidence from memory could have taken place, provided that the dominant source of internal noise limiting performance occurs between trials, which cannot be reduced by prolonged evidence integration. |
Arthur J. Lugtigheid; Eli Brenner; Andrew E. Welchman Speed judgments of three-dimensional motion incorporate extraretinal information Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–11, 2011. @article{Lugtigheid2011, When tracking an object moving in depth, the visual system should take changes of eye vergence into account to judge the object's 3D speed correctly. Previous work has shown that extraretinal information about changes in eye vergence is exploited when judging the sign of 3D motion. Here, we ask whether extraretinal signals also affect judgments of 3D speed. Observers judged the speed of a small target surrounded by a large background. To manipulate extraretinal information, we varied the vergence demand of the entire stimulus sinusoidally over time. At different phases of vergence pursuit, we changed the disparity of the target relative to the background, leading observers to perceive approaching target motion. We determined psychometric functions for the target's approach speed when the eyes were (1) converging, (2) diverging, (3) maximally converged (near), and (4) maximally diverged (far). The target's motion was reported as faster during convergence and slower during divergence but perceived speed was little affected at near or far vergence positions. Thus, 3D speed judgments are affected by extraretinal signals about changes in eye rotation but appear unaffected by the absolute orientation of the eyes. We develop a model that accounts for observers' judgments by taking a weighted average of the retinal and extraretinal signals to target motion. |
Katie L. Meadmore; Itiel E. Dror; Romola S. Bucks; Simon P. Liversedge Eye movements during visuospatial judgements Journal Article In: European Journal of Cognitive Psychology, vol. 23, no. 1, pp. 92–101, 2011. @article{Meadmore2011, The goal of the current research was to determine whether eye movements reflect different underlying cognitive processes associated with visuospatial relation judgements. Ten participants made three different judgements regarding the position of a dot in relation to a bar; an above/below judgement, a near/far judgement, and a precise distance estimation. The results highlight similarities between above/below and near/far visuospatial judgements; specifically, such binary judgements were fast, reflexive, and did not require precise distance computation. In contrast, estimating distance was comparatively cognitively demanding and required precise distance computation, as evidenced through distinct scan paths. The eye movement data provide significant insight into the cognitive processes underlying visuospatial judgements, showing aspects of visuospatial processing that are similar, as well as those that differ between tasks. |
Alberto Megías; Antonio Maldonado; Andrés Catena; Leandro Luigi Di Stasi; Jesús Serrano; Antonio Cándido Modulation of attention and urgent decisions by affect-laden roadside advertisement in risky driving scenarios Journal Article In: Safety Science, vol. 49, no. 10, pp. 1388–1393, 2011. @article{Megias2011, In the road safety literature, the effects of the emotional content and salience of advertising billboards have scarcely been investigated. The main aim of this work was to uncover how affect-laden roadside advertisements affect attention - eye movements - and subsequent risky decisions - braking - on the Honda Riding Trainer motorcycle simulator. Results indicated that the number of fixations and total fixation time elicited by the negative and positive emotional advertisements were larger than those elicited by neutral ones. At the same time, negative pictures led to later gaze disengagement than positive and neutral ones. This attentional capture resulted in shorter fixation times on the road-relevant region, where the important driving events happen. Finally, advertisements with negative emotional valence sped up braking in subsequent risky situations. Overall, the results demonstrated how advertisements with emotional content modulate attention allocation and driving decisions in risky situations, and might be helpful for designing roadside advertisement regulations and risk prevention programs. |
David Melcher; Manuela Piazza The role of attentional priority and saliency in determining capacity limits in enumeration and visual working memory Journal Article In: PLoS ONE, vol. 6, no. 12, pp. e29296, 2011. @article{Melcher2011, Many common tasks require us to individuate in parallel two or more objects out of a complex scene. Although the mechanisms underlying our abilities to count the number of items, remember the visual properties of objects, and make saccadic eye movements towards targets have been studied separately, each of these tasks requires selection of individual objects and shows a capacity limit. Here we show that a common factor, salience, determines the capacity limit in the various tasks. We manipulated bottom-up salience (visual contrast) and top-down salience (task relevance) in enumeration and visual memory tasks. As one item became increasingly salient, the subitizing range was reduced and memory performance for all other less-salient items was decreased. Overall, the pattern of results suggests that our abilities to enumerate and remember small groups of stimuli are grounded in an attentional priority or salience map which represents the location of important items. |
Korbinian Moeller; Elise Klein; Hans-Christoph Nuerk (No) small adults: Children's processing of carry addition problems Journal Article In: Developmental Neuropsychology, vol. 36, no. 6, pp. 702–720, 2011. @article{Moeller2011, Analyses of eye-fixation data in adults suggested that the carry effect in addition is determined by the unit digits of the summands. Correspondingly, we recorded children's eye fixation behavior in a choice reaction paradigm. As for adults, the carry effect was most pronounced on the unit digits of the summands. However, children's fixation pattern differed reliably from that of adults in other important aspects. While these data suggested common processes in children and adults for performing a carry in mental addition, children's eye fixation data indicated a less flexible processing style primarily based on calculating the correct result. |
Korbinian Moeller; Elise Klein; Hans-Christoph Nuerk Three processes underlying the carry effect in addition - Evidence from eye tracking Journal Article In: British Journal of Psychology, vol. 102, no. 3, pp. 623–645, 2011. @article{Moeller2011a, Recent research indicated that processes of unit-decade integration pose particular difficulty in multi-digit addition. In fact, longer response latencies as well as higher error rates have been observed for addition problems involving a carry operation (e.g., 18 + 27) compared to problems not requiring a carry (e.g., 13 + 32). However, the cognitive instantiation of this carry effect remained unknown. In the current study, this question was pursued by recording participants' eye fixation behaviour during addition problem verification. Analyses of the eye fixation data suggested a prominent role of the unit digits of the summands. The need for a carry seems to be recognized very early during the encoding of the problem, after initial unit sum calculation has established the basis for the no carry/carry detection. Additionally, processes related to the actual carry execution seemed to be associated with the processing of the decade digit of the solution probe but were less clear-cut. Taken together, our findings indicate that unit-based calculations, the associated recognition that a carry is needed, and its completion determine the difficulty of carry addition problems. On a more general level, this study shows how the nature of numerical-cognitive processes can be further differentiated by the evaluation of eye movement measures. |
Sarim Mohammad; Irene Gottlob; Anil Kumar; Mervyn G. Thomas; Christopher Degg; Viral Sheth; Frank A. Proudlock The functional significance of foveal abnormalities in albinism measured using spectral-domain optical coherence tomography Journal Article In: Ophthalmology, vol. 118, no. 8, pp. 1645–1652, 2011. @article{Mohammad2011, Purpose: The relationship between foveal abnormalities in albinism and best-corrected visual acuity (BCVA) is unclear. High-resolution spectral-domain optical coherence tomography (SD OCT) was used to quantify foveal retinal layer thicknesses and to assess the functional significance of foveal morphologic features in patients with albinism. Design: Cross-sectional study. Participants: Forty-seven patients with albinism and 20 healthy control volunteers were recruited to the study. Methods: Using high-resolution SD OCT, 7×7×2-mm volumetric scans of the fovea were acquired (3-μm axial resolution). The B scan nearest the center of the fovea was identified using signs of foveal development. The thickness of each retinal layer at the fovea and foveal pit depth were quantified manually using ImageJ software and were compared with BCVA. Main Outcome Measures: Total retinal thickness, foveal pit depth, photoreceptor layer thickness, and processing layer thickness in relation to BCVA. Results: Total photoreceptor layer thickness at the fovea was correlated highly with BCVA (P = 0.0008; r = 0.501). Of the photoreceptor layers, the outer segment length was correlated most strongly with BCVA (P<0.0001; r = 0.641). In contrast, there was no significant correlation between either total retinal thickness or pit depth and BCVA (P>0.05). This was because of an inverse correlation between total photoreceptor layer thickness and total processing layer thickness (P<0.0001; r = 0.696). 
Conclusions: Neither the total retinal thickness nor the pit depth are reliable indicators of visual deficit, because patients with similar overall retinal thickness had widely varying foveal morphologic features. In albinism, the size of the photoreceptor outer segment was found to be the strongest predictor of BCVA. These results suggest that detailed SD OCT images of photoreceptor anatomic features provide a useful tool in assessing the visual potential in patients with albinism. |
Jeff Moher; Jared Abrams; Howard E. Egeth; Steven Yantis; Veit Stuphorn Trial-by-trial adjustments of top-down set modulate oculomotor capture Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 5, pp. 897–903, 2011. @article{Moher2011, The role of top-down control in visual search has been a subject of much debate. Recent research has focused on whether attentional and oculomotor capture by irrelevant salient distractors can be modulated through top-down control, and if so, whether top-down control can be rapidly initiated based on current task goals. In the present study, participants searched for a unique shape in an array containing otherwise homogeneous shapes. A cue prior to each trial indicated the probability that an irrelevant color singleton distractor would appear on that trial. Initial saccades were less likely to land on the target and participants took longer to initiate a saccade to the target when a color distractor was present than when it was absent; this cost was greatly reduced on trials in which the probability that a distractor would appear was high, as compared to when the probability was low. These results suggest that top-down control can modulate oculomotor capture in visual search, even in a singleton search task in which distractors are known to readily capture both attention and the eyes. Furthermore, the results show that top-down distractor suppression mechanisms can be initiated quickly in anticipation of irrelevant salient distractors and can be adjusted on a trial-by-trial basis. |
Pierre Morel; Sophie Deneve; Pierre Baraduc Optimal and suboptimal use of postsaccadic vision in sequences of saccades Journal Article In: Journal of Neuroscience, vol. 31, no. 27, pp. 10039–10049, 2011. @article{Morel2011, Saccades are imprecise, due to sensory and motor noise. To avoid an accumulation of errors during sequences of saccades, a prediction derived from the efference copy can be combined with the reafferent visual feedback to adjust the following eye movement. By varying the information quantity of the visual feedback, we investigated how the reliability of the visual information affects the postsaccadic update in humans. Two elements of the visual scene were manipulated, the saccade target or the background, presented either together or in isolation. We determined the weight of the postsaccadic visual information by measuring the effect of intrasaccadic visual shifts on the following saccade. We confirmed that the weight of visual information evolves with information quantity as predicted for a statistically optimal system. In particular, we found that the visual background alone can guide the postsaccadic update, and that information from target and background are optimally combined. Moreover, these visual weights are adjusted dynamically and on a trial-to-trial basis to the level of visual noise determined by target eccentricity and reaction time. In contrast, we uncovered a dissociation between the visual signals used to update the next planned saccade (main saccade) and those used to generate an involuntary corrective saccade. The latter was exclusively based on visual information about the target, and discarded all information about the background: a suboptimal use of visual evidence. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Harm Kuipers; Marianne Severens; Pascal W. M. Van Gerven Response preparation with adjacent versus overlapped hands: A pupillometric study Journal Article In: International Journal of Psychophysiology, vol. 79, no. 2, pp. 280–286, 2011. @article{Moresi2011, Preparatory cues facilitate performance in speeded choice tasks. It is debated, however, whether the lateralized neuro-anatomical organization of the human motor system contributes to this facilitation. To investigate this issue, we examined response preparation in a finger-cuing task using two conditions. In the hands adjacent condition, the hands were placed adjacently to each other with index and middle fingers placed on four linearly arrayed response keys. In the overlapped hand placement condition, the fingers of different hands alternated, thus dissociating hand and spatial position factors. Preparatory cues specified a subset of two fingers. Left-right cues specified the two leftmost or two rightmost fingers. Inner-outer cues specified the two inner or outer fingers. Alternate cues specified the first and third, or the second and fourth finger in the response set. In addition to reaction time and response errors, we measured the pupillary response to assess the cognitive processing load associated with response preparation. Results showed stronger pupil dilations (and also longer RTs and more errors) for the overlapped than for the adjacent hand placement condition, reflecting an overall increase in cognitive processing load. Furthermore, the negative impact of overlapping the hands on pupil dilation interacted with cue type, indicating that left-right cues (associated with two fingers on one hand) suffered most from overlapping the hands. With the hands overlapped, alternate cues (now associated with two fingers on the same hand) produced the shortest RTs. These findings demonstrate the importance of motoric factors in response preparation. |
Hironori Nakatani; Nicoletta Orlandi; Cees Van Leeuwen Precisely timed oculomotor and parietal EEG activity in perceptual switching Journal Article In: Cognitive Neurodynamics, vol. 5, no. 4, pp. 399–409, 2011. @article{Nakatani2011, Blinks and saccades cause transient interruptions of visual input. To investigate how such effects influence our perceptual state, we analyzed the time courses of blink and saccade rates in relation to perceptual switching in the Necker cube. Both time courses of blink and saccade rates showed peaks at different moments along the switching process. A peak in blinking rate appeared 1,000 ms prior to the switching responses. Blinks occurring around this peak were associated with subsequent switching to the preferred interpretation of the Necker cube. Saccade rates showed a peak 150 ms prior to the switching response. The direction of saccades around this peak was predictive of the perceived orientation of the Necker cube afterwards. Peak blinks were followed and peak saccades were preceded by transient parietal theta band activity indicating the changing of the perceptual interpretation. Precisely-timed blinks, therefore, can initiate perceptual switching, and precisely-timed saccades can facilitate an ongoing change of interpretation. |
Nicole Naue; Daniel Strüber; Ingo Fründ; Jeanette Schadow; Daniel Lenz; Stefan Rach; Ursula Körner; Christoph S. Herrmann Gamma in motion: Pattern reversal elicits stronger gamma-band responses than motion Journal Article In: NeuroImage, vol. 55, no. 2, pp. 808–817, 2011. @article{Naue2011, Previous studies showed higher gamma-band responses (GBRs, ≈40 Hz) of the electroencephalogram (EEG) for moving compared to stationary stimuli. However, it is unclear whether this modulation by motion reflects a special responsiveness of the GBR to the stimulus feature "motion," or whether GBR enhancements of similar magnitude can be elicited also by a salient change within a static stimulus that does not include motion. Therefore, we measured the EEG of healthy subjects watching stationary square wave gratings of high contrast that either started to move or reversed their black and white pattern shortly after their onset. The strong contrast change of the pattern reversal represented a salient but motionless change within the grating that was compared to the onset of the stationary grating and the motion onset. Induced and evoked GBRs were analyzed for all three display conditions. In order to assess the influence of fixational eye movements on the induced GBRs, we also examined the time courses of microsaccade rates during the three display conditions. Amplitudes of both evoked and induced GBRs were stronger for pattern reversal than for motion onset. There was no significant amplitude difference between the onsets of the stationary and moving gratings. However, mean frequencies of the induced GBR were ~10 Hz higher in response to the onsets of moving compared to stationary gratings. Furthermore, the modulations of the induced GBR did not parallel the modulations of microsaccade rate, indicating that our induced GBRs reflect neuronal processes. 
These results suggest that, within the gamma-band range, the encoding of moving gratings in early visual cortex is primarily based on an upward frequency shift, whereas contrast changes within static gratings are reflected by amplitude enhancement. |
Mark B. Neider; Arthur F. Kramer Older adults capitalize on contextual information to guide search Journal Article In: Experimental Aging Research, vol. 37, no. 5, pp. 539–571, 2011. @article{Neider2011, Much has been learned about the age-related cognitive declines associated with the attentional processes that utilize perceptual features during visual search. However, questions remain regarding the ability of older adults to use scene information to guide search processes, perhaps as a compensatory mechanism for declines in perceptual processes. The authors had younger and older adults search pseudorealistic scenes for targets with strong or no spatial associations. Both younger and older adults exhibited reaction time benefits when searching for a target that was associated with a specific scene region. Eye movement analyses revealed that all observers dedicated most of their time to scanning target-consistent display regions and that guidance to these regions was often evident on the initial saccade of a trial. Both the benefits and costs related to contextual information were larger for older adults, suggesting that this information was relied on heavily to guide search processes towards the target. |
Mark B. Neider; Gregory J. Zelinsky Cutting through the clutter: Searching for targets in evolving realistic scenes Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 1–16, 2011. @article{Neider2011a, We evaluated the use of visual clutter as a surrogate measure of set size effects in visual search by comparing the effects of subjective clutter (determined by independent raters) and objective clutter (as quantified by edge count and feature congestion) using "evolving" scenes, ones that varied incrementally in clutter while maintaining their semantic continuity. Observers searched for a target building in rural, suburban, and urban city scenes created using the game SimCity. Stimuli were 30 screenshots obtained for each scene type as the city evolved over time. Reaction times and search guidance (measured by scan path ratio) were fastest/strongest for sparsely cluttered rural scenes, slower/weaker for more cluttered suburban scenes, and slowest/weakest for highly cluttered urban scenes. Subjective within-city clutter estimates also increased as each city matured and correlated highly with RT and search guidance. However, multiple regression modeling revealed that adding objective estimates failed to better predict search performance over the subjective estimates alone. This suggests that within-city clutter may not be explained exclusively by low-level feature congestion; conceptual congestion (e.g., the number of different types of buildings in a scene), part of the subjective clutter measure, may also be important in determining the effects of clutter on search. |
Nhung X. Nguyen; Andrea Stockum; Gesa A. Hahn; Susanne Trauzettel-Klosinski Training to improve reading speed in patients with juvenile macular dystrophy: A randomized study comparing two training methods Journal Article In: Acta Ophthalmologica, vol. 89, no. 1, pp. 82–88, 2011. @article{Nguyen2011, Purpose: In this study, we examined the clinical application of two training methods for optimizing reading ability in patients with juvenile macular dystrophy with established eccentric preferred retinal locus and optimal use of low-vision aids. Method: This randomized study included 36 patients with juvenile macular dystrophy (35 with Stargardt's disease and one with Best's disease). All patients have been using individually optimized low-vision aids. After careful ophthalmological examination, patients were randomized into two groups: Group 1: Training to read during rapid serial visual presentation (RSVP) with elimination of eye movements as far as possible (n = 20); Group 2: Training to optimize reading eye movements (SM, sensomotoric training) (n = 16). Only patients with magnification requirement up to sixfold were included in the study. Training was performed for 4 weeks with an intensity of ½ hr per day and 5 days a week. Reading speed during page reading was measured before and after training. Eye movements during silent reading were recorded before and after training using a video eye tracker in 11 patients (five patients of SM and six of RSVP training group) and using an infrared reflection system in five patients (three patients from the SM and two patients of RSVP training group). Results: Age, visual acuity and magnification requirement did not differ significantly between the two groups. 
The median reading speed was 83 words per minute (wpm) (interquartile range 74–105 wpm) in the RSVP training group and 102 (interquartile range 63–126 wpm) in the SM group before training and increased significantly to 104 (interquartile range 81–124 wpm) and 122, respectively (interquartile range 102–137 wpm; p = 0.01 and 0.006) after training, i.e. patients with RSVP training increased their reading speed by a median of 21 wpm, while it was 20 wpm in the SM group. There were individual patients who benefited strongly from the training. Eye movement recordings before and after training showed that in the RSVP group, increasing reading speed correlated with decreasing fixation duration (r = −0.75) |
Jianguang Ni; Huihui Jiang; Yixiang Jin; Nanhui Chen; Jianhong Wang; Zhengbo Wang; Yuejia Luo; Yuanye Ma; Xintian Hu Dissociable modulation of overt visual attention in valence and arousal revealed by topology of scan path Journal Article In: PLoS ONE, vol. 6, no. 4, pp. e18262, 2011. @article{Ni2011, Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence-salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal-salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. 
Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal. |
Robert Niebergall; Paul S. Khayat; Stefan Treue; Julio C. Martinez-Trujillo Multifocal attention filters targets from distracters within and beyond primate MT neurons' receptive field boundaries Journal Article In: Neuron, vol. 72, no. 6, pp. 1067–1079, 2011. @article{Niebergall2011, Visual attention has been classically described as a spotlight that enhances the processing of a behaviorally relevant object. However, in many situations, humans and animals must simultaneously attend to several relevant objects separated by distracters. To account for this ability, various models of attention have been proposed, including splitting of the attentional spotlight into multiple foci, zooming of the spotlight over a region of space, and switching of the spotlight among objects. We investigated this controversial issue by recording neuronal activity in visual area MT of two macaques while they attended to two translating objects that circumvented a third distracter object located inside the neurons' receptive field. We found that when the attended objects passed through or nearby the receptive field, neuronal responses to the distracter were either decreased or remained unaltered. These results demonstrate that attention can split into multiple spotlights corresponding to relevant objects while filtering out interspersed distracters. |
Robert Niebergall; Paul S. Khayat; Stefan Treue; Julio C. Martinez-Trujillo Expansion of MT neurons excitatory receptive fields during covert attentive tracking Journal Article In: Journal of Neuroscience, vol. 31, no. 43, pp. 15499–15510, 2011. @article{Niebergall2011a, Primates can attentively track moving objects while keeping gaze stationary. The neural mechanisms underlying this ability are poorly understood. We investigated this issue by recording responses of neurons in area MT of two rhesus monkeys while they performed two different tasks. During the Attend-Fixation task, two moving random dot patterns (RDPs) translated across a screen at the same speed and in the same direction while the animals directed gaze to a fixation spot and detected a change in its luminance. During the Tracking task, the animals kept gaze on the fixation spot and attentively tracked the two RDPs to report a change in the local speed of one of the patterns' dots. In both conditions, neuronal responses progressively increased as the RDPs entered the neurons' receptive field (RF), peaked when they reached its center, and decreased as they translated away. This response profile was well described by a Gaussian function with its center of gravity indicating the RF center and its flanks the RF excitatory borders. During Tracking, responses were increased relative to Attend-Fixation, causing the Gaussian profiles to expand. Such increases were proportionally larger in the RF periphery than at its center, and were accompanied by a decrease in the trial-to-trial response variability (Fano factor) relative to Attend-Fixation. These changes resulted in an increase in the neurons' performance at detecting targets at longer distances from the RF center. Our results show that attentive tracking dynamically changes MT neurons' RF profiles, ultimately improving the neurons' ability to encode the tracked stimulus features. |
Isamu Motoyoshi Attentional modulation of temporal contrast sensitivity in human vision Journal Article In: PLoS ONE, vol. 6, no. 4, pp. e19303, 2011. @article{Motoyoshi2011, Recent psychophysical studies have shown that attention can alter contrast sensitivities for temporally broadband stimuli such as flashed gratings. The present study examined the effect of attention on the contrast sensitivity for temporally narrowband stimuli with various temporal frequencies. Observers were asked to detect a drifting grating of 0-40 Hz presented gradually in the peripheral visual field with or without a concurrent letter identification task in the fovea. We found that removal of attention by the concurrent task reduced the contrast sensitivity for gratings with low temporal frequencies much more profoundly than for gratings with high temporal frequencies and for flashed gratings. The analysis revealed that the temporal contrast sensitivity function had a more band-pass shape with poor attention. Additional experiments showed that this was also true when the target was presented in various levels of luminance noise. These results suggest that regardless of the presence of external noise, attention extensively modulates visual sensitivity for sustained retinal inputs. |
Christina Moutsiana; David T. Field; John P. Harris The neural basis of centre-surround interactions in visual motion processing Journal Article In: PLoS ONE, vol. 6, no. 7, pp. e22902, 2011. @article{Moutsiana2011, Perception of a moving visual stimulus can be suppressed or enhanced by surrounding context in adjacent parts of the visual field. We studied the neural processes underlying such contextual modulation with fMRI. We selected motion selective regions of interest (ROI) in the occipital and parietal lobes with sufficiently well defined topography to preclude direct activation by the surround. BOLD signal in the ROIs was suppressed when surround motion direction matched central stimulus direction, and increased when it was opposite. With the exception of hMT+/V5, inserting a gap between the stimulus and the surround abolished surround modulation. This dissociation between hMT+/V5 and other motion selective regions prompted us to ask whether motion perception is closely linked to processing in hMT+/V5, or reflects the net activity across all motion selective cortex. The motion aftereffect (MAE) provided a measure of motion perception, and the same stimulus configurations that were used in the fMRI experiments served as adapters. Using a linear model, we found that the MAE was predicted more accurately by the BOLD signal in hMT+/V5 than it was by the BOLD signal in other motion selective regions. However, a substantial improvement in prediction accuracy could be achieved by using the net activity across all motion selective cortex as a predictor, suggesting the overall conclusion that visual motion perception depends upon the integration of activity across different areas of visual cortex. |
Neil G. Muggleton; Roger Kalla; Chi-Hung Juan; Vincent Walsh Dissociating the contributions of human frontal eye fields and posterior parietal cortex to visual search Journal Article In: Journal of Neurophysiology, vol. 105, no. 6, pp. 2891–2896, 2011. @article{Muggleton2011, Imaging, lesion, and transcranial magnetic stimulation (TMS) studies have implicated a number of regions of the brain in searching for a target defined by a combination of attributes. The necessity of both frontal eye fields (FEF) and posterior parietal cortex (PPC) in task performance has been shown by the application of TMS over these regions. The effects of stimulation over these two areas have, thus far, proved to be remarkably similar and the only dissociation reported being in the timing of their involvement. We tested the hypotheses that 1) FEF contributes to performance in terms of visual target detection (possibly by modulation of activity in extrastriate areas with respect to the target), and 2) PPC is involved in translation of visual information for action. We used a task where the presence (and location) of the target was indicated by an eye movement. Task disruption was seen with FEF TMS (with reduced accuracy on the task) but not with PPC stimulation. When a search task requiring a manual response was presented, disruption with PPC TMS was seen. These results show dissociation of FEF and PPC contributions to visual search performance and that PPC involvement seems to be dependent on the response required by the task, whereas this is not the case for FEF. This supports the idea of FEF involvement in visual processes in a manner that might not depend on the required response, whereas PPC seems to be involved when a manual motor response to a stimulus is required. |
Marcus R. Munafò; Nicole Roberts; Linda Bauld; Ute Leonards Plain packaging increases visual attention to health warnings on cigarette packs in non-smokers and weekly smokers but not daily smokers Journal Article In: Addiction, vol. 106, pp. 1505–1510, 2011. @article{Munafo2011, AIMS: To assess the impact of plain packaging on visual attention towards health warning information on cigarette packs. DESIGN: Mixed-model experimental design, comprising smoking status as a between-subjects factor, and package type (branded versus plain) as a within-subjects factor. SETTING: University laboratory. PARTICIPANTS: Convenience sample of young adults, comprising non-smokers (n = 15), weekly smokers (n = 14) and daily smokers (n = 14). MEASUREMENTS: Number of saccades (eye movements) towards health warnings on cigarette packs, to directly index visual attention. FINDINGS: Analysis of variance indicated more eye movements (i.e. greater visual attention) towards health warnings compared to brand information on plain packs versus branded packs. This effect was observed among non-smokers and weekly smokers, but not daily smokers. CONCLUSION: Among non-smokers and non-daily cigarette smokers, plain packaging appears to increase visual attention towards health warning information and away from brand information. |
Marnix Naber; Stefan Frässle; Wolfgang Einhäuser Perceptual rivalry: Reflexes reveal the gradual nature of visual awareness Journal Article In: PLoS ONE, vol. 6, no. 6, pp. e20910, 2011. @article{Naber2011, Rivalry is a common tool to probe visual awareness: a constant physical stimulus evokes multiple, distinct perceptual interpretations ("percepts") that alternate over time. Percepts are typically described as mutually exclusive, suggesting that a discrete (all-or-none) process underlies changes in visual awareness. Here we follow two strategies to address whether rivalry is an all-or-none process: first, we introduce two reflexes as objective measures of rivalry, pupil dilation and optokinetic nystagmus (OKN); second, we use a continuous input device (analog joystick) to allow observers a gradual subjective report. We find that the "reflexes" reflect the percept rather than the physical stimulus. Both reflexes show a gradual dependence on the time relative to perceptual transitions. Similarly, observers' joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. Physically simulating wave-like transitions between percepts suggests piece-meal rivalry (i.e., different regions of space belonging to distinct percepts) as one possible explanation for the gradual transitions. Furthermore, the reflexes show that dominance durations depend on whether or not the percept is actively reported. In addition, reflexes respond to transitions with shorter latencies than the subjective report and show an abundance of short dominance durations. This failure to report fast changes in dominance may result from limited access of introspection to rivalry dynamics. In sum, reflexes reveal that rivalry is a gradual process, that rivalry's dynamics are modulated by the required action (response mode), and that rapid transitions in perceptual dominance can slip away from awareness. |
Sébastien Miellet; Roberto Caldara; Philippe G. Schyns Local Jekyll and global Hyde: The dual identity of face identification Journal Article In: Psychological Science, vol. 22, no. 12, pp. 1518–1526, 2011. @article{Miellet2011, The main concern in face-processing research is to understand the processes underlying the identification of faces. In the study reported here, we addressed this issue by examining whether local or global information supports face identification. We developed a new methodology called "iHybrid." This technique combines two famous identities in a gaze-contingent paradigm, which simultaneously provides local, foveated information from one face and global, complementary information from a second face. Behavioral face-identification performance and eye-tracking data showed that the visual system identified faces on the basis of either local or global information depending on the location of the observer's first fixation. In some cases, a given observer even identified the same face using local information on one trial and global information on another trial. A validation in natural viewing conditions confirmed our findings. These results clearly demonstrate that face identification is not rooted in a single, or even preferred, information-gathering strategy. |
Mark Mills; Andrew Hollingworth; Stefan Van der Stigchel; Lesa Hoffman; Michael D. Dodd Examining the influence of task set on eye movements and fixations Journal Article In: Journal of Vision, vol. 11, no. 8, pp. 1–15, 2011. @article{Mills2011, The purpose of the present study was to examine the influence of task set on the spatial and temporal characteristics of eye movements during scene perception. In previous work, when strong control was exerted over the viewing task via specification of a target object (as in visual search), task set biased spatial, rather than temporal, parameters of eye movements. Here, we find that more participant-directed tasks (in which the task establishes general goals of viewing rather than specific objects to fixate) affect not only spatial (e.g., saccade amplitude) but also temporal parameters (e.g., fixation duration). Further, task set influenced the rate of change in fixation duration over the course of viewing but not saccade amplitude, suggesting independent mechanisms for control of these parameters. |
Milica Milosavljevic; Christof Koch; Antonio Rangel Consumers can make decisions in as little as a third of a second Journal Article In: Judgment and Decision Making, vol. 6, no. 6, pp. 520–530, 2011. @article{Milosavljevic2011, We make hundreds of decisions every day, many of them extremely quickly and without much explicit deliberation. This motivates two important open questions: What is the minimum time required to make choices with above chance accuracy? What is the impact of additional decision-making time on choice accuracy? We investigated these questions in four experiments in which subjects made binary food choices using saccadic or manual responses, under either “speed” or “accuracy” instructions. Subjects were able to make above chance decisions in as little as 313 ms, and choose their preferred food item in over 70% of trials at average speeds of 404 ms. Further, slowing down their responses by either asking them explicitly to be confident about their choices, or to respond with hand movements, generated about a 10% increase in accuracy. Together, these results suggest that consumers can make accurate every-day choices, akin to those made in a grocery store, at significantly faster speeds than previously reported. |
Milica Milosavljevic; Eric Madsen; Christof Koch; Antonio Rangel Fast saccades toward numbers: Simple number comparisons can be made in as little as 230 ms Journal Article In: Journal of Vision, vol. 11, no. 4, pp. 1–12, 2011. @article{Milosavljevic2011a, Visual psychophysicists have recently developed tools to measure the maximal speed at which the brain can accurately carry out different types of computations (H. Kirchner & S. J. Thorpe, 2006). We use this methodology to measure the maximal speed with which individuals can make magnitude comparisons between two single-digit numbers. We find that individuals make such comparisons with high accuracy in 306 ms on average and are able to perform above chance in as little as 230 ms. We also find that maximal speeds are similar for "larger than" and "smaller than" number comparisons and in a control task that simply requires subjects to identify the number in a number-letter pair. The results suggest that the brain contains dedicated processes involved in implementing basic number comparisons that can be deployed in parallel with processes involved in low-level visual processing. |
David M. Milstein; Michael C. Dorris The relationship between saccadic choice and reaction times with manipulations of target value Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 122, 2011. @article{Milstein2011, Choosing the option with the highest expected value (EV; reward probability × reward magnitude) maximizes the intake of reward under conditions of uncertainty. However, human economic choices indicate that our value calculation has a subjective component whereby probability and reward magnitude are not linearly weighted. Using a similar economic framework, our goal was to characterize how subjective value influences the generation of simple motor actions. Specifically, we hypothesized that attributes of saccadic eye movements could provide insight into how rhesus monkeys, a well-studied animal model in cognitive neuroscience, subjectively value potential visual targets. In the first experiment, monkeys were free to choose by directing a saccade toward one of two simultaneously displayed targets, each of which had an uncertain outcome. In this task, choices were more likely to be allocated toward the higher valued target. In the second experiment, only one of the two possible targets appeared on each trial. In this task, saccadic reaction times (SRTs) decreased toward the higher valued target. Reward magnitude had a much stronger influence on both choices and SRTs than probability, whose effect was observed only when reward magnitude was similar for both targets. Across EV blocks, a strong relationship was observed between choice preferences and SRTs. However, choices tended to maximize at skewed values whereas SRTs varied more continuously. Lastly, SRTs were unchanged when all reward magnitudes were 1×, 1.5×, and 2× their normal amount, indicating that saccade preparation was influenced by the relative value of the targets rather than the absolute value of any single-target. 
We conclude that value is not only an important factor for deliberative decision making in primates, but also for the selection and preparation of simple motor actions, such as saccadic eye movements. More precisely, our results indicate that, under conditions of uncertainty, saccade choices and reaction times are influenced by the relative expected subjective value of potential movements. |
Daniel Mirman; Eiling Yee; Sheila E. Blumstein; James S. Magnuson Theories of spoken word recognition deficits in Aphasia: Evidence from eye-tracking and computational modeling Journal Article In: Brain and Language, vol. 117, no. 2, pp. 53–68, 2011. @article{Mirman2011, We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot–parrot) and cohort (e.g., beaker–beetle) competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke's aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke's aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control. |
Parag K. Mital; Tim J. Smith; Robin L. Hill; John M. Henderson Clustering of gaze during dynamic scene viewing is predicted by motion Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 5–24, 2011. @article{Mital2011, Where does one attend when viewing dynamic scenes? Research into the factors influencing gaze location during static scene viewing has reported that low-level visual features contribute very little to gaze location, especially when opposed by high-level factors such as viewing task. However, the inclusion of transient features such as motion in dynamic scenes may result in a greater influence of visual features on gaze allocation and coordination of gaze across viewers. In the present study, we investigated the contribution of low- to mid-level visual features to gaze location during free-viewing of a large dataset of videos ranging in content and length. Signal detection analysis on visual features and Gaussian Mixture Models for clustering gaze was used to identify the contribution of visual features to gaze location. The results show that mid-level visual features including corners and orientations can distinguish between actual gaze locations and a randomly sampled baseline. However, temporal features such as flicker, motion, and their respective contrasts were the most predictive of gaze location. Additionally, moments in which all viewers' gaze tightly clustered in the same location could be predicted by motion. Motion and mid-level visual features may influence gaze allocation in dynamic scenes, but it is currently unclear whether this influence is involuntary or due to correlations with higher order factors such as scene semantics. |
Holger Mitterer The mental lexicon is fully specified: Evidence from eye-tracking Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 2, pp. 496–513, 2011. @article{Mitterer2011, Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input (pin) activates lexical entries with underspecified coronal stops (tin), but lexical entries with specified labial stops (pin) are not activated by mismatching input (tin). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than did unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs (tin-pin) and in Experiments 2 and 3 with words with an onset overlap (peacock-teacake). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as was predicted by an optimal perception account. |
Kazunaga Matsuki; Tracy Chow; Mary Hare; Jeffrey L. Elman; Christoph Scheepers; Ken McRae Event-based plausibility immediately influences on-line language comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 913–934, 2011. @article{Matsuki2011, In some theories of sentence comprehension, linguistically relevant lexical knowledge, such as selectional restrictions, is privileged in terms of the time-course of its access and influence. We examined whether event knowledge computed by combining multiple concepts can rapidly influence language understanding even in the absence of selectional restriction violations. Specifically, we investigated whether instruments can combine with actions to influence comprehension of ensuing patients (as in Rayner, Warren, Juhasz, & Liversedge, 2004; Warren & McConnell, 2007). Instrument-verb-patient triplets were created in a norming study designed to tap directly into event knowledge. In self-paced reading (Experiment 1), participants were faster to read patient nouns, such as hair, when they were typical of the instrument-action pair (Donna used the shampoo to wash vs. the hose to wash). Experiment 2 showed that these results were not due to direct instrument-patient relations. Experiment 3 replicated Experiment 1 using eyetracking, with effects of event typicality observed in first fixation and gaze durations on the patient noun. This research demonstrates that conceptual event-based expectations are computed and used rapidly and dynamically during on-line language comprehension. We discuss relationships among plausibility and predictability, as well as their implications. We conclude that selectional restrictions may be best considered as event-based conceptual knowledge rather than lexical-grammatical knowledge. |
Michi Matsukura; James R. Brockmole; Walter R. Boot; John M. Henderson Oculomotor capture during real-world scene viewing depends on cognitive load Journal Article In: Vision Research, vol. 51, no. 6, pp. 546–552, 2011. @article{Matsukura2011, It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object that suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. |
Hideyuki Matsumoto; Yasuo Terao; Toshiaki Furubayashi; Akihiro Yugeta; Hideki Fukuda; Masaki Emoto; Ritsuko Hanajima; Yoshikazu Ugawa Small saccades restrict visual scanning area in Parkinson's disease Journal Article In: Movement Disorders, vol. 26, no. 9, pp. 1619–1626, 2011. @article{Matsumoto2011, The purpose of this study was to investigate abnormalities in visual scanning when Parkinson's disease patients view images of varying complexity. Eighteen nondemented Parkinson's disease patients and 18 normal subjects participated in the study. The ocular fixation position during viewing visual images was recorded using an eye-tracking device. The number of saccades, duration of fixation, amplitude of saccades, and scanned area in Parkinson's disease patients were compared with those in normal subjects. We also investigated whether the number of saccades, duration of fixation, or amplitude of saccades influenced the scanned area. While scanning images of varying complexity, Parkinson's disease patients made fewer saccades with smaller amplitude and longer fixation compared with normal subjects. As image complexity increased, the number of saccades and duration of fixation gradually approached those of normal subjects. Nevertheless, the scanned area in Parkinson's disease patients was consistently smaller than that in normal subjects. The scanned area significantly correlated with saccade amplitude in most images. Importantly, although Parkinson's disease patients cannot make frequent saccades when viewing simple figures, they can increase the saccade number and reduce their fixation duration when viewing more complex figures, making use of the abundant visual cues in such figures, suggesting the existence of kinésie paradoxale of the eyes. Nevertheless, both the saccade amplitude and the scanned area were consistently smaller than those of normal subjects for all levels of visual complexity.
This indicates that small saccade amplitude is the main cause of impaired visual scanning in Parkinson's disease patients. |
Hideyuki Matsumoto; Yasuo Terao; Akihiro Yugeta; Hideki Fukuda; Masaki Emoto; Toshiaki Furubayashi; Tomoko Okano; Ritsuko Hanajima; Yoshikazu Ugawa Where do neurologists look when viewing brain CT images? An eye-tracking study involving stroke cases Journal Article In: PLoS ONE, vol. 6, no. 12, pp. e28928, 2011. @article{Matsumoto2011a, The aim of this study was to investigate where neurologists look when they view brain computed tomography (CT) images and to evaluate how they deploy their visual attention by comparing their gaze distribution with saliency maps. Brain CT images showing cerebrovascular accidents were presented to 12 neurologists and 12 control subjects. The subjects' ocular fixation positions were recorded using an eye-tracking device (EyeLink 1000). Heat maps were created based on the eye-fixation patterns of each group and compared between the two groups. The heat maps revealed that the areas on which control subjects frequently fixated often coincided with areas identified as outstanding in saliency maps, while the areas on which neurologists frequently fixated often did not. Dwell time in regions of interest (ROI) was likewise compared between the two groups, revealing that, although dwell time on large lesions was not different between the two groups, dwell time in clinically important areas with low salience was longer in neurologists than in controls. Therefore it appears that neurologists intentionally scan clinically important areas when reading brain CT images showing cerebrovascular accidents. Both neurologists and control subjects used the "bottom-up salience" form of visual attention, although the neurologists more effectively used the "top-down instruction" form. |
Ali Mazaheri; Nicholas E. DiQuattro; Jesse Bengson; Joy J. Geng Pre-stimulus activity predicts the winner of top-down vs. bottom-up attentional selection Journal Article In: PLoS ONE, vol. 6, no. 2, pp. e16243, 2011. @article{Mazaheri2011, Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low contrast target appeared alone, or with a low (i.e., non-salient) or high contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that 1) in the 1-sec pre-stimulus interval, frontal alpha (8-12 Hz) activity was higher on trials where the salient distractor captured attention and the first saccade (bottom-up win); and 2) there was a transient pre-saccadic increase in posterior-parietal alpha (7-8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture. |
Kathryn L. McCabe; Dominique Rich; Carmel M. Loughland; Ulrich Schall; Linda E. Campbell Visual scanpath abnormalities in 22q11.2 deletion syndrome: Is this a face specific deficit? Journal Article In: Psychiatry Research, vol. 189, no. 2, pp. 292–298, 2011. @article{McCabe2011, People with 22q11.2 deletion syndrome (22q11DS) have deficits in face emotion recognition. However, it is not known whether this is a deficit specific to faces, or represents maladaptive information processing strategies to complex stimuli in general. This study examined the specificity of face emotion processing deficits in 22q11DS by exploring recognition accuracy and visual scanpath performance to a Faces task compared to a Weather Scene task. Seventeen adolescents with 22q11DS (11 females |
Tanja C. W. Nijboer; Gabriela Satris; Stefan Van der Stigchel The influence of synesthesia on eye movements: No synesthetic pop-out in an oculomotor target selection task Journal Article In: Consciousness and Cognition, vol. 20, no. 4, pp. 1193–1200, 2011. @article{Nijboer2011, Recent research on grapheme-colour synesthesia has focused on whether visual attention is necessary to induce a synesthetic percept. The current study investigated the influence of synesthesia on overt visual attention during an oculomotor target selection task. Chromatic and achromatic stimuli were presented with one target among distractors (e.g. a '2' (target) among multiple '5's (distractors)). Participants executed an eye movement to the target. Synesthetes and controls showed a comparable target selection performance across conditions and a 'pop-out effect' was only seen in the chromatic condition. As a pop-out effect was absent for the synesthetes in the achromatic condition, a synesthetic element appears not to elicit a synesthetic colour, even when it is the target. The synesthetic percepts are not pre-attentively available to distinguish the synesthetic target from synesthetic distractors when elements are presented in the periphery. Synesthesia appears to require full recognition to bind form and colour. |
Andrey R. Nikolaev; Chie Nakatani; Gijs Plomp; Peter Jurica; Cees van Leeuwen Eye fixation-related potentials in free viewing identify encoding failures in change detection Journal Article In: NeuroImage, vol. 56, no. 3, pp. 1598–1607, 2011. @article{Nikolaev2011, We considered the hypothesis that spontaneous dissociation between the direction of attention and eye movement causes encoding failure in change detection. We tested this hypothesis by analyzing eye fixation-related potentials (EFRP) at the encoding stage of a change blindness task; when participants freely inspect a scene containing an unmarked target region, in which a change will occur in a subsequent presentation. We measured EFRP amplitude prior to the execution of a saccade, depending on its starting or landing position relative to the target region. For those landings inside the target region, we found a difference in EFRP between correct detection and failure. Overall, correspondence between EFRP amplitude and the size of the saccade predicted successful detection of change; lack of correspondence was followed by change blindness. By contrast, saccade sizes and fixation durations around the target region were unrelated to subsequent change detection. Since correspondence between EFRP and eye movement indicates that overt attention was given to the target region, we concluded that overt attention is needed for successful encoding and that dissociation between eye movement and attention leads to change blindness. |
Shinji Nishimoto; Jack L. Gallant A three-dimensional spatiotemporal receptive field model explains responses of area MT neurons to naturalistic movies Journal Article In: Journal of Neuroscience, vol. 31, no. 41, pp. 14551–14564, 2011. @article{Nishimoto2011, Area MT has been an important target for studies of motion processing. However, previous neurophysiological studies of MT have used simple stimuli that do not contain many of the motion signals that occur during natural vision. In this study we sought to determine whether views of area MT neurons developed using simple stimuli can account for MT responses under more naturalistic conditions. We recorded responses from macaque area MT neurons during stimulation with naturalistic movies. We then used a quantitative modeling framework to discover which specific mechanisms best predict neuronal responses under these challenging conditions. We find that the simplest model that accurately predicts responses of MT neurons consists of a bank of V1-like filters, each followed by a compressive nonlinearity, a divisive nonlinearity, and linear pooling. Inspection of the fit models shows that the excitatory receptive fields of MT neurons tend to lie on a single plane within the three-dimensional spatiotemporal frequency domain, and suppressive receptive fields lie off this plane. However, most excitatory receptive fields form a partial ring in the plane and avoid low temporal frequencies. This receptive field organization ensures that most MT neurons are tuned for velocity but do not tend to respond to ambiguous static textures that are aligned with the direction of motion. In sum, MT responses to naturalistic movies are largely consistent with predictions based on simple stimuli. However, models fit using naturalistic stimuli reveal several novel properties of MT receptive fields that had not been shown in prior experiments. |
Yasuki Noguchi; Shinsuke Shimojo; Ryusuke Kakigi; Minoru Hoshiyama An integration of color and motion information in visual scene analyses Journal Article In: Psychological Science, vol. 22, no. 2, pp. 153–158, 2011. @article{Noguchi2011, To analyze complex scenes efficiently, the human visual system performs perceptual groupings based on various features (e.g., color and motion) of the visual elements in a scene. Although previous studies demonstrated that such groupings can be based on a single feature (e.g., either color or motion information), here we show that the visual system also performs scene analyses based on a combination of two features. We presented subjects with a mixture of red and green dots moving in various directions. Although the pairings between color and motion information were variable across the dots (e.g., one red dot moved upward while another moved rightward), subjects' perceptions of the color-motion pairings were significantly biased when the randomly paired dots were flanked by additional dots with consistent color-motion pairings. These results indicate that the visual system resolves local ambiguities in color-motion pairings using unambiguous pairings in surrounds, demonstrating a new type of scene analysis based on the combination of two featural cues. |
Lauri Nummenmaa; Jari K. Hietanen; Manuel G. Calvo; Jukka Hyönä Food catches the eye but not for everyone: A BMI-contingent attentional bias in rapid detection of nutriments Journal Article In: PLoS ONE, vol. 6, no. 5, pp. e19215, 2011. @article{Nummenmaa2011, An organism's survival depends crucially on its ability to detect and acquire nutriment. Attention circuits interact with cognitive and motivational systems to facilitate detection of salient sensory events in the environment. Here we show that the human attentional system is tuned to detect food targets among nonfood items. In two visual search experiments participants searched for discrepant food targets embedded in an array of nonfood distracters or vice versa. Detection times were faster when targets were food rather than nonfood items, and the detection advantage for food items showed a significant negative correlation with Body Mass Index (BMI). Also, eye tracking during searching within arrays of visually homogenous food and nonfood targets demonstrated that the BMI-contingent attentional bias was due to rapid capturing of the eyes by food items in individuals with low BMI. However, BMI was not associated with decision times after the discrepant food item was fixated. The results suggest that visual attention is biased towards foods, and that individual differences in energy consumption, as indexed by BMI, are associated with differential attentional effects related to foods. We speculate that such differences may constitute an important risk factor for gaining weight. |
Jan Churan; Daniel Guitton; Christopher C. Pack Context dependence of receptive field remapping in superior colliculus Journal Article In: Journal of Neurophysiology, vol. 106, no. 4, pp. 1862–1874, 2011. @article{Churan2011, Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available. |
Laetitia Cirilli; Philippe de Timary; Philippe Lefèvre; Marcus Missal Individual differences in impulsivity predict anticipatory eye movements Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e26699, 2011. @article{Cirilli2011, Impulsivity is the tendency to act without forethought. It is a personality trait commonly used in the diagnosis of many psychiatric diseases. In clinical practice, impulsivity is estimated using written questionnaires. However, answers to questions might be subject to personal biases and misinterpretations. In order to alleviate this problem, eye movements could be used to study differences in decision processes related to impulsivity. Therefore, we investigated correlations between impulsivity scores obtained with a questionnaire in healthy subjects and characteristics of their anticipatory eye movements in a simple smooth pursuit task. Healthy subjects were asked to answer the UPPS questionnaire (Urgency Premeditation Perseverance and Sensation seeking Impulsive Behavior scale), which distinguishes four independent dimensions of impulsivity: Urgency, lack of Premeditation, lack of Perseverance, and Sensation seeking. The same subjects took part in an oculomotor task that consisted of pursuing a target that moved in a predictable direction. This task reliably evoked anticipatory saccades and smooth eye movements. We found that eye movement characteristics such as latency and velocity were significantly correlated with UPPS scores. The specific correlations between distinct UPPS factors and oculomotor anticipation parameters support the validity of the UPPS construct and corroborate neurobiological explanations for impulsivity. We suggest that the oculomotor approach of impulsivity put forth in the present study could help bridge the gap between psychiatry and physiology. |
Dario Cazzoli; Thomas Nyffeler; Christian W. Hess; René M. Müri Vertical bias in neglect: A question of time? Journal Article In: Neuropsychologia, vol. 49, no. 9, pp. 2369–2374, 2011. @article{Cazzoli2011, Neglect is defined as the failure to attend and to orient to the contralesional side of space. A horizontal bias towards the right visual field is a classical finding in patients who suffered from a right-hemispheric stroke. The vertical dimension of spatial attention orienting has only sparsely been investigated so far. The aim of this study was to investigate the specificity of this vertical bias by means of a search task, which taps a more pronounced top-down attentional component. Eye movements and behavioural search performance were measured in thirteen patients with left-sided neglect after right hemispheric stroke and in thirteen age-matched controls. Concerning behavioural performance, patients found significantly fewer targets than healthy controls in both the upper and lower left quadrant. However, when targets were located in the lower left quadrant, patients needed more visual fixations (and therefore longer search time) to find them, suggesting a time-dependent vertical bias. |
Jessica P. K. Chan; Daphne Kamino; Malcolm A. Binns; Jennifer D. Ryan Can changes in eye movement scanning alter the age-related deficit in recognition memory? Journal Article In: Frontiers in Psychology, vol. 2, pp. 92, 2011. @article{Chan2011, Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults' recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces, under free viewing conditions (bases), through a gaze-contingent moving window (own), or a moving window which replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities. |
Steve W. C. Chang; Amy A. Winecoff; Michael L. Platt Vicarious reinforcement in rhesus macaques (Macaca mulatta) Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 27, 2011. @article{Chang2011, What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in non-social control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making. |
Chang-Mao Chao; Philip Tseng; Tzu-Yu Hsu; Jia-Han Su; Ovid J. L. Tzeng; Daisy L. Hung; Neil G. Muggleton; Chi-Hung Juan Predictability of saccadic behaviors is modified by transcranial magnetic stimulation over human posterior parietal cortex Journal Article In: Human Brain Mapping, vol. 32, no. 11, pp. 1961–1972, 2011. @article{Chao2011, Predictability in the visual environment provides a powerful cue for efficient processing of scenes and objects. Recently, studies have suggested that the directionality and magnitude of saccade curvature can be informative as to how the visual system processes predictive information. The present study investigated the role of the right posterior parietal cortex (rPPC) in shaping saccade curvatures in the context of predictive and non-predictive visual cues. We used an orienting paradigm that incorporated manipulation of target location predictability and delivered transcranial magnetic stimulation (TMS) over rPPC. Participants were presented with either an informative or uninformative cue to upcoming target locations. Our results showed that rPPC TMS generally increased saccade latency and saccade error rates. Intriguingly, rPPC TMS increased curvatures away from the distractor only when the target location was unpredictable and decreased saccadic errors towards the distractor. These effects on curvature and accuracy were not present when the target location was predictable. These results dissociate the strong contingency between saccade latency and saccade curvature and also indicate that rPPC plays an important role in allocating and suppressing attention to distractors when the target demands visual disambiguation. Furthermore, the present study suggests that, like the frontal eye fields, rPPC is critically involved in determining saccade curvature and the generation of saccadic behaviors under conditions of differing target predictability. |
Minglei Chen; Hwawei Ko Exploring the eye-movement patterns as Chinese children read texts: A developmental perspective Journal Article In: Journal of Research in Reading, vol. 34, no. 2, pp. 232–246, 2011. @article{Chen2011, This study investigated Chinese children's eye-movement patterns while reading different text genres from a developmental perspective. Eye movements were recorded while children in the second through sixth grades read two expository texts and two narrative texts. Across passages, overall word frequency was not significantly different between the two genres. Results showed that all children had longer fixation durations for low-frequency words. They also had longer fixation durations on content words. These results indicate that children adopted a word-based processing strategy like skilled readers do. However, only older children's rereading times were affected by genre. Overall, eye-movement patterns of older children reported in this study are in accordance with those of skilled Chinese readers, but younger children are more likely to be responsive to word characteristics than text level when reading a Chinese text. |
Ying Chen; Patrick Byrne; J. Douglas Crawford Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach Journal Article In: Neuropsychologia, vol. 49, no. 1, pp. 49–60, 2011. @article{Chen2011a, Allocentric cues can be used to encode locations in visuospatial memory, but it is not known how and when these representations are converted into egocentric commands for behaviour. Here, we tested the influence of different memory intervals on reach performance toward targets defined in either egocentric or allocentric coordinates, and then compared this to performance in a task where subjects were implicitly free to choose when to convert from allocentric to egocentric representations. Reach and eye positions were measured using Optotrak and Eyelink Systems, respectively, in fourteen subjects. Our results confirm that egocentric representations degrade over a delay of several seconds, whereas allocentric representations remained relatively stable over the same time scale. Moreover, when subjects were free to choose, they converted allocentric representations into egocentric representations as soon as possible, despite the apparent cost in reach precision in our experimental paradigm. This suggests that humans convert allocentric representations into egocentric commands at the first opportunity, perhaps to optimize motor noise and movement timing in real-world conditions. |
Hui-Yan Chiau; Philip Tseng; Jia-Han Su; Ovid J. L. Tzeng; Daisy L. Hung; Neil G. Muggleton; Chi-Hung Juan Trial type probability modulates the cost of antisaccades Journal Article In: Journal of Neurophysiology, vol. 106, no. 2, pp. 515–526, 2011. @article{Chiau2011, The antisaccade task, where eye movements are made away from a target, has been used to investigate the flexibility of cognitive control of behavior. Antisaccades usually have longer saccade latencies than prosaccades, the so-called antisaccade cost. Recent studies have shown that this antisaccade cost can be modulated by event probability. This may mean that the antisaccade cost can be reduced, or even reversed, if the probability of surrounding events favors the execution of antisaccades. The probabilities of prosaccades and antisaccades were systematically manipulated by changing the proportion of a certain type of trial in an interleaved pro/antisaccades task. We aimed to disentangle the intertwined relationship between trial type probabilities and the antisaccade cost with the ultimate goal of elucidating how probabilities of trial types modulate human flexible behaviors, as well as the characteristics of such modulation effects. To this end, we examined whether implicit trial type probability can influence saccade latencies and also manipulated the difficulty of cue discriminability to see how effects of trial type probability would change when the demand on visual perceptual analysis was high or low. A mixed-effects model was applied to the analysis to dissect the factors contributing to the modulation effects of trial type probabilities. Our results suggest that the trial type probability is one robust determinant of antisaccade cost. These findings highlight the importance of implicit probability in the flexibility of cognitive control of behavior. |
Mara Breen; Charles Clifton Stress matters: Effects of anticipated lexical stress on silent reading Journal Article In: Journal of Memory and Language, vol. 64, no. 2, pp. 153–170, 2011. @article{Breen2011, This paper presents findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, participants read stress-alternating noun-verb or noun-adjective homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading. |
Eli Brenner; Jeroen B. J. Smeets Continuous visual control of interception Journal Article In: Human Movement Science, vol. 30, no. 3, pp. 475–494, 2011. @article{Brenner2011, People generally try to keep their eyes on a moving target that they intend to catch or hit. In the present study we first examined how important it is to do so. We did this by designing two interception tasks that promote different eye movements. In both tasks it was important to be accurate relative to both the moving target and the static environment. We found that performance was more variable in relation to the structure that was not fixated. This suggests that the resolution of visual information that is gathered during the movement is important for continuously improving predictions about critical aspects of the task, such as anticipating where the target will be at some time in the future. If so, variability in performance should increase if the target briefly disappears from view just before being hit, even if the target moves completely predictably. We demonstrate that it does, indicating that new visual information is used to improve precision throughout the movement. |
Meredith Brown; Anne Pier Salverda; Laura C. Dilley; Michael K. Tanenhaus Expectations from preceding prosody influence segmentation in online sentence processing Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1189–1196, 2011. @article{Brown2011, Previous work examining prosodic cues in online spoken-word recognition has focused primarily on local cues to word identity. However, recent studies have suggested that utterance-level prosodic patterns can also influence the interpretation of subsequent sequences of lexically ambiguous syllables (Dilley, Mattys, & Vinke, Journal of Memory and Language, 63:274–294, 2010; Dilley & McAuley, Journal of Memory and Language, 59:294–311, 2008). To test the hypothesis that these distal prosody effects are based on expectations about the organization of upcoming material, we conducted a visual-world experiment. We examined fixations to competing alternatives such as pan and panda upon hearing the target word panda in utterances in which the acoustic properties of the preceding sentence material had been manipulated. The proportions of fixations to the monosyllabic competitor were higher beginning 200 ms after target word onset when the preceding prosody supported a prosodic constituent boundary following pan-, rather than following panda. These findings support the hypothesis that expectations based on perceived prosodic patterns in the distal context influence lexical segmentation and word recognition. |
Sarah Brown-Schmidt; Agnieszka E. Konopka Experimental approaches to referential domains and the on-line processing of referring expressions in unscripted conversation Journal Article In: Information, vol. 2, no. 4, pp. 302–326, 2011. @article{BrownSchmidt2011, This article describes research investigating the on-line processing of language in unscripted conversational settings. In particular, we focus on the process of formulating and interpreting definite referring expressions. Within this domain we present results of two eye-tracking experiments addressing the problem of how speakers interrogate the referential domain in preparation to speak, how they select an appropriate expression for a given referent, and how addressees interpret these expressions. We aim to demonstrate that it is possible, and indeed fruitful, to examine unscripted, conversational language using modified experimental designs and standard hypothesis testing procedures. |
Maximilian Bruchmann; Philipp Hintze; Simon Mota The effects of spatial and temporal cueing on metacontrast masking Journal Article In: Advances in Cognitive Psychology, vol. 7, no. 1, pp. 132–141, 2011. @article{Bruchmann2011, We studied the effects of selective attention on metacontrast masking with 3 different cueing experiments. Experiments 1 and 2 compared central symbolic and peripheral spatial cues. For symbolic cues, we observed small attentional costs, that is, reduced visibility when the target appeared at an unexpected location, and attentional costs as well as benefits for peripheral cues. All these effects occurred exclusively at the late, ascending branch of the U-shaped metacontrast masking function, although the possibility exists that cueing effects at the early branch were obscured by a ceiling effect due to almost perfect visibility at short stimulus onset asynchronies (SOAs). In Experiment 3, we presented temporal cues that indicated when the target was likely to appear, not where. Here, we also observed cueing effects in the form of higher visibility when the target appeared at the expected point in time compared to when it appeared too early. However, these effects were not restricted to the late branch of the masking function, but enhanced visibility over the complete range of the masking function. Given these results we discuss a common effect for different types of spatial selective attention on metacontrast masking involving neural subsystems that are different from those involved in temporal attention. |
Julie N. Buchan; Kevin G. Munhall The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information Journal Article In: Perception, vol. 40, no. 10, pp. 1164–1182, 2011. @article{Buchan2011, Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception. |
Brittany N. Bushnell; Philip J. Harding; Yoshito Kosai; Wyeth Bair; Anitha Pasupathy Equiluminance cells in visual cortical area V4 Journal Article In: Journal of Neuroscience, vol. 31, no. 35, pp. 12398–12412, 2011. @article{Bushnell2011, We report a novel class of V4 neuron in the macaque monkey that responds selectively to equiluminant colored form. These "equiluminance" cells stand apart because they violate the well established trend throughout the visual system that responses are minimal at low luminance contrast and grow and saturate as contrast increases. Equiluminance cells, which compose ∼22% of V4, exhibit the opposite behavior: responses are greatest near zero contrast and decrease as contrast increases. While equiluminance cells respond preferentially to equiluminant colored stimuli, strong hue tuning is not their distinguishing feature: some equiluminance cells do exhibit strong unimodal hue tuning, but many show little or no tuning for hue. We find that equiluminance cells are color and shape selective to a degree comparable with other classes of V4 cells with more conventional contrast response functions. Those more conventional cells respond equally well to achromatic luminance and equiluminant color stimuli, analogous to color luminance cells described in V1. The existence of equiluminance cells, which have not been reported in V1 or V2, suggests that chromatically defined boundaries and shapes are given special status in V4 and raises the possibility that form at equiluminance and form at higher contrasts are processed in separate channels in V4. |
Brittany N. Bushnell; Philip J. Harding; Yoshito Kosai; Anitha Pasupathy Partial occlusion modulates contour-based shape encoding in primate area V4 Journal Article In: Journal of Neuroscience, vol. 31, no. 11, pp. 4012–4024, 2011. @article{Bushnell2011a, Past studies of shape coding in visual cortical area V4 have demonstrated that neurons can accurately represent isolated shapes in terms of their component contour features. However, rich natural scenes contain many partially occluded objects, which have "accidental" contours at the junction between the occluded and occluding objects. These contours do not represent the true shape of the occluded object and are known to be perceptually discounted. To discover whether V4 neurons differentially encode accidental contours, we studied the responses of single neurons in fixating monkeys to complex shapes and contextual stimuli presented either in isolation or adjoining each other to provide a percept of partial occlusion. Responses to preferred contours were suppressed when the adjoining context rendered those contours accidental. The observed suppression was reversed when the partial occlusion percept was compromised by introducing a small gap between the component stimuli. Control experiments demonstrated that these results likely depend on contour geometry at T-junctions and cannot be attributed to mechanisms based solely on local color/luminance contrast, spatial proximity of stimuli, or the spatial frequency content of images. Our findings provide novel insights into how occluded objects, which are fundamental to complex visual scenes, are encoded in area V4. They also raise the possibility that the weakened encoding of accidental contours at the junction between objects could mark the first step of image segmentation along the ventral visual pathway. |
Roberto Caldara; Sébastien Miellet iMap: A novel method for statistical fixation mapping of eye movement data Journal Article In: Behavior Research Methods, vol. 43, no. 3, pp. 864–878, 2011. @article{Caldara2011, Eye movement data analyses are commonly based on the probability of occurrence of saccades and fixations (and their characteristics) in given regions of interest (ROIs). In this article, we introduce an alternative method for computing statistical fixation maps of eye movements-iMap-based on an approach inspired by methods used in functional magnetic resonance imaging. Importantly, iMap does not require the a priori segmentation of the experimental images into ROIs. With iMap, fixation data are first smoothed by convolving Gaussian kernels to generate three-dimensional fixation maps. This procedure embodies eyetracker accuracy, but the Gaussian kernel can also be flexibly set to represent acuity or attentional constraints. In addition, the smoothed fixation data generated by iMap conform to the assumptions of the robust statistical random field theory (RFT) approach, which is applied thereafter to assess significant fixation spots and differences across the three-dimensional fixation maps. The RFT corrects for the multiple statistical comparisons generated by the numerous pixels constituting the digital images. To illustrate the processing steps of iMap, we provide sample analyses of real eye movement data from face, visual scene, and memory processing. The iMap MATLAB toolbox is editable and freely available for download online (www.unifr.ch/psycho/ibmlab/). |
Manuel G. Calvo; Lauri Nummenmaa Time course of discrimination between emotional facial expressions: The role of visual saliency Journal Article In: Vision Research, vol. 51, no. 15, pp. 1751–1759, 2011. @article{Calvo2011, Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions. |