All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2012 |
Elizabeth R. Schotter; Cainen Gerety; Keith Rayner Heuristics and criterion setting during selective encoding in visual decision making: Evidence from eye movements Journal Article In: Visual Cognition, vol. 20, no. 9, pp. 1110–1129, 2012. @article{Schotter2012, When making a decision, people spend longer looking at the option they ultimately choose compared to other options - termed the gaze bias effect - even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, online during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgements about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented in the same colour content (e.g., both in colour or both in black-and-white) or whether they differed in colour content and the extent to which colour content was a reliable cue to relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the colour content cue was not reliable during the first encounter with the images, but found no modulation of the gaze bias effect in the remaining time on the trial. These data suggest people do selectively encode decision-relevant information online. |
Alexander C. Schutz There's more behind it: Perceived depth order biases perceived numerosity/density Journal Article In: Journal of Vision, vol. 12, no. 12, pp. 1–16, 2012. @article{Schutz2012, Humans have a clear sense of the numerosity of elements in a surface. However, recent studies showed that the binding of features to the single elements is severely limited. By studying the relationship of depth order and perceived numerosity of overlapping, pseudotransparent surfaces, we show that the binding of elements to the surfaces is also limited. In transparent motion, anisotropies for perceived depth order and perceived numerosity were highly correlated: directions that were more likely to be perceived in the back were also more likely to be perceived as more numerous. The magnitude of anisotropies, however, was larger for depth order than for numerosity, and the correlation with eye movement anisotropies also developed earlier for depth order than for numerosity judgments. Presenting the surfaces at different disparities removed the anisotropies but led to a consistent bias to overestimate the numerosity of the surface in the back and to underestimate the surface in the front. The magnitude of this bias did not depend on dot density or lifetime. However, when the speed of motion was reduced or when the two surfaces were presented at different luminance polarities, the magnitude of anisotropies and the numerosity bias were greatly reduced. These results show that the numerosity of pseudotransparent surfaces is not processed independently of the depth structure in the scene. Instead there is a strong prior for higher numerosity in the back surface. |
Alexander C. Schutz; Julia Trommershäuser; Karl R. Gegenfurtner Dynamic integration of information about salience and value for saccadic eye movements Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 19, pp. 7547–7552, 2012. @article{Schutz2012a, Humans shift their gaze to a new location several times per second. It is still unclear what determines where they look next. Fixation behavior is influenced by the low-level salience of the visual stimulus, such as luminance, contrast, and color, but also by high-level task demands and prior knowledge. Under natural conditions, different sources of information might conflict with each other and have to be combined. In our paradigm, we trade off visual salience against expected value. We show that both salience and value information influence the saccadic end point within an object, but with different time courses. The relative weights of salience and value are not constant but vary from eye movement to eye movement, depending critically on the availability of the value information at the time when the saccade is programmed. Short-latency saccades are determined mainly by salience, but value information is taken into account for long-latency saccades. We present a model that describes these data by dynamically weighting and integrating detailed topographic maps of visual salience and value. These results support the notion of independent neural pathways for the processing of visual information and value. |
Kilian G. Seeber; Dirk Kerzel Cognitive load in simultaneous interpreting: Model meets data Journal Article In: International Journal of Bilingualism, vol. 16, no. 2, pp. 228–242, 2012. @article{Seeber2012, Seeber (2011) recently introduced a series of analytical cognitive load models, providing a detailed illustration of conjectured cognitive resource allocation during simultaneous interpreting. In this article, the authors set out to compare these models with data gathered in an experiment using task-evoked pupillary responses to measure online cognitive load during simultaneous interpreting when embedded in single-sentence context and discourse context. Verb-final and verb-initial constructions were analysed in terms of the load they cause to an inherently capacity-limited system when interpreted simultaneously into a verb-initial language like English. The results show larger pupil dilation with verb-final than with verb-initial constructions, suggesting higher cognitive load with asymmetrical structures. A tendency for reduced cognitive load in the discourse context compared to the sentence context was also found. These data support the models' prediction of an increase in cognitive load towards (and beyond) the end of verb-final constructions. |
Annie Roy-Charland; Jean Saint-Aubin; Raymond M. Klein; Gregory H. MacLean; Amanda Lalande; Ashley Bélanger Eye movements when reading: The importance of the word to the left of fixation Journal Article In: Visual Cognition, vol. 20, no. 3, pp. 328–355, 2012. @article{RoyCharland2012, In reading, it is well established that word processing can begin in the parafovea while the eyes are fixating the previous word. However, much less is known about the processing of information to the left of fixation. In two experiments, this issue was explored by combining a gaze-contingent display procedure preventing parafoveal preview and a letter detection task. All words were displayed as a series of xs until the reader fixated them, thereby preventing forward parafoveal processing, yet enabling backward parafoveal or postview processing. Results from both experiments revealed that readers were able to detect a target letter embedded in a word that was skipped. In those cases, the letter could only have been identified in postview (to the left of fixation), and detection rate decreased as the distance between the target letter and the eyes' landing position increased. Most importantly, for those skipped words, the typical missing-letter effect was observed with more omissions for target letters embedded in function than in content words. This can be taken as evidence that readers can extract basic prelexical information, such as the presence of a letter, in the parafoveal area to the left of fixation. Implications of these results are discussed in relation to models of eye movement control in reading and also in relation to models of the missing-letter effect. |
Octavio Ruiz; Michael A. Paradiso Macaque V1 representations in natural and reduced visual contexts: Spatial and temporal properties and influence of saccadic eye movements Journal Article In: Journal of Neurophysiology, vol. 108, no. 1, pp. 324–333, 2012. @article{Ruiz2012, Vision in natural situations is different from the paradigms generally used to study vision in the laboratory. In natural vision, stimuli usually appear in a receptive field as the result of saccadic eye movements rather than suddenly flashing into view. The stimuli themselves are rich with meaningful and recognizable objects rather than simple abstract patterns. In this study we examined the sensitivity of neurons in macaque area V1 to saccades and to complex background contexts. Using a variety of visual conditions, we find that natural visual response patterns are unique. Compared with standard laboratory situations, in more natural vision V1 responses have longer latency, slower time course, delayed orientation selectivity, higher peak selectivity, and lower amplitude. Furthermore, the influences of saccades and background type (complex picture vs. uniform gray) interact to give a distinctive, and presumably more natural, response pattern. While in most of the experiments natural images were used as background, we find that similar synthetic unnatural background stimuli produce nearly identical responses (i.e., complexity matters more than "naturalness"). These findings have important implications for our understanding of vision in more natural situations. They suggest that with the saccades used to explore complex images, visual context ("surround effects") would have a far greater effect on perception than in standard experiments with stimuli flashed on a uniform background. Perceptual thresholds for contrast and orientation should also be significantly different in more natural situations. |
Jason Rupp; Mario Dzemidzic; Tanya Blekher; John West; Siu L. Hui; Joanne Wojcieszek; Andrew J. Saykin; David A. Kareken; Tatiana M. Foroud Comparison of vertical and horizontal saccade measures and their relation to gray matter changes in premanifest and manifest Huntington disease Journal Article In: Journal of Neurology, vol. 259, no. 2, pp. 267–276, 2012. @article{Rupp2012, Saccades are a potentially important biomarker of Huntington disease (HD) progression, as saccadic abnormalities can be detected both cross-sectionally and longitudinally. Although vertical saccadic impairment was reported decades ago, recent studies have focused on horizontal saccades. This study investigated antisaccade (AS) and memory guided saccade (MG) impairment in both the horizontal and vertical directions in individuals with the disease-causing CAG expansion (CAG+; n = 74), using those without the expansion (CAG-; n = 47) as controls. Percentage of errors, latency, and variability of latency were used to measure saccadic performance. We evaluated the benefits of measuring saccades in both directions by comparing effect sizes of horizontal and vertical measures, and by investigating the correlation of saccadic measures with underlying gray matter loss. Consistent with previous studies, AS and MG impairments were detected prior to the onset of manifest disease. Furthermore, the largest effect sizes were found for vertical saccades. A subset of participants (12 CAG-, 12 premanifest CAG+, 7 manifest HD) underwent magnetic resonance imaging, and an automated parcellation and segmentation procedure was used to extract thickness and volume measures in saccade-generating and inhibiting regions. These measures were then tested for associations with saccadic impairment. Latency of vertical AS was significantly associated with atrophy in the left superior frontal gyrus, left inferior parietal lobule, and bilateral caudate nuclei. This study suggests an important role for measuring vertical saccades. Vertical saccades may possess more statistical power than horizontal saccades, and the latency of vertical AS is associated with gray matter loss in both cortical and subcortical regions important in saccade function. |
Leanne Quigley; Andrea L. Nelson; Jonathan Carriere; Daniel Smilek; Christine Purdon The effects of trait and state anxiety on attention to emotional images: An eye-tracking study Journal Article In: Cognition and Emotion, vol. 26, no. 8, pp. 1390–1411, 2012. @article{Quigley2012, Attentional biases for threatening stimuli have been implicated in the development of anxiety disorders. However, little is known about the relative influences of trait and state anxiety on attentional biases. This study examined the effects of trait and state anxiety on attention to emotional images. Low, mid, and high trait anxious participants completed two trial blocks of an eye-tracking task. Participants viewed image pairs consisting of one emotional (threatening or positive) and one neutral image while their eye movements were recorded. Between trial blocks, participants underwent an anxiety induction. Primary analyses examined the effects of trait and state anxiety on the proportion of viewing time on emotional versus neutral images. State anxiety was associated with increased attention to threatening images for participants, regardless of trait anxiety. Furthermore, when in a state of anxiety, relative to a baseline condition, durations of initial gaze and average fixation were longer on threat versus neutral images. These findings were specific to the threatening images; no anxiety-related differences in attention were found with the positive images. The implications of these results for future research, models of anxiety-related information processing, and clinical interventions for anxiety are discussed. |
Federico Raimondo; Juan E. Kamienkowski; Mariano Sigman; Diego Fernandez Slezak CUDAICA: GPU optimization of infomax-ICA EEG analysis Journal Article In: Computational Intelligence and Neuroscience, vol. 2012, pp. 1–8, 2012. @article{Raimondo2012, In recent years, Independent Component Analysis (ICA) has become a standard to identify relevant dimensions of the data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, which makes it almost prohibitive for online analysis of the data, as used in brain-computer interfaces. We show that, at almost no cost (a fast video card), the speed of ICA can be increased by about 25 fold. EEG data, which consist of many independent signals repeated across multiple channels, are very suitable for processing on the vector processors included in the graphical units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck, taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra (BLAS) functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x increase of performance for the ICA calculation. |
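The matrix-matrix multiply named above as the dominant cost is the kind of operation that maps directly onto a GPU. Purely as an illustration (this is not the CUDAICA implementation, which the abstract notes goes beyond a drop-in CUBLAS replacement), a minimal sketch of a single-precision GEMM via cuBLAS might look like the following; the matrix sizes and variable names are hypothetical placeholders.

```cpp
// Sketch only: C = A * B on the GPU with cuBLAS SGEMM.
// In infomax ICA this shape typically pairs a channels-x-channels unmixing
// matrix with a channels-x-samples data block (sizes here are placeholders).
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    const int m = 64, k = 64, n = 100000;
    std::vector<float> hA(m * k, 1.0f), hB(k * n, 1.0f), hC(m * n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, sizeof(float) * m * k);
    cudaMalloc((void**)&dB, sizeof(float) * k * n);
    cudaMalloc((void**)&dC, sizeof(float) * m * n);
    cudaMemcpy(dA, hA.data(), sizeof(float) * m * k, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), sizeof(float) * k * n, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // Column-major GEMM: dC (m x n) = alpha * dA (m x k) * dB (k x n) + beta * dC
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);

    cudaMemcpy(hC.data(), dC, sizeof(float) * m * n, cudaMemcpyDeviceToHost);
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

As the abstract points out, issuing such calls one at a time incurs kernel-launch overhead; the reported speed-up comes from a custom GPU implementation rather than from this kind of naive substitution.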
Tara Rastgardani; Mathias Abegg; Jason J. S. Barton The inter-trial spatial biases of stimuli and goals in saccadic programming Journal Article In: Journal of Eye Movement Research, vol. 7, no. 4, pp. 1–7, 2012. @article{Rastgardani2012, Prior studies have shown an ‘alternate antisaccade-goal bias', in that the saccadic landing points of antisaccades were displaced towards the location of antisaccade goals used in other trials in the same experimental block. Thus the motor response in one trial induced a spatial bias of a motor response in another trial. In this study we investigated whether sensory information, i.e. the location of a visual stimulus, might have a spatial effect on a motor response too. Such an effect might be attractive, as with the alternate antisaccade-goal bias, or repulsive. For this purpose we used blocks of antisaccade trials, prosaccade trials or mixed trials in order to study the alternate-trial biases generated by antisaccade goals, antisaccade stimuli, and prosaccade goals. In contrast to the effects of alternate antisaccade goals described in prior studies, alternate antisaccade stimuli generated a significant repulsive bias of about 1.8°; furthermore, if stimulus and motor goal coincide, as with an alternate prosaccade, the repulsive effect of a stimulus prevails, causing a bias of about 0.9°. Taken together with prior results, these findings may reflect averaging of current and alternate trial activity in a salience map, with excitatory activity from the motor response and inhibitory activity from the sensory input. |
Tara Rastgardani; Victor Lau; Jason J. S. Barton; Mathias Abegg Trial history biases the spatial programming of antisaccades Journal Article In: Experimental Brain Research, vol. 222, no. 3, pp. 175–183, 2012. @article{Rastgardani2012a, The historical context in which saccades are made influences their latency and error rates, but less is known about how context influences their spatial parameters. We recently described a novel spatial bias for antisaccades, in which the endpoints of these responses deviate towards alternative goal locations used in the same experimental block, and showed that expectancy (prior probability) is at least partly responsible for this 'alternate-goal bias'. In this report we asked whether trial history also plays a role. Subjects performed antisaccades to a stimulus randomly located on the horizontal meridian, on a 40° angle downwards from the horizontal meridian, or on a 40° upward angle, with all three locations equally probable on any given trial. We found that the endpoints of antisaccades were significantly displaced towards the goal location of not only the immediately preceding trial (n - 1) but also the penultimate (n - 2) trial. Furthermore, this bias was mainly present for antisaccades with a short latency of <250 ms and was rapidly corrected by secondary saccades. We conclude that the location of recent antisaccades biases the spatial programming of upcoming antisaccades, that this historical effect persists over many seconds, and that it influences mainly rapidly generated eye movements. Because corrective saccades eliminate the historical bias, we suggest that the bias arises in processes generating the response vector, rather than processes generating the perceptual estimate of goal location. |
Jason Satel; Zhiguo Wang Investigating a two causes theory of inhibition of return Journal Article In: Experimental Brain Research, vol. 223, no. 4, pp. 469–478, 2012. @article{Satel2012, It has recently been demonstrated that there are independent sensory and motor mechanisms underlying inhibition of return (IOR) when measured with oculomotor responses (Wang et al. in Exp Brain Res 218:441-453, 2012). However, these results are seemingly in conflict with previous empirical results which led to the proposal that there are two mutually exclusive flavors of IOR (Taylor and Klein in J Exp Psychol Hum Percept Perform 26:1639-1656, 2000). The observed differences in empirical results across these studies and the theoretical frameworks that were proposed based on the results are likely due to differences in the experimental designs. The current experiments establish that the existence of additive sensory and motor contributions to IOR does not depend on target type, repeated spatiotopic stimulation, attentional control settings, or a temporal gap between fixation offset and cue onset, when measured with saccadic responses. Furthermore, our experiments show that the motor mechanism proposed by Wang et al. (Exp Brain Res 218:441-453, 2012) is likely restricted to the oculomotor system, since the additivity effect does not carry over into the manual response modality. |
Daniel J. Schad; Antje Nuthmann; Ralf Engbert Your mind wanders weakly, your mind wanders deeply: Objective measures reveal mindless reading at different levels Journal Article In: Cognition, vol. 125, no. 2, pp. 179–194, 2012. @article{Schad2012, When the mind wanders, attention turns away from the external environment and cognitive processing is decoupled from perceptual information. Mind wandering is usually treated as a dichotomy (dichotomy-hypothesis), and is often measured using self-reports. Here, we propose the levels of inattention hypothesis, which postulates attentional decoupling to graded degrees at different hierarchical levels of cognitive processing. To measure graded levels of attentional decoupling during reading we introduce the sustained attention to stimulus task (SAST), which is based on the psychophysics of error detection. Under experimental conditions likely to induce mind wandering, we found that subjects were less likely to notice errors that required high-level processing for their detection as opposed to errors that only required low-level processing. Eye tracking revealed that, before errors were overlooked, influences of high- and low-level linguistic variables on eye fixations were reduced in a graded fashion, indicating episodes of mindless reading at weak and deep levels. Individual fixation durations predicted overlooking of lexical errors 5 s before they occurred. Our findings support the levels of inattention hypothesis and suggest that different levels of mindless reading can be measured behaviorally in the SAST. Using eye tracking to detect mind wandering online represents a promising approach for the development of new techniques to study mind wandering and to ameliorate its negative consequences. |
Elisa Scheller; Christian Büchel; Matthias Gamer Diagnostic features of emotional expressions are processed preferentially Journal Article In: PLoS ONE, vol. 7, no. 7, pp. e41792, 2012. @article{Scheller2012, Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. |
Lisette J. Schmidt; Artem V. Belopolsky; Jan Theeuwes The presence of threat affects saccade trajectories Journal Article In: Visual Cognition, vol. 20, no. 3, pp. 284–299, 2012. @article{Schmidt2012, In everyday life, fast identification and processing of threat-related stimuli is of critical importance for survival. Previous studies suggested that spatial attention is automatically allocated to threatening stimuli, such as angry faces. However, in the previous studies the threatening stimuli were not completely irrelevant for the task. In the present study we used saccadic curvature to investigate whether attention is automatically allocated to threatening emotional information. Participants had to make an endogenous saccade up or down while an irrelevant face paired with an object was present in the periphery. The eyes curved away more from the angry faces than from either neutral or happy faces. This effect was not observed when the faces were inverted, excluding the possible role of low-level differences. Since the angry faces were completely irrelevant to the task, the results suggest that attention is automatically allocated to the threatening stimuli, which generates activity in the oculomotor system, and biases behaviour. |
Eyal M. Reingold; Erik D. Reichle; Mackenzie G. Glaholt; Heather Sheridan Direct lexical control of eye movements in reading: Evidence from a survival analysis of fixation durations Journal Article In: Cognitive Psychology, vol. 65, no. 2, pp. 177–206, 2012. @article{Reingold2012, Participants' eye movements were monitored in an experiment that manipulated the frequency of target words (high vs. low) as well as their availability for parafoveal processing during fixations on the pre-target word (valid vs. invalid preview). The influence of the word-frequency by preview validity manipulation on the distributions of first fixation duration was examined by using ex-Gaussian fitting as well as a novel survival analysis technique which provided precise estimates of the timing of the first discernible influence of word frequency on first fixation duration. Using this technique, we found a significant influence of word frequency on fixation duration in normal reading (valid preview) as early as 145 ms from the start of fixation. We also demonstrated an equally rapid non-lexical influence on first fixation duration as a function of initial landing position (location) on target words. The time-course of frequency effects, but not location effects, was strongly influenced by preview validity, demonstrating the crucial role of parafoveal processing in enabling direct lexical control of reading fixation times. Implications for models of eye-movement control are discussed. |
Robert M. G. Reinhart; Richard P. Heitz; Braden A. Purcell; Pauline K. Weigand; Jeffrey D. Schall; Geoffrey F. Woodman Homologous mechanisms of visuospatial working memory maintenance in macaque and human: Properties and sources Journal Article In: Journal of Neuroscience, vol. 32, no. 22, pp. 7711–7722, 2012. @article{Reinhart2012, Although areas of frontal cortex are thought to be critical for maintaining information in visuospatial working memory, the event-related potential (ERP) index of maintenance is found over posterior cortex in humans. In the present study, we reconcile these seemingly contradictory findings. Here, we show that macaque monkeys and humans exhibit the same posterior ERP signature of working memory maintenance that predicts the precision of the memory-based behavioral responses. In addition, we show that the specific pattern of rhythmic oscillations in the alpha band, recently demonstrated to underlie the human visual working memory ERP component, is also present in monkeys. Next, we concurrently recorded intracranial local field potentials from two prefrontal and another frontal cortical area to determine their contribution to the surface potential indexing maintenance. The local fields in the two prefrontal areas, but not the cortex immediately posterior, exhibited amplitude modulations, timing, and relationships to behavior indicating that they contribute to the generation of the surface ERP component measured from the distal posterior electrodes. Rhythmic neural activity in the theta and gamma bands during maintenance provided converging support for the engagement of the same brain regions. These findings demonstrate that nonhuman primates have homologous electrophysiological signatures of visuospatial working memory to those of humans and that a distributed neural network, including frontal areas, underlies the posterior ERP index of visuospatial working memory maintenance. |
M. L. Reinholdt-Dunne; Karin Mogg; V. Benson; B. P. Bradley; M. G. Hardin; Simon P. Liversedge; Daniel S. Pine; M. Ernst Anxiety and selective attention to angry faces: An antisaccade study Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 1, pp. 54–65, 2012. @article{ReinholdtDunne2012, Cognitive models of anxiety propose that anxiety is associated with an attentional bias for threat, which increases vulnerability to emotional distress and is difficult to control. The study aim was to investigate relationships between the effects of threatening information, anxiety, and attention control on eye movements. High and low trait anxious individuals performed antisaccade and prosaccade tasks with angry, fearful, happy, and neutral faces. Results indicated that high-anxious participants showed a greater antisaccade cost for angry than neutral faces (i.e., relatively slower to look away from angry faces), compared with low-anxious individuals. This bias was not found for fearful or happy faces. The bias for angry faces was not related to individual differences in attention control assessed on self-report and behavioural measures. Findings support the view that anxiety is associated with difficulty in using cognitive control resources to inhibit attentional orienting to angry faces, and that attention control is multifaceted. |
Eva Reinisch; Andrea Weber Adapting to suprasegmental lexical stress errors in foreign-accented speech Journal Article In: The Journal of the Acoustical Society of America, vol. 132, no. 2, pp. 1165–1176, 2012. @article{Reinisch2012, Can native listeners rapidly adapt to suprasegmental mispronunciations in foreign-accented speech? To address this question, an exposure-test paradigm was used to test whether Dutch listeners can improve their understanding of non-canonical lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard a Dutch story with only initially stressed words, whereas another group also heard 28 words with canonical second-syllable stress (e.g., EEKhoorn, "squirrel", was replaced by koNIJN, "rabbit"; capitals indicate stress). The 28 words, however, were non-canonically marked by the Hungarian speaker with high pitch and amplitude on the initial syllable, both of which are stress cues in Dutch. After exposure, listeners' eye movements were tracked to Dutch target-competitor pairs with segmental overlap but different stress patterns, while they listened to new words from the same Hungarian speaker (e.g., HERsens, herSTEL, "brain," "recovery"). Listeners who had previously heard non-canonically produced words distinguished target-competitor pairs better than listeners who had only been exposed to Hungarian accent with canonical forms of lexical stress. Even a short exposure thus allows listeners to tune into speaker-specific realizations of words' suprasegmental make-up, and use this information for word recognition. |
Kathleen Pirog Revill; Daniel H. Spieler The effect of lexical frequency on spoken word recognition in young and older listeners Journal Article In: Psychology and Aging, vol. 27, no. 1, pp. 80–87, 2012. @article{Revill2012, When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place a greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults' eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degradation of auditory quality in younger listeners does not reproduce this result. These data are most consistent with an increased role for lexical frequency with age. |
Helen J. Richards; Valerie Benson; Julie A. Hadwin The attentional processes underlying impaired inhibition of threat in anxiety: The remote distractor effect Journal Article In: Cognition and Emotion, vol. 26, no. 5, pp. 934–942, 2012. @article{Richards2012, The current study explored the proposition that anxiety is associated with impaired inhibition of threat. Using a modified version of the remote distractor paradigm, we considered whether this impairment is related to attentional capture by threat, difficulties disengaging from threat presented within foveal vision, or difficulties orienting to task-relevant stimuli when threat is present in central, parafoveal and peripheral locations in the visual field. Participants were asked to direct their eyes towards and identify a target in the presence and absence of a distractor (an angry, happy or neutral face). Trait anxiety was associated with a delay in initiating eye movements to the target in the presence of central, parafoveal and peripheral threatening distractors. These findings suggest that elevated anxiety is linked to difficulties inhibiting task-irrelevant threat presented across a broad region of the visual field. |
Gerulf Rieger; Ritch C. Savin-Williams The eyes have it: Sex and sexual orientation differences in pupil dilation patterns Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e40256, 2012. @article{Rieger2012, Recent research suggests profound sex and sexual orientation differences in sexual response. These results, however, are based on measures of genital arousal, which have potential limitations such as volunteer bias and differential measures for the sexes. The present study introduces a measure less affected by these limitations. We assessed the pupil dilation of 325 men and women of various sexual orientations to male and female erotic stimuli. Results supported hypotheses. In general, self-reported sexual orientation corresponded with pupil dilation to men and women. Among men, substantial dilation to both sexes was most common in bisexual-identified men. In contrast, among women, substantial dilation to both sexes was most common in heterosexual-identified women. Possible reasons for these differences are discussed. Because the measure of pupil dilation is less invasive than previous measures of sexual response, it allows for studying diverse age and cultural populations, usually not included in sexuality research. |
Hector Rieiro; Susana Martinez-Conde; Andrew P. Danielson; Jose L. Pardo-Vazquez; Nishit Srivastava; Stephen L. Macknik Optimizing the temporal dynamics of light to human perception Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. 19828–19833, 2012. @article{Rieiro2012, No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings. |
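For readers unfamiliar with the two classical models contrasted here, the textbook statement of Bloch's law of temporal summation (general background, not taken from this paper) is that, below a critical duration, the visual response depends only on the product of stimulus intensity and duration, and beyond that duration it levels off; the Broca-Sulzer model instead has perceived brightness peaking at intermediate durations (50-100 ms in this study).

```latex
% Bloch's law (temporal summation): for brief flashes, only the total
% luminous energy matters, so intensity I and duration t trade off exactly.
\[
  I \cdot t = \text{constant} \quad \text{for } t \le t_c ,
\]
% whereas for t > t_c the response depends on intensity alone -- the plateau
% referred to in the abstract.
```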
Rebecca Rienhoff; Joseph Baker; Lennart Fischer; Bernd Strauss; Jörg Schorer Field of vision influences sensory-motor control of skilled and less-skilled dart players Journal Article In: Journal of Sports Science and Medicine, vol. 11, no. 3, pp. 542–550, 2012. @article{Rienhoff2012, One characteristic of perceptual expertise in sport and other domains is known as ‘the quiet eye', which assumes that fixated information is processed during gaze stability and insufficient spatial information leads to a decrease in performance. The aims of this study were a) replicating inter- and intra-group variability and b) investigating the extent to which quiet eye supports information pick-up of varying fields of vision (i.e., central versus peripheral) using a specific eye-tracking paradigm to compare different skill levels in a dart throwing task. Differences between skill levels were replicated at baseline, but no significant differences in throwing performance were revealed among the visual occlusion conditions. Findings are generally in line with the association between quiet eye duration and aiming performance, but raise questions regarding the relevance of central vision information pick-up for the quiet eye. |
Simon Rigoulot; Marc D. Pell Seeing emotion with your ears: Emotional prosody implicitly guides visual attention to faces Journal Article In: PLoS ONE, vol. 7, no. 1, pp. e30740, 2012. @article{Rigoulot2012, Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. |
Evan F. Risko; Nicola C. Anderson; Sophie Lanthier; Alan Kingstone Curious eyes: Individual differences in personality predict eye movement behavior in scene-viewing Journal Article In: Cognition, vol. 122, no. 1, pp. 86–90, 2012. @article{Risko2012, Visual exploration is driven by two main factors - the stimuli in our environment, and our own individual interests and intentions. Research investigating these two aspects of attentional guidance has focused almost exclusively on factors common across individuals. The present study took a different tack, and examined the role played by individual differences in personality. Our findings reveal that trait curiosity is a robust and reliable predictor of an individual's eye movement behavior in scene-viewing. These findings demonstrate that who a person is relates to how they move their eyes. |
Sarah Risse; Reinhold Kliegl Evidence for delayed parafoveal-on-foveal effects from word n+2 in reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 1026–1042, 2012. @article{Risse2012, During reading information is acquired from word(s) beyond the word that is currently looked at. It is still an open question whether such parafoveal information can influence the current viewing of a word, and if so, whether such parafoveal-on-foveal effects are attributable to distributed processing or to mislocated fixations which occur when the eyes are directed at a parafoveal word but land on another word instead. In two display-change experiments, we orthogonally manipulated the preview and target difficulty of word n+2 to investigate the role of mislocated fixations on the previous word n+1. When the eyes left word n, an easy or difficult word n+2 preview was replaced by an easy or difficult n+2 target word. In Experiment 1, n+2 processing difficulty was manipulated by means of word frequency (i.e., easy high-frequency vs. difficult low-frequency word n+2). In Experiment 2, we varied the visual familiarity of word n+2 (i.e., easy lower-case vs. difficult alternating-case writing). Fixations on the short word n+1, which were likely to be mislocated, were nevertheless not influenced by the difficulty of the adjacent word n+2, the hypothesized target of the mislocated fixation. Instead word n+1 was influenced by the preview difficulty of word n+2, representing a delayed parafoveal-on-foveal effect. The results challenge the mislocated-fixation hypothesis as an explanation of parafoveal-on-foveal effects and provide new insight into the complex spatial and temporal effect structure of processing inside the perceptual span during reading. |
Kay L. Ritchie; Amelia R. Hunt; Arash Sahraie Trans-saccadic priming in hemianopia: Sighted-field sensitivity is boosted by a blind-field prime Journal Article In: Neuropsychologia, vol. 50, no. 5, pp. 997–1005, 2012. @article{Ritchie2012, We experience visual stability despite shifts of the visual array across the retina produced by eye movements. A process known as remapping is thought to keep track of the spatial locations of objects as they move on the retina. We explored remapping in damaged visual cortex by presenting a stimulus in the blind field of two patients with hemianopia. When they executed a saccadic eye movement that would bring the stimulated location into the sighted field, reported awareness of the stimulus increased, even though the stimulus was removed before the saccade began and so never actually fell in the sighted field. Moreover, when a location was primed by a blind-field stimulus and then brought into the sighted field by a saccade, detection sensitivity for near-threshold targets appearing at this location increased dramatically. The results demonstrate that brain areas supporting conscious vision are not necessary for remapping, and suggest visual stability is maintained for salient objects even when they are not consciously perceived. |
Martin Rolfs; Marisa Carrasco Rapid simultaneous enhancement of visual sensitivity and perceived contrast during saccade preparation Journal Article In: Journal of Neuroscience, vol. 32, no. 40, pp. 13744–13752, 2012. @article{Rolfs2012, Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ∼300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. |
Maria C. Romero; Ilse C. Van Dromme; Peter Janssen Responses to two-dimensional shapes in the macaque anterior intraparietal area Journal Article In: European Journal of Neuroscience, vol. 36, no. 3, pp. 2324–2334, 2012. @article{Romero2012, Neurons in the macaque dorsal visual stream respond to the visual presentation of objects in the context of a grasping task and to three-dimensional (3D) surfaces defined by binocular disparity, but little is known about the neural representation of two-dimensional (2D) shape in the dorsal stream. We recorded the activity of single neurons in the macaque anterior intraparietal area (AIP), which is known to be crucial for grasping, during the presentation of images of objects and silhouette, outline and line-drawing versions of these images (contour stimuli). The vast majority of AIP neurons responding selectively to 2D images were also selective for at least one of the contour stimuli with the same boundary shape, suggesting that the boundary is sufficient for the image selectivity of most AIP neurons. Furthermore, a subset of these neurons with foveal receptive fields generally preserved the shape preference across positions, whereas for more than half of the AIP population the center of the receptive field was at a parafoveal location with less tolerance to changes in stimulus position. AIP neurons frequently exhibited shape selectivity across different stimulus sizes. These results demonstrate that AIP neurons encode not only 3D but also 2D shape features. |
Jessica C. Roth; Steven L. Franconeri Asymmetric coding of categorical spatial relations in both language and vision Journal Article In: Frontiers in Psychology, vol. 3, pp. 464, 2012. @article{Roth2012, Describing certain types of spatial relationships between a pair of objects requires that the objects are assigned different "roles" in the relation, e.g., "A is above B" is different than "B is above A." This asymmetric representation places one object in the "target" or "figure" role and the other in the "reference" or "ground" role. Here we provide evidence that this asymmetry may be present not just in spatial language, but also in perceptual representations. More specifically, we describe a model of visual spatial relationship judgment where the designation of the target object within such a spatial relationship is guided by the location of the "spotlight" of attention. To demonstrate the existence of this perceptual asymmetry, we cued attention to one object within a pair by briefly previewing it, and showed that participants were faster to verify the depicted relation when that object was the linguistic target. Experiment 1 demonstrated this effect for left-right relations, and Experiment 2 for above-below relations. These results join several other types of demonstrations in suggesting that perceptual representations of some spatial relations may be asymmetrically coded, and further suggest that the location of selective attention may serve as the mechanism that guides this asymmetry. |
Elsie Premereur; Wim Vanduffel; Peter Janssen Local field potential activity associated with temporal expectations in the macaque lateral intraparietal area Journal Article In: Journal of Cognitive Neuroscience, vol. 24, no. 6, pp. 1314–1330, 2012. @article{Premereur2012, Oscillatory brain activity is attracting increasing interest in cognitive neuroscience. Numerous EEG (magnetoencephalography) and local field potential (LFP) measurements have related cognitive functions to different types of brain oscillations, but the functional significance of these rhythms remains poorly understood. Despite its proven value, LFP activity has not been extensively tested in the macaque lateral intraparietal area (LIP), which has been implicated in a wide variety of cognitive control processes. We recorded action potentials and LFPs in area LIP during delayed eye movement tasks and during a passive fixation task, in which the time schedule was fixed so that temporal expectations about task-relevant cues could be formed. LFP responses in the gamma band discriminated reliably between saccade targets and distractors inside the receptive field (RF). Alpha and beta responses were much less strongly affected by the presence of a saccade target, but rose sharply in the waiting period before the go signal. Surprisingly, conditions without visual stimulation of the LIP RF evoked robust LFP responses in every frequency band, most prominently in those below 50 Hz, precisely time-locked to the expected time of stimulus onset in the RF. These results indicate that in area LIP, oscillations in the LFP, which reflect synaptic input and local network activity, are tightly coupled to the temporal expectation of task-relevant cues. |
Elsie Premereur; Wim Vanduffel; Pieter R. Roelfsema; Peter Janssen Frontal eye field microstimulation induces task-dependent gamma oscillations in the lateral intraparietal area Journal Article In: Journal of Neurophysiology, vol. 108, no. 5, pp. 1392–1402, 2012. @article{Premereur2012a, Macaque frontal eye fields (FEF) and the lateral intraparietal area (LIP) are high-level oculomotor control centers that have been implicated in the allocation of spatial attention. Electrical microstimulation of macaque FEF elicits functional magnetic resonance imaging (fMRI) activations in area LIP, but no study has yet investigated the effect of FEF microstimulation on LIP at the single-cell or local field potential (LFP) level. We recorded spiking and LFP activity in area LIP during weak, subthreshold microstimulation of the FEF in a delayed-saccade task. FEF microstimulation caused a highly time- and frequency-specific, task-dependent increase in gamma power in retinotopically corresponding sites in LIP: FEF microstimulation produced a significant increase in LIP gamma power when a saccade target appeared and remained present in the LIP receptive field (RF), whereas less specific increases in alpha power were evoked by FEF microstimulation for saccades directed away from the RF. Stimulating FEF with weak currents had no effect on LIP spike rates or on the gamma power during memory saccades or passive fixation. These results provide the first evidence for task-dependent modulations of LFPs in LIP caused by top-down stimulation of FEF. Since the allocation and disengagement of spatial attention in visual cortex have been associated with increases in gamma and alpha power, respectively, the effects of FEF microstimulation on LIP are consistent with the known effects of spatial attention. |
Jessica M. Price; Anthony J. Sanford Reading in healthy ageing: The influence of information structuring in sentences Journal Article In: Psychology and Aging, vol. 27, no. 2, pp. 529–540, 2012. @article{Price2012, In three experiments, we investigated the cognitive effects of linguistic prominence to establish whether focus plays a similar or different role in modulating language processing in healthy ageing. Information structuring through the use of cleft sentences is known to increase the processing efficiency of anaphoric references to elements contained with a marked focus structure. It also protects these elements from becoming suppressed in the wake of subsequent information, suggesting selective mechanisms of enhancement and suppression. In Experiment 1 (using self-paced reading), we found that focus enhanced (faster) integration for anaphors referring to words contained within the scope of focus; but suppressed (slower) integration for anaphors to words contained outside of the scope of focus; and in some cases, the effects were larger in older adults. In Experiment 2 (using change detection), we showed that older adults relied more on the linguistic structure to enhance change detection when the changed word was in focus. In Experiment 3 (using delayed probe recognition and eye-tracking), we found that older adults recognized probes more accurately when they were made to elements within the scope of focus than when they were outside the scope of focus. These results indicate that older adults' ability to selectively attend or suppress concepts in a marked focus structure is preserved. |
M. Victoria Puig; Earl K. Miller The role of prefrontal dopamine D1 receptors in the neural mechanisms of associative learning Journal Article In: Neuron, vol. 74, no. 5, pp. 874–886, 2012. @article{Puig2012, Dopamine is thought to play a major role in learning. However, while dopamine D1 receptors (D1Rs) in the prefrontal cortex (PFC) have been shown to modulate working memory-related neural activity, their role in the cellular basis of learning is unknown. We recorded activity from multiple electrodes while injecting the D1R antagonist SCH23390 in the lateral PFC as monkeys learned visuomotor associations. Blocking D1Rs impaired learning of novel associations and decreased cognitive flexibility but spared performance of already familiar associations. This suggests a greater role for prefrontal D1Rs in learning new, rather than performing familiar, associations. There was a corresponding greater decrease in neural selectivity and increase in alpha and beta oscillations in local field potentials for novel than for familiar associations. Our results suggest that weak stimulation of D1Rs observed in aging and psychiatric disorders may impair learning and PFC function by reducing neural selectivity and exacerbating neural oscillations associated with inattention and cognitive deficits. |
Braden A. Purcell; Pauline K. Weigand; Jeffrey D. Schall Supplementary eye field during visual search: Salience, cognitive control, and performance monitoring Journal Article In: Journal of Neuroscience, vol. 32, no. 30, pp. 10273–10285, 2012. @article{Purcell2012, How supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFPs) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ~2% of SEF neurons and ~8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFPs exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance. |
Aidan A. Thompson; Christopher V. Glover; Denise Y. P. Henriques Allocentrically implied target locations are updated in an eye-centred reference frame Journal Article In: Neuroscience Letters, vol. 514, no. 2, pp. 214–218, 2012. @article{Thompson2012, When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. |
Rebecca M. Todd; Deborah Talmi; Taylor W. Schmitz; Josh Susskind; Adam K. Anderson Psychophysical and neural evidence for emotion-enhanced perceptual vividness Journal Article In: Journal of Neuroscience, vol. 32, no. 33, pp. 11201–11212, 2012. @article{Todd2012, Highly emotional events are associated with vivid 'flashbulb' memories. Here we examine whether the flashbulb metaphor characterizes a previously unknown emotion-enhanced vividness (EEV) during initial perceptual experience. Using a magnitude estimation procedure, human observers estimated the relative magnitude of visual noise overlaid on scenes. After controlling for computational metrics of objective visual salience, emotional salience was associated with decreased noise, or heightened perceptual vividness, demonstrating EEV, which predicted later memory vividness. ERPs revealed a posterior P2 component at ~200 ms that was associated with both increased emotional salience and decreased objective noise levels, consistent with EEV. BOLD response in the lateral occipital complex (LOC), insula, and amygdala predicted online EEV. The LOC and insula represented complementary influences on EEV, with the amygdala statistically mediating both. These findings indicate that the metaphorical vivid light surrounding emotional memories is embodied directly in perceptual cortices during initial experience, supported by cortico-limbic interactions. |
Jianliang Tong; Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor The effect of perceptual grouping on perisaccadic spatial distortions Journal Article In: Journal of Vision, vol. 12, no. 10, pp. 1–16, 2012. @article{Tong2012, Perisaccadic spatial distortion (PSD) occurs when a target is flashed immediately before the onset of a saccade and it appears displaced in the direction of the saccade. In previous studies, the magnitude of PSD of a single target was affected by multiple experimental parameters, such as the target's luminance and its position relative to the central fixation target. Here we describe a contextual effect in which the magnitude of the PSD for a target was influenced by the synchronous presentation of another target: PSD for simultaneously presented targets was more uniform than when each was presented individually. Perisaccadic compression was ruled out as a causal factor, and the results suggest that both low- and high-level perceptual grouping mechanisms may account for the change in PSD magnitude. We speculate that perceptual grouping could play a key role in preserving shape constancy during saccadic eye movements. |
Joseph C. Toscano; Bob McMurray Cue-integration and context effects in speech: Evidence against speaking-rate normalization Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 6, pp. 1284–1301, 2012. @article{Toscano2012, Listeners are able to accurately recognize speech despite variation in acoustic cues across contexts, such as different speaking rates. Previous work has suggested that listeners use rate information (indicated by vowel length; VL) to modify their use of context-dependent acoustic cues, like voice-onset time (VOT), a primary cue to voicing. We present several experiments and simulations that offer an alternative explanation: that listeners treat VL as a phonetic cue rather than as an indicator of speaking rate, and that they rely on general cue-integration principles to combine information from VOT and VL. We demonstrate that listeners use the two cues independently, that VL is used in both naturally produced and synthetic speech, and that the effects of stimulus naturalness can be explained by a cue-integration model. Together, these results suggest that listeners do not interpret VOT relative to rate information provided by VL and that the effects of speaking rate can be explained by more general cue-integration principles. |
Alison M. Trude; Sarah Brown-Schmidt Talker-specific perceptual adaptation during online speech perception Journal Article In: Language and Cognitive Processes, vol. 27, no. 7-8, pp. 979–1001, 2012. @article{Trude2012, Despite the ubiquity of between-talker differences in accent and dialect, little is known about how listeners adjust to this source of variability as language is perceived in real time. In three experiments, we examined whether, and when, listeners can use specific knowledge of a particular talker's accent during on-line speech processing. Listeners were exposed to the speech of two talkers, a male who had an unfamiliar regional dialect of American English, in which the /æ/ vowel is raised to /ei/ only before /g/ (e.g., bag is pronounced /beig/), and a female talker without the dialect. In order to examine how knowledge of a particular talker's accent influenced language processing, we examined listeners' interpretation of unaccented words such as back and bake in contexts that included a competitor like bag. If interpretation processes are talker-specific, the pattern of competition from bag should vary depending on how that talker pronounces the competitor word. In all three experiments, listeners rapidly used their knowledge of how the talker would have pronounced bag to either rule out or include bag as a temporary competitor. Providing a cue to talker identity prior to the critical word strengthened these effects. These results are consistent with views of language processing in which multiple sources of information, including previous experience with the current talker and contextual cues, are rapidly integrated during lexical activation and selection processes. |
Hans A. Trukenbrod; Ralf Engbert Eye movements in a sequential scanning task: Evidence for distributed processing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–12, 2012. @article{Trukenbrod2012, Current models of eye movement control are derived from theories assuming serial processing of single items or from theories based on parallel processing of multiple items at a time. This issue has persisted because most investigated paradigms generated data compatible with both serial and parallel models. Here, we study eye movements in a sequential scanning task, where stimulus n indicates the position of the next stimulus n + 1. We investigate whether eye movements are controlled by sequential attention shifts when the task requires serial order of processing. Our measures of distributed processing in the form of parafoveal-on-foveal effects, long-range modulations of target selection, and skipping saccades provide evidence against models strictly based on serial attention shifts. We conclude that our results lend support to parallel processing as a strategy for eye movement control. |
Jie-Li Tsai; Reinhold Kliegl; Ming Yan Parafoveal semantic information extraction in traditional Chinese reading Journal Article In: Acta Psychologica, vol. 141, no. 1, pp. 17–23, 2012. @article{Tsai2012, Semantic information extraction from the parafovea has been reported only in simplified Chinese for a special subset of characters and its generalizability has been questioned. This study uses traditional Chinese, which differs from simplified Chinese in visual complexity and in mapping semantic forms, to demonstrate access to parafoveal semantic information during reading of this script. Preview duration modulates various types (identical, phonological, and unrelated) of parafoveal information extraction. Parafoveal semantic extraction is more elusive in English; therefore, we conclude that such effects in Chinese are presumably caused by substantial cross-language differences from alphabetic scripts. The property of Chinese characters carrying rich lexical information in a small region provides the possibility of semantic extraction in the parafovea. |
Siliang Tang; Ronan G. Reilly; Christian Vorstius EyeMap: A software system for visualizing and analyzing eye movement data in reading Journal Article In: Behavior Research Methods, vol. 44, no. 2, pp. 420–438, 2012. @article{Tang2012, We have developed EyeMap, a freely available software system for visualizing and analyzing eye movement data specifically in the area of reading research. As compared with similar systems, including commercial ones, EyeMap has more advanced features for text stimulus presentation, interest area extraction, eye movement data visualization, and experimental variable calculation. It is unique in supporting binocular data analysis for unicode, proportional, and nonproportional fonts and spaced and unspaced scripts. Consequently, it is well suited for research on a wide range of writing systems. To date, it has been used with English, German, Thai, Korean, and Chinese. EyeMap is platform independent and can also work on mobile devices. An important contribution of the EyeMap project is a device-independent XML data format for describing data from a wide range of reading experiments. An online version of EyeMap allows researchers to analyze and visualize reading data through a standard Web browser. This facility could, for example, serve as a front-end for online eye movement data corpora. |
Luminita Tarita-Nistor; Michael H. Brent; Martin J. Steinbach; Esther G. González Fixation patterns in maculopathy: From binocular to monocular viewing Journal Article In: Optometry and Vision Science, vol. 89, no. 3, pp. 277–287, 2012. @article{TaritaNistor2012, PURPOSE: The goal of this study was to explore binocular coordination during fixation in patients with age-related macular degeneration (AMD) and to investigate whether there is a shift in eye position when the viewing condition changes from binocular to monocular. METHODS: Sixteen people with normal vision and 12 patients with AMD were asked to look at a 3 deg fixation target with both eyes and with each eye individually while the fellow eye was covered by an infrared filter. Fixational eye movements were recorded for both eyes with an EyeLink eye-tracker in all conditions. The shift in eye position at the end of every fixation period was calculated for each eye. RESULTS: All people with normal vision as well as the majority of patients had good binocular coordination during fixation in the binocular viewing condition. When the viewing condition changed from binocular to monocular, three patients (25%) had atypical shifts in their eye position. The shift was related to (1) loss of fixational control when the better eye was covered and the worse eye viewed the target or (2) a slow drift of the viewing eye that was associated with a large phoria in the covered eye. CONCLUSIONS: Patients with AMD have good binocular ocular motor coordination during fixation. A change in viewing condition from binocular to monocular can lead to disturbances in ocular motor control for some patients, especially in the worse eye. |
A. Caglar Tas; Michael D. Dodd; Andrew Hollingworth The role of surface feature and spatiotemporal continuity in object-based inhibition of return Journal Article In: Visual Cognition, vol. 20, no. 1, pp. 29–47, 2012. @article{Tas2012, The contribution of surface feature continuity to object-based inhibition of return (IOR) was tested in three experiments. Participants executed a saccade to a previously fixated or unfixated colored disk after the object had moved to a new location. Object-based IOR was observed as lengthened saccade latency to a previously fixated object. The consistency of surface feature (color) and spatiotemporal information was manipulated to examine the feature used to define the persisting objects to which inhibition is assigned. If the two objects traded colors during motion, object-based IOR was reliably reduced (Experiment 2), suggesting a role for surface feature properties in defining the objects of object-based IOR. However, if the two objects changed to new colors during motion, object-based IOR was preserved (Experiment 1), and color consistency was not sufficient to support object continuity across a salient spatiotemporal discontinuity (Experiment 3). These results suggest that surface feature consistency plays a significant role in defining object persistence for the purpose of IOR, although surface features may be weighted less strongly than spatiotemporal features in this domain. |
A. Caglar Tas; Cathleen M. Moore; Andrew Hollingworth An object-mediated updating account of insensitivity to transsaccadic change Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 1–13, 2012. @article{Tas2012a, Recent evidence has suggested that relatively precise information about the location and visual form of a saccade target object is retained across a saccade. However, this information appears to be available for report only when the target is removed briefly, so that the display is blank when the eyes land. We hypothesized that the availability of precise target information is dependent on whether a post-saccade object is mapped to the same object representation established for the presaccade target. If so, then the post-saccade features of the target overwrite the presaccade features, a process of object-mediated updating in which visual masking is governed by object continuity. In two experiments, participants' sensitivity to the spatial displacement of a saccade target was improved when that object changed surface feature properties across the saccade, consistent with the prediction of the object-mediated updating account. Transsaccadic perception appears to depend on a mechanism of object-based masking that is observed across multiple domains of vision. In addition, the results demonstrate that surface-feature continuity contributes to visual stability across saccades. |
Shuichiro Taya; David Windridge; Magda Osman Looking to score: The dissociation of goal influence on eye movement and meta-attentional allocation in a complex dynamic natural scene Journal Article In: PLoS ONE, vol. 7, no. 6, pp. e39060, 2012. @article{Taya2012, Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers' beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g. court lines) if they performed the goal-specific task. However, we did not find the effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observer's beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior. |
Jan Theeuwes; Artem V. Belopolsky Reward grabs the eye: Oculomotor capture by rewarding stimuli Journal Article In: Vision Research, vol. 74, pp. 80–85, 2012. @article{Theeuwes2012, It is well known that salient yet task-irrelevant stimuli may capture our eyes independent of our goals and intentions. The present study shows that a task-irrelevant stimulus that was previously associated with high monetary reward captures the eyes much more strongly than that very same stimulus when previously associated with low monetary reward. We conclude that reward changes the salience of a stimulus such that a stimulus that is associated with high reward becomes more pertinent and therefore captures the eyes above and beyond its physical salience. Because the stimulus captures the eyes and disrupts goal-directed behavior, we argue that this effect is automatic and not driven by strategic, top-down control. |
Tom Theys; Siddharth Srivastava; Johannes Loon; Jan Goffin; Peter Janssen Selectivity for three-dimensional contours and surfaces in the anterior intraparietal area Journal Article In: Journal of Neurophysiology, vol. 107, no. 3, pp. 995–1008, 2012. @article{Theys2012, The macaque anterior intraparietal area (AIP) is crucial for visually guided grasping. AIP neurons respond during the visual presentation of real-world objects and encode the depth profile of disparity-defined curved surfaces. We investigated the neural representation of curved surfaces in AIP using a stimulus-reduction approach. The stimuli consisted of three-dimensional (3-D) shapes curved along the horizontal axis, the vertical axis, or both the horizontal and the vertical axes of the shape. The depth profile was defined solely by binocular disparity that varied along either the boundary or the surface of the shape or along both the boundary and the surface of the shape. The majority of AIP neurons were selective for curved boundaries along the horizontal or the vertical axis, and neural selectivity emerged at short latencies. Stimuli in which disparity varied only along the surface of the shape (with zero disparity on the boundaries) evoked selectivity in a smaller proportion of AIP neurons and at considerably longer latencies. AIP neurons were not selective for 3-D surfaces composed of anticorrelated disparities. Thus the neural selectivity for object depth profile in AIP is present when only the boundary is curved in depth, but not for disparity in anticorrelated stereograms. |
Yin Su; Li-Lin Rao; Xingshan Li; Yong Wang; Shu Li From quality to quantity: The role of common features in consumer preference Journal Article In: Journal of Economic Psychology, vol. 33, no. 6, pp. 1043–1058, 2012. @article{Su2012, Although previous studies of consumer choice have found that common features of alternatives are cancelled and that choices are based only on unique features, a recent study has suggested that common features are canceled only when they are irrelevant in regard to all unique features. The present study hypothesized that the role of a common feature in consumer choice depends on its quantity as well as its quality. Experiments 1 and 2 tested this hypothesis and the equate-to-differentiate account by varying the quality and the quantity of common features. Experiment 3 examined the cognitive process that was proposed to serve as the mechanism for the common feature effect using eye-tracking methodology. This study provided further insight into conditions when the cancellation-and-focus model applies. Study results revealed an attribute-based tradeoff process underlying multiple-attribute decision making, and suggested an avenue through which marketers might influence consumer choices. |
Simone Sulpizio; James M. McQueen Italians use abstract knowledge about lexical stress during spoken-word recognition Journal Article In: Journal of Memory and Language, vol. 66, no. 1, pp. 177–193, 2012. @article{Sulpizio2012, In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress. Words with penultimate stress - the most common pattern - appeared to be recognized by default. In Experiment 2, listeners had to learn new words from which some stress cues had been removed, and then recognize reduced- and full-cue versions of those words. The acoustic manipulation affected recognition only of newly-learnt words with antepenultimate stress: Full-cue versions, even though they were never heard during training, were recognized earlier than reduced-cue versions. Newly-learnt words with penultimate stress were recognized earlier overall, but recognition of the two versions of these words did not differ. Abstract knowledge (i.e., knowledge generalized over the lexicon) about lexical stress - which pattern is the default and which cues signal the non-default pattern - appears to be used during the recognition of known and newly-learnt Italian words. |
Sruthi K. Swaminathan; David J. Freedman Preferential encoding of visual categories in parietal cortex compared with prefrontal cortex Journal Article In: Nature Neuroscience, vol. 15, no. 2, pp. 315–320, 2012. @article{Swaminathan2012, The ability to recognize the behavioral relevance, or category membership, of sensory stimuli is critical for interpreting the meaning of events in our environment. Neurophysiological studies of visual categorization have found categorical representations of stimuli in prefrontal cortex (PFC), an area that is closely associated with cognitive and executive functions. Recent studies have also identified neuronal category signals in parietal areas that are typically associated with visual-spatial processing. It has been proposed that category-related signals in parietal cortex and other visual areas may result from 'top-down' feedback from PFC. We directly compared neuronal activity in the lateral intraparietal (LIP) area and PFC in monkeys performing a visual motion categorization task. We found that LIP showed stronger, more reliable and shorter latency category signals than PFC. These findings suggest that LIP is strongly involved in visual categorization and argue against the idea that parietal category signals arise as a result of feedback from PFC during this task. |
Martin Szinte; Patrick Cavanagh Apparent motion from outside the visual field, retinotopic cortices may register extra-retinal positions Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e47386, 2012. @article{Szinte2012a, Observers made a saccade between two fixation markers while a probe was flashed sequentially at two locations on a side screen. The first probe was presented in the far periphery just within the observer's visual field. This target was extinguished and the observers made a large saccade away from the probe, which would have left it far outside the visual field if it had still been present. The second probe was then presented, displaced from the first in the same direction as the eye movement and by about the same distance as the saccade step. Because both eyes and probes shifted by similar amounts, there was little or no shift between the first and second probe positions on the retina. Nevertheless, subjects reported seeing motion corresponding to the spatial displacement not the retinal displacement. When the second probe was presented, the effective location of the first probe lay outside the visual field demonstrating that apparent motion can be seen from a location outside the visual field to a second location inside the visual field. Recent physiological results suggest that target locations are "remapped" on retinotopic representations to correct for the effects of eye movements. Our results suggest that the representations on which this remapping occurs include locations that fall beyond the limits of the retina. |
Martin Szinte; Mark Wexler; Patrick Cavanagh Temporal dynamics of remapping captured by peri-saccadic continuous motion Journal Article In: Journal of Vision, vol. 12, no. 7, pp. 1–18, 2012. @article{Szinte2012, Different attention and saccade control areas contribute to space constancy by remapping target activity onto their expected post-saccadic locations. To visualize this dynamic remapping, we used a technique developed by Honda (2006) where a probe moved vertically while participants made a saccade across the motion path. Observers did not report any large excursions of the trace at the time of the saccade that would correspond to the classical peri-saccadic mislocalization effect. Instead, they reported that the motion trace appeared to be broken into two separate segments with a shift of approximately one-fifth of the saccade amplitude, representing an overcompensation of the expected retinal displacement caused by the saccade. To measure the timing of this break in the trace, we introduced a second, physical shift that was the same size but opposite in direction to the saccade-induced shift. The trace appeared continuous most frequently when the physical shift was introduced at the midpoint of the saccade, suggesting that the compensation is in place when the saccade lands. Moreover, this simple linear shift made the combined traces appear continuous and linear, with no curvature. In contrast, Honda (2006) had reported that the pre- and post-saccadic portions of the trace appeared aligned and that there was often a small, visible excursion of the trace at the time of the saccade. To compare our results more directly, we increased the contrast of our moving probe in a third experiment. Now some observers reported seeing a deviation in the motion path, but the misalignment remained present. We conclude that the large deviations at the time of the saccade are generally masked for a continuously moving target but that there is nevertheless a residual misalignment between pre- and post-saccadic coordinates of approximately 20% of the saccade amplitude that normally goes unnoticed. |
Hiromasa Takemura; Hiroshi Ashida; Kaoru Amano; Akiyoshi Kitaoka; Ikuya Murakami Neural correlates of induced motion perception in the human brain Journal Article In: Journal of Neuroscience, vol. 32, no. 41, pp. 14344–14354, 2012. @article{Takemura2012, A physically stationary stimulus surrounded by a moving stimulus appears to move in the opposite direction. There are similarities between the characteristics of this phenomenon of induced motion and surround suppression of directionally selective neurons in the brain. Here, functional magnetic resonance imaging was used to investigate the link between the subjective perception of induced motion and cortical activity. The visual stimuli consisted of a central drifting sinusoid surrounded by a moving random-dot pattern. The change in cortical activity in response to changes in speed and direction of the central stimulus was measured. The human cortical area hMT+ showed the greatest activation when the central stimulus moved at a fast speed in the direction opposite to that of the surround. More importantly, the activity in this area was the lowest when the central stimulus moved in the same direction as the surround and at a speed such that the central stimulus appeared to be stationary. The results indicate that the activity in hMT+ is related to perceived speed modulated by induced motion rather than to physical speed or a kinetic boundary. Early visual areas (V1, V2, V3, and V3A) showed a similar pattern; however, the relationship to perceived speed was not as clear as that in hMT+. These results suggest that hMT+ may be a neural correlate of induced motion perception and play an important role in contrasting motion signals in relation to their surrounding context and adaptively modulating our motion perception depending on the spatial context. |
Adrian Staub; Matthew J. Abbott; Richard S. Bogartz Linguistically guided anticipatory eye movements in scene viewing Journal Article In: Visual Cognition, vol. 20, no. 8, pp. 922–946, 2012. @article{Staub2012, The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them. |
Solveiga Stonkute; Jochen Braun; Alexander Pastukhov The role of attention in ambiguous reversals of structure-from-motion Journal Article In: PLoS ONE, vol. 7, no. 5, pp. e37734, 2012. @article{Stonkute2012, Multiple dots moving independently back and forth on a flat screen induce a compelling illusion of a sphere rotating in depth (structure-from-motion). If all dots simultaneously reverse their direction of motion, two perceptual outcomes are possible: either the illusory rotation reverses as well (and the illusory depth of each dot is maintained), or the illusory rotation is maintained (but the illusory depth of each dot reverses). We investigated the role of attention in these ambiguous reversals. Greater availability of attention (as manipulated with a concurrent task or inferred from eye movement statistics) shifted the balance in favor of reversing illusory rotation (rather than depth). On the other hand, volitional control over illusory reversals was limited and did not depend on tracking individual dots during the direction reversal. Finally, display properties strongly influenced ambiguous reversals. Any asymmetries between 'front' and 'back' surfaces (created either on purpose by coloring or accidentally by random dot placement) also shifted the balance in favor of reversing illusory rotation (rather than depth). We conclude that the outcome of ambiguous reversals depends on attention, specifically on attention to the illusory sphere and its surface irregularities, but not on attentive tracking of individual surface dots. |
Michael J. Stroud; Tamaryn Menneer; Kyle R. Cave; Nick Donnelly Using the dual-target cost to explore the nature of search target representations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 113–122, 2012. @article{Stroud2012, Eye movements were monitored to examine search efficiency and infer how color is mentally represented to guide search for multiple targets. Observers located a single color target very efficiently by fixating colors similar to the target. However, simultaneous search for 2 colors produced a dual-target cost. In addition, as the similarity between the 2 target colors decreased, search efficiency suffered, resulting in more fixations on colors dissimilar to both target colors, which we describe as a "split-target cost." The patterns of fixations provide evidence about the type of mental representations guiding search. When the 2 targets are dissimilar, they are apparently encoded as separate and discrete representations. The fixation patterns for more similar targets can be explained with either 2 discrete target representations or a single, unitary range containing the target colors as well as the colors between them in color space. |
Tim J. Smith The attentional theory of cinematic continuity Journal Article In: Projections, vol. 6, no. 1, pp. 1–27, 2012. @article{Smith2012b, The intention of most film editing is to create the impression of continuity by editing together discontinuous viewpoints. The continuity editing rules are well established yet there exists an incomplete understanding of their cognitive foundations. This article presents the Attentional Theory of Cinematic Continuity (AToCC), which identifies the critical role visual attention plays in the perception of continuity across cuts and demonstrates how perceptual expectations can be matched across cuts without the need for a coherent representation of the depicted space. The theory explains several key elements of the continuity editing style including match-action, matched-exit/entrances, shot/reverse-shot, the 180° rule, and point-of-view editing. AToCC formalizes insights about viewer cognition that have been latent in the filmmaking community for nearly a century and demonstrates how much vision science in general can learn from film. |
Tim J. Smith; Peter Lamont; John M. Henderson The penny drops: Change blindness at fixation Journal Article In: Perception, vol. 41, no. 4, pp. 489–492, 2012. @article{Smith2012c, Our perception of the visual world is fallible. Unattended objects may change without us noticing as long as the change does not capture attention (change blindness). However, it is often assumed that changes to a fixated object will be noticed if it is attended. In this experiment we demonstrate that participants fail to detect a change in identity of a coin during a magic trick even though eyetracking indicates that the coin is tracked by the eyes throughout the trick. The change is subsequently detected when participants are instructed to look for it. These results suggest that during naturalistic viewing, attention can be focused on an object at fixation without including all of its features. |
NaYoung So; Veit Stuphorn Supplementary eye field encodes reward prediction error Journal Article In: Journal of Neuroscience, vol. 32, no. 9, pp. 2950–2963, 2012. @article{So2012, The outcomes of many decisions are uncertain and therefore need to be evaluated. We studied this evaluation process by recording neuronal activity in the supplementary eye field (SEF) during an oculomotor gambling task. While the monkeys awaited the outcome, SEF neurons represented attributes of the chosen option, namely, its expected value and the uncertainty of this value signal. After the gamble result was revealed, a number of neurons reflected the actual reward outcome. Other neurons evaluated the outcome by encoding the difference between the reward expectation represented during the delay period and the actual reward amount (i.e., the reward prediction error). Thus, SEF encodes not only reward prediction error but also all the components necessary for its computation: the expected and the actual outcome. This suggests that SEF might actively evaluate value-based decisions in the oculomotor domain, independent of other brain regions. |
Grayden J. F. Solman; J. Allan Cheyne; Daniel Smilek Found and missed: Failing to recognize a search target despite moving it Journal Article In: Cognition, vol. 123, no. 1, pp. 100–118, 2012. @article{Solman2012a, We present results from five search experiments using a novel 'unpacking' paradigm in which participants use a mouse to sort through random heaps of distractors to locate the target. We report that during this task participants often fail to recognize the target despite moving it, and despite having looked at the item. Additionally, the missed target item appears to have been processed as evidenced by post-error slowing of individual moves within a trial. The rate of this 'unpacking error' was minimally affected by set size and dual task manipulations, but was strongly influenced by perceptual difficulty and perceptual load. We suggest that the error occurs because of a dissociation between perception for action and perception for identification, providing further evidence that these processes may operate relatively independently even in naturalistic contexts, and even in settings like search where they should be expected to act in close coordination. |
Grayden J. F. Solman; Daniel Smilek Memory benefits during visual search depend on difficulty Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 6, pp. 689–702, 2012. @article{Solman2012, In three experiments we explored whether memory for previous locations of search items influences search efficiency more as the difficulty of exhaustive search increases. Difficulty was manipulated by varying item eccentricity and item similarity (discriminability). Participants searched through items placed at three levels of eccentricity. The search displays were either identical on every trial (repeated condition) or the items were randomly reorganised from trial to trial (random condition), and search items were either relatively easy or difficult to discriminate from each other. Search was both faster and more efficient (i.e., search slopes were shallower) in the repeated condition than in the random condition. More importantly, this advantage for repeated displays was greater (1) for items that were more difficult to discriminate and (2) for eccentric targets when items were easily discriminable. Thus, increasing target eccentricity and reducing item discriminability both increase the influence of memory during search. |
David Souto; Dirk Kerzel Like a rolling stone: Naturalistic kinematics influence tracking eye movements Journal Article In: Journal of Vision, vol. 12, no. 9, pp. 1–12, 2012. @article{Souto2012, Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. When we compared these objects with a condition in which there was no rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics. |
Miriam Spering; Marisa Carrasco Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness Journal Article In: Journal of Neuroscience, vol. 32, no. 22, pp. 7594–7601, 2012. @article{Spering2012, Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. |
Xue-feng F. Shi; Li-min Xu; Yao Li; Ting Wang; Kan-xing Zhao; Bernhard A. Sabel Fixational saccadic eye movements are altered in anisometropic amblyopia Journal Article In: Restorative Neurology and Neuroscience, vol. 30, no. 6, pp. 445–462, 2012. @article{Shi2012a, PURPOSE: Amblyopia develops during a critical period in early visual development and is characterized by reduced visual sensory functions and structural reorganization of the brain. However, little is known about oculomotor functions in amblyopes despite the special role of eye movements in visual perception, task execution and fixation. Therefore, we studied the relationship of visual deficits in anisometropic amblyopia and fixational saccadic eye movements. METHODS: We recruited twenty-eight anisometropic amblyopes and twenty-eight age-matched control subjects. Using a high-speed eye-tracker, fixational eye-movements of both eyes were recorded. A computerized fixational saccadic component analysis of eye-movement waveforms was developed to quantify the parameters of fixational saccades (FSs) and a simulation model was developed to help explain the FS performances. RESULTS: Amblyopic eyes, but not control eyes, showed fewer FSs, but these had increased amplitudes, increased peak velocities, and longer inter-saccadic intervals. The reduced FSs occurred mainly in the 0- to 0.6-degree amplitude range, and the probability of FSs with larger amplitudes and longer inter-saccadic intervals was significantly higher than in controls. A new simulation model analysis suggests that an excitatory-inhibitory activity imbalance in superior colliculus may explain these FSs changes. CONCLUSIONS: We propose that the abnormal visual processing and circuitry reorganization in anisometropic amblyopia has an impact on the fixational saccade generation. We see two possible interpretations: (i) altered FSs may be an attempt of the visual system to adapt to the deficit, trying to capture more information from a broader spatial domain of the visual world so as to enhance the contrast sensitivity to low spatial frequencies viewed by the amblyopic eye, or (ii) it may be the cause of amblyopia or a contributing factor to the original deficit that aggravates the early deprivation. |
Zhuanghua Shi; Romi Nijhawan Motion extrapolation in the central fovea Journal Article In: PLoS ONE, vol. 7, no. 3, pp. e33651, 2012. @article{Shi2012, Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency was not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea. |
Kerry Shields; Paul E. Engelhardt; Magdalena Ietswaart Processing emotion information from both the face and body: An eye-movement study Journal Article In: Cognition and Emotion, vol. 26, no. 4, pp. 699–709, 2012. @article{Shields2012, This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e. g., happy face with happy body) and incongruent (e. g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body. |
Shui-I Shih; Katie L. Meadmore; Simon P. Liversedge Using eye movement measures to investigate effects of age on memory for objects in a scene Journal Article In: Memory, vol. 20, no. 6, pp. 629–637, 2012. @article{Shih2012, We examined whether there were age-related differences in eye movements during intentional encoding of a photographed scene that might account for age-related differences in memory of objects in the scene. Younger and older adults exhibited similar scan path patterns, and visited each region of interest in the scene with similar frequency and duration. Despite the similarity in viewing, there were fundamental differences in the viewing-memory relationship. Although overall recognition was poorer in the older than younger adults, there was no age effect on recognition probability for objects visited only once. More importantly, re-visits to objects brought gain in recognition probability for the younger adults, but not for the older adults. These results suggest that the age-related differences in object recognition performance are in part due to inefficient integration of information from working memory to longer-term memory. |
Shui-I Shih; Katie L. Meadmore; Simon P. Liversedge Aging, eye movements, and object-location memory Journal Article In: PLoS ONE, vol. 7, no. 3, pp. e33485, 2012. @article{Shih2012a, This study investigated whether "intentional" instructions could improve older adults' object memory and object-location memory about a scene by promoting object-oriented viewing. Eye movements of younger and older adults were recorded while they viewed a photograph depicting 12 household objects in a cubicle with or without the knowledge that memory about these objects and their locations would be tested (intentional vs. incidental encoding). After viewing, participants completed recognition and relocation tasks. Both instructions and age affected viewing behaviors and memory. Relative to incidental instructions, intentional instructions resulted in more accurate memory about object identity and object-location binding, but did not affect memory accuracy about overall positional configuration. More importantly, older adults exhibited more object-oriented viewing in the intentional than incidental condition, supporting the environmental support hypothesis. |
Masanori Shimono; Hiroaki Mano; Kazuhisa Niki The brain structural hub of interhemispheric information integration for visual motion perception Journal Article In: Cerebral Cortex, vol. 22, no. 2, pp. 337–344, 2012. @article{Shimono2012, We investigated the key anatomical structures mediating interhemispheric integration during the perception of apparent motion across the retinal midline. Previous studies of commissurotomized patients suggest that subcortical structures mediate interhemispheric transmission but the specific regions involved remain unclear. Here, we exploit interindividual variations in the propensity of normal subjects to perceive horizontal motion, in relation to vertical motion. We characterize these differences psychophysically using a Dynamic Dot Quartet (an ambiguous stimulus that induces illusory motion). We then tested for correlations between a tendency to perceive horizontal motion and fractional anisotropy (FA) (from structural diffusion tensor imaging), over subjects. FA is an indirect measure of the orientation and integrity of white matter tracts. Subjects who found it easy to perceive horizontal motion showed significantly higher FA values in the pulvinar. Furthermore, fiber tracking from an independently identified (subject-specific) visual motion area converged on the pulvinar nucleus. These results suggest that the pulvinar is an anatomical hub and may play a central role in interhemispheric integration. |
Steven S. Shimozaki; Wade A. Schoonveld; Miguel P. Eckstein A unified Bayesian observer analysis for set size and cueing effects on perceptual decisions and saccades Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–26, 2012. @article{Shimozaki2012, Visual search and cueing tasks have been employed extensively in attentional research, with each having a standard effect (visual search: set size effects, cueing: cue validity). Generally these effects have been treated with different (but often similar) attentional theories. The present study aims to consolidate cueing and set size effects within an ideal observer approach. Four observers performed a yes/no contrast discrimination of a gaussian signal in a task combining cueing with visual search. The signal appeared in half the trials, and effective set size (M, 2 to 8) was determined by one primary precue (having 50% validity in signal present trials) and M-1 secondary precues. There were two stimulus durations: 1 second (eye movements allowed), and the first-saccade latency (in the 1 second duration condition) minus 80 milliseconds. Simulations found that an ideal observer for the perceptual yes/no decisions and the first saccadic localization decisions predicted both set size and cueing effects with a single weighting mechanism, providing a unifying account. For the human observer results, a modified ideal observer (with performance matched to human performance) fit the yes/no perceptual decisions well. For the first saccadic decisions, there was evidence of use of the primary cue, but the modified ideal observer was not a good fit, indicating a suboptimal use of the cue. We discuss possible underlying assumptions about the task that might explain the apparent suboptimal nature of saccadic decisions and the overall utility of the ideal observer for cueing and visual search studies in visual attention and saccades. |
Claudio Simoncini; Laurent U. Perrinet; Anna Montagnini; Pascal Mamassian; Guillaume S. Masson More is not always better: Adaptive gain control explains dissociation between perception and action Journal Article In: Nature Neuroscience, vol. 15, no. 11, pp. 1596–1603, 2012. @article{Simoncini2012, Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether this pooling is generic or adaptive; that is, dependent on the behavioral task. We used rich textured motion stimuli of varying bandwidths to decipher how the human visual motion system computes object speed in different behavioral contexts. We found that, although a simple visuomotor behavior such as short-latency ocular following responses takes advantage of the full distribution of motion signals, perceptual speed discrimination is impaired for stimuli with large bandwidths. Such opposite dependencies can be explained by an adaptive gain control mechanism in which the divisive normalization pool is adjusted to meet the different constraints of perception and action. |
Chris R. Sims; Robert A. Jacobs; David C. Knill An ideal observer analysis of visual working memory Journal Article In: Psychological Review, vol. 119, no. 4, pp. 807–830, 2012. @article{Sims2012, Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis-one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit-provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. |
Timothy J. Slattery; Adrian Staub; Keith Rayner Saccade launch site as a predictor of fixation durations in reading: Comments on Hand, Miellet, O'Donnell, and Sereno (2010) Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 251–261, 2012. @article{Slattery2012, An important question in research on eye movements in reading is whether word frequency and word predictability have additive or interactive effects on fixation durations. A fair number of studies have reported only additive effects of the frequency and predictability of a target word on reading times on that word, failing to show significant interactions. Recently, however, Hand, Miellet, O'Donnell, and Sereno (2010) reported interactive effects in a study that included the distance of the prior fixation from the target word (launch site). They reported that when the saccade into the target word was launched from very near to the word (within 3 characters), the predictability effect was larger for low frequency words, but when the saccade was launched from a medium distance (4-6 characters from the word) the predictability effect was larger for high frequency words. Hand et al. argued for the importance of including launch site in analyses of target word fixation durations. Here we describe several problems with Hand et al.'s use of analyses of variance in which launch site is divided into distinct ordinal levels. We describe a more appropriate way to analyze such data, linear mixed-effect models, and we use this method to show that launch site does not modulate the interaction between frequency and predictability in two other data sets. |
Nicholas D. Smith; David P. Crabb; Fiona C. Glen; Robyn Burton; David F. Garway-Heath Eye movements in patients with glaucoma when viewing images of everyday scenes Journal Article In: Seeing and Perceiving, vol. 25, no. 5, pp. 471–492, 2012. @article{Smith2012a, This study tests the hypothesis that patients with bilateral glaucoma exhibit different eye movements compared to normally-sighted people when viewing computer displayed photographs of everyday scenes. Thirty glaucomatous patients and 30 age-related controls with normal vision viewed images on a computer monitor whilst eye movements were simultaneously recorded using an eye tracking system. The patients demonstrated a significant reduction in the average number of saccades compared to controls (P = 0.02; mean reduction of 7% (95% confidence interval (CI): 3–11%)). There was no difference in average saccade amplitude between groups but there was between-person variability in patients. The average elliptical region scanned by the patients, estimated by a bivariate contour ellipse area (BCEA) analysis, was more restricted compared to controls (P = 0.004; mean reduction of 23% (95% CI: 11–35%)). A novel analysis mapping areas of interest in the images indicated a weak association between severity of functional deficit and a tendency to not view regions typically viewed by the controls. In conclusion, some eye movements in some patients with bilateral glaucomatous defects differ from normal-sighted people of a similar age when viewing images of everyday scenes, providing evidence for a potential new window for looking into the functional consequences of the disease. |
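As an illustrative aside (not from the paper), the bivariate contour ellipse area mentioned above can be computed from raw gaze coordinates as follows; the function name and the synthetic gaze data are assumptions made for the sketch.

```python
# Illustrative sketch (not from the paper): bivariate contour ellipse area (BCEA)
# for a set of gaze samples, a standard summary of how widely gaze is spread.
# x, y are arrays of horizontal/vertical gaze positions (e.g., in degrees).
import numpy as np

def bcea(x, y, p=0.682):
    """Area of the ellipse expected to contain proportion p of the gaze samples,
    assuming the samples are bivariate Gaussian."""
    k = -np.log(1.0 - p)                  # e.g., p = 0.682 -> k ~= 1.14
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]         # Pearson correlation of x and y
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

# Example with synthetic gaze data:
rng = np.random.default_rng(0)
x = rng.normal(0, 2.0, 1000)
y = rng.normal(0, 1.5, 1000)
print(f"BCEA: {bcea(x, y):.2f} deg^2")
```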
Nicholas D. Smith; Fiona C. Glen; David P. Crabb Eye movements during visual search in patients with glaucoma Journal Article In: BMC Ophthalmology, vol. 12, pp. 1–11, 2012. @article{Smith2012, BACKGROUND: Glaucoma has been shown to lead to disability in many daily tasks including visual search. This study aims to determine whether the saccadic eye movements of people with glaucoma differ from those of people with normal vision, and to investigate the association between eye movements and impaired visual search. METHODS: Forty patients (mean age: 67 [SD: 9] years) with a range of glaucomatous visual field (VF) defects in both eyes (mean best eye mean deviation [MD]: -5.9 (SD: 5.4) dB) and 40 age-related people with normal vision (mean age: 66 [SD: 10] years) were timed as they searched for a series of target objects in computer displayed photographs of real world scenes. Eye movements were simultaneously recorded using an eye tracker. Average number of saccades per second, average saccade amplitude and average search duration across trials were recorded. These response variables were compared with measurements of VF and contrast sensitivity. RESULTS: The average rate of saccades made by the patient group was significantly smaller than the number made by controls during the visual search task (P = 0.02; mean reduction of 5.6% (95% CI: 0.1 to 10.4%)). There was no difference in average saccade amplitude between the patients and the controls (P = 0.09). Average number of saccades was weakly correlated with aspects of visual function, with patients with worse contrast sensitivity (PR logCS; Spearman's rho: 0.42; P = 0.006) and more severe VF defects (best eye MD; Spearman's rho: 0.34; P = 0.037) tending to make fewer eye movements during the task. Average detection time in the search task was associated with the average rate of saccades in the patient group (Spearman's rho = -0.65; P < 0.001) but this was not apparent in the controls. CONCLUSIONS: The average rate of saccades made during visual search by this group of patients was lower than that of people with normal vision of a similar average age. There was wide variability in saccade rate in the patients but there was an association between an increase in this measure and better performance in the search task. Assessment of eye movements in individuals with glaucoma might provide insight into the functional deficits of the disease. |
Michael J. Seiler; Poornima Madhavan; Molly Liechty Toward an understanding of real estate homebuyer internet search behavior: An application of ocular tracking technology Journal Article In: Journal of Real Estate Research, vol. 34, no. 2, pp. 211–241, 2012. @article{Seiler2012, This paper examines the eye movements of potential homebuyers searching for houses on the Internet. Total dwell time (looking at the photo), fixation duration (time spent at each focal point), and saccade amplitude (average distance between focal points) significantly explain someone's opinion of a house. The sections that are viewed first are the photo of the house, the description section, distantly followed by the real estate agent's remarks. The findings indicate that charm pricing, where agents list properties at slightly less than round numbers, works in opposition to its intended effect. Given that people dwell significantly longer on the first house they view, and since charm pricing typically causes a property to appear towards the end of a search when sorted by price from low to high, is charm pricing an effective marketing strategy? |
Michael J. Seiler; Poornima Madhavan; Molly Liechty Ocular tracking and the behavioral effects of negative externalities on perceived property values Journal Article In: Journal of Housing Research, vol. 21, no. 2, pp. 123–137, 2012. @article{Seiler2012a, This study proposes an alternative valuation technique to the standard hedonic model. Specifically, in the context of an experimental design, we use ocular tracking technology (dwell time, fixation duration, and saccade amplitude) to follow the eye movements of prospective homebuyers and a sample of student participants while searching for homes on the Internet. We superimpose ominous power lines in matched samples to just one home of the 10 homes that participants toured. Walls of another home within the tour package are artificially painted pink. Again using matched samples to compare results, we find that people rationally differentiate between negative externalities that can easily be changed (pink walls) versus those that cannot (power lines). |
Yasuhiro Seya; Katsumi Watanabe The minimal time required to process visual information in visual search tasks measured by using gaze-contingent visual masking Journal Article In: Perception, vol. 41, no. 7, pp. 819–830, 2012. @article{Seya2012, To estimate the minimal time required to process visual information (i.e., "effective acquisition time") during a visual search task, we used a gaze-contingent visual masking method. In the experiment, an opaque mask that restricted the central vision was presented at the current gaze position. We manipulated a temporal delay from a gaze shift to mask movement. Participants were asked to search for a target letter (T) among distractor letters (Ls) as quickly as possible under various delays. The results showed that the reaction times and search rate decreased when the delay was increased. When the delay was longer than 50 ms, the reaction times and search rate reached a plateau. These results indicate that the effective acquisition time during the visual search task used in the study is equal to or less than 50 ms. The present study indicates that the gaze-contingent visual masking method used is useful for revealing the effective acquisition time. |
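A minimal sketch of the gaze-contingent timing logic described above, not the authors' code: it simplifies the manipulation to a continuous lag of the mask behind gaze, rather than a delay triggered by each detected gaze shift as in the actual paradigm, and all function and variable names are hypothetical.

```python
# Illustrative sketch (simplified, not the authors' code): given a stream of
# gaze samples, the mask at time t is drawn at the gaze position recorded
# delay_ms earlier, approximating a delayed gaze-contingent mask.
import numpy as np

def mask_positions(gaze_xy, timestamps_ms, delay_ms):
    """For each sample time t, return the gaze position at time t - delay_ms."""
    ts = np.asarray(timestamps_ms)
    idx = np.searchsorted(ts, ts - delay_ms, side="right") - 1
    idx = np.clip(idx, 0, len(ts) - 1)       # before the first sample, reuse it
    return np.asarray(gaze_xy)[idx]

# Example: 1000-Hz synthetic samples, mask lagging gaze by 50 ms.
t = np.arange(0, 1000)                        # ms
gaze = np.column_stack([np.linspace(0, 400, 1000), np.full(1000, 300.0)])
mask = mask_positions(gaze, t, delay_ms=50)
```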
Madeleine E. Sharp; Jayalakshmi Viswanathan; Linda J. Lanyon; Jason J. S. Barton Sensitivity and bias in decision-making under risk: Evaluating the perception of reward, its probability and value Journal Article In: PLoS ONE, vol. 7, no. 4, pp. e33460, 2012. @article{Sharp2012, BACKGROUND: There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. OBJECTIVE: We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. DESIGN/METHODS: Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. RESULTS: Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. CONCLUSIONS: This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. |
Claire A. Sheldon; Mathias Abegg; Alla Sekunova; Jason J. S. Barton The word-length effect in acquired alexia, and real and virtual hemianopia Journal Article In: Neuropsychologia, vol. 50, no. 5, pp. 841–851, 2012. @article{Sheldon2012, A word-length effect is often described in pure alexia, with reading time proportional to the number of letters in a word. Given the frequent association of right hemianopia with pure alexia, it is uncertain whether and how much of the word-length effect may be attributable to the hemifield loss. To isolate the contribution of the visual field defect, we simulated hemianopia in healthy subjects with a gaze-contingent paradigm during an eye-tracking experiment. We found a minimal word-length effect of 14 ms/letter for full-field viewing, which increased to 38 ms/letter in right hemianopia and to 31 ms/letter in left hemianopia. We found a correlation between mean reading time and the slope of the word-length effect in hemianopic conditions. The 95% upper prediction limits for the word-length effect were 51 ms/letter in subjects with full visual fields and 161 ms/letter with simulated right hemianopia. These limits, which can be considered diagnostic criteria for an alexic word-length effect, were consistent with the reading performance of six patients with diagnoses based independently on perimetric and imaging data: two patients with probable hemianopic dyslexia, and four with alexia and lesions of the left fusiform gyrus, two with and two without hemianopia. Two of these patients also showed reduction of the word-length effect over months, one with and one without a reading rehabilitation program. Our findings clarify the magnitude of the word-length effect that originates from hemianopia alone, and show that the criteria for a word-length effect indicative of alexia differ according to the degree of associated hemifield loss. |
Deli Shen; Simon P. Liversedge; Jin Tian; Chuanli Zang; Lei Cui; Xuejun Bai; Guoli Yan; Keith Rayner Eye movements of second language learners when reading spaced and unspaced Chinese text Journal Article In: Journal of Experimental Psychology: Applied, vol. 18, no. 2, pp. 192–202, 2012. @article{Shen2012, The effect of spacing in relation to word segmentation was examined for four groups of non-native Chinese speakers (American, Korean, Japanese, and Thai) who were learning Chinese as second language. Chinese sentences with four types of spacing information were used: unspaced text, word-spaced text, character-spaced text, and nonword-spaced text. Also, participants' native languages were different in terms of their basic characteristics: English and Korean are spaced, whereas the other two are unspaced; Japanese is character based whereas the other three are alphabetic. Thus, we assessed whether any spacing effects were modulated by native language characteristics. Eye movement measures showed least disruption to reading for word-spaced text and longer reading times for unspaced than character-spaced text, with nonword-spaced text yielding the most disruption. These effects were uninfluenced by native language (though reading times differed between groups as a result of Chinese reading experience). Demarcation of word boundaries through spacing reduces non-native readers' uncertainty about the characters that constitute a word, thereby speeding lexical identification, and in turn, reading. More generally, the results indicate that words have psychological reality for those who are learning to read Chinese as a second language, and that segmentation of text into words is more beneficial to successful comprehension than is separating individual Chinese characters with spaces. |
Heather Sheridan; Eyal M. Reingold The time course of contextual influences during lexical ambiguity resolution: Evidence from distributional analyses of fixation durations Journal Article In: Memory & Cognition, vol. 40, no. 7, pp. 1122–1131, 2012. @article{Sheridan2012, In the lexical ambiguity literature, it is well-established that readers experience processing difficulties when they encounter biased homographs in a subordinate-instantiating prior context (i.e., the subordinate bias effect). To investigate the time course of this effect, the present study examined distributional analyses of first-fixation durations on 60 biased homographs that were each read twice: once in a subordinate-instantiating context and once in a dominant-instantiating context. Ex-Gaussian fitting revealed that the subordinate context distribution was shifted to the right of the dominant context distribution, with no significant contextual differences in the degree of skew. In addition, a survival analysis technique showed a significant influence of the subordinate versus dominant contextual manipulation as early as 139 ms from the start of fixation. These results indicate that the contextual manipulation had a fast-acting influence on the majority of fixation durations, which is consistent with the reordered access model's assumption that prior context can affect the lexical access stage of reading. |
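As an illustrative aside (not the authors' code), an ex-Gaussian can be fitted to a sample of fixation durations with SciPy's exponentially modified normal distribution, recovering the shift (mu, sigma) and skew (tau) parameters that such distributional analyses compare across conditions; the synthetic durations below are assumptions for the sketch.

```python
# Illustrative sketch (not the authors' code): fitting an ex-Gaussian
# (exponentially modified Gaussian) to a sample of first-fixation durations.
import numpy as np
from scipy import stats

# Synthetic fixation durations in ms: Gaussian component plus exponential tail.
rng = np.random.default_rng(1)
durations = rng.normal(200, 30, 2000) + rng.exponential(60, 2000)

K, loc, scale = stats.exponnorm.fit(durations)
mu, sigma, tau = loc, scale, K * scale   # exponnorm's shape K equals tau / sigma
print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
```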
Heather Sheridan; Eyal M. Reingold The time course of predictability effects in reading: Evidence from a survival analysis of fixation durations Journal Article In: Visual Cognition, vol. 20, no. 7, pp. 733–745, 2012. @article{Sheridan2012a, To investigate the time course of predictability effects in reading, the present study examined distributions of first-fixation durations on target words in a low predictability versus a high predictability prior context. In a replication of Staub (2011), ex-Gaussian fitting demonstrated that the low predictability distribution was significantly shifted to the right of the high predictability distribution in the absence of any contextual differences in the degree of skew. Extending this finding, the present study used a survival analysis technique to demonstrate a significant influence of predictability on fixation duration as early as 140 ms from the start of fixation, which is similar to prior results obtained with the word frequency variable. These results provide convergent evidence that lexical variables have a fast-acting influence on fixation durations during reading. Implications for models of eye-movement control are discussed. |
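A simplified sketch of the survival-analysis idea, not the published procedure (which establishes the divergence point with bootstrap confidence intervals): compute the proportion of fixations exceeding each duration in the two conditions and locate the earliest duration at which the curves separate. The condition names, divergence criterion, and synthetic data are assumptions for the sketch.

```python
# Illustrative sketch (simplified): empirical survival curves of first-fixation
# durations in two conditions and the earliest bin at which they diverge.
import numpy as np

def survival(durations, t_grid):
    """Proportion of fixations still ongoing (duration > t) at each time t."""
    d = np.asarray(durations)
    return np.array([(d > t).mean() for t in t_grid])

rng = np.random.default_rng(2)
high_pred = rng.normal(210, 35, 2000) + rng.exponential(55, 2000)
low_pred = rng.normal(225, 35, 2000) + rng.exponential(55, 2000)   # shifted right

t_grid = np.arange(0, 600)                        # 1-ms bins
diff = survival(low_pred, t_grid) - survival(high_pred, t_grid)
divergence = t_grid[np.argmax(diff > 0.015)]      # first bin exceeding a small criterion
print(f"Approximate divergence point: {divergence} ms")
```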
Heather Sheridan; Eyal M. Reingold Perceptually specific and perceptually non-specific influences on rereading benefits for spatially transformed text: Evidence from eye movements Journal Article In: Consciousness and Cognition, vol. 21, no. 4, pp. 1739–1747, 2012. @article{Sheridan2012b, The present study used eye tracking methodology to examine rereading benefits for spatially transformed text. Eye movements were monitored while participants read the same target word twice, in two different low-constraint sentence frames. The congruency of perceptual processing was manipulated by either applying the same type of transformation to the word during the first and second presentations (i.e., the congruent condition), or employing two different types of transformations across the two presentations of the word (i.e., the incongruent condition). Perceptual specificity effects were demonstrated such that fixation times for the second presentation of the target word were shorter for the congruent condition compared to the incongruent condition. Moreover, we demonstrated an additional perceptually non-specific effect such that second reading fixation times were shorter for the incongruent condition relative to a baseline condition that employed a normal typography (i.e., non-transformed) during the first presentation and a transformation during the second presentation. Both of these effects (i.e., perceptually specific and perceptually non-specific) were similar in magnitude for high and low frequency words, and both effects persisted across a 1-week lag between the first and second readings. We discuss the present findings in the context of the distinction between conscious and unconscious memory, and the distinction between perceptually versus conceptually driven processing. |
Heather Sheridan; Eyal M. Reingold Perceptual specificity effects in rereading: Evidence from eye movements Journal Article In: Journal of Memory and Language, vol. 67, no. 2, pp. 255–269, 2012. @article{Sheridan2012c, The present experiments examined perceptual specificity effects using a rereading paradigm. Eye movements were monitored while participants read the same target word twice, in two different low-constraint sentence frames. The congruency of perceptual processing was manipulated by either presenting the target word in the same distortion typography (i.e., font) during the first and second presentations (i.e., the congruent condition), or changing the distortion typography of the word across the two presentations (i.e., the incongruent condition). Fixation times for the second presentation of the target word were shorter for the congruent condition compared to the incongruent condition, and did not differ across the incongruent condition and an additional baseline condition that employed a normal (i.e., non-distorted) typography during the first presentation and a distortion typography during the second presentation. In Experiment 1, we employed both unusual and subtle distortion typographies, and we demonstrated that the typography congruency effect (i.e., the congruent < incongruent difference) was significant for low frequency but not for high frequency target words. In Experiment 2, the congruency effect persisted across a 1-week lag between the first and second presentations of the target words. Overall, the present demonstration of the long-term retention of superficial perceptual details (i.e., typography) supports the existence of perceptually specific memory representations. |
Veronica Whitford; Debra Titone Second-language experience modulates first- and second-language word frequency effects: Evidence from eye movement measures of natural paragraph reading Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 1, pp. 73–80, 2012. @article{Whitford2012, We used eye movement measures of first-language (L1) and second-language (L2) paragraph reading to investigate whether the degree of current L2 exposure modulates the relative size of L1 and L2 frequency effects (FEs). The results showed that bilinguals displayed larger L2 than L1 FEs during both early- and late-stage eye movement measures, which are taken to reflect initial lexical access and postlexical access, respectively. Moreover, the magnitude of L2 FEs was inversely related to current L2 exposure, such that lower levels of L2 exposure led to larger L2 FEs. In contrast, during early-stage reading measures, bilinguals with higher levels of current L2 exposure showed larger L1 FEs than did bilinguals with lower levels of L2 exposure, suggesting that increased L2 experience modifies the earliest stages of L1 lexical access. Taken together, the findings are consistent with implicit learning accounts (e.g., Monsell, 1991), the weaker links hypothesis (Gollan, Montoya, Cera, & Sandoval, Journal of Memory and Language, 58:787–814, 2008), and current bilingual visual word recognition models (e.g., the bilingual interactive activation model plus [BIA+]; Dijkstra & van Heuven, Bilingualism: Language and Cognition, 5:175–197, 2002). Thus, the amount of current L2 exposure is a key determinant of FEs, and hence of lexical activation, in both the L1 and L2. |
Jan M. Wiener; Christoph Hölscher; Simon Büchner; Lars Konieczny Gaze behaviour during space perception and spatial decision making Journal Article In: Psychological Research, vol. 76, no. 6, pp. 713–729, 2012. @article{Wiener2012, A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screen-shots of choice points taken in large virtual environments. Each screen-shot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take as if following a guided route. Subsequently they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 & 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features and that gaze bias effects are a general phenomenon of visual decision making. |
Eva Wiese; Agnieszka Wykowska; Jan Zwickel; Hermann J. Müller I see what you mean: How attentional selection is shaped by ascribing intentions to others Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45391, 2012. @article{Wiese2012, The ability to understand and predict others' behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., refer to their mental states such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot, in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely) gaze cuing effects were significantly larger as compared to conditions when adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. Therefore, we conclude that adopting the intentional stance when observing others' behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner. |
Kelly S. Wild; Ellen Poliakoff; Andrew Jerrison; Emma Gowen Goal-directed and goal-less imitation in autism spectrum disorder Journal Article In: Journal of Autism and Developmental Disorders, vol. 42, no. 8, pp. 1739–1749, 2012. @article{Wild2012, To investigate how people with Autism are affected by the presence of goals during imitation, we conducted a study to measure movement kinematics and eye movements during the imitation of goal-directed and goal-less hand movements. Our results showed that a control group imitated changes in movement kinematics and increased the level that they tracked the hand with their eyes in the goal-less compared to the goal-directed condition. In contrast, the ASD group exhibited more goal-directed eye movements, and failed to modulate the observed movement kinematics successfully in either condition. These results increase the evidence for impaired goal-less imitation in ASD, and suggest that there is a reliance on goal-directed strategies for imitation in ASD, even in the absence of visual goals. |
C. Ellie Wilson; Romina Palermo; Jon Brock Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development Journal Article In: PLoS ONE, vol. 7, no. 5, pp. e37681, 2012. @article{Wilson2012, Background: Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings: Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index' – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance: In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. |
Heather Winskel; Manuel Perea; Theeraporn Ratitamkul On the flexibility of letter position coding during lexical processing: Evidence from eye movements when reading Thai Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 8, pp. 1522–1536, 2012. @article{Winskel2012, Previous research supports the view that initial letter position has a privileged role in comparison to internal letters for visual-word recognition in Roman script. The current study examines whether this is the case for Thai. Thai is an alphabetic script in which ordering of the letters does not necessarily correspond to the ordering of a word's phonemes. Furthermore, Thai does not normally have interword spaces. We examined whether the position of transposed letters (internal, e.g., porblem, vs. initial, e.g., rpoblem) within a word influences how readily those words are processed when interword spacing and demarcation of word boundaries (using alternating bold text) is manipulated. The eye movements of 54 participants were recorded while they were reading sentences silently. There was no apparent difference in degree of disruption caused when reading initial and internal transposed-letter nonwords. These findings give support to the view that letter position encoding in Thai is relatively flexible and that actual identity of the letter is more critical than letter position. This flexible encoding strategy is in line with the characteristics of Thai–that is, the flexibility in the ordering of the letters and the lack of interword spaces, which creates a certain level of ambiguity in relation to the demarcation of word boundaries. These findings point to script-specific effects operating in letter encoding in visual-word recognition and reading. |
Dagmar A. Wismeijer; Karl R. Gegenfurtner Orientation of noisy texture affects saccade direction during free viewing Journal Article In: Vision Research, vol. 58, pp. 19–26, 2012. @article{Wismeijer2012, We redirect our eyes approximately three times per second to bring a new part of our environment onto our fovea (Findlay & Gilchrist, 2003). How a scanning path is planned is still an unsolved matter. Most research to date has focused on the question of target selection: how is the next fixation location, or saccade target, selected? Here we investigated the direction of spontaneous saccades, rather than fixation locations per se. We measured eye movements while observers were freely viewing noisy textures (oriented Gabors embedded in either pink (1/f) noise or pixel noise) whose orientation they later had to report. Our results show that a significant percentage of the spontaneous saccades were directed along the orientation of the stimulus. These results suggest that observers may have used an underlying eye movement strategy involving the search for contour endings. |
Menno Schoot; Albert Reijntjes; Ernest C. D. M. Van Lieshout How do children deal with inconsistencies in text? An eye fixation and self-paced reading study in good and poor reading comprehenders Journal Article In: Reading and Writing, vol. 25, no. 7, pp. 1665–1690, 2012. @article{Schoot2012, In two experiments, we investigated comprehension monitoring in 10- to 12-year-old children differing in reading comprehension skill. The children's self-paced reading times (Experiment 1) and eye fixations and regressions (Experiment 2) were measured as they read narrative texts in which an action of the protagonist was consistent or inconsistent with a description of the protagonist's character given earlier. The character description and action were adjacent (local condition) or separated by a long filler paragraph (global condition). The self-paced reading data (Experiment 1), the initial reading and rereading data (Experiment 2), together with the comprehension question data (both experiments), are discussed within the situation model framework and suggest that poor comprehenders have difficulty constructing a richly elaborated situation model. Poor comprehenders presumably fail to represent character information in the model, as a consequence of which they are not able to detect inconsistencies in the global condition (in which the character information is lost from working memory). The patterns of results rule out an explanation in terms of impaired situation model updating ability. |
Stefan Van der Stigchel; Jessica Heeman; Tanja C. W. Nijboer Averaging is not everything: The saccade global effect weakens with increasing stimulus size Journal Article In: Vision Research, vol. 62, pp. 108–115, 2012. @article{VanderStigchel2012a, When two elements are presented closely aligned, the average saccade endpoint will generally be located in between these two elements. This 'global effect' has been explained in terms of the center of gravity account which states that the saccade endpoint is based on the relative saliency of the different elements in the visual display. In the current study, we tested one of the implications of the center of gravity account: when two elements are presented closely aligned with the same size and the same distance from central fixation, the saccade should land on the intermediate location, irrespective of the stimulus size. To this end, two equally-sized elements were presented simultaneously and participants were required to execute an eye movement to the visual information presented on the display. Results showed that the strongest global effect was observed in the condition with smaller stimuli, whereas the saccade averaging was weaker when larger stimuli were presented. In a second experiment, in which only one element was presented, we observed that the width of the distribution of saccade endpoints is influenced by stimulus size in that the distribution is broader with smaller stimuli. We conclude that perfect saccade averaging is not always the default response by the oculomotor system. There appears to be a tendency to initiate an eye movement towards one of the visual elements, which becomes stronger with increasing stimulus size. This effect might be explained by an increased uncertainty in target localization for smaller stimuli, resulting in a higher probability of the merging of two stimulus representations into one representation. |
Stefan Van der Stigchel; Tanja C. W. Nijboer; D. P. Bergsma; Jason J. S. Barton; Chris L. E. Paffen Measuring palinopsia: Characteristics of a persevering visual sensation from cerebral pathology Journal Article In: Journal of the Neurological Sciences, vol. 316, no. 1-2, pp. 184–188, 2012. @article{VanderStigchel2012b, Palinopsia is an abnormal perseverative visual phenomenon, whose relation to normal afterimages is unknown. We measured palinoptic positive visual afterimages in a patient with a cerebral lesion. Positive afterimages were confined to the left inferior quadrant, which allowed a comparison between afterimages in the intact and the affected part of his visual field. Results showed that negative afterimages in the affected quadrant were no different from those in the unaffected quadrant. The positive afterimage in his affected field, however, differed both qualitatively and quantitatively from normal afterimages, being weaker but much more persistent, and displaced from the location of the inducing stimulus. These findings reveal distinctions between pathological afterimages of cerebral origin and physiological afterimages of retinal origin. |
Stefan Van der Stigchel; Roderick C. L. L. Reichenbach; Arie J. Wester; Tanja C. W. Nijboer Antisaccade performance in Korsakoff patients reveals deficits in oculomotor inhibition Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 34, no. 8, pp. 876–886, 2012. @article{VanderStigchel2012c, Oculomotor inhibition reflects the ability to suppress an unwanted eye movement. The goal of the present study was to assess oculomotor inhibition in patients with Korsakoff's syndrome (KS). To this end, an antisaccade task was employed in which an eye movement towards an onset stimulus has to be inhibited, and a voluntary saccade has to be executed in the opposite direction. Compared to the results of a matched control group, patients showed a higher percentage of intrusive saccades, made more antisaccade errors, and showed longer latencies on prosaccade trials. These results clearly show that oculomotor inhibition is impaired in KS. Part of these deficits in oculomotor inhibition may be explained by neuronal atrophy in the frontal areas, which is generally associated with KS. |
Stefan Van der Stigchel; M. Koningsbruggen; Tanja C. W. Nijboer; Alexandra List; Robert D. Rafal The role of the frontal eye fields in the oculomotor inhibition of reflexive saccades: Evidence from lesion patients Journal Article In: Neuropsychologia, vol. 50, no. 1, pp. 198–203, 2012. @article{VanderStigchel2012, The current study investigated the role of the frontal eye fields (FEF) in the suppression of an unwanted eye movement ('oculomotor inhibition'). Oculomotor inhibition has generally been investigated using the antisaccade task, in which an eye movement to a task-relevant onset must be inhibited. Various lines of research have suggested that successful inhibition in the antisaccade task relies heavily on the FEF. Here, we tested whether the FEF are also involved in the oculomotor inhibition of reflexive saccades. To this end, we used the oculomotor capture task in which the to-be-inhibited element is task-irrelevant. Performance of four patients with lesions to the FEF was measured on both the antisaccade and oculomotor capture task. In both tasks, the majority of the patients made more erroneous eye movements to contralesional elements than to ipsilesional elements. One patient showed no deficits in the antisaccade task, which could be explained by the developmental origin of his lesion. While we confirm the role of the FEF in the inhibition of task-relevant elements, the current study also reveals that the FEF play a crucial role in the oculomotor inhibition of task-irrelevant elements. |