All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014 |
James Rankin; Andrew Isaac Meso; Guillaume S. Masson; O. Faugeras; Pierre Kornprobst Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration Journal Article In: Journal of Computational Neuroscience, vol. 36, no. 2, pp. 193–213, 2014. @article{Rankin2014, Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model was then compared to human perception investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. In order to do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes. The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role. |
Anne K. Rau; Korbinian Moeller; Karin Landerl The transition from sublexical to lexical processing in a consistent orthography: An eye-tracking study Journal Article In: Scientific Studies of Reading, vol. 18, no. 3, pp. 224–233, 2014. @article{Rau2014, We studied the transition in predominant reading strategy from serial sublexical processing to more parallel lexical processing as a function of word familiarity in German children of Grades 2, 3, 4, and adults. High-frequency words, low-frequency words, and nonwords of differing length were embedded in sentences and presented in an eye-tracking paradigm. The size of the word length effect was used as an indicator of serial sublexical decoding. When controlling for the generally higher processing times in younger readers, the effect of length over reading development was not direct but modulated by familiarity: Length effects were comparable between items of differing familiarity for Grade 2, whereas from Grade 3, length effects increased with decreasing familiarity. These findings suggest that Grade 2 children apply serial sublexical decoding as a default reading strategy to most items, whereas reading by direct lexical access is increasingly dominant in more experienced readers. |
Keith Rayner The gaze-contingent moving window in reading: Development and review Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 242–258, 2014. @article{Rayner2014, The development of the gaze-contingent moving window paradigm (McConkie & Rayner, 1975, 1976) is discussed and the results of the earliest research are reviewed. The original work suggested that the region from which readers can obtain useful information during an eye fixation in reading, or the perceptual span, was asymmetric around the fixation point, and extended from 3–4 letter spaces to the left of fixation to about 14–15 letter spaces to the right of fixation. Subsequent research which substantiated these findings is discussed. Then more recent research using the moving window paradigm to investigate the following topics is discussed: (1) effects of reading speed, (2) effects of reading skill, (3) effects of the writing system, (4) effects due to age, (5) effects related to deafness, and (6) effects related to schizophrenia. Finally, some extensions of gaze-contingent paradigms to areas other than reading are discussed. |
Florian Perdreau; Patrick Cavanagh Drawing skill is related to the efficiency of encoding object structure Journal Article In: i-Perception, vol. 5, no. 2, pp. 101–119, 2014. @article{Perdreau2014, Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. First, we assessed drawing accuracy by computing the geometrical dissimilarity of their drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing for object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details. |
Effie J. Pereira; Monica S. Castelhano Peripheral guidance in scenes: The interaction of scene context and object content Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 5, pp. 2056–2072, 2014. @article{Pereira2014, In the present study, we examined how gaze guidance is affected by immediately available information in the periphery and investigated how search strategies differed across manipulations in the availability of scene context and object content information. Across 3 experiments, participants performed a visual search task in scenes while using a gaze-contingent moving-window paradigm. Extrafoveal information was manipulated across conditions to examine the contributions of object content, scene context, or some combination of the two. Experiment 1 demonstrated a possible interaction between scene context and object content information in improving guidance. Experiments 2 and 3 supported the notion that object content is selected for further scrutiny based on its position within scene context. These results suggest a prioritization of object information based on scene context, such that contextual information acts as a framework in the selection of relevant regions, and object information can then affect which specific locations in those regions are selected for further examination. |
Carolyn J. Perry; Abdullah Tahiri; Mazyar Fallah Feature integration within and across visual streams occurs at different visual processing stages Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–8, 2014. @article{Perry2014, Direction repulsion is a perceptual illusion in which the directions of two superimposed surfaces are repulsed away from the real directions of motion. The repulsion is reduced when the surfaces differ in dorsal stream features such as speed. We have previously shown that segmenting the surfaces by color, a ventral stream feature, did not affect repulsion but instead reduced the time needed to process both surfaces. The current study investigated whether segmenting two superimposed surfaces by a feature coprocessed with direction in the dorsal stream (i.e., speed) would also reduce processing time. We found that increasing the speed of one or both surfaces reduced direction repulsion. Since color segmentation does not affect direction repulsion, these results suggest that motion processing integrates speed and direction prior to forming an object representation that includes ventral stream features such as color. Like our previous results for differences in color, differences in speed also decreased processing time. Therefore, the reduction in processing time derives from a later processing stage where both ventral and dorsal features bound into the object representations can reduce the time needed for decision making when those features differentiate the superimposed surfaces from each other. |
Yoni Pertzov; Masud Husain The privileged role of location in visual working memory Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 7, pp. 1914–1924, 2014. @article{Pertzov2014, Reports have conflicted about the possible special role of location in visual working memory (WM). One important question is: Do we maintain the locations of objects in WM even when they are irrelevant to the task at hand? Here we used a continuous response scale to study the types of reporting errors that participants make when objects are presented at the same or at different locations in space. When several objects successively shared the same location, participants exhibited a higher tendency to report features of the wrong object in memory; that is, they responded with features that belonged to objects retained in memory but not probed at retrieval. On the other hand, a similar effect was not observed when objects shared a nonspatial feature, such as color. Furthermore, the effect of location on reporting errors was present even when its manipulation was orthogonal to the task at hand. These findings are consistent with the view that binding together different nonspatial features of an object in memory might be mediated through an object's location. Hence, spatial location may have a privileged role in WM. The relevance of these findings to conceptual models, as well as to neural accounts of visual WM, is discussed. |
Matthew F. Peterson; Miguel P. Eckstein Learning optimal eye movements to unusual faces Journal Article In: Vision Research, vol. 99, pp. 57–68, 2014. @article{Peterson2014, Eye movements, which guide the fovea's high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer's default face identification eye movement behavior to the new optimal fixation point and the observer's peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. |
Jella Pfeiffer; Martin Meißner; Eduard Brandstätter; René Riedl; Reinhold Decker; Franz Rothlauf On the influence of context-based complexity on information search patterns: An individual perspective Journal Article In: Journal of Neuroscience, Psychology, and Economics, vol. 7, no. 2, pp. 103–124, 2014. @article{Pfeiffer2014, Although context-based complexity measured as the similarity and conflict across alternatives is dependent on individual preference structures, existing studies investigating the influence of context-based complexity on information search patterns have largely ignored that context-based complexity is user- and preference-dependent. Addressing this research gap, this article elicits the individual preferences of decision makers by using the pairwise-comparison-based preference measurement (PCPM) technique and records individuals' search patterns using eye tracking. Our results show that an increased context-based complexity leads to an increase in information acquisition and the use of a more attribute-wise search pattern. Moreover, the information search pattern changes within a choice task as information is processed attribute-wise in earlier stages of the search process and alternative-wise in later ones. The fact that we do not find an interaction effect of context-based complexity and decision stages on the search patterns indicates that the influence of complexity on search patterns stays constant throughout the decision process and suggests that the more complex the choice task is, the later the switch from attribute-wise strategies to alternative-wise strategies will be. |
Ulrich J. Pfeiffer; Leonhard Schilbach; Bert Timmermans; Bojana Kuzmanovic; Alexandra L. Georgescu; Gary Bente; Kai Vogeley Why we interact: On the functional role of the striatum in the subjective experience of social interaction Journal Article In: NeuroImage, vol. 101, pp. 124–137, 2014. @article{Pfeiffer2014a, There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. |
Andrea Phillipou; Susan Lee Rossell; David Jonathan Castle; Caroline T. Gurvich; Larry Allen Abel Square wave jerks and anxiety as distinctive biomarkers for anorexia nervosa Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 12, pp. 8366–8370, 2014. @article{Phillipou2014, PURPOSE. The factors contributing to the cause and maintenance of anorexia nervosa (AN) are poorly understood, though increasing interest surrounds the neurobiological underpinnings of the condition. The examination of saccadic eye movements has proven useful in our understanding of the neurobiology of some other psychiatric illnesses, as they utilize identifiable brain circuits. Square wave jerks (SWJs), which describe an involuntary saccade away and back to fixation, have been observed to occur at abnormally high rates in neurodegenerative disorders and some psychiatric illnesses, but have not been examined in AN. Therefore, the aim of this study was to investigate whether individuals with AN and healthy control (HC) individuals differ in SWJ rate during attempted fixation. METHODS. Square wave jerk frequency was compared across 23 female participants with AN and 22 HC participants matched for age, sex, and premorbid intelligence. RESULTS. Anorexia nervosa participants were found to make SWJs at a significantly higher rate than HC participants. The rate of SWJs in AN was also found to negatively correlate with anxiety. Square wave jerk rate and anxiety were found to correctly classify groups, with an accuracy of 87% for AN participants and 95.5% for HCs. CONCLUSIONS. Given our current understanding of saccadic eye movements, the findings suggest a potential role of γ-aminobutyric acid (GABA) in the superior colliculus, frontal eye fields, or posterior parietal cortex in the psychopathology of AN. |
Aleksandra Pieczykolan; Lynn Huestegge Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–17, 2014. @article{Pieczykolan2014, In daily life, eye movement control usually occurs in the context of concurrent action demands in other effector domains. However, little research has focused on understanding how such cross-modal action demands are coordinated, especially when conflicting information needs to be processed conjunctly in different action modalities. In two experiments, we address this issue by studying vocal responses in the context of spatially conflicting eye movements (Experiment 1) and in the context of spatially conflicting manual actions (Experiment 2, under controlled eye fixation conditions). Crucially, a comparison across experiments allows us to assess resource scheduling priorities among the three effector systems by comparing the same (vocal) response demands in the context of eye movements in contrast to manual responses. The results indicate that in situations involving response conflict, eye movements are prioritized over concurrent action demands in another effector system. This oculomotor dominance effect corroborates previous observations in the context of multiple action demands without spatial response conflict. Furthermore, and in line with recent theoretical accounts of parallel multiple action control, resource scheduling patterns appear to be flexibly adjustable based on the temporal proximity of the two actions that need to be performed. |
Joanna Pilarczyk; Michał Kuniecki Emotional content of an image attracts attention more than visually salient features in various signal-to-noise ratio conditions Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–19, 2014. @article{Pilarczyk2014, Emotional images are processed in a prioritized manner, attracting attention almost immediately. In the present study we used eye tracking to reveal what type of features within neutral, positive, and negative images attract early visual attention: semantics, visual saliency, or their interaction. Semantic regions of interest were selected by observers, while visual saliency was determined using the Graph-Based Visual Saliency model. Images were transformed by adding pink noise in several proportions to be presented in a sequence of increasing and decreasing clarity. Locations of the first two fixations were analyzed. The results showed dominance of semantic features over visual saliency in attracting attention. This dominance was linearly related to the signal-to-noise ratio. Semantic regions were fixated more often in emotional images than in neutral ones, if signal-to-noise ratio was high enough to allow participants to comprehend the gist of a scene. Visual saliency on its own did not attract attention above chance, even in the case of pure noise images. Regions both visually salient and semantically relevant attracted a similar amount of fixation compared to semantic regions alone, or even more in the case of neutral pictures. Results provide evidence for fast and robust detection of semantically relevant features. |
Elmar H. Pinkhardt; Hazem Issa; Martin Gorges; Reinhart Jürgens; Dorothée Lulé; Johanna Heimrath; Hans Peter Müller; Albert C. Ludolph; Wolfgang Becker; Jan Kassubek In: Journal of Neurology, vol. 261, no. 4, pp. 791–803, 2014. @article{Pinkhardt2014, Small vessel cerebrovascular disease (SVCD) is one of the most frequent vessel disorders in the aged brain. Among the spectrum of neurological disturbances related to SVCD, oculomotor dysfunction is a not well understood symptom; in particular, it remains unclear whether vascular lesion load in specific brain regions affects oculomotor function independent of cognitive decline in SVCD patients or whether the effect of higher brain function deficits prevails. In this study, we examined a cohort of 25 SVCD patients and 19 healthy controls using video-oculographic eye movement recording in a laboratory environment, computer-based MRI assessment of white matter lesion load (WMLL), assessment of extrapyramidal motor deficits, and psychometric testing. The mean WMLL of patients was significantly larger than that of controls. With respect to eye movement control, patients performed significantly worse than controls in almost all aspects of oculomotion. Likewise, patients showed a significantly worse performance in all but one of the neuropsychological tests. Oculomotor deficits in SVCD correlated with the patients' cognitive dysfunctioning while there was only weak evidence for a direct effect of WMLL on eye movement control. In conclusion, oculomotor impairment in SVCD seems to be mainly contingent upon cognitive deterioration in SVCD while WMLL might have only a minor specific effect upon oculomotor pathways. |
Alessandro Piras; Roberto Lobietti; Salvatore Squatrito Response time, visual search strategy, and anticipatory skills in volleyball players Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–10, 2014. @article{Piras2014, This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing offensive action were presented to participants, while their eye movements were recorded by a head-mounted video based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter's toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter's hand-ball contact to the button pressed by the participants. Experts were faster and more accurate in predicting the direction of the setting than novices, showing accurate predictions when they used a search strategy involving fewer fixations of longer duration, as well as spending less time in fixating all display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task. |
Alessandro Piras; Emanuela Pierantozzi; Salvatore Squatrito Visual search strategy in judo fighters during the execution of the first grip Journal Article In: International Journal of Sports Science & Coaching, vol. 9, no. 1, pp. 185–198, 2014. @article{Piras2014a, Visual search behaviour is believed to be very relevant for athlete performance, especially for sports requiring refined visuo-motor coordination skills. Modern coaches believe that optimal visuo-motor strategy may be part of advanced training programs. Gaze behaviour of expert and novice judo fighters was investigated while they were doing a real sport-specific task. The athletes were tested while they performed a first grip either in an attack or defence condition. The results showed that expert judo fighters use a search strategy involving fewer fixations of longer duration than their novice counterparts. Experts spent a greater percentage of their time fixating on lapel and face with respect to other areas of the scene. On the contrary, the most frequently fixed cue for novice group was the sleeve area. It can be concluded that experts orient their gaze in the middle of the scene, both in attack and in defence, in order to gather more information at once, perhaps using parafoveal vision. |
Irina Pivneva; Julie Mercier; Debra Titone Executive control modulates cross-language lexical activation during L2 reading: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 3, pp. 787–796, 2014. @article{Pivneva2014, Models of bilingual reading such as Bilingual Interactive Activation Plus (Dijkstra & van Heuven, 2002) do not predict a central role for domain-general executive control during bilingual reading, in contrast with bilingual models from other domains, such as production (e.g., the Inhibitory Control Model; Green, 1998). We thus investigated whether individual differences among bilinguals in domain-general executive control modulate cross-language activation during L2 sentence reading, over and above other factors such as L2 proficiency. Fifty French-English bilinguals read L2-English sentences while their eye movements were recorded, and they subsequently completed a battery of executive control and L2 proficiency tasks. High- and low-constraint sentences contained interlingual homographs (chat = "casual conversation" in English, "a cat" in French), cognates (piano in English and French), or L2-specific control words. The results showed that greater executive control among bilinguals but not L2 proficiency reduced cross-language activation in terms of interlingual homograph interference. In contrast, increased L2 proficiency but not executive control reduced cross-language activation in terms of cognate facilitation. These results suggest that models of bilingual reading must incorporate mechanisms by which domain-general executive control can alter the earliest stages of bilingual lexical activation. |
Frederik Platten; Maximilian Schwalm; Julia Hülsmann; Josef Krems Analysis of compensative behavior in demanding driving situations Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 26, no. A, pp. 38–48, 2014. @article{Platten2014, Drivers usually perform a range of different activities while driving. Following a classical workload approach, additional activities are expected to increase the demand on the driver. Nevertheless, drivers can usually manage even demanding situations successfully. They seem to be able to compensate demands by behavior adaptations, mainly in the following factors: in the driving task itself, in an additional (secondary) task and in their mental workload. It is suggested that by analyzing these three factors in temporal coherence, compensative interactions between them become measurable. Additionally, a reduction of activity in the secondary task is expected to be influenced by the characteristics of this task. To analyze these effects, a driving simulator study with 33 participants was accomplished. It could be shown that if a secondary task can be interrupted without a perceived decline in performance, it is interrupted in demanding driving situations. If an interruption causes a perceived performance loss, efforts are increased, and so the workload is heightened (measured with a high resolution physiological measurement based on pupillometry). Thus, drivers compensate their current demands by behavior adaptations in different factors, depending on the characteristics of a secondary task. |
Patrick Plummer; Manuel Perea; Keith Rayner The influence of contextual diversity on eye movements in reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 1, pp. 275–283, 2014. @article{Plummer2014, Recent research has shown contextual diversity (i.e., the number of passages in which a given word appears) to be a reliable predictor of word processing difficulty. It has also been demonstrated that word-frequency has little or no effect on word recognition speed when accounting for contextual diversity in isolated word processing tasks. An eye-movement experiment was conducted wherein the effects of word-frequency and contextual diversity were directly contrasted in a normal sentence reading scenario. Subjects read sentences with embedded target words that varied in word-frequency and contextual diversity. All 1st-pass and later reading times were significantly longer for words with lower contextual diversity compared to words with higher contextual diversity when controlling for word-frequency and other important lexical properties. Furthermore, there was no difference in reading times for higher frequency and lower frequency words when controlling for contextual diversity. The results confirm prior findings regarding contextual diversity and word-frequency effects and demonstrate that contextual diversity is a more accurate predictor of word processing speed than word-frequency within a normal reading task. |
Katja Poellmann; Hans Rutger Bosker; James M. McQueen; Holger Mitterer Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch Journal Article In: Journal of Phonetics, vol. 46, no. 1, pp. 101–127, 2014. @article{Poellmann2014a, This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [v]) or syllabic (ver- → [f:]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level. |
Katja Poellmann; Holger Mitterer; James M. McQueen Use what you can: Storage, abstraction processes, and perceptual adjustments help listeners recognize reduced forms Journal Article In: Frontiers in Psychology, vol. 5, pp. 437, 2014. @article{Poellmann2014, Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, "book binder") and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, "ready"), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 and 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations. |
Rafael Polanía; Ian Krajbich; Marcus Grueschow; Christian C. Ruff Neural oscillations and synchronization differentially support evidence accumulation in perceptual and value-based decision making Journal Article In: Neuron, vol. 82, no. 3, pp. 709–720, 2014. @article{Polania2014, Organisms make two types of decisions on a regular basis. Perceptual decisions are determined by objective states of the world (e.g., melons are bigger than apples), whereas value-based decisions are determined by subjective preferences (e.g., I prefer apples to melons). Theoretical accounts suggest that both types of choice involve neural computations accumulating evidence for the choice alternatives; however, little is known about the overlap or differences in the processes underlying perceptual versus value-based decisions. We analyzed EEG recordings during a paradigm where perceptual- and value-based choices were based on identical stimuli. For both types of choice, evidence accumulation was evident in parietal gamma-frequency oscillations, whereas a similar frontal signal was unique for value-based decisions. Fronto-parietal synchronization of these signals predicted value-based choice accuracy. These findings uncover how decisions emerge from topographic- and frequency-specific oscillations that accumulate distinct aspects of evidence, with large-scale synchronization as a mechanism integrating these spatially distributed signals. |
Arezoo Pooresmaeili; Thomas H. B. FitzGerald; Dominik R. Bach; Ulf Toelch; Florian Ostendorf; Raymond J. Dolan Cross-modal effects of value on perceptual acuity and stimulus encoding Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 42, pp. 15244–15249, 2014. @article{Pooresmaeili2014, Cross-modal interactions are very common in perception. An important feature of many perceptual stimuli is their reward-predicting properties, the utilization of which is essential for adaptive behavior. What is unknown is whether reward associations in one sensory modality influence perception of stimuli in another modality. Here we show that auditory stimuli with high-reward associations increase the sensitivity of visual perception, even when sounds and reward associations are both irrelevant for the visual task. This increased sensitivity correlates with a change in stimulus representation in the visual cortex, indexed by increased multivariate decoding accuracy in simultaneously acquired functional MRI data. Univariate analysis showed that reward associations modulated responses in regions associated with multisensory processing in which the strength of modulation was a better predictor of the magnitude of the behavioral effect than the modulation in classical reward regions. Our findings demonstrate a value-driven cross-modal interaction that affects perception and stimulus encoding, with a resemblance to well-described modulatory effects of attention. We suggest that multisensory processing areas may mediate the transfer of value signals across senses. |
Ivo D. Popivanov; Jan Jastorff; Wim Vanduffel; Rufin Vogels Heterogeneous single-unit selectivity in an fMRI-defined body-selective patch Journal Article In: Journal of Neuroscience, vol. 34, no. 1, pp. 95–111, 2014. @article{Popivanov2014, Although the visual representation of bodies is essential for reproduction, survival, and social communication, little is known about the mechanisms of body recognition at the single neuron level. Imaging studies showed body-category selective regions in the primate occipitotemporal cortex, but it is difficult to infer the stimulus selectivities of the neurons from the population activity measured in these fMRI studies. To overcome this, we recorded single unit activity and local field potentials (LFPs) in the middle superior temporal sulcus body patch, defined by fMRI in the same rhesus monkeys. Both the spiking activity, averaged across single neurons, and LFP gamma power in this body patch was greater for bodies (including monkey bodies, human bodies, mammals, and birds) compared with other objects, which fits the fMRI activation. Single neurons responded to a small proportion of body images. Thus, the category selectivity at the population level resulted from averaging responses of a heterogeneous population of single units. Despite such strong within-category selectivity at the single unit level, two distinct clusters, bodies and nonbodies, were present when analyzing the responses at the population level, and a classifier that was trained using the responses to a subset of images was able to classify novel images of bodies with high accuracy. The body-patch neurons showed strong selectivity for individual body parts at different orientations. Overall, these data suggest that single units in the fMRI-defined body patch are biased to prefer bodies over nonbody objects, including faces, with a strong selectivity for individual body images. |
Cheolsoo Park; Markus Plank; Joseph Snider; Sanggyun Kim; He Crane Huang; Sergei Gepshtein; Todd P. Coleman; Howard Poizner In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 5, pp. 1083–1096, 2014. @article{Park2014, The neural dynamics underlying the coordination of spatially-directed limb and eye movements in humans is not well understood. Part of the difficulty has been a lack of signal processing tools suitable for the analysis of non-stationary electroencephalographic (EEG) signals. Here we use multivariate empirical mode decomposition (MEMD), a data-driven approach that does not employ predefined basis functions. High-density EEG, and arm and eye movements were synchronously recorded in 10 subjects performing time-constrained reaching and/or eye movements. Subjects were allowed to move both the hand and the eyes, only the hand, or only the eyes following a 500-700 ms delay interval where the hand and gaze remained on a central fixation cross. An additional condition involved a non-spatially-directed "lift" movement of the hand. The neural activity during a 500 ms delay interval was decomposed into intrinsic mode functions (IMFs) using MEMD. Classification analysis revealed that gamma band (>30 Hz) IMFs produced more classifiable features differentiating the EEG according to the different upcoming movements. A benchmark test using conventional algorithms demonstrated that MEMD was the best algorithm for extracting oscillatory bands from EEG, yielding the best classification of the different movement conditions. The gamma rhythm decomposed using MEMD showed a higher correlation with the eventual movement accuracy than any other band rhythm and than any other algorithm. |
Hyeong Dong Park; Stéphanie Correia; Antoine Ducorps; Catherine Tallon-Baudry Spontaneous fluctuations in neural responses to heartbeats predict visual detection Journal Article In: Nature Neuroscience, vol. 17, no. 4, pp. 612–618, 2014. @article{Park2014a, Spontaneous fluctuations of ongoing neural activity substantially affect sensory and cognitive performance. Because bodily signals are constantly relayed up to the neocortex, neural responses to bodily signals are likely to shape ongoing activity. Here, using magnetoencephalography, we show that in humans, neural events locked to heartbeats before stimulus onset predict the detection of a faint visual grating in the posterior right inferior parietal lobule and the ventral anterior cingulate cortex, two regions that have multiple functional correlates and that belong to the same resting-state network. Neither fluctuations in measured bodily parameters nor overall cortical excitability could account for this finding. Neural events locked to heartbeats therefore shape visual conscious experience, potentially by contributing to the neural maps of the organism that might underlie subjectivity. Beyond conscious vision, our results show that neural events locked to a basic physiological input such as heartbeats underlie behaviorally relevant differential activation in multifunctional cortical areas. |
Benjamin A. Parris Task conflict in the Stroop task: When Stroop interference decreases as Stroop facilitation increases in a low task conflict context Journal Article In: Frontiers in Psychology, vol. 5, pp. 1182, 2014. @article{Parris2014, In the present study participants completed two blocks of the Stroop task, one in which the response-stimulus interval (RSI) was 3500 ms and one in which RSI was 200 ms. It was expected that, in line with previous research, the shorter RSI would induce a low Task Conflict context by increasing focus on the color identification goal in the Stroop task and lead to a novel finding of an increase in facilitation and simultaneous decrease in interference. Such a finding would be problematic for models of Stroop effects that predict these indices of performance should be affected in tandem. A crossover interaction is reported supporting these predictions. As predicted, the shorter RSI resulted in incongruent and congruent trial reaction times (RTs) decreasing relative to a static neutral baseline condition; hence interference decreased as facilitation increased. An explanatory model (expanding on the work of Goldfarb and Henik, 2007) is presented that: (1) Shows how under certain conditions the predictions from single mechanism models hold true (i.e., when Task conflict is held constant); (2) Shows how it is possible that interference can be affected by an experimental manipulation that leaves facilitation apparently untouched; and (3) Predicts that facilitation cannot be independently affected by an experimental manipulation. |
Kevin B. Paterson; Victoria A. McGowan; Sarah J. White; Sameen Malik; Lily Abedipour; Timothy R. Jordan Reading direction and the central perceptual span in Urdu and English Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e88358, 2014. @article{Paterson2014, BACKGROUND: Normal reading relies on the reader making a series of saccadic eye movements along lines of text, separated by brief fixational pauses during which visual information is acquired from a region of text. In English and other alphabetic languages read from left to right, the region from which useful information is acquired during each fixational pause is generally reported to extend further to the right of each fixation than to the left. However, the asymmetry of the perceptual span for alphabetic languages read in the opposite direction (i.e., from right to left) has received much less attention. Accordingly, in order to more fully investigate the asymmetry in the perceptual span for these languages, the present research assessed the influence of reading direction on the perceptual span for bilingual readers of Urdu and English. METHODS AND FINDINGS: Text in Urdu and English was presented either entirely as normal or in a gaze-contingent moving-window paradigm in which a region of text was displayed as normal at the reader's point of fixation and text outside this region was obscured. The windows of normal text extended symmetrically 0.5° of visual angle to the left and right of fixation, or asymmetrically by increasing the size of each window to 1.5° or 2.5° to either the left or right of fixation. When participants read English, performance for the window conditions was superior when windows extended to the right. However, when reading Urdu, performance was superior when windows extended to the left, and was essentially the reverse of that observed for English. CONCLUSION: These findings provide a novel indication that the perceptual span is modified by the language being read to produce an asymmetry in the direction of reading and show for the first time that such an asymmetry occurs for reading Urdu. |
C. A. Patterson; Jacob Duijnhouwer; S. C. Wissig; B. Krekelberg; Adam Kohn Similar adaptation effects in primary visual cortex and area MT of the macaque monkey under matched stimulus conditions Journal Article In: Journal of Neurophysiology, vol. 111, no. 6, pp. 1203–1213, 2014. @article{Patterson2014, Recent stimulus history, or adaptation, can alter neuronal response properties. Adaptation effects have been characterized in a number of visually responsive structures, from the retina to higher visual cortex. However, it remains unclear whether adaptation effects across stages of the visual system take a similar form in response to a particular sensory event. This is because studies typically probe a single structure or cortical area, using a stimulus ensemble chosen to provide potent drive to the cells of interest. Here we adopt an alternative approach and compare adaptation effects in primary visual cortex (V1) and area MT using identical stimulus ensembles. Previous work has suggested these areas adjust to recent stimulus drive in distinct ways. We show that this is not the case: adaptation effects in V1 and MT can involve weak or strong loss of responsivity and shifts in neuronal preference toward or away from the adapter, depending on stimulus size and adaptation duration. For a particular stimulus size and adaptation duration, however, effects are similar in nature and magnitude in V1 and MT. We also show that adaptation effects in MT of awake animals depend strongly on stimulus size. Our results suggest that the strategies for adjusting to recent stimulus history depend more strongly on adaptation duration and stimulus size than on the cortical area. Moreover, they indicate that different levels of the visual system adapt similarly to recent sensory experience. |
Clare Patterson; Helena Trompelt; Claudia Felser The online application of binding condition B in native and non-native pronoun resolution Journal Article In: Frontiers in Psychology, vol. 5, pp. 147, 2014. @article{Patterson2014a, Previous research has shown that anaphor resolution in a non-native language may be more vulnerable to interference from structurally inappropriate antecedents compared to native anaphor resolution. To test whether previous findings on reflexive anaphors generalize to non-reflexive pronouns, we carried out an eye-movement monitoring study investigating the application of binding condition B during native and non-native sentence processing. In two online reading experiments we examined when during processing local and/or non-local antecedents for pronouns were considered in different types of syntactic environment. Our results demonstrate that both native English speakers and native German-speaking learners of English showed online sensitivity to binding condition B in that they did not consider syntactically inappropriate antecedents. For pronouns thought to be exempt from condition B (so-called "short-distance pronouns"), the native readers showed a weak preference for the local antecedent during processing. The non-native readers, on the other hand, showed a preference for the matrix subject even where local coreference was permitted, and despite demonstrating awareness of short-distance pronouns' referential ambiguity in a complementary offline task. This indicates that non-native comprehenders are less sensitive during processing to structural cues that render pronouns exempt from condition B, and prefer to link a pronoun to a salient subject antecedent instead. |
David A. Paul; Elon Gaffin-Cahn; Eric B. Hintz; Giscard J. Adeclat; Tong Zhu; Zoë R. Williams; G. Edward Vates; Bradford Z. Mahon White matter changes linked to visual recovery after nerve decompression Journal Article In: Science Translational Medicine, vol. 6, no. 266, pp. 266ra173, 2014. @article{Paul2014, The relationship between the integrity of white matter tracts and cortical function in the human brain remains poorly understood. We investigate reversible white matter injury, in this case patients with compression of the optic chiasm by pituitary gland tumors, to study the structural and functional changes that attend spontaneous recovery of cortical function and visual abilities after surgical removal of the tumor and subsequent decompression of the nerves. We show that compression of the optic chiasm led to demyelination of the optic tracts, which reversed as quickly as 4 weeks after nerve decompression. Furthermore, variability across patients in the severity of demyelination in the optic tracts predicted visual ability and functional activity in early cortical visual areas. Preoperative measurements of myelination in the optic tracts predicted the magnitude of visual recovery after surgery. These data indicate that rapid regeneration of myelin in the human brain is a component of the normalization of cortical activity, and ultimately the recovery of sensory and cognitive function, after nerve decompression. More generally, our findings demonstrate the use of diffusion tensor imaging as an in vivo measure of myelination in the human brain. |
Angela M. Pazzaglia; Adrian Staub; Caren M. Rotello Encoding time and the mirror effect in recognition memory: Evidence from eyetracking Journal Article In: Journal of Memory and Language, vol. 75, pp. 77–92, 2014. @article{Pazzaglia2014, Low-frequency (LF) words have higher hit rates and lower false alarm rates than high-frequency (HF) words in recognition memory, a phenomenon termed the mirror effect. Visual word recognition latencies are longer for LF words. We examined the relationship between eye fixation durations during study and later recognition memory for individual words to test whether (1) increased fixation time on a word is associated with better memory, and (2) increased fixation times on LF words can account for their hit rate advantage. In Experiments 1 and 2, words of various frequencies were presented in lists in an intentional study design. In Experiment 3, HF and LF critical words were presented in matched sentence frames in an incidental study design. In all cases, the standard frequency effect on eye movements emerged, with longer reading times for lower frequency words. At test, studied words and new words from each frequency class were presented. The hit rate portion of the mirror effect was evident in all experiments. The time spent fixating a word did predict memory performance in the intentional encoding experiments, but critically, the frequency effect on hit rates was independent of this effect. Time spent fixating a word during incidental encoding did not predict later memory performance. These results suggest that the hit rate advantage for LF words is not due to the additional time spent on these words at encoding, which is consistent with retrieval-stage models of the mirror effect. |
Benjamin Pearson; Julius Raskevicius; Paul M. Bays; Yoni Pertzov; Masud Husain Working memory retrieval as a decision process Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–15, 2014. @article{Pearson2014, Working memory (WM) is a core cognitive process fundamental to human behavior, yet the mechanisms underlying it remain highly controversial. Here we provide a new framework for understanding retrieval of information from WM, conceptualizing it as a decision based on the quality of internal evidence. Recent findings have demonstrated that precision of WM decreases with memory load. If WM retrieval uses a decision process that depends on memory quality, systematic changes in response time distribution should occur as a function of WM precision. We asked participants to view sample arrays and, after a delay, report the direction of change in location or orientation of a probe. As WM precision deteriorated with increasing memory load, retrieval time increased systematically. Crucially, the shape of reaction time distributions was consistent with a linear accumulator decision process. Varying either task relevance of items or maintenance duration influenced memory precision, with corresponding shifts in retrieval time. These results provide strong support for a decision-making account of WM retrieval based on noisy storage of items. Furthermore, they show that encoding, maintenance, and retrieval in WM need not be considered as separate processes, but may instead be conceptually unified as operations on the same noise-limited neural representation. |
Tyler R. Peel; Kevin D. Johnston; Stephen G. Lomber; Brian D. Corneil Bilateral saccadic deficits following large and reversible inactivation of unilateral frontal eye field Journal Article In: Journal of Neurophysiology, vol. 111, no. 2, pp. 415–433, 2014. @article{Peel2014, Inactivation permits direct assessment of the functional contribution of a given brain area to behavior. Previous inactivation studies of the frontal eye field (FEF) have either used large permanent ablations or reversible pharmacological techniques that only inactivate a small volume of tissue. Here we evaluated the impact of large, yet reversible, FEF inactivation on visually guided, delayed, and memory-guided saccades, using cryoloops implanted in the arcuate sulcus. While FEF inactivation produced the expected triad of contralateral saccadic deficits (increased reaction time, decreased accuracy and peak velocity) and performance errors (neglect or misdirected saccades), we also found consistent increases in reaction times of ipsiversive saccades in all three tasks. In addition, FEF inactivation did not increase the proportion of premature saccades to ipsilateral targets, as was predicted on the basis of pharmacological studies. Consistent with previous studies, greater deficits accompanied saccades toward extinguished visual cues. Our results attest to the functional contribution of the FEF to saccades in both directions. We speculate that the comparative effects of different inactivation techniques relate to the volume of inactivated tissue within the FEF. Larger inactivation volumes may reveal the functional contribution of more sparsely distributed neurons within the FEF, such as those related to ipsiversive saccades. Furthermore, while focal FEF inactivation may disinhibit the mirroring site in the other FEF, larger inactivation volumes may induce broad disinhibition in the other FEF that paradoxically prolongs oculomotor processing via increased competitive interactions. |
Didem Pehlivanoglu; Shivangi Jain; Robert Ariel; Paul Verhaeghen The ties to unbind: Age-related differences in feature (un)binding in working memory for emotional faces Journal Article In: Frontiers in Psychology, vol. 5, pp. 253, 2014. @article{Pehlivanoglu2014, In the present study, we investigated age-related differences in the processing of emotional stimuli. Specifically, we were interested in whether older adults would show deficits in unbinding emotional expression (i.e., either no emotion, happiness, anger, or disgust) from bound stimuli (i.e., photographs of faces expressing these emotions), as a hyper-binding account of age-related differences in working memory would predict. Younger and older adults completed different N-Back tasks (side-by-side 0-Back, 1-Back, 2-Back) under three conditions: match/mismatch judgments based on either the identity of the face (identity condition), the face's emotional expression (expression condition), or both identity and expression of the face (both condition). The two age groups performed more slowly and with lower accuracy in the expression condition than in the both condition, indicating the presence of an unbinding process. This unbinding effect was more pronounced in older adults than in younger adults, but only in the 2-Back task. Thus, older adults seemed to have a specific deficit in unbinding in working memory. Additionally, no age-related differences were found in accuracy in the 0-Back task, but such differences emerged in the 1-Back task, and were further magnified in the 2-Back task, indicating independent age-related differences in attention/STM and working memory. Pupil dilation data confirmed that the attention/STM version of the task (1-Back) is more effortful for older adults than younger adults. |
Jennifer Olejarczyk; Steven G. Luke; John M. Henderson Incidental memory for parts of scenes from eye movements Journal Article In: Visual Cognition, vol. 22, no. 7, pp. 975–995, 2014. @article{Olejarczyk2014, Incidental memory for parts of scenes was examined in two search experiments and one memory control experiment. Eye movements were recorded during the search experiments and used to select gaze-contingent sections from search scenes for a surprise memory recognition task. Results from the recognition task showed incidental memory was better for sections viewed longer and with multiple fixations. Sections not fixated during search were still recognized above chance as well. Differences in sections did not affect memory performance in a control experiment when viewing time was held constant. These results show that memory for parts of scenes can occur incidentally during search and encoding of tested sections is better with longer viewing time and with multiple fixations. |
Rosanna K. Olsen; Mark Chiew; Bradley R. Buchsbaum; Jennifer D. Ryan The relationship between delay period eye movements and visuospatial memory Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–11, 2014. @article{Olsen2014, We investigated whether overt shifts of attention were associated with visuospatial memory performance. Participants were required to study the locations of a set of visual objects and subsequently detect changes to the spatial location of one of the objects following a brief delay period. Relational information regarding the locations among all of the objects could be used to support performance on the task (Experiment 1) or relational information was removed during test and location manipulation judgments had to be made for a singly presented target item (Experiment 2). We computed the similarity of the fixation patterns in space during the study phase to the fixations made during the delay period. Greater fixation pattern similarity across participants was associated with higher accuracy when relational information was available at test (Experiment 1); however, this association was not observed when the target item was presented in isolation during the test display (Experiment 2). Similarly, increased fixation pattern similarity on a given trial (within participants) was associated with successful task performance when the relations among studied items could be used for comparison (Experiment 1), but not when memory for absolute spatial location was assessed (Experiment 2). This pattern of behavior and performance on the two tasks suggested that eye movements facilitated memory for the relationships among objects. Shifts of attention through eye movements may provide a mechanism for the maintenance of relational visuospatial memory. |
Selim Onat; Alper Açik; Frank Schumann; Peter König The contributions of image content and behavioral relevancy to overt attention Journal Article In: PLoS ONE, vol. 9, no. 4, pp. e93254, 2014. @article{Onat2014, During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability of a given feature to be fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than any low-level image feature and any pair-wise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that had already high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to be dependent mainly on interestingness ratings and to a lesser extent on the low-level image features. Our results suggest that both low- and high-level sources of information play a significant role during exploration of complex scenes with behaviorally relevant information being more effective compared to stimulus features. |
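The "Bayesian scheme" for estimating how likely a given feature value is to be fixated is described only at a high level in the abstract. A minimal sketch of the underlying idea, assuming a simple histogram ratio (by Bayes' rule, P(fixated | feature) is proportional to P(feature | fixated) / P(feature)), is given below; the bin count and estimator are assumptions, not the authors' implementation.

```python
import numpy as np

def feature_saliency(feature_map, fix_rows, fix_cols, n_bins=20):
    """Compare the distribution of a feature at fixated pixels with its
    distribution over the whole image. Ratio values above 1 mark feature
    values that attract fixations more often than expected by chance."""
    edges = np.histogram_bin_edges(feature_map, bins=n_bins)
    p_feature, _ = np.histogram(feature_map, bins=edges, density=True)
    p_feature_fix, _ = np.histogram(feature_map[fix_rows, fix_cols],
                                    bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, p_feature_fix / (p_feature + 1e-12)
```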
K. Ooms; Philippe De Maeyer; V. Fack Study of the attentive behavior of novice and expert map users using eye tracking Journal Article In: Cartography and Geographic Information Science, vol. 41, no. 1, pp. 37–54, 2014. @article{Ooms2014, The aim of this paper is to gain better understanding of the way map users read and interpret the visual stimuli presented to them and how this can be influenced. In particular, the difference between expert and novice map users is considered. In a user study, the participants studied four screen maps which had been manipulated to introduce deviations. The eye movements of 24 expert and novice participants were tracked, recorded, and analyzed (both visually and statistically) based on a grid of Areas of Interest. These visual analyses are essential for studying the spatial dimension of maps to identify problems in design. In this research, we used visualization of eye movement metrics (fixation count and duration) in a 2D and 3D grid and a statistical comparison of the grid cells. The results show that the users' eye movements clearly reflect the main elements on the map. The users' attentive behavior is influenced by deviating colors, as their attention is drawn to them. This could also influence the users' interpretation process. Both user groups encountered difficulties when trying to interpret and store map objects that were mirrored. Insights into how different types of map users read and interpret map content are essential in this fast-evolving era of digital cartographic products. |
Isabel Orenes; David Beltrán; Carlos Santamaría How negation is understood: Evidence from the visual world paradigm Journal Article In: Journal of Memory and Language, vol. 74, pp. 36–45, 2014. @article{Orenes2014, This paper explores how negation (e.g., the figure is not red) is understood using the visual world paradigm. Our hypothesis is that people will switch to the alternative affirmative (e.g., a green figure) whenever possible, but will be able to maintain the negated argument (e.g., a non-red figure) when needed. To test this, we presented either a specific verbal context (binary: the figure could be red or green) or an unspecified verbal context (multary: the figure could be red or green or yellow or blue). Then, affirmative and negative sentences (e.g., the figure is (not) red) were heard while four figures were shown on the screen and eye movements were monitored. We found that people shifted their visual attention toward the alternative in the binary context, but focused on the negated argument in the multary context. Our findings corroborated our hypothesis and shed light on two issues that are currently under debate about how negation is represented and processed. Regarding representation, our results support the ideas that (1) the negative operator plays a role in the mental representation, and consequently a symbolic representation of negation is possible, and (2) it is not necessary to use a two-step process to represent and understand negation. |
Tania Ortuno; Kenneth L. Grieve; Ricardo Cao; Javier Cudeiro; Casto Rivadulla Bursting thalamic responses in awake monkey contribute to visual detection and are modulated by corticofugal feedback Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 198, 2014. @article{Ortuno2014, The lateral geniculate nucleus is the gateway for visual information en route to the visual cortex. Neural activity is characterized by the existence of two firing modes: burst and tonic. Originally associated with sleep, bursts have now been postulated to be a part of the normal visual response, structured to increase the probability of cortical activation, able to act as a "wake-up" call to the cortex. We investigated a potential role for burst in the detection of novel stimuli by recording neuronal activity in the lateral geniculate nucleus (LGN) of behaving monkeys during a visual detection task. Our results show that bursts are often the neuron's first response, and are more numerous in the response to attended target stimuli than to unattended distractor stimuli. Bursts are indicators of the task novelty, as repetition decreased bursting. Because the primary visual cortex is the major modulatory input to the LGN, we compared the results obtained in control conditions with those observed when cortical activity was reduced by TMS. This cortical deactivation reduced visual response related bursting by 90%. These results highlight a novel role for the thalamus, able to code higher order image attributes as important as novelty early in the thalamo-cortical conversation. |
Jorge Otero-Millan; Jose L. Alba Castro; Stephen L. Macknik; Susana Martinez-Conde Unsupervised clustering method to detect microsaccades Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–17, 2014. @article{OteroMillan2014, Microsaccades, small involuntary eye movements that occur once or twice per second during attempted visual fixation, are relevant to perception, cognition, and oculomotor control and present distinctive characteristics in visual and oculomotor pathologies. Thus, the development of robust and accurate microsaccade-detection techniques is important for basic and clinical neuroscience research. Due to the diminutive size of microsaccades, however, automatic and reliable detection can be difficult. Current challenges in microsaccade detection include reliance on set, arbitrary thresholds and lack of objective validation. Here we describe a novel microsaccade-detecting method, based on unsupervised clustering techniques, that does not require an arbitrary threshold and provides a detection reliability index. We validated the new clustering method using real and simulated eye-movement data. The clustering method reduced detection errors by 62% for binocular data and 78% for monocular data, when compared to standard contemporary microsaccade-detection techniques. Further, the clustering method's reliability index was correlated with the microsaccade-detection error rate, suggesting that the reliability index may be used to determine the comparative precision of eye-tracking devices. |
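The clustering approach itself is only summarized in the abstract; the sketch below illustrates the general threshold-free idea (candidate velocity peaks grouped by k-means into "saccade-like" versus noise events) under stated assumptions. The feature set, k = 2, and the peak-picking step are illustrative simplifications, and the sketch omits the published method's reliability index.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def cluster_detect_saccades(x, y, fs=1000.0):
    """Toy threshold-free detector: treat local velocity peaks as candidate
    events, describe each by log peak velocity and log peak acceleration,
    and let k-means (k=2) separate saccade-like events from noise peaks.
    Returns sample indices of the detected (micro)saccade peaks."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)                               # deg/s
    accel = np.abs(np.gradient(speed)) * fs                # deg/s^2
    peaks, _ = find_peaks(speed, distance=int(0.02 * fs))  # candidate events
    win = max(int(0.01 * fs), 1)
    peak_accel = np.array([accel[max(p - win, 0):p + win].max() for p in peaks])
    feats = np.column_stack([np.log(speed[peaks] + 1e-9),
                             np.log(peak_accel + 1e-9)])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    saccade_cluster = np.argmax([speed[peaks][labels == k].mean() for k in (0, 1)])
    return peaks[labels == saccade_cluster]
```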
Jorge Otero-Millan; Stephen L. Macknik; Susana Martinez-Conde Fixational eye movements and binocular vision Journal Article In: Frontiers in Integrative Neuroscience, vol. 8, pp. 52, 2014. @article{OteroMillan2014a, During attempted visual fixation, small involuntary eye movements – called fixational eye movements – continuously change the position of our gaze. Disagreement between the left and right eye positions during such motions can produce diplopia (double vision). Thus, the ability to properly coordinate the two eyes during gaze fixation is critical for stable perception. For the last 50 years, researchers have studied the binocular characteristics of fixational eye movements. Here we review classical and recent studies on the binocular coordination (i.e., degree of conjugacy) of each fixational eye movement type: microsaccades, drift, and tremor, and their perceptual contribution to increasing or reducing binocular disparity. We also discuss how amblyopia and other visual pathologies affect the binocular coordination of fixational eye movements. |
Ricki-Leigh Elliot; Linda E. Campbell; Mick Hunter; Gavin Cooper; Jessica L. Melville; Kathryn L. McCabe; Louise Newman; Carmel M. Loughland When I look into my baby's eyes... infant emotion recognition by mothers with Borderline Personality Disorder Journal Article In: Infant Mental Health Journal, vol. 35, pp. 21–32, 2014. @article{Elliot2014, Mothers with borderline personality disorder (BPD) have disturbed relationships with their infants, possibly associated with poor nonverbal cue perception. Individuals with BPD are poor at recognizing emotion in adults and tend to misattribute neutral (i.e., no emotion) as sad. This study extends previous research by examining how mothers with BPD perceive known (own) and unknown (control) infant stimuli depicting happy, sad, and neutral emotions. The sample consisted of 13 women diagnosed with BPD and 13 healthy control mothers. All participants completed clinical and parenting questionnaires and an infant emotion recognition task. Compared to control mothers, mothers with BPD were significantly poorer at infant emotion recognition overall, but especially neutral expressions, which were misattributed most often as sad. Performance was not related to disturbed parenting but rather to mothers' age and illness duration. Neither the BPD nor control mothers showed enhanced accuracy for emotional displays of their own versus unknown infant-face images. Although the sample size was small, this study provides evidence that mothers with BPD negatively misinterpret neutral images, which may impact sensitive responding to infant emotional cues. These findings have implications for clinical practice and the development of remediation programs targeting emotion-perception disturbances in mothers with BPD. |
Jessica J. Ellis; Eyal M. Reingold The Einstellung effect in anagram problem solving: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 5, pp. 679, 2014. @article{Ellis2014, The Einstellung effect is the counterintuitive finding that prior experience or domain-specific knowledge can under some circumstances interfere with problem solving performance. This effect has been demonstrated in several domains of expertise including medicine and chess. In the present study we explored this effect in the context of a simplified anagram problem solving task. Participants solved anagram problems while their eye movements were monitored. Each problem consisted of six letters: a central three-letter string whose letters were part of the solution word, and three additional individual letters. Participants were informed that one of the individual letters was a distractor letter and were asked to find a five-letter solution word. In order to examine the impact of stimulus familiarity on problem solving performance and eye movements, the central letter string was presented either as a familiar three-letter word, or the letters were rearranged to form a three-letter nonword. Replicating the classic Einstellung effect, overall performance was better for nonword than word trials. However, participants' eye movements revealed a more complex pattern of both interference and facilitation as a function of the familiarity of the central letter string. Specifically, word trials resulted in shorter viewing times on the central letter string and longer viewing times on the individual letters than nonword trials. These findings suggest that while participants were better able to encode and maintain the meaningful word stimuli in working memory, they found it more challenging to integrate the individual letters into the central letter string when it was presented as a word. |
Nick C. Ellis; Kausar Hafeez; Katherine I. Martin; Lillian Chen; Julie E. Boland; Nuria Sagarra An eye-tracking study of learned attention in second language acquisition Journal Article In: Applied Psycholinguistics, vol. 35, no. 3, pp. 547–579, 2014. @article{Ellis2014a, This paper investigates the limited attainment of adult compared to child language acquisition in terms of learned attention to morphological cues. It replicates Ellis and Sagarra in demonstrating short-term learned attention in the acquisition of temporal reference in Latin, and it extends the investigation using eye-tracking indicators to determine the extent to which these biases are overt or covert. English native speakers learned adverbial and morphological cues to temporal reference in a small set of Latin phrases under experimental conditions. Comprehension and production data demonstrated that early experience with adverbial cues enhanced subsequent use of this cue dimension and blocked the acquisition of verbal tense morphology. Effects of early experience of verbal morphology were less pronounced. Eye-tracking measures showed that early experience of particular cue dimensions affected what participants overtly focused upon during subsequent language processing and how this overt study resulted in turn in covert attentional biases in comprehension and in productive knowledge. |
Paul E. Engelhardt Children's and adolescents' processing of temporary syntactic ambiguity: An eye movement study Journal Article In: Child Development Research, vol. 2014, no. 13, pp. 1–13, 2014. @article{Engelhardt2014, This study examined the eye movements of 24 children and adolescents as they read sentences containing temporary syntactic ambiguities. Prior research suggested that children primarily use grammatical information when making initial parsing decisions, and they tend to disregard semantic and contextual information. On each trial, participants read a garden path sentence (e.g., While the storm blew the boat sat in the shed), and, afterwards, they answered a comprehension question (e.g., Did the storm blow the boat?). The design was 2 × 2 (verb type × ambiguity) repeated measures. Verb type was optionally transitive or reflexive, and sentences were ambiguous or unambiguous. Results showed no differences in first pass reading times at the disambiguating verb (e.g., sat). However, regressions did show a significant interaction. The unambiguous-reflexive condition had approximately half the number of regressions, suggesting less processing difficulty in this condition. Developmentally, we found that adolescents had significantly better comprehension, which seemed to be linked to the increased tendency to regress from the disambiguating word. Findings are consistent with the assumption that the processing architecture is more restricted in children compared to adolescents. In addition, results indicated that variance in ambiguity resolution was associated with interference control but not working memory. |
Paul E. Engelhardt; Fernanda Ferreira In: Language, Cognition and Neuroscience, vol. 29, no. 8, pp. 975–985, 2014. @article{Engelhardt2014a, Studies have shown that speakers often include unnecessary modifiers when producing referential expressions, which is contrary to the Maxim of Quantity. In this study, we examined the production of referring expressions (e.g. the red triangle) that contained an over-described (or redundant) pre-nominal adjective modifier. These expressions were compared to similar expressions that were uttered in a context that made the modifier necessary for unique referent identification. Our hypothesis was that speakers articulate over-described modifiers differently from those used to distinguish contrasting objects. Results showed that over-described modifiers were significantly shorter in duration than modifiers used to distinguish two objects. Conclusions focus on how these acoustic differences can be modelled by Natural Language Generation algorithms, such as the Incremental Algorithm, in combination with probabilistic prosodic reduction. |
Yulia Esaulova; Chiara Reali; Lisa Stockhausen Influences of grammatical and stereotypical gender during reading: Eye movements in pronominal and noun phrase anaphor resolution Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 7, pp. 781–803, 2014. @article{Esaulova2014, Two eye-tracking studies addressed the processing of grammatical and stereotypical gender cues in anaphor resolution in German. The authors investigated pronominal (er 'he'/sie 'she') and noun phrase (dieser Mann 'this man'/diese Frau 'this woman') anaphors in sentences containing stereotypical role nouns as antecedents (Example: 'Oft hatte der Elektriker gute Einfälle, regelmäßig plante er/dieser Mann neue Projekte' – 'Often had the electrician good ideas, regularly planned he/this man new projects'). Participants were native speakers of German (N=40 and N=24 in Experiments 1 and 2, respectively). Results show that influences of grammatical gender occur in early stages of processing, whereas the influences of stereotypical gender appear only in later measures. Both effects, however, strongly depend on the type of anaphor. Furthermore, the results provide evidence for asymmetries in processing feminine and masculine grammatical gender and are discussed with reference to two-stage models of anaphor resolution. |
Sarah C. Creel Impossible to _gnore: Word-form inconsistency slows preschool children's word-learning Journal Article In: Language Learning and Development, vol. 10, no. 1, pp. 68–95, 2014. @article{Creel2014, Many studies have examined language acquisition under morphosyntactic or semantic inconsistency, but few have considered word-form inconsistency. Many young learners encounter word-form inconsistency due to accent variation in their communities. The current study asked how preschoolers recognize accent-variants of newly learned words. Can preschoolers generalize recognition based on partial match to the learned form? When learning in two accents simultaneously, do children ignore inconsistent elements, or encode two word forms (one per accent)? Three- to 5-year-olds learned words in a novel-word learning paradigm but did not generalize to new accent-like pronunciations (Experiment 1) unless familiar-word recognition trials were interspersed (Experiments 3 and 4), which apparently generated a familiar-word-recognition pragmatic context. When exposure included two accent-variants per word, children were less accurate (Experiment 2) and slower to look to referents (Experiments 2, 5) relative to one-accent learning. Implications for language learning and accent processing over development are discussed. |
Sarah C. Creel Tipping the scales: Auditory cue weighting changes over development Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1146–1160, 2014. @article{Creel2014a, How does auditory processing change over development? This study assessed preschoolers' and adults' sensitivity to pitch contour, pitch height, and timbre in an association-memory paradigm, with both explicit (overt recognition) and implicit measures (visual fixations to melody-linked objects). In the first 2 experiments, child and adult participants associated each of 2 melodies with a cartoon picture, and recognition was tested. Experiment 1 pitted pitch contour cues against pitch height cues, and Experiment 2 pitted contour cues against timbre cues. Although adults were sensitive to multiple cues, children responded predominantly based on pitch height and timbre, with little sensitivity to pitch contour. In Experiment 3, however, children detected changes to all 3 cues well above chance levels. Results overall suggest that contour differences, although readily perceptible, are less memorable to children than to adults. Gradual perceptual learning over development may increase the memorability of pitch contour. |
Sarah C. Creel Preschoolers' flexible use of talker information during word learning Journal Article In: Journal of Memory and Language, vol. 73, no. 1, pp. 81–98, 2014. @article{Creel2014b, Previous research suggests that preschool-aged children use novel information about talkers' preferences (e.g. favorite colors) to guide on-line language processing. But can children encode information about talkers while simultaneously learning new words, and if so, how is talker information encoded? In five experiments, children learned pairs of early-overlapping words (geeb, geege); a particular talker spoke each word. Across experiments, children learned labels for novel referents, showing an advantage for original-voice repetitions of words which appeared to stem mainly from semantic person-referent mappings (who liked what referent). Specifically, children looked to voice-matched referents when a talker asked for their own favorite ("I want to see the geege") or when the liker was unspecified ("Point to the geege"), but they looked to voice mismatched referents when a talker asked on behalf of the other talker ("Conor wants to see the geege"). Initial looks to voice-matched referents were flexibly corrected when later information became available (Anna saying "Find the geege for Conor"). Voice-matching looks vanished when talkers labeled the other talker's favorite referent during learning, possibly because children had learned two conflicting person-referent mappings: Anna likes-geeb vs. Anna talks-about-geege. Results imply that children's language input may be conditioned on talker context quite early in language learning. |
Sébastien M. Crouzet; Morten Overgaard; Niko A. Busch The fastest saccadic responses escape visual masking Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e87418, 2014. @article{Crouzet2014, Object-substitution masking (OSM) occurs when a briefly presented target in a search array is surrounded by small dots that remain visible after the target disappears. The reduction of target visibility occurring after OSM has been suggested to result from a specific interference with reentrant visual processing while the initial feedforward processing is thought to be left intact. We tested a prediction derived from this hypothesis: the fastest responses, being triggered before the beginning of reentrant processing, should escape the OSM interference. In a saccadic choice reaction time task, which gives access to very early stages of visual processing, target visibility was reduced either by OSM, conventional backward masking, or low stimulus contrast. A general reduction of performance was observed in all three conditions. However, the fastest saccades did not show any sign of interference under either OSM or backward masking, as they did under the low-contrast condition. This finding supports the hypothesis that masking interferes mostly with reentrant processing at later stages, while leaving early feedforward processing largely intact. |
Lei Cui; Denis Drieghe; Xuejun Bai; Guoli Yan; Simon P. Liversedge Parafoveal preview benefit in unspaced and spaced Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 11, pp. 2172–2188, 2014. @article{Cui2014, In an eye movement experiment during reading, we compared parafoveal preview benefit during the reading of Chinese sentences either in the familiar, unspaced format or with spaces inserted between the words. Single-character words or the first of a two-character word were either presented normally or were replaced by a pseudocharacter in the preview. Results indicate that word spacing increased the parafoveal preview benefit but only for the one-character target words. We hypothesized that the incorrect preview of the first character of the two-character word prevented parafoveal processing of the ensuing character(s), effectively nullifying any benefits from the spacing. Our results suggest that word boundary demarcation allows for more precise focusing of attention. |
Ian Cunnings; Clare Patterson; Claudia Felser Variable binding and coreference in sentence comprehension: Evidence from eye movements Journal Article In: Journal of Memory and Language, vol. 71, no. 1, pp. 39–56, 2014. @article{Cunnings2014a, The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants' eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants' reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time. |
Ian Cunnings; Patrick Sturt Coargumenthood and the processing of reflexives Journal Article In: Journal of Memory and Language, vol. 75, pp. 117–139, 2014. @article{Cunnings2014, We report three eye-movement experiments and an antecedent choice task investigating the interpretation of reflexives in different syntactic contexts. This included contexts in which the reflexive and a local antecedent were coarguments of the same verbal predicate (John heard that the soldier had injured himself), and also so-called picture noun phrases, either with a possessor (John heard about the soldier's picture of himself) or without (John heard that the soldier had a picture of himself). While results from the antecedent choice task indicated that comprehenders would choose a nonlocal antecedent ('John' above) for reflexives in either type of picture noun phrase, the eye-movement experiments suggested that participants preferred to initially interpret the reflexive in each context as referring to the local antecedent ('the soldier'), as indexed by longer reading times when it mismatched in gender with the reflexive. We also observed a difference in the time-course of this effect. While it was observed during first-pass processing at the reflexive for coargument reflexives and those in picture noun phrases with a possessor, it was comparatively delayed for reflexives in possessorless picture noun phrases. These results suggest that locality constraints are more strongly weighted cues to retrieval than gender agreement for both coargument reflexives and those inside picture noun phrases. We interpret the observed time-course differences as indexing the relative ease of accessing the local antecedent in different syntactic contexts. |
Michael G. Cutter; Denis Drieghe; Simon P. Liversedge Preview benefit in English spaced compounds Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 6, pp. 1778–1786, 2014. @article{Cutter2014, In an eye tracking experiment during reading we examined whether preview benefit could be observed from 2 words to the right of the currently fixated word if that word was the 2nd constituent of a spaced compound. The boundary paradigm (Rayner, 1975) was used to orthogonally manipulate whether participants saw an identity or nonword preview of the 1st (e.g., teddy) and 2nd constituent (e.g., bear) of a spaced compound located immediately beyond the boundary, respectively, words n + 1 and n + 2. Linear mixed-effects models revealed that participants gained an n + 2 preview benefit, such that they spent less time fixated on word n + 1 when given an identity preview of word n + 2. However, this effect was only observed if there was also an identity preview of word n + 1. Our findings suggest that the 2 constituent words of spaced compounds are processed as part of a larger lexical unit during natural reading. |
Marzena Cypryańska; Izabela Krejtz; Aleksandra Jaskółowska; Alicja Kulawik; Aleksandra Żukowska; Agnieszka Golec De Zavala; Jakub Niewiarowski; John B. Nezlek An experimental study of the influence of limited time horizon on positivity effects among young adults using eye-tracking Journal Article In: Psychological Reports, vol. 115, no. 3, pp. 813–827, 2014. @article{Cypryanska2014, Compared to younger adults, older adults attend more to positive stimuli, a positivity effect. Older adults have limited time horizons, and they focus on maintaining positive affect, whereas younger adults have unlimited time horizons, and they focus on acquiring knowledge and developing skills. Time horizons were manipulated by asking participants (66 young adults, M age = 20.5 yr. |
Pierre M. Daye; Lance M. Optican Saccade detection using a particle filter Journal Article In: Journal of Neuroscience Methods, vol. 235, pp. 157–168, 2014. @article{Daye2014, Background: When healthy subjects track a moving target, "catch-up" saccades are triggered to compensate for the non-perfect tracking gain. The evaluation of the pursuit and/or saccade kinematics requires that saccade and pursuit components be separated from the eye movement trace. A similar situation occurs when analyzing eye movements of patients, which could contain eye drifts between saccades. This task is especially difficult because the range of saccadic amplitudes goes from microsaccades (less than 1°) to large exploratory saccades (40°). New method: In this paper we propose a new algorithm to detect saccades based on a particle filter. The new method suppresses the baseline velocity component linked to smooth pursuit (or to eye drifts) and thus permits a constant threshold during a trial despite the smooth pursuit behavior. It also accounts for a wide range of saccade amplitudes. Results: The new method is validated with five different paradigms: microsaccades, microsaccades plus saccades with drift, linear target motion, non-linear target motion, and free viewing. The sensitivity of the method to signal noise is analyzed. Comparison with existing methods: Traditional saccade detection algorithms using a velocity (or acceleration or jerk) threshold can be inadequate because of the baseline velocity component linked to the smooth pursuit (especially if the target motion is non-linear, i.e. not constant velocity) or to eye drifts between saccades. Conclusions: The new method detects saccades in challenging situations involving eye movements between saccades (smooth pursuit and/or eye drifts) and unfiltered recordings. |
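The particle filter itself is not reproduced here, but the following toy sketch illustrates the general principle the abstract describes: track the slow (pursuit or drift) velocity baseline with a simple bootstrap particle filter, then flag samples whose residual velocity exceeds a constant threshold. The state model, noise parameters, and threshold are assumptions for illustration only, not the published algorithm.

```python
import numpy as np

def detect_saccades_pf(eye_vel, n_particles=500, process_sd=0.5,
                       obs_sd=3.0, threshold=30.0, seed=0):
    """Bootstrap particle filter over the slow velocity baseline (deg/s).
    Saccades are samples whose velocity deviates from the estimated baseline
    by more than `threshold`. Returns a boolean saccade mask."""
    rng = np.random.default_rng(seed)
    particles = np.full(n_particles, eye_vel[0], dtype=float)
    weights = np.full(n_particles, 1.0 / n_particles)
    baseline = np.empty_like(eye_vel, dtype=float)
    for t, v in enumerate(eye_vel):
        particles += rng.normal(0.0, process_sd, n_particles)   # random-walk baseline
        weights *= np.exp(-0.5 * ((v - particles) / obs_sd) ** 2)
        weights += 1e-300                                        # avoid degeneracy
        weights /= weights.sum()
        baseline[t] = np.sum(weights * particles)
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:         # resample if ESS is low
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.abs(eye_vel - baseline) > threshold
```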
Anouk J. Brouwer; Eli Brenner; W. Pieter Medendorp; Jeroen B. J. Smeets Time course of the effect of the Müller-Lyer illusion on saccades and perceptual judgments Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–11, 2014. @article{Brouwer2014, The amplitude of saccadic eye movements is affected by size illusions such as the Müller-Lyer illusion, but this effect varies highly between studies. Here we examine the origin of this variability by testing the influence of three temporal factors on the effect of the Müller-Lyer illusion: presentation time, response delay, and saccade latency. Subjects performed reflexive saccades, deferred saccades, and memory-guided saccades along the shaft of the illusion. We evaluated the time course of the saccadic illusion effects and compared it to the influence of presentation time on the illusion effect in a perceptual judgment task. According to the "two visual systems hypothesis", visual perception and visual memory rely on a perceptual representation coded along the ventral "perception" pathway, which is affected by visual contextual illusions. Visuomotor actions, such as saccades, depend on the dorsal "action" pathway that is largely immune to illusions. In contrast with this hypothesis, our results show that the illusion affected both saccade amplitude and perceptual judgments with a similar time course. Presentation time of the Müller-Lyer illusion, not response delay or saccade latency, was the major factor in determining the size of the illusion effect. Longer presentation times resulted in smaller effects, suggesting that our visual representation is dynamic and becomes more accurate when we look at an object for a longer time before we act on it. |
Natalie Bruin; Devon C. Bryant; Claudia L. R. Gonzalez "Left neglected," but only in far space: Spatial biases in healthy participants revealed in a visually guided grasping task Journal Article In: Frontiers in Neurology, vol. 5, pp. 4, 2014. @article{Bruin2014, Hemispatial neglect is a common outcome of stroke that is characterized by the inability to orient toward, and attend to stimuli in contralesional space. It is established that hemispatial neglect has a perceptual component, however, the presence and severity of motor impairments is controversial. Establishing the nature of space use and spatial biases during visually guided actions amongst healthy individuals is critical to understanding the presence of visuomotor deficits in patients with neglect. Accordingly, three experiments were conducted to investigate the effect of object spatial location on patterns of grasping. Experiment 1 required right-handed participants to reach and grasp for blocks in order to construct 3D models. The blocks were scattered on a tabletop divided into equal size quadrants: left near, left far, right near, and right far. Identical sets of building blocks were available in each quadrant. Space use was dynamic, with participants initially grasping blocks from right near space and tending to "neglect" left far space until the final stages of the task. Experiment 2 repeated the protocol with left-handed participants. Remarkably, left-handed participants displayed a similar pattern of space use to right-handed participants. In Experiment 3 eye movements were examined to investigate whether "neglect" for grasping in left far reachable space had its origins in attentional biases. It was found that patterns of eye movements mirrored patterns of reach-to-grasp movements. We conclude that there are spatial biases during visually guided grasping, specifically, a tendency to neglect left far reachable space, and that this "neglect" is attentional in origin. The results raise the possibility that visuomotor impairments reported among patients with right hemisphere lesions when working in contralesional space may result in part from this inherent tendency to "neglect" left far space irrespective of the presence of unilateral visuospatial neglect. |
Jan Willem Gee; Tomas Knapen; Tobias H. Donner Decision-related pupil dilation reflects upcoming choice and individual bias Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, pp. E618–E625, 2014. @article{Gee2014, A number of studies have shown that pupil size increases transiently during effortful decisions. These decision-related changes in pupil size are mediated by central neuromodulatory systems, which also influence the internal state of brain regions engaged in decision making. It has been proposed that pupil-linked neuromodulatory systems are activated by the termination of decision processes, and, consequently, that these systems primarily affect the postdecisional brain state. Here, we present pupil results that run contrary to this proposal, suggesting an important intradecisional role. We measured pupil size while subjects formed protracted decisions about the presence or absence (“yes” vs. “no”) of a visual contrast signal embedded in dynamic noise. Linear systems analysis revealed that the pupil was significantly driven by a sustained input throughout the course of the decision formation. This sustained component was larger than the transient component during the final choice (indicated by button press). The overall amplitude of pupil dilation during decision formation was bigger before yes than no choices, irrespective of the physical presence of the target signal. Remarkably, the magnitude of this pupil choice effect (yes > no) reflected the individual criterion: it was strongest in conservative subjects choosing yes against their bias. We conclude that the central neuromodulatory systems controlling pupil size are continuously engaged during decision formation in a way that reveals how the upcoming choice relates to the decision maker's attitude. Changes in brain state seem to interact with biased decision making in the face of uncertainty. |
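The "linear systems analysis" of the pupil can be approximated with a deconvolution-style general linear model: event regressors (a sustained boxcar spanning decision formation and a transient impulse at the choice) are convolved with a canonical pupil impulse response and regressed onto the measured pupil trace. The sketch below assumes the commonly used Hoeks and Levelt (1993) impulse-response parameterization and a 50 Hz pupil signal; it is an illustration of the approach, not the authors' exact analysis.

```python
import numpy as np

def pupil_irf(t, n=10.1, t_max=0.93):
    """Canonical pupil impulse response (Hoeks & Levelt, 1993 parameterization)."""
    h = t ** n * np.exp(-n * t / t_max)
    return h / h.max()

def fit_pupil_glm(pupil, trial_onsets, choice_times, fs=50.0):
    """Regress sustained (decision interval) and transient (choice) inputs,
    each convolved with the pupil IRF, onto the pupil trace. `trial_onsets`
    and `choice_times` are sample indices. Returns [sustained, transient, baseline]."""
    n_samples = len(pupil)
    irf = pupil_irf(np.arange(0, 4, 1.0 / fs))           # 4-s kernel
    sustained = np.zeros(n_samples)
    transient = np.zeros(n_samples)
    for onset, choice in zip(trial_onsets, choice_times):
        sustained[onset:choice] = 1.0    # input throughout decision formation
        transient[choice] = 1.0          # input at the button press
    X = np.column_stack([np.convolve(reg, irf)[:n_samples]
                         for reg in (sustained, transient)] + [np.ones(n_samples)])
    betas, *_ = np.linalg.lstsq(X, pupil, rcond=None)
    return betas
```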
Peter Lissa; Genevieve McArthur; Stefan Hawelka; Romina Palermo; Yatin Mahajan; Florian Hutzler Fixation location on upright and inverted faces modulates the N170 Journal Article In: Neuropsychologia, vol. 57, no. 1, pp. 1–11, 2014. @article{Lissa2014, The current study used event-related potentials (ERP) in combination with a variable viewing position paradigm (VVPP) to direct fixations to specific face parts (eyes or mouths) in upright or inverted whole faces. The N170 elicited by the VVPP was greater to faces than to non-face objects (wristwatches), and was delayed and enhanced in response to face inversion. A larger N170 response was elicited when the participants' fixation was directed to the eyes than when directed to the mouths of both upright and inverted faces, an effect that was also modulated by the spatial location of the face in the visual field. The N170 face inversion effect (upright minus inverted) was greater when fixations were directed to the mouth than when directed to the eyes, suggesting that the point of fixation within a face modulates brain potentials due to contributions from the features themselves, as well as their relative location in the visual field. |
Benoît De Smet; Lorent Lempereur; Zohreh Sharafi; Yann Gaël Guéhéneuc; Giuliano Antoniol; Naji Habra Taupe: Visualizing and analyzing eye-tracking data Journal Article In: Science of Computer Programming, vol. 79, pp. 260–278, 2014. @article{DeSmet2014, Program comprehension is an essential part of any maintenance activity. It allows developers to build mental models of the program before undertaking any change. It has been studied by the research community for many years with the aim to devise models and tools to understand and ease this activity. Recently, researchers have introduced the use of eye-tracking devices to gather and analyze data about the developers' cognitive processes during program comprehension. However, eye-tracking devices are not completely reliable and, thus, recorded data sometimes must be processed, filtered, or corrected. Moreover, the analysis software tools packaged with eye-tracking devices are not open-source and do not always provide extension points to seamlessly integrate new sophisticated analyses. Consequently, we develop the Taupe software system to help researchers visualize, analyze, and edit the data recorded by eye-tracking devices. The two main objectives of Taupe are compatibility and extensibility so that researchers can easily: (1) apply the system on any eye-tracking data and (2) extend the system with their own analyses. To meet our objectives, we base the development of Taupe: (1) on well-known good practices, such as design patterns and a plug-in architecture using reflection, (2) on a thorough documentation, validation, and verification process, and (3) on lessons learned from existing analysis software systems. This paper describes the context of development of Taupe, the architectural and design choices made during its development, and its documentation, validation and verification process. It also illustrates the application of Taupe in three experiments on the use of design patterns by developers during program comprehension. |
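The plug-in architecture based on reflection that the abstract mentions can be outlined with a small loader. The sketch below is written in Python for consistency with the other examples (it is not Taupe's implementation language or API); the package name `taupe_plugins` and the `Analysis` base class are hypothetical.

```python
import importlib
import inspect
import pkgutil

class Analysis:
    """Hypothetical base class that an eye-tracking analysis plug-in would subclass."""
    name = "base"

    def run(self, fixations):
        raise NotImplementedError

def discover_plugins(package_name="taupe_plugins"):
    """Reflection-based loader: import every module in the plug-in package and
    instantiate each class that subclasses `Analysis`, so new analyses can be
    dropped in without modifying the host application."""
    plugins = []
    package = importlib.import_module(package_name)
    for module_info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{module_info.name}")
        for _, obj in inspect.getmembers(module, inspect.isclass):
            if issubclass(obj, Analysis) and obj is not Analysis:
                plugins.append(obj())
    return plugins
```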
Michael Colombo; James S. Magnuson Eye movements reveal planning in humans: A comparison with Scarf and Colombo's (2009) monkeys Journal Article In: Journal of Experimental Psychology: Animal Learning and Cognition, vol. 40, no. 2, pp. 178–184, 2014. @article{Colombo2014, On sequential response tasks, a long pause preceding the first response is thought to reflect participants taking time to plan a sequence of responses. By tracking the eye movements of two monkeys (Macaca fascicularis), Scarf and Colombo (2009, Eye Movements During List Execution Reveal No Planning in Monkeys [Macaca fascicularis], Journal of Experimental Psychology: Animal Behavior Processes, Vol. 35, pp. 587–592) demonstrated that, at least with respect to monkeys, the long pause preceding the first response is not necessarily the product of planning. In the present experiment, we tracked the eye movements of adult humans using the paradigm employed by Scarf and Colombo and found that, in contrast to monkeys, the pause preceding the first item is indicative of planning in humans. These findings highlight the fact that similar response time profiles, displayed by human and nonhuman animals, do not necessarily reflect similar underlying cognitive operations. |
Jennifer E. Corbett; David Melcher Stable statistical representations facilitate visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 5, pp. 1915–1925, 2014. @article{Corbett2014a, Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability. |
Jennifer E. Corbett; David Melcher Characterizing ensemble statistics: Mean size is represented across multiple frames of reference Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 3, pp. 746–758, 2014. @article{Corbett2014, The visual system represents the overall statistical, not individual, properties of sets. Here we tested the spatial nature of ensemble statistics. We used a mean-size adaptation paradigm (Corbett et al. in Visual Cognition, 20, 211-231, 2012) to examine whether average size is encoded in multiple reference frames. We adapted observers to patches of small- and large-sized dots in opposite regions of the display (left/right or top/bottom) and then tested their perceptions of the sizes of single test dots presented in regions that corresponded to retinotopic, spatiotopic, and hemispheric coordinates within the adapting displays. We observed retinotopic, spatiotopic, and hemispheric adaptation aftereffects, such that participants perceived a test dot as being larger when it was presented in the area adapted to the patch of small dots than when it was presented in the area adapted to large dots. This aftereffect also transferred between eyes. Our results demonstrate that mean size is represented across multiple spatial frames of reference, supporting the proposal that ensemble statistics play a fundamental role in maintaining perceptual stability. |
Jason C. Coronel; Kara D. Federmeier Task demands modulate decision and eye movement responses in the chimeric face test: Examining the right hemisphere processing account Journal Article In: Frontiers in Psychology, vol. 5, pp. 229, 2014. @article{Coronel2014, A large and growing body of work, conducted in both brain-intact and brain-damaged populations, has used the free viewing chimeric face test as a measure of hemispheric dominance for the extraction of emotional information from faces. These studies generally show that normal right-handed individuals tend to perceive chimeric faces as more emotional if the emotional expression is presented on the half of the face to the viewer's left (“left hemiface”). However, the mechanisms underlying this lateralized bias remain unclear. Here, we examine the extent to which this bias is driven by right hemisphere processing advantages versus default scanning biases in a unique way – by changing task demands. In particular, we compare the original task with one in which right-hemisphere-biased processing cannot provide a decision advantage. Our behavioral and eye-movement data are inconsistent with the predictions of a default scanning bias account and support the idea that the left hemiface bias found in the chimeric face test is largely due to strategic use of right hemisphere processing mechanisms. |
Francisco M. Costela; Jorge Otero-Millan; Michael B. McCamy; Stephen L. Macknik; Xoana G. Troncoso; Ali Najafian Jazi; Sharon M. Crook; Susana Martinez-Conde Fixational eye movement correction of blink-induced gaze position errors Journal Article In: PLoS ONE, vol. 9, no. 10, pp. e110889, 2014. @article{Costela2014, Our eyes move continuously. Even when we attempt to fix our gaze, we produce "fixational" eye movements including microsaccades, drift and tremor. The potential role of microsaccades versus drifts in the control of eye position has been debated for decades and remains in question today. Here we set out to determine the corrective functions of microsaccades and drifts on gaze-position errors due to blinks in non-human primates (Macaca mulatta) and humans. Our results show that blinks contribute to the instability of gaze during fixation, and that microsaccades, but not drifts, correct fixation errors introduced by blinks. These findings provide new insights about eye position control during fixation, and indicate a more general role of microsaccades in fixation correction than thought previously. |
Antoine Coutrot; N. Guyader How saliency, faces, and sound influence gaze in dynamic social scenes Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–17, 2014. @article{Coutrot2014, Conversation scenes are a typical example in which classical models of visual attention dramatically fail to predict eye positions. Indeed, these models rarely consider faces as particular gaze attractors and never take into account the important auditory information that always accompanies dynamic social scenes. We recorded the eye movements of participants viewing dynamic conversations taking place in various contexts. Conversations were seen either with their original soundtracks or with unrelated soundtracks (unrelated speech and abrupt or continuous natural sounds). First, we analyze how auditory conditions influence the eye movement parameters of participants. Then, we model the probability distribution of eye positions across each video frame with a statistical method (Expectation-Maximization), allowing the relative contribution of different visual features such as static low-level visual saliency (based on luminance contrast), dynamic low-level visual saliency (based on motion amplitude), faces, and center bias to be quantified. Through experimental and modeling results, we show that regardless of the auditory condition, participants look more at faces, and especially at talking faces. Hearing the original soundtrack makes participants follow the speech turn-taking more closely. However, we do not find any difference between the different types of unrelated soundtracks. These eye-tracking results are confirmed by our model that shows that faces, and particularly talking faces, are the features that best explain the gazes recorded, especially in the original soundtrack condition. Low-level saliency is not a relevant feature to explain eye positions made on social scenes, even dynamic ones. Finally, we propose groundwork for an audiovisual saliency model. |
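The Expectation-Maximization step described above (modelling the distribution of eye positions on each video frame as a mixture of components whose weights quantify the contribution of candidate gaze attractors) can be sketched with scikit-learn, whose GaussianMixture estimator is fitted by EM. The component count here is arbitrary, and the mapping of components onto specific feature maps (saliency, faces, center bias) is left out; this illustrates the statistical machinery, not the published model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gaze_mixture(gaze_xy, n_components=3, seed=0):
    """Fit a Gaussian mixture (via EM) to the (x, y) eye positions of all
    observers on one video frame. Returns the mixture weights (relative
    contribution of each component), means (locations), and covariances (spread)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(np.asarray(gaze_xy))
    return gmm.weights_, gmm.means_, gmm.covariances_
```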
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier Video viewing: Do auditory salient events capture visual attention? Journal Article In: Annals of Telecommunications, vol. 69, no. 1-2, pp. 89–97, 2014. @article{Coutrot2014a, We assess whether salient auditory events contained in soundtracks modify eye movements when exploring videos. In a previous study, we found that, on average, nonspatial sound contained in video soundtracks impacts on eye movements. This result indicates that sound could play a leading part in visual attention models to predict eye movements. In this research, we go further and test whether the effect of sound on eye movements is stronger just after salient auditory events. To automatically spot salient auditory events, we used two auditory saliency models: the discrete energy separation algorithm and the energy model. Both models provide a saliency time curve, based on the fusion of several elementary audio features. The most salient auditory events were extracted by thresholding these curves. We examined some eye movement parameters just after these events rather than on all the video frames. We showed that the effect of sound on eye movements (variability between eye positions, saccade amplitude, and fixation duration) was not stronger after salient auditory events than on average over entire videos. Thus, we suggest that sound could impact on visual exploration not only after salient events but in a more global way. |
David G. Cowan; Eric J. Vanman; Mark Nielsen Motivated empathy: The mechanics of the empathic gaze Journal Article In: Cognition and Emotion, vol. 28, no. 8, pp. 1522–1530, 2014. @article{Cowan2014, Successful human social interactions frequently rely on appropriate interpersonal empathy and eye contact. Here, we report a previously unseen relationship between trait empathy and eye-gaze patterns to affective facial features in video-based stimuli. Fifty-nine healthy adult participants had their eyes tracked while watching a three-minute long "sad" and "emotionally neutral" video. The video stimuli portrayed the head and shoulders of the same actor recounting a fictional personal event. Analyses revealed that the greater participants' trait emotional empathy, the more they fixated on the eye-region of the actor, regardless of the emotional valence of the video stimuli. Our findings provide the first empirical evidence of a relationship between empathic capacity and eye-gaze pattern to the most affective facial region (eyes). |
M. A. Cox; Kaleb A. Lowe; Randolph Blake; A. Maier Sustained perceptual invisibility of solid shapes following contour adaptation to partial outlines Journal Article In: Consciousness and Cognition, vol. 26, no. 1, pp. 37–50, 2014. @article{Cox2014, Contour adaptation (CA) is a recently described paradigm that renders otherwise salient visual stimuli temporarily perceptually invisible. Here we investigate whether this illusion can be exploited to study visual awareness. We found that CA can induce seconds of sustained invisibility following similarly long periods of uninterrupted adaptation. Furthermore, even fragmented adaptors are capable of producing CA, with the strength of CA increasing monotonically as the adaptors encompass a greater fraction of the stimulus outline. However, different types of adaptor patterns, such as distinctive shapes or illusory contours, produce equivalent levels of CA suggesting that the main determinants of CA are low-level stimulus characteristics, with minimal modulation by higher-order visual processes. Taken together, our results indicate that CA has desirable properties for studying visual awareness, including the production of prolonged periods of perceptual dissociation from stimulation as well as parametric dependencies of that dissociation on a host of stimulus parameters. |
David P. Crabb; Nicholas D. Smith; Haogang Zhu What's on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths Journal Article In: Frontiers in Aging Neuroscience, vol. 6, pp. 312, 2014. @article{Crabb2014, PURPOSE: We test the hypothesis that age-related neurodegenerative eye disease can be detected by examining patterns of eye movement recorded whilst a person naturally watches a movie. METHODS: Thirty-two elderly people with healthy vision (median age: 70, interquartile range [IQR] 64-75 years) and 44 patients with a clinical diagnosis of glaucoma (median age: 69, IQR 63-77 years) had standard vision examinations including automated perimetry. Disease severity was measured using a standard clinical measure (visual field mean deviation; MD). All study participants viewed three unmodified TV and film clips on a computer setup incorporating the EyeLink 1000 eye tracker (SR Research, Ontario, Canada). Eye movement scanpaths were plotted using novel methods that first filtered the data and then generated saccade density maps. Maps were then subjected to a feature extraction analysis using kernel principal component analysis (KPCA). Features from the KPCA were then classified using a standard machine-based classifier trained and tested by a 10-fold cross-validation, which was repeated 100 times to estimate the confidence interval (CI) of classification sensitivity and specificity. RESULTS: Patients had a range of disease severity from early to advanced (median [IQR] right eye and left eye MD was -7 [-13 to -5] dB and -9 [-15 to -4] dB, respectively). Average sensitivity for correctly identifying a glaucoma patient at a fixed specificity of 90% was 79% (95% CI: 58-86%). The area under the Receiver Operating Characteristic curve was 0.84 (95% CI: 0.82-0.87). CONCLUSIONS: Large volumes of data from scanpaths of eye movements, recorded whilst people freely watch TV-type films, can be processed into maps that contain a signature of vision loss. In this proof-of-principle study we have demonstrated that a group of patients with age-related neurodegenerative eye disease can be reasonably well separated from a group of healthy peers by considering these eye movement signatures alone. |
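As a rough illustration of the KPCA-plus-classifier pipeline described above, the sketch below runs kernel PCA followed by a simple classifier inside repeated 10-fold cross-validation on placeholder data. The map dimensions, RBF kernel settings, and Gaussian naive Bayes classifier are assumptions for illustration only and are not the study's actual saccade density maps or classifier.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Placeholder data: each row is a flattened saccade density map; labels mark
# glaucoma patients (1) versus healthy controls (0).
rng = np.random.default_rng(1)
X = rng.random((76, 400))                  # 76 observers, 20 x 20 maps (illustrative)
y = np.array([0] * 32 + [1] * 44)

pipeline = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=1e-2),   # feature extraction
    GaussianNB(),                                           # simple classifier
)

# Repeated stratified 10-fold cross-validation to estimate accuracy
scores = []
for seed in range(100):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores.append(cross_val_score(pipeline, X, y, cv=cv).mean())
print(f"mean accuracy {np.mean(scores):.2f} +/- {np.std(scores):.2f}")
```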
Adele Diederich; Annette Schomburg; Marieke K. Vugt Fronto-central theta oscillations are related to oscillations in saccadic response times (SRT): An EEG and behavioral data analysis Journal Article In: PLoS ONE, vol. 9, no. 11, pp. e112974, 2014. @article{Diederich2014, The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase values become highly constant across trials (Schroeder et al., 2008). From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al., 2008). We follow up on a study in which evidence of phase reset was found using a purely behavioral paradigm, this time also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) across a range from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to perform a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct spectral peaks within a frequency range of 6 to 11 Hz with a mode of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset in 7-Hz brain oscillations in a widespread set of channels. Moreover, there was a significant difference in the average phase at which the visual target stimulus appeared between slow and fast SRT trials. This effect was evident in three different analyses, and occurred primarily in frontal and central electrodes. |
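The behavioral part of the analysis above treats mean SRT as a function of SOA (0 to 404 ms in 4 ms steps) as a finely sampled curve whose periodic components can be examined spectrally. The sketch below is a hedged illustration of that idea on synthetic data, not the authors' pipeline: it detrends a simulated SRT-by-SOA curve and inspects the amplitude spectrum for a peak in the 6 to 11 Hz band.

```python
import numpy as np

# SOA axis: 0 to 404 ms in 4 ms steps, i.e. an effective "sampling rate" of 250 Hz
soa = np.arange(0, 408, 4) / 1000.0        # seconds
fs = 250.0

# Synthetic mean SRT curve: slow linear trend plus a 7 Hz oscillation (illustration)
rng = np.random.default_rng(2)
srt = 0.250 - 0.05 * soa + 0.004 * np.sin(2 * np.pi * 7 * soa)
srt += 0.002 * rng.standard_normal(soa.size)

# Remove the linear trend, then inspect the amplitude spectrum
detrended = srt - np.polyval(np.polyfit(soa, srt, 1), soa)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(detrended.size, d=1.0 / fs)

band = (freqs >= 6) & (freqs <= 11)        # theta-range band of interest
print("peak frequency in 6-11 Hz band:", freqs[band][np.argmax(spectrum[band])])
```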
Kevin C. Dieter; Bo Hu; David C. Knill; Randolph Blake; Duje Tadin Kinesthesis can make an invisible hand visible Journal Article In: Psychological Science, vol. 25, no. 1, pp. 66–75, 2014. @article{Dieter2014, Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown whether actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one's own hand in front of one's covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that grapheme-color synesthetes experience substantially stronger kinesthesis-induced visual sensations than nonsynesthetes do. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants' ability to smoothly track self-generated hand movements with their eyes in darkness, which indicates that these sensations function like typical retinally driven visual sensations. Evidently, even in the complete absence of external visual input, the brain predicts visual consequences of actions. |
Barnaby J. Dixson; Gina M. Grimshaw; Diane K. Ormsby; Alan F. Dixson Eye-tracking women's preferences for men's somatotypes Journal Article In: Evolution and Human Behavior, vol. 35, no. 2, pp. 73–79, 2014. @article{Dixson2014, Judging physical attractiveness involves sight, touch, sound and smells. Where visual judgments are concerned, attentional processes may have evolved to prioritize sex-typical traits that reflect cues signaling direct or indirect (i.e. genetic) benefits. Behavioral techniques that measure response times or eye movements provide a powerful test of this assumption by directly assessing how attractiveness influences the deployment of attention. We used eye-tracking to characterize women's visual attention to men's back-posed bodies, which varied in overall fat and muscle distribution, while they judged the potential of each model for a short- or long-term relationship. We hypothesized that when judging male bodily attractiveness women would focus more on the upper body musculature of all somatotypes, as it is a signal of metabolic health, immunocompetence and underlying endocrine function. Results showed that mesomorphs (muscular men) received the highest attractiveness ratings, followed by ectomorphs (lean men) and endomorphs (heavily-set men). For eye movements, attention was evenly distributed to the upper and lower back of both ectomorphs and mesomorphs. In contrast, for endomorphs the lower back, including the waist, captured more attention over the viewing period. These patterns in visual attention were evident in the first second of viewing, suggesting that body composition is identified early in viewing and guides attention to body regions that provide salient biological information during judgments of men's bodily attractiveness. |
Dejan Draschkow; Jeremy M. Wolfe; Melissa L. -H. Võ Seek and you shall remember: Scene semantics interact with visual search to build better memories Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–18, 2014. @article{Draschkow2014, Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. |
Stefania Vito; Antimo Buonocore; Jean François Bonnefon; Sergio Della Sala Eye movements disrupt spatial but not visual mental imagery Journal Article In: Cognitive Processing, vol. 15, no. 4, pp. 543–549, 2014. @article{Vito2014, It has long been known that eye movements are functionally involved in the generation and maintenance of mental images. Indeed, a number of studies demonstrated that voluntary eye movements interfere with mental imagery tasks (e.g., Laeng and Teodorescu in Cogn Sci 26:207-231, 2002). However, mental imagery is conceived as a multifarious cognitive function with at least two components, a spatial component and a visual component. The present study investigated the question of whether eye movements disrupt mental imagery in general or only its spatial component. We present data on healthy young adults, who performed visual and spatial imagery tasks concurrently with a smooth pursuit. In line with previous literature, results revealed that eye movements had a strong disruptive effect on spatial imagery. Moreover, we crucially demonstrated that eye movements had no disruptive effect when participants visualized the depictive aspects of an object. Therefore, we suggest that eye movements serve to a greater extent the spatial than the visual component of mental imagery. |
Jelmer P. De Vries; Ignace T. C. Hooge; Frans A. J. Verstraten Saccades toward the target are planned as sequences rather than as single steps Journal Article In: Psychological Science, vol. 25, no. 1, pp. 215–223, 2014. @article{DeVries2014, To find a target during visual search, observers often need to make multiple eye movements, which results in a scan path. It is an open question whether the saccade destinations in scan paths are planned ahead. In the two experiments reported here, we investigated this question by focusing on the observer's ability to deviate from potentially planned paths. In the first experiment, the stimulus configuration could change during the initial saccade. We found that the observer's ability to deviate from potentially planned paths crucially depended on whether altered configurations could be processed with sufficient rapidity. In a follow-up experiment, we asked whether planned paths can include more than two saccade destinations. Investigating the influence of potentially planned paths on a secondary task demonstrated that planned paths can include at least three saccade destinations. Together, these experiments provide the first evidence of scan-path planning in visual search. |
Louis F. Dell'Osso; Faruk H. Orge; Jonathan B. Jacobs; Zhong I. Wang Fusion maldevelopment (latent/manifest latent) nystagmus syndrome: Effects of four-muscle tenotomy and reattachment Journal Article In: Journal of Pediatric Ophthalmology & Strabismus, vol. 51, no. 3, pp. 180–188, 2014. @article{DellOsso2014, PURPOSE: To examine the waveform and clinical effects of the four-muscle tenotomy and reattachment procedure in fusion maldevelopment nystagmus syndrome (FMNS) and to compare them to those documented in infantile nystagmus syndrome (INS) and acquired nystagmus. METHODS: Both infrared reflection and high-speed digital video systems were used to record the eye movements in a patient with FMNS (before and after tenotomy and reattachment). Data were analyzed using the eXpanded Nystagmus Acuity Function (NAFX) that is part of the OMtools software. Model simulations and predictions were performed using the authors' behavioral ocular motor system model in MATLAB Simulink (The MathWorks, Inc., Natick, MA). RESULTS: The model predicted, and the patient's data confirmed, that the tenotomy and reattachment procedure produces improvements in FMN waveforms across a broader field of gaze and decreases the Alexander's law variation. The patient's post-surgical plots of NAFX versus gaze angle were higher and had a lower slope than the pre-surgical plots. Clinically, despite moderate improvements in both peak measured acuity and stereoacuity, dramatic improvements in the patient's abilities and lifestyle resulted. CONCLUSIONS: The four-muscle tenotomy and reattachment nystagmus surgery produced beneficial therapeutic effects on FMN waveforms that are similar to those demonstrated in INS and acquired nystagmus. These results support the authors' prior recommendation that tenotomy and reattachment should be added to required strabismus procedures in patients who also have FMNS (i.e., perform tenotomy and reattachment on all unoperated muscles in the plane of the nystagmus). Furthermore, when strabismus surgery is not required, four-muscle tenotomy and reattachment may be used to improve FMN waveforms and visual function. |
Denton J. DeLoss; Takeo Watanabe; George J. Andersen Optimization of perceptual learning: Effects of task difficulty and external noise in older adults Journal Article In: Vision Research, vol. 99, pp. 37–45, 2014. @article{DeLoss2014, Previous research has shown a wide array of age-related declines in vision. The current study examined the effects of perceptual learning (PL), external noise, and task difficulty in fine orientation discrimination with older individuals (mean age 71.73, range 65-91). Thirty-two older subjects participated in seven 1.5-h sessions conducted on separate days over a three-week period. A two-alternative forced choice procedure was used in discriminating the orientation of Gabor patches. Four training groups were examined in which the standard orientations for training were either easy or difficult and included either external noise (additive Gaussian noise) or no external noise. In addition, the transfer to an untrained orientation and noise levels were examined. An analysis of the four groups prior to training indicated no significant differences between the groups. An analysis of the change in performance post-training indicated that the degree of learning was related to task difficulty and the presence of external noise during training. In addition, measurements of pupil diameter indicated that changes in orientation discrimination were not associated with changes in retinal illuminance. These results suggest that task difficulty and training in noise are factors important for optimizing the effects of training among older individuals. |
Jenni Deveau; Gary Lovcik; Aaron R. Seitz Broad-based visual benefits from training with an integrated perceptual-learning video game Journal Article In: Vision Research, vol. 99, pp. 134–140, 2014. @article{Deveau2014, Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This type of custom video game framework, built up from psychophysical approaches, takes advantage of the benefits found from video game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning. It has great potential both as a scientific tool and as a therapy to help improve vision. |
Leandro Luigi Di Stasi; Raúl Cabestrero; Michael B. Mccamy; Francisco Ríos; Andrés Catena; Pilar Quirós; Jose A. Lopez; Carolina Saez; Stephen L. Macknik; Susana Martinez-Conde Intersaccadic drift velocity is sensitive to short-term hypobaric hypoxia Journal Article In: European Journal of Neuroscience, vol. 39, no. 8, pp. 1384–1390, 2014. @article{DiStasi2014, Hypoxia, defined as decreased availability of oxygen in the body's tissues, can lead to dyspnea, rapid pulse, syncope, visual dysfunction, mental disturbances such as delirium or euphoria, and even death. It is considered to be one of the most serious hazards during flight. Thus, early and objective detection of the physiological effects of hypoxia is critical to prevent catastrophes in civil and military aviation. The few studies that have addressed the effects of hypoxia on objective oculomotor metrics have had inconsistent results, however. Thus, the question of whether hypoxia modulates eye movement behavior remains open. Here we examined the effects of short-term hypobaric hypoxia on the velocity of saccadic eye movements and intersaccadic drift of Spanish Air Force pilots and flight engineers, compared with a control group that did not experience hypoxia. Saccadic velocity decreased with time-on-duty in both groups, in correlation with subjective fatigue. Intersaccadic drift velocity increased in the hypoxia group only, suggesting that acute hypoxia diminishes eye stability, independently of fatigue. Our results suggest that intersaccadic drift velocity could serve as a biomarker of acute hypoxia. These findings may also contribute to our understanding of the relationship between hypoxia episodes and central nervous system impairments. |
Leandro Luigi Di Stasi; Michael B. McCamy; Stephen L. Macknik; James A. Mankin; Nicole Hooft; Andrés Catena; Susana Martinez-Conde Saccadic eye movement metrics reflect surgical residents' fatigue Journal Article In: Annals of Surgery, vol. 259, no. 4, pp. 824–829, 2014. @article{DiStasi2014a, OBJECTIVE: Little is known about the effects of surgical residents' fatigue on patient safety. We monitored surgical residents' fatigue levels during their call day using (1) eye movement metrics, (2) objective measures of laparoscopic surgical performance, and (3) subjective reports based on standardized questionnaires. BACKGROUND: Prior attempts to investigate the effects of fatigue on surgical performance have suffered from methodological limitations, including inconsistent definitions and lack of objective measures of fatigue, and nonstandardized measures of surgical performance. Recent research has shown that fatigue can affect the characteristics of saccadic (fast ballistic) eye movements in nonsurgical scenarios. Here we asked whether fatigue induced by time-on-duty (∼24 hours) might affect saccadic metrics in surgical residents. Because saccadic velocity is not under voluntary control, a fatigue index based on saccadic velocity has the potential to provide an accurate and unbiased measure of the resident's fatigue level. METHODS: We measured the eye movements of members of the general surgery resident team at St. Joseph's Hospital and Medical Center (Phoenix, AZ) (6 males and 6 females), using a head-mounted video eye tracker (similar configuration to a surgical headlight), during the performance of 3 tasks: 2 simulated laparoscopic surgery tasks (peg transfer and precision cutting) and a guided saccade task, before and after their call day. Residents rated their perceived fatigue level every 3 hours throughout their 24-hour shift, using a standardized scale. RESULTS: Time-on-duty decreased saccadic velocity and increased subjective fatigue but did not affect laparoscopic performance. These results support the hypothesis that saccadic indices reflect graded changes in fatigue. They also indicate that fatigue due to prolonged time-on-duty does not result necessarily in medical error, highlighting the complicated relationship among continuity of care, patient safety, and fatigued providers. CONCLUSIONS: Our data show, for the first time, that saccadic velocity is a reliable indicator of the subjective fatigue of health care professionals during prolonged time-on-duty. These findings have potential impacts for the development of neuroergonomic tools to detect fatigue among health professionals and in the specifications of future guidelines regarding residents' duty hours. |
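Both Di Stasi et al. studies above rest on extracting saccadic peak velocity from eye position recordings. The function below is a deliberately simple, illustrative velocity-threshold detector on a toy gaze trace; it is not the algorithm used in these studies, and the sampling rate, velocity threshold, and noise level are assumptions.

```python
import numpy as np

def saccade_peak_velocities(x_deg, y_deg, fs=500.0, vel_threshold=30.0):
    """Return one peak velocity (deg/s) per episode where gaze speed exceeds
    vel_threshold. A toy threshold detector for illustration only."""
    vx = np.gradient(x_deg) * fs          # horizontal velocity in deg/s
    vy = np.gradient(y_deg) * fs          # vertical velocity in deg/s
    speed = np.hypot(vx, vy)
    above = speed > vel_threshold
    peaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                      # episode begins
        elif not flag and start is not None:
            peaks.append(speed[start:i].max())
            start = None                   # episode ends
    if start is not None:                  # episode runs to end of trace
        peaks.append(speed[start:].max())
    return np.asarray(peaks)

# Toy trace sampled at 500 Hz: fixation, a rapid 10-degree gaze shift, fixation
rng = np.random.default_rng(5)
t = np.arange(0, 1.0, 1 / 500.0)
x = np.where(t < 0.5, 0.0, 10.0) + 0.02 * rng.standard_normal(t.size)
y = np.zeros_like(x)
print("peak velocities (deg/s):", saccade_peak_velocities(x, y))
```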
Jan Drewes; Weina Zhu; Yingzhou Hu; Xintian Hu Smaller is better: Drift in gaze measurements due to pupil dynamics Journal Article In: PLoS ONE, vol. 9, no. 10, pp. e111197, 2014. @article{Drewes2014, Camera-based eye trackers are the mainstay of eye movement research and countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on gaze position as measured by camera-based eye trackers has been reported. In an attempt to improve the understanding of the magnitude and population-wise distribution of the pupil-size dependent shift in reported gaze position, we present the first collection of binocular pupil drift measurements recorded from 39 subjects. The pupil-size dependent shift varied greatly between subjects (from 0.3 to 5.2 deg of deviation, mean 2.6 deg), but also between the eyes of individual subjects (0.1 to 3.0 deg difference, mean difference 1.0 deg). We observed a wide range of drift direction, mostly downward and nasal. We demonstrate two methods to partially compensate the pupil-based shift using separate calibrations in pupil-constricted and pupil-dilated conditions, and evaluate an improved method of compensation based on individual look-up-tables, achieving up to 74% of compensation. |
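One of the compensation strategies described above uses two calibrations, one under pupil-constricted and one under pupil-dilated conditions, to estimate how reported gaze shifts with pupil size. The sketch below is an illustrative linear-interpolation version of that idea with made-up numbers; the per-subject look-up-table method mentioned in the abstract would replace the interpolation with a table of offsets indexed by binned pupil size.

```python
import numpy as np

def build_linear_correction(pupil_small, offset_small, pupil_large, offset_large):
    """Return a function mapping pupil size -> estimated gaze offset (x, y).

    offset_small / offset_large are (x, y) gaze errors measured in calibrations
    run under pupil-constricted and pupil-dilated conditions, respectively.
    """
    offset_small = np.asarray(offset_small, dtype=float)
    offset_large = np.asarray(offset_large, dtype=float)

    def correction(pupil_size):
        t = (pupil_size - pupil_small) / (pupil_large - pupil_small)
        t = np.clip(t, 0.0, 1.0)           # do not extrapolate beyond calibrations
        return offset_small + t * (offset_large - offset_small)

    return correction

# Illustrative numbers only (deg of visual angle, mm of pupil diameter)
correct = build_linear_correction(pupil_small=3.0, offset_small=[0.2, -0.1],
                                  pupil_large=7.0, offset_large=[1.8, -2.5])
raw_gaze = np.array([5.0, 1.0])            # gaze position reported by the tracker
pupil_now = 5.5                            # current pupil diameter
print("compensated gaze:", raw_gaze - correct(pupil_now))
```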
Serge O. Dumoulin; R. F. Hess; Keith A. May; Ben M. Harvey Contour extracting networks in early extrastriate cortex Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–14, 2014. @article{Dumoulin2014, Neurons in the visual cortex process a local region of visual space, but in order to adequately analyze natural images, neurons need to interact. The notion of an "association field" proposes that neurons interact to extract extended contours. Here, we identify the site and properties of contour integration mechanisms. We used functional magnetic resonance imaging (fMRI) and population receptive field (pRF) analyses. We devised pRF mapping stimuli consisting of contours. We isolated the contribution of contour integration mechanisms to the pRF by manipulating the contour content. This stimulus manipulation led to systematic changes in pRF size. Whereas a bank of Gabor filters quantitatively explains pRF size changes in V1, only V2/V3 pRF sizes match the predictions of the association field. pRF size changes in later visual field maps, hV4, LO-1, and LO-2 do not follow either prediction and are probably driven by distinct classical receptive field properties or other extraclassical integration mechanisms. These pRF changes do not follow conventional fMRI signal strength measures. Therefore, analyses of pRF changes provide a novel computational neuroimaging approach to investigating neural interactions. We interpreted these results as evidence for neural interactions along co-oriented, cocircular receptive fields in the early extrastriate visual cortex (V2/V3), consistent with the notion of a contour association field. |
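For readers unfamiliar with population receptive field (pRF) analysis, the sketch below shows the generic idea in miniature: predict a voxel's time course as the overlap between a 2D Gaussian pRF and each stimulus aperture, then search for the Gaussian center and size that best fit the data. This is a generic, illustrative pRF fit on synthetic data, not the contour-specific stimuli or analysis of the study above.

```python
import numpy as np

def predict_timecourse(stim, x0, y0, sigma, xx, yy):
    """Predicted response: overlap of a 2D Gaussian pRF with each stimulus frame."""
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return (stim * prf).sum(axis=(1, 2))

# Stimulus apertures: (n_timepoints, H, W) binary masks (illustrative only)
rng = np.random.default_rng(4)
H = W = 32
stim = (rng.random((120, H, W)) > 0.8).astype(float)
xx, yy = np.meshgrid(np.linspace(-10, 10, W), np.linspace(-10, 10, H))

# Synthetic "voxel" generated from a known pRF, plus noise
data = predict_timecourse(stim, 2.0, -3.0, 1.5, xx, yy)
data = data + 0.5 * rng.standard_normal(data.size)

# Coarse grid search for the pRF center and size that best explain the data
best = None
for x0 in np.linspace(-8, 8, 17):
    for y0 in np.linspace(-8, 8, 17):
        for sigma in (0.5, 1.0, 1.5, 2.0, 3.0):
            pred = predict_timecourse(stim, x0, y0, sigma, xx, yy)
            r = np.corrcoef(pred, data)[0, 1]
            if best is None or r > best[0]:
                best = (r, x0, y0, sigma)
print("best fit (r, x0, y0, sigma):", best)
```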
Matt J. Dunn; Tom H. Margrain; J. Margaret Woodhouse; Fergal A. Ennis; Christopher M. Harris; Jonathan T. Erichsen Grating visual acuity in infantile nystagmus in the absence of image motion Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 4, pp. 2682–2686, 2014. @article{Dunn2014, PURPOSE: Infantile nystagmus (IN) consists of largely horizontal oscillations of the eyes that usually begin shortly after birth. The condition is almost always associated with lower-than-normal visual acuity (VA). This is assumed to be at least partially due to motion blur induced by the eye movements. Here, we investigated the effect of image motion on VA. METHODS: Grating stimuli were presented, illuminated by either multiple tachistoscopic flashes (0.76 ms) to circumvent retinal image motion, or under constant illumination, to subjects with horizontal idiopathic IN and controls. A staircase procedure was used to estimate VA (by judging direction of tilt) under each condition. Orientation-specific effects were investigated by testing gratings oriented about both the horizontal and vertical axes. RESULTS: Nystagmats had poorer VA than controls under both constant and tachistoscopic illumination. Neither group showed a significant difference in VA between illumination conditions. Nystagmats performed worse for vertically oriented gratings, even under tachistoscopic conditions (P < 0.01), but there was no significant effect of orientation in controls. CONCLUSIONS: The fact that VA was not significantly affected by either illumination condition strongly suggests that the eye movements themselves do not significantly degrade VA in adults with IN. Treatments and therapies that seek to modify and/or reduce eye movements may therefore be fundamentally limited in any improvement that can be achieved with respect to VA. |
Lien Dupont; Marc Antrop; Veerle Van Eetvelde Eye-tracking analysis in landscape perception research: Influence of photograph properties and landscape characteristics Journal Article In: Landscape Research, vol. 39, no. 4, pp. 417–432, 2014. @article{Dupont2014, The European Landscape Convention emphasises the need for public participation in landscape planning and management. This demands understanding of how people perceive and observe landscapes. This can objectively be measured using eye tracking, a system recording eye movements and fixations while observing images. In this study, 23 participants were asked to observe 90 landscape photographs, representing 18 landscape character types in Flanders (Belgium) differing in degree of openness and heterogeneity. For each landscape, five types of photographs were shown, varying in view angle. This experiment design allowed testing the effect of the landscape characteristics and photograph types on the observation pattern, measured by Eye-tracking Metrics (ETM). The results show that panoramic and detail photographs are observed differently than the other types. The degree of openness and heterogeneity also seems to exert a significant influence on the observation of the landscape. |
Muriel Dysli; Nicolas Vogel; Mathias Abegg Reading performance is not affected by a prism induced increase of horizontal and vertical vergence demand Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 431, 2014. @article{Dysli2014, PURPOSE: Dyslexia is the most common developmental reading disorder that affects language skills. Latent strabismus (heterophoria) has been suspected to be causally involved. Even though phoria correction in dyslexic children is commonly applied, the evidence in support of a benefit is poor. In order to provide experimental evidence on this issue, we simulated phoria in healthy readers by modifying the vergence tone required to maintain binocular alignment. METHODS: Vergence tone was altered with prisms that were placed in front of one eye in 16 healthy subjects to induce exophoria, esophoria, or vertical phoria. Subjects read one paragraph in each condition, from which reading speed was determined. Text comprehension was tested with a forced multiple choice test. Eye movements were recorded during reading and subsequently analyzed for saccadic amplitudes, saccades per 10 letters, percentage of regressive (backward) saccades, average fixation duration, first fixation duration on a word, and gaze duration. RESULTS: An acute change of horizontal or vertical vergence tone significantly affects neither reading performance nor reading-associated eye movements. CONCLUSION: Prisms in healthy subjects fail to induce a significant change of reading performance. This finding is not compatible with a role of phoria in dyslexia. Our results contrast with the proposal to correct small-angle heterophorias in dyslexic children. |
R. Becket Ebitz; John M. Pearson; Michael L. Platt Pupil size and social vigilance in rhesus macaques Journal Article In: Frontiers in Neuroscience, vol. 8, pp. 100, 2014. @article{Ebitz2014, Complex natural environments favor the dynamic alignment of neural processing between goal-relevant stimuli and conflicting but biologically salient stimuli like social competitors or predators. The biological mechanisms that regulate dynamic changes in vigilance have not been fully elucidated. Arousal systems that ready the body to respond adaptively to threat may contribute to dynamic regulation of vigilance. Under conditions of constant luminance, pupil diameter provides a peripheral index of arousal state. Although pupil size varies with the processing of goal-relevant stimuli, it remains unclear whether pupil size also predicts attention to biologically salient objects and events like social competitors, whose presence interferes with current goals. Here we show that pupil size in rhesus macaques both reflects the biological salience of task-irrelevant social distractors and predicts vigilance for these stimuli. We measured pupil size in monkeys performing a visual orienting task in which distractors (monkey faces and phase-scrambled versions of the same images) could appear in a congruent, incongruent, or neutral position relative to a rewarded target. Baseline pupil size under constant illumination predicted distractor interference, consistent with the hypothesis that pupil-linked arousal mechanisms regulate task engagement and distractibility. Notably, pupil size also predicted enhanced vigilance for social distractors, suggesting that pupil-linked arousal may adjust the balance of processing resources between goal-relevant and biologically important stimuli. The magnitude of pupil constriction in response to distractors closely tracked distractor interference, saccade planning and the social relevance of distractors, endorsing the idea that the pupillary light response is modulated by attention. These findings indicate that pupil size indexes dynamic changes in attention evoked by both the social environment and arousal. |
Yoshiko Yabe; Melvyn A. Goodale; Hiroaki Shigemasu Temporal order judgments are disrupted more by reflexive than by voluntary saccades Journal Article In: Journal of Neurophysiology, vol. 111, no. 10, pp. 2103–2108, 2014. @article{Yabe2014, We do not always perceive the sequence of events as they actually unfold. For example, when two events occur before a rapid eye movement (saccade), the interval between them is often perceived as shorter than it really is and the order of those events can be sometimes reversed (Morrone MC, Ross J, Burr DC. Nat Neurosci 8: 950-954, 2005). In the present article we show that these misperceptions of the temporal order of events critically depend on whether the saccade is reflexive or voluntary. In the first experiment, participants judged the temporal order of two visual stimuli that were presented one after the other just before a reflexive or voluntary saccadic eye movement. In the reflexive saccade condition, participants moved their eyes to a target that suddenly appeared. In the voluntary saccade condition, participants moved their eyes to a target that was present already. Similarly to the above-cited study, we found that the temporal order of events was often misjudged just before a reflexive saccade to a suddenly appearing target. However, when people made a voluntary saccade to a target that was already present, there was a significant reduction in the probability of misjudging the temporal order of the same events. In the second experiment, the reduction was seen in a memory-delay task. It is likely that the nature of the motor command and its origin determine how time is perceived during the moments preceding the motor act. |
Daniel L. K. Yamins; Ha Hong; Charles F. Cadieu; Ethan A. Solomon; Darren Seibert; James J. DiCarlo Performance-optimized hierarchical models predict neural responses in higher visual cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 23, pp. 8619–8624, 2014. @article{Yamins2014, The ventral visual stream underlies key human visual object recognition abilities. However, neural encoding in the higher areas of the ventral stream remains poorly understood. Here, we describe a modeling approach that yields a quantitatively accurate model of inferior temporal (IT) cortex, the highest ventral cortical area. Using high-throughput computational techniques, we discovered that, within a class of biologically plausible hierarchical neural network models, there is a strong correlation between a model's categorization performance and its ability to predict individual IT neural unit response data. To pursue this idea, we then identified a high-performing neural network that matches human performance on a range of recognition tasks. Critically, even though we did not constrain this model to match neural data, its top output layer turns out to be highly predictive of IT spiking responses to complex naturalistic images at both the single site and population levels. Moreover, the model's intermediate layers are highly predictive of neural responses in the V4 cortex, a midlevel visual area that provides the dominant cortical input to IT. These results show that performance optimization - applied in a biologically appropriate model class - can be used to build quantitative predictive models of neural processing. |
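The core analysis above asks how well a model layer's features linearly predict neural site responses on held-out images. The sketch below illustrates that general recipe with ridge regression on synthetic data; the feature dimensions, regularization, and correlation-based score are assumptions standing in for the paper's actual models, recordings, and fitting procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder data: model-layer features and neural site responses to the same images
rng = np.random.default_rng(3)
n_images, n_features, n_sites = 600, 256, 100
features = rng.standard_normal((n_images, n_features))          # top model layer
true_map = 0.1 * rng.standard_normal((n_features, n_sites))
responses = features @ true_map + 0.5 * rng.standard_normal((n_images, n_sites))

# Fit a linear readout from model features to each neural site, then score
# predictivity as the correlation between predicted and held-out responses.
X_tr, X_te, y_tr, y_te = train_test_split(features, responses,
                                          test_size=0.25, random_state=0)
readout = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = readout.predict(X_te)
site_r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_sites)]
print(f"median held-out correlation across sites: {np.median(site_r):.2f}")
```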
Ming Yan; Yingyi Luo; Albrecht W. Inhoff Syllable articulation influences foveal and parafoveal processing of words during the silent reading of Chinese sentences Journal Article In: Journal of Memory and Language, vol. 75, pp. 93–103, 2014. @article{Yan2014, The current study examined effects of syllable articulation on eye movements during the silent reading of Chinese sentences, which contained two types of two-character target words whose second characters were subject to dialect-specific variation. In one condition the second syllable was articulated with a neutral tone for northern-dialect Chinese speakers and with a full tone for southern-dialect Chinese speakers (neutral-tone target words) and in the other condition the second syllable was articulated with a full tone irrespective of readers' dialect type (full-tone target words). Native speakers of northern and southern Chinese dialects were recruited in Experiment 1 to examine the effect of dialect-specific articulation on silent reading. Recordings of their eye movements revealed shorter viewing durations for neutral- than for full-tone target words only for speakers of northern but not for southern dialects, indicating that dialect-specific articulation of syllabic tone influenced visual word recognition. Experiment 2 replicated the syllabic tone effect for speakers of northern dialects, and the use of gaze-contingent display changes further revealed that these readers processed an upcoming parafoveal word less effectively when a neutral- than when a full-tone target was fixated. Shorter viewing duration for neutral-tone words thus cannot be attributed to their easier lexical processing; instead, tonal effects appear to reflect Chinese readers' simulated articulation of to-be-recognized words during silent reading. |
Ming Yan; Wei Zhou; Hua Shu; Rizwangul Yusupu; Dongxia Miao; André Krügel; Reinhold Kliegl Eye movements guided by morphological structure: Evidence from the Uighur language Journal Article In: Cognition, vol. 132, no. 2, pp. 181–215, 2014. @article{Yan2014a, It is generally accepted that low-level features (e.g., inter-word spaces) are responsible for saccade-target selection in eye-movement control during reading. In two experiments using Uighur script known for its rich suffixes, we demonstrate that, in addition to word length and launch site, the number of suffixes influences initial landing positions. We also demonstrate an influence of word frequency. These results are difficult to explain purely by low-level guidance of eye movements and indicate that due to properties specific to Uighur script low-level visual information and high-level information such as morphological structure of parafoveal words jointly influence saccade programming. |
Jingwen Yang; Frederic Hamelin; Dominique Sauter Fault detection observer design using time and frequency domain specifications Journal Article In: IFAC Proceedings Volumes, vol. 19, no. 1, pp. 8564–8569, 2014. @article{Yang2014, Several scholars have proposed personalization models based on product variety breadth and the intensity of customer-firm interaction with a focus on marketing strategies ranging from basic product versioning to customerization and reverse marketing. However, some studies have shown that the explosion of product variety may generate information overload. Moreover, customers are highly heterogeneous in willingness and ability to interact with firms in personalization processes. This often results in consumer confusion and wasteful investments. To address this problem, we propose a conceptual framework of e-customer profiling for interactive personalization by distinguishing content (that is, expected customer benefits) and process (that is, expected degree of interaction) issues. The framework focuses on four general dimensions suggested by previous research as significant drivers of online customer heterogeneity: VALUE, KNOWLEDGE, ORIENTATION, and RELATIONSHIP QUALITY. We also present a preliminary test of the framework and derive directions for customer relationship management and future research. |
Jinmian Yang; Nan Li; Suiping Wang; Timothy J. Slattery; Keith Rayner Encoding the target or the plausible preview word? The nature of the plausibility preview benefit in reading Chinese Journal Article In: Visual Cognition, vol. 22, no. 2, pp. 193–213, 2014. @article{Yang2014a, Previous studies have shown that a plausible preview word can facilitate the processing of a target word as compared to an implausible preview word (a plausibility preview benefit effect) when reading Chinese (Yang, Wang, Tong, & Rayner, 2012; Yang, 2013). Regarding the nature of this effect, it is possible that readers processed the meaning of the plausible preview word and did not actually encode the target word (given that the parafoveal preview word lies close to the fovea). The current experiment examined this possibility with three conditions wherein readers received a preview of a target word that was either (1) identical to the target word (identical preview), (2) a plausible continuation of the pre-target text, but the post-target text in the sentence was incompatible with it (initially plausible preview), or (3) not a plausible continuation of the pre-target text, nor compatible with the post-target text (implausible preview). Gaze durations on target words were longer in the initially plausible condition than the identical condition. Overall, the results showed a typical preview benefit, but also implied that readers did not encode the initially plausible preview. Also, a plausibility preview benefit was replicated: gaze durations were longer with implausible previews than the initially plausible ones. Furthermore, late eye movement measures did not reveal differences between the initially plausible and the implausible preview conditions, which argues against the possibility of misreading the plausible preview word as the target word. In sum, these results suggest that a plausible preview word provides benefit in processing the target word as compared to an implausible preview word, and this benefit is only present in early but not late eye movement measures. |
Xiaohong Yang; Lijing Chen; Yufang Yang The effect of discourse structure on depth of semantic integration in reading Journal Article In: Memory & Cognition, vol. 42, no. 2, pp. 325–339, 2014. @article{Yang2014b, A coherent discourse exhibits certain structures in that subunits of discourses are related to one another in various ways and in that subunits that contribute to the same discourse purpose are joined to create a larger unit so as to produce an effect on the reader. To date, this crucial aspect of discourse has been largely neglected in the psycholinguistic literature. In two experiments, we examined whether semantic integration in discourse context was influenced by the difference of discourse structure. Readers read discourses in which the last sentence was locally congruent but either semantically congruent or incongruent when interpreted with the preceding sentence. Furthermore, the last sentence was either in the same discourse unit or not in the same discourse unit as the preceding sentence, depending on whether they shared the same discourse purpose. Results from self-paced reading (Experiment 1) and eye tracking (Experiment 2) showed that discourse-incongruous words were read longer than discourse-congruous words only when the critical sentence and the preceding sentence were in the same discourse unit, but not when they belonged to different discourse units. These results establish discourse structure as a new factor in semantic integration and suggest that discourse effects depend both on the content of what is being said and on the way that the contents are organized. |
Shanna C. Yeung; Cristina Rubino; Jaya Viswanathan; Jason J. S. Barton The inter-trial effect of prepared but not executed antisaccades Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3699–3705, 2014. @article{Yeung2014, A preceding antisaccade increases the latency of the saccade in the next trial. Whether this inter-trial effect is generated by the preparation or the execution of the antisaccade is not certain. Our goal was to examine the inter-trial effects from trials on which subjects prepared an antisaccade but did not make one. We tested 15 subjects on blocks of randomly ordered prosaccades and antisaccades. An instructional cue at fixation indicated whether a prosaccade or antisaccade was required, with the target appearing 2 s later. On 20 % of antisaccade trials, the target did not appear (prepared-only antisaccade trials). We analyzed the latencies of all correct prosaccades or antisaccades preceded by correctly executed trials. The latencies of prosaccade trials were 15 ms shorter if they were preceded by prosaccades than if the prior trial was an antisaccade. Prosaccades preceded by trials on which antisaccades were cued but not executed also showed prolonged latencies that were equivalent to those preceded by executed antisaccades. We conclude that the inter-trial effects from a prior antisaccade are generated by its preparation rather than its execution. This may reflect persistence of pre-target preparatory activity from the prior trial to affect that of the next trial in structures like the superior colliculus and frontal eye field. |
Peng Zhou; Stephen Crain; Likan Zhan Grammatical aspect and event recognition in children's online sentence comprehension Journal Article In: Cognition, vol. 133, no. 1, pp. 262–276, 2014. @article{Zhou2014, This study investigated whether or not the temporal information encoded in aspectual morphemes can be used immediately by young children to facilitate event recognition during online sentence comprehension. We focused on the contrast between two grammatical aspectual morphemes in Mandarin Chinese, the perfective morpheme -le and the (imperfective) durative morpheme -zhe. The perfective morpheme -le is often used to indicate that an event has been completed, whereas the durative morpheme -zhe indicates that an event is still in progress or continuing. We were interested to see whether young children are able to use the temporal reference encoded in the two aspectual morphemes (i.e., completed versus ongoing) as rapidly as adults to facilitate event recognition during online sentence comprehension. Using the visual world eye-tracking paradigm, we tested 34 Mandarin-speaking adults and 99 Mandarin-speaking children (35 three-year-olds, 32 four-year-olds and 32 five-year-olds). On each trial, participants were presented with spoken sentences containing either of the two aspectual morphemes while viewing a visual image containing two pictures, one representing a completed event and one representing an ongoing event. Participants' eye movements were recorded from the onset of the spoken sentences. The results show that both the adults and the three age groups of children exhibited a facilitatory effect triggered by the aspectual morpheme: hearing the perfective morpheme -le triggered more eye movements to the completed event area, whereas hearing the durative morpheme -zhe triggered more eye movements to the ongoing event area. This effect occurred immediately after the onset of the aspectual morpheme, both for the adults and the three groups of children. This is evidence that young children are able to use the temporal information encoded in aspectual morphemes as rapidly as adults to facilitate event recognition. Children's eye movement patterns reflect a rapid mapping of grammatical aspect onto the temporal structures of events depicted in the visual scene. |