All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014
Joaquin Navajas; Mariano Sigman; Juan E. Kamienkowski Dynamics of visibility, confidence, and choice during eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1213–1227, 2014. We study the dynamics of objective and subjective measures of visibility and choice in brief presentations occurring within a fixation during free eye-movements. We show that brief presentations yield homogeneous levels of performance in a window that extends almost throughout the entire fixation. Instead, confidence judgments vary for presentations occurring at different moments of the fixations. When the target occurs close to the onset of the fixation, it is reported accurately but with lower values of confidence; when it occurs close to the end of the fixation, it is reported with high confidence (Experiments 1 and 2). Consistently, in experiments in which participants can freely choose to report items, we observe a report bias toward the end of the fixation, where the maximum of confidence occurs for experiments with a single target (Experiments 3 and 4). Hence, these results suggest that confidence is not merely a measure of accumulated stimulus energy but instead varies reflecting an endogenous integration process by which later stimuli are assigned greater confidence.
Karly N. Neath; Roxane J. Itier Facial expression discrimination varies with presentation time but not with fixation on features: A backward masking study using eye-tracking Journal Article In: Cognition and Emotion, vol. 28, no. 1, pp. 115–131, 2014. The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions presented for 50 and 100 ms. While performance was not improved by the use of expression-specific diagnostic facial features, performance increased with presentation time for all emotions. Results support the idea of an integration of facial features (holistic processing) varying as a function of emotion and presentation time.
Dustin Nelson; Dan J. Graham; Lisa Harnack An objective measure of nutrition facts panel usage and nutrient quality of food choice Journal Article In: Journal of Nutrition Education and Behavior, vol. 46, no. 6, pp. 589–594, 2014. Objective: The relationship between time viewing nutrition information and nutrient quality of foods chosen in a food selection task was objectively evaluated through direct observation using an eye-tracking camera. Methods: A total of 202 participants' food choices were scored for nutrient density. Multivariate linear regression analysis was conducted with mean nutrient density of foods selected regressed on mean label viewing time and participants' sociodemographic characteristics. Results: Label viewing time was not significantly associated with nutrient density food score. A significant relationship emerged between the covariate, age, and mean nutrient density food score such that mean nutrient density scores were higher for older participants compared with younger ones (P = .04). Foods selected by males had a higher mean nutrient density score than foods selected by females (P = .03). Conclusions and Implications: Findings suggest that those who spend more time viewing nutrition facts panels during a single shopping trip may not select more nutritious foods.
Dan Nemrodov; Thomas Anderson; Frank F. Preston; Roxane J. Itier Early sensitivity for eyes within faces: A new neuronal account of holistic and featural processing Journal Article In: NeuroImage, vol. 97, pp. 81–94, 2014. Eyes are central to face processing; however, their role in early face encoding as reflected by the N170 ERP component is unclear. Using eye tracking to enforce fixation on specific facial features, we found that the N170 was larger for fixation on the eyes compared to fixation on the forehead, nasion, nose or mouth, which all yielded similar amplitudes. This eye sensitivity was seen in both upright and inverted faces and was lost in eyeless faces, demonstrating it was due to the presence of eyes at fovea. Upright eyeless faces elicited the largest N170 at nose fixation. Importantly, the N170 face inversion effect (FIE) was strongly attenuated in eyeless faces when fixation was on the eyes but was less attenuated for nose fixation and was normal when fixation was on the mouth. These results suggest the impact of eye removal on the N170 FIE is a function of the angular distance between the fixated feature and the eye location. We propose the Lateral Inhibition, Face Template and Eye Detector based (LIFTED) model which accounts for all the present N170 results including the FIE and its interaction with eye removal. Although eyes elicit the largest N170 response, reflecting the activity of an eye detector, the processing of upright faces is holistic and entails an inhibitory mechanism from neurons coding parafoveal information onto neurons coding foveal information. The LIFTED model provides a neuronal account of holistic and featural processing involved in upright and inverted faces and offers precise predictions for further testing.
Daniel P. Newman; Gerard M. Loughnane; Rafael Abe; Marco T. R. Zoratti; Ana C. P. Martins; Petra C. Bogert; Simon P. Kelly; Redmond G. O'Connell; Mark A. Bellgrove Differential shift in spatial bias over time depends on observers' initial bias: Observer subtypes, or regression to the mean? Journal Article In: Neuropsychologia, vol. 64, pp. 33–40, 2014. Healthy subjects typically exhibit a subtle bias of visuospatial attention favouring left space that is commonly termed 'pseudoneglect'. This bias is attenuated, or shifted rightwards, with decreasing alertness over time, consistent with theoretical models proposing that pseudoneglect is a result of the right hemisphere's dominance in regulating attention. Although this 'time-on-task effect' for spatial bias is observed when averaging across whole samples of healthy participants, Benwell, Thut, Learmonth, and Harvey (Neuropsychologia, 51(13), 2747–2756, 2013) recently presented evidence that the direction and magnitude of bias exhibited by the participant early in the task (left biased, no bias, or right biased) were stable traits that predicted the direction of the subsequent time-on-task shift in spatial bias. That is, the spatial bias of participants who were initially left biased shifted in a rightward direction with time, whereas that of participants who were initially right biased shifted in a leftward direction. If valid, the data of Benwell et al. are potentially important and may demand a re-evaluation of current models of the neural networks governing spatial attention. Here we use two novel spatial attention tasks in an attempt to confirm the results of Benwell et al. We show that rather than being indicative of true participant subtypes, these data patterns are likely driven, at least in part, by 'regression towards the mean' arising from the analysis method employed. Although evidence supports the contention that trait-like individual differences in spatial bias exist within the healthy population, no clear evidence is yet available for participant/observer subtypes in the direction of time-on-task shift in spatial biases.
Khanh Vy Nguyen; Katherine S. Binder; Carolyn Nemier; Scott P. Ardoin Gotcha! Catching kids during mindless reading Journal Article In: Scientific Studies of Reading, vol. 18, no. 4, pp. 274–290, 2014. The purpose of the current study was to examine the mindless reading behavior of children. Across two studies, 2nd-grade students read passages while their eye movements were monitored. Trained raters then identified mindless reading behaviors from the eye movement records. Several important findings emerged. We were able to reliably identify mindless reading behavior in children using eye-tracking methodology, which was characterized by shorter gaze durations and total time, more skipping, and in general a more erratic reading pattern than on-task reading behavior. On the other hand, on-task reading behavior was characterized by an increase in fixations and regressions, especially intraword regressions. Word frequency effects were attenuated during mindless reading. In addition, the children who engaged in mindless reading had weaker reading achievement profiles compared to children who read the entire passage.
Krzysztof Templin; Piotr Didyk; Karol Myszkowski; Mohamed M. Hefeeda; Hans-Peter Seidel; Wojciech Matusik Modeling and optimizing eye vergence response to stereoscopic cuts Journal Article In: ACM Transactions on Graphics, vol. 33, no. 4, pp. 1–8, 2014. Sudden temporal depth changes, such as cuts that are introduced by video edits, can significantly degrade the quality of stereoscopic content. Since usually not encountered in the real world, they are very challenging for the audience. This is because the eye vergence has to constantly adapt to new disparities in spite of conflicting accommodation requirements. Such rapid disparity changes may lead to confusion, reduced understanding of the scene, and lower overall attractiveness of the content. In most cases the problem cannot be solved by simply matching the depth around the transition, as this would require flattening the scene completely. To better understand this limitation of the human visual system, we conducted a series of eye-tracking experiments. The data obtained allowed us to derive and evaluate a model describing adaptation of vergence to disparity changes on a stereoscopic display. Besides computing user-specific models, we also estimated parameters of an average observer model. This enables a range of strategies for minimizing the adaptation time in the audience.
Antonia F. Ten Brink; Tanja C. W. Nijboer; Nathan Van der Stoep; Stefan Van der Stigchel The influence of vertically and horizontally aligned visual distractors on aurally guided saccadic eye movements Journal Article In: Experimental Brain Research, vol. 232, no. 4, pp. 1357–1366, 2014. Eye movements towards a new target can be guided or disrupted by input from multiple modalities. The degree of oculomotor competition evoked by a distractor depends on both distractor and target properties, such as distractor salience or certainty regarding the target location. The ability to localize the target is particularly important when studying saccades made towards auditory targets, since determination of elevation and azimuth of a sound are based on different processes, and these processes may be affected independently by a distractor. We investigated the effects of a visual distractor on saccadic eye movements made to an auditory target in a two-dimensional plane. Results showed that the competition evoked by a vertical visual distractor was stronger compared with a horizontal visual distractor. The eye movements that were not captured by the vertical visual distractor were still influenced by it: a deviation of endpoints was seen in the direction of the visual distractor. Furthermore, the interference evoked by a high-contrast visual distractor was stronger compared with low-contrast visual stimuli, which was reflected by a faster initiation of an eye movement towards the high-contrast visual distractor and a stronger shift of endpoints in the direction of the high-contrast visual distractor. Together, these findings show that the influence of a visual distractor on aurally guided eye movements depends strongly on its location relative to the target, and to a lesser extent, on stimulus contrast.
Paul M. J. Thomas; Margaret C. Jackson; Jane E. Raymond A threatening face in the crowd: Effects of emotional singletons on visual working memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 253–263, 2014. Faces with threatening versus positive expressions are better remembered in visual working memory (WM) and are especially effective at capturing attention. We asked how the presence of a single threatening or happy face affects WM for concurrently viewed faces with neutral expressions. If threat captures attention and attention determines WM, then a WM performance cost for neutral faces should be evident. However, if threat boosts processing in an object-specific, noncompetitive manner, then no such costs should be produced. Participants viewed three neutral and one angry or happy face for 2 s. Face recognition was tested 1 s later. Although WM was better for singletons than nonsingletons and better for angry versus happy singletons, WM for neutral faces remained unaffected by either singleton. These results, combined with eye movement and response time analyses, argue against a selective attention account of threat-based benefits to WM and support object-specific enhancement via threat processing.
Martha M. Shiell; François Champoux; Robert J. Zatorre Enhancement of visual motion detection thresholds in early deaf people Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e90498, 2014. In deaf people, the auditory cortex can reorganize to support visual motion processing. Although this cross-modal reorganization has long been thought to subserve enhanced visual abilities, previous research has been unsuccessful at identifying behavioural enhancements specific to motion processing. Recently, research with congenitally deaf cats has uncovered an enhancement for visual motion detection. Our goal was to test for a similar difference between deaf and hearing people. We tested 16 early and profoundly deaf participants and 20 hearing controls. Participants completed a visual motion detection task, in which they were asked to determine which of two sinusoidal gratings was moving. The speed of the moving grating varied according to an adaptive staircase procedure, allowing us to determine the lowest speed necessary for participants to detect motion. Consistent with previous research in deaf cats, the deaf group had lower motion detection thresholds than the hearing. This finding supports the proposal that cross-modal reorganization after sensory deprivation will occur for supramodal sensory features and preserve the output functions.
Yoshihito Shigihara; Semir Zeki Parallel processing of face and house stimuli by V1 and specialized visual areas: A magnetoencephalographic (MEG) study Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 901, 2014. We used easily distinguishable stimuli of faces and houses constituted from straight lines, with the aim of learning whether they activate V1 on the one hand, and the specialized areas that are critical for the processing of faces and houses on the other, with similar latencies. Eighteen subjects took part in the experiment, which used magnetoencephalography (MEG) coupled to analytical methods to detect the time course of the earliest responses which these stimuli provoke in these cortical areas. Both categories of stimuli activated V1 and areas of the visual cortex outside it at around 40 ms after stimulus onset, and the amplitude elicited by face stimuli was significantly larger than that elicited by house stimuli. These results suggest that "low-level" and "high-level" features of form stimuli are processed in parallel by V1 and visual areas outside it. Taken together with our previous results on the processing of simple geometric forms (Shigihara and Zeki, 2013; Shigihara and Zeki, 2014), the present ones reinforce the conclusion that parallel processing is an important component in the strategy used by the brain to process and construct forms.
Alisha Siebold; Mieke Donk Reinstating salience effects over time: The influence of stimulus changes on visual selection behavior over a sequence of eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 6, pp. 1655–1670, 2014. Recently, we showed that salience affects initial saccades only in a static stimulus environment; subsequent saccades were unaffected by salience but, instead, were directed in line with task requirements (Siebold, van Zoest, & Donk, PLoS ONE 6(9): e23552, 2011). Yet multiple studies have shown that people tend to fixate salient regions more often than nonsalient ones when they are looking at images, in particular when salience is defined by dynamic changes. The goal of the present study was to investigate how oculomotor selection beyond an initial saccade is affected by salience as derived from changing, as opposed to static, stimuli. Observers were presented with displays containing two fixation dots, one target, one distractor, and multiple background elements. They were instructed to fixate on one of the fixation dots and make a speeded eye movement to the target, either directly or preceded by an initial eye movement to the other fixation dot. In Experiment 1, target and distractor differed in orientation contrast relative to the background, such that one was more salient than the other, whereas in Experiments 2 and 3, the orientation contrast between the two elements was identical. Here, salience was implemented by a continuous luminance flicker or by a difference in luminance contrast, respectively, which was presented either simultaneously with display onset or contingent upon the first saccade. The results showed that in all experiments, initial saccades were strongly guided by salience, whereas second saccades were consistently goal directed if the salience manipulation was present from display onset. However, if the flicker or luminance contrast was presented contingent upon the initial saccade, salience effects were reinstated. We argue that salience effects are short-lived but can be reinstated if new information is presented, even when this occurs during an eye movement.
Alisha Siebold; Mieke Donk On the importance of relative salience: Comparing overt selection behavior of single versus simultaneously presented stimuli Journal Article In: PLoS ONE, vol. 9, no. 6, pp. e99707, 2014. The goal of the current study was to investigate time-dependent effects of the number of targets presented and its interaction with stimulus salience on oculomotor selection performance. To this end, observers were asked to make a speeded eye movement to a target orientation singleton embedded in a homogeneous background of vertically oriented lines. In Experiment 1, either one or two physically identical targets were presented, whereas in Experiment 2 an additional orientation-based salience manipulation was performed. The results showed that the probability of a singleton being available for selection is reduced in the presence of an identical singleton (Experiment 1) and that this effect is modulated by the salience of the other singleton (Experiment 2). While the absolute orientation contrast of a target relative to the background contributed to the probability that it is available for selection, the crucial factor affecting selection was the relative salience between singletons. These findings are incompatible with a processing speed account, which highlights the importance of visibility and claims that a certain singleton identity has a unique speed with which it can be processed. In contrast, the finding that the number of targets presented affected a target's availability suggests an important role of the broader display context in determining oculomotor selection performance.
Eva Siegenthaler; Francisco M. Costela; Michael B. McCamy; Leandro Luigi Di Stasi; Jorge Otero-Millan; Andreas Sonderegger; Rudolf Groner; Stephen L. Macknik; Susana Martinez-Conde Task difficulty in mental arithmetic affects microsaccadic rates and magnitudes Journal Article In: European Journal of Neuroscience, vol. 39, no. 2, pp. 287–294, 2014. Microsaccades are involuntary, small-magnitude saccadic eye movements that occur during attempted visual fixation. Recent research has found that attention can modulate microsaccade dynamics, but few studies have addressed the effects of task difficulty on microsaccade parameters, and those have obtained contradictory results. Further, no study to date has investigated the influence of task difficulty on microsaccade production during the performance of non-visual tasks. Thus, the effects of task difficulty on microsaccades, isolated from sensory modality, remain unclear. Here we investigated the effects of task difficulty on microsaccades during the performance of a non-visual, mental arithmetic task with two levels of complexity. We found that microsaccade rates decreased and microsaccade magnitudes increased with increased task difficulty. We propose that changes in microsaccade rates and magnitudes with task difficulty are mediated by the effects of varying attentional inputs on the rostral superior colliculus activity map.
Heida M. Sigurdardottir; Suzanne M. Michalak; David L. Sheinberg Shape beyond recognition: Form-derived directionality and its effects on visual attention and motion perception Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 1, pp. 434–454, 2014. The shape of an object restricts its movements and therefore its future location. The rules governing selective sampling of the environment likely incorporate any available data, including shape, that provide information about where important things are going to be in the near future so that the object can be located, tracked, and sampled for information. We asked people to assess in which direction several novel objects pointed or directed them. With independent groups of people, we investigated whether their attention and sense of motion were systematically biased in this direction. Our work shows that nearly any novel object has intrinsic directionality derived from its shape. This shape information is swiftly and automatically incorporated into the allocation of overt and covert visual orienting and the detection of motion, processes that themselves are inherently directional. The observed connection between form and space suggests that shape processing goes beyond recognition alone and may help explain why shape is a relevant dimension throughout the visual brain.
J. D. Silvis; Stefan Van der Stigchel How memory mechanisms are a key component in the guidance of our eye movements: Evidence from the global effect Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 2, pp. 357–362, 2014. Investigating eye movements has been a promising approach to uncover the role of visual working memory in early attentional processes. Prior research has already demonstrated that eye movements in search tasks are more easily drawn toward stimuli that show similarities to working memory content, as compared with neutral stimuli. Previous saccade tasks, however, have always required a selection process, thereby automatically recruiting working memory. The present study was an attempt to confirm the role of working memory in oculomotor selection in an unbiased saccade task that rendered memory mechanisms irrelevant. Participants executed a saccade in a display with two elements, without any instruction to aim for one particular element. The results show that when two objects appear simultaneously, a working memory match attracts the first saccade more profoundly than do mismatch objects, an effect that was present throughout the saccade latency distribution. These findings demonstrate that memory plays a fundamental biasing role in the earliest competitive processes in the selection of visual objects, even when working memory is not recruited during selection.
Jeroen D. Silvis; Mieke Donk The effects of saccade-contingent changes on oculomotor capture: Salience is important even beyond the first oculomotor response Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 6, pp. 1803–1814, 2014. Whenever a novel scene is presented, visual salience merely plays a transient role in oculomotor selection. Unique stimulus properties, such as a distinct and, thereby, salient color, affect the oculomotor response only when observers react relatively quickly. For slower responses, or for consecutive ones, salience-driven effects appear completely absent. To date, however, the circumstances that may reinstate the effects of salience over multiple eye movements are still unclear. Recent research shows that changes to a scene can attract gaze, even when these changes occur without a transient signal (i.e., during an eye movement). The aim of the present study was to investigate whether this capture is mediated through salience-driven or memory-guided processes. In three experiments, we examined how the nature of a change in salience that occurred during an eye movement affected consecutive saccades. The results demonstrate that the oculomotor system is exclusively susceptible to increases in salience from one fixation to the next, but only when these increases result in a uniquely high salience level. This suggests that even in the case of a saccade-contingent change, oculomotor selection behavior can be affected by salience-driven mechanisms, possibly to allow the automatic detection of uniquely distinct objects at any moment. The results and implications will be discussed in relation to current views on visual selection.
Jedediah M. Singer; Gabriel Kreiman Short temporal asynchrony disrupts visual object recognition Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–14, 2014. Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition.
Chandan Singh; Dhananjay Yadav; Jinho Lee Reader comprehension ranking by monitoring eye gaze using eye tracker Journal Article In: International Journal of Intelligent Systems Technologies and Applications, vol. 13, no. 4, pp. 294–307, 2014. This paper concentrates on measuring a reader's comprehension ability by calculating a reader ranking based on the correct answer lines recorded by an eye gaze tracker (mounted near the reader's eye) and the number of correct answers the reader gives. Time is measured to find the answer line (page time T1) and time spent on the answer line (score time T2). The ratio (T2/T1) of these two time parameters plays a vital role in evaluating the reader's rank. A score is calculated only if the reader reads the answer line(s) and then gives the correct answer; otherwise the score for that question is zero. Finally, the reader receives a score and a rank among the existing readers on the basis of the time ratio and the correctness of the answers.
Nicholas D. Smith; Fiona C. Glen; Vera M. Mönter; David P. Crabb Using eye tracking to assess reading performance in patients with glaucoma: A within-person study Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–10, 2014. Reading is often cited as a demanding task for patients with glaucomatous visual field (VF) loss, yet reading speed varies widely between patients and does not appear to be predicted by standard visual function measures. This within-person study aimed to investigate reading duration and eye movements when reading short passages of text in a patient's worse eye (most VF damage) when compared to their better eye (least VF damage). Reading duration and saccade rate were significantly different on average in the worse eye when compared to the better eye (P < 0.001) in 14 patients with glaucoma that had median (interquartile range) between-eye difference in mean deviation (MD; a standard clinical measure for VF loss) of 9.8 (8.3 to 14.8) dB; differences were not related to the size of the difference in MD between eyes. Patients with a more pronounced effect of longer reading duration on their worse eye made a larger proportion of "regressions" (backward saccades) and "unknown" EMs (not adhering to expected reading patterns) when reading with the worse eye when compared to the better eye. A between-eye study in patients with asymmetric disease, coupled with eye tracking, provides a useful experimental design for exploring reading performance in glaucoma.
Steven D. Stagg; Karina J. Linnell; Pamela Heaton Investigating eye movement patterns, language, and social ability in children with autism spectrum disorder Journal Article In: Development and Psychopathology, vol. 26, no. 2, pp. 529–537, 2014. Although all intellectually high-functioning children with autism spectrum disorder (ASD) display core social and communication deficits, some develop language within a normative timescale and others experience significant delays and subsequent language impairment. Early attention to social stimuli plays an important role in the emergence of language, and reduced attention to faces has been documented in infants later diagnosed with ASD. We investigated the extent to which patterns of attention to social stimuli would differentiate early and late language onset groups. Children with ASD (mean age = 10 years) differing on language onset timing (late/normal) and a typically developing comparison group completed a task in which visual attention to interacting and noninteracting human figures was mapped using eye tracking. Correlations on visual attention data and results from tests measuring current social and language ability were conducted. Patterns of visual attention did not distinguish typically developing children and ASD children with normal language onset. Children with ASD and late language onset showed significantly reduced attention to salient social stimuli. Associations between current language ability and social attention were observed. Delay in language onset is associated with current language skills as well as with specific eye-tracking patterns.
Beth A. Stankevich; Joy J. Geng Reward associations and spatial probabilities produce additive effects on attentional selection Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2315–2325, 2014. @article{Stankevich2014, Recent studies have shown that reward history acts as a powerful attentional bias, even overcoming top-down goals. This has led to the suggestion that rewards belong to a class of attentional cues based on selection history, which are defined by past outcomes with a stimulus feature. Selection history is thought to be separate from traditional attentional cues based on physical salience and voluntary goals, but there is relatively little understanding of how selection history operates as a mechanism of attentional selection. Critically, it has yet to be understood how multiple sources of selection history interact when presented simultaneously. For example, it may be easier to find something we like if it also appears in a predictable location. We therefore pitted spatial probabilities against reward associations and found that the two sources of information had independent and additive effects. Additionally, the strength of the two sources in biasing attentional selection could be equated. In contrast, while a nonpredictive but perceptually salient cue also exhibited independent and additive effects with reward, reward associations dominated the perceptually salient cue at all levels. Our data indicate that reward associations are part of a class of particularly potent attentional cues that guide behavior through learned expectations. However, selection history should not be thought of as a unitary concept but should be understood as a collection of independent sources of information that bias attention in a similar fashion. |
Maria Staudte; Matthew W. Crocker; Alexis Heloir; Michael Kipp The influence of speaker gaze on listener comprehension: Contrasting visual versus intentional accounts Journal Article In: Cognition, vol. 133, no. 1, pp. 317–328, 2014. @article{Staudte2014, Previous research has shown that listeners follow speaker gaze to mentioned objects in a shared environment to ground referring expressions, both for human and robot speakers. What is less clear is whether the benefit of speaker gaze is due to the inference of referential intentions (Staudte and Crocker, 2011) or simply the (reflexive) shifts in visual attention. That is, is gaze special in how it affects simultaneous utterance comprehension? In four eye-tracking studies we directly contrast speech-aligned speaker gaze of a virtual agent with a non-gaze visual cue (arrow). Our findings show that both cues similarly direct listeners' attention and that listeners can benefit in utterance comprehension from both cues. Only when they are similarly precise, however, does this equality extend to incongruent cueing sequences: that is, listeners can benefit from gaze as well as arrows even when the cue sequence does not match the concurrent sequence of spoken referents. The results suggest that listeners are able to learn a counter-predictive mapping of both cues to the sequence of referents. Thus, gaze and arrows can in principle be applied with equal flexibility and efficiency during language comprehension. |
Nicholas A. Steinmetz; Tirin Moore Eye movement preparation modulates neuronal responses in area V4 when dissociated from attentional demands Journal Article In: Neuron, vol. 83, no. 2, pp. 496–506, 2014. @article{Steinmetz2014, We examined whether the preparation of saccadic eye movements, when behaviorally dissociated from covert attention, modulates activity within visual cortex. We measured single-neuron and local field potential (LFP) responses to visual stimuli in area V4 while monkeys covertly attended a stimulus at one location and prepared saccades to a potential target at another. In spite of the irrelevance of visual information at the saccade target, visual activity at that location was modulated at least as much as, and often more than, activity at the covertly attended location. Modulations of activity at the attended and saccade target locations were qualitatively similar and included increased response magnitude, stimulus selectivity, and spiking reliability, as well as increased gamma and decreased low-frequency power of LFPs. These results demonstrate that saccade preparation is sufficient to modulate visual cortical representations and suggest that the interrelationship of oculomotor and attention-related mechanisms extends to posterior visual cortex. |
Chess Stetson; Richard A. Andersen The parietal reach region selectively anti-synchronizes with dorsal premotor cortex during planning Journal Article In: Journal of Neuroscience, vol. 34, no. 36, pp. 11948–11958, 2014. @article{Stetson2014, Recent reports have indicated that oscillations shared across distant cortical regions can enhance their connectivity, but do coherent oscillations ever diminish connectivity? We investigated oscillatory activity in two distinct reach-related regions in the awake behaving monkey (Macaca mulatta): the parietal reach region (PRR) and the dorsal premotor cortex (PMd). PRR and PMd were found to oscillate at similar frequencies (beta, 15–30 Hz) during periods of fixation and movement planning. At first glance, the stronger oscillator of the two, PRR, would seem to drive the weaker, PMd. However, a more fine-grained measure, the partial spike-field coherence, revealed a different relationship. Relative to global beta-band activity in the brain, action potentials in PRR anti-synchronize with PMd oscillations. These data suggest that, rather than driving PMd during planning, PRR neurons fire in such a way that they are less likely to communicate information to PMd. |
Caleb E. Strait; Tommy C. Blanchard; Benjamin Y. Hayden Reward value comparison via mutual inhibition in ventromedial prefrontal cortex Journal Article In: Neuron, vol. 82, no. 6, pp. 1357–1366, 2014. @article{Strait2014, Recent theories suggest that reward-based choice reflects competition between value signals in the ventromedial prefrontal cortex (vmPFC). We tested this idea by recording vmPFC neurons while macaques performed a gambling task with asynchronous offer presentation. We found that neuronal activity shows four patterns consistent with selection via mutual inhibition: (1) correlated tuning for probability and reward size, suggesting that vmPFC carries an integrated value signal; (2) anti-correlated tuning curves for the two options, suggesting mutual inhibition; (3) neurons rapidly come to signal the value of the chosen offer, suggesting the circuit serves to produce a choice; and (4) after regressing out the effects of option values, firing rates still could predict choice-a choice probability signal. In addition, neurons signaled gamble outcomes, suggesting that vmPFC contributes to both monitoring and choice processes. These data suggest a possible mechanism for reward-based choice and endorse the centrality of vmPFC in that process. |
Lars Strother; Danila Alferov Inter-element orientation and distance influence the duration of persistent contour integration Journal Article In: Frontiers in Psychology, vol. 5, pp. 1273, 2014. @article{Strother2014, Contour integration is a fundamental form of perceptual organization. We introduce a new method of studying the mechanisms responsible for contour integration. This method capitalizes on the perceptual persistence of contours under conditions of impending camouflage. Observers viewed arrays of randomly arranged line segments upon which circular contours comprised of similar line segments were superimposed via abrupt onset. Crucially, these contours remained visible for up to a few seconds following onset, but eventually disappeared due to the camouflaging effects of surrounding background line segments. Our main finding was that the duration of contour visibility depended on the distance and degree of co-alignment between adjacent contour segments such that relatively dense smooth contours persisted longest. The stimulus-related effects reported here parallel similar results from contour detection studies, and complement previously reported top-down influences on contour persistence (Strother et al., 2011). We propose that persistent contour visibility reflects the sustained activity of recurrent processing loops within and between visual cortical areas involved in contour integration and other important stages of visual object recognition. |
Grayden J. F. Solman; Kersondra Hickey; Daniel Smilek Comparing target detection errors in visual search and manually-assisted search Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 4, pp. 945–958, 2014. @article{Solman2014, Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. |
Grayden J. F. Solman; Alan Kingstone Balancing energetic and cognitive resources: Memory use during search depends on the orienting effector Journal Article In: Cognition, vol. 132, no. 3, pp. 443–454, 2014. @article{Solman2014a, Search outside the laboratory involves tradeoffs among a variety of internal and external exploratory processes. Here we examine the conditions under which item specific memory from prior exposures to a search array is used to guide attention during search. We extend the hypothesis that memory use increases as perceptual search becomes more difficult by turning to an ecologically important type of search difficulty - energetic cost. Using optical motion tracking, we introduce a novel head-contingent display system, which enables the direct comparison of search using head movements and search using eye movements. Consistent with the increased energetic cost of turning the head to orient attention, we discover greater use of memory in head-contingent versus eye-contingent search, as reflected in both timing and orienting metrics. Our results extend theories of memory use in search to encompass embodied factors, and highlight the importance of accounting for the costs and constraints of the specific motor groups used in a given task when evaluating cognitive effects. |
David Souto; Dirk Kerzel Ocular tracking responses to background motion gated by feature-based attention Journal Article In: Journal of Neurophysiology, vol. 112, no. 5, pp. 1074–1081, 2014. @article{Souto2014, Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset. |
Eelke Spaak; Floris P. Lange; Ole Jensen Local entrainment of alpha oscillations by visual stimuli causes cyclic modulation of perception Journal Article In: Journal of Neuroscience, vol. 34, no. 10, pp. 3536–3544, 2014. @article{Spaak2014, Prestimulus oscillatory neural activity in the visual cortex has large consequences for perception and can be influenced by top-down control from higher-order brain regions. Making a causal claim about the mechanistic role of oscillatory activity requires that oscillations be directly manipulated independently of cognitive instructions. There are indications that a direct manipulation, or entrainment, of visual alpha activity is possible through visual stimulation. However, three important questions remain: (1) Can the entrained alpha activity be endogenously maintained in the absence of continuous stimulation?; (2) Does entrainment of alpha activity reflect a global or a local process?; and (3) Does the entrained alpha activity influence perception? To address these questions, we presented human subjects with rhythmic stimuli in one visual hemifield, and arhythmic stimuli in the other. After rhythmic entrainment, we found a periodic pattern in detection performance of near-threshold targets specific to the entrained hemifield. Using magnetoencephalograhy to measure ongoing brain activity, we observed strong alpha activity contralateral to the rhythmic stimulation outlasting the stimulation by several cycles. This entrained alpha activity was produced locally in early visual cortex, as revealed by source analysis. Importantly, stronger alpha entrainment predicted a stronger phasic modulation of detection performance in the entrained hemifield. These findings argue for a cortically focal entrainment of ongoing alpha oscillations by visual stimulation, with concomitant consequences for perception. Our results support the notion that oscillatory brain activity in the alpha band provides a causal mechanism for the temporal organization of visual perception. |
Laura J. Speed; Gabriella Vigliocco Eye movements reveal the dynamic simulation of speed in language Journal Article In: Cognitive Science, vol. 38, no. 2, pp. 367–382, 2014. @article{Speed2014, This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed of verb of the sentence, speaking rate, and configuration of a supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene. |
Sara Spotorno; George L. Malcolm; Benjamin W. Tatler How context information and target information guide the eyes from the first epoch of search in real-world scenes Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–21, 2014. @article{Spotorno2014, This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene. |
Aidan A. Thompson; Patrick A. Byrne; Denise Y. P. Henriques Visual targets aren't irreversibly converted to motor coordinates: Eye-centered updating of visuospatial memory in online reach control Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e92455, 2014. @article{Thompson2014, Counter to current and widely accepted hypotheses that sensorimotor transformations involve converting target locations in spatial memory from an eye-fixed reference frame into a more stable motor-based reference frame, we show that this is not strictly the case. Eye-centered representations continue to dominate reach control even during movement execution; the eye-centered target representation persists after conversion to a motor-based frame and is continuously updated as the eyes move during reach, and is used to modify the reach plan accordingly during online control. While reaches are known to be adjusted online when targets physically shift, our results are the first to show that similar adjustments occur in response to changes in representations of remembered target locations. Specifically, we find that shifts in gaze direction, which produce predictable changes in the internal (specifically eye-centered) representation of remembered target locations also produce mid-transport changes in reach kinematics. This indicates that representations of remembered reach targets (and visuospatial memory in general) continue to be updated relative to gaze even after reach onset. Thus, online motor control is influenced dynamically by both the external and internal updating mechanisms. |
Jennifer G. Tichon; Timothy Mavin; Guy Wallis; Troy A. W. Visser; Stephan Riek Using pupillometry and electromyography to track positive and negative affect during flight simulation Journal Article In: Aviation Psychology and Applied Human Factors, vol. 4, no. 1, pp. 23–32, 2014. @article{Tichon2014, Affect is a key determinant of performance, due to its influence on cognitive processing. Negative emotions such as anxiety are recognized cognitive stressors shown to degrade decision making and situation awareness. Conversely, positive affect can improve problem solving and facilitate recall. This exploratory pilot study used electromyography and pupillometry measures to track pilots' levels of negative and positive affect while training in a flight simulator. Fixation duration and saccade rate were found to correspond reliably to pilot self-reports of anxiety. Additionally, large increases in muscle activation were also recorded when higher anxiety was reported. Decreases in positive affect correlated significantly with saccade rate, fixation duration, and mean saccade velocity. Results are discussed in terms of using psychophysiological measures to provide a continuous, objective measure of pilot affective levels as an additional evaluation method to support assessment of pilot performance in simulation training environments. |
Jennifer G. Tichon; Guy Wallis; Stephan Riek; Timothy Mavin Physiological measurement of anxiety to evaluate performance in simulation training Journal Article In: Cognition, Technology and Work, vol. 16, no. 2, pp. 203–210, 2014. @article{Tichon2014a, The ability to control emotion is a skill which contributes to performance in the same way as cognitive and technical skills do to the successful completion of high stress operations. The interdependence between emotion, problem-solving and decision-making makes a negative emotion such as anxiety of interest in evaluating trainee performance in simulations which replicate stressful work conditions. Self-report measures of anxiety require trainees to interrupt the simulation experience to either complete psychological scales or make verbal reports of state anxiety. An uninterrupted, continuous measure of anxiety is, therefore, preferable for simulation environments. During this study, the anxiety levels of trainee pilots were tracked via electromyography, eye movements and pupillometry while undertaking required tasks in a flight simulation. Fixation duration and saccade rate corresponded reliably to pilot self-reports of anxiety, while pupil size and saccade amplitude did not show a strong comparison to changes in affective state. Large increases in muscle activation were recorded when higher anxiety was reported. The results suggest that a combination of physiological measures could provide a robust, continuous indicator of anxiety level. The implications of the current study on further development of physiological measures to support tracking anxiety as a tool for simulation training assessment are discussed. |
Jianliang Tong; Jun Maruta; Kristin J. Heaton; Alexis L. Maule; Jamshid Ghajar Adaptation of visual tracking synchronization after one night of sleep deprivation Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 121–131, 2014. @article{Tong2014, The temporal delay between sensory input and motor execution is a fundamental constraint in interactions with the environment. Predicting the temporal course of a stimulus and dynamically synchronizing the required action with the stimulus are critical for offsetting this constraint, and this prediction-synchronization capacity can be tested using visual tracking of a target with predictable motion. Although the role of temporal prediction in visual tracking is assumed, little is known of how internal predictions interact with the behavioral outcome or how changes in the cognitive state influence such interaction. We quantified and compared the predictive visual tracking performance of military volunteers before and after one night of sleep deprivation. The moment-to-moment synchronization of visual tracking during sleep deprivation deteriorated with sensitivity changes greater than 40 %. However, increased anticipatory saccades maintained the overall temporal accuracy with near zero phase error. Results suggest that acute sleep deprivation induces instability in visuomotor prediction, but there is compensatory visuomotor adaptation. Detection of these visual tracking features may aid in the identification of insufficient sleep. |
Annie Tremblay; Elsa Spinelli English listeners' use of distributional and acoustic-phonetic cues to liaison in French: Evidence from eye movements Journal Article In: Language and Speech, vol. 57, no. 3, pp. 310–337, 2014. @article{Tremblay2014, This study investigates English listeners' use of distributional and acoustic-phonetic cues to liaison in French. Liaison creates a misalignment of the syllable and word boundaries, but is signaled by distributional cues (/z/ is a frequent liaison but not a frequent word onset; /t/ is a frequent word onset but a less frequent liaison) and acoustic-phonetic cues (liaison consonants are 15 per cent shorter than word-initial consonants). English-speaking French learners completed a visual-world eye-tracking experiment in which they heard adjective-noun sequences where the pivotal consonant was /t/ (expected advantage for consonant-initial words) or /z/ (expected advantage for liaison-initial words). Their results were compared to those of native French speakers. Both groups showed an advantage for consonant-initial targets with /t/ but no advantage for consonant- or liaison-initial targets with /z/. Both groups' competitor fixations were modulated by the duration of the pivotal consonant, but only the learners' fixations to liaison-initial targets were modulated by the duration of the pivotal consonant. This suggests that English listeners use both top-down (distributional) and bottom-up (acoustic-phonetic) cues to liaison in French. Their greater reliance on acoustic-phonetic cues is hypothesized to stem in part from English, where such cues play an important role for locating word boundaries. |
Danijela Trenkic; Jelena Mirkovic; Gerry T. M. Altmann Real-time grammar processing by native and non-native speakers: Constructions unique to the second language Journal Article In: Bilingualism: Language and Cognition, vol. 17, no. 2, pp. 237–257, 2014. @article{Trenkic2014, We investigated second language (L2) comprehension of grammatical structures that are unique to the L2, and which are known to cause persistent difficulties in production. A visual-world eye-tracking experiment focused on online comprehension of English articles by speakers of the article-lacking Mandarin, and a control group of English native speakers. The results show that non-native speakers from article-lacking backgrounds can incrementally utilise the information signalled by L2 articles in real time to constrain referential domains and resolve reference more efficiently. The findings support the hypothesis that L2 processing does not always over-rely on pragmatic affordances, and that some morphosyntactic structures unique to the target language can be processed in a targetlike manner in comprehension, despite persistent difficulties with their production. A novel proposal, based on multiple meaning-to-form, but consistent form-to-meaning mappings, is developed to account for such comprehension-production asymmetries. |
Alison M. Trude; Melissa C. Duff; Sarah Brown-Schmidt Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception Journal Article In: Cortex, vol. 54, no. 1, pp. 117–123, 2014. @article{Trude2014, A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia and contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language. |
Aroline E. Seibert Hanson; Matthew T. Carlson The roles of first language and proficiency in L2 processing of Spanish clitics: Global effects Journal Article In: Language Learning, vol. 64, pp. 310–342, 2014. @article{SeibertHanson2014, We assessed the roles of first language (L1) and second language (L2) proficiency in the processing of preverbal clitics in L2 Spanish by considering the predictions of four processing theories—the Input Processing Theory, the Unified Competition Model, the Amalgamation Model, and the Associative-Cognitive CREED. We compared the performance of L1 English (typologically different from Spanish) to L1 Romanian (typologically similar to Spanish) speakers from various L2 Spanish proficiency levels on an auditory sentence-processing task. We found main effects of proficiency, condition, and L1, and an interaction between proficiency and condition. Although we did not find an interaction between L1 and condition, the L1 Romanians showed an overall advantage that may be attributable to structure-specific experience in the L1, raising new questions about how crosslinguistic differences influence the processing strategies learners apply to their L2. |
Mehrdad Seirafi; Peter De Weerd; Beatrice De Gelder Suppression of face perception during saccadic eye movements Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–7, 2014. @article{Seirafi2014, Lack of awareness of a stimulus briefly presented during a saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade—known as saccadic suppression—is a key step in investigating saccadic omission. To date, almost all studies have focused on the reduced visibility of simple stimuli such as flashes and bars; the extension of these results to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of briefly presented face stimuli during saccadic eye movements. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most of the trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during saccadic eye movements. |
Yamila Sevilla; Mora Maldonado; Diego E. Shalom Pupillary dynamics reveal computational cost in sentence planning Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 6, pp. 1041–1052, 2014. @article{Sevilla2014, This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed higher increase in pupil size for the production of passive and object dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production. |
Aasef G. Shaikh; Fatema F. Ghasia Gaze holding after anterior-inferior temporal lobectomy Journal Article In: Neurological Sciences, vol. 35, no. 11, pp. 1749–1756, 2014. @article{Shaikh2014, Eye position-sensitive neurons are found in parietooccipital and anterior-inferior temporal cortex. Putative role of these neurons is to facilitate transformation of reference frame from the retina-fixed to world-fixed coordinates and assure precise action. We assessed the nature of ocular motor disorder in a subject who had selective resection of the right anterior-inferior temporal cortex for the treatment of intractable epilepsy from cortical dysplasia. The gaze was stable when the subject was viewing straight-ahead, but centrally directed drifts in the eye position were seen during eccentric horizontal gaze holding. Eye-in-orbit position determined drift velocity and its direction. Conjugate and sinusoidal vertical oscillations were also present. Horizontal drifts and vertical oscillations became prominent and disconjugate in the absence of visual cue. The gaze-holding deficit was consistent with impairment in neural integration, but in the absence of cerebellar and visual deficits. We speculate that brainstem neural integrator might receive cortical feedback regarding world-fixed coordinates. Visual system might calibrate this process. Hence the lesion of the anterior-inferior temporal lobe leads to impairment in the function of neural integrator. Vision might be used to calibrate such feedback, hence the lack of visual cue further impairs the function of the neural integrator leading to worsening of gaze-holding deficits. |
Annie L. Shelton; Kim M. Cornish; Claudine Kraan; Nellie Georgiou-Karistianis; Sylvia A. Metcalfe; John L. Bradshaw; Darren R. Hocking; Alison D. Archibald; Jonathan Cohen; Julian N. Trollor; Joanne Fielding Exploring inhibitory deficits in female premutation carriers of fragile X syndrome: Through eye movements Journal Article In: Brain and Cognition, vol. 85, no. 1, pp. 201–208, 2014. @article{Shelton2014, There is evidence which demonstrates that a subset of males with a premutation (PM) CGG repeat expansion (between 55 and 200 repeats) of the fragile X mental retardation 1 gene exhibit subtle deficits of executive function that progressively deteriorate with increasing age and CGG repeat length. However, it remains unclear whether similar deficits, which may indicate the onset of more severe degeneration, are evident in female PM-carriers. In the present study we explore whether female PM-carriers exhibit deficits of executive function which parallel those of male PM-carriers. Fourteen female fragile X premutation carriers without fragile X-associated tremor/ataxia syndrome and fourteen age-, sex-, and IQ-matched controls underwent ocular motor and neuropsychological tests of select executive processes, specifically of response inhibition and working memory. Group comparisons revealed poorer inhibitory control for female premutation carriers on ocular motor tasks, in addition to demonstrating some difficulties in behaviour self-regulation, when compared to controls. A negative correlation between CGG repeat length and antisaccade error rates for premutation carriers was also found. Our preliminary findings indicate that impaired inhibitory control may represent a phenotype characteristic which may be a sensitive risk biomarker within this female fragile X premutation population. |
Kelly Shen; Anthony R. McIntosh; Jennifer D. Ryan A working memory account of refixations in visual search Journal Article In: Journal of Vision, vol. 14, no. 14, pp. 1–11, 2014. @article{Shen2014, We tested the hypothesis that active exploration of the visual environment is mediated not only by visual attention but also by visual working memory (VWM) by examining performance in both a visual search and a change detection task. Subjects rarely fixated previously examined distracters during visual search, suggesting that they successfully retained those items. Change detection accuracy decreased with increasing set size, suggesting that subjects had a limited VWM capacity. Crucially, performance in the change detection task predicted visual search efficiency: Higher VWM capacity was associated with faster and more accurate responses as well as lower probabilities of refixation. We found no temporal delay for return saccades, suggesting that active vision is primarily mediated by VWM rather than by a separate attentional disengagement mechanism commonly associated with the inhibition-of-return (IOR) effect. Taken together with evidence that visual attention, VWM, and the oculomotor system involve overlapping neural networks, these data suggest that there exists a general capacity for cognitive processing. |
Heather Sheridan; Eyal M. Reingold Expert vs. novice differences in the detection of relevant information during a chess game: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 5, pp. 941, 2014. @article{Sheridan2014, The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players, while they selected white's best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions. |
Hong-Yue Sun; Li-Lin Rao; Kun Zhou; Shu Li Formulating an emergency plan based on expectation-maximization is one thing, but applying it to a single case is another Journal Article In: Journal of Risk Research, vol. 17, no. 7, pp. 785–814, 2014. @article{Sun2014, This research extends the exploration of single-play/multiple-play distinctions from monetary gambling paradigm to emergency management situation. We conducted three studies (two survey studies and one eye tracking study) to test whether an emergency plan we formulated in advance based on expectation-maximization would be likely to be applied in a single case. In the first two survey studies we found that the plan with the higher EV was more likely to be preferred when the plan was applied 100 times or to 100 areas than when the plan was applied only once or to only one area. We also found significant framing and reflection effects, both of which violated the invariance principle in the single-application condition, but not in the multiple-application condition. Furthermore, in the eye tracking study, we found distinctly different eye movement patterns in the single-application condition and the multiple-application condition. The eye movement patterns in the multiple-application condition are more consistent with the predictions deduced from expectation computation. The overall results suggest that a gap exists between the formulation and the implementation of an emergency plan. Formulating an emergency plan based on expectation-maximization is doable, but applying it to a single case may be more challenging. |
Benjamin Swets; Matthew E. Jacovina; Richard J. Gerrig Individual differences in the scope of speech planning: Evidence from eye-movements Journal Article In: Language and Cognition, vol. 6, no. 1, pp. 12–44, 2014. @article{Swets2014, Previous research has demonstrated that the scope of speakers' planning in language production varies in response to external forces such as time pressure. This susceptibility to external pressures indicates a flexibly incremental production system: speakers plan utterances piece by piece, but external pressures affect the size of the pieces speakers buffer. In the current study, we explore internal constraints on speech planning. Specifically, we examine whether individual differences in working memory predict the scope and efficiency of advance planning. In our task, speakers described picture arrays to partners in a matching game. The arrays sometimes required speakers to note a contrast between a sentence-initial object (e.g., a four-legged cat) and a sentence-final object (e.g., a three-legged cat). Based on prior screening, we selected participants who differed on verbal working memory span. Eye-movement measures revealed that high-span speakers were more likely to gaze at the contrasting pictures prior to articulation than were low-span speakers. As a result, high-span speakers were also more likely to reference the contrast early in speech. We conclude that working memory plays a substantial role in the flexibility of incremental speech planning. |
Sarit F. A. Szpiro; Miriam Spering; Marisa Carrasco Perceptual learning modifies untrained pursuit eye movements Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–13, 2014. @article{Szpiro2014, Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. |
Bahareh Taghizadeh; Alexander Gail Spatial task context makes short-latency reaches prone to induced Roelofs illusion Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 673, 2014. @article{Taghizadeh2014, The perceptual localization of an object is often more prone to illusions than an immediate visuomotor action towards that object. The induced Roelofs effect (IRE) probes the illusory influence of task-irrelevant visual contextual stimuli on the processing of task-relevant visuospatial instructions during movement preparation. In the IRE, the position of a task-irrelevant visual object induces a shift in the localization of a visual target when subjects indicate the position of the target by verbal response, key-presses or delayed pointing to the target ("perception" tasks), but not when immediately pointing or reaching towards it without instructed delay ("action" tasks). This discrepancy was taken as evidence for the dual-visual-stream or perception-action hypothesis, but was later explained by a phasic distortion of the egocentric spatial reference frame which is centered on subjective straight-ahead (SSA) and used for reach planning. Both explanations critically depend on delayed movements to explain the IRE for action tasks. Here we ask: first, if the IRE can be observed for short-latency reaches; second, if the IRE in fact depends on a distorted egocentric frame of reference. Human subjects were tested in new versions of the IRE task in which the reach goal had to be localized with respect to another object, i.e., in an allocentric reference frame. First, we found an IRE even for immediate reaches in our allocentric task, but not for an otherwise similar egocentric control task. Second, the IRE depended on the position of the task-irrelevant frame relative to the reference object, not relative to SSA. 
We conclude that the IRE for reaching does not mandatorily depend on prolonged response delays, nor does it depend on motor planning in an egocentric reference frame. Instead, allocentric encoding of a movement goal is sufficient to make immediate reaches susceptible to IRE, underlining the context dependence of visuomotor illusions. |
L. L. Tanaka; J. C. Dessing; Pankhuri Malik; S. L. Prime; J. Douglas Crawford The effects of TMS over dorsolateral prefrontal cortex on trans-saccadic memory of multiple objects Journal Article In: Neuropsychologia, vol. 63, pp. 185–193, 2014. @article{Tanaka2014, Humans typically make several rapid eye movements (saccades) per second. It is thought that visual working memory can retain and spatially integrate three to four objects or features across each saccade but little is known about this neural mechanism. Previously we showed that transcranial magnetic stimulation (TMS) to the posterior parietal cortex and frontal eye fields degrades trans-saccadic memory of multiple object features (Prime, Vesia, & Crawford, 2008, Journal of Neuroscience, 28(27), 6938–6949; Prime, Vesia, & Crawford, 2010, Cerebral Cortex, 20(4), 759–772). Here, we used a similar protocol to investigate whether dorsolateral prefrontal cortex (DLPFC), an area involved in spatial working memory, is also involved in trans-saccadic memory. Subjects were required to report changes in stimulus orientation with (saccade task) or without (fixation task) an eye movement in the intervening memory interval. We applied single-pulse TMS to left and right DLPFC during the memory delay, timed at three intervals to arrive approximately 100 ms before, 100 ms after, or at saccade onset. In the fixation task, left DLPFC TMS produced inconsistent results, whereas right DLPFC TMS disrupted performance at all three intervals (significantly for presaccadic TMS). In contrast, in the saccade task, TMS consistently facilitated performance (significantly for left DLPFC/perisaccadic TMS and right DLPFC/postsaccadic TMS), suggesting a dis-inhibition of trans-saccadic processing. These results are consistent with a neural circuit of trans-saccadic memory that overlaps and interacts with, but is partially separate from, the circuit for visual working memory during sustained fixation. |
Amy Rouinfar; Elise Agra; Adam M. Larson; N. Sanjay Rebello; Lester C. Loschky In: Frontiers in Psychology, vol. 5, pp. 1094, 2014. @article{Rouinfar2014, This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. 
Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. |
Paul Roux; Baudoin Forgeot d'Arc; Christine Passerieux; Franck Ramus Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation? Journal Article In: Schizophrenia Research, vol. 157, no. 1-3, pp. 78–83, 2014. @article{Roux2014, Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less to others' face and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was not significant anymore. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. |
Arani Roy; Stephen V. Shepherd; Michael L. Platt Reversible inactivation of pSTS suppresses social gaze following in the macaque (Macaca mulatta) Journal Article In: Social Cognitive and Affective Neuroscience, vol. 9, no. 2, pp. 209–217, 2014. @article{Roy2014, Humans and other primates shift their attention to follow the gaze of others [gaze following (GF)]. This behavior is a foundational component of joint attention, which is severely disrupted in neurodevelopmental disorders such as autism and schizophrenia. Both cortical and subcortical pathways have been implicated in GF, but their contributions remain largely untested. While the proposed subcortical pathway hinges crucially on the amygdala, the cortical pathway is thought to require perceptual processing by a region in the posterior superior temporal sulcus (pSTS). To determine whether pSTS is necessary for typical GF behavior, we engaged rhesus macaques in a reward discrimination task confounded by leftward- and rightward-facing social distractors following saline or muscimol injections into left pSTS. We found that reversible inactivation of left pSTS with muscimol strongly suppressed GF, as assessed by reduced influence of observed gaze on target choices and saccadic reaction times. These findings demonstrate that activity in pSTS is required for normal GF by primates. |
Annie Roy-Charland; Melanie Perron; Olivia Beaudry; Kaylee Eady Confusion of fear and surprise: A test of the perceptual-attentional limitation hypothesis with eye movement monitoring Journal Article In: Cognition and Emotion, vol. 28, no. 7, pp. 1214–1222, 2014. @article{RoyCharland2014, Of the basic emotional facial expressions, fear is typically less accurately recognised as a result of being confused with surprise. According to the perceptual-attentional limitation hypothesis, the difficulty in recognising fear could be attributed to the similar visual configuration with surprise. In effect, they share more muscle movements than they possess distinctive ones. The main goal of the current study was to test the perceptual-attentional limitation hypothesis in the recognition of fear and surprise using eye movement recording and by manipulating the distinctiveness between expressions. Results revealed that when the brow lowerer is the only distinctive feature between expressions, accuracy is lower, participants spend more time looking at stimuli and they make more comparisons between expressions than when stimuli include the lip stretcher. These results not only support the perceptual-attentional limitation hypothesis but extend its definition by suggesting that it is not solely the number of distinctive features that is important but also their qualitative value. |
Douglas A. Ruff; Marlene R. Cohen Attention can increase or decrease spike count correlations between pairs of neurons depending on their role in a task Journal Article In: Nature Neuroscience, vol. 17, no. 11, pp. 1591–1597, 2014. @article{Ruff2014, Visual attention enhances the responses of visual neurons that encode the attended location. Several recent studies showed that attention also decreases correlations between fluctuations in the responses of pairs of neurons (termed spike count correlation or rSC). The previous results are consistent with two hypotheses. Attention–related changes in rate and rSC might be linked (perhaps through a common mechanism), so that attention always decreases rSC. Alternately, attention might either increase or decrease rSC, possibly depending on the role the neurons play in the behavioral task. We recorded simultaneously from dozens of neurons in area V4 while monkeys performed a discrimination task. We found strong evidence in favor of the second hypothesis, showing that attention can flexibly increase or decrease correlations, depending on whether the neurons provide evidence for the same or opposite perceptual decisions. These results place important constraints on models of the neuronal mechanisms underlying cognitive factors. |
Rachel A. Ryskin; Sarah Brown-Schmidt; Enriqueta Canseco-Gonzalez; Loretta K. Yiu; Elizabeth T. Nguyen Visuospatial perspective-taking in conversation and the role of bilingual experience Journal Article In: Journal of Memory and Language, vol. 74, pp. 46–76, 2014. @article{Ryskin2014, Little is known about how listeners use spatial perspective information to guide comprehension. Perspective-taking abilities have been linked to executive function in both children and adults. Bilingual children excel at perspective-taking tasks compared to their monolingual counterparts (e.g., Greenberg, Bellana, & Bialystok, 2013), possibly due to the executive function benefits conferred by the experience of switching between languages. Here we examine the mechanisms of visuo-spatial perspective-taking in adults, and the effects of bilingualism on this process. We report novel results regarding the ability of listeners to appreciate the spatial perspective of another person in conversation: While spatial perspective-taking does pose challenges, listeners rapidly accommodated the speaker's perspective, in time to guide the on-line processing of the speaker's utterances. Moreover, once adopted, spatial perspectives were enduring, resulting in costs when switching to a different perspective, even when that perspective is one's own. In addition to these findings, direct comparison of monolingual and bilingual participants offers no support for the hypothesis that bilingualism improves the ability to appreciate the perspective of another person during language comprehension. In fact, in some cases adult bilinguals have significantly more difficulty with perspective-laden language. |
Patrick T. Sadtler; Kristin M. Quick; Matthew D. Golub; Steven M. Chase; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista Neural constraints on learning Journal Article In: Nature, vol. 512, pp. 423–426, 2014. @article{Sadtler2014, Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. 
On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess. |
Stanislav M. Sajin; Cynthia M. Connine Semantic richness: The role of semantic features in processing spoken words Journal Article In: Journal of Memory and Language, vol. 70, no. 1, pp. 13–35, 2014. @article{Sajin2014, A lexical decision and two visual world paradigm experiments are reported that investigated the role of semantic representations in recognizing spoken words. Semantic richness (NOF: number of features) influenced lexical decision reaction times in that semantically rich words (high NOF) were processed faster than semantically impoverished words (low NOF). Processing in the VWP was faster for high NOF words but only when an onset competitor was present in the display (target BREAD, onset competitor BRICK). Adding background speech babble to the spoken stimuli resulted in an advantage for processing high NOF words with and without onset competitors in the display. The results suggest that semantic representations directly contribute to the recognition of spoken words and that sub-optimal listening conditions (e.g., background babble) enhance the role of semantics. |
Robert J. Sall; Timothy J. Wright; Walter R. Boot Driven to distraction? The effect of simulated red light running camera flashes on attention and oculomotor control Journal Article In: Visual Cognition, vol. 22, no. 1, pp. 57–73, 2014. @article{Sall2014, Do similar factors influence the allocation of attention in visually sparse and abstract laboratory paradigms and complex real-world scenes? To explore this question we conducted a series of experiments that examined whether the flash that accompanies a Red Light Running Camera (RLRC) can capture observers' attention away from important roadway changes. Inhibition of Return (IOR) and eye movement direction served as indices of the spatial allocation of attention. In two experiments, participants were slower to respond to the brake lights of a vehicle in a driving scene when an RLRC flash occurred nearby or were slower to initiate eye movements to brake light signals (IOR effects). In a third experiment, we found evidence that less prevalent RLRC flashes disrupted eye movement control. Results suggest that attention can be misdirected as a result of RLRC flashes and provide additional evidence that findings from simple laboratory paradigms can predict the allocation of attention in complex settings that are more familiar to observers. |
Anne Pier Salverda; Dave Kleinschmidt; Michael K. Tanenhaus Immediate effects of anticipatory coarticulation in spoken-word recognition Journal Article In: Journal of Memory and Language, vol. 71, no. 1, pp. 145–163, 2014. @article{Salverda2014, Two visual-world experiments examined listeners' use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as "The ladder is the target". With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word ("The ladder is the target"). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article's vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for "data explanation" approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. |
Ardi Roelofs Tracking eye movements to localize Stroop interference in naming: Word planning versus articulatory buffering Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 5, pp. 1332–1347, 2014. @article{Roelofs2014, Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the interference occurs in an articulatory buffer after word planning. Here, 2 color-word Stroop experiments are reported that tested between these accounts using eye tracking. Previous research has indicated that the shifting of eye gaze from a stimulus to another occurs before the articulatory buffer is reached in spoken word planning. In the present experiments, participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to manually indicate its direction. If Stroop interference arises in the articulatory buffer, the interference should be present in the color-naming latencies but not in the gaze shift and manual response latencies. Contrary to these predictions, Stroop interference was present in all 3 behavioral measures. These results indicate that Stroop interference arises during spoken word planning rather than in articulatory buffering. |
Gustavo Rohenkohl; Ian C. Gould; Jessica Pessoa; Anna C. Nobre Combining spatial and temporal expectations to improve visual perception Journal Article In: Journal of Vision, vol. 14, no. 4, pp. 1–13, 2014. @article{Rohenkohl2014, The importance of temporal expectations in modulating perceptual functions is increasingly recognized. However, the means through which temporal expectations can bias perceptual information processing remains ill understood. Recent theories propose that modulatory effects of temporal expectations rely on the co-existence of other biases based on receptive-field properties, such as spatial location. We tested whether perceptual benefits of temporal expectations in a perceptually demanding psychophysical task depended on the presence of spatial expectations. Foveally presented symbolic arrow cues indicated simultaneously where (location) and when (time) target events were more likely to occur. The direction of the arrow indicated target location (80% validity), while its color (pink or blue) indicated the interval (80% validity) for target appearance. Our results confirmed a strong synergistic interaction between temporal and spatial expectations in enhancing visual discrimination. Temporal expectation significantly boosted the effectiveness of spatial expectation in sharpening perception. However, benefits for temporal expectation disappeared when targets occurred at unattended locations. Our findings suggest that anticipated receptive-field properties of targets provide a natural template upon which temporal expectations can operate in order to help prioritize goal-relevant events from early perceptual stages. |
Maria C. Romero; Pierpaolo Pani; Peter Janssen Coding of shape features in the macaque anterior intraparietal area Journal Article In: Journal of Neuroscience, vol. 34, no. 11, pp. 4006–4021, 2014. @article{Romero2014, The exquisite ability of primates to grasp and manipulate objects relies on the transformation of visual information into motor commands. To this end, the visual system extracts object affordances that can be used to program and execute the appropriate grip. The macaque anterior intraparietal (AIP) area has been implicated in the extraction of affordances for the purpose of grasping. Neurons in the AIP area respond during visually guided grasping and to the visual presentation of objects. A subset of AIP neurons is also activated by two-dimensional images of objects and even by outline contours defining the object shape, but it is unknown how AIP neurons actually represent object shape. In this study, we used a stimulus reduction approach to determine the minimum effective shape feature evoking AIP responses. AIP neurons responding to outline shapes also responded selectively to very small fragment stimuli measuring only 1–2°. This fragment selectivity could not be explained by differences in eye movements or simple orientation selectivity, but proved to be highly dependent on the relative position of the stimulus in the receptive field. Our findings challenge the current understanding of the AIP area as a critical stage in the dorsal stream for the extraction of object affordances. |
Hélène Samson; Nicole Fiori-Duharcourt; Karine Doré-Mazars; Christelle Lemoine; Dorine Vergilino-Perez Perceptual and gaze biases during face processing: Related or not? Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e85746, 2014. @article{Samson2014, Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and/or longer fixations on the left side of the face. Here, we recorded eye-movements, in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, right or left to the central fixation point or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use when making a gender judgment.
Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analysis and drawing attention to the influence of the method used to study this bias. |
Germán Sanchis-Trilles; Vicent Alabau; Christian Buck; Michael Carl; Francisco Casacuberta; Mercedes García-Martínez; Ulrich Germann; Jesús González-Rubio; Robin L. Hill; Philipp Koehn; Luis A. Leiva; Bartolomé Mesa-Lao; Daniel Ortiz-Martínez; Herve Saint-Amand; Chara Tsoukala; Enrique Vidal Interactive translation prediction versus conventional post-editing in practice: A study with the CasMaCat workbench Journal Article In: Machine Translation, vol. 28, no. 3-4, pp. 217–235, 2014. @article{SanchisTrilles2014, We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence post-editing), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer key strokes to arrive at the final version of their translation. |
Laura K. Sasse; Matthias Gamer; Christian Büchel; Stefanie Brassen Selective control of attention supports the positivity effect in aging Journal Article In: PLoS ONE, vol. 9, no. 8, pp. e104180, 2014. @article{Sasse2014, There is emerging evidence for a positivity effect in healthy aging, which describes an age-specific increased focus on positive compared to negative information. Life-span researchers have attributed this effect to the selective allocation of cognitive resources in the service of prioritized emotional goals. We explored the basic principles of this assumption by assessing selective attention and memory for visual stimuli, differing in emotional content and self-relevance, in young and old participants. To specifically address the impact of cognitive control, voluntary attentional selection during the presentation of multiple-item displays was analyzed and linked to participants' general ability of cognitive control. Results revealed a positivity effect in older adults' selective attention and memory, which was particularly pronounced for self-relevant stimuli. Focusing on positive and ignoring negative information was most evident in older participants with a generally higher ability to exert top-down control during visual search. Our findings highlight the role of controlled selectivity in the occurrence of a positivity effect in aging. Since the effect has been related to well-being in later life, we suggest that the ability to selectively allocate top-down control might represent a resilience factor for emotional health in aging. |
Michaël Sassi; Maarten Demeyer; Johan Wagemans Peripheral contour grouping and saccade targeting: The role of mirror symmetry Journal Article In: Symmetry, vol. 6, no. 1, pp. 1–22, 2014. @article{Sassi2014, Integrating shape contours in the visual periphery is vital to our ability to locate objects and thus make targeted saccadic eye movements to efficiently explore our surroundings. We tested whether global shape symmetry facilitates peripheral contour integration and saccade targeting in three experiments, in which observers responded to a successful peripheral contour detection by making a saccade towards the target shape. The target contours were horizontally (Experiment 1) or vertically (Experiments 2 and 3) mirror symmetric. Observers responded by making a horizontal (Experiments 1 and 2) or vertical (Experiment 3) eye movement. Based on an analysis of the saccadic latency and accuracy, we conclude that the figure-ground cue of global mirror symmetry in the periphery has little effect on contour integration or on the speed and precision with which saccades are targeted towards objects. The role of mirror symmetry may be more apparent under natural viewing conditions with multiple objects competing for attention, where symmetric regions in the visual field can pre-attentively signal the presence of objects, and thus attract eye movements. |
Jason Satel; Matthew D. Hilchey; Zhiguo Wang; Caroline S. Reiss; Raymond M. Klein In search of a reliable electrophysiological marker of oculomotor inhibition of return Journal Article In: Psychophysiology, vol. 51, no. 10, pp. 1037–1045, 2014. @article{Satel2014, Inhibition of return (IOR) operationalizes a behavioral phenomenon characterized by slower responding to cued, relative to uncued, targets. Two independent forms of IOR have been theorized: input-based IOR occurs when the oculomotor system is quiescent, while output-based IOR occurs when the oculomotor system is engaged. EEG studies forbidding eye movements have demonstrated that reductions of target-elicited P1 components are correlated with IOR magnitude, but when eye movements occur, P1 effects bear no relationship to behavior. We expand on this work by adapting the cueing paradigm and recording event-related potentials: IOR is caused by oculomotor responses to central arrows or peripheral onsets and measured by key presses to peripheral targets. Behavioral IOR is observed in both conditions, but P1 reductions are absent in the central arrow condition. By contrast, arrow and peripheral cues enhance Nd, especially over contralateral electrode sites. |
Daniel R. Saunders; Russell L. Woods Direct measurement of the system latency of gaze-contingent displays Journal Article In: Behavior Research Methods, vol. 46, no. 2, pp. 439–447, 2014. @article{Saunders2014, Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LED)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved. |
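The latency measurement described by Saunders and Woods reduces to counting high-speed video frames between the eyetracker-blinding event and the resulting display change. A minimal sketch of that computation (not the authors' code; the frame indices are hypothetical example data for a 1,000 fps camera):

```python
import statistics

def latencies_ms(event_frames, display_frames, fps=1000):
    """Convert paired video frame indices (blinding event, display change)
    into system latencies in milliseconds."""
    frame_ms = 1000.0 / fps  # duration of one video frame in ms
    return [(d - e) * frame_ms for e, d in zip(event_frames, display_frames)]

def summarize(latencies):
    """Characterize the system-latency distribution over repeated trials."""
    return {
        "mean": statistics.mean(latencies),
        "sd": statistics.stdev(latencies),
        "min": min(latencies),
        "max": max(latencies),
    }

# Five hypothetical repeated measurements captured at 1,000 fps
lats = latencies_ms([120, 340, 560, 780, 990], [148, 371, 586, 809, 1020])
stats = summarize(lats)
```

Because the camera runs at 1,000 fps, each latency is resolved to 1 ms, which is what lets the distribution of latencies (not just a single value) be characterized.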
Daniel J. Schad; Sarah Risse; Timothy J. Slattery; Keith Rayner Word frequency in fast priming: Evidence for immediate cognitive control of eye movements during reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 390–414, 2014. @article{Schad2014, Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm (Sereno & Rayner, 1992) was previously used to study influences of related versus unrelated primes on the target word. Here, we used this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low than in the high frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after visibility of the prime and for prime durations of 32 ms yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading. |
Lutz Schega; Daniel Hamacher; Sandra Erfuth; Wolfgang Behrens-Baumann; Juliane Reupsch; Michael B. Hoffmann Differential effects of head-mounted displays on visual performance Journal Article In: Ergonomics, vol. 57, no. 1, pp. 1–11, 2014. @article{Schega2014, Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look around (LA); optical see-through with organic light emitting diodes and virtual retinal display] to determine whether LA, leaving the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of the HMDs on the visual search and Go-NoGo task were small, but for LA the display-switching costs in the Go-NoGo task were pronounced. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements, and absent stereo-vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect. |
Joseph Schmidt; Annmarie MacNamara; Greg Hajcak Proudfit; Gregory J. Zelinsky More target features in visual working memory leads to poorer search guidance: Evidence from contralateral delay activity Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–19, 2014. @article{Schmidt2014, The visual-search literature has assumed that the top-down target representation used to guide search resides in visual working memory (VWM). We directly tested this assumption using contralateral delay activity (CDA) to estimate the VWM load imposed by the target representation. In Experiment 1, observers previewed four photorealistic objects and were cued to remember the two objects appearing to the left or right of central fixation; Experiment 2 was identical except that observers previewed two photorealistic objects and were cued to remember one. CDA was measured during a delay following preview offset but before onset of a four-object search array. One of the targets was always present, and observers were asked to make an eye movement to it and press a button. We found lower magnitude CDA on trials when the initial search saccade was directed to the target (strong guidance) compared to when it was not (weak guidance). This difference also tended to be larger shortly before search-display onset and was largely unaffected by VWM item-capacity limits or number of previews. Moreover, the difference between mean strong- and weak-guidance CDA was proportional to the increase in search time between mean strong- and weak-guidance trials (as measured by time-to-target and reaction-time difference scores). Contrary to most search models, our data suggest that trials resulting in the maintenance of more target features result in poorer search guidance to a target.
We interpret these counterintuitive findings as evidence for strong search guidance using a small set of highly discriminative target features that remain after pruning from a larger set of features, with the load imposed on VWM varying with this feature-consolidation process. |
Sebastian Schneegans; John P. Spencer; Gregor Schoner; Seongmin Hwang; Andrew Hollingworth Dynamic interactions between visual working memory and saccade target selection Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–23, 2014. @article{Schneegans2014, Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. 
These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task- irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. |
Eyal M. Reingold Eye tracking research and technology: Towards objective measurement of data quality Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 635–652, 2014. @article{Reingold2014, Two methods for objectively measuring eye tracking data quality are explored. The first method works by tricking the eye tracker to detect an abrupt change in the gaze position of an artificial eye that in actuality does not move. Such a device, referred to as an artificial saccade generator, is shown to be extremely useful for measuring the temporal accuracy and precision of eye tracking systems and for validating the latency to display change in gaze contingent display paradigms. The second method involves an artificial pupil that is mounted on a computer controlled moving platform. This device is designed to be able to provide the eye tracker with motion sequences that closely resemble biological eye movements. The main advantage of using artificial motion for testing eye tracking data quality is the fact that the spatiotemporal signal is fully specified in a manner independent of the eye tracker that is being evaluated and that nearly identical motion sequences can be reproduced multiple times with great precision. The results of the present study demonstrate that the equipment described has the potential to become an important tool in the comprehensive evaluation of data quality. |
Eyal M. Reingold; Mackenzie G. Glaholt Cognitive control of fixation duration in visual search: The role of extrafoveal processing Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 610–634, 2014. @article{Reingold2014a, Participants' eye movements were monitored in two visual search experiments that manipulated target-distractor similarity (high vs. low) as well as the availability of distractors for extrafoveal processing (Free-Viewing vs. No-Preview). The influence of the target-distractor similarity by preview manipulation on the distributions of first fixation and second fixation duration was examined by using a survival analysis technique which provided precise estimates of the timing of the first discernible influence of target-distractor similarity on fixation duration. We found a significant influence of target-distractor similarity on first fixation duration in normal visual search (Free-Viewing) as early as 26–28 ms from the start of fixation. In contrast, the influence of target-distractor similarity occurred much later (199–233 ms) in the No-Preview condition. The present study also documented robust and fast-acting extrafoveal and foveal preview effects. Implications for models of eye-movement control and visual search are discussed. |
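The divergence-point logic behind the survival analysis in the Reingold and Glaholt study can be illustrated in simplified form: compute each condition's survival curve (the proportion of fixations still ongoing at time t) and find the earliest t at which the curves separate. The fixed-threshold criterion and the synthetic fixation durations below are illustrative assumptions; the paper's actual procedure uses a more rigorous confidence-interval technique.

```python
import numpy as np

def survival_curve(durations, t_max=600):
    """Proportion of fixations with duration > t, for t = 0..t_max ms."""
    d = np.asarray(durations, dtype=float)
    t = np.arange(t_max + 1)
    return (d[None, :] > t[:, None]).mean(axis=1)

def divergence_point(dur_a, dur_b, threshold=0.05, t_max=600):
    """Earliest t (ms) where the two survival curves differ by more than
    `threshold` -- a toy criterion standing in for the paper's method."""
    diff = np.abs(survival_curve(dur_a, t_max) - survival_curve(dur_b, t_max))
    exceed = np.nonzero(diff > threshold)[0]
    return int(exceed[0]) if exceed.size else None

# Toy data: high-similarity distractors lengthen fixations by ~40 ms
rng = np.random.default_rng(0)
low = rng.normal(250, 40, 2000)    # low target-distractor similarity
high = rng.normal(290, 40, 2000)   # high target-distractor similarity
dp = divergence_point(low, high)   # earliest detectable influence, in ms
```

With these synthetic distributions the estimated divergence point falls shortly before the 40 ms shift becomes fully visible in the survival curves, mirroring how the technique localizes the first discernible influence of a manipulation on fixation duration.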
Gabriel Reyes; Jérôme Sackur Introspection during visual search Journal Article In: Consciousness and Cognition, vol. 29, pp. 212–229, 2014. @article{Reyes2014, Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. |
Theo Rhodes; Christopher T. Kello; Bryan Kerster Intrinsic and extrinsic contributions to heavy tails in visual foraging Journal Article In: Visual Cognition, vol. 22, no. 6, pp. 809–842, 2014. @article{Rhodes2014, Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive to eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed. |
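The lognormal claim for saccade-length distributions in the Rhodes et al. study can be checked with a standard maximum-likelihood fit: a lognormal's logarithm is Gaussian, so the MLE is just the mean and SD of the log-transformed values. A minimal sketch with synthetic data (not the authors' analysis code):

```python
import numpy as np

def fit_lognormal(lengths):
    """MLE for a lognormal distribution: mean and SD of the log values."""
    logs = np.log(np.asarray(lengths, dtype=float))
    return logs.mean(), logs.std(ddof=1)

# Synthetic "saccade lengths" drawn from a known lognormal distribution
rng = np.random.default_rng(42)
lengths = rng.lognormal(mean=1.5, sigma=0.6, size=10000)
mu_hat, sigma_hat = fit_lognormal(lengths)
```

The fitted parameters recover the generating values; applied to real saccade lengths, the same fit (plus a goodness-of-fit comparison against, e.g., an exponential) is one way to substantiate a heavy-tail claim.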
Alby Richard; Jan Churan; Veronica Whitford; Gillian A. O'Driscoll; Debra Titone; Christopher C. Pack Perisaccadic perception of visual space in people with schizophrenia Journal Article In: Journal of Neuroscience, vol. 34, no. 14, pp. 4760–4765, 2014. @article{Richard2014, Corollary discharge signals are found in the nervous systems of many animals, where they serve a large variety of functions related to the integration of sensory and motor signals. In humans, an important corollary discharge signal is generated by oculomotor structures and communicated to sensory systems in concert with the execution of each saccade. This signal is thought to serve a number of purposes related to the maintenance of accurate visual perception. The properties of the oculomotor corollary discharge can be probed by asking subjects to localize stimuli that are flashed briefly around the time of a saccade. The results of such experiments typically reveal large errors in localization. Here, we have exploited these well-known psychophysical effects to assess the potential dysfunction of corollary discharge signals in people with schizophrenia. In a standard perisaccadic localization task, we found that, compared with controls, patients with schizophrenia exhibited larger errors in localizing visual stimuli. The pattern of errors could be modeled as an overdamped corollary discharge signal that encodes instantaneous eye position. The dynamics of this signal predicted symptom severity among patients, suggesting a possible mechanistic basis for widely observed behavioral manifestations of schizophrenia. |
Fabio Richlan; Benjamin Gagl; Stefan Hawelka; Mario Braun; Matthias Schurz; Martin Kronbichler; Florian Hutzler Fixation-related fMRI analysis in the domain of reading research: Using self-paced eye movements as markers for hemodynamic brain responses during visual letter string processing Journal Article In: Cerebral Cortex, vol. 24, no. 10, pp. 2647–2656, 2014. @article{Richlan2014, The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. |
Katrin Riese; Mareike Bayer; Gerhard Lauer; Annekathrin Schacht In the eye of the recipient: Pupillary responses to suspense in literary classics Journal Article In: Scientific Study of Literature, vol. 4, no. 2, pp. 211–232, 2014. @article{Riese2014, Plot suspense is one of the most important components of narrative fiction that motivate recipients to follow fictional characters through their worlds. The present study investigates the dynamic development of narrative suspense in excerpts of literary classics from the 19th century in a multi-methodological approach. For two texts, differing in suspense as judged by a large independent sample, we collected (a) data from questionnaires, indicating different affective and cognitive dimensions of receptive engagement, (b) continuous ratings of suspense during text reception from both experts and lay recipients, and (c) registration of pupil diameter as a physiological indicator of changes in emotional arousal and attention during reception. Data analyses confirmed differences between the two texts at different dimensions of receptive engagement and, importantly, revealed significant correlations of pupil diameter and the course of suspense over time. Our findings demonstrate that changes of the pupil diameter provide a reliable 'online' indicator of suspense. |
Ioannis Rigas; Oleg V. Komogortsev Biometric recognition via probabilistic spatial projection of eye movement trajectories in dynamic visual environments Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 9, no. 10, pp. 1743–1754, 2014. @article{Rigas2014, This paper proposes a method for the extraction of biometric features from the spatial patterns formed by eye movements during an inspection of dynamic visual stimulus. In the suggested framework, each eye movement signal is transformed into a time-constrained decomposition by using a probabilistic representation of spatial and temporal features related to eye fixations and called fixation density map (FDM). The results for a large collection of eye movements recorded from 200 individuals indicate the best equal error rate of 10.8% and Rank-1 identification rate as high as 51%, which is a significant improvement over existing eye movement-driven biometric methods. In addition, our experiments reveal that a person recognition approach based on the FDM performs well even in cases when eye movement data are captured at lower than optimum sampling frequencies. This property is very important for the future ocular biometric systems where existing iris recognition devices could be employed to combine eye movement traits with iris information for increased security and accuracy. Considering that commercial iris recognition devices are able to implement eye image sampling usually at a relatively low rate, the ability to perform eye movement-driven biometrics at such rates is of great significance. |
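A fixation density map of the kind used by Rigas and Komogortsev can be sketched as a smoothed 2D histogram of fixation positions, compared across recordings with a similarity measure. The smoothing and the histogram-intersection similarity below are illustrative assumptions, not the paper's exact probabilistic formulation:

```python
import numpy as np

def fixation_density_map(xs, ys, shape=(48, 64), sigma=1.5):
    """2D histogram of fixation positions, Gaussian-smoothed and normalized
    to sum to 1 -- a simple stand-in for the paper's FDM."""
    h, _, _ = np.histogram2d(ys, xs, bins=shape,
                             range=[[0, shape[0]], [0, shape[1]]])
    # Separable Gaussian smoothing via 1D convolution along each axis
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    h = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, h)
    h = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, h)
    return h / h.sum()

def similarity(fdm_a, fdm_b):
    """Histogram intersection: 1.0 for identical maps, 0.0 for disjoint."""
    return float(np.minimum(fdm_a, fdm_b).sum())

rng = np.random.default_rng(1)
# Two recordings sharing a spatial bias ("same viewer") vs a different one
a1 = fixation_density_map(rng.normal(20, 4, 200), rng.normal(15, 4, 200))
a2 = fixation_density_map(rng.normal(20, 4, 200), rng.normal(15, 4, 200))
b = fixation_density_map(rng.normal(45, 4, 200), rng.normal(35, 4, 200))
same_viewer, other_viewer = similarity(a1, a2), similarity(a1, b)
```

In a verification setting, a decision threshold on such a similarity score is what trades off false accepts against false rejects, yielding the equal error rate the paper reports.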
Lily Riggs; Takako Fujioka; Jessica Chan; Douglas A. McQuiggan; Adam K. Anderson; Jennifer D. Ryan Association with emotional information alters subsequent processing of neutral faces Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 1001, 2014. @article{Riggs2014, The processing of emotional as compared to neutral information is associated with different patterns in eye movement and neural activity. However, the 'emotionality' of a stimulus can be conveyed not only by its physical properties, but also by the information that is presented with it. There is very limited work examining how emotional information may influence the immediate perceptual processing of otherwise neutral information. We examined how presenting an emotion label for a neutral face may influence subsequent processing by using eye movement monitoring (EMM) and magnetoencephalography (MEG) simultaneously. Participants viewed a series of faces with neutral expressions. Each face was followed by a unique negative or neutral sentence to describe that person, and then the same face was presented in isolation again. Viewing of faces paired with a negative sentence was associated with increased early viewing of the eye region and increased neural activity between 600 and 1200 ms in emotion processing regions such as the cingulate, medial prefrontal cortex, and amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of faces paired with a neutral sentence was associated with increased activity in the parahippocampal gyrus during the same time window. By monitoring behavior and neural activity within the same paradigm, these findings demonstrate that emotional information alters subsequent visual scanning and the neural systems that are presumably invoked to maintain a representation of the neutral information along with its emotional details. |
Lillian M. Rigoli; Daniel Holman; Michael J. Spivey; Christopher T. Kello In: Frontiers in Human Neuroscience, vol. 8, pp. 713, 2014. @article{Rigoli2014, When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as timing functions vs. autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior. |
Simon Rigoulot; Marc D. Pell Emotion in the voice influences the way we scan emotional faces Journal Article In: Speech Communication, vol. 65, pp. 36–49, 2014. @article{Rigoulot2014, Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is congruent or incongruent with the prosody. Twenty-one participants viewed individual faces expressing fear, sadness, disgust, or happiness while listening to an emotionally-inflected pseudo-utterance spoken in a congruent or incongruent prosody. Participants judged whether the emotional meaning of the face and voice were the same or different (match/mismatch). Results confirm that there were significant effects of prosody congruency on eye movements when participants scanned a face, although these varied by emotion type; a matching prosody promoted more frequent looks to the upper part of fear and sad facial expressions, whereas visual attention to upper and lower regions of happy (and to some extent disgust) faces was more evenly distributed. These data suggest ways that vocal emotion cues guide how humans process facial expressions in a way that could facilitate recognition of salient visual cues, to arrive at a holistic impression of intended meanings during interpersonal events. |
Evan F. Risko; Srdan Medimorec; Joseph D. Chisholm; Alan Kingstone Rotating with rotated text: A natural behavior approach to investigating cognitive offloading Journal Article In: Cognitive Science, vol. 38, pp. 537–564, 2014. @article{Risko2014, Determining how we use our body to support cognition represents an important part of understanding the embodied and embedded nature of cognition. In the present investigation, we pursue this question in the context of a common perceptual task. Specifically, we report a series of experiments investigating head tilt (i.e., external normalization) as a strategy in letter naming and reading stimuli that are upright or rotated. We demonstrate that the frequency of this natural behavior is modulated by the cost of stimulus rotation on performance. In addition, we demonstrate that external normalization can benefit performance. All of the results are consistent with the notion that external normalization represents a form of cognitive offloading and that effort is an important factor in the decision to adopt an internal or external strategy. |
Sarah Risse Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–13, 2014. @article{Risse2014, The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. |
Sarah Risse; Reinhold Kliegl Dissociating preview validity and preview difficulty in parafoveal processing of word n + 1 during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 653–668, 2014. @article{Risse2014a, Many studies have shown that previewing the next word n + 1 during reading leads to substantial processing benefit (e.g., shorter word viewing times) when this word is eventually fixated. However, evidence of such preprocessing in fixations on the preceding word n when in fact the information about the preview is acquired is far less consistent. A recent study suggested that such effects may be delayed into fixations on the next word n + 1 (Risse & Kliegl, 2012). To investigate the time course of parafoveal information-acquisition on the control of eye movements during reading, we conducted 2 gaze-contingent display-change experiments and orthogonally manipulated the processing difficulty (i.e., word frequency) of an n + 1 preview word and its validity relative to the target word. Preview difficulty did not affect fixation durations on the pretarget word n but on the target word n + 1. In fact, the delayed preview-difficulty effect was almost of the same size as the preview benefit associated with the n + 1 preview validity. Based on additional results from quantile-regression analyses on the time course of the 2 preview effects, we discuss consequences as to the integration of foveal and parafoveal information and potential implications for computational models of eye guidance in reading. |
Dana Schneider; Zoie E. Nott; Paul E. Dux Task instructions and implicit theory of mind Journal Article In: Cognition, vol. 133, no. 1, pp. 43–47, 2014. @article{Schneider2014, It has been hypothesized that humans are able to track others' mental states efficiently and without being conscious of doing so using their implicit theory of mind (iToM) system. However, while iToM appears to operate unconsciously, recent work suggests it does draw on executive attentional resources (Schneider, Lam, Bayliss, & Dux, 2012), bringing into question whether iToM is engaged efficiently. Here, we examined other aspects relating to automatic processing: the extent to which the operation of iToM is controllable and how it is influenced by behavioral intentions. This was implemented by assessing how task instructions affect eye-movement patterns in a Sally-Anne false-belief task. One group of subjects was given no task instructions (No Instructions), another overtly judged the location of a ball a protagonist interacted with (Ball Tracking) and a third indicated the location consistent with the actor's belief about the ball's location (Belief Tracking). Despite different task goals, all groups' eye-movement patterns were consistent with belief analysis, and the No Instructions and Ball Tracking groups reported no explicit mentalizing when debriefed. These findings represent definitive evidence that humans implicitly track the belief states of others in an uncontrollable and unintentional manner. |
Dana Schneider; Virginia P. Slaughter; Stefanie I. Becker; Paul E. Dux Implicit false-belief processing in the human brain Journal Article In: NeuroImage, vol. 101, pp. 268–275, 2014. @article{Schneider2014a, Eye-movement patterns in 'Sally-Anne' tasks reflect humans' ability to implicitly process the mental states of others, particularly false-beliefs - a key theory of mind (ToM) operation. It has recently been proposed that an efficient ToM system, which operates in the absence of awareness (implicit ToM, iToM), subserves the analysis of belief-like states. This contrasts to consciously available belief processing, performed by the explicit ToM system (eToM). The frontal, temporal and parietal cortices are engaged when humans explicitly 'mentalize' about others' beliefs. However, the neural underpinnings of implicit false-belief processing and the extent to which they draw on networks involved in explicit general-belief processing are unknown. Here, participants watched 'Sally-Anne' movies while fMRI and eye-tracking measures were acquired simultaneously. Participants displayed eye-movements consistent with implicit false-belief processing. After independently localizing the brain areas involved in explicit general-belief processing, only the left anterior superior temporal sulcus and precuneus revealed greater blood-oxygen-level-dependent activity for false- relative to true-belief trials in our iToM paradigm. No such difference was found for the right temporal-parietal junction despite significant activity in this area. These findings fractionate brain regions that are associated with explicit general ToM reasoning and false-belief processing in the absence of awareness. |
Christina Schonberg; Catherine M. Sandhofer; Tawny Tsang; Scott P. Johnson Does bilingual experience affect early visual perceptual development? Journal Article In: Frontiers in Psychology, vol. 5, pp. 1429, 2014. @article{Schonberg2014a, Visual attention and perception develop rapidly during the first few months after birth, and these behaviors are critical components in the development of language and cognitive abilities. Here we ask how early bilingual experiences might lead to differences in visual attention and perception. Experiments 1-3 investigated the looking behavior of monolingual and bilingual infants when presented with social (Experiment 1), mixed (Experiment 2), or non-social (Experiment 3) stimuli. In each of these experiments, infants' dwell times (DT) and number of fixations to areas of interest (AOIs) were analyzed, giving a sense of where the infants looked. To examine how the infants looked at the stimuli in a more global sense, Experiment 4 combined and analyzed the saccade data collected in Experiments 1-3. There were no significant differences between monolingual and bilingual infants' DTs, AOI fixations, or saccade characteristics (specifically, frequency, and amplitude) in any of the experiments. These results suggest that monolingual and bilingual infants process their visual environments similarly, supporting the idea that the substantial cognitive differences between monolinguals and bilinguals in early childhood are more related to active vocabulary production than perception of the environment. |
Tom Schonberg; Akram Bakkour; Ashleigh M. Hover; Jeanette A. Mumford; Lakshya Nagar; Jacob Perez; Russell A. Poldrack Changing value through cued approach: An automatic mechanism of behavior change Journal Article In: Nature Neuroscience, vol. 17, no. 4, pp. 625–630, 2014. @article{Schonberg2014, It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex. |
Elizabeth R. Schotter; Klinton Bicknell; Ian Howard; Roger P. Levy; Keith Rayner Task effects reveal cognitive flexibility responding to frequency and predictability: Evidence from eye movements in reading and proofreading Journal Article In: Cognition, vol. 131, no. 1, pp. 1–27, 2014. @article{Schotter2014a, It is well-known that word frequency and predictability affect processing time. These effects change magnitude across tasks, but studies testing this use tasks with different response types (e.g., lexical decision, naming, and fixation time during reading; Schilling, Rayner, & Chumbley, 1998), preventing direct comparison. Recently, Kaakinen and Hyönä (2010) overcame this problem, comparing fixation times in reading for comprehension and proofreading, showing that the frequency effect was larger in proofreading than in reading. This result could be explained by readers exhibiting substantial cognitive flexibility, and qualitatively changing how they process words in the proofreading task in a way that magnifies effects of word frequency. Alternatively, readers may not change word processing so dramatically, and instead may perform more careful identification generally, increasing the magnitude of many word processing effects (e.g., both frequency and predictability). We tested these possibilities with two experiments: subjects read for comprehension and then proofread for spelling errors (letter transpositions) that produce nonwords (e.g., trcak for track as in Kaakinen & Hyönä) or that produce real but unintended words (e.g., trial for trail) to compare how the task changes these effects. Replicating Kaakinen and Hyönä, frequency effects increased during proofreading. However, predictability effects only increased when integration with the sentence context was necessary to detect errors (i.e., when spelling errors produced words that were inappropriate in the sentence; trial for trail). 
The results suggest that readers adopt sophisticated word processing strategies to accommodate task demands. |
Elizabeth R. Schotter; Annie Jia; Victor S. Ferreira; Keith Rayner Preview benefit in speaking occurs regardless of preview timing Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 755–762, 2014. @article{Schotter2014b, Speakers access information from objects they will name but have not looked at yet, indexed by preview benefit: faster processing of the target when a preview object previously occupying its location was related rather than unrelated to the target. This suggests that speakers distribute attention over multiple objects, but it does not reveal the time course of the processing of a current and a to-be-named object. Is the preview benefit a consequence of attention shifting to the next-to-be-named object shortly before the eyes move to that location, or does the benefit reflect a more unconstrained deployment of attention to upcoming objects? Using the multiple-object naming paradigm with a gaze-contingent display change manipulation, we addressed this issue by manipulating the latency of the onset of the preview (SOA) and whether the preview represented the same concept as (but a different visual token of) the target or an unrelated concept. The results revealed that the preview benefit was robust, regardless of the latency of the preview onset or the latency of the saccade to the target (the lag between preview offset and fixation on the target). Together, these data suggest that preview benefit is not restricted to the time during an attention shift preceding an eye movement, and that speakers are able to take advantage of information from nonfoveal objects whenever such objects are visually available. |
Elizabeth R. Schotter; Randy Tran; Keith Rayner Don't believe what you read (Only Once): Comprehension is supported by regressions during reading Journal Article In: Psychological Science, vol. 25, no. 6, pp. 1218–1226, 2014. @article{Schotter2014, Recent Web apps have spurred excitement around the prospect of achieving speed reading by eliminating eye movements (i.e., with rapid serial visual presentation, or RSVP, in which words are presented briefly one at a time and sequentially). Our experiment using a novel trailing-mask paradigm contradicts these claims. Subjects read normally or while the display of text was manipulated such that each word was masked once the reader's eyes moved past it. This manipulation created a scenario similar to RSVP: The reader could read each word only once; regressions (i.e., rereadings of words), which are a natural part of the reading process, were functionally eliminated. Crucially, the inability to regress affected comprehension negatively. Furthermore, this effect was not confined to ambiguous sentences. These data suggest that regressions contribute to the ability to understand what one has read and call into question the viability of speed-reading apps that eliminate eye movements (e.g., those that use RSVP). |
Daniel Schreij; Sander A. Los; Jan Theeuwes; James T. Enns; Christian N. L. Olivers The interaction between stimulus-driven and goal-driven orienting as revealed by eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 378–390, 2014. @article{Schreij2014, It is generally agreed that attention can be captured in a stimulus-driven or in a goal-driven fashion. In studies that investigated both types of capture, the effects on mean manual response time (reaction time [RT]) are generally additive, suggesting two independent underlying processes. However, potential interactions between the two types of capture may fail to be expressed in manual RT, as it likely reflects multiple processing steps. Here we measured saccadic eye movements along with manual responses. Participants searched a target display for a red letter. To assess contingent capture, this display was preceded by an irrelevant red cue. To assess stimulus-driven capture, the target display could be accompanied by the simultaneous onset of an irrelevant new object. At the level of eye movements, the results showed strong interactions between cue validity and onset presence on the spatiotemporal trajectories of the saccades. However, at the level of manual responses, these effects cancelled out, leading to additive effects on mean RT. We conclude that both types of capture influence a shared spatial orienting mechanism and we provide a descriptive computational model of their dynamics. |
Mark W. Schurgin; Jonathan I. Flombaum How undistorted spatial memories can produce distorted responses Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 5, pp. 1371–1380, 2014. @article{Schurgin2014, Reproducing the location of an object from the contents of spatial working memory requires the translation of a noisy representation into an action at a single location: for instance, a mouse click or a mark with a writing utensil. In many studies, these kinds of actions result in biased responses that suggest distortions in spatial working memory. We sought to investigate the possibility of one mechanism by which distortions could arise, involving an interaction between undistorted memories and nonuniformities in attention. Specifically, the resolution of attention is finer below than above fixation, which led us to predict that bias could arise if participants tend to respond in locations below as opposed to above fixation. In Experiment 1 we found such a bias to respond below the true position of an object. Experiment 2 demonstrated with eye-tracking that fixations during response were unbiased and centered on the remembered object's true position. Experiment 3 further evidenced a dependency on attention relative to fixation, by shifting the effect horizontally when participants were required to tilt their heads. Together, these results highlight the complex pathway involved in translating probabilistic memories into discrete actions, and they present a new attentional mechanism by which undistorted spatial memories can lead to distorted reproduction responses. |
Mark W. Schurgin; J. Nelson; S. Iida; Hideki Ohira; J. Y. Chiao; Steven L. Franconeri Eye movements during emotion recognition in faces Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–16, 2014. @article{Schurgin2014a, When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face. |
Alexander C. Schütz Interindividual differences in preferred directions of perceptual and motor decisions Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–17, 2014. @article{Schuetz2014, Both the perceptual system and the motor system can be faced with ambiguous information and then have to choose between different alternatives. Often these alternatives involve decisions about directions, and anisotropies have been reported for different tasks. Here we measured interindividual differences and temporal stability of directional preferences in eye movement, motion perception, and thumb movement tasks. In all tasks, stimuli were created such that observers had to decide between two opposite directions in each trial and preferences were measured at 12 axes around the circle. There were clear directional preferences in all utilized tasks. The strongest effects were present in tasks that involved motion, like the smooth pursuit eye movement, apparent motion, and structure-from-motion tasks. The weakest effects were present in the saccadic eye movement task. Observers with strong directional preferences in the eye movement tasks showed shorter latency costs for target-conflict trials compared to single-target trials, suggesting that directional preferences might be advantageous for solving the target conflict. Although there were consistent preferences across observers in most of the tasks, there was also considerable variability in preferred directions between observers. The magnitude of preferences and the preferred directions were correlated only between few tasks. While the magnitude of preferences varied substantially over time, the direction of these preferences was stable over several weeks. These results indicate that individually stable directional preferences exist in a range of perceptual and motor tasks. |