All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2013 |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen Attention bias for food is independent of restraint in healthy weight individuals-An eye tracking study Journal Article In: Eating Behaviors, vol. 14, no. 3, pp. 397–400, 2013. @article{Werthmann2013a, Objective: Restrained eating style and weight status are highly correlated. Though both have been associated with an attentional bias for food cues, in prior research restraint and BMI were often confounded. The aim of the present study was to determine the existence and nature of an attention bias for food cues in healthy-weight female restrained and unrestrained eaters, when matching the two groups on BMI. Method: Attention biases for food cues were measured by recordings of eye movements during a visual probe task with pictorial food versus non-food stimuli. Healthy weight high restrained (n = 24) and low restrained eaters (n = 21) were matched on BMI in an attempt to unconfound the effects of restraint and weight on attention allocation patterns. Results: All participants showed elevated attention biases for food stimuli in comparison to neutral stimuli, independent of restraint status. Discussion: These findings suggest that attention biases for food-related cues are common for healthy weight women and show that restrained eating (per se) is not related to biased processing of food stimuli, at least not in healthy weight participants. |
Gregory L. West; Naseem Al-Aidroos; Jay Pratt Action video game experience affects oculomotor performance Journal Article In: Acta Psychologica, vol. 142, no. 1, pp. 38–42, 2013. @article{West2013, Action video games have been shown to affect a variety of visual and cognitive processes. There is, however, little evidence of whether playing video games can also affect motor action. To investigate the potential link between experience playing action video games and changes in oculomotor action, we tested habitual action video game players (VGPs) and non-video game players (NVGPs) in a saccadic trajectory deviation task. We demonstrate that spatial curvature of a saccadic trajectory towards or away from a distractor is profoundly different between VGPs and NVGPs. In addition, task performance accuracy improved over time only in VGPs. Results are discussed in the context of the competing interplay between stimulus-driven motor programming and top-down inhibition during oculomotor execution. |
David A. Westwood; Stephanie A. H. Jones; Christopher D. Cowper-Smith; Raymond M. Klein Changes in trunk orientation do not induce asymmetries in covert orienting Journal Article In: Attention, Perception, & Psychophysics, vol. 75, pp. 1193–1205, 2013. @article{Westwood2013, We explored the effect of trunk orientation on responses to visual targets in five experiments, following work suggesting a disengage deficit in covert orienting related to changes in the trunk orientation of healthy participants. In two experiments, participants responded to the color of a target appearing in the left or right visual field following a peripheral visual cue that was informative about target location. In three additional experiments, participants responded to the location (left/right) of a target using a spatially compatible motor response. In none of the experiments did trunk orientation interact with spatial-cuing effects, suggesting that orienting behavior is not affected by the rotation of the body relative to the head. Theoretical implications are discussed. |
Alex L. White; Martin Rolfs; Marisa Carrasco Adaptive deployment of spatial and feature-based attention before saccades Journal Article In: Vision Research, vol. 85, pp. 26–35, 2013. @article{White2013, What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target's location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target's location. Furthermore, sensitivity was higher for probes moving in directions similar to the target's direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that, contingent on recent experience, it flexibly distributes resources to potentially relevant locations and features. |
Katherine S. White; Eiling Yee; Sheila E. Blumstein; James L. Morgan Adults show less sensitivity to phonetic detail in unfamiliar words, too Journal Article In: Journal of Memory and Language, vol. 68, no. 4, pp. 362–378, 2013. @article{White2013a, Young word learners fail to discriminate phonetic contrasts in certain situations, an observation that has been used to support arguments that the nature of lexical representation and lexical processing changes over development. An alternative possibility, however, is that these failures arise naturally as a result of how word familiarity affects lexical processing. In the present work, we explored the effects of word familiarity on adults' use of phonetic detail. Participants' eye movements were monitored as they heard single-segment onset mispronunciations of words drawn from a newly learned artificial lexicon. In Experiment 1, single-feature onset mispronunciations were presented; in Experiment 2, participants heard two-feature onset mispronunciations. Word familiarity was manipulated in both experiments by presenting words with various frequencies during training. Both word familiarity and degree of mismatch affected adults' use of phonetic detail: in their looking behavior, participants did not reliably differentiate single-feature mispronunciations and correct pronunciations of low frequency words. For higher frequency words, participants differentiated both 1- and 2-feature mispronunciations from correct pronunciations. However, responses were graded such that 2-feature mispronunciations had a greater effect on looking behavior. These experiments demonstrate that the use of phonetic detail in adults, as in young children, is affected by word familiarity. Parallels between the two populations suggest continuity in the architecture underlying lexical representation and processing throughout development. |
Veronica Whitford; Gillian A. O'Driscoll; Christopher C. Pack; Ridha Joober; Ashok Malla; Debra Titone In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 57–75, 2013. @article{Whitford2013, Language and oculomotor disturbances are 2 of the best replicated findings in schizophrenia. However, few studies have examined skilled reading in schizophrenia (e.g., Arnott, Sali, Copland, 2011; Hayes & O'Grady, 2003; Revheim et al., 2006; E. O. Roberts et al., 2012), and none have examined the contribution of cognitive and motor processes that underlie reading performance. Thus, to evaluate the relationship of linguistic processes and oculomotor control to skilled reading in schizophrenia, 20 individuals with schizophrenia and 16 demographically matched controls were tested using a moving window paradigm (McConkie & Rayner, 1975). Linguistic skills supporting reading (phonological awareness) were assessed with the Comprehensive Test of Phonological Processing (R. K. Wagner, Torgesen, & Rashotte, 1999). Eye movements were assessed during reading tasks and during nonlinguistic tasks tapping basic oculomotor control (prosaccades, smooth pursuit) and executive functions (predictive saccades, antisaccades). Compared with controls, schizophrenia patients exhibited robust oculomotor markers of reading difficulty (e.g., reduced forward saccade amplitude) and were less affected by reductions in window size, indicative of reduced perceptual span. Reduced perceptual span in schizophrenia was associated with deficits in phonological processing and reduced saccade amplitudes. Executive functioning (antisaccade errors) was not related to perceptual span but was related to reading comprehension. These findings suggest that deficits in language, oculomotor control, and cognitive control contribute to skilled reading deficits in schizophrenia. 
Given that both language and oculomotor dysfunction precede illness onset, reading may provide a sensitive window onto cognitive dysfunction in schizophrenia vulnerability and be an important target for cognitive remediation. |
Hiroyuki Sogo GazeParser: An open-source and multiplatform library for low-cost eye tracking and analysis Journal Article In: Behavior Research Methods, vol. 45, no. 3, pp. 684–695, 2013. @article{Sogo2013, Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported on performance tests of GazeParser. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results showed that the GazeParser demonstrates adequate performance for use in psychological experiments. |
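The timing benchmark reported above (sampling-interval errors under 1 ms) can be sketched in outline. This is an illustrative snippet, not GazeParser's own API: given recorded sample timestamps and the camera's nominal frame interval, it summarizes how far the actual intervals deviate from nominal.

```python
# Illustrative sketch (not GazeParser's actual API): summarizing sampling-interval
# error from recorded sample timestamps, as in the performance tests described above.
from statistics import mean, stdev

def interval_error_stats(timestamps_ms, nominal_interval_ms):
    """Return (mean, SD) of deviations from the nominal sampling interval, in ms."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    errors = [iv - nominal_interval_ms for iv in intervals]
    return mean(errors), stdev(errors)

# e.g. a 60 Hz camera should deliver a sample every ~16.67 ms
m, sd = interval_error_stats([0.0, 16.7, 33.3, 50.1, 66.7], 1000 / 60)
```

A mean and SD both well below 1 ms, as in the paper's tests, indicates the tracker's sample clock is stable enough for saccade-latency work.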
Maria Solé Puig; Laura Pérez Zapata; J. Antonio Aznar-Casanova; Hans Supèr A role of eye vergence in covert attention Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52955, 2013. @article{SolePuig2013, Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention. This illustrates the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition. This shows a top-down attention component. In conclusion, observations reveal a close link between covert attention and modulation in eye vergence during eye fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders. |
Chen Song; D. Samuel Schwarzkopf; Antoine Lutti; Baojuan Li; Ryota Kanai; Geraint Rees Effective connectivity within human primary visual cortex predicts interindividual diversity in illusory perception Journal Article In: Journal of Neuroscience, vol. 33, no. 48, pp. 18781–18791, 2013. @article{Song2013c, Visual perception depends strongly on spatial context. A classic example is the tilt illusion where the perceived orientation of a central stimulus differs from its physical orientation when surrounded by tilted spatial contexts. Here we show that such contextual modulation of orientation perception exhibits trait-like interindividual diversity that correlates with interindividual differences in effective connectivity within human primary visual cortex. We found that the degree to which spatial contexts induced illusory orientation perception, namely, the magnitude of the tilt illusion, varied across healthy human adults in a trait-like fashion independent of stimulus size or contrast. Parallel to contextual modulation of orientation perception, the presence of spatial contexts affected effective connectivity within human primary visual cortex between peripheral and foveal representations that responded to spatial context and central stimulus, respectively. Importantly, this effective connectivity from peripheral to foveal primary visual cortex correlated with interindividual differences in the magnitude of the tilt illusion. Moreover, this correlation with illusion perception was observed for effective connectivity under tilted contextual stimulation but not for that under iso-oriented contextual stimulation, suggesting that it reflected the impact of orientation-dependent intra-areal connections. Our findings revealed an interindividual correlation between intra-areal connectivity within primary visual cortex and contextual influence on orientation perception. 
This neurophysiological-perceptual link provides empirical evidence for theoretical proposals that intra-areal connections in early visual cortices are involved in contextual modulation of visual perception. |
Guanghan Song; Denis Pellerin; Lionel Granjon Different types of sounds influence gaze differently in videos Journal Article In: Journal of Eye Movement Research, vol. 6, no. 4, pp. 1–13, 2013. @article{Song2013, This paper presents an analysis of the effect of different types of sounds on visual gaze when a person is looking freely at videos, which would be helpful to predict eye position. In order to test the effect of sound, an audio-visual experiment was designed with two groups of participants, with audio-visual (AV) and visual (V) conditions. By using statistical tools, we analyzed the difference between eye position of participants with AV and V conditions. We observed that the effect of sound is different depending on the kind of sound, and that the classes with human voice (i.e. speech, singer, human noise and singers) have the greatest effect. Furthermore, the results of the distance between sound source and eye position of the group with AV condition, suggested that only particular types of sound attract human eye position to the sound source. Finally, an analysis of the fixation duration between AV and V conditions showed that participants with AV condition move eyes more frequently than those with V condition. |
Joo-Hyun Song; Patrick Bédard Allocation of attention for dissociated visual and motor goals Journal Article In: Experimental Brain Research, vol. 226, no. 2, pp. 209–219, 2013. @article{Song2013a, In daily life, selecting an object visually is closely intertwined with processing that object as a potential goal for action. Since visual and motor goals are typically identical, it remains unknown whether attention is primarily allocated to a visual target, a motor goal, or both. Here, we dissociated visual and motor goals using a visuomotor adaptation paradigm, in which participants reached toward a visual target using a computer mouse or a stylus pen, while the direction of the cursor was rotated 45° counter-clockwise from the direction of the hand movement. Thus, as visuomotor adaptation was accomplished, the visual target was dissociated from the movement goal. Then, we measured the locus of attention using an attention-demanding rapid serial visual presentation (RSVP) task, in which participants detected a pre-defined visual stimulus among the successive visual stimuli presented on either the visual target, the motor goal, or a neutral control location. We demonstrated that before visuomotor adaptation, participants performed better when the RSVP stream was presented at the visual target than at other locations. However, once visual and motor goals were dissociated following visuomotor adaptation, performance at the visual and motor goals was equated and better than performance at the control location. Therefore, we concluded that attentional resources are allocated both to visual target and motor goals during goal-directed reaching movements. |
Mingli Song; Dapeng Tao; Chun Chen; Jiajun Bu; Yezhou Yang Color-to-gray based on chance of happening preservation Journal Article In: Neurocomputing, vol. 119, pp. 222–231, 2013. @article{Song2013b, It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing the publication cost and helping color-blind people capture the visual content and semantics from color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new algorithm for color-to-gray to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. In order to preserve the CoH of the color image in the converted grayscale image, we finally cast the color-to-gray to a supervised dimension reduction problem and present locally sliced inverse regression that can be efficiently solved by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) the effectiveness and the efficiency of the proposed approach by comparing with representative baseline algorithms. In addition, it requires no human-computer interactions. |
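The problem this paper addresses is easiest to see against the fixed-weight luminance mapping that content-adaptive color-to-gray methods aim to improve on. A minimal sketch (standard ITU-R BT.601 luma weights, not the paper's CoH method): because the weights ignore image content, chromatic contrast that draws attention can vanish in the gray image.

```python
# Fixed-weight luminance conversion (ITU-R BT.601). Content-adaptive methods
# like the CoH approach above exist because this mapping can collapse
# attention-drawing color contrast: isoluminant colors map to similar grays.
def luminance_gray(pixel_rgb):
    """Map an (R, G, B) pixel to a single gray value via fixed luma weights."""
    r, g, b = pixel_rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# A saturated red and a darker green of near-equal luminance become
# nearly indistinguishable grays, losing the red/green contrast:
g1 = luminance_gray((200, 0, 0))
g2 = luminance_gray((0, 102, 0))
```

Here `g1` and `g2` differ by less than one gray level out of 255, even though the source pixels differ maximally in hue.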
Irene Sperandio; Shaleeza Kaderali; Philippe A. Chouinard; Jared Frey; Melvyn A. Goodale Perceived size change induced by nonvisual signals in darkness: The relative contribution of vergence and proprioception Journal Article In: Journal of Neuroscience, vol. 33, no. 43, pp. 16915–16923, 2013. @article{Sperandio2013, Most of the time, the human visual system computes perceived size by scaling the size of an object on the retina with its perceived distance. There are instances, however, in which size-distance scaling is not based on visual inputs but on extraretinal cues. In the Taylor illusion, the perceived afterimage that is projected on an observer's hand will change in size depending on how far the limb is positioned from the eyes, even in complete darkness. In the dark, distance cues might derive from hand position signals either by an efference copy of the motor command to the moving hand or by proprioceptive input. Alternatively, there have been reports that vergence signals from the eyes might also be important. We performed a series of behavioral and eye-tracking experiments to tease apart how these different sources of distance information contribute to the Taylor illusion. We demonstrate that, with no visual information, perceived size changes mainly as a function of the vergence angle of the eyes, underscoring its importance in size-distance scaling. Interestingly, the strength of this relationship decreased when a mismatch between vergence and proprioception was introduced, indicating that proprioceptive feedback from the arm also affected size perception. By using afterimages, we provide strong evidence that the human visual system can benefit from sensory signals that originate from the hand when visual information about distance is unavailable. |
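The vergence cue exploited above has a simple geometry: when both eyes fixate a point, the angle between the two lines of sight shrinks with fixation distance. A minimal sketch (the 6.3 cm interpupillary distance is an assumed typical value, not from the study):

```python
# Geometric relation between fixation distance and eye-vergence angle
# (illustrative; 6.3 cm interpupillary distance is an assumed typical value).
import math

def vergence_angle_deg(distance_cm, ipd_cm=6.3):
    """Vergence angle (degrees) when both eyes fixate a point at distance_cm."""
    return math.degrees(2 * math.atan(ipd_cm / (2 * distance_cm)))

# Nearer fixation requires greater convergence, which is the distance
# signal available even in complete darkness:
near = vergence_angle_deg(25)   # hand held close to the eyes
far = vergence_angle_deg(100)   # arm fully extended
```

Moving the fixated hand from 100 cm to 25 cm roughly quadruples the vergence angle, giving the visual system a usable nonvisual distance signal for size scaling.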
Miriam Spering; Elisa C. Dias; Jamie L. Sanchez; Alexander C. Schutz; Daniel C. Javitt Efference copy failure during smooth pursuit eye movements in schizophrenia Journal Article In: Journal of Neuroscience, vol. 33, no. 29, pp. 11779–11787, 2013. @article{Spering2013, Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target ("ball") would hit/miss a stationary vertical line segment ("goal"). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder. |
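Pursuit (velocity) gain, the measure correlated with perceptual judgments above, is conventionally the ratio of smoothed eye velocity to target velocity. A minimal sketch, assuming saccades have already been removed from the eye-velocity trace:

```python
# Pursuit velocity gain (illustrative): desaccaded eye velocity divided by
# target velocity. A gain of 1.0 means the eye matches the target exactly;
# schizophrenia patients typically show reduced gain with visible targets.
def pursuit_gain(eye_velocities_deg_s, target_velocity_deg_s):
    """Mean eye velocity over an analysis window divided by target velocity."""
    mean_v = sum(eye_velocities_deg_s) / len(eye_velocities_deg_s)
    return mean_v / target_velocity_deg_s

# Eye-velocity samples (deg/s) tracking a 15 deg/s target:
gain = pursuit_gain([13.8, 14.2, 14.5, 14.0], 15.0)
```

Gains slightly below 1.0 are normal; the study's key comparison is between this gain and the accuracy of trajectory predictions made during pursuit versus fixation.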
Patti Spinner; Susan M. Gass; Jennifer Behney Ecological validity in eye-tracking: An empirical study Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 389–415, 2013. @article{Spinner2013, Eye-trackers are becoming increasingly widespread as a tool to investigate second language (L2) acquisition. Unfortunately, clear standards for methodology—including font size, font type, and placement of interest areas—are not yet available. Although many researchers stress the need for ecological validity—that is, the simulation of natural reading conditions—it may not be prudent to use such a design to investigate new directions in eye-tracking research, and particularly in research involving small lexical items such as articles. In this study, we examine whether two different screen layouts can lead to different results in an eye-tracking study on the L2 acquisition of Italian gender. The results of an experiment with an ecologically valid design are strikingly different from the results of an experiment with a design tailored to track eye movements to articles. We conclude that differences in screen layout can have significant effects on results and that it is crucial that researchers report screen layout information. |
Andreas Sprenger; Monique Friedrich; Matthias Nagel; Christiane S. Schmidt; Steffen Moritz; Rebekka Lencer Advanced analysis of free visual exploration patterns in schizophrenia Journal Article In: Frontiers in Psychology, vol. 4, pp. 737, 2013. @article{Sprenger2013, Background: Visual scanpath analyses provide important information about attention allocation and attention shifting during visual exploration of social situations. This study investigated whether patients with schizophrenia simply show restricted free visual exploration behavior reflected by reduced saccade frequency and increased fixation duration or whether patients use qualitatively different exploration strategies than healthy controls. Methods: Scanpaths of 32 patients with schizophrenia and 33 age-matched healthy controls were assessed while participants freely explored six photos of daily life situations (20 s/photo) evaluated for cognitive complexity and emotional strain. Using fixation and saccade parameters, we compared temporal changes in exploration behavior, cluster analyses, attentional landscapes, and analyses of scanpath similarities between both groups. Results: We found fewer fixation clusters, longer fixation durations within a cluster, fewer changes between clusters, and a greater increase of fixation duration over time in patients compared to controls. Scanpath patterns and attentional landscapes in patients also differed significantly from those of controls. Generally, cognitive complexity and emotional strain had significant effects on visual exploration behavior. This effect was similar in both groups as were physical properties of fixation locations. Conclusions: Longer attention allocation to a given feature in a scene and less attention shifts in patients suggest a more focal processing mode compared to a more ambient exploration strategy in controls. 
These visual exploration alterations were present in patients independently of cognitive complexity, emotional strain or physical properties of visual cues implying that they represent a rather general deficit. Despite this impairment, patients were able to adapt their scanning behavior to changes in cognitive complexity and emotional strain similar to controls. |
Andreas Sprenger; Peter Trillenberg; Matthias Nagel; John A. Sweeney; Rebekka Lencer Enhanced top-down control during pursuit eye tracking in schizophrenia Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 263, no. 3, pp. 223–231, 2013. @article{Sprenger2013a, Alterations in sensorimotor processing and predictive mechanisms have both been proposed as the primary cause of eye tracking deficits in schizophrenia. 20 schizophrenia patients and 20 healthy controls were assessed on blocks of predictably moving visual targets at constant speeds of 10, 15, or 30 degrees/s. To assess internal drive to the eye movement system based on predictions about the ongoing target movement, targets were blanked off for either 666 or 1,000 ms during the ongoing pursuit movement in additional conditions. Main parameters of interest were eye deceleration after extinction of the visual target and residual eye velocity during blanking intervals. Eye deceleration after target extinction, reflecting persistence of predictive signals, was slower in patients than in controls, implying greater rather than diminished utilization of predictive mechanisms for pursuit in schizophrenia. Further, residual gain was not impaired in patients indicating a basic integrity of internal predictive models. Pursuit velocity gain in patients was reduced in all conditions with visible targets replicating previous findings about a sensorimotor transformation deficit in schizophrenia. A pattern of slower eye deceleration and unimpaired residual gain during blanking intervals implies greater adherence to top-down predictive models for pursuit tracking in schizophrenia. This suggests that predictive modeling is relatively intact in schizophrenia and that the primary cause of abnormal visual pursuit is impaired sensorimotor transformation of the retinal error signal needed for the maintenance of accurate visually driven pursuit. 
This implies that disruption in extrastriate and sensorimotor systems rather than frontostriatal predictive mechanisms may underlie this widely reported endophenotype for schizophrenia. |
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler Behavioral biases when viewing multiplexed scenes: Scene structure and frames of reference for inspection Journal Article In: Frontiers in Psychology, vol. 4, pp. 624, 2013. @article{Stainer2013, Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate "sub-scenes." 
Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. |
Adrian Staub; Ashley Benatar Individual differences in fixation duration distributions in reading Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1304–1311, 2013. @article{Staub2013, The present study investigated the relationship between the location and skew of an individual reader's fixation duration distribution. The ex-Gaussian distribution was fit to eye fixation data from 153 subjects in five experiments, four previously presented and one new. The τ parameter was entirely uncorrelated with the μ and σ parameters; by contrast, there was a modest positive correlation between these parameters for lexical decision and speeded pronunciation response times. The conclusion that, for fixation durations, the degree of skew is uncorrelated with the location of the distribution's central tendency was also confirmed nonparametrically, by examining vincentile plots for subgroups of subjects. Finally, the stability of distributional parameters for a given subject was demonstrated to be relatively high. Taken together with previous findings of selective influence on the μ parameter of the fixation duration distribution, the present results suggest that in reading, the location and the skew of the fixation duration distribution may reflect functionally distinct processes. The authors speculate that the skew parameter may specifically reflect the frequency of processing disruption. |
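The ex-Gaussian decomposition used above separates a fixation-duration distribution into a Gaussian component (location μ, spread σ) and an exponential tail (skew τ). A minimal moment-based sketch on simulated data; this is not the fitting procedure used in the study, which would typically rely on maximum-likelihood or quantile-based estimation:

```python
# Moment-based ex-Gaussian parameter estimates (illustrative only). For an
# ex-Gaussian: mean = mu + tau, variance = sigma^2 + tau^2, and the third
# central moment = 2 * tau^3, which lets us recover the parameters in turn.
import random

def exgauss_moment_fit(durations):
    """Estimate (mu, sigma, tau) from sample moments of the durations."""
    n = len(durations)
    m = sum(durations) / n
    var = sum((x - m) ** 2 for x in durations) / n
    m3 = sum((x - m) ** 3 for x in durations) / n
    tau = max(m3 / 2, 0.0) ** (1 / 3)        # all skew is carried by tau
    sigma = max(var - tau ** 2, 0.0) ** 0.5  # residual (Gaussian) spread
    mu = m - tau                             # location of the Gaussian part
    return mu, sigma, tau

# Simulated fixation durations (ms): Gaussian(220, 40) plus Exponential(tau=80)
rng = random.Random(1)
data = [rng.gauss(220, 40) + rng.expovariate(1 / 80) for _ in range(20000)]
mu, sigma, tau = exgauss_moment_fit(data)
```

The study's key observation is that, across readers, τ varies independently of μ and σ, so the estimates must be computed per subject rather than on pooled data.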
Michael Stengel; Martin Eisemann; Stephan Wenger; Benjamin Hell; Marcus Magnor Optimizing apparent display resolution enhancement for arbitrary videos Journal Article In: IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3604–3613, 2013. @article{Stengel2013, Display resolution is frequently exceeded by available image resolution. Recently, apparent display resolution enhancement (ADRE) techniques show how characteristics of the human visual system can be exploited to provide super-resolution on high refresh rate displays. In this paper, we address the problem of generalizing the ADRE technique to conventional videos of arbitrary content. We propose an optimization-based approach to continuously translate the video frames in such a way that the added motion enables apparent resolution enhancement for the salient image region. The optimization considers the optimal velocity, smoothness, and similarity to compute an appropriate trajectory. In addition, we provide an intuitive user interface that allows the user to guide the algorithm interactively and preserve important compositions within the video. We present a user study evaluating apparent rendering quality and show versatility of our method on a variety of general test scenes. |
Denise Nadine Stephan; Iring Koch; Jessica Hendler; Lynn Huestegge Task switching, modality compatibility, and the supra-modal function of eye movements Journal Article In: Experimental Psychology, vol. 60, no. 2, pp. 90–99, 2013. @article{Stephan2013, Previous research suggested that specific pairings of stimulus and response modalities (visual-manual and auditory-vocal tasks) lead to better dual-task performance than other pairings (visual-vocal and auditory-manual tasks). In the present task-switching study, we further examined this modality compatibility effect and investigated the role of response modality by additionally studying oculomotor responses as an alternative to manual responses. Interestingly, the switch cost pattern revealed a much stronger modality compatibility effect for groups in which vocal and manual responses were combined as compared to a group involving vocal and oculomotor responses, where the modality compatibility effect was largely abolished. We suggest that in the vocal-manual response groups the modality compatibility effect is based on cross-talk of central processing codes due to preferred stimulus-response modality processing pathways, whereas the oculomotor response modality may be shielded against cross-talk due to the supra-modal functional importance of visual orientation. |
Julia M. Stephen; Brian A. Coffman; David B. Stone; Piyadasa Kodituwakku Differences in MEG gamma oscillatory power during performance of a prosaccade task in adolescents with FASD Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 900, 2013. @article{Stephen2013, Fetal alcohol spectrum disorder (FASD) is characterized by a broad range of behavioral and cognitive deficits that impact the long-term quality of life for affected individuals. However, the underlying changes in brain structure and function associated with these cognitive impairments are not well-understood. Previous studies identified deficits in behavioral performance of prosaccade tasks in children with FASD. In this study, we investigated group differences in gamma oscillations during performance of a prosaccade task. We collected magnetoencephalography (MEG) data from 15 adolescents with FASD and 20 age-matched healthy controls (HC) with a mean age of 15.9 ± 0.4 years during performance of a prosaccade task. Eye movement was recorded and synchronized to the MEG data using an MEG compatible eye-tracker. The MEG data were analyzed relative to the onset of the visual saccade. Time-frequency analysis was performed using Fieldtrip with a focus on group differences in gamma-band oscillations. Following left target presentation, we identified four clusters over right frontal, right parietal, and left temporal/occipital cortex, with significantly different gamma-band (30-50 Hz) power between FASD and HC. Furthermore, visual M100 latencies described in Coffman et al. (2012) corresponded with increased gamma power over right central cortex in FASD only. Gamma-band differences were not identified for stimulus-averaged responses, implying that these gamma-band differences were related to differences in saccade network functioning. These differences in gamma-band power may provide indications of atypical development of cortical networks in individuals with FASD. |
Andrew J. Stewart; Matthew Haigh; Heather J. Ferguson Sensitivity to speaker control in the online comprehension of conditional tips and promises: An eye-tracking study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 1022–1036, 2013. @article{Stewart2013, Statements of the form if… then… can be used to communicate conditional speech acts such as tips and promises. Conditional promises require the speaker to have perceived control over the outcome event, whereas conditional tips do not. In an eye-tracking study, we examined whether readers are sensitive to information about perceived speaker control during processing of conditionals embedded in context. On a number of eye-tracking measures, we found that readers are sensitive to whether or not the speaker of a conditional has perceived control over the consequent event; conditional promises (which require the speaker to have perceived control over the consequent) result in processing disruption for contexts where this control is absent. Conditional tips (which do not require perceived control) are processed equivalently easily regardless of context. These results suggest that readers rapidly utilize pragmatic information related to perceived control in order to represent conditional speech acts as they are read. |
Mallory C. Stites; Kara D. Federmeier; Elizabeth A. L. Stine-Morrow Cross-age comparisons reveal multiple strategies for lexical ambiguity resolution during natural reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1823–1841, 2013. @article{Stites2013a, Eye tracking was used to investigate how younger and older (60 or more years) adults use syntactic and semantic information to disambiguate noun/verb (NV) homographs (e.g., park). In event-related potential (ERP) work using the same materials, Lee and Federmeier (2009, 2011) found that young adults elicited a sustained frontal negativity to NV homographs when only syntactic cues were available (i.e., in syntactic prose); this effect was eliminated by semantic constraints. The negativity was only present in older adults with high verbal fluency. The current study shows parallel findings: Young adults exhibit inflated first fixation durations to NV homographs in syntactic prose, but not semantically congruent sentences. This effect is absent in older adults as a group. Verbal fluency modulates the effect in both age groups: High fluency is associated with larger first fixation effects in syntactic prose. Older, but not younger, adults also show significantly increased rereading of the NV homographs in syntactic prose. Verbal fluency modulates this effect as well: High fluency is associated with a reduced tendency to reread, regardless of age. This relationship suggests a trade-off between initial and downstream processing costs for ambiguity during natural reading. Together the eye-tracking and ERP data suggest that effortful meaning selection recruits mechanisms important for suppressing contextually inappropriate meanings, which also slow eye movements. Efficacy of frontotemporal circuitry, as captured by verbal fluency, predicts the success of engaging these mechanisms in both young and older adults. Failure to recruit these processes requires compensatory rereading or leads to comprehension failures (Lee & Federmeier, 2012). |
Mallory C. Stites; Steven G. Luke; Kiel Christianson The psychologist said quickly, "Dialogue descriptions modulate reading speed!" Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 137–151, 2013. @article{Stites2013, In the present study, we investigated whether the semantic content of a dialogue description can affect reading times on an embedded quote, to determine whether the speed at which a character is described as saying a quote influences how quickly it is read. Yao and Scheepers (Cognition, 121:447-453, 2011) previously found that readers were faster to read direct quotes when the preceding context implied that the talker generally spoke quickly, an effect attributed to perceptual simulation of talker speed. For the present study, we manipulated the speed of a physical action performed by the speaker independently from character talking rate to determine whether these sources have separable effects on perceptual simulation of a direct quote. The results showed that readers spent less time reading direct quotes described as being said quickly, as compared to those described as being said slowly (e.g., John walked/bolted into the room and said energetically/nonchalantly, "I finally found my car keys."), an effect that was not present when a nearly identical phrase was presented as an indirect quote (e.g., John . . . said energetically that he finally found his car keys.). The speed of the character's movement did not affect direct-quote reading times. Furthermore, fast adverbs were themselves read significantly faster than slow adverbs, an effect that we attribute to implicit effects on the eye movement program stemming from automatically activated semantic features of the adverbs. Our findings add to the literature on perceptual simulation by showing that these effects can be instantiated with only a single adverb and are strong enough to override the effects of global sentence speed. |
Charmaine L. Thomas; Lauren D. Goegan; Kristin R. Newman; Jody E. Arndt; Christopher R. Sears Attention to threat images in individuals with clinical and subthreshold symptoms of post-traumatic stress disorder Journal Article In: Journal of Anxiety Disorders, vol. 27, no. 5, pp. 447–455, 2013. @article{Thomas2013, Attention to general and trauma-relevant threat was examined in individuals with clinical and subthreshold symptoms of post-traumatic stress disorder (PTSD). Participants' eye gaze was tracked and recorded while they viewed sets of four images over a 6-s presentation (one negative, positive, and neutral image, and either a general threat image or a trauma-relevant threat image). Two trauma-exposed groups (a clinical and a subthreshold PTSD symptom group) were compared to a non-trauma-exposed group. Both the clinical and subthreshold PTSD symptom groups attended to trauma-relevant threat images more than the no-trauma-exposure group, whereas there were no group differences for general threat images. A time course analysis of attention to trauma-relevant threat images revealed different attentional profiles for the trauma-exposed groups. Participants with clinical PTSD symptoms exhibited immediate heightened attention to the images relative to participants with no-trauma-exposure, whereas participants with subthreshold PTSD symptoms did not. In addition, participants with subthreshold PTSD symptoms attended to trauma-relevant threat images throughout the 6-s presentation, whereas participants with clinical symptoms of PTSD exhibited evidence of avoidance. The theoretical and clinical implications of these distinct attentional profiles are discussed. |
Laura E. Thomas Spatial working memory is necessary for actions to guide thought Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1974–1981, 2013. @article{Thomas2013a, Directed actions can play a causal role in cognition, shaping thought processes. What drives this cross-talk between action and thought? I investigated the hypothesis that representations in spatial working memory mediate interactions between directed actions and problem solving. Participants attempted to solve an insight problem while occasionally either moving their eyes in a pattern embodying the problem's solution or maintaining fixation. They simultaneously held either a spatial or verbal stimulus in working memory. Participants who moved their eyes in a pattern that embodied the solution were more likely to solve the problem, but only while also performing a verbal working memory task. Embodied guidance of insight was eliminated when participants were instead engaged in a spatial working memory task while moving their eyes, implying that loading spatial working memory prevented movement representations from influencing problem solving. These results point to spatial working memory as a mechanism driving embodied guidance of insight, suggesting that actions do not automatically influence problem solving. Instead, cross-talk between action and higher order cognition requires representations in spatial working memory. |
Dominic Thompson; S. P. Ling; Andriy Myachykov; Fernanda Ferreira; Christoph Scheepers Patient-related constraints on get- and be-passive uses in English: Evidence from paraphrasing Journal Article In: Frontiers in Psychology, vol. 4, pp. 848, 2013. @article{Thompson2013, In English, transitive events can be described in various ways. The main possibilities are active-voice and passive-voice, which are assumed to have distinct semantic and pragmatic functions. Within the passive, there are two further options, namely be-passive or get-passive. While these two forms are generally understood to differ, there is little agreement on precisely how and why. The passive Patient is frequently cited as playing a role, though again agreement on the specifics is rare. Here we present three paraphrasing experiments investigating Patient-related constraints on the selection of active vs. passive voice, and be- vs. get-passive, respectively. Participants either had to re-tell short stories in their own words (Experiments 1 and 2) or had to answer specific questions about the Patient in those short stories (Experiment 3). We found that a given Agent in a story promotes the use of active-voice, while a given Patient promotes be-passives specifically. Meanwhile, get-passive use increases when the Patient is marked as important. We argue that the three forms of transitive description are functionally and semantically distinct, and can be arranged along two dimensions: Patient Prominence and Patient Importance. We claim that active-voice has a near-complementary relationship with the be-passive, driven by which protagonist is given. Since both get and be are passive, they share the features of a Patient-subject and an optional Agent by-phrase; however, get specifically responds to a Patient being marked as important. Each of these descriptions has its own set of features that differentiate it from the others. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Optimal sampling of visual information for lightness judgments Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 27, pp. 11163–11168, 2013. @article{Toscani2013a, The variable resolution and limited processing capacity of the human visual system require us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Selection of visual information for lightness judgements by eye movements Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–8, 2013. @article{Toscani2013, When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance. |
Joseph C. Toscano; Nathaniel D. Anderson; Bob McMurray Reconsidering the role of temporal order in spoken word recognition Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 5, pp. 981–987, 2013. @article{Toscano2013, Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes. |
R. Blythe Towal; Milica Mormann; Christof Koch Simultaneous modeling of visual saliency and value computation improves predictions of economic choice Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 40, pp. E3858–E3867, 2013. @article{Towal2013, Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. |
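The race among drift-diffusion processes that Towal et al. compare can be sketched as below. The additive drive follows their reported roughly one-third saliency to two-thirds value weighting, but the per-item saliencies, values, noise level, and threshold here are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm_race(drifts, noise=1.0, threshold=1.0, dt=0.01, max_t=10.0):
    """Race of independent diffusion accumulators; the first alternative
    whose accumulator crosses the threshold wins the trial."""
    x = np.zeros(len(drifts))
    t = 0.0
    while t < max_t:
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        if (x >= threshold).any():
            return int(np.argmax(x)), t
        t += dt
    return int(np.argmax(x)), t

# Additive combination: drift = (1/3)*saliency + (2/3)*value.
# Item 1 has low saliency but much higher value than the alternatives.
saliency = np.array([0.9, 0.6, 0.3])
value    = np.array([0.2, 1.5, 0.3])
drifts = saliency / 3 + 2 * value / 3

choices = [ddm_race(drifts)[0] for _ in range(500)]
counts = np.bincount(choices, minlength=3)
print(counts)
```

With these toy numbers the high-value item wins the race most often despite its lower saliency, mirroring the paper's finding that value carries roughly twice the weight of saliency in the combined drive.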
David J. Townsend Aspectual coercion in eye movements Journal Article In: Journal of Psycholinguistic Research, vol. 42, no. 3, pp. 281–306, 2013. @article{Townsend2013, Comprehension includes interpreting sentences in terms of aspectual categories such as processes (Harry climbed) and culminations (Harry reached the top). Adding a verbal modifier such as for many years to a culmination coerces its interpretation from one to many culminations. Previous studies have found that coercion increases lexical decision and meaning judgment time, but not eye fixation time. This study recorded eye movements as participants read sentences in which a coercive adverb increased the interpretation of multiple events. Adverbs appeared at the end of a clause and line; the post-adverb region appeared at the beginning of the next line; follow-up questions occasionally asked about aspectual meaning; and clause type varied systematically. Coercive adverbs increased eye fixation time in the post-adverb region and in the adverb and post-adverb regions combined. Factors that influence the appearance of aspectual coercion may include world knowledge, follow-up questions, and the location and ambiguity of adverbs. |
Alisha Siebold; Wieske Zoest; Martijn Meeter; Mieke Donk In defense of the salience map: Salience rather than visibility determines selection Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 6, pp. 1516–1524, 2013. @article{Siebold2013, The aim of the present study was to investigate whether time-dependent biases of oculomotor selection as typically observed during visual search are better accounted for by an absolute-processing-speed account (J. P. de Vries, I. T. C. Hooge, M. A. Wiering, & F. A. J. Verstraten, 2011, How longer saccade latencies lead to a competition for salience. Psychological Science, 22, 916-923) or a relative-salience account (e.g., M. Donk, & W. van Zoest, 2008, Effects of salience are short-lived. Psychological Science, 19, 733-739; M. Donk & W. van Zoest, 2011, No control in orientation search: The effects of instruction on oculomotor selection in visual search. Vision Research, 51, 2156-2166). In order to test these two models, we performed an experiment in which participants were instructed to make a speeded eye movement to either of two orientation singletons presented among a homogeneous set of vertically oriented background lines. One singleton, the fixed singleton, remained identical across conditions, whereas the other singleton, the variable singleton, varied such that its orientation contrast relative to the background lines was either smaller or larger than that of the fixed singleton. The results showed that the proportion of eye movements directed toward the fixed singleton varied substantially depending on the orientation contrast of the variable singleton. A model assuming selection behavior to be determined by relative salience provided a better fit to the individual data than the absolute processing speed model. These findings suggest that relative salience rather than the visibility of an element is crucial in determining temporal variations in oculomotor selection behavior and that an explanation of visual selection behavior is insufficient without the concept of a salience map. |
Massimo Silvetti; Ruth Seurinck; Marlies E. Bochove; Tom Verguts The influence of the noradrenergic system on optimal control of neural plasticity Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 160, 2013. @article{Silvetti2013, Decision making under uncertainty is challenging for any autonomous agent. The challenge increases when the environment's stochastic properties change over time, i.e., when the environment is volatile. In order to efficiently adapt to volatile environments, agents must primarily rely on recent outcomes to quickly change their decision strategies; in other words, they need to increase their knowledge plasticity. On the contrary, in stable environments, knowledge stability must be preferred to preserve useful information against noise. Here we propose that in the mammalian brain, the locus coeruleus (LC) is one of the nuclei involved in volatility estimation and in the subsequent control of neural plasticity. During a reinforcement learning task, LC activation, measured by means of pupil diameter, coded both for environmental volatility and learning rate. We hypothesize that LC could be responsible, through norepinephrinic modulation, for adaptations to optimize decision making in volatile environments. We also suggest a computational model on the interaction between the anterior cingulate cortex (ACC) and LC for volatility estimation. |
Timothy J. Slattery; Keith Rayner Effects of intraword and interword spacing on eye movements during reading: Exploring the optimal use of space in a line of text Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 6, pp. 1275–1292, 2013. @article{Slattery2013, Two eye movement experiments investigated intraword spacing (the space between letters within words) and interword spacing (the space between words) to explore the influence these variables have on eye movement control during reading. Both variables are important factors in determining the optimal use of space in a line of text, and fonts differ widely in how they employ these spaces. Prior research suggests that the proximity of flanking letters influences the identification of a central letter via lateral inhibition or crowding. If so, decrements in intraword spacing may produce inhibition in word processing. Still other research suggests that increases in intraword spacing can disrupt the integrity of word units. In English, interword spacing has a large influence on word segmentation and is important for saccade target selection. The results indicate an interplay between intra- and interword spacing that influences a font's readability. Additionally, these studies highlight the importance of word segmentation processes and have implications for the nature of lexical processing (serial vs. parallel). |
Timothy J. Slattery; Patrick Sturt; Kiel Christianson; Masaya Yoshida; Fernanda Ferreira Lingering misinterpretations of garden path sentences arise from competing syntactic representations Journal Article In: Journal of Memory and Language, vol. 69, no. 2, pp. 104–120, 2013. @article{Slattery2013a, Recent work has suggested that readers' initial and incorrect interpretation of temporarily ambiguous ("garden path") sentences (e.g., Christianson, Hollingworth, Halliwell, & Ferreira, 2001) sometimes lingers even after attempts at reanalysis. These lingering effects have been attributed to incomplete reanalysis. In two eye tracking experiments, we distinguish between two types of incompleteness: the language comprehension system might not build a faithful syntactic structure, or it might not fully erase the structure built during an initial misparse. The first experiment used reflexive binding and the gender mismatch paradigm to show that a complete and faithful structure is built following processing of the garden-path. The second experiment used two-sentence texts to examine the extent to which the garden-path meaning from the first sentence interferes with reading of the second. Together, the results indicate that misinterpretation effects are attributable not to failure in building a proper structure, but rather to failure in cleaning up all remnants of earlier attempts to build that syntactic representation. |
Tim J. Smith; Parag K. Mital Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes Journal Article In: Journal of Vision, vol. 13, no. 8, pp. 1–24, 2013. @article{Smith2013, Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion. |
Adam C. Snyder; Michael J. Morais; Matthew A. Smith Variance in population firing rate as a measure of slow time-scale correlation Journal Article In: Frontiers in Computational Neuroscience, vol. 7, pp. 176, 2013. @article{Snyder2013, Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research. |
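The inverse relationship Snyder et al. derive — higher latent correlation, lower across-neuron variance in normalized firing rate — can be illustrated with a toy shared-gain simulation. Gaussian surrogates stand in for spike counts, and the neuron counts, trial counts, and correlation values are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_pop_variance(rho, n_neurons=50, n_trials=1000):
    """Average per-trial variance, across neurons, of z-scored responses
    for a population with pairwise correlation rho (one shared factor)."""
    shared = rng.standard_normal((n_trials, 1))
    private = rng.standard_normal((n_trials, n_neurons))
    x = np.sqrt(rho) * shared + np.sqrt(1 - rho) * private  # pairwise corr ~ rho
    z = (x - x.mean(axis=0)) / x.std(axis=0)                # normalize each neuron
    return z.var(axis=1, ddof=1).mean()                     # across-neuron variance per trial

low, high = mean_pop_variance(0.05), mean_pop_variance(0.3)
print(round(low, 2), round(high, 2))
```

In this shared-gain model the expected population variance is 1 − rho, because the shared component moves all neurons together and is removed when deviations are taken from the trial mean; this is the sense in which a single-trial variance readout can index correlation that would otherwise require averaging over many trials.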
Annie Tremblay; Elsa Spinelli Segmenting liaison-initial words: The role of predictive dependencies Journal Article In: Language and Cognitive Processes, vol. 28, no. 8, pp. 1093–1113, 2013. @article{Tremblay2013, Listeners use several cues to segment speech into words. However, it is unclear how these cues work together. This study examines the relative weight of distributional and (natural) acoustic-phonetic cues in French listeners' recognition of temporarily ambiguous vowel-initial words in liaison contexts (e.g., parfait [t]abri "perfect shelter") and corresponding consonant-initial words (e.g., parfait tableau "perfect painting"). Participants completed a visual-world eye-tracking experiment in which they heard adjective-noun sequences where the pivotal consonant was /t/ (more frequent as word-initial consonant and thus expected advantage for consonant-initial words), /z/ (more frequent as liaison consonant and thus expected advantage for liaison-initial words), or /n/ (roughly as frequent as word-initial and liaison consonants and thus no expected advantage). The results for /t/ and /z/ were as expected, but those for /n/ showed an advantage for consonant-initial words over liaison-initial ones. These results are consistent with speech segmentation theories in which distributional information supersedes acoustic-phonetic information, but they also suggest a privileged status for consonant-initial words when the input does not strongly favour liaison-initial words. |
Alison M. Trude; Annie Tremblay; Sarah Brown-Schmidt Limitations on adaptation to foreign accents Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 349–367, 2013. @article{Trude2013, Although foreign accents can be highly dissimilar to native speech, existing research suggests that listeners readily adapt to foreign accents after minimal exposure. However, listeners often report difficulty understanding non-native accents, and the time-course and specificity of adaptation remain unclear. Across five experiments, we examined whether listeners could use a newly learned feature of a foreign accent to eliminate lexical competitors during on-line speech perception. Participants heard the speech of a native English speaker and a native speaker of Québec French who, in English, pronounces /i/ as [ɪ] (e.g., weak as wick) before all consonants except voiced fricatives. We examined whether listeners could learn to eliminate a shifted /i/-competitor (e.g., weak) when hearing the accented talker produce an unshifted word (e.g., wheeze). In four experiments, adaptation was strikingly limited, though improvement across the course of the experiment and with stimulus variations indicates learning was possible. In a fifth experiment, adaptation was not improved when a native English talker produced the critical vowel shift, demonstrating that the limitation is not simply due to the fact that the accented talker was non-native. These findings suggest that although listeners can arrive at the correct interpretation of a foreign accent, this process can pose significant difficulty. |
Feng-Yi Tseng; Chin-Jung Chao; Wen-Yang Feng; Sheue-Ling Hwang Effects of display modality on critical battlefield e-map search performance Journal Article In: Behaviour & Information Technology, vol. 32, no. 9, pp. 888–901, 2013. @article{Tseng2013, Visual search performance in visual display terminals can be affected by several changeable display parameters, such as screen dimensions, target size and background clutter. We found that when operators were under time pressure to execute critical battlefield map searches in a control room, efficient visual search became more important. We investigated visual search performance in a simulated radar interface that included the warrior symbology. Thirty-six participants were recruited and a three-factor mixed design was used in which the independent variables were three screen dimensions (7, 15 and 21 in.), five icon sizes (visual angle 40, 50, 60, 70 and 80 min of arc) and two map background clutter types (topography displayed [TD] and topography not displayed [TND]). The five dependent variables were completion time, accuracy, fixation duration, fixation count and saccade amplitude. The results showed that the best icon sizes were 80 and 70 min. The 21 in. screen dimension was chosen as the superior screen for search tasks. The TND map background, with less clutter, produced higher accuracy than the cluttered TD background. The results of this research can be used in control room design to promote operators' visual search performance. |
Yusuke Uchida; Daisuke Kudoh; Takatoshi Higuchi; Masaaki Honda; Kazuyuki Kanosue Dynamic visual acuity in baseball players is due to superior tracking abilities Journal Article In: Medicine and Science in Sports and Exercise, vol. 45, no. 2, pp. 319–325, 2013. @article{Uchida2013, PURPOSE: Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in baseball players than in nonplayers. Although the better DVA of baseball players has been attributed to a better ability to track moving objects, it might be derived from the ability to perceive an object even in the presence of a great distance between the image on the retina and the fovea (retinal error). However, the ability to perceive moving visual stimuli has not been compared between baseball players and nonplayers. METHODS: To clarify this, we quantitatively measured abilities of eye movement and visual perception using moving Landolt C rings in baseball players and nonplayers. RESULTS: Baseball players could achieve high DVA with significantly faster eye movement at shorter latencies than nonplayers. There was no difference between baseball players and nonplayers in the ability to perceive moving objects' images projected onto the retina. CONCLUSIONS: These results suggest that the better DVA of baseball players is primarily due to a better ability to track moving objects with their eyes rather than to improved perception of moving images on the retina. This skill is probably obtained through baseball training. |
Hiroshi Ueda; Kohske Takahashi; Katsumi Watanabe Contributions of retinal input and phenomenal representation of a fixation object to the saccadic gap effect Journal Article In: Vision Research, vol. 82, pp. 52–57, 2013. @article{Ueda2013, The saccadic "gap effect" refers to a phenomenon whereby saccadic reaction times (SRTs) are shortened by the removal of a visual fixation stimulus prior to target presentation. In the current study, we investigated whether the gap effect was influenced by retinal input of a fixation stimulus, as well as phenomenal permanence and/or expectation of the re-emergence of a fixation stimulus. In Experiment 1, we used an occluded fixation stimulus that was gradually hidden by a moving plate prior to the target presentation, which produced the impression that the fixation stimulus still remained and would reappear from behind the plate. We found that the gap effect was significantly weakened with the occluded fixation stimulus. However, the SRT with the occluded fixation stimulus was still shorter in comparison to when the fixation stimulus physically remained on the screen. In Experiment 2, we investigated whether this effect was due to phenomenal maintenance or expectation of the reappearance of the fixation stimulus; this was achieved by using occluding plates that were an identical color to the background screen, giving the impression of reappearance of the fixation stimulus but not of its maintenance. The result showed that the gap effect was still weakened by the same degree even without phenomenal maintenance of the fixation stimulus. These results suggest that the saccadic gap effect is modulated by both retinal input and subjective expectation of re-emergence of the fixation stimulus. In addition to oculomotor mechanisms, other components, such as attentional mechanisms, likely contribute to facilitation of the subsequent action. |
Durk Talsma; Brian J. White; Sebastiaan Mathôt; Douglas P. Munoz; Jan Theeuwes A retinotopic attentional trace after saccadic eye movements: Evidence from event-related potentials Journal Article In: Journal of Cognitive Neuroscience, vol. 25, no. 9, pp. 1563–1577, 2013. @article{Talsma2013, Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts. |
Heng Ru May Tan; Hartmut Leuthold; Joachim Gross Gearing up for action: Attentive tracking dynamically tunes sensory and motor oscillations in the alpha and beta band Journal Article In: NeuroImage, vol. 82, pp. 634–644, 2013. @article{Tan2013, Allocation of attention during goal-directed behavior entails simultaneous processing of relevant and attenuation of irrelevant information. How the brain delegates such processes when confronted with dynamic (biological motion) stimuli and harnesses relevant sensory information for sculpting prospective responses remains unclear. We analyzed neuromagnetic signals that were recorded while participants attentively tracked an actor's pointing movement that ended at the location where subsequently the response-cue indicated the required response. We found the observers' spatial allocation of attention to be dynamically reflected in lateralized parieto-occipital alpha (8–12 Hz) activity and to have a lasting influence on motor preparation. Specifically, beta (16–25 Hz) power modulation reflected observers' tendency to selectively prepare for a spatially compatible response even before knowing the required one. We discuss the observed frequency-specific and temporally evolving neural activity within a framework of integrated visuomotor processing and point towards possible implications about the mechanisms involved in action observation. |
Kyeong Jin Tark; Clayton E. Curtis Deciding where to look based on visual, auditory, and semantic information Journal Article In: Brain Research, vol. 1525, pp. 26–38, 2013. @article{Tark2013, Neurons in the dorsal frontal and parietal cortex are thought to transform incoming visual signals into the spatial goals of saccades, a process known as target selection. Here, we used functional magnetic resonance imaging (fMRI) to test how target selection may generalize beyond visual transformations when auditory and semantic information is used for selection. We compared activity in the frontal and parietal cortex when subjects made visually, aurally, and semantically guided saccades to one of four differently colored dots. Selection was based on a visual cue (i.e., one of the dots blinked), an auditory cue (i.e., a white noise burst was emitted at one of the dots' location), or a semantic cue (i.e., the color of one of the dots was spoken). Although neural responses in frontal and parietal cortex were robust, they were non-specific with regard to the type of information used for target selection. Decoders, however, trained on the patterns of activity in the intraparietal sulcus could classify both the type of cue used for target selection and the direction of the saccade. Therefore, we find evidence that the posterior parietal cortex is involved in transforming multimodal inputs into general spatial representations that can be used to guide saccades. |
Benjamin W. Tatler; Yoriko Hirose; Sarah K. Finnegan; Riina Pievilainen; Clare Kirtley; Alan Kennedy Priorities for selection and representation in natural tasks Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{Tatler2013, Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than differences in spatial representation in real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects. |
Shuichiro Taya; David Windridge; Magda Osman Trained eyes: Experience promotes adaptive gaze control in dynamic and uncertain visual environments Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e71371, 2013. @article{Taya2013, Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around 'events' critical to the scene context (i.e., hit and bounce) were analysed. Overall, we found that experience improved anticipatory eye movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. |
Yasuo Terao; Hideki Fukuda; Yuichiro Shirota; Akihiro Yugeta; Masayuki Yoshioka; Masahiko Suzuki; Ritsuko Hanajima; Yoshiko Nomura; Masaya Segawa; Shoji Tsuji; Yoshikazu Ugawa Deterioration of horizontal saccades in progressive supranuclear palsy Journal Article In: Clinical Neurophysiology, vol. 124, no. 2, pp. 354–363, 2013. @article{Terao2013, Objective: To investigate horizontal saccade changes according to disease stage in patients with progressive supranuclear palsy (PSP). Methods: We studied visually and memory guided saccades (VGS and MGS) in 36 PSP patients at various disease stages, and compared results with those in 66 Parkinson's disease (PD) patients and 58 age-matched normal controls. Results: Both vertical and horizontal saccades were affected in PSP patients, usually manifesting as "slow saccades" but sometimes as a sequence of small amplitude saccades with relatively well preserved velocities. Disease progression caused saccade amplitude reduction in PSP but not PD patients. In contrast, VGS and MGS latencies were comparable between PSP and PD patients, as were the frequencies of saccades to cue, suggesting that voluntary initiation and inhibitory control of saccades are similar in both disorders. Hypermetria was rarely observed in PSP patients with cerebellar ataxia (PSPc patients). Conclusions: The progressively reduced accuracy of horizontal saccades in PSP suggests a brainstem oculomotor pathology that includes the superior colliculus and/or paramedian pontine reticular formation. In contrast, the functioning of the oculomotor system above the brainstem was similar between PSP and PD patients. Significance: These findings may reflect a brainstem oculomotor pathology. |
Lore Thaler; Alexander C. Schütz; Melvyn A. Goodale; Karl R. Gegenfurtner What is the best fixation target? The effect of target shape on stability of fixational eye movements Journal Article In: Vision Research, vol. 76, pp. 31–42, 2013. @article{Thaler2013, People can direct their gaze at a visual target for extended periods of time. Yet, even during fixation the eyes make small, involuntary movements (e.g. tremor, drift, and microsaccades). This can be a problem during experiments that require stable fixation. The shape of a fixation target can be easily manipulated in the context of many experimental paradigms. Thus, from a purely methodological point of view, it would be good to know if there was a particular shape of a fixation target that minimizes involuntary eye movements during fixation, because this shape could then be used in experiments that require stable fixation. Based on this methodological motivation, the current experiments tested if the shape of a fixation target can be used to reduce eye movements during fixation. In two separate experiments subjects directed their gaze at a fixation target for 17 s on each trial. The shape of the fixation target varied from trial to trial and was drawn from a set of seven shapes, the use of which has been frequently reported in the literature. To determine stability of fixation we computed spatial dispersion and microsaccade rate. We found that only a target shape which looks like a combination of bull's eye and cross hair resulted in combined low dispersion and microsaccade rate. We recommend the combination of bull's eye and cross hair as fixation target shape for experiments that require stable fixation. |
Tom Theys; Pierpaolo Pani; Johannes van Loon; Jan Goffin; Peter Janssen Three-dimensional shape coding in grasping circuits: A comparison between the anterior intraparietal area and ventral premotor area F5a Journal Article In: Journal of Cognitive Neuroscience, vol. 25, no. 3, pp. 352–364, 2013. @article{Theys2013, Depth information is necessary for adjusting the hand to the three-dimensional (3-D) shape of an object to grasp it. The transformation of visual information into appropriate distal motor commands is critically dependent on the anterior intraparietal area (AIP) and the ventral premotor cortex (area F5), particularly the F5p sector. Recent studies have demonstrated that both AIP and the F5a sector of the ventral premotor cortex contain neurons that respond selectively to disparity-defined 3-D shape. To investigate the neural coding of 3-D shape and the behavioral role of 3-D shape-selective neurons in these two areas, we recorded single-cell activity in AIP and F5a during passive fixation of curved surfaces and during grasping of real-world objects. Similar to those in AIP, F5a neurons were either first- or second-order disparity selective, frequently showed selectivity for discrete approximations of smoothly curved surfaces that contained disparity discontinuities, and exhibited mostly monotonic tuning for the degree of disparity variation. Furthermore, in both areas, 3-D shape-selective neurons were colocalized with neurons that were active during grasping of real-world objects. Thus, area AIP and F5a contain highly similar representations of 3-D shape, which is consistent with the proposed transfer of object information from AIP to the motor system through the ventral premotor cortex. |
Yin Su; Li-Lin Rao; Hong-Yue Sun; Xue-Lei Du; Xingshan Li; Shu Li Is making a risky choice based on a weighting and adding process? An eye-tracking investigation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1765–1780, 2013. @article{Su2013, The debate about whether making a risky choice is based on a weighting and adding process has a long history and is still unresolved. To address this long-standing controversy, we developed a comparative paradigm. Participants' eye movements in 2 risky choice tasks that required participants to choose between risky options in single-play and multiple-play conditions were separately compared with those in a baseline task in which participants naturally performed a deliberate calculation following a weighting and adding process. The results showed that, when participants performed the multiple-play risky choice task, their eye movements were similar to those in the baseline task, suggesting that participants may use a weighting and adding process to make risky choices in multiple-play conditions. In contrast, participants' eye movements were different in the single-play risky choice task versus the baseline task, suggesting that participants were not likely to use a weighting and adding process to make risky choices in single-play conditions and were more likely to use a heuristic process. We concluded that an expectation-based index for predicting risk preferences is applicable in multiple-play conditions but not in single-play conditions, implying the need to improve current theories that postulate the use of a heuristic process. |
Pei Sun; Justin L. Gardner; Mauro Costagli; Kenichi Ueno; R. Allen Waggoner; Keiji Tanaka; Kang Cheng Demonstration of tuning to stimulus orientation in the human visual cortex: A high-resolution fMRI study with a novel continuous and periodic stimulation paradigm Journal Article In: Cerebral Cortex, vol. 23, no. 7, pp. 1618–1629, 2013. @article{Sun2013, Cells in the animal early visual cortex are sensitive to contour orientations and form repeated structures known as orientation columns. At the behavioral level, there exist 2 well-known global biases in orientation perception (oblique effect and radial bias) in both animals and humans. However, their neural bases are still under debate. To unveil how these behavioral biases are achieved in the early visual cortex, we conducted high-resolution functional magnetic resonance imaging experiments with a novel continuous and periodic stimulation paradigm. By inserting resting recovery periods between successive stimulation periods and introducing a pair of orthogonal stimulation conditions that differed by 90 degrees continuously, we focused on analyzing a blood oxygenation level-dependent response modulated by the change in stimulus orientation and reliably extracted orientation preferences of single voxels. We found that there are more voxels preferring horizontal and vertical orientations, a physiological substrate underlying the oblique effect, and that these over-representations of horizontal and vertical orientations are prevalent in the cortical regions near the horizontal- and vertical-meridian representations, a phenomenon related to the radial bias. Behaviorally, we also confirmed that there exists perceptual superiority for horizontal and vertical orientations around horizontal and vertical meridians, respectively. Our results, thus, refined the neural mechanisms of these 2 global biases in orientation perception. |
Megumi Suzuki; Jeremy M. Wolfe; Todd S. Horowitz; Yasuki Noguchi Apparent color-orientation bindings in the periphery can be influenced by feature binding in central vision Journal Article In: Vision Research, vol. 82, pp. 58–65, 2013. @article{Suzuki2013, A previous study reported the misbinding illusion in which visual features belonging to overlapping sets of items were erroneously integrated (Wu, Kanai, & Shimojo, 2004, Nature, 429, 262). In this illusion, central and peripheral portions of a transparent motion field combined color and motion in opposite fashions. When observers saw such stimuli, their perceptual color-motion bindings in the periphery were re-arranged in such a way as to accord with the bindings in the central region, resulting in erroneous color-motion pairings (misbinding) in peripheral vision. Here we show that this misbinding illusion is also seen in the binding of color and orientation. When the central field of a stimulus array was composed of objects that had coherent (regular) color-orientation pairings, subjective color-orientation bindings in the peripheral stimuli were automatically altered to match the coherent pairings of the central stimuli. Interestingly, the illusion was induced only when all items in the central field combined color and orientation in an orthogonal fashion (e.g. all red bars were horizontal and all green bars were vertical). If this orthogonality was disrupted (e.g. all red and green bars were horizontal), the central field lost its power to induce the misbinding illusion in the peripheral stimuli. The original misbinding illusion study proposed that the illusion stemmed from a perceptual extrapolation that resolved peripheral ambiguity with clear central vision. 
However, our present results indicate that visual analyses of the correlational structure between two features (color and orientation) are critical for the illusion to occur, suggesting a rapid integration of multiple featural cues in the human visual system. |
Sruthi K. Swaminathan; Nicolas Y. Masse; David J. Freedman A comparison of lateral and medial intraparietal areas during a visual categorization task Journal Article In: Journal of Neuroscience, vol. 33, no. 32, pp. 13157–13170, 2013. @article{Swaminathan2013, Categorization is essential for interpreting sensory stimuli and guiding our actions. Recent studies have revealed robust neuronal category representations in the lateral intraparietal area (LIP). Here, we examine the specialization of LIP for categorization and the roles of other parietal areas by comparing LIP and the medial intraparietal area (MIP) during a visual categorization task. MIP is involved in goal-directed arm movements and visuomotor coordination but has not been implicated in non-motor cognitive functions, such as categorization. As expected, we found strong category encoding in LIP. Interestingly, we also observed category signals in MIP. However, category signals were stronger and appeared with a shorter latency in LIP than MIP. In this task, monkeys indicated whether a test stimulus was a category match to a previous sample with a manual response. Test-period activity in LIP showed category encoding and distinguished between matches and non-matches. In contrast, MIP primarily reflected the match/non-match status of test stimuli, with a strong preference for matches (which required a motor response). This suggests that, although category representations are distributed across parietal cortex, LIP and MIP play distinct roles: LIP appears more involved in the categorization process itself, whereas MIP is more closely tied to decision-related motor actions. |
Bernard Marius 't Hart; Hannah Claudia Elfriede Fanny Schmidt; Ingo Klein-Harmeyer; Wolfgang Einhäuser Attention in natural scenes: Contrast affects rapid visual processing and fixations alike Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{tHart2013a, For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s, while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection, decrease worsens detection, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increase or decrease of contrast attracts gaze, but less than actual objects, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios. |
Bernard Marius 't Hart; Hannah C. E. F. Schmidt; Christine Roth; Wolfgang Einhäuser Fixations on objects in natural scenes: Dissociating importance from salience Journal Article In: Frontiers in Psychology, vol. 4, pp. 455, 2013. @article{tHart2013, The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's "importance" for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named ("common"/"important") or a rarely named ("rare"/"unimportant") object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. 
Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. |
Karine Tadros; Nicolas Dupuis-Roy; Daniel Fiset; Martin Arguin; Frédéric Gosselin Reading laterally: The cerebral hemispheric use of spatial frequencies in visual word recognition Journal Article In: Journal of Vision, vol. 13, no. 1, pp. 1–12, 2013. @article{Tadros2013, It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding. |
Chun Po Yin; Feng-Yang Kuo A study of how information system professionals comprehend indirect and direct speech acts in project communication Journal Article In: IEEE Transactions on Professional Communication, vol. 56, no. 3, pp. 226–241, 2013. @article{Yin2013, Research problem: Indirect communication is prevalent in business communication practices. For information systems (IS) projects that require professionals from multiple disciplines to work together, the use of indirect communication may hinder successful design, implementation, and maintenance of these systems. Drawing on the Speech Act Theory (SAT), this study investigates how direct and indirect speech acts may influence language comprehension in the setting of communication problems inherent in IS projects. Research questions: (1) Do participating subjects, who are IS professionals, differ in their comprehension of indirect and direct speech acts? (2) Do participants display different attention processes in their comprehension of indirect and direct speech acts? (3) Do participants' attention processes influence their comprehension of indirect and direct speech acts? Literature review: We review two relevant areas of theory—polite speech acts in professional communication and SAT. First, a broad review that focuses on literature related to the use of polite speech acts in the workplace and in information system (IS) projects suggests the importance of investigating speech acts by professionals. In addition, the SAT provides the theoretical framework guiding this study and the development of hypotheses. Methodology: The current study uses a quantitative approach. A between-groups experiment design was employed to test how direct and indirect speech acts influence the language comprehension of participants. Forty-three IS professionals participated in the experiment. 
In addition, through the use of eye-tracking technology, this study captured the attention process and analyzed the relationship between attention and comprehension. Results and discussion: The results show that the directness of speech acts significantly influences participants' attention process, which, in turn, significantly affects their comprehension. In addition, the findings indicate that indirect speech acts, if employed by IS professionals to communicate with others, may easily be distorted or misunderstood. Professionals and managers of organizations should be aware that effective communication in interdisciplinary projects, such as IS development, is not easy, and that reliance on polite or indirect communication may inhibit the generation of valid information. |
Si On Yoon; Sarah Brown-Schmidt Lexical differentiation in language production and comprehension Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 397–416, 2013. @article{Yoon2013, This paper presents the results of three experiments that explore the breadth of the relevant discourse context in language production and comprehension. Previous evidence from language production suggests the relevant context is quite broad, based on findings that speakers differentiate new discourse referents from similar referents discussed in past contexts (Van Der Wege, 2009). Experiment 1 replicated and extended this "lexical differentiation" effect by demonstrating that speakers used two different mechanisms, modification and the use of subordinate level nouns, to differentiate current from past referents. In Experiments 2 and 3, we examined whether addressees expect speakers to differentiate. The results of these experiments did not support the hypothesis that listeners expect differentiation, either for lexically differentiated modified expressions (Experiment 2) or for subordinate level nouns (Experiment 3). Taken together, the present findings suggest that the breadth of relevant discourse context differs across language production and comprehension. Speakers show more sensitivity to things they have said before, possibly due to better knowledge of the relevant context. In contrast, listeners have the task of inferring what the speaker believes is the relevant context; this inferential process may be more error-prone. |
Angela H. Young; Johan Hulleman Eye movements reveal how task difficulty moulds visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 168–190, 2013. @article{Young2013, In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size during visual search shrinks with increasing task difficulty. In Experiment 2, we used a gaze-contingent window and confirmed the validity of the size estimates. The experiment also revealed that breakdown in robustness against item motion is related to item-by-item search, rather than search difficulty per se. We argue that visual search is an eye-movement-based process that works on a continuum, from almost parallel (where many items can be processed within a fixation) to completely serial (where only one item can be processed within a fixation). |
Kiwon Yun; Yifan Peng; Dimitris Samaras; Gregory J. Zelinsky; Tamara L. Berg Exploring the role of gaze behavior and object detection in scene understanding Journal Article In: Frontiers in Psychology, vol. 4, pp. 917, 2013. @article{Yun2013, We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation. |
Chuanli Zang; Feifei Liang; Xuejun Bai; Guoli Yan; Simon P. Liversedge Interword spacing and landing position effects during Chinese reading in children and adults Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 720–734, 2013. @article{Zang2013, The present study examined children's and adults' eye movement behavior when reading word spaced and unspaced Chinese text. The results showed that interword spacing reduced children's and adults' first pass reading times and refixation probabilities, indicating that spaces between words facilitated word identification. Word spacing effects occurred to a similar degree for both children and adults, though there were differential landing position effects for single and multiple fixation situations in both groups; clear preferred viewing location effects occurred for single fixations, whereas landing positions were closer to word beginnings, and further into the word for adults than children, for multiple fixation situations. Furthermore, adults targeted refixations contingent on initial landing positions to a greater degree than did children. Overall, the results indicate that some aspects of children's eye movements during reading show similar levels of maturity to adults, while others do not. |
Michael Zehetleitner; Anja Isabel Koch; Harriet Goschy; Hermann J. Müller Salience-based selection: Attentional capture by distractors less salient than the target Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52595, 2013. @article{Zehetleitner2013, Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. |
Semir Zeki; Jonathan Stutters Functional specialization and generalization for grouping of stimuli based on colour and motion Journal Article In: NeuroImage, vol. 73, pp. 156–166, 2013. @article{Zeki2013, This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum where the activity resulting from the two types of grouping overlapped. This suggests (a) that, the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. |
Gregory J. Zelinsky; Hossein Adeli; Yifan Peng; Dimitris Samaras Modelling eye movements in a categorical search task Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–13, 2013. @article{Zelinsky2013, We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. |
Younes Zerouali; Jean Marc Lina; Boutheina Jemel Optimal eye-gaze fixation position for face-related neural responses Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e60128, 2013. @article{Zerouali2013, It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170 indexing the earliest face-sensitive response in the human brain was the largest when the fixation position is located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template. |
En Zhang; Gong-Liang Zhang; Wu Li Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping Journal Article In: European Journal of Neuroscience, vol. 38, no. 12, pp. 3758–3767, 2013. @article{Zhang2013, Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. 
Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training. |
Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B. Jonas Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis Journal Article In: British Journal of Ophthalmology, vol. 97, no. 7, pp. 941–942, 2013. @article{Zhang2013a, With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) viewing versus two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue. |
Li Zhang; Ya-Qin Zhang; Jing-Shang Zhang; Liang Xu; Jost B. Jonas Visual fatigue and discomfort after stereoscopic display viewing Journal Article In: Acta Ophthalmologica, vol. 91, no. 2, pp. 149–153, 2013. @article{Zhang2013b, Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that either used the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology using shutter glasses.; Methods: During this observational cross-over study performed at two subsequent days, a video was watched by 30 subjects (age: 20-30 years). Half of the participants watched the screen with a pattern retard three-dimensional display at the first day and a shutter glasses three-dimensional display at the second day, and reverse. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after viewing the movie. 
Additionally, a subjective score for visual fatigue was given.; Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recover point (p = 0.001), negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02) after viewing the shutter glasses three-dimensional display than after viewing the pattern retard three-dimensional display.; Conclusions: Stereoscopic viewing using pattern retard (polarized) three-dimensional displays as compared with stereoscopic viewing using shutter glasses three-dimensional displays resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively. |
Ruyuan Zhang; Oh-Sang Kwon; Duje Tadin Illusory movement of stationary stimuli in the visual periphery: Evidence for a strong centrifugal prior in motion processing Journal Article In: Journal of Neuroscience, vol. 33, no. 10, pp. 4415–4423, 2013. @article{Zhang2013c, Visual input is remarkably diverse. Certain sensory inputs are more probable than others, mirroring statistical regularities of the visual environment. The visual system exploits many of these regularities, resulting, on average, in better inferences about visual stimuli. However, by incorporating prior knowledge into perceptual decisions, visual processing can also result in perceptions that do not match sensory inputs. Such perceptual biases can often reveal unique insights into underlying mechanisms and computations. For example, a prior assumption that objects move slowly can explain a wide range of motion phenomena. The prior on slow speed is usually rationalized by its match with visual input, which typically includes stationary or slow moving objects. However, this only holds for foveal and parafoveal stimulation. The visual periphery tends to be exposed to faster motions, which are biased toward centrifugal directions. Thus, if prior assumptions derive from experience, peripheral motion processing should be biased toward centrifugal speeds. Here, in experiments with human participants, we support this hypothesis and report a novel visual illusion where stationary objects in the visual periphery are perceived as moving centrifugally, while objects moving as fast as 7°/s toward the fovea are perceived as stationary. These behavioral results were quantitatively explained by a Bayesian observer that has a strong centrifugal prior. This prior is consistent with both the prevalence of centrifugal motions in the visual periphery and a centrifugal bias of direction tuning in cortical area MT, supporting the notion that visual processing mirrors its input statistics. |
Ming Yan; Jinger Pan; Jochen Laubrock; Reinhold Kliegl; Hua Shu Parafoveal processing efficiency in rapid automatized naming: A comparison between Chinese normal and dyslexic children Journal Article In: Journal of Experimental Child Psychology, vol. 115, no. 3, pp. 579–589, 2013. @article{Yan2013, Dyslexic children are known to be slower than normal readers in rapid automatized naming (RAN). This suggests that dyslexics encounter local processing difficulties, which presumably induce a narrower perceptual span. Consequently, dyslexics should suffer less than normal readers from removing parafoveal preview. Here we used a gaze-contingent moving window paradigm in a RAN task to experimentally test this prediction. Results indicate that dyslexics extract less parafoveal information than control children. We propose that more attentional resources are recruited to the foveal processing because of dyslexics' less automatized translation of visual symbols into phonological output, thereby causing a reduction of the perceptual span. This in turn leads to less efficient preactivation of parafoveal information and, hence, more difficulty in processing the next foveal item. |
Hongsheng Yang; Fang Wang; Nianjun Gu; Xiao Gao; Guang Zhao The cognitive advantage for one's own name is not simply familiarity: An eye-tracking study Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1176–1180, 2013. @article{Yang2013, Eye-tracking technique and visual search task were employed to examine the cognitive advantage for one's own name and the possible effect of familiarity on this advantage. The results showed that fewer saccades and an earlier start time of first fixations on the target were associated with trials in which participants were asked to search for their own name, as compared to search for personally familiar or famous names. In addition, the results also demonstrated faster response times and higher accuracy in the former kind of trials. Taken together, these findings provide important evidence that one's own name has the potential to capture attention and that familiarity cannot account for this advantage. |
Jinmian Yang Preview effects of plausibility and character order in reading Chinese transposed words: evidence from eye movements Journal Article In: Journal of Research in Reading, vol. 36, no. SUPPL.1, pp. S18–S34, 2013. @article{Yang2013a, The current paper examined the role of plausibility information in the parafovea for Chinese readers by using two-character transposed words (in which the order of the component characters is reversed but are still words). In two eye-tracking experiments, readers received a preview of a target word that was (1) identical to the target word, (2) a reverse word that was the target word with the order of its characters reversed or (3) a control word different from the target word. Reading times on target words were comparable between the identical and the reverse preview conditions when the reverse preview words were plausible. This plausibility preview effect was independent of whether the reverse word shared the meaning with the target word or not. Furthermore, the reverse preview words yielded shorter fixation durations than the control preview words. Implications of these results for preview processing during Chinese reading are discussed. |
Zhou Yang; Todd Jackson; Hong Chen Effects of chronic pain and pain-related fear on orienting and maintenance of attention: An eye movement study Journal Article In: Journal of Pain, vol. 14, no. 10, pp. 1148–1157, 2013. @article{Yang2013b, In this study, effects of chronic pain and pain-related fear on orienting and maintenance of attention toward pain stimuli were evaluated by tracking eye movements within a dot-probe paradigm. The sample comprised matched chronic pain (n = 24) and pain-free (n = 24) groups, each of which included lower and higher fear of pain subgroups. Participants completed a dot-probe task wherein eye movements were assessed during the presentation of sensory pain-neutral, health catastrophe-neutral, and neutral-neutral word pairs. Higher fear of pain levels were associated with biases in 1) directing initial gaze toward health catastrophe words and, among participants with chronic pain, 2) subsequent avoidance of threat as reflected by shorter first fixation durations on health catastrophe words compared to pain-free cohorts. As stimulus word pairs persisted for 2,000 ms, no group differences were observed for overall gaze durations or reaction times to probes that followed. In sum, this research identified specific biases in visual attention related to fear of pain and chronic pain during early stages of information processing that were not evident on the basis of later behavior responses to probes. Perspective: Effects of chronic pain and fear of pain on attention were examined by tracking eye movements within a dot-probe paradigm. Heightened fear of pain corresponded to biases in initial gaze toward health catastrophe words and, among participants with chronic pain, subsequent gaze shifts away from these words. No reaction time differences emerged. |
Lok-Kin Yeung; Jennifer D. Ryan; Rosemary A. Cowell; Morgan D. Barense Recognition memory impairments caused by false recognition of novel objects Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 4, pp. 1384–1397, 2013. @article{Yeung2013, A fundamental assumption underlying most current theories of amnesia is that memory impairments arise because previously studied information either is lost rapidly or is made inaccessible (i.e., the old information appears to be new). Recent studies in rodents have challenged this view, suggesting instead that under conditions of high interference, recognition memory impairments following medial temporal lobe damage arise because novel information appears as though it has been previously seen. Here, we developed a new object recognition memory paradigm that distinguished whether object recognition memory impairments were driven by previously viewed objects being treated as if they were novel or by novel objects falsely recognized as though they were previously seen. In this indirect, eyetracking-based passive viewing task, older adults at risk for mild cognitive impairment showed false recognition to high-interference novel items (with a significant degree of feature overlap with previously studied items) but normal novelty responses to low-interference novel items (with a lower degree of feature overlap). The indirect nature of the task minimized the effects of response bias and other memory-based decision processes, suggesting that these factors cannot solely account for false recognition. These findings support the counterintuitive notion that recognition memory impairments in this memory-impaired population are not characterized by forgetting but rather are driven by the failure to differentiate perceptually similar objects, leading to the false recognition of novel objects as having been seen before. |
Jing Zhou; Adam Reeves; Scott N. J. Watamaniuk; Stephen J. Heinen Shared attention for smooth pursuit and saccades Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–12, 2013. @article{Zhou2013, Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared. |
Wei Zhou; Reinhold Kliegl; Ming Yan A validation of parafoveal semantic information extraction in reading Chinese Journal Article In: Journal of Research in Reading, vol. 36, no. SUPPL.1, pp. S51–S63, 2013. @article{Zhou2013a, Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words actually were located in foveal vision when readers' eyes landed close to the end of the pretarget words. None of the previous studies has completely ruled out the possibility that the semantic preview effects might mainly arise from these foveally processed preview words. Thus, it has been called into question whether the previously observed positive evidence for parafoveal semantic processing still holds. Using linear mixed models, we demonstrate in this study that semantic preview benefit from word N+1 decreased if fixation on pretarget word N was close to the preview. We argue that parafoveal semantic processing is not a consequence of foveally processed preview words. |
Weina Zhu; Jan Drewes; Karl R. Gegenfurtner Animal detection in natural images: effects of color and image database Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75816, 2013. @article{Zhu2013, The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. |
Xiao Lin Zhu; Shu Ping Tan; Fu De Yang; Wei Sun; Chong Sheng Song; Jie Feng Cui; Yan Li Zhao; Feng Mei Fan; Ya Jun Li; Yun Long Tan; Yi Zhuang Zou Visual scanning of emotional faces in schizophrenia Journal Article In: Neuroscience Letters, vol. 552, pp. 46–51, 2013. @article{Zhu2013a, This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they directed significantly more, and longer, fixations to interest areas (IAs) such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of fixations to IAs relative to the total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and a lower IA fixation number/s for negative faces. These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have specific deficits in processing negative faces, and negative symptoms may affect visual scanning parameters. |
Eckart Zimmermann The reference frames in saccade adaptation Journal Article In: Journal of Neurophysiology, vol. 109, pp. 1815, 2013. @article{Zimmermann2013a, Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Spatial position information accumulates steadily over time Journal Article In: Journal of Neuroscience, vol. 33, no. 47, pp. 18396–18401, 2013. @article{Zimmermann2013, One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked. |
Heather Cleland Woods; Christoph Scheepers; K. A. Ross; Colin A. Espie; Stephany M. Biello What are you looking at? Moving toward an attentional timeline in insomnia: A novel semantic eye tracking study Journal Article In: Sleep, vol. 36, no. 10, pp. 1491–1499, 2013. @article{Woods2013, STUDY OBJECTIVES: To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology which is novel to sleep research to obtain a continual record of where the eyes (and therefore attention) are being allocated with regard to sleep and neutral stimuli. DESIGN: A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep positive, sleep negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS: Sleep group classification was determined by screening interview and PSQI score (> 8 = insomnia, < 3 = good sleeper). MEASUREMENTS AND RESULTS: Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good sleep controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups, with the largest effect sizes seen in the insomnia group. CONCLUSIONS: This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep (particularly negative) stimuli, but is suggestive of a neurocognitive deficit in line with recent research. |
Nicola M. Wöstmann; Désirée S. Aichert; Anna Costa; Katya Rubia; Hans-Jürgen Möller; Ulrich Ettinger Reliability and plasticity of response inhibition and interference control Journal Article In: Brain and Cognition, vol. 81, no. 1, pp. 82–94, 2013. @article{Woestmann2013, This study investigated the internal reliability, temporal stability and plasticity of commonly used measures of inhibition-related functions. Stop-signal, go/no-go, antisaccade, Simon, Eriksen flanker, Stroop and Continuous Performance tasks were administered twice to 23 healthy participants over a period of approximately 11 weeks in order to assess test-retest correlations, internal consistency (Cronbach's alpha), and systematic between- as well as within-session performance changes. Most of the inhibition-related measures showed good test-retest reliabilities and internal consistencies, with the exception of the stop-signal reaction time measure, which showed poor reliability. Generally, no systematic performance changes were observed across the two assessments, with the exception of four variables of the Eriksen flanker, Simon and Stroop tasks, which showed reduced variability of reaction time and an improvement in response time for incongruent trials at the second assessment. Predominantly stable performance within one test session was shown for most measures. Overall, these results are informative for studies with designs requiring temporally stable parameters, e.g., genetic or longitudinal treatment studies. |
Christiane Wotschack; Reinhold Kliegl Reading strategy modulates parafoveal-on-foveal effects in sentence reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 548–562, 2013. @article{Wotschack2013, Task demands and individual differences have been linked reliably to word skipping during reading. Such differences in fixation probability may imply a selection effect for multivariate analyses of eye-movement corpora if selection effects correlate with word properties of skipped words. For example, with fewer fixations on short and highly frequent words the power to detect parafoveal-on-foveal effects is reduced. We demonstrate that increasing the fixation probability on function words with a manipulation of the expected difficulty and frequency of questions reduces an age difference in skipping probability (i.e., old adults become comparable to young adults) and helps to uncover significant parafoveal-on-foveal effects in this group of old adults. We discuss implications for the comparison of results of eye-movement research based on multivariate analysis of corpus data with those from display-contingent manipulations of target words. |
Timothy J. Wright; Walter R. Boot; Chelsea S. Morgan Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness Journal Article In: Acta Psychologica, vol. 144, no. 1, pp. 6–11, 2013. @article{Wright2013, Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB. |
Chia-Chien Wu; Eileen Kowler Timing of saccadic eye movements during visual search for multiple targets Journal Article In: Journal of Vision, vol. 13, no. 11, pp. 11–11, 2013. @article{Wu2013, Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. |
Esther X. W. Wu; Syed O. Gilani; Jeroen J. A. Boxtel; Ido Amihai; Fook K. Chua; Shih-Cheng Yen Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 17–17, 2013. @article{Wu2013a, Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis. |
Yan Jing Wu; Filipe Cristino; Charles Leek; Guillaume Thierry Non-selective lexical access in bilinguals is spontaneous and independent of input monitoring: Evidence from eye tracking Journal Article In: Cognition, vol. 129, no. 2, pp. 418–425, 2013. @article{Wu2013b, Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing. |
Chenjiang Xie; Tong Zhu; Chunlin Guo; Yimin Zhang Measuring IVIS impact to driver by on-road test and simulator experiment Journal Article In: Procedia Social and Behavioral Sciences, vol. 96, pp. 1566–1577, 2013. @article{Xie2013, This work examined the effects of using in-vehicle information systems (IVIS) on drivers in an on-road test and a simulator experiment. Twelve participants took part in the test. In the on-road test, drivers performed the driving task with voice-prompt and non-voice-prompt navigation devices mounted in different positions. In the simulator experiment, secondary tasks, including cognitive, visual and manual tasks, were performed in a driving simulator. Subjective rating was used to assess the mental workload of drivers in both the on-road test and the simulator experiment. The impact of task complexity and reaction mode was also investigated. The results of the test and the simulation showed that position 1 was more comfortable for drivers than the other two positions and caused less mental load; drivers tended to support this result in their subjective ratings. IVIS with voice prompts place less visual demand on drivers. Mental load grows as the difficulty of the task increases. A cognitive task requiring manual reaction causes a higher mental load than a cognitive task that does not. These results may have practical implications for in-vehicle information system design. |
Buyun Xu; James W. Tanaka Does face inversion qualitatively change face processing: An eye movement study using a face change detection task Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 1–16, 2013. @article{Xu2013, Understanding the Face Inversion Effect is important for the study of face processing. Some researchers believe that the processing of inverted faces is qualitatively different from the processing of upright faces because inversion leads to a disproportionate performance decrement on the processing of different kinds of face information. Other researchers believe that the difference is quantitative because the processing of all kinds of facial information is less efficient due to the change in orientation and thus, the performance decrement is not disproportionate. To address the Qualitative and Quantitative debate, the current study employed a response-contingent, change detection paradigm to study eye movement during the processing of upright and inverted faces. In this study, configural and featural information were parametrically and independently manipulated in the eye and mouth region of the face. The manipulations for configural information involved changing the interocular distance between the eyes or the distance between the mouth and the nose. The manipulations for featural information involved changing the size of the eyes or the size of the mouth. The main results showed that change detection was more difficult in inverted than upright faces. Specifically, performance declined when the manipulated change occurred in the mouth region, despite the greater efforts allocated to the mouth region. Moreover, compared to upright faces where fixations were concentrated on the eyes and nose regions, inversion produced a higher concentration of fixations on the nose and mouth regions. 
Finally, change detection performance was better when the last fixation prior to response was located on the region of change, and the relationship between last fixation location and accuracy was stronger for inverted than upright faces. These findings reinforce the connection between eye movements and face processing strategies, and suggest that face inversion produces a qualitative disruption of looking behavior in the mouth region. |
Louise O'Hare; Alasdair D. F. Clarke; Paul B. Hibbard Visual search and visual discomfort Journal Article In: Perception, vol. 42, no. 1, pp. 1–15, 2013. @article{OHare2013, Certain visual stimuli evoke perceptions of discomfort in non-clinical populations. We investigated the impact of stimuli previously judged as uncomfortable by non-clinical populations on a visual search task. One stimulus that has been shown to affect discomfort judgments is noise that has been filtered to have particular statistical properties (Juricevic et al., 2010, Perception, 39, 884–899). A second type of stimulus associated with visual discomfort is striped patterns (Wilkins et al., 1984, Brain, 107, 989–1017). These stimuli were used as backgrounds in a visual search task, to determine their influence on search performance. Results showed that, while striped backgrounds did have an impact on visual search performance, this depended on the similarity between the target and background in orientation and spatial frequency. We found no evidence for a more generalised effect of discomfort on performance. |
Sven Ohl; Stephan A. Brandt; Reinhold Kliegl The generation of secondary saccades without postsaccadic visual feedback Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–13, 2013. @article{Ohl2013, Primary saccades are often followed by small secondary saccades, which are generally thought to reduce the distance between the saccade endpoint and target location. Accumulated evidence demonstrates that secondary saccades are subject to various influences, among which retinal feedback during postsaccadic fixation constitutes only one important signal. Recently, we reported that target eccentricity and an orientation bias influence the generation of secondary saccades. In the present study, we examine secondary saccades in the absence of postsaccadic visual feedback. Although extraretinal signals (e.g., efference copy) have received widespread attention in eye-movement studies, it is still unclear whether an extraretinal error signal contributes to the programming of secondary saccades. We have observed that secondary saccade latency and amplitude depend on primary saccade error despite the absence of postsaccadic visual feedback. Strong evidence for an extraretinal error signal influencing secondary saccade programming is given by the observation that secondary saccades are more likely to be oriented in a direction opposite to the primary saccade as primary saccade error shifts from target undershoot to overshoot. We further show how the functional relationship between primary saccade landing position and secondary saccade characteristics varies as a function of target eccentricity. We propose that initial target eccentricity and an extraretinal error signal codetermine the postsaccadic activity distribution in the saccadic motor map when no visual feedback is available. |
Bettina Olk Measuring the allocation of attention in the Stroop task: Evidence from eye movement patterns Journal Article In: Psychological Research, vol. 77, no. 2, pp. 106–115, 2013. @article{Olk2013, Attention plays a crucial role in the Stroop task, which requires attending to less automatically processed task-relevant attributes of stimuli and the suppression of involuntary processing of task-irrelevant attributes. The experiment assessed the allocation of attention by monitoring eye movements throughout congruent and incongruent trials. Participants viewed two stimulus arrays that differed regarding the amount of items and their numerical value and judged by manual response which of the arrays contained more items, while disregarding their value. Different viewing patterns were observed between congruent (e.g., larger array of numbers with higher value) and incongruent (e.g., larger array of numbers with lower value) trials. The direction of first saccades was guided by task-relevant information but in the incongruent condition directed more frequently towards task-irrelevant information. The data further suggest that the difference in the deployment of attention between conditions changes throughout a trial, likely reflecting the impact and resolution of the conflict. For instance, stimulus arrays in line with the correct response were attended for longer and fixations were longer for incongruent trials, with the second fixation and considering all fixations. By the time of the correct response, this latter difference between conditions was absent. Possible mechanisms underlying eye movement patterns are discussed. |
Hans P. Op de Beeck; Ben Vermaercke; Daniel G. Woolley; Nicole Wenderoth Combinatorial brain decoding of people's whereabouts during visuospatial navigation Journal Article In: Frontiers in Neuroscience, vol. 7, pp. 78, 2013. @article{OpdeBeeck2013, Complex behavior typically relies upon many different processes which are related to activity in multiple brain regions. In contrast, neuroimaging analyses typically focus upon isolated processes. Here we present a new approach, combinatorial brain decoding, in which we decode complex behavior by combining the information which we can retrieve from the neural signals about the many different sub-processes. The case in point is visuospatial navigation. We explore the extent to which the route travelled by human subjects (N = 3) in a complex virtual maze can be decoded from activity patterns as measured with functional magnetic resonance imaging. Preliminary analyses suggest that it is difficult to directly decode spatial position from regions known to contain an explicit cognitive map of the environment, such as the hippocampus. Instead, we were able to indirectly derive spatial position from the pattern of activity in visual and motor cortex. The non-spatial representations in these regions reflect processes which are inherent to navigation, such as which stimuli are perceived at which point in time and which motor movement is executed when (e.g., turning left at a crossroad). Highly successful decoding of routes followed through the maze was possible by combining information about multiple aspects of navigation events across time and across multiple cortical regions. This "proof of principle" study highlights how visuospatial navigation is related to the combined activity of multiple brain regions, and establishes combinatorial brain decoding as a means to study complex mental events that involve a dynamic interplay of many cognitive processes. |
Jill X. O'Reilly; Urs Schuffelgen; Steven F. Cuell; Timothy E. J. Behrens; Rogier B. Mars; Matthew F. S. Rushworth Dissociable effects of surprise and model update in parietal and anterior cingulate cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 38, pp. E3660–E3669, 2013. @article{OReilly2013, Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (IS) and Kullback-Leibler divergence (DKL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/IS and updating/DKL. Although surprise/IS was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (DKL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating with an early positive effect of surprise and late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. 
The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena, such as extinction, Balint syndrome, and neglect. |
Jorge Otero-Millan; Stephen L. Macknik; Rachel E. Langston; Susana Martinez-Conde An oculomotor continuum from exploration to fixation Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 15, pp. 6175–6180, 2013. @article{OteroMillan2013, During visual exploration, saccadic eye movements scan the scene for objects of interest. During attempted fixation, the eyes are relatively still but often produce microsaccades. Saccadic rates during exploration are higher than those of microsaccades during fixation, reinforcing the classic view that exploration and fixation are two distinct oculomotor behaviors. An alternative model is that fixation and exploration are not dichotomous, but are instead two extremes of a functional continuum. Here, we measured the eye movements of human observers as they either fixed their gaze on a small spot or scanned natural scenes of varying sizes. As scene size diminished, so did saccade rates, until they were continuous with microsaccadic rates during fixation. Other saccadic properties varied as a function of image size as well, forming a continuum with microsaccadic parameters during fixation. This saccadic continuum extended to nonrestrictive, ecological viewing conditions that allowed all types of saccades and fixation positions. Eye movement simulations moreover showed that a single model of oculomotor behavior can explain the saccadic continuum from exploration to fixation, for images of all sizes. These findings challenge the view that exploration and fixation are dichotomous, suggesting instead that visual fixation is functionally equivalent to visual exploration on a spatially focused scale. |
Jorge Otero-Millan; Rosalyn Schneider; R. John Leigh; Stephen L. Macknik; Susana Martinez-Conde Saccades during attempted fixation in Parkinsonian disorders and recessive ataxia: From microsaccades to square-wave jerks Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58535, 2013. @article{OteroMillan2013a, During attempted visual fixation, saccades of a range of sizes occur. These "fixational saccades" include microsaccades, which are not apparent in regular clinical tests, and "saccadic intrusions", predominantly horizontal saccades that interrupt accurate fixation. Square-wave jerks (SWJs), the most common type of saccadic intrusion, consist of an initial saccade away from the target followed, after a short delay, by a "return saccade" that brings the eye back onto target. SWJs are present in most human subjects, but are prominent by their increased frequency and size in certain parkinsonian disorders and in recessive, hereditary spinocerebellar ataxias. Here we asked whether fixational saccades showed distinctive features in various parkinsonian disorders and in recessive ataxia. Although some saccadic properties differed between patient groups, in all conditions larger saccades were more likely to form SWJs, and the intervals between the first and second saccade of SWJs were similar. These findings support the proposal of a common oculomotor mechanism that generates all fixational saccades, including microsaccades and SWJs. The same mechanism also explains how the return saccade in SWJs is triggered by the position error that occurs when the first saccadic component is large, both in the healthy brain and in neurological disease. |
Ralph Radach Monitoring local comprehension monitoring in sentence reading Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 191–206, 2013. @article{Radach2013, Comprehension monitoring is considered a key issue in current debates on ways to improve children's reading comprehension. However, the processes and mechanisms underlying this skill are currently not well understood. This article describes one of the first attempts to study comprehension monitoring using eye-tracking methodology. Students in fifth grade were asked to read sentences for comprehension while also checking whether the meaning of the sentence was generally correct or incorrect. Items required the processing of conjunctive relations between two clauses that were either causally consistent or inconsistent. In addition, the polarity of the relation was varied by replacing the conjunction “because” with “although,” creating an additional level of processing difficulty. Inconsistency played a minor role and was dominated by polarity effects that were also modulated by the correctness of the answer. The present task represents an effective tool to study local comprehension monitoring and highlights the importance of conjunctive relations for maintaining textual coherence during reading. |