All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2007 |
Miia Sainio; Jukka Hyönä; Kazuo Bingushi; Raymond Bertram The role of interword spacing in reading Japanese: An eye movement study Journal Article In: Vision Research, vol. 47, no. 20, pp. 2575–2584, 2007. @article{Sainio2007, The present study investigated the role of interword spacing in a naturally unspaced language, Japanese. Eye movements were registered of native Japanese readers reading pure Hiragana (syllabic) and mixed Kanji-Hiragana (ideographic and syllabic) text in spaced and unspaced conditions. Interword spacing facilitated both word identification and eye guidance when reading syllabic script, but not when the script contained ideographic characters. We conclude that in reading Hiragana interword spacing serves as an effective segmentation cue. In contrast, spacing information in mixed Kanji-Hiragana text is redundant, since the visually salient Kanji characters serve as effective segmentation cues by themselves. |
Jean Saint-Aubin; Sébastien Tremblay; Annie Jalbert Eye movements and serial memory for visual-spatial information: Does time spent fixating contribute to recall? Journal Article In: Experimental Psychology, vol. 54, no. 4, pp. 264–272, 2007. @article{SaintAubin2007, This research investigated the nature of encoding and its contribution to serial recall for visual-spatial information. In order to do so, we examined the relationship between fixation duration and recall performance. Using the dot task–a series of seven dots spatially distributed on a monitor screen is presented sequentially for immediate recall–performance and eye-tracking data were recorded during the presentation of the to-be-remembered items. When participants were free to move their eyes at their will, both fixation durations and probability of correct recall decreased as a function of serial position. Furthermore, imposing constant durations of fixation across all serial positions had a beneficial impact (though relatively small) on item but not order recall. Great care was taken to isolate the effect of fixation duration from that of presentation duration. Although eye movement at encoding contributes to immediate memory, it is not decisive in shaping serial recall performance. Our results also provide further evidence that the distinction between item and order information, well-established in the verbal domain, extends to visual-spatial information. |
Ardi Roelofs Attention and gaze control in picture naming, word reading, and word categorizing Journal Article In: Journal of Memory and Language, vol. 57, no. 2, pp. 232–251, 2007. @article{Roelofs2007, The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107-142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88-125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings. |
Martin Rolfs On the limited role of target onset in the gap task: Support for the motor-preparation hypothesis Journal Article In: Journal of Vision, vol. 7, no. 10, pp. 1–20, 2007. @article{Rolfs2007, Saccade latency is reduced when the fixation stimulus is removed shortly before a saccade target appears (gap task) as compared to when the fixation stimulus remains present (overlap task). To test the assumption that this gap effect benefits from advanced motor preparation (M. Paré & D. P. Munoz, 1996), we manipulated target onset independently of the signal to launch a saccade (peripheral offset at the mirror location). In Experiment 1, we showed that, when the target appears at one of only two possible locations, target onset strongly improves performance (lower latency, higher accuracy) in the overlap task but not in the gap task. In Experiment 2, we found that the lack of an effect of target onset in the gap task was not due to inhibition of a reflexive response to the transient associated with the offset (go signal) in our task. In Experiment 3, we manipulated target onset and target uncertainty (two, four, or eight potential target locations) in gap and overlap tasks. As target uncertainty increased, the gap effect decreased, and the effect of target onset on saccade latency in the gap condition became greater. Overall, our results suggest, in line with the motor-preparation hypothesis, that saccade metrics in a gap task are computed before the target is actually displayed and that advanced motor preparation is enhanced when the location of the target is predictable. Analyses of anticipations and regular-latency errors corroborated this view. |
Chloé Prado; Matthieu Dubois; Sylviane Valdois The eye movements of dyslexic children during reading and visual search: Impact of the visual attention span Journal Article In: Vision Research, vol. 47, no. 19, pp. 2521–2530, 2007. @article{Prado2007, The eye movements of 14 French dyslexic children having a VA span reduction and 14 normal readers were compared in two tasks of visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties to increase their VA span according to the task request. |
Frank A. Proudlock; Irene Gottlob Physiology and pathology of eye-head coordination Journal Article In: Progress in Retinal and Eye Research, vol. 26, no. 5, pp. 486–515, 2007. @article{Proudlock2007, Human head movement control can be considered as part of the oculomotor system since the control of gaze involves coordination of the eyes and head. Humans show a remarkable degree of flexibility in eye-head coordination strategies; nonetheless, an individual will often demonstrate stereotypical patterns of eye-head behaviour for a given visual task. This review examines eye-head coordination in laboratory-based visual tasks, such as saccadic gaze shifts and combined eye-head pursuit, and in common tasks in daily life, such as reading. The effect of the aging process on eye-head coordination is then reviewed from infancy through to senescence. Consideration is also given to how pathology can affect eye-head coordination from the lowest through to the highest levels of oculomotor control, comparing conditions as diverse as eye movement restrictions and schizophrenia. Given the adaptability of the eye-head system we postulate that this flexible system is under the control of the frontal cortical regions, which assist in planning, coordinating and executing behaviour. We provide evidence for this based on changes in eye-head coordination dependent on the context and expectation of presented visual stimuli, as well as from changes in eye-head coordination caused by frontal lobe dysfunction. |
Frederic Sares; Lionel Granjon; Abdelrhani Benraiss; Philippe Boulinguez Analyzing head roll and eye torsion by means of offline image processing Journal Article In: Behavior Research Methods, vol. 39, no. 3, pp. 590–599, 2007. @article{Sares2007, Ocular torsion is a key problem in the understanding of many visual perceptual effects. However, since it is difficult to record, its integration with other sensorimotor signals is still poorly understood. Unfortunately, eyetracker systems are generally not dedicated to the monitoring of eye torsion. In addition, the classical methods used with video-based systems present some limits in the accuracy of torsion calculation. These limits are especially related to the detection of pupil center and the effects of pupil size changes. This article aims at (1) proposing a solution to analyze ocular torsion together with head roll using EyeLink II or similar equipment, (2) reviewing and adapting classical polar cross-correlation methods in order to improve the accuracy of torsion measurement, (3) providing a lower-cost method compared with the existing ones. Video sequences issued from the EyeLink II host computer monitor were recorded by means of a second computer equipped with a video acquisition card and converted into image sequences. Images were analyzed with algorithms of pupil center detection (median-based algorithm), torsion analysis (adapted polar cross-correlation method which takes into account pupil size variations) and marker tracking (head roll analysis). This method was tested on virtual eye images. Results are discussed with respect to other algorithms found in the literature. |
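The polar cross-correlation idea described in this entry can be sketched briefly. The Python snippet below, which assumes a grayscale eye image held in a NumPy array and a pupil-center estimate obtained elsewhere, samples an iris intensity profile around the pupil and estimates torsion as the circular shift that maximizes correlation with a reference profile; it is only an illustration and omits the pupil-size compensation that the article adds to the classical method.

```python
import numpy as np

def iris_profile(image, center, radius, n_angles=360):
    """Sample grey-level intensity on a circle of the given radius around the
    pupil center; image is a 2-D grayscale array, center is (row, col)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rows = np.clip(np.round(center[0] + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(center[1] + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    return image[rows, cols].astype(float)

def torsion_angle(reference_profile, current_profile):
    """Estimate torsion as the circular shift (in degrees) of the current iris
    profile that maximizes its correlation with the reference profile."""
    ref = reference_profile - reference_profile.mean()
    cur = current_profile - current_profile.mean()
    scores = [np.dot(ref, np.roll(cur, s)) for s in range(len(ref))]
    shift = int(np.argmax(scores))
    angle = shift * 360.0 / len(ref)
    return angle if angle <= 180.0 else angle - 360.0  # signed torsion in (-180, 180]
```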
Naoyuki Sato; Yoko Yamaguchi Theta synchronization networks emerge during human object-place memory encoding Journal Article In: NeuroReport, vol. 18, no. 5, pp. 419–424, 2007. @article{Sato2007, Recent rodent hippocampus studies have suggested that theta rhythm-dependent neural dynamics ('theta phase precession') is essential for on-line memory formation. A computational study indicated that phase precession enables human object-place association memory with voluntary eye movements, although it is still an open question whether the human brain uses these dynamics. Here we elucidated subsequent memory-correlated activities in human scalp electroencephalography in an object-place association memory task designed according to the former computational study. Our results successfully demonstrated that subsequent memory recall is characterized by an increase in theta power and coherence, and further, that multiple theta synchronization networks emerge. These findings suggest that humans share theta dynamics with rodents in episodic memory formation. |
Paul Sauleau; Pierre Pollak; Paul Krack; Denis Pélisson; Alain Vighetto; Alim Louis Benabid; Caroline Tilikete Contraversive eye deviation during stimulation of the subthalamic region Journal Article In: Movement Disorders, vol. 22, no. 12, pp. 1810–1813, 2007. @article{Sauleau2007, Contraversive eye deviation (CED) is most often observed intraoperatively during subthalamic nucleus implantation for Parkinson's disease and considered to result from wrong electrode positioning. We report on a woman, bilaterally implanted in the subthalamic nucleus for severe Parkinson's disease disclosing long-lasting CED only when the stimulators were activated separately. Clinical examination and eye movements recording in this patient showed that CED occurred when stimulation was applied at the site and at similar intensity used for the best antiparkinsonian effect. These results suggest that the subthalamic area may be involved in orienting movements, either through the subthalamic nucleus itself or the fibers from the Frontal Eye Fields. Interestingly, this report shows that CED may be corrected by bilateral stimulation and that CED may not necessarily implicate electrode repositioning. |
Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Contrast sensitivity during the initiation of smooth pursuit eye movements Journal Article In: Vision Research, vol. 47, no. 21, pp. 2767–2777, 2007. @article{Schuetz2007, Eye movements challenge the perception of a stable world by inducing retinal image displacement. During saccadic eye movements visual stability is accompanied by a remapping of visual receptive fields, a compression of visual space and perceptual suppression. Here we explore whether a similar suppression changes the perception of briefly presented low contrast targets during the initiation of smooth pursuit eye movements. In a 2AFC design we investigated the contrast sensitivity for threshold-level stimuli during the initiation of smooth pursuit and during saccades. Pursuit was elicited by horizontal step-ramp and ramp stimuli. At any time from 200 ms before to 500 ms after pursuit stimulus onset, a blurred 0.3 deg wide horizontal line with low contrast just above detection threshold appeared for 10 ms either 2 deg above or below the pursuit trajectory. Observers had to pursue the moving stimulus and to indicate whether the target line appeared above or below the pursuit trajectory. In contrast to perceptual suppression effects during saccades, no pronounced suppression was found at pursuit onset for step-ramp motion. When pursuit was elicited by a ramp stimulus, pursuit initiation was accompanied by catch-up saccades, which caused saccadic suppression. Additionally, contrast sensitivity was attenuated at the time of pursuit or saccade stimulus onset. This attenuation might be due to an attentional deficit, because the stimulus required the focus of attention during the programming of the following eye movement. |
Alexander C. Schütz; Elias Delipetkos; Doris I. Braun; Dirk Kerzel; Karl R. Gegenfurtner Temporal contrast sensitivity during smooth pursuit eye movements Journal Article In: Journal of Vision, vol. 7, no. 13, pp. 1–15, 2007. @article{Schuetz2007a, During smooth pursuit eye movements, stimuli other than the pursuit target move across the retina, and this might affect their detectability. We measured detection thresholds for vertically oriented Gabor stimuli with different temporal frequencies (1, 4, 8, 12, 16, 20, and 24 Hz) of the sinusoids. Observers kept fixation on a small target spot that was either stationary or moved horizontally at a speed of 8 deg/s. The sinusoid of the Gabor stimuli moved either in the same or in the opposite direction as the pursuit target. Observers had to indicate whether the Gabor stimuli were displayed 4° above or below the target spot. Results show that contrast sensitivity was mainly determined by retinal-image motion but was slightly reduced during smooth pursuit eye movements. Moreover, sensitivity for motion opposite to pursuit direction was reduced in comparison to motion in pursuit direction. The loss in sensitivity for peripheral targets during pursuit can be interpreted in terms of space-based attention to the pursuit target. The loss of sensitivity for motion opposite to pursuit direction can be interpreted as feature-based attention to the pursuit direction. |
D. J. Quinlan; J. C. Culham fMRI reveals a preference for near viewing in the human parieto-occipital cortex Journal Article In: NeuroImage, vol. 36, no. 1, pp. 167–187, 2007. @article{Quinlan2007, Posterior parietal cortex in primates contains several functional areas associated with visual control of body effectors (e.g., arm, hand and head) which contain neurons tuned to specific depth ranges appropriate for the effector. For example, the macaque ventral intraparietal area (VIP) is involved in head movements and is selective for motion in near-space around the head. We used functional magnetic resonance imaging to examine activation in the putative human VIP homologue (pVIP), as well as parietal and occipital cortex, as a function of viewing distance when multiple cues to target depth were available (Expt 1) and when only oculomotor cues were available (Expt 2). In Experiment 1, subjects viewed stationary or moving disks presented at three distances (with equal retinal sizes). Although activation in pVIP showed no preference for any particular spatial range, the dorsal parieto-occipital sulcus (dPOS) demonstrated a near-space preference, with activation highest for near viewing, moderate for arm's length viewing, and lowest for far viewing. In Experiment 2, we investigated whether the near response alone (convergence of the eyes, accommodation of the lens and pupillary constriction) was sufficient to elicit this same activation pattern. Subjects fixated lights presented at three distances which were illuminated singly (with luminance and visual angle equated across distances). dPOS displayed the same gradient of activation (Near > Medium > Far) as that seen in Experiment 1, even with reduced cues to depth. dPOS seems to reflect the status of the near response (perhaps driven largely by vergence angle) and may provide areas in the dorsal visual stream with spatial information useful for guiding actions toward targets in depth. |
Keith Rayner; Xingshan Li; Carrick C. Williams; Kyle R. Cave; Arnold D. Well Eye movements during information processing tasks: Individual differences and cultural effects Journal Article In: Vision Research, vol. 47, no. 21, pp. 2714–2726, 2007. @article{Rayner2007, The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were either born in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was little evidence that Chinese participants spent more time looking at the background information (and, conversely less time looking at the foreground information) than the American participants. Also, Chinese participants' fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes, and counting Chinese characters in text. |
Peter C. Gordon; Stephanie Moser Insight into analogies: Evidence from eye movements Journal Article In: Visual Cognition, vol. 15, no. 1, pp. 20–35, 2007. @article{Gordon2007, Eye movements were recorded while participants solved picture analogies in which they had to identify the object in one picture that "went with" an object in another, simultaneously presented picture. The pattern of saccades between objects, but not the time spent looking at objects, was a very sensitive measure of the time course of both relational and object-matching processes. The results show that processing of relations between objects precedes processing of matches between objects for young adults solving simple analogies. |
Sven-Thomas Graupner; Boris M. Velichkovsky; Sebastian Pannasch; Johannes Marx Surprise, surprise: Two distinct components in the visually evoked distractor effect Journal Article In: Psychophysiology, vol. 44, no. 2, pp. 251–261, 2007. @article{Graupner2007, The distractor effect is an inhibition of saccades shortly after a sudden visual event. It has been explained both as an oculomotor reflex and as a manifestation of the orienting response. To clarify which explanation is more appropriate, we investigated a possible habituation of this effect. Visual and auditory distractors were presented at gaze-contingent intervals during the perception of meaningful pictures. Both reflexlike and modifiable components were present in the visual distractor effect, with latencies of about 110 and 180 ms, respectively. The influence of visual and auditory distractors on saccades preceded the earliest changes in cortical ERPs. Only for long-term habituation in the visual modality was a correlation with ERPs (N1) found. |
Karsten Georg; Markus Lappe Spatio-temporal contingency of saccade-induced chronostasis Journal Article In: Experimental Brain Research, vol. 180, pp. 535–539, 2007. @article{Georg2007, During fast, saccadic eye movements visual perception is suppressed. This saccadic suppression prevents erroneous and distracting motion percepts resulting from saccade-induced retinal slip. Although saccadic suppression occurs over a substantial time interval around the saccade, there is no "perceptual gap" during saccades. The mechanisms underlying this temporal perceptual filling-in are unknown. When subjects are asked to perform temporal interval judgements of stimuli presented at the time of saccades, the time interval following the termination of the saccade appears longer than subsequent intervals of identical length. This illusion is known as "chronostasis", because a clock presented at the saccade target seemingly stops for a moment. We test whether chronostasis is a global mechanism that may compensate for the temporal gap associated with saccadic suppression. We show that a clock positioned halfway between the initial fixation point and the saccade target does not exhibit prolongation of the interval following the saccade. The characteristic distortion of temporal perception occurred only in the case of a clock being located at the saccade target. This result suggests a local, object-specific mechanism underlying the stopped clock illusion that might originate from a shift in attention immediately preceding the eye movement. |
Thomas Geyer; Adrian Mühlenen; Hermann J. Müller What do eye movements reveal about the role of memory in visual search? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 60, no. 7, pp. 924–935, 2007. @article{Geyer2007, Horowitz and Wolfe (1998, 2003) have challenged the view that serial visual search involves memory processes that keep track of already inspected locations. The present study used a search paradigm similar to Horowitz and Wolfe's (1998), comparing a standard static search condition with a dynamic condition in which display elements changed locations randomly every 111 ms. In addition to measuring search reaction times, observers' eye movements were recorded. For target-present trials, the search rates were near-identical in the two search conditions, replicating Horowitz and Wolfe's findings. However, the number of fixations and saccade amplitude were larger in the static than in the dynamic condition, whereas fixation duration and the latency of the first saccade were longer in the dynamic condition. These results indicate that an active, memory-guided search strategy was adopted in the static condition, and a passive "sit-and-wait" strategy in the dynamic condition. |
Richard Godijn; Arthur F. Kramer Antisaccade costs with static and dynamic targets Journal Article In: Perception and Psychophysics, vol. 69, no. 5, pp. 802–815, 2007. @article{Godijn2007, In the present study we examined the antisaccade cost (latency difference between antisaccades and prosaccades) in a variety of search tasks. In a series of experiments participants searched for a target and were required to execute a saccade toward (prosaccade) or away (antisaccade) from the target. The results revealed that the antisaccade cost was greater for static targets than for dynamic targets, and it was greater for onset targets than for offset targets. Furthermore, the offset of an onset target interfered with prosaccades, but facilitated antisaccades, resulting in a reduction of the antisaccade cost. To account for the data a model is presented, in which attentional control and working memory processes play an important role in the generation of antisaccades. |
D. J. Hagler; L. Riecke; M. I. Sereno Parietal and superior frontal visuospatial maps activated by pointing and saccades Journal Article In: NeuroImage, vol. 35, no. 4, pp. 1562–1577, 2007. @article{Hagler2007, A recent study from our laboratory demonstrated that parietal cortex contains a map of visual space related to saccades and spatial attention and identified this area as the likely human homologue of the lateral intraparietal (LIP). A human homologue for the parietal reach region (PRR), thought to preferentially encode planned hand movements, has also been recently proposed. Both of these areas, originally identified in the macaque monkey, have been shown to encode space with eye-centered coordinates. Functional magnetic resonance imaging (fMRI) of humans was used to test the hypothesis that the putative human PRR contains a retinotopic map recruited by finger pointing but not saccades and to test more generally for differences in the visuospatial maps recruited by pointing and saccades. We identified multiple maps in both posterior parietal cortex and superior frontal cortex recruited for eye and hand movements, including maps not observed in previous mapping studies. Pointing and saccade maps were generally consistent within single subjects. We have developed new group analysis methods for phase-encoded data, which revealed subtle differences between pointing and saccades, including hemispheric asymmetries, but we did not find evidence of pointing-specific maps of visual space. |
Benjamin Y. Hayden; Michael L. Platt Temporal discounting predicts risk sensitivity in rhesus macaques Journal Article In: Current Biology, vol. 17, no. 1, pp. 49–53, 2007. @article{Hayden2007, Humans and animals tend both to avoid uncertainty and to prefer immediate over future rewards. The comorbidity of psychiatric disorders such as impulsivity, problem gambling, and addiction suggests that a common mechanism may underlie risk sensitivity and temporal discounting [1-6]. Nonetheless, the precise relationship between these two traits remains largely unknown [3, 5]. To examine whether risk sensitivity and temporal discounting reflect a common process, we recorded choices made by two rhesus macaques in a visual gambling task [7] while we varied the delay between trials. We found that preference for the risky option declined with increasing delay between sequential choices in the task, even when all other task parameters were held constant. These results were quantitatively predicted by a model that assumed that the subjective expected utility of the risky option is evaluated based on the expected time of the larger payoff [5, 6]. The importance of the larger payoff in this model suggests that the salience of larger payoffs played a critical role in determining the value of risky options. These data suggest that risk sensitivity may be a product of other cognitive processes, and specifically that myopia for the future and the salience of jackpots control the propensity to take a gamble. |
Timothy L. Hodgson; Marcia Chamberlain; Benjamin A. Parris; Martin James; Nicholas Gutowski; Masud Husain; Christopher Kennard The role of the ventrolateral frontal cortex in inhibitory oculomotor control Journal Article In: Brain, vol. 130, no. 6, pp. 1525–1537, 2007. @article{Hodgson2007, It has been proposed that the inferior/ventrolateral frontal cortex plays a critical role in the inhibitory control of action during cognitive tasks. However, the contribution of this region to the control of eye movements has not been clearly established. Here, we describe the performance of a group of 23 frontal lobe damaged patients in an oculomotor rule switching task for which the association between a centrally presented visual cue and the direction of a saccade could change from trial to trial. A subset of 16 patients also completed the standard antisaccade task. Ventrolateral damage was found to be a significant predictor of errors in both tasks. Analysis of the rate at which patients corrected errors in the rule switching task also revealed an important dissociation between left and right hemisphere damaged patients. Whilst patients with left ventrolateral damage usually corrected response errors with secondary saccades, those with right hemisphere lesions often failed to do so. The results suggest that the inferior frontal cortex forms part of a wider frontal network mediating inhibitory control over stimulus elicited eye movements. The critical role played by the right ventrolateral region in cognitive tasks may arise due to an additional functional specialization for the monitoring and updating of task rules. |
Christoph Helmchen; Stefan Gottschalk; Thurid Sander; Peter Trillenberg; Holger Rambold; Andreas Sprenger Beneficial effects of 3,4-diaminopyridine on positioning downbeat nystagmus in a circumscribed uvulo-nodular lesion [6] Journal Article In: Journal of Neurology, vol. 254, no. 8, pp. 1126–1128, 2007. @article{Helmchen2007, Central positioning downbeat nystagmus (pDBN) presents with transient nystagmus in supine or the head hanging position in the absence of DBN in the head erect position. In contrast to central positional downbeat nystagmus, pDBN requires rapid head positioning manoeuvres to be elicited. The pathomechanism and therapy of central pDBN are not yet known, and circumscribed lesions are missing so far [1, 2]. We examined the effect of 3,4-diaminopyridine (DAP) [3, 4] on the oculomotor behavior of a patient with pDBN. |
Yun Xu; Emily C. Higgins; Mei Xiao; Marc Pomplun Mapping the color space of saccadic selectivity in visual search Journal Article In: Cognitive Science, vol. 31, no. 5, pp. 877–887, 2007. @article{Xu2007, Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does similar precisely mean? Can we predict the amount of attention that a display color will receive during a search for a given target color? To tackle this question, two color-search experiments measuring the selectivity of saccadic eye movements and mapping out its underlying color space were conducted. A variety of mathematical models, predicting saccadic selectivity for given target and display colors, were devised and evaluated. The results suggest that applying a Gaussian function to a weighted Euclidean distance in a slightly modified HSI color space is the best predictor of saccadic selectivity in the chosen paradigm. Hue and intensity information by itself provides a basis for useful predictors, spanning a possibly spherical color space of saccadic selectivity. Although the current models cannot predict saccadic selectivity values for a wide variety of visual search tasks, they reveal some characteristics of color search that are of both theoretical and applied interest, such as for the design of human-computer interfaces. |
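To make the modelling concrete, the sketch below illustrates the general form this abstract describes: a Gaussian applied to a weighted Euclidean distance between target and display colors in an HSI-like space. The plain textbook HSI conversion, the weights, and the Gaussian width are placeholder assumptions, not the slightly modified space or fitted parameters reported in the paper.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard textbook RGB (values in [0, 1]) to HSI conversion:
    returns (hue in radians, saturation, intensity)."""
    r, g, b = rgb
    intensity = (r + g + b) / 3.0
    saturation = 0.0 if intensity == 0 else 1.0 - min(r, g, b) / intensity
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    hue = np.arccos(np.clip(num / den, -1.0, 1.0))
    if b > g:
        hue = 2.0 * np.pi - hue
    return hue, saturation, intensity

def predicted_selectivity(target_rgb, display_rgb, weights=(1.0, 0.5, 0.5), sigma=0.4):
    """Gaussian of a weighted Euclidean distance between two colors in HSI space;
    the weights and sigma are illustrative free parameters, not fitted values."""
    h1, s1, i1 = rgb_to_hsi(target_rgb)
    h2, s2, i2 = rgb_to_hsi(display_rgb)
    dh = np.angle(np.exp(1j * (h1 - h2)))  # circular hue difference in [-pi, pi]
    d2 = weights[0] * dh ** 2 + weights[1] * (s1 - s2) ** 2 + weights[2] * (i1 - i2) ** 2
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

# A display color close to the search target in hue should attract more saccades
print(predicted_selectivity((1.0, 0.1, 0.1), (0.9, 0.2, 0.1)))  # similar reds -> high
print(predicted_selectivity((1.0, 0.1, 0.1), (0.1, 0.2, 1.0)))  # red vs blue -> low
```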
Jason H. Wong; Matthew S. Peterson; Anne P. Hillstrom Are changes in semantic and structural information sufficient for oculomotor capture? Journal Article In: Journal of Vision, vol. 7, no. 12, pp. 3.1–10, 2007. @article{Wong2007, The abrupt onset of objects often involuntarily captures attention (J. Jonides & S. Yantis, 1988) and the eyes (J. Theeuwes, A. F. Kramer, S. Hahn, & D. Irwin, 1998). The new-object hypothesis proposes that the appearance of something new (new semantic and structural information and/or spatiotemporal newness), not the accompanying low-level perceptual transients, causes an involuntary reorienting of attention (S. Yantis & A. P. Hillstrom, 1994). We investigated whether semantic and structural changes alone are sufficient to capture the eyes as strongly as abrupt onsets do. Observers moved their eyes to a target object while another object either onset or smoothly and quickly morphed. If semantic and structural changes are sufficient to capture the eyes, morphs should capture the eyes as strongly as onsets do. Results show that morphs were not fixated first as often as onsets. These findings indicate that new semantic and structural information alone is far less effective at capturing the eyes than onsets are. |
Ian Cunnings; Harald Clahsen The time-course of morphological constraints: Evidence from eye-movements during reading Journal Article In: Cognition, vol. 104, no. 3, pp. 476–494, 2007. @article{Cunnings2007, Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater vs. mice eater), a contrast that has been ascribed to the semantics of compounds. Measuring eye-movements during reading, this study examined how morphological and semantic information become available over time during the processing of a compound. We found that the morphological constraint affected both early and late eye-movement measures, whereas the semantic constraint for singular non-heads only affected late measures of processing. These results indicate that morphological information becomes available earlier than semantic information during the processing of compounds. |
Joan M. Dafoe; Irene T. Armstrong; Douglas P. Munoz The influence of stimulus direction and eccentricity on pro- and anti-saccades in humans Journal Article In: Experimental Brain Research, vol. 179, no. 4, pp. 563–570, 2007. @article{Dafoe2007, We examined the sensory and motor influences of stimulus eccentricity and direction on saccadic reaction times (SRTs), direction-of-movement errors, and saccade amplitude for stimulus-driven (prosaccade) and volitional (antisaccade) oculomotor responses in humans. Stimuli were presented at five eccentricities, ranging from 0.5 degrees to 8 degrees, and in eight radial directions around a central fixation point. At 0.5 degrees eccentricity, participants showed delayed SRT and increased direction-of-movement errors consistent with misidentification of the target and fixation points. For the remaining eccentricities, horizontal saccades had shorter mean SRT than vertical saccades. Stimuli in the upper visual field trigger overt shifts in gaze more easily and faster than in the lower visual field: prosaccades to the upper hemifield had shorter SRT than to the lower hemifield, and more anti-saccade direction-of-movement errors were made into the upper hemifield. With the exception of the 0.5 degrees stimuli, SRT was independent of eccentricity. Saccade amplitude was dependent on target eccentricity for prosaccades, but not for antisaccades within the range we tested. Performance matched behavioral measures described previously for monkeys performing the same tasks, confirming that the monkey is a good model for human oculomotor function. We conclude that an upper hemifield bias led to a decrease in SRT and an increase in direction errors. |
Delphine Dahan; M. Gareth Gaskell The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition Journal Article In: Journal of Memory and Language, vol. 57, no. 4, pp. 483–501, 2007. @article{Dahan2007, Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants' responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal. |
Michael Dambacher; Reinhold Kliegl Synchronizing timelines: Relations between fixation durations and N400 amplitudes during sentence reading Journal Article In: Brain Research, vol. 1155, no. 1, pp. 147–162, 2007. @article{Dambacher2007, We examined relations between eye movements (single-fixation durations) and RSVP-based event-related potentials (ERPs; N400s) recorded during reading the same sentences in two independent experiments. Longer fixation durations correlated with larger N400 amplitudes. Word frequency and predictability of the fixated word as well as the predictability of the upcoming word accounted for this covariance in a path-analytic model. Moreover, larger N400 amplitudes entailed longer fixation durations on the next word, a relation accounted for by word frequency. This pattern offers a neurophysiological correlate for the lag-word frequency effect on fixation durations: word processing is reliably expressed not only in fixation durations on currently fixated words, but also in those on subsequently fixated words. |
Meredyth Daneman; Tracy Lennertz; Brenda Hannon Shallow semantic processing of text: Evidence from eye movements Journal Article In: Language and Cognitive Processes, vol. 22, no. 1, pp. 83–105, 2007. @article{Daneman2007, Evidence for shallow semantic processing has depended on paradigms that required readers to explicitly report whether they noticed an anomalous noun phrase (NP) after reading text such as 'Amanda was bouncing all over because she had taken too many tranquillizing sedatives in one day'. We replicated previous research by showing that readers frequently fail to report the anomaly, and that less-skilled readers have particular difficulty reporting locally anomalous NPs such as tranquillizing stimulants. In addition, we examined the time course of anomaly detection by monitoring readers' eye movements for spontaneous disruptions when encountering the anomalous NPs. The eye fixation data provided evidence for on-line detection of anomalies; however, the detection was delayed. Readers who later reported the anomaly did not spend longer processing the anomalous NP when first encountering it; however, they did spend longer refixating it. Our results challenge orthodox models of comprehension that assume that semantic analysis is exhaustive and complete. |
Julien Cotti; Alain Guillaume; Nadia Alahyane; Denis Pelisson; Jean-Louis Vercher Adaptation of voluntary saccades, but not of reactive saccades, transfers to hand pointing movements Journal Article In: Journal of Neurophysiology, vol. 98, no. 2, pp. 602–612, 2007. @article{Cotti2007, Studying the transfer of visuomotor adaptation from a given effector (e.g., the eye) to another (e.g., the hand) allows us to question whether sensorimotor processes influenced by adaptation are common to both effector control systems and thus to address the level where adaptation takes place. Previous studies have shown only very weak transfer of the amplitude adaptation of reactive saccades–i.e., produced automatically in response to the sudden appearance of visual targets–to hand pointing movements. Here we compared the amplitude of hand pointing movements recorded before and after adaptation of either reactive or voluntary saccades, produced either in a saccade sequence task or in a single saccade task. No transfer to hand pointing movements was found after adaptation of reactive saccades. In contrast, a substantial transfer to the hand was obtained following adaptation of voluntary saccades produced in sequence. Large amounts of transfer between the two saccade types were also found. These results demonstrate that the visuomotor processes influenced by saccadic adaptation depend on the type of saccades and that, in the case of voluntary saccades, they are shared by hand pointing movements. Implications for the neurophysiological substrates of the adaptation of reactive and voluntary saccades are discussed. |
Thérèse Collins; Karine Doré-Mazars; Markus Lappe Motor space structures perceptual space: Evidence from human saccadic adaptation Journal Article In: Brain Research, vol. 1172, no. 1, pp. 32–39, 2007. @article{Collins2007, Saccadic adaptation is the progressive correction of systematic saccade targeting errors. When a saccade to a particular target is adapted, saccades within a spatial window around the target, the adaptation field, are affected as a function of their distance from the adapted target. Furthermore, previous studies suggest that saccadic adaptation might modify the perceptual localization of objects in space. We investigated the localization of visual probes before and after saccadic adaptation, and examined whether the spatial layout of the observed mislocalizations was structurally similar to the saccadic adaptation field. We adapted a horizontal saccade directed towards a target 12° to the right. Thirty-eight saccades towards the right visual hemifield were then used to measure the adaptation field. The adaptation field was asymmetric: transfer of adaptation to saccades larger than the adapted saccade was greater than transfer to smaller saccades. Subjects judged the localization of 39 visual probes both within and outside the adaptation field. The perceived localization of a probe at a given position was proportional to the amount of transfer from the adapted saccade to the saccade towards that position. This similar effect of saccadic adaptation on both the action and perception representations of space suggests that the system providing saccade metrics also contributes to the metric used for the perception of space. |
Dirk Calow; Markus Lappe Local statistics of retinal optic flow for self-motion through natural sceneries Journal Article In: Network: Computation in Neural Systems, vol. 18, no. 4, pp. 343–374, 2007. @article{Calow2007, Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems. |
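The analysis in this entry rests on estimating mutual information between correlated flow and scene variables. Below is a minimal, histogram-based estimator applied to simulated data standing in for, say, inverse depth and retinal speed; the binning and the toy relation between the variables are assumptions for illustration, not the authors' estimator or measurements.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based estimate of I(X; Y) in bits for two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Example: two correlated variables standing in for scene depth and retinal speed
rng = np.random.default_rng(3)
depth = rng.exponential(5.0, size=20000)
speed = 1.0 / depth + rng.normal(0.0, 0.05, size=20000)
print("estimated MI (bits):", round(mutual_information(depth, speed), 2))
```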
Manuel G. Calvo; Lauri Nummenmaa Processing of unattended emotional visual scenes Journal Article In: Journal of Experimental Psychology: General, vol. 136, no. 3, pp. 347–369, 2007. @article{Calvo2007, Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective priming. This occurred even when visual attention was focused on a concurrent verbal task and when foveal gaze-contingent masking prevented overt attention to the primes but only if these had been preexposed and appeared in the left visual field. The preexposure and laterality patterns were different for affective priming and semantic category priming. Affective priming was independent of the nature of the task (i.e., affective or category judgment), whereas semantic priming was not. The authors conclude that affective processing occurs without overt attention–although it is dependent on resources available for covert attention–and that prior experience of the stimulus is required and right-hemisphere dominance is involved. |
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä Emotional and neutral scenes in competition: Orienting, efficiency, and identification Journal Article In: Quarterly Journal of Experimental Psychology, vol. 60, no. 12, pp. 1585–1593, 2007. @article{Calvo2007a, To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on–and shorter saccade latencies to–emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes. |
C. R. Camalier; A. Gotler; A. Murthy; K. G. Thompson; Gordon D. Logan; T. J. Palmeri; Jeffrey D. Schall Dynamics of saccade target selection: Race model analysis of double step and search step saccade production in human and macaque Journal Article In: Vision Research, vol. 47, no. 16, pp. 2187–2211, 2007. @article{Camalier2007, We investigated how saccade target selection by humans and macaque monkeys reacts to unexpected changes of the image. This was explored using double step and search step tasks in which a target, presented alone or as a singleton in a visual search array, steps to a different location on infrequent, random trials. We report that human and macaque monkey performance are qualitatively indistinguishable. Performance is stochastic with the probability of producing a compensated saccade to the final target location decreasing with the delay of the step. Compensated saccades to the final target location are produced with latencies relative to the step that are comparable to or less than the average latency of saccades on trials with no target step. Noncompensated errors to the initial target location are produced with latencies less than the average latency of saccades on trials with no target step. Noncompensated saccades to the initial target location are followed by corrective saccades to the final target location following an intersaccade interval that decreases with the interval between the target step and the initiation of the noncompensated saccade. We show that this pattern of results cannot be accounted for by a race between two stochastically independent processes producing the saccade to the initial target location and another process producing the saccade to the final target location. However, performance can be accounted for by a race between three stochastically independent processes-a GO process producing the saccade to the initial target location, a STOP process interrupting that GO process, and another GO process producing the saccade to the final target location. Furthermore, if the STOP process and second GO process start at the same time, then the model can account for the incidence and latency of mid-flight corrections and rapid corrective saccades. This model provides a computational account of saccade production when the image changes unexpectedly. |
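The three-process race described in this entry can be illustrated with a small Monte Carlo sketch. The finish-time distributions and parameters below are invented for illustration rather than fits to the reported data; the point is only that, with GO1 racing against a STOP and a second GO process that start when the target steps, the probability of a compensated saccade falls as the step is delayed.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(step_delay, go_mean=250, go_sd=40, stop_mean=120, stop_sd=25):
    """One target-step trial of an independent three-process race (illustrative
    parameters, in ms). GO1 drives a saccade to the initial target location;
    STOP and GO2 both start when the target steps, step_delay after array onset.
    If STOP finishes before GO1, the first saccade is cancelled and GO2 produces
    a compensated saccade to the final location."""
    go1 = rng.normal(go_mean, go_sd)                    # GO1 finish time from array onset
    stop = step_delay + rng.normal(stop_mean, stop_sd)
    go2 = step_delay + rng.normal(go_mean, go_sd)
    if stop < go1:
        return "compensated", go2                       # saccade lands on the final target
    return "noncompensated", go1                        # saccade lands on the initial target

def prob_compensated(step_delay, n=10000):
    return sum(simulate_trial(step_delay)[0] == "compensated" for _ in range(n)) / n

# Compensation probability falls as the target step occurs later in the trial
for delay in (50, 100, 150, 200):
    print(delay, round(prob_compensated(delay), 2))
```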
C. Christine Camblin; Peter C. Gordon; Tamara Y. Swaab The interplay of discourse congruence and lexical association during sentence processing: Evidence from ERPs and eye tracking Journal Article In: Journal of Memory and Language, vol. 56, no. 1, pp. 103–128, 2007. @article{Camblin2007, Five experiments used ERPs and eye tracking to determine the interplay of word-level and discourse-level information during sentence processing. Subjects read sentences that were locally congruent but whose congruence with discourse context was manipulated. Furthermore, critical words in the local sentence were preceded by a prime word that was associated or not. Violations of discourse congruence had early and lingering effects on ERP and eye-tracking measures. This indicates that discourse representations have a rapid effect on lexical semantic processing even in locally congruous texts. In contrast, effects of association were more malleable: Very early effects of associative priming were only robust when the discourse context was absent or not cohesive. Together these results suggest that the global discourse model quickly influences lexical processing in sentences, and that spreading activation from associative priming does not contribute to natural reading in discourse contexts. |
G. P. Caplovitz; P. U. Tse Rotating dotted ellipses: Motion perception driven by grouped figural rather than local dot motion signals Journal Article In: Vision Research, vol. 47, no. 15, pp. 1979–1991, 2007. @article{Caplovitz2007, Unlike the motion of a continuous contour, the motion of a single dot is unambiguous and immune to the aperture problem. Here we exploit this fact to explore the conditions under which unambiguous local motion signals are used to drive global percepts of an ellipse undergoing rotation. In previous work, we have shown that a thin, high aspect ratio ellipse will appear to rotate faster than a lower aspect ratio ellipse even when the two in fact rotate at the same angular velocity [Caplovitz, G. P., Hsieh, P. -J., & Tse, P. U. (2006) Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46(18), 2877-2893]. In this study we examined the perceived speed of rotation of ellipses defined by a virtual contour made up of evenly spaced dots. Results: Ellipses defined by closely spaced dots exhibit the speed illusion observed with continuous contours. That is, thin dotted ellipses appear to rotate faster than fat dotted ellipses when both rotate at the same angular velocity. This illusion is not observed if the dots defining the ellipse are spaced too widely apart. A control experiment ruled out low spatial frequency "blurring" as the source of the illusory percept. Conclusion: Even in the presence of local motion signals that are immune to the aperture problem, the global percept of an ellipse undergoing rotation can be driven by potentially ambiguous motion signals arising from the non-local form of the grouped ellipse itself. Here motion perception is driven by emergent motion signals such as those of virtual contours constructed by grouping procedures. Neither these contours nor their emergent motion signals are present in the image. |
Gideon P. Caplovitz; Peter U. Tse V3A processes contour curvature as a trackable feature for the perception of rotational motion Journal Article In: Cerebral Cortex, vol. 17, no. 5, pp. 1179–1189, 2007. @article{Caplovitz2007a, Contour curvature (CC) is a vital cue for the analysis of both form and motion. Using functional magnetic resonance imaging, we localized the neural correlates of CC for the processing and perception of rotational motion. We found that the blood oxygen level-dependent signal in retinotopic area V3A and possibly also lateral occipital cortex (LOC) varied parametrically with the degree of CC. Control experiments ruled out the possibility that these modulations resulted from either changes in the area of the stimuli, the velocity with which contour elements were actually translating, or perceived angular velocity. We conclude that neurons within V3A and perhaps also LOC process continuously moving CC as a trackable feature. These data are consistent with the hypothesis that V3A contains neural populations that process trackable form features such as CC, not to solve the "ventral problem" of determining object shape but in order to solve the "dorsal problem" of what is going where. |
G. J. Brouwer; Raymond Van Ee Visual cortex allows prediction of perceptual states during ambiguous structure-from-motion Journal Article In: Journal of Neuroscience, vol. 27, no. 5, pp. 1015–1023, 2007. @article{Brouwer2007, We investigated the role of retinotopic visual cortex and motion-sensitive areas in representing the content of visual awareness during ambiguous structure-from-motion (SFM), using functional magnetic resonance imaging (fMRI) and multivariate statistics (support vector machines). Our results indicate that prediction of perceptual states can be very accurate for data taken from dorsal visual areas V3A, V4D, V7, and MT+ and for parietal areas responsive to SFM, but to a lesser extent for other visual areas. Generalization of prediction was possible, because prediction accuracy was significantly better than chance for both an unambiguous stimulus and a different experimental design. Detailed analysis of eye movements revealed that strategic and even encouraged beneficial eye movements were not the cause of the prediction accuracy based on cortical activation. We conclude that during perceptual rivalry, neural correlates of visual awareness can be found in retinotopic visual cortex, MT+, and parietal cortex. We argue that the organization of specific motion-sensitive neurons creates detectable biases in the preferred direction selectivity of voxels, allowing prediction of perceptual states. During perceptual rivalry, retinotopic visual cortex, in particular higher-tier dorsal areas like V3A and V7, actively represents the content of visual awareness. |
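As a toy version of the multivariate approach described in this entry, the snippet below trains a linear support vector machine to decode a binary perceptual state from simulated voxel patterns and reports cross-validated accuracy. The simulated data and the use of scikit-learn are assumptions for illustration; the paper does not prescribe a particular toolbox.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Simulated data: 120 fMRI volumes x 200 voxels from one visual area, labelled by
# the reported perceptual state (0 or 1); each voxel carries a weak state-dependent bias.
labels = rng.integers(0, 2, size=120)
bias = rng.normal(0.0, 1.0, size=200)
patterns = np.outer(labels - 0.5, bias) + rng.normal(0.0, 2.0, size=(120, 200))

# Linear SVM with 5-fold cross-validation as a stand-in for the reported decoding
clf = SVC(kernel="linear")
scores = cross_val_score(clf, patterns, labels, cv=5)
print("mean decoding accuracy:", scores.mean())
```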
Julie Buchan; Martin Paré; Kevin G. Munhall Spatial statistics of gaze fixations during dynamic face processing Journal Article In: Social Neuroscience, vol. 2, no. 1, pp. 1–13, 2007. @article{Buchan2007, Social interaction involves the active visual perception of facial expressions and communicative gestures. This study examines the distribution of gaze fixations while watching videos of expressive talking faces. The knowledge-driven factors that influence the selective visual processing of facial information were examined by using the same set of stimuli, and assigning subjects to either a speech recognition task or an emotion judgment task. For half of the subjects assigned to each of the tasks, the intelligibility of the speech was manipulated by the addition of moderate masking noise. Both tasks and the intelligibility of the speech signal influenced the spatial distribution of gaze. Gaze was concentrated more on the eyes when emotion was being judged as compared to when words were being identified. When noise was added to the acoustic signal, gaze in both tasks was more centralized on the face. This shows that subjects' gaze is sensitive to the distribution of information on the face, but can also be influenced by strategies aimed at maximizing the amount of visual information processed. |
Christine Burton; Meredyth Daneman Compensating for a limited working memory capacity during reading: Evidence from eye movements Journal Article In: Reading Psychology, vol. 28, no. 2, pp. 163–186, 2007. @article{Burton2007, Although working memory capacity is an important contributor to reading comprehension performance, it is not the only contributor. Studies have shown that epistemic knowledge (or knowledge about knowledge and learning) is related to comprehension success and may enable low-span readers to compensate for their limited resources. By comparing the eye movements of epistemically mature versus epistemically naïve low-span readers, this study provided evidence for how the compensation occurs. Metacognitively mature low-span readers spent more time engaged in selective backtracking to unfamiliar and task-relevant text information. These selective look-backs would have reinstated the difficult and important information into working memory, thereby allowing these readers to offset some of the disadvantages of a limited temporary storage capacity. |
Xiaodong Chen; Feng Han; Mu-ming Poo; Yang Dan Excitatory and suppressive receptive field subunits in awake monkey primary visual cortex (V1) Journal Article In: Proceedings of the National Academy of Sciences, vol. 104, no. 48, pp. 19120–19125, 2007. @article{Chen2007a, An essential step in understanding visual processing is to characterize the neuronal receptive fields (RFs) at each stage of the visual pathway. However, RF characterization beyond simple cells in the primary visual cortex (V1) remains a major challenge. Recent application of spike-triggered covariance (STC) analysis has greatly facilitated characterization of complex cell RFs in anesthetized animals. Here we apply STC to RF characterization in awake monkey V1. We found up to nine subunits for each cell, including one or two dominant excitatory subunits as described by the standard model, along with additional excitatory and suppressive subunits with weaker contributions. Compared with the dominant subunits, the nondominant excitatory subunits prefer similar orientations and spatial frequencies but have larger spatial envelopes. They contribute to response invariance to small changes in stimulus orientation, position, and spatial frequency. In contrast, the suppressive subunits are tuned to orientations 45–90 degrees different from the excitatory subunits, which may underlie cross-orientation suppression. Together, the excitatory and suppressive subunits form a compact description of RFs in awake monkey V1, allowing prediction of the responses to arbitrary visual stimuli. |
Ed H. Chi; Michelle Gumbrecht; Lichan Hong Visual foraging of highlighted text: An eye-tracking study Journal Article In: Human-Computer Interaction, pp. 589–598, 2007. @article{Chi2007, The wide availability of digital reading material online is causing a major shift in everyday reading activities. Readers are skimming instead of reading in depth [Nielson 1997]. Highlights are increasingly used in digital interfaces to direct attention toward relevant passages within texts. In this paper, we study the eye-gaze behavior of subjects using both keyword highlighting and ScentHighlights [Chi et al. 2005]. In this first eye-tracking study of highlighting interfaces, we show that there is direct evidence of the von Restorff isolation effect [VonRestorff 1933] in the eye-tracking data, in that subjects focused on highlighted areas when highlighting cues are present. The results point to future design possibilities in highlighting interfaces. |
C. S. Chapman; Amelia R. Hunt; Alan Kingstone Squeezing uncertainty from saccadic compression Journal Article In: Journal of Eye Movement Research, vol. 1, no. 1, pp. 1–5, 2007. @article{Chapman2007, Brief visual stimuli presented before and during a saccade are often mislocalized due to spatial compression. This saccadic compression effect is thought to have a perceptual basis, and results in visual objects being squeezed together and their number underestimated. Here we show that observers are also uncertain about their visual experiences just before and during a saccade. It is known that responses tend to be biased away from extreme values under conditions of uncertainty. Thus, a plausible alternative explanation of compression is that it reflects the uncertainty-bias to underestimate the number of items that were presented. We test this hypothesis and find that saccadic compression is independent of certainty, and is significantly modulated by orientation, with larger effects for stimuli oriented horizontally, in the direction of the saccade. These findings confirm that saccadic compression is a perceptual phenomenon that may enable seamless perceptual continuity across saccades. |
Aoju Chen; Els Den Os; Jan Peter De Ruiter Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech Journal Article In: The Linguistic Review, vol. 24, no. 2-3, pp. 317–344, 2007. @article{Chen2007, Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition. |
François Bonnetblanc; Pierre Baraduc Saccadic adaptation without retinal postsaccadic error Journal Article In: NeuroReport, vol. 18, no. 13, pp. 1399–1402, 2007. @article{Bonnetblanc2007, Primary saccades undershoot their target. Corrective saccades are then triggered by retinal postsaccadic information. We tested whether primary saccades still undershoot when no postsaccadic visual information is available. Participants saccaded to five targets (10-34 degrees) that were either constantly illuminated (ON) or extinguished at saccade onset (OFF(Onset)). In OFF(Onset), few corrective saccades were observed. The saccadic gain increased over trials for the furthest (34 degrees) target. Terminal eye position after glissades or microsaccades progressively converged to the values observed in ON (targets over 16 degrees). Target extinction during the saccade only did not elicit any change. The results show that (i) postsaccadic retinal signals stabilize the saccadic gain and (ii) adaptive changes that reduce terminal error can take place without visual information. |
Leanne Boucher; Veit Stuphorn; Gordon D. Logan; Jeffrey D. Schall; Thomas J. Palmeri Stopping eye and hand movements: Are the processes independent? Journal Article In: Perception and Psychophysics, vol. 69, no. 5, pp. 785–801, 2007. @article{Boucher2007, To explore how eye and hand movements are controlled in a stop task, we introduced effector uncertainty by instructing subjects to initiate and occasionally inhibit eye, hand, or eye + hand movements in response to a color-coded foveal or tone-coded auditory stop signal. Regardless of stop signal modality, stop signal reaction time was shorter for eye movements than for hand movements, but notably did not vary with knowledge about which movement to cancel. Most errors on eye + hand stopping trials were combined eye + hand movements. The probability and latency of signal respond eye and hand movements corresponded to predictions of Logan and Cowan's (1984) race model applied to each effector independently. |
Eli Brenner; Jeroen B. J. Smeets Flexibility in intercepting moving objects. Journal Article In: Journal of Vision, vol. 7, no. 5, pp. 1–17, 2007. @article{Brenner2007, When hitting moving targets, the hand does not always move to the point of interception in the same manner as it would if the target were not moving. This could be because the point at which the target will be intercepted is initially misjudged, or even not judged at all, but it could also be because a different path is optimal for intercepting a moving target. Here we examine the extent to which performance is degraded if people have to follow a different path than their preferred one. Forcing people to make small adjustments to their path by placing obstacles near the path hardly influenced their performance. When the orientation of elongated targets was manipulated, people adjusted their paths, but not quite enough to avoid intercepting the targets at a sub-optimal angle, probably because following a more curved path would have reduced the spatial accuracy and taken more time. When the task was to hit targets in certain directions, people had to sometimes follow much more curved paths. This gave rise to larger errors and longer movement times. An asymmetry in performance between hitting moving targets further in the direction in which they were moving and hitting them back from where they came is consistent with the different consequences of timing errors for the two directions of target motion. We conclude that the path that people take to intercept moving targets depends on the precise constraints under the prevailing conditions rather than being a consequence of judgment errors or of limitations in the way in which movements can be controlled. |
Gerry T. M. Altmann; Yuki Kamide The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing Journal Article In: Journal of Memory and Language, vol. 57, no. 4, pp. 502–518, 2007. @article{Altmann2007, Two experiments explored the representational basis for anticipatory eye movements. Participants heard 'the man will drink ...' or 'the man has drunk ...' (Experiment 1) or 'the man will drink all of ...' or 'the man has drunk all of ...' (Experiment 2). They viewed a concurrent scene depicting a full glass of beer and an empty wine glass (amongst other things). There were more saccades towards the empty wine glass in the past tense conditions than in the future tense conditions; the converse pattern obtained for looks towards the full glass of beer. We argue that these anticipatory eye movements reflect sensitivity to objects' affordances, and develop an account of the linkage between language processing and visual attention that can account not only for looks towards named objects, but also for those cases (including anticipatory eye movements) where attention is directed towards objects that are not being named. |
E. J. Anderson; Sabira K. Mannan; Masud Husain; Geraint Rees; Petroc Sumner; Dominic J. Mort; Donald McRobbie; Christopher Kennard Involvement of prefrontal cortex in visual search Journal Article In: Experimental Brain Research, vol. 180, no. 2, pp. 289–302, 2007. @article{Anderson2007, Visual search for target items embedded within a set of distracting items has consistently been shown to engage regions of occipital and parietal cortex, but the contribution of different regions of prefrontal cortex remains unclear. Here, we used fMRI to compare brain activity in 12 healthy participants performing efficient and inefficient search tasks in which target discriminability and the number of distractor items were manipulated. Matched baseline conditions were incorporated to control for visual and motor components of the tasks, allowing cortical activity associated with each type of search to be isolated. Region of interest analysis was applied to critical regions of prefrontal cortex to determine whether their involvement was common to both efficient and inefficient search, or unique to inefficient search alone. We found regions of the inferior and middle frontal cortex were only active during inefficient search, whereas an area in the superior frontal cortex (in the region of FEF) was active for both efficient and inefficient search. Thus, regions of ventral as well as dorsal prefrontal cortex are recruited during inefficient search, and we propose that this activity is related to processes that guide, control and monitor the allocation of selective attention. |
Ensar Becic; Arthur F. Kramer; Walter R. Boot Age-related differences in the use of background layout in visual search Journal Article In: Aging, Neuropsychology, and Cognition, vol. 14, no. 2, pp. 109–125, 2007. @article{Becic2007, The effect of background layout on visual search performance, and more specifically on the tendency to refixate previously inspected locations and objects, was investigated. Older and younger adults performed a search task in which a background layout or landmark was present or absent in a gaze contingent visual search paradigm. Regardless of age, participants demonstrated fewer refixations when landmarks were present, with older adults showing a larger landmark advantage. This visual search advantage did not come at the cost of saccadic latency. Furthermore, the visual search performance advantage obtained in the presence of a background layout or landmark was observed both for individuals with small and large memory spans. |
Mark W. Becker; Ian P. Rasmussen The rhythm aftereffect: Support for time sensitive neurons with broad overlapping tuning curves Journal Article In: Brain and Cognition, vol. 64, no. 3, pp. 274–281, 2007. @article{Becker2007, Ivry [Ivry, R. B. (1996). The representation of temporal information in perception and motor control. Current Opinion in Neurobiology, 6, 851-857.] proposed that explicit coding of brief time intervals is accomplished by neurons that are tuned to a preferred temporal interval and have broad overlapping tuning curves. This proposal is analogous to the orientation selective cells in visual area V1. To test this proposal, we used a temporal analog to the visual tilt aftereffect. After adapting to a fast auditory rhythm, a moderately fast test rhythm (400 ms between beats) seemed slow and vice versa. If the speed of the adapting rhythm was made too disparate from speed of the test rhythm the effect was diminished. The effect occurred whether the adapting and test stimuli were presented to the same or different ears, but did not occur when an auditory adapting rhythm was followed by a visual test rhythm. Results support the proposition that explicit time information is coded by neural units tuned to specific temporal intervals with broad overlapping tuning curves. In addition, it appears that there is a single timing mechanism for each incoming sensory mode, but distinct timers for different modes. |
Eva Belke; Antje S. Meyer Single and multiple object naming in healthy ageing Journal Article In: Language and Cognitive Processes, vol. 22, no. 8, pp. 1178–1211, 2007. @article{Belke2007, We compared the performance of young (college-aged) and older (50+ years) speakers in a single object and a multiple object naming task and assessed their susceptibility to semantic and phonological context effects when producing words amidst semantically or phonologically similar or dissimilar words. In single object naming, there were no performance differences between the age groups. In multiple object naming, we observed significant age-related slowing, expressed in longer gazes to the objects and slower speech. In addition, the direction of the phonological context effects differed for the two groups. The results of a supplementary experiment showed that young speakers, when adopting a slow speech rate, coordinated their eye movements and speech differently from the older speakers. Our results imply that age-related slowing in connected speech is not a direct consequence of a slowing of lexical retrieval processes. Instead, older speakers might allocate more processing capacity to speech monitoring processes, which would slow down their concurrent speech planning processes. |
Manabu Arai; Roger P. G. Gompel; Christoph Scheepers Priming ditransitive structures in comprehension Journal Article In: Cognitive Psychology, vol. 54, no. 3, pp. 218–250, 2007. @article{Arai2007, Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent. |
Jennifer E. Arnold; Carla L. Hudson Kam; Michael K. Tanenhaus If you say thee uh you are describing something hard: The on-line attribution of disfluency during reference comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 5, pp. 914–930, 2007. @article{Arnold2007, Eye-tracking and gating experiments examined reference comprehension with fluent (Click on the red. . .) and disfluent (Click on [pause] thee uh red . . .) instructions while listeners viewed displays with 2 familiar (e.g., ice cream cones) and 2 unfamiliar objects (e.g., squiggly shapes). Disfluent instructions made unfamiliar objects more expected, which influenced listeners' on-line hypotheses from the onset of the color word. The unfamiliarity bias was sharply reduced by instructions that the speaker had object agnosia, and thus difficulty naming familiar objects (Experiment 2), but was not affected by intermittent sources of speaker distraction (beeps and construction noises; Experiment 3). The authors conclude that listeners can make situation-specific inferences about likely sources of disfluency, but there are some limitations to these attributions. |
Philip J. Benson; Ute Leonards; Robert M. Lothian; David M. St. Clair; Marco C. G. Merlo Visual scan paths in first-episode schizophrenia and cannabis-induced psychosis Journal Article In: Journal of Psychiatry and Neuroscience, vol. 32, no. 4, pp. 267–274, 2007. @article{Benson2007, OBJECTIVE: Patterns of successive saccades and fixations (scan paths) that are made while viewing images are often spatially restricted in schizophrenia, but the relation with cannabis-induced psychosis has not been examined. We used higher-order statistical methods to examine spatiotemporal characteristics of scan paths to determine whether viewing behaviour was distinguishable on a continuum. METHODS: Patients with early acute first-episode paranoid schizophrenia (SCH; n = 11), cannabis-induced psychosis (CIP; n = 6) and unaffected control subjects (n = 22) undertook a task requiring free viewing of facial, fractal and landscape images for 5 seconds while their eye movements were recorded. Frequencies and distributions of saccades and fixations were calculated in relation to image regions examined during each trial. RESULTS: Findings were independent of image category, indicating generalized scanning deficits. Compared with control subjects, patients with SCH and CIP made fewer saccades and fewer fixations of longer duration. In turn, the spatial distribution of fixations in CIP patients was more clustered than in SCH and control subjects. The diversity of features fixated in subjects with CIP was also lower than in SCH patients and control subjects. CONCLUSION: A continuous approach to characterizing scan path changes in different phenotypes suggests that CIP shares some of the abnormalities of SCH but can be distinguished with measures that are sensitive to cognitive strategies active or inhibited during visual exploration. |
Jean-Baptiste Bernard; Anne-Catherine Scherlen; Eric Castet Page mode reading with simulated scotomas: A modest effect of interline spacing on reading speed Journal Article In: Vision Research, vol. 47, no. 28, pp. 3447–3459, 2007. @article{Bernard2007, Crowding is thought to be one potent limiting factor of reading in peripheral vision. While several studies investigated how crowding between horizontally adjacent letters or words can influence eccentric reading, little attention has been paid to the influence of vertically adjacent lines of text. The goal of this study was to examine the dependence of page mode reading performance (speed and accuracy) on interline spacing. A gaze-contingent visual display was used to simulate a visual central scotoma while normally sighted observers read meaningful French sentences following MNREAD principles. The sensitivity of this new material to low-level factors was confirmed by showing strong effects of perceptual learning, print size and scotoma size on reading performance. In contrast, reading speed was only slightly modulated by interline spacing even for the largest range tested: a 26% gain for a 178% increase in spacing. This modest effect sharply contrasts with the dramatic influence of vertical word spacing found in a recent RSVP study. This discrepancy suggests either that vertical crowding is minimized when reading meaningful sentences, or that the interaction between crowding and other factors such as attention and/or visuo-motor control is dependent on the paradigm used to assess reading speed (page vs. RSVP mode). |
Elena Betta; Giovanni Galfano; Massimo Turatto Microsaccadic response during inhibition of return in a target-target paradigm Journal Article In: Vision Research, vol. 47, no. 3, pp. 428–436, 2007. @article{Betta2007, This study examined the relationship between inhibition of return (IOR) in covert orienting and microsaccade statistics. Unlike a previous study [Galfano, G., Betta, E., & Turatto, M. (2004)], IOR was assessed by means of a target-target paradigm, and microsaccade dynamics were monitored as a function of both the first and the second visual event. In line with what has been reported with a cue-target paradigm, a significant directional modulation was observed opposite to the first visual event. Because participants were to respond to any stimulus, this rules out the possibility that the modulation resulted from a generic motor inhibition, showing instead that it is peculiarly coupled to the oculomotor system. Importantly, after the second visual event, a different response was observed in microsaccade orientation, whose direction critically depended on whether the second visual event appeared at the same location as the first visual event. The results are consistent with the notion that IOR is composed of both attentional and oculomotor components, and challenge the view that covert orienting paradigms engage the attentional component in isolation. |
Lisa R. Betts; Allison B. Sekuler; Patrick J. Bennett The effects of aging on orientation discrimination Journal Article In: Vision Research, vol. 47, no. 13, pp. 1769–1780, 2007. @article{Betts2007, The current experiments measured orientation discrimination thresholds in younger (mean age ≈ 23 years) and older (mean age ≈ 66 years) subjects. In Experiment 1, the contrast needed to discriminate Gabor patterns (0.75, 1.5, and 3 c/deg) that differed in orientation by 12 deg was measured for different levels of external noise. At all three spatial frequencies, discrimination thresholds were significantly higher in older than younger subjects when external noise was low, but not when external noise was high. In Experiment 2, discrimination thresholds were measured as a function of stimulus contrast by varying orientation while contrast was fixed. The resulting threshold-vs-contrast curves had very similar shapes in the two age groups, although the curve obtained from older subjects was shifted to slightly higher contrasts. At contrasts greater than 0.05, thresholds in both older and younger subjects were approximately constant at 0.5 deg. The results from Experiments 1 and 2 suggest that age differences in orientation discrimination are due solely to differences in equivalent input noise. Using the same methods as Experiment 1, Experiment 3 measured thresholds in 6 younger observers as a function of external noise and retinal illuminance. Although reducing retinal illumination increased equivalent input noise, the effect was much smaller than the age difference found in Experiment 1. Therefore, it is unlikely that differences in orientation discrimination were due solely to differences in retinal illumination. Our findings are consistent with recent physiological experiments that have found elevated spontaneous activity and reduced orientation tuning in visual cortical neurons in senescent cats (Hua, T., Li, X., He, L., Zhou, Y., Wang, Y., & Leventhal, A. G. (2006). Functional degradation of visual cortical cells in old cats). |
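The equivalent-input-noise conclusion above rests on the standard linear-amplifier analysis, in which squared contrast thresholds grow linearly with external noise power. The sketch below is a generic illustration of that analysis, not the authors' exact fitting procedure, and the noise levels and thresholds are invented.

```python
# Sketch of the standard equivalent-input-noise analysis: squared contrast
# thresholds are assumed to grow linearly with external noise power,
#   c_th^2 = k * (N_eq + N_ext).
# The noise levels and thresholds below are invented for illustration only.
import numpy as np

N_ext = np.array([0.0, 1e-5, 1e-4, 1e-3, 1e-2])     # external noise power
c_th = np.array([0.02, 0.021, 0.025, 0.05, 0.15])   # hypothetical contrast thresholds

# Linear regression of c_th^2 on N_ext: slope = k, intercept = k * N_eq.
k, intercept = np.polyfit(N_ext, c_th**2, 1)
N_eq = intercept / k
print(f"equivalent input noise N_eq ≈ {N_eq:.2e}; slope k ≈ {k:.3f}")
```

Under this model, a group difference confined to N_eq raises thresholds only when external noise is low, which is the pattern reported for the older observers in Experiment 1.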
Elina Birmingham; Walter F. Bischof; Alan Kingstone Why do we look at people's eyes? Journal Article In: Journal of Eye Movement Research, vol. 1, no. 1, pp. 1–6, 2007. @article{Birmingham2007, We have previously shown that when observers are presented with complex natural scenes that contain a number of objects and people, observers look mostly at the eyes of the people. Why is this? It cannot be because eyes are merely the most salient area in a scene, as relative to other objects they are fairly inconspicuous. We hypothesized that people look at the eyes because they consider the eyes to be a rich source of information. To test this idea, we tested two groups of participants. One set of participants, called the Told Group, was informed that there would be a recognition test after they were shown the natural scenes. The second set, the Not Told Group, was not informed that there would be a subsequent recognition test. Our data showed that during the initial and test viewings, the Told Group fixated the eyes more frequently than the Not Told group, supporting the idea that the eyes are considered an informative region in social scenes. Converging evidence for this interpretation is that the Not Told Group fixated the eyes more frequently in the test session than in the study session. |
Ronald Berg; Jos B. T. M. Roerdink; Frans W. Cornelissen On the generality of crowding: Visual crowding in size, saturation, and hue compared to orientation Journal Article In: Journal of vision, vol. 7, no. 2, pp. 14, 2007. @article{Berg2007, Perception of peripherally viewed shapes is impaired when surrounded by similar shapes. This phenomenon is commonly referred to as "crowding". Although studied extensively for perception of characters (mainly letters) and, to a lesser extent, for orientation, little is known about whether and how crowding affects perception of other features. Nevertheless, current crowding models suggest that the effect should be rather general and thus not restricted to letters and orientation. Here, we report on a series of experiments investigating crowding in the following elementary feature dimensions: size, hue, and saturation. Crowding effects in these dimensions were benchmarked against those in the orientation domain. Our primary finding is that all features studied show clear signs of crowding. First, identification thresholds increase with decreasing mask spacing. Second, for all tested features, critical spacing appears to be roughly half the viewing eccentricity and independent of stimulus size, a property previously proposed as the hallmark of crowding. Interestingly, although critical spacings are highly comparable, crowding magnitude differs across features: Size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for saturation and hue. We suggest that future theories and models of crowding should be able to accommodate these differences in crowding effects. |
Stefan Van der Stigchel; Martijn Meeter; Jan Theeuwes Top-down influences make saccades deviate away: The case of endogenous cues Journal Article In: Acta Psychologica, vol. 125, no. 3, pp. 279–290, 2007. @article{VanderStigchel2007, We tested a recent hypothesis suggesting that the eye deviates away from a location when top-down preparation can influence target selection. Participants had to make an eye movement to a peripheral target. Before the upcoming target, a central cue indicated the likely target location. Results show that when the target was presented at a location different from that indicated by the cue, eye movements to the target deviated away from the cued location. Because central cues are under top-down control, the present results are in line with a determining role of top-down preparation on saccade direction. These results contrast with the findings reported in a similar paradigm executed with hand movements, in which the movements were mostly initiated in the direction of the cued location. Therefore, we conclude that inhibitory effects typically observed when executing eye movements may not be observed when executing hand movements in similar conditions. |
Stefan Van der Stigchel; Martijn Meeter; Jan Theeuwes The spatial coding of the inhibition evoked by distractors Journal Article In: Vision Research, vol. 47, no. 2, pp. 210–218, 2007. @article{VanderStigchel2007a, It is generally agreed that saccade deviations away from a distractor location represent inhibition in the oculomotor system. By systematically manipulating the location of a distractor we tested whether the inhibition of the distractor is coded coarsely or fine-grained. Results showed that the location of a distractor had an effect on the saccade trajectories, suggesting that the amount of inhibition observed depends on the location of the distractor. More specifically, the vertical distance of the distractor from fixation seems to be a determining factor. These findings have important implications for models that account for inhibition in the target selection process and the areas that could underlie inhibitory influences on the superior colliculus (SC), like the frontal eye fields (FEF) and the dorsolateral prefrontal cortex (dlPFC). Finally, the initial direction and the endpoint of a saccade were found to be strongly correlated, which contradicts recent models proposing that the initial saccade direction and saccade endpoint are unrelated. |
Stefan Van der Stigchel; H. Merten; Martijn Meeter; Jan Theeuwes The effects of visual spatial interference on spatial working memory Journal Article In: Psychonomic Bulletin & Review, vol. 14, no. 6, pp. 1066–1071, 2007. @article{VanderStigchel2007b, In the present experiment, we investigated whether the memory of a location is affected by the occurrence of an irrelevant visual event. Participants had to memorize the location of a dot. During the retention interval, a task-irrelevant stimulus was presented with abrupt onset somewhere in the visual field. Results showed that the spatial memory representation was affected by the occurrence of the external irrelevant event relative to a control condition in which there was no external event. Specifically, the memorized location was shifted toward the location of the task-irrelevant stimulus. This effect was only present when the onset was close in space to the memory representation. These findings suggest that the "internal" spatial map used for keeping a location in spatial working memory and the "external" spatial map that is affected by exogenous events in the outside world are either the same or tightly linked. |
Stefan Van der Stigchel; N. N. J. Rommelse; J. -B. Deijen; C. J. A. Geldof; J. Witlox; Jaap Oosterlaan; J. A. Sergeant; Jan Theeuwes Oculomotor capture in ADHD Journal Article In: Cognitive Neuropsychology, vol. 24, no. 5, pp. 535–549, 2007. @article{VanderStigchel2007c, It is generally thought that deficits in response inhibition form an important area of dysfunction in patients with attention-deficit/hyperactivity disorder (ADHD). However, recent research using visual search paradigms seems to suggest that these inhibitory deficits do not extend towards inhibiting irrelevant distractors. Using an oculomotor capture task, the present study investigated whether boys with ADHD and their nonaffected brothers are impaired in suppressing reflexive eye movements to a task-irrelevant onset distractor. Results showed that boys with ADHD had slower responses than controls, but were as accurate in their eye movements as controls. Nonaffected brothers showed similar problems in the speed of responding as their affected brothers, which might suggest that this deficit relates to a familial risk for developing the disorder. Importantly, all three groups were equally captured by the distractor, which shows that boys with ADHD and their brothers are not more distracted by the distractor than are controls. Saccade latency and the proportion of intrusive saccades were related to continuous dimensions of ADHD symptoms, which suggests that these deficits are not simply present or absent, but rather indicate that the severity of these deficits relate to the severity of ADHD. The finding that boys with ADHD (and their nonaffected brothers) did not have problems inhibiting irrelevant distractors contradicts a general response inhibition deficiency in ADHD, which may be explained by the relatively independency of working memory in this type of response inhibition. |
Stefan Van Der Stigchel; Jan Theeuwes The relationship between covert and overt attention in endogenous cuing Journal Article In: Perception and Psychophysics, vol. 69, no. 5, pp. 719–731, 2007. @article{VanDerStigchel2007e, In a standard Posner paradigm, participants were endogenously cued to attend to a peripheral location in visual space without making eye movements. They responded faster to target letters presented at cued than at uncued locations. On some trials, instead of a manual response, they had to move their eyes to a location in space. Results showed that the eyes deviated away from the validly cued location; when the cue was invalid and attention had to be allocated to the uncued location, eye movements also deviated away, but now from the uncued location. The extent to which the eyes deviated from cued and uncued locations was related to the dynamics of attention allocation. We hypothesized that this deviation was due to the successful inhibition of the attended location. The results imply that the oculomotor system is not only involved during the endogenous direction of covert attention to a cued location, but also when covert attention is directed to an uncued location. It appears that the oculomotor system is activated wherever spatial attention is allocated. The strength of saccade deviation might turn out to be an important measure for the amount of attention allocated to any particular location over time. |
J. M. Hagen; Josef N. Geest; R. S. Giessen; Gerardina C. Lagers-van Haselen; H. J. F. M. M. Eussen; J. J. P. Gille; L. C. P. Govaerts; C. H. Wouters; I. F. M. Coo; C. C. Hoogenraad; Sebastiaan K. E. Koekkoek; Maarten A. Frens; N. Camp; A. Linden; M. C. E. Jansweijer; S. S. Thorgeirsson; Chris I. De Zeeuw Contribution of CYLN2 and GTF2IRD1 to neurological and cognitive symptoms in Williams Syndrome Journal Article In: Neurobiology of Disease, vol. 26, no. 1, pp. 112–124, 2007. @article{Hagen2007, Williams Syndrome (WS, [MIM 194050]) is a disorder caused by a hemizygous deletion of 25-30 genes on chromosome 7q11.23. Several of these genes including those encoding cytoplasmic linker protein-115 (CYLN2) and general transcription factors (GTF2I and GTF2IRD1) are expressed in the brain and may contribute to the distinct neurological and cognitive deficits in WS patients. Recent studies of patients with partial deletions indicate that hemizygosity of GTF2I probably contributes to mental retardation in WS. Here we investigate whether CYLN2 and GTF2IRD1 contribute to the motoric and cognitive deficits in WS. Behavioral assessment of a new patient in which STX1A and LIMK1, but not CYLN2 and GTF2IRD1, are deleted showed that his cognitive and motor coordination functions were significantly better than in typical WS patients. Comparative analyses of gene specific CYLN2 and GTF2IRD1 knockout mice showed that a reduced size of the corpus callosum as well as deficits in motor coordination and hippocampal memory formation may be attributed to a deletion of CYLN2, while increased ventricle volume can be attributed to both CYLN2 and GTF2IRD1. We conclude that the motor and cognitive deficits in Williams Syndrome are caused by a variety of genes and that heterozygous deletion of CYLN2 is one of the major causes responsible for such dysfunctions. |
Wieske Zoest; Alejandro Lleras; Alan Kingstone; James T. Enns In sight, out of mind: The role of eye movements in the rapid resumption of visual search Journal Article In: Perception and Psychophysics, vol. 69, no. 7, pp. 1204–1217, 2007. @article{Zoest2007, Three experiments investigated the role of eye movements in the rapid resumption of an interrupted search. Passive monitoring of eye position in Experiment 1 showed that rapid resumption was associated with a short distance between the eye and the target on the next-to-last look before target detection. Experiments 2 and 3 used two different methods for presenting the target to the point of eye fixation on some trials. If eye position alone is predictive, rapid resumption should increase when the target is near fixation. The results showed that gaze-contingent targets increased overall search success, but that the proportion of rapid responses decreased dramatically. We conclude that rather than depending on a high-quality single look at a search target, rapid resumption of search depends on two glances; a first glance in which a hypothesis is formed, and a second glance in which the hypothesis is confirmed. |
P. U. Tse; P. -J. Hsieh Component and intrinsic motion integrate in 'dancing bar' illusion Journal Article In: Biological Cybernetics, vol. 96, no. 1, pp. 1–8, 2007. @article{Tse2007, We introduce a new illusion that contradicts common assumptions in the field of visual motion perception. When an unoccluded bar moves at certain speeds and oscillates at certain frequencies, the perceived direction of the bar is not predicted by its intrinsic terminators but is biased to move in the direction orthogonal to its orientation. It appears that the veridical terminator motions are integrated with spurious component motion signals, generating an at times complex pattern of motion around an apparently closed loop path. In the absence of oscillation the effect does not occur. Several factors, including optimal angle, speed, and oscillating distance of the bar, are quantified and possible mechanisms are discussed. In a model, we suggest that the effect arises because of the failure to inhibit spurious component motion signals arising from contours that are nearly oriented along the direction of true motion. |
Massimo Turatto; Matteo Valsecchi; Luigi Tamè; Elena Betta Microsaccades distinguish between global and local visual processing Journal Article In: NeuroReport, vol. 18, no. 10, pp. 1015–1018, 2007. @article{Turatto2007, Much is known about the functional mechanisms involved in visual search. Yet, the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions still remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we showed that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results provide support to the hypothesis that some coarse visual analysis can be performed in a distributed attention mode. |
Massimo Turatto; Massimo Vescovi; Matteo Valsecchi Attention makes moving objects be perceived to move faster Journal Article In: Vision Research, vol. 47, no. 2, pp. 166–178, 2007. @article{Turatto2007a, Although it is well established that attention affects visual performance in many ways, by using a novel paradigm [Carrasco, M., Ling, S., & Read. S. (2004). Attention alters appearance. Nature Neuroscience, 7, 308-313.] it has recently been shown that attention can alter the perception of different properties of stationary stimuli (e.g., contrast, spatial frequency, gap size). However, it is not clear whether attention can also change the phenomenological appearance of moving stimuli, as to date psychophysical and neuro-imaging studies have specifically shown that attention affects the adaptability of the visual motion system. Here, in five experiments we demonstrated that attention effectively alters the perceived speed of moving stimuli, so that attended stimuli were judged as moving faster than less attended stimuli. However, our results suggest that this change in visual performance was not accompanied by a corresponding change in the phenomenological appearance of the speed of the moving stimulus. |
Matteo Valsecchi; Elena Betta; Massimo Turatto Visual oddballs induce prolonged microsaccadic inhibition Journal Article In: Experimental Brain Research, vol. 177, no. 2, pp. 196–208, 2007. @article{Valsecchi2007a, Eyes never stop moving. Even when asked to maintain the eyes at fixation, the oculomotor system produces small and rapid eye movements called microsaccades, at a frequency of about 1.5–2 per second. The frequency of microsaccades changes when a stimulus is presented in the visual field, showing a stereotyped response pattern consisting of an early inhibition of microsaccades followed by a rebound, before the baseline is reached again. Although this pattern of response has generally been considered as a sort of oculomotor reflex, directional biases in microsaccades have been recently linked to the orienting of spatial attention. In the present study, we show for the first time that regardless of any spatial bias, the pattern of absolute microsaccadic frequency is different for oddball stimuli compared to that elicited by standard stimuli. In a visual-oddball task, the oddball stimuli caused an initial prolonged inhibition of microsaccades, particularly when oddballs had to be explicitly recognized and remembered. The present findings suggest that high-order cognitive processes, other than spatial attention, can influence the frequency of microsaccades. Finally, we also introduce a new method for exploring the visual system response to oddball stimuli. |
Matteo Valsecchi; Massimo Turatto Microsaccadic response to visual events that are invisible to the superior colliculus Journal Article In: Behavioral Neuroscience, vol. 121, no. 4, pp. 786–793, 2007. @article{Valsecchi2007, Even when people think their eyes are still, tiny fixational eye movements, called microsaccades, occur at a rate of about 1 Hz. Whenever a new (and potentially dangerous) event takes place in the visual field, the microsaccadic frequency is at first inhibited and then is followed by a rebound before the frequency returns to baseline. It has been suggested that this inhibition-rebound response is a type of oculomotor reflex mediated by the superior colliculus (SC), a midbrain structure involved in saccade programming. The present study investigated microsaccadic responses to visual events that were invisible to the SC; the authors recorded microsaccadic responses to visual oddballs when the latter were equiluminant with respect to the standard stimuli and when both oddballs and standards were equiluminant with respect to the background. Results showed that microsaccadic responses to oddballs and to standards were virtually identical both when the stimuli were visible to the SC and when they were invisible to it. Although the SC may be the generator of microsaccades, this research suggests that the specific fixational oculomotor activity in response to visual events can be controlled by other brain centers. |
Robert J. Beers The sources of variability in saccadic eye movements Journal Article In: Journal of Neuroscience, vol. 27, no. 33, pp. 8757–8770, 2007. @article{Beers2007, Our movements are variable, but the origin of this variability is poorly understood. We examined the sources of variability in human saccadic eye movements. In two experiments, we measured the spatiotemporal variability in saccade trajectories as a function of movement direction and amplitude. One of our new observations is that the variability in movement direction is smaller for purely horizontal and vertical saccades than for saccades in oblique directions. We also found that saccade amplitude, duration, and peak velocity are all correlated with one another. To determine the origin of the observed variability, we estimated the noise in motor commands from the observed spatiotemporal variability, while taking into account the variability resulting from uncertainty in localization of the target. This analysis revealed that uncertainty in target localization is the major source of variability in saccade endpoints, whereas noise in the magnitude of the motor commands explains a slightly smaller fraction. In addition, there is temporal variability such that saccades with a longer than average duration have a smaller than average peak velocity. This noise model has a large generality because it correctly predicts the variability in other data sets, which contain saccades starting from very different initial locations. Because the temporal noise most likely originates in movement planning, and the motor command noise in movement execution, we conclude that uncertainty in sensory signals and noise in movement planning and execution all contribute to the variability in saccade trajectories. These results are important for understanding how the brain controls movement. |
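A toy version of the variance-partitioning logic described above (total saccade endpoint variability split into target-localization noise and motor noise, assuming independent additive sources) is sketched below. All numbers are invented, and the sketch omits the temporal variability and the planning-versus-execution distinction that the full analysis addresses.

```python
# Toy sketch of partitioning saccade endpoint variance into target-localization
# noise and motor noise, assuming independent, additive Gaussian sources.
# Invented numbers; the full model also separates planning from execution noise
# and accounts for the covariation of amplitude, duration, and peak velocity.
import numpy as np

rng = np.random.default_rng(1)
n_saccades = 500
loc_sd, motor_sd = 0.4, 0.3                        # hypothetical SDs in degrees
target_amplitude = 10.0
endpoints = (target_amplitude
             + rng.normal(0.0, loc_sd, n_saccades)     # target localization error
             + rng.normal(0.0, motor_sd, n_saccades))  # motor command noise

total_var = endpoints.var(ddof=1)
loc_var = loc_sd ** 2                              # assumed known from a separate perceptual task
motor_var = max(total_var - loc_var, 0.0)          # subtraction under independence
print(f"estimated motor SD ≈ {np.sqrt(motor_var):.2f} deg (true value 0.30)")
```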
Fernando Vilariño; Gerard Lacey; Jiang Zhou; Hugh Mulcahy; Stephen Patchett Automatic labeling of colonoscopy video for cancer detection Journal Article In: Pattern Recognition and Image Analysis, no. 1, pp. 290–297, 2007. @article{Vilarino2007, The labeling of large quantities of medical video data by clinicians is a tedious and time consuming task. In addition, the labeling process itself is rigid, since it requires the expert's interaction to classify image contents into a limited number of predetermined categories. This paper describes an architecture to accelerate the labeling step using eye movement tracking data. We report some initial results in training a Support Vector Machine (SVM) to detect cancer polyps in colonoscopy video, and a further analysis of their categories in the feature space using Self Organizing Maps (SOM). Our overall hypothesis is that the clinician's eye will be drawn to the salient features of the image and that sustained fixations will be associated with those features that are associated with disease states. |
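The labeling idea described above (letting clinicians' sustained fixations mark suspicious regions, then training an SVM on image features) could be prototyped roughly as follows. The feature extraction and the fixation-to-label rule are placeholders assumed for illustration; this is not the authors' actual pipeline, and random numbers stand in for real colonoscopy features.

```python
# Rough sketch: derive labels for image patches from gaze dwell time, then train
# an SVM on (placeholder) image features. Not the authors' pipeline; the feature
# matrix and the 300 ms dwell-time rule are assumptions for illustration only.
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_patches, n_features = 400, 64
X = rng.normal(size=(n_patches, n_features))       # stand-in for texture/colour descriptors
dwell = rng.exponential(scale=0.2, size=n_patches) # gaze dwell time on each patch (seconds)
y = (dwell > 0.3).astype(int)                      # sustained fixation -> "suspicious" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```

In the study itself, the learned categories were then explored in feature space with Self Organizing Maps, as noted in the abstract.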
Jyun Cheng Wang; Rong-Fuh Day The effects of attention inertia on advertisements on the WWW Journal Article In: Computers in Human Behavior, vol. 23, no. 3, pp. 1390–1407, 2007. @article{Wang2007b, When viewers browse a web site, they presumably perform the task of seeking information from a sequence of scattered web pages to form a meaningful path. The aim of this study is to explore changes in the distribution of attention to banner advertisements as a viewer advances along a meaningful path and their effects on the advertisements. With the aid of an eye tracker, a laboratory experiment was conducted to observe directly the attention that subjects allocate along meaningful paths. Our results show that at different levels of depth in a meaningful path, the amount of attention allocated to the content of a web page is not the same, regardless of whether attention indexes were based on dwell time or the number of fixations. Theoretically, this experiment successfully generalizes the attentional inertia theory to the web environment and elaborates web advertising research by involving a significant web structural factor. In practice, these findings suggest that web advertising located in the earlier and later phases of a path should be priced higher than advertising in the middle phases because, during these two phases, the audience is more sensitive to the peripheral advertising. |
Z. I. Wang; Louis F. Dell'Osso Being "slow to see" is a latent trait of infantile nystagmus syndrome Journal Article In: Vision Research, vol. 47, no. 11, pp. 1550–1560, 2007. @article{Wang2007, The objective of this study was to investigate the dynamic properties of infantile nystagmus syndrome (INS) that affect visual function; i.e., which factors influence latency of the initial reflexive saccade (Ls) and latency to target acquisition (Lt). We used our behavioral ocular motor system (OMS) model to simulate saccadic responses (in the presence of INS) to target jumps at different times within a single INS cycle and at random times during multiple cycles. We then studied the responses of 4 INS subjects with different waveforms to test the model's predictions. Infrared reflection was used for 1 INS subject, high-speed digital video for 3. We recorded and analyzed human responses to large and small target-step stimuli. We evaluated the following factors: stimulus time within the cycle (Tc), normalized Tc (Tc%), initial orbital position (Po), saccade amplitude, initial retinal error (ei), and final retinal error (ef). The ocular motor simulations were performed in the MATLAB Simulink environment and the analysis was performed in the MATLAB environment using OMLAB software. Both the OMS model and OMtools software are available from http://www.omlab.org. Our data analysis showed that for each subject, Ls was a fixed value that is typically higher than the normal saccadic latency. Although saccadic latency appears somewhat lengthened in INS, the amount is insufficient to cause the "slow-to-see" impression. For Lt, Tc% was the most influential factor for each waveform type. The main refixation strategies employed by INS subjects made use of slow and fast phases and catch-up saccades, or combinations of them. These strategies helped the subjects to foveate effectively after target movement, sometimes at the cost of increased target acquisition time. Foveating or braking saccades intrinsic to the nystagmus waveforms seemed to disrupt the OMS' ability to accurately calculate reflexive saccades' amplitude and refoveate. Our OMS model simulations demonstrated this emergent behavior and predicted the lengthy target acquisition times found in the patient data. |
Zhong I. Wang; Louis F. Dell'Osso; Robert L. Tomsak; Jonathan B. Jacobs Combining recessions (nystagmus and strabismus) with tenotomy improved visual function and decreased oscillopsia and diplopia in acquired downbeat nystagmus and in horizontal infantile nystagmus syndrome Journal Article In: Journal of AAPOS, vol. 11, no. 2, pp. 135–141, 2007. @article{Wang2007a, Purpose: To investigate the effects of combined tenotomy and recession procedures on both acquired downbeat nystagmus and horizontal infantile nystagmus. Methods: Patient 1 had downbeat nystagmus with a chin-down (upgaze) position, oscillopsia, strabismus, and diplopia. Asymmetric superior rectus recessions and inferior rectus tenotomies reduced right hypertropia and rotated both eyes downward. Patient 2 had horizontal infantile nystagmus, a 20° left-eye exotropia, and alternating (abducting-eye) fixation. Lateral rectus recessions and medial rectus tenotomies were performed. Horizontal and vertical eye movements were recorded pre- and postsurgically using high-speed digital video. The eXpanded Nystagmus Acuity Function (NAFX) and nystagmus amplitudes and frequencies were measured. Results: Patient 1: The NAFX peak moved from 10° up to primary position where NAFX values improved 17% and visual acuity increased 25%. Vertical NAFX increased across the -10° to +5° vertical range. Primary-position right hypertropia decreased ∼50%; foveation time per cycle increased 102%; vertical amplitude, oscillopsia, and diplopia were reduced, and frequency was unchanged. Patient 2: Two lateral, narrow high-NAFX regions (due to alternating fixation) became one broad region with a 43% increase in primary position (acuity increased ∼92.3%). Diplopia amplitude decreased; convergence and gaze holding were improved. Primary-position right exotropia was reduced; foveation time per cycle increased 257%; horizontal-component amplitude decreased 45.7%, and frequency remained unchanged. Conclusions: Combining tenotomy with nystagmus or strabismus recession procedures increased NAFX and visual acuities and reduced diplopia and oscillopsia in downbeat nystagmus and infantile nystagmus. |
Benjamin M. Wilkowski; Michael D. Robinson; Robert D. Gordon; Wendy Troop-Gordon Tracking the evil eye: Trait anger and selective attention within ambiguously hostile scenes Journal Article In: Journal of Research in Personality, vol. 41, no. 3, pp. 650–666, 2007. @article{Wilkowski2007, Previous research has shown that trait anger is associated with biases in attention and interpretation, but the temporal relation between these two types of biases remains unresolved. Indeed, two very different models can be derived from the literature. One model proposes that interpretation biases emerge from earlier biases in attention, whereas the other model proposes that hostile interpretations occur quickly, even prior to the allocation of attention to specific cues. Within the context of integrated visual scenes of ambiguously intended harm, the two models make opposite predictions that can be examined using an eye-tracking methodology. The present study (N = 45) therefore tracked participants' allocation of attention to hostile and non-hostile cues in ambiguous visual scenes, and found support for the idea that high-anger individuals make early hostile interpretations prior to encoding hostile and non-hostile cues. The data are important in understanding associations between trait anger and cognitive biases. |
Carrick C. Williams; Alexander Pollatsek Searching for an O in an array of Cs: Eye movements track moment-to-moment processing in visual search Journal Article In: Perception and Psychophysics, vol. 69, no. 3, pp. 372–381, 2007. @article{Williams2007, We examined how closely the underlying cognitive processing in a visual search task guides eye movements by comparing two different search tasks. In the extended search task, participants searched for an O in eight clusters of Landolt Cs with varying gap widths (four characters per cluster, arranged to look like words in text). In the single-cluster task, participants searched a single cluster (identical to the ones in the extended search). The key manipulation was gap size; although gap orientation for the distractors varied within a cluster, gap size was constant within a cluster but differed in size from cluster to cluster. The principal findings were that (1) gaze durations in the extended search were almost completely a function of the difficulty of the cluster (i.e., the gap size of the Cs) and (2) the effect of gap size on gaze durations in the extended search was very similar to its effect on response times in the single-cluster search. Thus, it appears that eye movements in the search task are determined almost exclusively by the ongoing cognitive processing on that cluster. |
A. J. Wills; Aureliu Lavric; G. S. Croft; Timothy L. Hodgson Predictive learning, prediction errors, and attention: evidence from event-related potentials and eye tracking Journal Article In: Journal of Cognitive Neuroscience, vol. 19, no. 5, pp. 843–854, 2007. @article{Wills2007, Prediction error ("surprise") affects the rate of learning: We learn more rapidly about cues for which we initially make incorrect predictions than cues for which our initial predictions are correct. The current studies employ electrophysiological measures to reveal early attentional differentiation of events that differ in their previous involvement in errors of predictive judgment. Error-related events attract more attention, as evidenced by features of event-related scalp potentials previously implicated in selective visual attention (selection negativity, augmented anterior N1). The earliest differences detected occurred around 120 msec after stimulus onset, and distributed source localization (LORETA) indicated that the inferior temporal regions were one source of the earliest differences. In addition, stimuli associated with the production of prediction errors show higher dwell times in an eye-tracking procedure. Our data support the view that early attentional processes play a role in human associative learning. |
Jeremy B. Wilmer; Ken Nakayama Two distinct visual motion mechanisms for smooth pursuit: evidence from individual differences Journal Article In: Neuron, vol. 54, no. 6, pp. 987–1000, 2007. @article{Wilmer2007, Smooth-pursuit eye velocity to a moving target is more accurate after an initial catch-up saccade than before, an enhancement that is poorly understood. We present an individual-differences-based method for identifying mechanisms underlying a physiological response and use it to test whether visual motion signals driving pursuit differ pre- and postsaccade. Correlating moment-to-moment measurements of pursuit over time with two psychophysical measures of speed estimation during fixation, we find two independent associations across individuals. Presaccadic pursuit acceleration is predicted by the precision of low-level (motion-energy-based) speed estimation, and postsaccadic pursuit precision is predicted by the precision of high-level (position-tracking) speed estimation. These results provide evidence that a low-level motion signal influences presaccadic acceleration and an independent high-level motion signal influences postsaccadic precision, thus presenting a plausible mechanism for postsaccadic enhancement of pursuit. |
Glenn F. Wilson; John A. Caldwell; Christopher A. Russell Performance and psychophysiological measures of fatigue effects on aviation related tasks of varying difficulty Journal Article In: International Journal of Aviation Psychology, vol. 17, no. 2, pp. 219–247, 2007. @article{Wilson2007, Fatigue is a well-known stressor in aviation operations, and its interaction with mental workload needs to be understood. Performance, psychophysiological, and subjective measures were collected during performance of three tasks of increasing complexity. A psychomotor vigilance task, a multi-attribute task battery, and an uninhabited air vehicle task were performed five times during one night's sleep loss. EEG, ECG, and pupil area were recorded during task performance. Performance decrements were found at the next-to-last and/or last testing session. The EEG showed concomitant changes. The degree of impairment was at least partially dependent on the task being performed and the performance variable assessed. |
Katsumi Watanabe; Kenji Yokoi Object-based anisotropic mislocalization by retinotopic motion signals Journal Article In: Vision Research, vol. 47, no. 12, pp. 1662–1667, 2007. @article{Watanabe2007, The relative visual positions of briefly flashed stimuli are systematically modified in the presence of motion signals. We have recently shown that the perceived position of a spatially extended flash stimulus is anisotropically shifted toward a single convergent point back along the trajectory of a moving object without a significant change in the perceived shape of the flash [Watanabe, K., & Yokoi, K. (2006). Object-based anisotropies in the flash-lag effect. Psychological Science, 17, 728-735]. In the previous experiment, the moving stimulus moved in both retinotopic and environmental coordinates. In the present study, we examined whether the anisotropic mislocalization depends on retinotopic or object motion signals. When the retinal image of a moving stimulus was rendered stationary by smooth pursuit, the anisotropic pattern of mislocalization was not observed. In contrast, when the retinal image of a stationary stimulus was moved by eye movements, anisotropic mislocalization was observed, with the magnitude of the mislocalization comparable to that in the previous study. In both cases, there was little indication of shape distortion of the flash stimulus. These results demonstrate a clear case of object-based mislocalization by retinotopic motion signals; retinotopic (not object) motion signals distort the perceived positions of visual objects after the shape representations are established. |
Derrick G. Watson; Matthew Inglis Eye movements and time-based selection: Where do the eyes go in preview search? Journal Article In: Psychonomic Bulletin & Review, vol. 14, no. 5, pp. 852–857, 2007. @article{Watson2007, In visual search tasks, presenting one set of distractors (previewing them) before a second set that contains the target improves search efficiency, compared with when all items appear simultaneously. It has been proposed that this preview benefit reflects an attentional bias against old information and toward new information. Here we tested directly whether there was such a bias by measuring eye movement behavior. The main findings were that fixations were biased against, and overall dwell times were shorter on, old stimuli during search in the preview condition. In addition, the initial onset of search was delayed in the preview condition and saccades made during the preview period did not disrupt the ability to prioritize new items. The data demonstrate directly that preview search results in an attentional bias toward new items and against old items. |
Derrick G. Watson; Elizabeth A. Maylor; Lucy A. M. Bruce The role of eye movements in subitizing and counting Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 33, no. 6, pp. 1389–1399, 2007. @article{Watson2007a, Previous work has suggested that eye movements may be necessary for accurate enumeration beyond the subitization range of about 4 items. This study determined the frequency of eye movements normally made during enumeration, their relationship to response times, and whether they are required for accurate performance. This was achieved by monitoring eye movements and comparing performance when observers were allowed to saccade and when they were not. The results showed that (a) there was a sharp increase in saccadic frequency beyond about 4 items (from < 0.2 saccades per item to about 1 per item), and (b) enumeration of fewer than 4 items remained rapid and accurate even when eye movements were prevented, whereas enumeration beyond this became less efficient and sometimes less accurate. The results are discussed in relation to the memory and processing requirements of enumeration tasks. |
Ulrich W. Weger; Albrecht W. Inhoff Long-range regressions to previously read words are guided by spatial and verbal memory Journal Article In: Memory & Cognition, vol. 35, no. 6, pp. 1293–1306, 2007. @article{Weger2007, To examine the nature of the information that guides eye movements to previously read text during reading (regressions), we used a relatively novel technique to request a regression to a particular target word when the eyes reached a predefined location during sentence reading. A regression was to be directed to a close or a distant target when either the first or the second line of a complex two-line sentence was read. In addition, conditions were created that pitted effects of spatial and linguistic distance against each other. Initial regressions were more accurate when the target was spatially near, and effects of spatial distance dominated effects of verbal distance. Initial regressions rarely moved the eyes onto the target, however, and subsequent "corrective" regressions that homed in on the target were subject to general linguistic processing demands, being more accurate during first-line reading than during second-line reading. The results suggest that spatial and verbal memory guide regressions in reading. Initial regressions are primarily guided by fixation-centered spatial memory, and corrective regressions are primarily guided by linguistic knowledge. |
Michael W. von Grünau; Kamala Pilgrim; Rong Zhou Velocity discrimination thresholds for flowfield motions with moving observers Journal Article In: Vision Research, vol. 47, no. 18, pp. 2453–2464, 2007. @article{Gruenau2007, The visual flow field, produced by forward locomotion, contains useful information about many aspects of visually guided behavior. But locomotion itself also contributes to possible distortions by adding head bobbing motions. Here we examine whether vertical head bobbing affects velocity discrimination thresholds and how the system may compensate for the distortions. Vertical head and eye movements while fixating were recorded during standing, walking, or running on a treadmill. Bobbing noise was found to be larger during locomotion. The same observers were equally good at discriminating velocity increases in large accelerating flow fields when standing, walking, or running. Simulated head bobbing was compensated when produced by pursuit eye movements, but not when it was part of the flow field. The results showed that these two contributions are additive and are dealt with independently before they are combined. Distortions produced by body/head oscillations may also be compensated. Visual performance during running was at least as good as during walking, suggesting more efficient compensation mechanisms for running. |
Roman Von Wartburg; Pascal Wurtz; Tobias Pflugshaupt; Thomas Nyffeler; Mathias Lüthi; René M. Müri Size matters: Saccades during scene perception Journal Article In: Perception, vol. 36, no. 3, pp. 355–365, 2007. @article{VonWartburg2007, We investigated the effect of image size on saccade amplitudes. First, in a meta-analysis, relevant results from previous scene perception studies are summarised, suggesting the possibility of a linear relationship between mean saccade amplitude and image size. Forty-eight observers viewed 96 colour scene images scaled to four different sizes, while their eye movements were recorded. Mean and median saccade amplitudes were found to be directly proportional to image size, while the mode of the distribution lay in the range of very short saccades. However, saccade amplitudes expressed as percentages of image size were not constant over the different image sizes; on smaller stimulus images, the relative saccades were found to be larger, and vice versa. In sum, and as far as mean and median saccade amplitudes are concerned, the size of stimulus images is the dominant factor. Other factors, such as image properties, viewing task, or measurement equipment, are only of subordinate importance. Thus, the role of stimulus size has to be reconsidered, in theoretical as well as methodological terms. |
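The reported size-dependence of relative saccade amplitudes (proportionally larger on smaller images) is easiest to reconcile with the amplitude-size relation if that relation is linear with a positive intercept rather than strictly proportional; the short note below spells this out under that assumption, which is ours and not a claim of the paper.

```latex
% Assumption (not from the paper): mean saccade amplitude \bar{A} is a linear
% function of image size S with a positive intercept.
\[
  \bar{A} = aS + b, \qquad a, b > 0
  \quad\Longrightarrow\quad
  \frac{\bar{A}}{S} = a + \frac{b}{S},
\]
% so the relative amplitude \bar{A}/S falls as S grows, matching the finding
% that saccades cover a larger fraction of smaller images. Under strict
% proportionality (b = 0), \bar{A}/S would be constant across image sizes.
```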
Henning U. Voss; Bruce D. McCandliss; Jamshid Ghajar; Minah Suh A quantitative synchronization model for smooth pursuit target tracking Journal Article In: Biological Cybernetics, vol. 96, no. 3, pp. 309–322, 2007. @article{Voss2007, We propose a quantitative model for human smooth pursuit tracking of a continuously moving visual target which is based on synchronization of an internal expectancy model of the target position coupled to the retinal target signal. The model predictions are tested in a smooth circular pursuit eye tracking experiment with transient target blanking of variable duration. In subjects with a high tracking accuracy, the model accounts for smooth pursuit and repeatedly reproduces quantitatively characteristic patterns of the eye dynamics during target blanking. In its simplest form, the model has only one free parameter, a coupling constant. An extended model with a second parameter, a time delay or memory term, accounts for predictive smooth pursuit eye movements which advance the target. The model constitutes an example of synchronization of a complex biological system with perceived sensory signals. |
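As a rough illustration of the kind of model described (an internal expectancy model synchronized to the retinal target signal through a coupling constant), the sketch below simulates phase-coupled circular pursuit with a transient blanking interval. The equations, parameter names, and values are editorial assumptions for illustration only, not the authors' published model.

```python
import numpy as np

# Illustrative sketch only: we assume circular pursuit and a first-order
# phase-coupling rule,
#   d(phi_eye)/dt = omega_internal + k * sin(phi_target - phi_eye),
# where k is a coupling constant and omega_internal is the (slightly
# miscalibrated) rate of the internal expectancy model. During target
# blanking the retinal coupling term is switched off and the internal
# model free-runs. All names and numbers here are assumptions.

dt = 0.001                          # integration step (s)
omega = 2 * np.pi * 0.4             # target angular velocity (0.4 Hz circular pursuit)
omega_internal = 0.95 * omega       # hypothetical internal estimate of the target rate
k = 8.0                             # hypothetical coupling constant
t = np.arange(0.0, 10.0, dt)

phi_target = omega * t              # target phase along the circular trajectory
blank = (t > 4.0) & (t < 5.0)       # transient target blanking (1 s)

phi_eye = np.zeros_like(t)
phi_eye[0] = -0.2                   # start with a small pursuit lag
for i in range(1, len(t)):
    if blank[i]:
        dphi = omega_internal       # no retinal signal: free-run on the internal model
    else:
        dphi = omega_internal + k * np.sin(phi_target[i - 1] - phi_eye[i - 1])
    phi_eye[i] = phi_eye[i - 1] + dphi * dt

lag = phi_target - phi_eye
print("phase lag just before blanking:   %.3f rad" % lag[t < 4.0][-1])
print("phase lag at end of blanking:     %.3f rad" % lag[blank][-1])
print("phase lag 1 s after reappearance: %.3f rad" % lag[t > 6.0][0])
```

In this toy version the lag drifts during blanking because the internal rate is miscalibrated and is pulled back by the coupling term once the target reappears; a delay or memory term, as in the authors' extended model, would additionally allow the eye to run ahead of the target.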
Claudiu Simion; Shinsuke Shimojo Interrupting the cascade: Orienting contributes to decision making even in the absence of visual stimulation Journal Article In: Perception and Psychophysics, vol. 69, pp. 591–595, 2007. @article{Simion2007, Most systematic studies of human decision making approach the subject from a cost analysis point of view and assume that people make the highest utility choice. Very few articles investigate subjective decision making, such as that involving preference, although such decisions are very important for our daily functioning. We have argued (Shimojo, Simion, Shimojo, & Scheier, 2003) that an orienting bias effectively leads to the preference decision by means of a positive feedback loop involving mere exposure and preferential looking. This process manifests itself as a continually increasing gaze bias toward the eventual choice, which we call the gaze cascade effect. In the present study, we interrupt the natural process of preference selection and show that gaze behavior does not change even when the stimuli are removed from observers' visual field. This demonstrates that once started, the involvement of orienting in decision making cannot be stopped and that orienting acts independently of the presence of visual stimuli. We also show that the cascade effect is intrinsically linked to the decision itself and is not triggered simply by a tendency to look at preferred targets. |
Daniel Smilek; Kelly A. Malcolmson; Jonathan S. A. Carriere; Meghan Eller; Donna Kwan; Michael G. Reynolds When "3" is a jerk and "E" is a king: Personifying inanimate objects in synesthesia Journal Article In: Journal of Cognitive Neuroscience, vol. 19, no. 6, pp. 981–992, 2007. @article{Smilek2007, We report a case study of an individual (TE) for whom inanimate objects, such as letters, numbers, simple shapes, and even furniture, are experienced as having rich and detailed personalities. TE reports that her object-personality pairings are stable over time, occur independent of her intentions, and have been there for as long as she can remember. In these respects, her experiences are indicative of synesthesia. Here we show that TE's object-personality pairings are very consistent across test-retest, even for novel objects. A qualitative analysis of TE's personality descriptions revealed that her personifications are extremely detailed and multi-dimensional, and that her personifications of familiar and novel objects differ in specific ways. We also found that TE's eye movements can be biased by the emotional associations she has with letters and numbers. These findings demonstrate that synesthesia can involve complex semantic personifications, which can influence visual attention. Finally, we propose a neural model of normal personification and the unusual personifications that accompany object-personality synesthesia. |
Susan Sullivan; Ted Ruffman; Samuel B. Hutton Age differences in emotion recognition skills and the visual scanning of emotion faces Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 62, no. 1, pp. 53–60, 2007. @article{Sullivan2007, Research suggests that a person's emotion recognition declines with advancing years. We examined whether or not this age-related decline was attributable to a tendency to overlook emotion information in the eyes. In Experiment 1, younger adults were significantly better than older adults at inferring emotions from full faces and eyes, though not from mouths. Using an eye tracker in Experiment 2, we found young adults, in comparison with older adults, to have superior emotion recognition performance and to look proportionately more to eyes than mouths. However, although better emotion recognition performance was significantly correlated with more eye looking in younger adults, the same was not true in older adults. We discuss these results in terms of brain changes with age. |
Yung-Chi Sung; Da-Lun Tang Unconscious processing embedded in conscious processing: Evidence from gaze time on Chinese sentence reading Journal Article In: Consciousness and Cognition, vol. 16, no. 2, pp. 339–348, 2007. @article{Sung2007, The current study aims to separate conscious and unconscious behaviors by employing both online and offline measures while the participants were consciously performing a task. Using an eye-movement tracking paradigm, we observed participants' response patterns for distinguishing within-word-boundary and across-word-boundary reverse errors while reading Chinese sentences (also known as the "word inferiority effect"). The results showed that when the participants consciously detected errors, their gaze time for target words associated with across-word-boundary reverse errors was significantly longer than that for target words associated with within-word-boundary reverse errors. Surprisingly, the same gaze time pattern was found even when the readers were not consciously aware of the reverse errors. The results were statistically robust, providing converging evidence for the feasibility of our experimental paradigm in decoupling offline behaviors from the online, automatic, and unconscious aspects of cognitive processing in reading. |
Zhi-Ming Shen; Wei-Feng Xu; Chao-Yi Li Cue-invariant detection of centre-surround discontinuity by V1 neurons in awake macaque monkey Journal Article In: Journal of Physiology, vol. 583, no. 2, pp. 581–592, 2007. @article{Shen2007, Visual perception of an object depends on the discontinuity between the object and its background, which can be defined by a variety of visual features, such as luminance, colour and motion. While human object perception is largely cue invariant, the extent to which neural mechanisms in the primary visual cortex contribute to cue-invariant perception has not been examined extensively. Here we report that many V1 neurons in the awake monkey are sensitive to the stimulus discontinuity between their classical receptive field (CRF) and non-classical receptive field (nCRF) regardless of the visual feature that defines the discontinuity. The magnitude of this sensitivity is strongly dependent on the strength of nCRF suppression of the cell. These properties of V1 neurons may contribute significantly to cue-invariant object perception. |
Adrian Staub The parser doesn't ignore intransitivity, after all Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 3, pp. 550–569, 2007. @article{Staub2007, Several previous studies (B. C. Adams, C. Clifton, & D. C. Mitchell, 1998; D. C. Mitchell, 1987; R. P. G. van Gompel & M. J. Pickering, 2001) have explored the question of whether the parser initially analyzes a noun phrase that follows an intransitive verb as the verb's direct object. Three eye-tracking experiments examined this issue in more detail. Experiment 1 replicated the finding that readers experience difficulty on this noun phrase in normal reading and found that this difficulty occurs even with intransitive verbs for which a direct object is categorically prohibited. Experiment 2, however, demonstrated that this effect is not due to syntactic misanalysis but to disruption that occurs when a comma is absent at a subordinate clause/main clause boundary. Experiment 3 replicated the finding (M. J. Pickering & M. J. Traxler, 2003; M. J. Traxler & M. J. Pickering, 1996) that when a noun phrase "filler" is an implausible direct object for an optionally transitive relative clause verb, processing difficulty results; however, there was no evidence for such difficulty when the relative clause verb was strictly intransitive. Taken together, the 3 experiments undermine the support for the claim that the parser initially ignores a verb's subcategorization restrictions. |
Bert Steenbergen; Julius Verrel; Andrew M. Gordon Motor planning in congenital hemiplegia Journal Article In: Disability and Rehabilitation, vol. 29, no. 1, pp. 13–23, 2007. @article{Steenbergen2007, PURPOSE: Cerebral Palsy (CP) is a broad definition of a neurological condition in which disorders in movement execution and postural control limit the performance of activities of daily living. In this paper, we first review studies on motor planning in hemiplegic CP. Second, preliminary data of a recent study on eye-hand coordination in participants with hemiplegic CP are presented. Here, the potential role of vision for online and prospective control of action was examined. METHOD: Review and presentation of preliminary data of an eye- and hand movement registration experiment in hemiplegic CP. RESULTS: Deficits in motor planning in hemiplegic CP contribute to limitations of activities of daily living. In the second part, exemplary plots of eye-hand coordination are presented for the affected and unaffected hand in one participant with hemiplegic CP, and for the preferred hand in controls, both as an illustration of the research methodology and to give an impression of the observed gaze patterns. CONCLUSION: Research on CP should not solely focus on low-level aspects of action execution, but also take into account the more high-level aspects of motor control, such as planning. Possible deviations therein may be sought in altered gaze patterns as illustrated in the paper. |
Bobby B. Stojanoski; Matthias Niemeier Feature-based attention modulates the perception of object contours Journal Article In: Journal of Vision, vol. 7, no. 14, pp. 1–11, 2007. @article{Stojanoski2007, Feature-based attention is known to support perception of visual features associated with early and intermediate visual areas. Here we examined the role of feature-based attention in higher levels of object processing. We used a dual-task design to probe perception of poorly attended contour-defined or motion-defined loops while attention was occupied with congruent or incongruent feature detection tasks. Perception of the poorly attended loops was better when they were presented concurrently with a congruent stimulus. However, this effect was eliminated when the primary detection task was made easy, suggesting that the role of task demand in object perception is feature-specific. Our results provide evidence for the contribution of feature-based attention to object perception. |
Raliza S. Stoyanova; Jay Pratt; Adam K. Anderson Inhibition of return to social signals of fear Journal Article In: Emotion, vol. 7, no. 1, pp. 49–56, 2007. @article{Stoyanova2007, The present study examined whether inhibition of return (IOR) is modulated by the fear relevance of the cue. Experiment 1 found that IOR of similar magnitude was produced by neutral faces, fear faces, and luminance-matched cues. To allow a more sensitive measure of endogenously directed attention, Experiment 2 removed a central reorienting cue and more precisely measured the time course of IOR. At stimulus onset asynchronies (SOAs) of 500, 1,000, and 1,500 ms, fear face and luminance-matched cues resulted in similar IOR. These findings suggest that IOR is triggered by event onsets and disregards event value. Views of IOR as an adaptive "foraging facilitator," whereby attention is guided to promote optimal sampling of important environmental events, are discussed. |
Martin Stritzke; Julia Trommershäuser Eye movements during rapid pointing under risk Journal Article In: Vision Research, vol. 47, no. 15, pp. 2000–2009, 2007. @article{Stritzke2007, We recorded saccadic eye movements during visually guided rapid pointing movements under risk. We intended to determine whether saccadic end points are necessarily tied to the goals of rapid pointing movements or whether, when the visual features of a display and the goals of a pointing movement are different, saccades are driven by low-level features of the visual stimulus. Subjects pointed at a stimulus configuration consisting of a target region and a penalty region. Each target hit yielded a gain of points; each penalty hit incurred a loss of points. Late responses were penalized. Either the target or the penalty region was indicated by a disk that differed significantly from the background in luminance, while the other region was indicated by a thin circle. In subsequent experiments, we varied the visual salience of the stimulus configuration and found that manual responses followed near-optimal strategies maximizing expected gain, independent of the salience of the target region. We suggest that the final eye position is partially pre-programmed prior to hand movement initiation. While we found that manipulations of the visual salience of the display determined the end point of the initial saccade, we also found that subsequent saccades are driven by the goal of the hand movement. |
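The "near-optimal strategies maximizing expected gain" claim rests on comparing observed mean pointing end points with the aim point that maximizes expected gain given the subject's own motor variability. The sketch below illustrates that computation with a simple Monte Carlo estimate; the layout, point values, and motor noise are hypothetical stand-ins, not the paper's stimulus parameters.

```python
import numpy as np

# Minimal expected-gain sketch, not the authors' implementation.
# Assumed layout (all values hypothetical): target disk of radius 9 mm at the
# origin worth +100 points, overlapping penalty disk of radius 9 mm centred
# 9 mm to the left worth -500 points, Gaussian motor noise with sd = 4 mm.
rng = np.random.default_rng(0)

R = 9.0                                  # disk radius (mm)
target_centre = np.array([0.0, 0.0])
penalty_centre = np.array([-9.0, 0.0])
gain_target, loss_penalty = 100.0, -500.0
motor_sd = 4.0
n_samples = 20000

def expected_gain(aim_x):
    """Monte Carlo estimate of expected gain for an aim point on the x-axis."""
    aim = np.array([aim_x, 0.0])
    endpoints = aim + rng.normal(0.0, motor_sd, size=(n_samples, 2))
    in_target = np.linalg.norm(endpoints - target_centre, axis=1) < R
    in_penalty = np.linalg.norm(endpoints - penalty_centre, axis=1) < R
    return (gain_target * in_target + loss_penalty * in_penalty).mean()

# Sweep aim points along the axis joining the two disks: the gain-maximizing
# aim point shifts away from the penalty region, to the right of the target centre.
aims = np.linspace(-2.0, 8.0, 41)
gains = [expected_gain(a) for a in aims]
best = aims[int(np.argmax(gains))]
print("aim point maximizing expected gain: %.2f mm right of target centre" % best)
```

Plotting mean manual end points against the gain-maximizing aim point computed this way is one standard way to quantify how close pointing behaviour comes to the optimal strategy under risk.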