All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2013 |
Sabine Born; Ulrich Ansorge; Dirk Kerzel Predictability of spatial and non-spatial target properties improves perception in the pre-saccadic interval Journal Article In: Vision Research, vol. 91, pp. 93–101, 2013. @article{Born2013, In a dual-task paradigm with a perceptual discrimination task and a concurrent saccade task, we examined participants' ability to make use of prior knowledge of a critical property of the perceptual target to improve discrimination. Previous research suggests that during a short time window before a saccade, covert attention is imperatively directed towards the saccade target location. Consequently, discrimination of perceptual targets at the saccade target location is better than at other locations. We asked whether the obligatory pre-saccadic attention shift prevents perceptual benefits arising for perceptual target stimuli with predictable as opposed to non-predictable properties. We compared conditions in which the color or location of the perceptual target was constant to conditions in which those properties varied randomly across trials. In addition to the expected improvements of perception at the saccade target location, we found perception to be better with constant than with random properties of the perceptual target. Thus, color or location information about an upcoming perceptual target facilitates perception even while spatial attention is shifted to the saccade target. The improvement occurred irrespective of the saccade target location, which suggests that the underlying mechanism is independent of the pre-saccadic attention shift, but alternative interpretations are discussed as well. |
Elika Bergelson; Daniel Swingley The acquisition of abstract words by young infants Journal Article In: Cognition, vol. 127, no. 3, pp. 391–397, 2013. @article{Bergelson2013, Young infants' learning of words for abstract concepts like 'all gone' and 'eat,' in contrast to their learning of more concrete words like 'apple' and 'shoe,' may follow a relatively protracted developmental course. We examined whether infants know such abstract words. Parents named one of two events shown in side-by-side videos while their 6-16-month-old infants (n = 98) watched. On average, infants successfully looked at the named video by 10 months, but not earlier, and infants' looking at the named referent increased robustly at around 14 months. Six-month-olds already understand concrete words in this task (Bergelson & Swingley, 2012). A video-corpus analysis of unscripted mother-infant interaction showed that mothers used the tested abstract words less often in the presence of their referent events than they used concrete words in the presence of their referent objects. We suggest that referential uncertainty in abstract words' teaching conditions may explain the later acquisition of abstract than concrete words, and we discuss the possible role of changes in social-cognitive abilities over the 6- to 14-month period. |
Elika Bergelson; Daniel Swingley Young toddlers' word comprehension is flexible and efficient Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e73359, 2013. @article{Bergelson2013a, Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children's gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers' spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice which could be called "cup" or "juice") and pictures that had only one likely name for toddlers (such as "apple"), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age, 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents. |
Susanne Bergert How do our brain hemispheres cooperate to avoid false memories? Journal Article In: Cortex, vol. 49, no. 2, pp. 572–581, 2013. @article{Bergert2013, Memories are not always as reliable as they may appear. The occurrence of false memories can be reduced, however, by enhancing the cooperation between the two brain hemispheres. Yet is the communication from left to right hemisphere as helpful as the information transfer from right to left? To address this question, 72 participants were asked to learn 16 word lists. Applying the Deese-Roediger-McDermott paradigm, the words in each list were associated with an unpresented prototype word. In the test condition, learned words and corresponding prototypes were presented along with non-associated new words, and participants were asked to indicate which of the words they recognized. Crucially, both study and test words were projected to only one hemisphere in order to stimulate each hemisphere separately. It was found that false recognitions occurred significantly less often when the right hemisphere studied and the left hemisphere recognized the stimuli. Moreover, only the right-to-left direction of interhemispheric communication reduced false memories significantly, whereas left-to-right exchange did not. Further analyses revealed that the observed reduction of false memories was not due to an enhanced discrimination sensitivity, but to a stricter response bias. Hence, the data suggest that interhemispheric cooperation does not improve the ability to tell old and new apart, but rather evokes a conservative response tendency. Future studies may narrow down in which cognitive processing steps interhemispheric interaction can change the response criterion. |
Nick Berggren; Anne Richards; Joseph Taylor; Nazanin Derakshan Affective attention under cognitive load: Reduced emotional biases but emergent anxiety-related costs to inhibitory control Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 188, 2013. @article{Berggren2013, Trait anxiety is associated with deficits in attentional control, particularly in the ability to inhibit prepotent responses. Here, we investigated this effect while varying the level of cognitive load in a modified antisaccade task that employed emotional facial expressions (neutral, happy, and angry) as targets. Load was manipulated using a secondary auditory task requiring recognition of tones (low load), or recognition of specific tone pitch (high load). Results showed that load increased antisaccade latencies on trials where gaze toward face stimuli should be inhibited. This effect was exacerbated for high anxious individuals. Emotional expression also modulated task performance on antisaccade trials for both high and low anxious participants under low cognitive load, but did not influence performance under high load. Collectively, results (1) suggest that individuals reporting high levels of anxiety are particularly vulnerable to the effects of cognitive load on inhibition, and (2) support recent evidence that loading cognitive processes can reduce emotional influences on attention and cognition. |
Jean-Baptiste Bernard; Girish Kumar; Jasmine Junge; Susana T. L. Chung The effect of letter-stroke boldness on reading speed in central and peripheral vision Journal Article In: Vision Research, vol. 84, pp. 33–42, 2013. @article{Bernard2013, People with central vision loss often prefer boldface print over normal print for reading. However, little is known about how reading speed is influenced by the letter-stroke boldness of font. In this study, we examined the reliance of reading speed on stroke boldness, and determined whether this reliance differs between the normal central and peripheral vision. Reading speed was measured using the rapid serial visual presentation paradigm, where observers with normal vision read aloud short single sentences presented on a computer monitor, one word at a time. Text was rendered in Courier at six levels of boldness, defined as the stroke-width normalized to that of the standard Courier font: 0.27, 0.72, 1, 1.48, 1.89 and 3.04× the standard. Testing was conducted at the fovea and 10° in the inferior visual field. Print sizes used were 0.8× and 1.4× the critical print size (smallest print size that can be read at the maximum reading speed). At the fovea, reading speed was invariant for the middle four levels of boldness, but dropped by 23.3% for the least bold and the boldest text. At 10° eccentricity, reading speed was virtually the same for all boldness <1, but showed a poorer tolerance to bolder text, dropping by 21.5% for 1.89× boldness and 51% for the boldest (3.04×) text. These results could not be accounted for by the changes in print size or the RMS contrast of text associated with changes in stroke boldness. Our results suggest that contrary to the popular belief, reading speed does not benefit from bold text in the normal fovea and periphery. Excessive increase in stroke boldness may even impair reading speed, especially in the periphery. |
Raymond Bertram; Laura Helle; Johanna K. Kaakinen; Erkki Svedström The effect of expertise on eye movement behaviour in medical image perception Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e66169, 2013. @article{Bertram2013, The present eye-movement study assessed the effect of expertise on eye-movement behaviour during image perception in the medical domain. To this end, radiologists, computed-tomography radiographers and psychology students were exposed to nine volumes of multi-slice, stack-view, axial computed-tomography images from the upper to the lower part of the abdomen with or without abnormality. The images were presented in succession at low, medium or high speed, while the participants had to detect enlarged lymph nodes or other visually more salient abnormalities. The radiologists outperformed both other groups in the detection of enlarged lymph nodes and their eye-movement behaviour also differed from the other groups. Their general strategy was to use saccades of shorter amplitude than the two other participant groups. In the presence of enlarged lymph nodes, they increased the number of fixations on the relevant areas and reverted to even shorter saccades. In volumes containing enlarged lymph nodes, radiologists' fixation durations were longer in comparison to their fixation durations in volumes without enlarged lymph nodes. More salient abnormalities were detected equally well by radiologists and radiographers, with both groups outperforming psychology students. However, to accomplish this, radiologists actually needed fewer fixations on the relevant areas than the radiographers. On the basis of these results, we argue that expert behaviour is manifested in distinct eye-movement patterns of proactivity, reactivity and suppression, depending on the nature of the task and the presence of abnormalities at any given moment. |
Raymond Bertram; Jukka Hyönä The role of hyphens at the constituent boundary in compound word identification: Facilitative for long, detrimental for short compound words Journal Article In: Experimental Psychology, vol. 60, no. 3, pp. 157–163, 2013. @article{Bertram2013a, The current eye-movement study investigated whether a salient segmentation cue like the hyphen facilitates the identification of long and short compound words. The study was conducted in Finnish, where compound words exist in great abundance. The results showed that long hyphenated compounds (musiikki-ilta) are identified faster than concatenated ones (yllätystulos), but short hyphenated compounds (ilta-asu) are identified slower than their concatenated counterparts (kesäsää). This pattern of results is explained by the visual acuity principle (Bertram & Hyönä, 2003): A long compound word does not fully fit in the foveal area, where visual acuity is at its best. Therefore, its identification begins with the access of the initial constituent and this sequential processing is facilitated by the hyphen. However, a short compound word fits in the foveal area, and consequently the hyphen slows down processing by encouraging sequential processing in cases where it is possible to extract and use information of the second constituent as well. |
Hans-Joachim Bieg; Jean-Pierre Bresciani; Heinrich H. Bülthoff; Lewis L. Chuang Saccade reaction time asymmetries during task-switching in pursuit tracking Journal Article In: Experimental Brain Research, vol. 230, no. 3, pp. 271–281, 2013. @article{Bieg2013, We investigate how smooth pursuit eye movements affect the latencies of task-switching saccades. Participants had to alternate their foveal vision between a continuous pursuit task in the display center and a discrete object discrimination task in the periphery. The pursuit task was either carried out by following the target with the eyes only (ocular) or by steering an on-screen cursor with a joystick (oculomanual). We measured participants' saccadic reaction times (SRTs) when foveal vision was shifted from the pursuit task to the discrimination task and back to the pursuit task. Our results show asymmetries in SRTs depending on the movement direction of the pursuit target: SRTs were generally shorter in the direction of pursuit. Specifically, SRTs from the pursuit target were shorter when the discrimination object appeared in the motion direction. SRTs to pursuit were shorter when the pursuit target moved away from the current fixation location. This result was independent of the type of smooth pursuit behavior that was performed by participants (ocular/oculomanual). The effects are discussed in regard to asymmetries in attention and processes that suppress saccades at the onset of pursuit. |
Adam T. Biggs; James R. Brockmole; Jessica K. Witt Armed and attentive: Holding a weapon can bias attentional priorities in scene viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1715–1724, 2013. @article{Biggs2013, The action-specific perception hypothesis (Witt, Current Directions in Psychological Science 20: 201-206, 2011) claims that the environment is represented with respect to potential interactions for objects present within said environment. This investigation sought to extend the hypothesis beyond perceptual mechanisms and assess whether action-specific potential could alter attentional allocation. To do so, we examined a well-replicated attention bias in the weapon focus effect (Loftus, Loftus, & Messo, Law and Human Behaviour 1, 55-62, 1987), which represents the tendency for observers to attend more to weapons than to neutral objects. Our key manipulation altered the anticipated action-specific potential of observers by providing them a firearm while they freely viewed scenes with and without weapons present. We replicated the original weapon focus effect using modern eye tracking and confirmed that the increase in time looking at weapons comes at a cost of less time spent looking at faces. Additionally, observers who held firearms while viewing the various scenes showed a general bias to look at faces over objects, but only if the firearm was in a readily usable position (i.e., pointed at the scenes rather than holstered at one's side). These two effects, weapon focus and the newly found bias to look more at faces when armed, canceled out one another without interacting. This evidence confirms that the action capabilities of the observer alter more than just perceptual mechanisms and that holding a weapon can change attentional priorities. Theoretical and real-world implications are discussed. |
Markus Bindemann; Michael B. Lewis Face detection differs from categorization: Evidence from visual search in natural scenes Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1140–1145, 2013. @article{Bindemann2013, In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face, but varied in the extents to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when this required the search for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers' eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes. |
Patrick G. Bissett; Gordon D. Logan Stop before you leap: Changing eye and hand movements requires stopping Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 941–946, 2013. @article{Bissett2013, The search-step paradigm addresses the processes involved in changing movement plans, usually saccadic eye-movements. Subjects move their eyes to a target (T1) among distractors, but when the target steps to a new location (T2), subjects are instructed to move their eyes directly from fixation to the new location. We ask whether moving to T2 requires a separate stop process that inhibits the movement to T1. It need not. The movement plan for the second response may inhibit the first response. To distinguish these hypotheses, we decoupled the offset of T1 from the onset of T2. If the second movement is sufficient to inhibit the first, then the probability of responding to T1 should depend only on T2 onset. If a separate stop process is required, then the probability of responding to T1 should depend only on T1 offset, which acts as a stop signal. We tested these hypotheses in manual and saccadic search-step tasks and found that the probability of responding to T1 depended most strongly on T1 offset, supporting the hypothesis that changing from one movement plan to another involves a separate stop process that inhibits the first plan. |
Arielle Borovsky; Erin Burns; Jeffrey L. Elman; Julia L. Evans Lexical activation during sentence comprehension in adolescents with history of specific language impairment Journal Article In: Journal of Communication Disorders, vol. 46, no. 5-6, pp. 413–427, 2013. @article{Borovsky2013, One remarkable characteristic of speech comprehension in typically developing (TD) children and adults is the speed with which the listener can integrate information across multiple lexical items to anticipate upcoming referents. Although children with Specific Language Impairment (SLI) show lexical deficits (Sheng & McGregor, 2010) and slower speed of processing (Leonard et al., 2007), relatively little is known about how these deficits manifest in real-time sentence comprehension. In this study, we examine lexical activation in the comprehension of simple transitive sentences in adolescents with a history of SLI and age-matched, TD peers. Participants listened to sentences that consisted of the form, Article-Agent-Action-Article-Theme, (e.g., The pirate chases the ship) while viewing pictures of four objects that varied in their relationship to the Agent and Action of the sentence (e.g., Target, Agent-Related, Action-Related, and Unrelated). Adolescents with SLI were as fast as their TD peers to fixate on the sentence's final item (the Target) but differed in their post-action onset visual fixations to the Action-Related item. Additional exploratory analyses of the spatial distribution of their visual fixations revealed that the SLI group had a qualitatively different pattern of fixations to object images than did the control group. The findings indicate that adolescents with SLI integrate lexical information across words to anticipate likely or expected meanings with the same relative fluency and speed as do their TD peers. However, the failure of the SLI group to show increased fixations to Action-Related items after the onset of the action suggests lexical integration deficits that result in failure to consider alternate sentence interpretations. Learning outcomes: As a result of this paper, the reader will be able to describe several benefits of using eye-tracking methods to study populations with language disorders. They should also recognize several potential explanations for lexical deficits in SLI, including possible reduced speed of processing and degraded lexical representations. Finally, they should recall the main outcomes of this study, including that adolescents with SLI show different timing and location of eye-fixations while interpreting sentences than their age-matched peers. |
S. E. Bosch; Sebastiaan F. W. Neggers; Stefan Van der Stigchel The role of the frontal eye fields in oculomotor competition: Image-guided TMS enhances contralateral target selection Journal Article In: Cerebral Cortex, vol. 23, no. 4, pp. 824–832, 2013. @article{Bosch2013, In order to execute a correct eye movement to a target in a search display, a saccade program toward the target element must be activated, while saccade programs toward distracting elements must be inhibited. The aim of the present study was to elucidate the role of the frontal eye fields (FEFs) in oculomotor competition. Functional magnetic resonance imaging-guided single-pulse transcranial magnetic stimulation (TMS) was administered over either the left FEF, the right FEF, or the vertex (control site) at 3 time intervals after target presentation, while subjects performed an oculomotor capture task. When TMS was applied over the FEF contralateral to the visual field where a target was presented, there was less interference of an ipsilateral distractor compared with FEF stimulation ipsilateral to the target's visual field or TMS over vertex. Furthermore, TMS over the FEFs decreased latencies of saccades to the contralateral visual field, irrespective of whether the saccade was directed to the target or to the distractor. These findings show that single-pulse TMS over the FEFs enhances the selection of a target in the contralateral visual field and decreases saccade latencies to the contralateral visual field. |
Oliver Bott The processing domain of aspectual interpretation Journal Article In: Studies in Linguistics and Philosophy, vol. 93, pp. 195–229, 2013. @article{Bott2013, In the semantic literature lexical aspect is often treated as a property of VPs or even of whole sentences. Does the interpretation of lexical aspect – contrary to the incrementality assumption commonly made in psycholinguistics – have to wait until the verb and all its arguments are present? To address this issue, we conducted an offline study, two self-paced reading experiments and an eyetracking experiment to investigate aspectual mismatch and aspectual coercion in German sentences while manipulating the position of the mismatching or coercing stimulus. Our findings provide evidence that mismatch detection and aspectual repair depend on a complete verb-argument structure. When the verb did not receive all its (minimally required) arguments, no mismatch or coercion effects showed up at the mismatching or coercing stimulus. Effects were delayed until a later point after all the arguments had been encountered. These findings have important consequences for semantic theory and for processing accounts of aspectual semantics. As far as semantic theory is concerned, it has to model lexical aspect as a supralexical property coming into play only at the sentence level. For theories of semantic processing the results are even more striking because they indicate that (at least some) semantic phenomena are processed on a more global level than would be expected under incremental semantic interpretation. |
Mara Breen; Charles Clifton Stress matters revisited: A boundary change experiment Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 10, pp. 1896–1909, 2013. @article{Breen2013, Breen and Clifton (Stress matters: Effects of anticipated lexical stress on silent reading. Journal of Memory and Language, 2011, 64, 153-170) argued that readers' eye movements during silent reading are influenced by the stress patterns of words. This claim was supported by the observation that syntactic reanalysis that required concurrent metrical reanalysis (e.g., a change from the noun form of abstract to the verb form) resulted in longer reading times than syntactic reanalysis that did not require metrical reanalysis (e.g., a change from the noun form of report to the verb form). However, the data contained a puzzle: The disruption appeared on the critical word (abstract, report) itself, although the material that forced the part of speech change did not appear until the next region. Breen and Clifton argued that parafoveal preview of the disambiguating material triggered the revision and that the eyes did not move on until a fully specified lexical representation of the critical word was achieved. The present experiment used a boundary change paradigm in which parafoveal preview of the disambiguating region was prevented. Once again, an interaction was observed: Syntactic reanalysis resulted in particularly long reading times when it also required metrical reanalysis. However, now the interaction did not appear on the critical word, but only following the disambiguating region. This pattern of results supports Breen and Clifton's claim that readers form an implicit metrical representation of text during silent reading. |
Julie Brisson; Marc Mainville; Dominique Mailloux; Christelle Beaulieu; Josette Serres; Sylvain Sirois Pupil diameter measurement errors as a function of gaze direction in corneal reflection eyetrackers Journal Article In: Behavior Research Methods, vol. 45, no. 4, pp. 1322–1331, 2013. @article{Brisson2013, Pupil dilation is a useful, noninvasive technique for measuring the change in cognitive load. Since it is implicit and nonverbal, it is particularly useful with preverbal or nonverbal participants. In cognitive psychology, pupil dilation is most often measured by corneal reflection eye-tracking devices. The present study investigates the effect of gaze position on pupil size estimation by three common eye-tracking systems. The task consisted of a simple object pursuit situation, as a sphere rotated around the display screen. Systematic errors of pupil size estimation were found with all three systems. Implications for task-elicited pupillometry, especially for gaze-contingent studies such as object tracking or reading, are discussed. |
Jon Brock; Samantha Bzishvili Deconstructing Frith and Snowling's homograph-reading task: Implications for autism spectrum disorders Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 9, pp. 1764–1773, 2013. @article{Brock2013, The poor performance of autistic individuals on a test of homograph reading is widely interpreted as evidence for a reduction in sensitivity to context termed "weak central coherence". To better understand the cognitive processes involved in completing the homograph-reading task, we monitored the eye movements of nonautistic adults as they completed the task. Using single trial analysis, we determined that the time between fixating and producing the homograph (eye-to-voice span) increased significantly across the experiment and predicted accuracy of homograph pronunciation, suggesting that participants adapted their reading strategy to minimize pronunciation errors. Additionally, we found evidence for interference from previous trials involving the same homograph. This progressively reduced the initial advantage for dominant homograph pronunciations as the experiment progressed. Our results identify several additional factors that contribute to performance on the homograph reading task and may help to reconcile the findings of poor performance on the test with contradictory findings from other studies using different measures of context sensitivity in autism. The results also undermine some of the broader theoretical inferences that have been drawn from studies of autism using the homograph task. Finally, we suggest that this approach to task deconstruction might have wider applications in experimental psychology. |
Andrew P. Bayliss; Emily Murphy; Claire K. Naughtin; Ada Kritikos; Leonhard Schilbach; Stefanie I. Becker Gaze leading: Initiating simulated joint attention influences eye movements and choice behavior Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 76–92, 2013. @article{Bayliss2013, Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator, the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon and reorienting toward a face are affected by whether that individual face shifts its eyes in a congruent or an incongruent direction in response to the participant's eye movement. Gaze leading also biased affective responses toward the faces and attended objects. These findings demonstrate that leading the eyes of other individuals alters how we explore and evaluate our social environment. |
Stefanie I. Becker Simply shapely: Relative, not absolute shapes are primed in pop-out search Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 5, pp. 845–861, 2013. @article{Becker2013, Visual search is typically faster when the target from the previous trial is repeated than when it changes. This priming effect is commonly attributed to a selection bias for the target feature value or against the nontarget feature value that carries over to the next trial. By contrast, according to a relational account, what is primed in visual search is the target-nontarget relationship, namely the feature that the target has in relation to the features in the nontarget context (e.g., larger, darker, redder), and switch costs occur only when the target-nontarget relations reverse across trials. Here, the relational account was tested against current feature-based views in three eye movement experiments that used different shape search tasks (e.g., geometrical figures varying in the number of corners). For all tested shapes, reversing the target-nontarget relationships produced switch costs of the same magnitude as directly switching the target and nontarget features across trials ("full-switch"). In particular, changing only the nontargets produced large switch costs, even when the target feature was always repeated across trials. By contrast, no switch costs were observed when both the target and nontarget features changed, such that the coarse target-nontarget relations remained constant across trials. These results support the relational account over feature-based accounts of priming and indicate that a target's shape can be encoded relative to the shapes in the nontarget context. |
Stefanie I. Becker; Ulrich Ansorge Higher set sizes in pop-out search displays do not eliminate priming or enhance target selection Journal Article In: Vision Research, vol. 81, pp. 18–28, 2013. @article{Becker2013a, Previous research shows that salient stimuli do not pop out solely by virtue of their feature contrast. Rather, visual selection of a pop-out target is strongly modulated by feature priming: Repeating the target feature (e.g., red) across trials primes attention shifts to the target but delays target selection when the target feature changes (e.g., from red to green). However, it has been argued that priming modulated target selection only because the stimuli were too sparsely packed, suggesting that pop-out is still mostly determined by the target's saliency (i.e., local feature contrast). Here, we tested these different views by measuring the observer's eye movements in search for a colour target (Exp. 1) or size target (Exp. 2), when the target was similar versus dissimilar to the nontargets, and when the displays contained 6 or 12 search items. The results showed that making the target less similar to the nontargets indeed eliminated priming effects in search for colour, but not in search for size. Moreover, increasing the set size neither increased search efficiency nor eliminated feature priming effects. Taken together, the results indicated that priming can still modulate target selection even in search for salient targets. |
Stefanie I. Becker; Charles L. Folk; Roger W. Remington Attentional capture does not depend on feature similarity, but on target-nontarget relations Journal Article In: Psychological Science, vol. 24, no. 5, pp. 634–647, 2013. @article{Becker2013b, What factors determine which stimuli of a scene will be visually selected and become available for conscious perception? The currently prevalent view is that attention operates on specific feature values, so attention will be drawn to stimuli that have features similar to those of the sought-after target. Here, we show that, instead, attentional capture depends on whether a distractor's feature relationships match the target-nontarget relations (e.g., redder). In three spatial-cuing experiments, we found that (a) a cue with the target color (e.g., orange) can fail to capture attention when the cue-cue-context relations do not match the target-nontarget relations (e.g., redder target vs. yellower cue), whereas (b) a cue with the nontarget color can capture attention when its relations match the target-nontarget relations (e.g., both are redder). These results support a relational account in which attention is biased toward feature relationships instead of particular feature values, and show that attentional capture by an irrelevant distractor does not depend on feature similarity, but rather depends on whether the distractor matches or mismatches the target's relative attributes (e.g., relative color). |
Patrice Speeter Beddor; Kevin B. McGowan; Julie E. Boland; Andries W. Coetzee; Anthony Brasher The time course of perception of coarticulation Journal Article In: The Journal of the Acoustical Society of America, vol. 133, no. 4, pp. 2350–2366, 2013. @article{Beddor2013, The perception of coarticulated speech as it unfolds over time was investigated by monitoring eye movements of participants as they listened to words with oral vowels or with late or early onset of anticipatory vowel nasalization. When listeners heard [CVNC] and had visual choices of images of CVNC (e.g., send) and CVC (said) words, they fixated more quickly and more often on the CVNC image when onset of nasalization began early in the vowel compared to when the coarticulatory information occurred later. Moreover, when a standard eye movement programming delay is factored in, fixations on the CVNC image began to occur before listeners heard the nasal consonant. Listeners' attention to coarticulatory cues for velum lowering was selective in two respects: (a) listeners assigned greater perceptual weight to coarticulatory information in phonetic contexts in which [V] but not N is an especially robust property, and (b) individual listeners differed in their perceptual weights. Overall, the time course of perception of velum lowering in American English indicates that the dynamics of perception parallel the dynamics of the gestural information encoded in the acoustic signal. In real-time processing, listeners closely track unfolding coarticulatory information in ways that speed lexical activation. |
Nathalie N. Bélanger; Rachel I. Mayberry; Keith Rayner Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2237–2252, 2013. @article{Belanger2013a, Many deaf individuals do not develop the high-level reading skills that will allow them to fully take part into society. To attempt to explain this widespread difficulty in the deaf population, much research has honed in on the use of phonological codes during reading. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers, though not well supported, still lingers in the literature. We investigated skilled and less-skilled adult deaf readers' processing of orthographic and phonological codes in parafoveal vision during reading by monitoring their eye movements and using the boundary paradigm. Orthographic preview benefits were found in early measures of reading for skilled hearing, skilled deaf, and less-skilled deaf readers, but only skilled hearing readers processed phonological codes in parafoveal vision. Crucially, skilled and less-skilled deaf readers showed a very similar pattern of preview benefits during reading. These results support the notion that reading difficulties in deaf adults are not linked to their failure to activate phonological codes during reading. |
Nathalie N. Bélanger; Keith Rayner Frequency and predictability effects in eye fixations for skilled and less-skilled deaf readers Journal Article In: Visual Cognition, vol. 21, no. 4, pp. 477–497, 2013. @article{Belanger2013, The illiteracy rate in the deaf population has been alarmingly high for several decades, despite the fact that deaf children go through the standard stages of schooling. Much research addressing this issue has focused on word-level processes, but until recently, little research had focused on sentence-level processes. Previous research (Fischler, 1985) investigated word integration within context in college-level deaf and hearing readers in a lexical decision task following incomplete sentences with targets that were congruous or incongruous relative to the preceding context; it was found that deaf readers, as a group, were more dependent on contextual information than their hearing counterparts. The present experiment extended Fischler's results and investigated the relationship between frequency, predictability, and reading skill in skilled hearing, skilled deaf, and less-skilled deaf readers. Results suggest that only less-skilled deaf readers, and not all deaf readers, rely more on contextual cues to boost word processing. Additionally, early effects of frequency and predictability were found for all three groups of readers, without any evidence for an interaction between frequency and predictability. |
Artem V. Belopolsky; Stefan Van der Stigchel Saccades curve away from previously inhibited locations: Evidence for the role of priming in oculomotor competition Journal Article In: Journal of Neurophysiology, vol. 110, no. 10, pp. 2370–2377, 2013. @article{Belopolsky2013, The oculomotor system serves as the basis for representing concurrently competing motor programs. Here, we examine whether the oculomotor system also keeps track of the outcome of competition between target and distractor on the previous trial. Participants had to perform a simple task of making a saccade toward a predefined direction. On two-thirds of the trials, an irrelevant distractor was presented to either the left or right of the fixation. On one-third of the trials, no distractor was present. The results show that on trials without a distractor, saccades curved away from the empty location that was occupied by a distractor on the previous trial. This result was replicated and extended to cases when different saccade directions were used. In addition, we show that repetition of distractor location on the distractor-present trials results in a stronger curvature away and in a shorter saccade latency to the target. Taken together, these results provide strong evidence that the oculomotor system automatically codes and retains locations that had been ignored in the past to bias future behavior. |
Daniel Belyusar; Adam C. Snyder; Hans Peter Frey; Mark R. Harwood; Josh Wallman; John J. Foxe Oscillatory alpha-band suppression mechanisms during the rapid attentional shifts required to perform an anti-saccade task Journal Article In: NeuroImage, vol. 65, pp. 395–407, 2013. @article{Belyusar2013, Neuroimaging has demonstrated anatomical overlap between covert and overt attention systems, although behavioral and electrophysiological studies have suggested that the two systems do not rely on entirely identical circuits or mechanisms. In a parallel line of research, topographically-specific modulations of alpha-band power (~8–14 Hz) have been consistently correlated with anticipatory states during tasks requiring covert attention shifts. These tasks, however, typically employ cue-target-interval paradigms where attentional processes are examined across relatively protracted periods of time and not at the rapid timescales implicated during overt attention tasks. The anti-saccade task, where one must first covertly attend for a peripheral target, before executing a rapid overt attention shift (i.e. a saccade) to the opposite side of space, is particularly well-suited for examining the rapid dynamics of overt attentional deployments. Here, we asked whether alpha-band oscillatory mechanisms would also be associated with these very rapid overt shifts, potentially representing a common neural mechanism across overt and covert attention systems. High-density electroencephalography in conjunction with infra-red eye-tracking was recorded while participants engaged in both pro- and anti-saccade task blocks. Alpha power, time-locked to saccade onset, showed three distinct phases of significantly lateralized topographic shifts, all occurring within a period of less than 1 s, closely reflecting the temporal dynamics of anti-saccade performance. Only two such phases were observed during the pro-saccade task. These data point to substantially more rapid temporal dynamics of alpha-band suppressive mechanisms than previously established, and implicate oscillatory alpha-band activity as a common mechanism across both overt and covert attentional deployments. |
Julia Bender; Kyeong Jin Tark; Benedikt Reuter; Norbert Kathmann; Clayton E. Curtis Differential roles of the frontal and parietal cortices in the control of saccades Journal Article In: Brain and Cognition, vol. 83, no. 1, pp. 1–9, 2013. @article{Bender2013, Although externally as well as internally-guided eye movements allow us to flexibly explore the visual environment, their differential neural mechanisms remain elusive. A better understanding of these neural mechanisms will help us to understand the control of action and to elucidate the nature of cognitive deficits in certain psychiatric populations (e.g. schizophrenia) that show increased latencies in volitional but not visually-guided saccades. Both the superior precentral sulcus (sPCS) and the intraparietal sulcus (IPS) are implicated in the control of eye movements. However, it remains unknown what differential contributions the two areas make to the programming of visually-guided and internally-guided saccades. In this study we tested the hypotheses that sPCS and IPS distinctly encode internally-guided saccades and visually-guided saccades. We scanned subjects with fMRI while they generated visually-guided and internally-guided delayed saccades. We used multi-voxel pattern analysis to test whether patterns of cue related, preparatory and saccade related activation could be used to predict the direction of the planned eye movement. Results indicate that patterns in the human sPCS predicted internally-guided saccades but not visually-guided saccades in all trial periods and patterns in the IPS predicted internally-guided saccades and visually-guided saccades equally well. The results support the hypothesis that the human sPCS and IPS make distinct contributions to the control of volitional eye movements. |
Stephen P. Badham; Claire V. Hutchinson Characterising eye movement dysfunction in myalgic encephalomyelitis/ chronic fatigue syndrome Journal Article In: Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 251, no. 12, pp. 2769–2776, 2013. @article{Badham2013, BACKGROUND: People who suffer from myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) often report that their eye movements are sluggish and that they have difficulties tracking moving objects. However, descriptions of these visual problems are based solely on patients' self-reports of their subjective visual experiences, and there is a distinct lack of empirical evidence to objectively verify their claims. This paper presents the first experimental research to objectively examine eye movements in those suffering from ME/CFS. METHODS: Patients were assessed for ME/CFS symptoms and were compared to age, gender, and education matched controls for their ability to generate saccades and smooth pursuit eye movements. RESULTS: Patients and controls exhibited similar error rates and saccade latencies (response times) on prosaccade and antisaccade tasks. Patients showed relatively intact ability to accurately fixate the target (prosaccades), but were impaired when required to focus accurately in a specific position opposite the target (antisaccades). Patients were most markedly impaired when required to direct their gaze as closely as possible to a smoothly moving target (smooth pursuit). CONCLUSIONS: It is hypothesised that the effects of ME/CFS can be overcome briefly for completion of saccades, but that continuous pursuit activity (accurately tracking a moving object), even for a short time period, highlights dysfunctional eye movement behaviour in ME/CFS patients. Future smooth pursuit research may elucidate and improve diagnosis of ME/CFS. |
Xuejun Bai; Feifei Liang; Hazel I. Blythe; Chuanli Zang; Guoli Yan; Simon P. Liversedge Interword spacing effects on the acquisition of new vocabulary for readers of Chinese as a second language Journal Article In: Journal of Research in Reading, vol. 36, pp. S4–S17, 2013. @article{Bai2013, We examined whether interword spacing would facilitate acquisition of new vocabulary for second language learners of Chinese. Participants' eye movements were measured as they read new vocabulary embedded in sentences during a learning session and a test session. In the learning session, half of the participants read sentences in the traditional unspaced format and half read sentences with interword spacing. In the test session, all participants read unspaced sentences. Participants in the spaced learning group read the target words more quickly than those in the unspaced learning group. This benefit was maintained at test, indicating that the manipulation enhanced learning of the novel words and was not a transient effect limited to occasions when interword spacing was present in the printed text. The insertion of interword spaces may allow readers to form a more fully specified representation of the novel word, or to strengthen connections between representations of the constituent characters and the multi-character word. |
D. A. Baker; N. J. Schweitzer; Evan F. Risko; Jillian M. Ware Visual attention and the neuroimage bias Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e74449, 2013. @article{Baker2013, Several highly-cited experiments have presented evidence suggesting that neuroimages may unduly bias laypeople's judgments of scientific research. This finding has been especially worrisome to the legal community in which neuroimage techniques may be used to produce evidence of a person's mental state. However, a more recent body of work has looked directly at the independent impact of neuroimages on layperson decision-making (both in legal and more general arenas) and has failed to find evidence of bias. To help resolve these conflicting findings, this research uses eye tracking technology to provide a measure of attention to different visual representations of neuroscientific data. Finding an effect of neuroimages on the distribution of attention would provide a potential mechanism for the influence of neuroimages on higher-level decisions. In the present experiment, a sample of laypeople viewed a vignette that briefly described a court case in which the defendant's actions might have been explained by a neurological defect. Accompanying these vignettes was either an MRI image of the defendant's brain, or a bar graph depicting levels of brain activity: two competing visualizations that have been the focus of much of the previous research on the neuroimage bias. We found that, while laypeople differentially attended to neuroimagery relative to the bar graph, this did not translate into differential judgments in a way that would support the idea of a neuroimage bias. |
Daniela Balslev; Bartholomäus Odoj; Hans-Otto Karnath Role of somatosensory cortex in visuospatial attention Journal Article In: Journal of Neuroscience, vol. 33, no. 46, pp. 18311–18318, 2013. @article{Balslev2013, The human somatosensory cortex (S1) is not among the brain areas usually associated with visuospatial attention. However, such a function can be presumed, given the recently identified eye proprioceptive input to S1 and the established links between gaze and attention. Here we investigated a rare patient with a focal lesion of the right postcentral gyrus that interferes with the processing of eye proprioception without affecting the ability to locate visual objects relative to her body or to execute eye movements. As a behavioral measure of spatial attention, we recorded fixation time during visual search and reaction time for visual discrimination in lateral displays. In contrast to a group of age-matched controls, the patient showed a gradient in looking time and in visual sensitivity toward the midline. Because an attention bias in the opposite direction, toward the ipsilesional space, occurs in patients with spatial neglect, in a second study, we asked whether the incidental coinjury of S1 together with the neglect-typical perisylvian lesion leads to a milder neglect. A voxelwise lesion behavior mapping analysis of a group of right-hemisphere stroke patients supported this hypothesis. The effect of an isolated S1 lesion on visual exploration and visual sensitivity as well as the modulatory role of S1 in spatial neglect suggest a role of this area in visuospatial attention. We hypothesize that the proprioceptive gaze signal in S1, although playing only a minor role in locating visual objects relative to the body, affects the allocation of attention in the visual space. |
Niraj Barot; Rebecca J. McLean; Irene Gottlob; Frank A. Proudlock Reading performance in infantile nystagmus Journal Article In: Ophthalmology, vol. 120, no. 6, pp. 1232–1238, 2013. @article{Barot2013, Objective: To characterize reading deficits in infantile nystagmus (IN), to determine optimal font sizes for reading in IN, and to investigate whether visual acuity (VA) and severity of nystagmus are good indicators of reading performance in IN. Design: Prospective cross-sectional study. Participants and Controls: Seventy-one participants with IN (37 idiopathic, 34 with albinism) and 20 age-matched controls. Methods: Reading performance was assessed using Radner reading charts and was compared with near logarithm of the minimum angle of resolution (logMAR) VA, nystagmus intensity, and foveation characteristics as quantified using eye movement recordings. Main Outcome Measures: Reading acuity (smallest readable font size), maximum reading speed, critical print size (font size below which reading is suboptimal), near logMAR VA, nystagmus intensity, and foveation characteristics (using the eXpanded Nystagmus Acuity Function). Results: Using optimal reading conditions, maximum reading speeds were 18.8% slower in albinism and 14.7% slower in idiopathic IN patients compared with controls. Reading acuities were significantly worse (P<0.001) in IN patients compared with controls. Also, the range of font sizes over which reading speeds were less than the optimum were much larger in IN patients compared with controls (P<0.001). Reading acuity was correlated strongly to near VA (r2= 0.74 albinism |
Ana Margarida Barreto Do users look at banner ads on Facebook? Journal Article In: Journal of Research in Interactive Marketing, vol. 7, no. 2, pp. 119–139, 2013. @article{Barreto2013, Purpose – The main purpose of this study was to determine whether users of the online social network site, Facebook, actually look at the ads displayed (briefly, to test the existence of the phenomenon known as "banner blindness" in this website), thus ascertaining the effectiveness of paid advertising, and comparing it with the number of friends' recommendations seen. Design/methodology/approach – In order to achieve this goal, an experiment using eye-tracking technology was administered to a total of 20 participants from a major university in the USA, followed by a questionnaire. Findings – Findings show that online ads attract lower levels of attention than friends' recommendations. A possible explanation for this phenomenon may be related to the fact that ads on Facebook are outside of the F-shaped visual pattern range, causing a state of "banner blindness". Results also show that statistically there is no difference in ads seen and clicked between women and men. Research limitations/implications – The sample type (undergraduate and graduate students) and the sample size (20 participants) inhibit the generalization of the findings to other populations. Practical implications – The paper includes implications for the development of an effective online advertising campaign, as well as some proposed conceptualizations of the terms social network site and advertising, which can be used as platforms for discussion or as standards for future definitions. Originality/value – This study fulfils some identified needs to study advertising effectiveness based on empirical data and to assess banner blindness in other contexts, representative of current internet users' habits. |
Si On Yoon; Sarah Brown-Schmidt Lexical differentiation in language production and comprehension Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 397–416, 2013. @article{Yoon2013, This paper presents the results of three experiments that explore the breadth of the relevant discourse context in language production and comprehension. Previous evidence from language production suggests the relevant context is quite broad, based on findings that speakers differentiate new discourse referents from similar referents discussed in past contexts (Van Der Wege, 2009). Experiment 1 replicated and extended this "lexical differentiation" effect by demonstrating that speakers used two different mechanisms, modification and the use of subordinate-level nouns, to differentiate current from past referents. In Experiments 2 and 3, we examined whether addressees expect speakers to differentiate. The results of these experiments did not support the hypothesis that listeners expect differentiation, either for lexically differentiated modified expressions (Experiment 2) or for subordinate-level nouns (Experiment 3). Taken together, the present findings suggest that the breadth of relevant discourse context differs across language production and comprehension. Speakers show more sensitivity to things they have said before, possibly due to better knowledge of the relevant context. In contrast, listeners have the task of inferring what the speaker believes is the relevant context; this inferential process may be more error-prone. |
Angela H. Young; Johan Hulleman Eye movements reveal how task difficulty moulds visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 168–190, 2013. @article{Young2013, In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size during visual search shrinks with increasing task difficulty. In Experiment 2, we used a gaze-contingent window and confirmed the validity of the size estimates. The experiment also revealed that breakdown in robustness against item motion is related to item-by-item search, rather than search difficulty per se. We argue that visual search is an eye-movement-based process that works on a continuum, from almost parallel (where many items can be processed within a fixation) to completely serial (where only one item can be processed within a fixation). |
Kiwon Yun; Yifan Peng; Dimitris Samaras; Gregory J. Zelinsky; Tamara L. Berg Exploring the role of gaze behavior and object detection in scene understanding Journal Article In: Frontiers in Psychology, vol. 4, pp. 917, 2013. @article{Yun2013, We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation. |
Chuanli Zang; Feifei Liang; Xuejun Bai; Guoli Yan; Simon P. Liversedge Interword spacing and landing position effects during Chinese reading in children and adults Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 720–734, 2013. @article{Zang2013, The present study examined children and adults' eye movement behavior when reading word spaced and unspaced Chinese text. The results showed that interword spacing reduced children and adults' first pass reading times and refixation probabilities indicating spaces between words facilitated word identification. Word spacing effects occurred to a similar degree for both children and adults, though there were differential landing position effects for single and multiple fixation situations in both groups; clear preferred viewing location effects occurred for single fixations, whereas landing positions were closer to word beginnings, and further into the word for adults than children for multiple fixation situations. Furthermore, adults targeted refixations contingent on initial landing positions to a greater degree than did children. Overall, the results indicate that some aspects of children's eye movements during reading show similar levels of maturity to adults, while others do not. |
Michael Zehetleitner; Anja Isabel Koch; Harriet Goschy; Hermann J. Müller Salience-based selection: Attentional capture by distractors less salient than the target Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52595, 2013. @article{Zehetleitner2013, Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. |
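The stochastic race described in the Zehetleitner et al. entry lends itself to a small numerical sketch. The toy simulation below is a hypothetical illustration, not the authors' actual model: the base latency, the linear salience-to-latency mapping, and the Gaussian noise level are all assumed for demonstration. It only shows the qualitative point that when the selection-time distributions of target and distractor overlap, a distractor less salient than the target still captures attention on some proportion of trials.

```python
import random

def selection_time(salience, rng, noise_sd=20.0):
    """One noisy selection time (ms): higher salience -> faster selection."""
    base = 300.0  # hypothetical base latency
    return max(1.0, rng.gauss(base - salience, noise_sd))

def capture_probability(target_salience, distractor_salience,
                        n_trials=10_000, seed=1):
    """Estimate how often the distractor wins the selection race."""
    rng = random.Random(seed)
    captures = 0
    for _ in range(n_trials):
        target_time = selection_time(target_salience, rng)
        distractor_time = selection_time(distractor_salience, rng)
        if distractor_time < target_time:  # distractor selected first
            captures += 1
    return captures / n_trials

# A less salient distractor still captures attention on a minority of trials,
# a more salient one on a majority: capture is graded, not all-or-none.
print(capture_probability(100, 80))   # distractor less salient than target
print(capture_probability(100, 120))  # distractor more salient than target
```

Under these assumed parameters, capture probability varies smoothly with the salience of the distractor relative to the target, mirroring the paper's conclusion that selection is stochastic rather than winner-take-all.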
Semir Zeki; Jonathan Stutters Functional specialization and generalization for grouping of stimuli based on colour and motion Journal Article In: NeuroImage, vol. 73, pp. 156–166, 2013. @article{Zeki2013, This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum where the activity resulting from the two types of grouping overlapped. This suggests (a) that the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. |
Gregory J. Zelinsky; Hossein Adeli; Yifan Peng; Dimitris Samaras Modelling eye movements in a categorical search task Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–13, 2013. @article{Zelinsky2013, We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. |
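The classifier-to-priority-map step described in the Zelinsky et al. entry can be sketched compactly. This is a hypothetical illustration rather than the TAM implementation: the logistic squashing of classifier distances and the simple argmax fixation rule are assumed stand-ins for the model's actual machinery (which also handles target-absent decisions and collicular blurring).

```python
import math

def evidence_map(decision_values):
    """Squash signed classifier distances into [0, 1] evidence values.

    `decision_values` is a 2-D grid of distances from a classification
    boundary (positive = more target-like); the logistic mapping is an
    assumed choice for illustration.
    """
    return [[1.0 / (1.0 + math.exp(-d)) for d in row]
            for row in decision_values]

def next_fixation(prob_map):
    """Fixate the location carrying the strongest target-category evidence."""
    best, best_loc = -1.0, (0, 0)
    for r, row in enumerate(prob_map):
        for c, p in enumerate(row):
            if p > best:
                best, best_loc = p, (r, c)
    return best_loc

# Toy 2x2 grid of classifier distances: the strongly positive cell wins.
grid = [[-2.0, 0.5],
        [3.0, -1.0]]
print(next_fixation(evidence_map(grid)))  # -> (1, 0)
```

In the full model the map would cover every pixel of the search image and be resampled after each simulated fixation; the sketch only conveys how boundary distances can prioritize candidate fixation locations.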
Younes Zerouali; Jean Marc Lina; Boutheina Jemel Optimal eye-gaze fixation position for face-related neural responses Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e60128, 2013. @article{Zerouali2013, It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template. |
Wei Zhou; Reinhold Kliegl; Ming Yan A validation of parafoveal semantic information extraction in reading Chinese Journal Article In: Journal of Research in Reading, vol. 36, no. SUPPL.1, pp. S51–S63, 2013. @article{Zhou2013a, Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words actually were located in foveal vision when readers' eyes landed close to the end of the pretarget words. None of the previous studies has completely ruled out the possibility that the semantic preview effects might mainly arise from these foveally processed preview words. Consequently, whether the previously observed positive evidence for parafoveal semantic processing still holds has been called into question. Using linear mixed models, we demonstrate in this study that semantic preview benefit from word N+1 decreased if fixation on pretarget word N was close to the preview. We argue that parafoveal semantic processing is not a consequence of foveally processed preview words. |
Weina Zhu; Jan Drewes; Karl R. Gegenfurtner Animal detection in natural images: effects of color and image database Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75816, 2013. @article{Zhu2013, The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. |
Xiao Lin Zhu; Shu Ping Tan; Fu De Yang; Wei Sun; Chong Sheng Song; Jie Feng Cui; Yan Li Zhao; Feng Mei Fan; Ya Jun Li; Yun Long Tan; Yi Zhuang Zou Visual scanning of emotional faces in schizophrenia Journal Article In: Neuroscience Letters, vol. 552, pp. 46–51, 2013. @article{Zhu2013a, This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they showed significantly more direct fixations that lasted longer to interest areas (IAs), such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces. These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters. |
Eckart Zimmermann The reference frames in saccade adaptation Journal Article In: Journal of Neurophysiology, vol. 109, pp. 1815, 2013. @article{Zimmermann2013a, Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Spatial position information accumulates steadily over time Journal Article In: Journal of Neuroscience, vol. 33, no. 47, pp. 18396–18401, 2013. @article{Zimmermann2013, One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked. |
Ming Yan; Jinger Pan; Jochen Laubrock; Reinhold Kliegl; Hua Shu Parafoveal processing efficiency in rapid automatized naming: A comparison between Chinese normal and dyslexic children Journal Article In: Journal of Experimental Child Psychology, vol. 115, no. 3, pp. 579–589, 2013. @article{Yan2013, Dyslexic children are known to be slower than normal readers in rapid automatized naming (RAN). This suggests that dyslexics encounter local processing difficulties, which presumably induce a narrower perceptual span. Consequently, dyslexics should suffer less than normal readers from removing parafoveal preview. Here we used a gaze-contingent moving window paradigm in a RAN task to experimentally test this prediction. Results indicate that dyslexics extract less parafoveal information than control children. We propose that more attentional resources are recruited to the foveal processing because of dyslexics' less automatized translation of visual symbols into phonological output, thereby causing a reduction of the perceptual span. This in turn leads to less efficient preactivation of parafoveal information and, hence, more difficulty in processing the next foveal item. |
Hongsheng Yang; Fang Wang; Nianjun Gu; Xiao Gao; Guang Zhao The cognitive advantage for one's own name is not simply familiarity: An eye-tracking study Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1176–1180, 2013. @article{Yang2013, An eye-tracking technique and a visual search task were employed to examine the cognitive advantage for one's own name and the possible effect of familiarity on this advantage. The results showed that fewer saccades and an earlier start time of first fixations on the target were associated with trials in which participants were asked to search for their own name, as compared to searching for personally familiar or famous names. In addition, the results also demonstrated faster response times and higher accuracy in the former kind of trials. Taken together, these findings provide important evidence that one's own name has the potential to capture attention and that familiarity cannot account for this advantage. |
Jinmian Yang Preview effects of plausibility and character order in reading Chinese transposed words: evidence from eye movements Journal Article In: Journal of Research in Reading, vol. 36, no. SUPPL.1, pp. S18–S34, 2013. @article{Yang2013a, The current paper examined the role of plausibility information in the parafovea for Chinese readers by using two-character transposed words (in which the order of the component characters is reversed but which still form words). In two eye-tracking experiments, readers received a preview of a target word that was (1) identical to the target word, (2) a reverse word that was the target word with the order of its characters reversed, or (3) a control word different from the target word. Reading times on target words were comparable between the identical and the reverse preview conditions when the reverse preview words were plausible. This plausibility preview effect was independent of whether the reverse word shared the meaning with the target word or not. Furthermore, the reverse preview words yielded shorter fixation durations than the control preview words. Implications of these results for preview processing during Chinese reading are discussed. |
Zhou Yang; Todd Jackson; Hong Chen Effects of chronic pain and pain-related fear on orienting and maintenance of attention: An eye movement study Journal Article In: Journal of Pain, vol. 14, no. 10, pp. 1148–1157, 2013. @article{Yang2013b, In this study, effects of chronic pain and pain-related fear on orienting and maintenance of attention toward pain stimuli were evaluated by tracking eye movements within a dot-probe paradigm. The sample comprised matched chronic pain (n = 24) and pain-free (n = 24) groups, each of which included lower and higher fear of pain subgroups. Participants completed a dot-probe task wherein eye movements were assessed during the presentation of sensory pain-neutral, health catastrophe-neutral, and neutral-neutral word pairs. Higher fear of pain levels were associated with biases in 1) directing initial gaze toward health catastrophe words and, among participants with chronic pain, 2) subsequent avoidance of threat as reflected by shorter first fixation durations on health catastrophe words compared to pain-free cohorts. As stimulus word pairs persisted for 2,000 ms, no group differences were observed for overall gaze durations or reaction times to probes that followed. In sum, this research identified specific biases in visual attention related to fear of pain and chronic pain during early stages of information processing that were not evident on the basis of later behavior responses to probes. Perspective: Effects of chronic pain and fear of pain on attention were examined by tracking eye movements within a dot-probe paradigm. Heightened fear of pain corresponded to biases in initial gaze toward health catastrophe words and, among participants with chronic pain, subsequent gaze shifts away from these words. No reaction time differences emerged. |
Lok-Kin Yeung; Jennifer D. Ryan; Rosemary A. Cowell; Morgan D. Barense Recognition memory impairments caused by false recognition of novel objects Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 4, pp. 1384–1397, 2013. @article{Yeung2013, A fundamental assumption underlying most current theories of amnesia is that memory impairments arise because previously studied information either is lost rapidly or is made inaccessible (i.e., the old information appears to be new). Recent studies in rodents have challenged this view, suggesting instead that under conditions of high interference, recognition memory impairments following medial temporal lobe damage arise because novel information appears as though it has been previously seen. Here, we developed a new object recognition memory paradigm that distinguished whether object recognition memory impairments were driven by previously viewed objects being treated as if they were novel or by novel objects falsely recognized as though they were previously seen. In this indirect, eyetracking-based passive viewing task, older adults at risk for mild cognitive impairment showed false recognition to high-interference novel items (with a significant degree of feature overlap with previously studied items) but normal novelty responses to low-interference novel items (with a lower degree of feature overlap). The indirect nature of the task minimized the effects of response bias and other memory-based decision processes, suggesting that these factors cannot solely account for false recognition. These findings support the counterintuitive notion that recognition memory impairments in this memory-impaired population are not characterized by forgetting but rather are driven by the failure to differentiate perceptually similar objects, leading to the false recognition of novel objects as having been seen before. |
Chun Po Yin; Feng-Yang Kuo A study of how information system professionals comprehend indirect and direct speech acts in project communication Journal Article In: IEEE Transactions on Professional Communication, vol. 56, no. 3, pp. 226–241, 2013. @article{Yin2013, Research problem: Indirect communication is prevalent in business communication practices. For information systems (IS) projects that require professionals from multiple disciplines to work together, the use of indirect communication may hinder successful design, implementation, and maintenance of these systems. Drawing on the Speech Act Theory (SAT), this study investigates how direct and indirect speech acts may influence language comprehension in the setting of communication problems inherent in IS projects. Research questions: (1) Do participating subjects, who are IS professionals, differ in their comprehension of indirect and direct speech acts? (2) Do participants display different attention processes in their comprehension of indirect and direct speech acts? (3) Do participants' attention processes influence their comprehension of indirect and direct speech acts? Literature review: We review two relevant areas of theory—polite speech acts in professional communication and SAT. First, a broad review that focuses on literature related to the use of polite speech acts in the workplace and in information system (IS) projects suggests the importance of investigating speech acts by professionals. In addition, the SAT provides the theoretical framework guiding this study and the development of hypotheses. Methodology: The current study uses a quantitative approach. A between-groups experiment design was employed to test how direct and indirect speech acts influence the language comprehension of participants. Forty-three IS professionals participated in the experiment. 
In addition, through the use of eye-tracking technology, this study captured the attention process and analyzed the relationship between attention and comprehension. Results and discussion: The results show that the directness of speech acts significantly influences participants' attention process, which, in turn, significantly affects their comprehension. In addition, the findings indicate that indirect speech acts, if employed by IS professionals to communicate with others, may easily be distorted or misunderstood. Professionals and managers of organizations should be aware that effective communication in interdisciplinary projects, such as IS development, is not easy, and that reliance on polite or indirect communication may inhibit the generation of valid information. |
En Zhang; Gong-Liang Zhang; Wu Li Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping Journal Article In: European Journal of Neuroscience, vol. 38, no. 12, pp. 3758–3767, 2013. @article{Zhang2013, Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. 
Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training. |
Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B. Jonas Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis Journal Article In: British Journal of Ophthalmology, vol. 97, no. 7, pp. 941–942, 2013. @article{Zhang2013a, With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) viewing versus two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue. |
Li Zhang; Ya-Qin Zhang; Jing-Shang Zhang; Liang Xu; Jost B. Jonas Visual fatigue and discomfort after stereoscopic display viewing Journal Article In: Acta Ophthalmologica, vol. 91, no. 2, pp. 149–153, 2013. @article{Zhang2013b, Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that either used the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology using shutter glasses.; Methods: During this observational cross-over study performed at two subsequent days, a video was watched by 30 subjects (age: 20-30 years). Half of the participants watched the screen with a pattern retard three-dimensional display at the first day and a shutter glasses three-dimensional display at the second day, and reverse. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after viewing the movie. 
Additionally, a subjective score for visual fatigue was given.; Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recover point (p = 0.001), negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02) after viewing the shutter glasses three-dimensional display than after viewing the pattern retard three-dimensional display.; Conclusions: Stereoscopic viewing using pattern retard (polarized) three-dimensional displays as compared with stereoscopic viewing using shutter glasses three-dimensional displays resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively. |
Ruyuan Zhang; Oh-Sang Kwon; Duje Tadin Illusory movement of stationary stimuli in the visual periphery: Evidence for a strong centrifugal prior in motion processing Journal Article In: Journal of Neuroscience, vol. 33, no. 10, pp. 4415–4423, 2013. @article{Zhang2013c, Visual input is remarkably diverse. Certain sensory inputs are more probable than others, mirroring statistical regularities of the visual environment. The visual system exploits many of these regularities, resulting, on average, in better inferences about visual stimuli. However, by incorporating prior knowledge into perceptual decisions, visual processing can also result in perceptions that do not match sensory inputs. Such perceptual biases can often reveal unique insights into underlying mechanisms and computations. For example, a prior assumption that objects move slowly can explain a wide range of motion phenomena. The prior on slow speed is usually rationalized by its match with visual input, which typically includes stationary or slow moving objects. However, this only holds for foveal and parafoveal stimulation. The visual periphery tends to be exposed to faster motions, which are biased toward centrifugal directions. Thus, if prior assumptions derive from experience, peripheral motion processing should be biased toward centrifugal speeds. Here, in experiments with human participants, we support this hypothesis and report a novel visual illusion where stationary objects in the visual periphery are perceived as moving centrifugally, while objects moving as fast as 7°/s toward fovea are perceived as stationary. These behavioral results were quantitatively explained by a Bayesian observer that has a strong centrifugal prior. This prior is consistent with both the prevalence of centrifugal motions in the visual periphery and a centrifugal bias of direction tuning in cortical area MT, supporting the notion that visual processing mirrors its input statistics. |
Jing Zhou; Adam Reeves; Scott N. J. Watamaniuk; Stephen J. Heinen Shared attention for smooth pursuit and saccades Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–12, 2013. @article{Zhou2013, Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared. |
Felicity D. A. Wolohan; Sarah J. V. Bennett; Trevor J. Crawford Females and attention to eye gaze: Effects of the menstrual cycle Journal Article In: Experimental Brain Research, vol. 227, no. 3, pp. 379–386, 2013. @article{Wolohan2013, It is well known that an observer will attend to the location cued by another's eye gaze and that in some circumstances, this effect is enhanced when the emotion expressed is threat-related. This study explored whether attention to the gaze of threat-related faces is potentiated in the luteal phase of the menstrual cycle when detection of threat is suggested to be enhanced, compared to the follicular phase. Female participants were tested on a gaze cueing task in their luteal (N = 13) or follicular phase (N = 15). Participants were presented with various emotional expressions with an averted eye gaze that was either spatially congruent or incongruent with a forthcoming target. Females in the luteal phase responded faster overall to targets on trials with a 200-ms stimulus onset asynchrony interval. The results suggest that during the luteal phase, females show a general and automatic hypersensitivity to respond to stimuli associated with socially and emotionally relevant cues. This may be a part of an adaptive biological mechanism to protect foetal development. |
Jason H. Wong; Matthew S. Peterson What we remember affects how we see: Spatial working memory steers saccade programming Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 2, pp. 308–321, 2013. @article{Wong2013, Relationships between visual attention, saccade programming, and visual working memory have been hypothesized for over a decade. Awh, Jonides, and Reuter-Lorenz (Journal of Experimental Psychology: Human Perception and Performance 24(3):780-90, 1998) and Awh et al. (Psychological Science 10(5):433-437, 1999) proposed that rehearsing a location in memory also leads to enhanced attentional processing at that location. In regard to eye movements, Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) found that holding a location in working memory affects saccade programming, albeit negatively. In three experiments, we attempted to replicate the findings of Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) and determine whether the spatial memory effect can occur in other saccade-cuing paradigms, including endogenous central arrow cues and exogenous irrelevant singletons. In the first experiment, our results were the opposite of those in Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009), in that we found facilitation (shorter saccade latencies) instead of inhibition when the saccade target matched the region in spatial working memory. In Experiment 2, we sought to determine whether the spatial working memory effect would generalize to other endogenous cuing tasks, such as a central arrow that pointed to one of six possible peripheral locations. As in Experiment 1, we found that saccade programming was facilitated when the cued location coincided with the saccade target. In Experiment 3, we explored how spatial memory interacts with other types of cues, such as a peripheral color singleton target or irrelevant onset. 
In both cases, the eyes were more likely to go to either singleton when it coincided with the location held in spatial working memory. On the basis of these results, we conclude that spatial working memory and saccade programming are likely to share common overlapping circuitry. |
Heather Cleland Woods; Christoph Scheepers; K. A. Ross; Colin A. Espie; Stephany M. Biello What are you looking at? Moving toward an attentional timeline in insomnia: A novel semantic eye tracking study Journal Article In: Sleep, vol. 36, no. 10, pp. 1491–1499, 2013. @article{Woods2013, STUDY OBJECTIVES: To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology which is novel to sleep research to obtain a continual record of where the eyes, and therefore attention, are being allocated with regard to sleep and neutral stimuli. DESIGN: A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep positive, sleep negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS: Sleep group classification was determined by screening interview and PSQI (> 8 = insomnia, < 3 = good sleeper) score. MEASUREMENTS AND RESULTS: Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good sleep controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups, and the largest effect sizes seen in the insomnia group. CONCLUSIONS: This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep, particularly negative, stimuli but is suggestive of a neurocognitive deficit in line with recent research. |
Nicola M. Wöstmann; Désirée S. Aichert; Anna Costa; Katya Rubia; Hans-Jürgen Möller; Ulrich Ettinger Reliability and plasticity of response inhibition and interference control Journal Article In: Brain and Cognition, vol. 81, no. 1, pp. 82–94, 2013. @article{Woestmann2013, This study investigated the internal reliability, temporal stability and plasticity of commonly used measures of inhibition-related functions. Stop-signal, go/no-go, antisaccade, Simon, Eriksen flanker, Stroop and Continuous Performance tasks were administered twice to 23 healthy participants over a period of approximately 11 weeks in order to assess test-retest correlations, internal consistency (Cronbach's alpha), and systematic between- as well as within-session performance changes. Most of the inhibition-related measures showed good test-retest reliabilities and internal consistencies, with the exception of the stop-signal reaction time measure, which showed poor reliability. Generally, no systematic performance changes were observed across the two assessments, with the exception of four variables of the Eriksen flanker, Simon and Stroop tasks, which showed reduced variability of reaction time and an improvement in the response time for incongruent trials at the second assessment. Predominantly stable performance within one test session was shown for most measures. Overall, these results are informative for studies with designs requiring temporally stable parameters, e.g., genetic or longitudinal treatment studies. |
Christiane Wotschack; Reinhold Kliegl Reading strategy modulates parafoveal-on-foveal effects in sentence reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 548–562, 2013. @article{Wotschack2013, Task demands and individual differences have been linked reliably to word skipping during reading. Such differences in fixation probability may imply a selection effect for multivariate analyses of eye-movement corpora if selection effects correlate with word properties of skipped words. For example, with fewer fixations on short and highly frequent words the power to detect parafoveal-on-foveal effects is reduced. We demonstrate that increasing the fixation probability on function words with a manipulation of the expected difficulty and frequency of questions reduces an age difference in skipping probability (i.e., old adults become comparable to young adults) and helps to uncover significant parafoveal-on-foveal effects in this group of old adults. We discuss implications for the comparison of results of eye-movement research based on multivariate analysis of corpus data with those from display-contingent manipulations of target words. |
Timothy J. Wright; Walter R. Boot; Chelsea S. Morgan Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness Journal Article In: Acta Psychologica, vol. 144, no. 1, pp. 6–11, 2013. @article{Wright2013, Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB. |
Chia-Chien Wu; Eileen Kowler Timing of saccadic eye movements during visual search for multiple targets Journal Article In: Journal of Vision, vol. 13, no. 11, pp. 11–11, 2013. @article{Wu2013, Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. |
Esther X. W. Wu; Syed O. Gilani; Jeroen J. A. Boxtel; Ido Amihai; Fook K. Chua; Shih-Cheng Yen Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 17–17, 2013. @article{Wu2013a, Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis. |
Yan Jing Wu; Filipe Cristino; Charles Leek; Guillaume Thierry Non-selective lexical access in bilinguals is spontaneous and independent of input monitoring: Evidence from eye tracking Journal Article In: Cognition, vol. 129, no. 2, pp. 418–425, 2013. @article{Wu2013b, Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing. |
Chenjiang Xie; Tong Zhu; Chunlin Guo; Yimin Zhang Measuring IVIS impact to driver by on-road test and simulator experiment Journal Article In: Procedia Social and Behavioral Sciences, vol. 96, pp. 1566–1577, 2013. @article{Xie2013, This work examined the effects of using in-vehicle information systems (IVIS) on drivers in an on-road test and a simulator experiment. Twelve participants took part. In the on-road test, drivers performed a driving task with a voice-prompt or non-voice-prompt navigation device mounted in different positions. In the simulator experiment, secondary tasks, including cognitive, visual and manual tasks, were performed in a driving simulator. Subjective ratings were used to assess drivers' mental workload in both the on-road test and the simulator experiment, and the impact of task complexity and reaction mode was investigated. The results of the test and the simulation showed that position 1 was more comfortable for drivers than the other two positions and caused less mental load, a result the drivers' subjective ratings tended to support. IVIS with voice prompts placed less visual demand on drivers. Mental load grew as task difficulty increased, and a cognitive task requiring a manual reaction caused higher mental load than a cognitive task that did not. These results may have practical implications for in-vehicle information system design. |
Buyun Xu; James W. Tanaka Does face inversion qualitatively change face processing: An eye movement study using a face change detection task Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 1–16, 2013. @article{Xu2013, Understanding the Face Inversion Effect is important for the study of face processing. Some researchers believe that the processing of inverted faces is qualitatively different from the processing of upright faces because inversion leads to a disproportionate performance decrement on the processing of different kinds of face information. Other researchers believe that the difference is quantitative because the processing of all kinds of facial information is less efficient due to the change in orientation and thus, the performance decrement is not disproportionate. To address the Qualitative and Quantitative debate, the current study employed a response-contingent, change detection paradigm to study eye movement during the processing of upright and inverted faces. In this study, configural and featural information were parametrically and independently manipulated in the eye and mouth region of the face. The manipulations for configural information involved changing the interocular distance between the eyes or the distance between the mouth and the nose. The manipulations for featural information involved changing the size of the eyes or the size of the mouth. The main results showed that change detection was more difficult in inverted than upright faces. Specifically, performance declined when the manipulated change occurred in the mouth region, despite the greater efforts allocated to the mouth region. Moreover, compared to upright faces where fixations were concentrated on the eyes and nose regions, inversion produced a higher concentration of fixations on the nose and mouth regions. 
Finally, change detection performance was better when the last fixation prior to response was located on the region of change, and the relationship between last fixation location and accuracy was stronger for inverted than upright faces. These findings reinforce the connection between eye movements and face processing strategies, and suggest that face inversion produces a qualitative disruption of looking behavior in the mouth region. |
Bipin Indurkhya; Amitash Ojha An empirical study on the role of perceptual similarity in visual metaphors and creativity Journal Article In: Metaphor and Symbol, vol. 28, no. 4, pp. 233–253, 2013. @article{Indurkhya2013, We investigate the role of perceptual similarity in visual metaphor comprehension process. In visual metaphors, perceptual features of the source and the target are objectively present as images. Moreover, to determine perceptual similarity, we use an image-based search system that computes similarity based on low-level perceptual features. We hypothesize that perceptual similarity at the level of color, shape, texture, orientation, and the like, between the source and the target image facilitates metaphorical comprehension and aids creative interpretation. We present three experiments, two of which are eye-movement studies, to demonstrate that in the interpretation and generation of visual metaphors, perceptual similarity between the two images is recognized at a subconscious level, and facilitates the search for creative conceptual associations in terms of emergent features. We argue that the capacity to recognize perceptual similarity, considered to be a hallmark of creativity, plays a major role in the creative understanding of metaphors. |
David E. Irwin; Glyn W. Humphreys Visual marking across eye blinks Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 1, pp. 128–134, 2013. @article{Irwin2013, Visual search for a conjunction target can be made efficient by presenting one initial set of distractors as a preview, prior to the onset of the other items in the search display (Watson & Humphreys, Psychological Review 104:90-122, 1997). However, this "preview advantage" is lost if the initial items are offset for a brief period before onsetting again with the search display (Kunar, Humphreys, & Smith, Psychological Science 14:181-185, 2003). Researchers have long disputed whether the preview advantage reflects a process of internally coding and suppressing the old items or of the onset of the new items capturing attention (Donk & Theeuwes, Perception & Psychophysics 63:891-900, 2001). In this study, we assessed whether an internally driven blink (in which participants close their eyes) acts in the same manner as an external blink produced by offsetting and then onsetting the preview. In the novel blink conditions, participants searched feature, conjunction, and preview displays after being cued to blink their eyes. The search displays were presented during the eye blink, and so were immediately available once participants opened their eyes. Having participants make an eye blink generally slowed search but had no effect on the search slopes. In contrast, imposing an externally driven blink disrupted preview search. The data indicated that visual attention can compensate for internally driven blinks, and this does not lead to the loss of the representations of distractors across time. Moreover, efficient preview search occurred when the search items had no abrupt onsets, demonstrating that onsets of new search items are not critical for the preview benefit. |
Eve A. Isham; Joy J. Geng Looking time predicts choice but not aesthetic value Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e71698, 2013. @article{Isham2013, Although visual fixations are commonly used to index stimulus-driven or internally-determined preference, recent evidence suggests that visual fixations can also be a source of decisional bias that moves selection toward the fixated object. These contrasting results raise the question of whether visual fixations always index comparative processes during choice-based tasks, or whether they might better reflect internal preferences when the decision does not carry any economic or corporeal consequences. In two experiments, participants chose which of two objects were more aesthetically pleasing (Exp.1) or appeared more organic (Exp.2), and provided independent aesthetic ratings of the stimuli. Our results demonstrated that fixation parameters were a better index of choice in both decisional domains than of aesthetic preference. The data support models in which visual fixations are specifically related to the evolution of decision processes even when the decision has no tangible consequences. |
Malou Janssen; Jelmer P. De Vries; Britta K. Ischebeck; Maarten A. Frens; Josef N. Geest Small effects of neck torsion on healthy human voluntary eye movements Journal Article In: European Journal of Applied Physiology, vol. 113, no. 12, pp. 3049–3057, 2013. @article{Janssen2013, PURPOSE: Although several lines of research suggest that the head and eye movement systems interact, previous studies have reported that applying static neck torsion does not affect smooth pursuit eye movements in healthy controls. This might be due to several methodological issues. Here we systematically investigated the effect of static neck torsion on smooth pursuit and saccadic eye movement behavior in healthy subjects. METHODS: In twenty healthy controls, we recorded eye movements with video-oculography while their trunk was in static rotation relative to the head (7 positions from 45° to the left to 45° to the right). Subjects looked at a moving dot on the screen. In two separate paradigms, we evoked saccadic and smooth pursuit eye movements, using both predictable and unpredictable target motions. RESULTS: Smooth pursuit gain and saccade peak velocity decreased slightly with increasing neck torsion. Smooth pursuit gains were higher for predictable target movements than for unpredictable target movements. Saccades to predictable targets had lower latencies, but reduced gains compared to saccades to unpredictable targets. No interactions between neck torsion and target predictability were observed. CONCLUSION: Applying static neck torsion has small effects on voluntary eye movements in healthy subjects. These effects are not modulated by target predictability. |
Yu-Cin Jian; Ming-Lei Chen; Hwa-Wei Ko Context effects in processing of Chinese academic words: An eye-tracking investigation Journal Article In: Reading Research Quarterly, vol. 48, no. 4, pp. 403–413, 2013. @article{Jian2013, This study investigated context effects of online processing of Chinese academic words during text reading. Undergraduate participants were asked to read Chinese texts that were familiar or unfamiliar (containing physics terminology) to them. Physics texts were selected first, and then we replaced the physics terminology with familiar words; other common words remained the same in both text versions. Our results indicate that readers experienced longer rereading times and total fixation durations for the same common words in the physics texts than for the corresponding texts. Shorter gaze durations were observed for the replaced words than the physics terminology; however, the duration of participants' first fixations on these two word types did not differ from each other. Furthermore, although the participants performed similar reading paths after encountering the target words of the physics terminology and replaced words, their processing duration of the current sentences was very different. They reread the physics terminology more times and spent more reading time on the current sentences containing the physics terminology, searching for more information to aid comprehension. This study showed that adult readers seemed to successfully access each Chinese character's meaning but initially failed to access the meaning of the physics terminology. This could be attributable to the nature of the formation of Chinese words; however, the use of contextual information to comprehend unfamiliar words is a universal phenomenon. |
Li Jingling; Da-Lun Tang; Chia-huei Tseng Salient collinear grouping diminishes local salience in visual search: An eye movement study Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–10, 2013. @article{Jingling2013, Our eyes and attention are easily attracted to salient items in search displays. When a target is spatially overlapped with a salient distractor (overlapping target), it is usually detected more easily than when it is not (nonoverlapping target). Jingling and Tseng (2013), however, found that a salient distractor impaired visual search when the distractor was comprised of more than nine bars collinearly aligned to each other. In this study, we examined whether this search impairment is due to reduction of salience on overlapping targets. We used the short-latency saccades as an index for perceptual salience. Results showed that a long collinear distractor decreases perceptual salience of local overlapping targets in comparison to nonoverlapping targets, reflected by a smaller proportion of the short-latency saccades. Meanwhile, a salient noncollinear distractor increases salience of overlapping targets. Our results led us to conclude that a long collinear distractor diminishes the perceptual salience of the target, a factor which poses a counter-intuitive condition in which a target on a salient region becomes less salient. We discuss the possible causes for our findings, including crowding, the global precedence effect, and the filling-in of a collinear contour. |
Jiri Lukavsky Eye movements in repeated multiple object tracking Journal Article In: Journal of Vision, vol. 13, pp. 1–16, 2013. @article{JiriLukavsky2013, Contrary to other tasks (free viewing, recognition, visual search), participants often fail to recognize repetition of trials in multiple object tracking (MOT). This study examines the intra- and interindividual variability of eye movements in repeated MOT trials along with the adherence of eye movements to the previously described strategies. I collected eye movement data from 20 subjects during 64 MOT trials at slow speed (5°/s). Half of the trials were repeated four times, and the remaining trials were unique. I measured the variability of eye-movement patterns during repeated trials using normalized scanpath saliency extended to the temporal domain. People tended to make similar eye movements during repeated presentations (with no or vague feeling of repetition) and the interindividual similarity remained at the same level over time. Several strategies (centroid strategy and its variants) were compared with the data, and they accounted for 48.8% to 54.3% of eye-movement variability, which was less than the variability explained by other people's eye movements (68.6%). The results show that the observed intra- and interindividual similarity of eye movements is only partly explained by the current models. |
Beth P. Johnson; Nicole J. Rinehart; Owen B. White; Lynette Millist; Joanne Fielding Saccade adaptation in autism and Asperger's disorder Journal Article In: Neuroscience, vol. 243, pp. 76–87, 2013. @article{Johnson2013, Autism and Asperger's disorder (AD) are neurodevelopmental disorders primarily characterized by deficits in social interaction and communication, however motor coordination deficits are increasingly recognized as a prevalent feature of these conditions. Although it has been proposed that children with autism and AD may have difficulty utilizing visual feedback during motor learning tasks, this has not been directly examined. Significantly, changes within the cerebellum, which is implicated in motor learning, are known to be more pronounced in autism compared to AD. We used the classic double-step saccade adaptation paradigm, known to depend on cerebellar integrity, to investigate differences in motor learning and the use of visual feedback in children aged 9–14 years with high-functioning autism (HFA; IQ > 80; n = 10) and AD (n = 13). Performance was compared to age and IQ matched typically developing children (n = 12). Both HFA and AD groups successfully adapted the gain of their saccades in response to perceived visual error, however the time course for adaptation was prolonged in the HFA group. While a shift in saccade dynamics typically occurs during adaptation, we revealed aberrant changes in both HFA and AD groups. This study contributes to a growing body of evidence centrally implicating the cerebellum in ocular motor dysfunction in autism. Specifically, these findings collectively imply functional impairment of the cerebellar network and its inflow and outflow tracts that underpin saccade adaptation, with greater disturbance in HFA compared to AD. |
Manon W. Jones; Jane Ashby; Holly P. Branigan Dyslexia and fluency: Parafoveal and foveal influences on rapid automatized naming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 554–567, 2013. @article{Jones2013, The ability to coordinate serial processing of multiple items is crucial for fluent reading but is known to be impaired in dyslexia. To investigate this impairment, we manipulated the orthographic and phonological similarity of adjacent letters online as dyslexic and nondyslexic readers named letters in a serial naming (RAN) task. Eye movements and voice onsets were recorded. Letter arrays contained target item pairs in which the second letter was orthographically or phonologically similar to the first letter when viewed either parafoveally (Experiment 1a) or foveally (Experiment 1b). Relative to normal readers, dyslexic readers were more affected by orthographic confusability in Experiment 1a and phonological confusability in Experiment 1b. Normal readers were slower to process orthographically similar letters in Experiment 1b. Findings indicate that the phonological and orthographic processing problems of dyslexic readers manifest differently during parafoveal and foveal processing, with each contributing to slower RAN performance and impaired reading fluency. |
Benjawan Kasisopa; Ronan G. Reilly; Sudaporn Luksaneeyanawin; Denis Burnham Eye movements while reading an unspaced writing system: The case of Thai Journal Article In: Vision Research, vol. 86, pp. 71–80, 2013. @article{Kasisopa2013, Thai has an alphabetic script with a distinctive feature: it has no spaces between words. Since previous research with spaced alphabetic systems (e.g., English) has suggested that readers use spaces to guide eye movements, it is of interest to investigate what physical factors might guide Thai readers' eye movements. Here the effects of word-initial and word-final position-specific character frequency, word-boundary bigram frequency, and overall word frequency on 30 Thai adults' eye movements when reading unspaced and spaced text were investigated. Linear mixed-effects model analyses of viewing time measures (first fixation duration, single fixation duration, and gaze duration) and of landing sites were conducted. Thai readers tended to land their first fixation at or near the centre of words, just as readers of spaced texts do. A critical determinant of this was word boundary characters: higher position-specific frequency of initial and of final characters significantly facilitated landing sites closer to the word centre, while word-boundary bigram frequency appeared to behave as a proxy for initial and final position-specific character frequency. It appears, therefore, that Thai readers make use of the position-specific frequencies of word boundary characters in targeting words and directing eye movements to an optimal landing site. |
Kai Kaspar; Teresa Maria Hloucal; Jürgen Kriz; Sonja Canzler; Ricardo Ramos Gameiro; Vanessa Krapp; Peter König Emotions' impact on viewing behavior under natural conditions Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52737, 2013. @article{Kaspar2013, Human overt attention under natural conditions is guided by stimulus features as well as by higher cognitive components, such as task and emotional context. In contrast to the considerable progress regarding the former, insight into the interaction of emotions and attention is limited. Here we investigate the influence of the current emotional context on viewing behavior under natural conditions. In two eye-tracking studies participants freely viewed complex scenes embedded in sequences of emotion-laden images. The latter primes constituted specific emotional contexts for neutral target images. Viewing behavior toward target images embedded into sets of primes was affected by the current emotional context, revealing the intensity of the emotional context as a significant moderator. The primes themselves were not scanned in different ways when presented within a block (Study 1), but when presented individually, negative primes were more actively scanned than positive primes (Study 2). These divergent results suggest an interaction between emotional priming and further context factors. Additionally, in most cases primes were scanned more actively than target images. Interestingly, the mere presence of emotion-laden stimuli in a set of images of different categories slowed down viewing activity overall, but the known effect of image category was not affected. Finally, viewing behavior remained largely constant on single images as well as across the targets' post-prime positions (Study 2). We conclude that the emotional context significantly influences the exploration of complex scenes and the emotional context has to be considered in predictions of eye-movement patterns. |
Sally R. Ke; Jessica Lam; Dinesh K. Pai; Miriam Spering Directional asymmetries in human smooth pursuit eye movements Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 6, pp. 4409–4421, 2013. @article{Ke2013, PURPOSE: Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. METHODS: In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. RESULTS: Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS: Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones. |
Dirk Kerzel; Josef G. Schönhammer Salient stimuli capture attention and action Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1633–1643, 2013. @article{Kerzel2013, Reaction times in a visual search task increase when an irrelevant but salient stimulus is presented. Recently, the hypothesis that the increase in reaction times was due to attentional capture by the salient distractor has been disputed. We devised a task in which a search display was shown after observers had initiated a reaching movement toward a touch screen. In a display of vertical bars, observers had to touch the oblique target while ignoring a salient color singleton. Because the hand was moving when the display appeared, reach trajectories revealed the current selection for action. We observed that salient but irrelevant stimuli changed the reach trajectory at the same time as the target was selected, about 270 ms after movement onset. The change in direction was corrected after another 160 ms. In a second experiment, we compared manual selection of color and orientation targets and observed that selection occurred earlier for color than for orientation targets. Salient stimuli support faster selection than do less salient stimuli. Under the assumption that attentional selection for action and perception are based on a common mechanism, our results suggest that attention is indeed captured by salient stimuli. |
Donatas Jonikaitis; Martin Szinte; Martin Rolfs; Patrick Cavanagh Allocation of attention across saccades Journal Article In: Journal of Neurophysiology, vol. 109, no. 5, pp. 1425–1434, 2013. @article{Jonikaitis2013, Whenever the eyes move, spatial attention must keep track of the locations of targets as they shift on the retina. This study investigated transsaccadic updating of visual attention to cued targets. While observers prepared a saccade, we flashed an irrelevant, but salient, color cue in their visual periphery and measured the allocation of spatial attention before and after the saccade using a tilt discrimination task. We found that just before the saccade, attention was allocated to the cue's future retinal location, its predictively "remapped" location. Attention was sustained at the cue's location in the world across the saccade, despite the change of retinal position, whereas it decayed quickly at the cue's retinal location after the eye landed. By extinguishing the color cue across the saccade, we further demonstrate that the visual system relies only on predictive allocation of spatial attention, as the presence of the cue after the saccade did not substantially affect attentional allocation. These behavioral results support and extend physiological evidence showing predictive activation of visual neurons when an attended stimulus will fall in their receptive field after a saccade. Our results show that tracking of spatial locations across saccades is a plausible consequence of physiological remapping. |
Donatas Jonikaitis; Jan Theeuwes Dissociating oculomotor contributions to spatial and feature-based selection Journal Article In: Journal of Neurophysiology, vol. 110, no. 7, pp. 1525–1534, 2013. @article{Jonikaitis2013a, Saccades not only deliver the high-resolution retinal image requisite for visual perception, but processing stages associated with saccade target selection affect visual perception even before the eye movement starts. These presaccadic effects are thought to arise from two visual selection mechanisms: spatial selection that enhances processing of the saccade target location and feature-based selection that enhances processing of the saccade target features. By measuring oculomotor performance and perceptual discrimination, we determined which selection mechanisms are associated with saccade preparation. We observed both feature-based and space-based selection during saccade preparation but found that feature-based selection was neither related to saccade initiation nor was it affected by simultaneously observed redistribution of spatial selection. We conclude that oculomotor selection biases visual selection only in a spatial, feature-unspecific manner. |
Timothy R. Jordan; Victoria A. McGowan; Kevin B. Paterson What's left? An eye movement study of the influence of interword spaces to the left of fixation during reading Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 551–557, 2013. @article{Jordan2013, In English and other alphabetic systems read from left to right, the useful information acquired during each fixational pause is generally reported to extend 14-15 character spaces to the right of each fixation, but only 3-4 character spaces to the left, and certainly no farther than the beginning of the fixated word. However, this leftward extent is remarkably small and seems inconsistent with the general bilateral symmetry of vision. Accordingly, in the present study we investigated the influence of a fundamental component of text to the left of fixation (interword spaces) using a well-established eyetracking paradigm in which invisible boundaries were set up along individual sentence displays that were then read. Each boundary corresponded to the leftmost edge of a word in a sentence, so that as the eyes crossed a boundary, interword spaces in the text to the left of that word were obscured (by inserting a letter x). The proximity of the obscured text during each fixational pause was maintained at one, two, three, or four interword spaces from the left boundary of each fixated word. Normal fixations, regressions, and progressive saccades were disrupted when the obscured text was up to three interword spaces (an average of over 12 character spaces) away from the fixated word, while four interword spaces away produced no disruption. These findings suggest that influential information from text is acquired during each fixational pause from much farther leftward than is generally realized and that this information contributes to normal reading performance. Implications of these findings for reading are discussed. |
Holly S. S. L. Joseph; Simon P. Liversedge Children's and adults' on-line processing of syntactically ambiguous sentences during reading Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e54141, 2013. @article{Joseph2013, While there has been a fair amount of research investigating children's syntactic processing during spoken language comprehension, and a wealth of research examining adults' syntactic processing during reading, as yet very little research has focused on syntactic processing during text reading in children. In two experiments, children and adults read sentences containing a temporary syntactic ambiguity while their eye movements were monitored. In Experiment 1, participants read sentences such as, ‘The boy poked the elephant with the long stick/trunk from outside the cage' in which the attachment of a prepositional phrase was manipulated. In Experiment 2, participants read sentences such as, ‘I think I'll wear the new skirt I bought tomorrow/yesterday. It's really nice' in which the attachment of an adverbial phrase was manipulated. Results showed that adults and children exhibited similar processing preferences, but that children were delayed relative to adults in their detection of initial syntactic misanalysis. It is concluded that children and adults have the same sentence-parsing mechanism in place, but that it operates with a slightly different time course. In addition, the data support the hypothesis that the visual processing system develops at a different rate than the linguistic processing system in children. |
Holly S. S. L. Joseph; Kate Nation; Simon P. Liversedge Using eye movements to investigate word frequency effects in children's sentence reading Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 207–222, 2013. @article{Joseph2013a, Although eye movements have been used widely to investigate how skilled adult readers process written language, relatively little research has used this methodology with children. This is unfortunate as, as we discuss here, eye-movement studies have significant potential to inform our understanding of children's reading development. We consider some of the empirical and theoretical issues that arise when using this methodology with children, illustrating our points with data from an experiment examining word frequency effects in 8-year-old children's sentence reading. Children showed significantly longer gaze durations to low- than high-frequency words, demonstrating that linguistic characteristics of text drive children's eye movements as they read. We discuss these findings within the broader context of how eye-movement studies can inform our understanding of children's reading, and can assist with the development of appropriately targeted interventions to support children as they learn to read. |
Evgenia Kanonidou; Irene Gottlob; Frank A. Proudlock The effect of font size on reading performance in strabismic amblyopia: An eye movement investigation Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 1, pp. 451–459, 2013. @article{Kanonidou2013, PURPOSE: We investigated the effect of font size on reading speed and ocular motor performance in strabismic amblyopes during text reading under monocular and binocular viewing conditions. METHODS: Eye movements were recorded at 250 Hz using a head-mounted infrared video eye tracker in 15 strabismic amblyopes and 18 age-matched controls while silently reading paragraphs of text at font sizes equivalent to 1.0 to 0.2 logMAR acuity. Reading under monocular viewing with the amblyopic/nondominant eye and the nonamblyopic/dominant eye was compared to binocular viewing. Mean reading speed; number, amplitude, and direction of saccades; and fixation duration were calculated for each font size and viewing condition. RESULTS: Reading speed was significantly slower in amblyopes compared to controls for all font sizes during monocular reading with the amblyopic eye (P = 0.004), but only for smaller font sizes for reading with the nonamblyopic eye (P = 0.045) and binocularly (P = 0.038). The most significant ocular motor change was that strabismic amblyopes made more saccades per line than controls irrespective of font size and viewing conditions (P < 0.05 for all). There was no significant difference in saccadic amplitudes, and fixation duration was only significantly longer in strabismic amblyopes when reading smaller fonts with the amblyopic eye viewing. CONCLUSIONS: Ocular motor deficits exist in strabismic amblyopes during reading even when reading speeds are normal and when visual acuity is not a limiting factor; that is, when reading larger font sizes with nonamblyopic eye viewing and binocular viewing. This suggests that these abnormalities are not related to crowding. |
Shah Khalid; Ulrich Ansorge The Simon effect of spatial words in eye movements: Comparison of vertical and horizontal effects and of eye and finger responses Journal Article In: Vision Research, vol. 86, pp. 6–14, 2013. @article{Khalid2013, Spatial stimulus location information impacts on saccades: Pro-saccades (saccades towards a stimulus location) are faster than anti-saccades (saccades away from the stimulus). This is true even when the spatial location is irrelevant for the choice of the correct response (Simon effect). The results are usually ascribed to spatial sensorimotor coupling. However, with finger responses Simon effects can be observed with irrelevant spatial word meaning, too. Here we tested whether a Simon effect of spatial word meaning in saccades could be observed for words with vertical ("above" or "below") and horizontal ("left" or "right") meanings. We asked our participants to make saccades towards one of two saccade targets depending on the color of the centrally presented spatial word, while ignoring its spatial meaning (Experiments 1 and 2a). Results are compared to a condition in which finger responses instead of saccades were required (Experiment 2b). In addition to response latency we compared the time course of vertical and horizontal effects. We found Simon effects due to irrelevant spatial meaning of the words in both saccades and finger responses. The time course investigations revealed different patterns for vertical and horizontal effects in saccades, indicating that distinct processes may be involved in the two types of Simon effects. |
Shah Khalid; Matthew Finkbeiner; Peter König; Ulrich Ansorge Subcortical human face processing? Evidence from masked priming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 989–1002, 2013. @article{Khalid2013a, Face processing without awareness might depend on subcortical structures (retino-collicular projection), cortical structures, or a combination of the two. The present study was designed to tease apart these possibilities. Because the retino-collicular projection is more sensitive to low spatial frequencies, we used masked (subliminal) face prime images that were spatially low-pass filtered or high-pass filtered. The masked primes were presented in the periphery prior to clearly visible target faces. Participants had to discriminate between male and female target faces, and we recorded prime-target congruence effects – that is, the difference in discrimination speed between congruent pairs (with prime and target of the same sex) and incongruent pairs (with prime and target of different sexes). In two experiments, we consistently found that masked low-pass filtered face primes produced a congruence effect and that masked high-pass filtered face primes did not. Together our results support the assumption that the retino-collicular route, which carries the low spatial frequencies, also conveys sex-specific features of face images, contributing to subliminal face processing. |
Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saïda Bouakaz Framework for reliable, real-time facial expression recognition for low resolution images Journal Article In: Pattern Recognition Letters, vol. 34, no. 10, pp. 1159–1168, 2013. @article{Khan2013, Automatic recognition of facial expressions is a challenging problem, especially for low spatial resolution facial images. It has many potential applications in human-computer interactions, social robots, deceit detection, interactive video and behavior monitoring. In this study we present a novel framework that can recognize facial expressions very efficiently and with high accuracy even for very low resolution facial images. The proposed framework is memory and time efficient as it extracts texture features in a pyramidal fashion only from the perceptually salient regions of the face. We tested the framework on different databases, including the Cohn-Kanade (CK+) posed facial expression database, spontaneous expressions from the MMI facial expression database, and the FG-NET facial expressions and emotions database (FEED), and obtained very good results. Moreover, our proposed framework exceeds state-of-the-art methods for expression recognition on low resolution images. |
Farhan A. Khawaja; Liu D. Liu; Christopher C. Pack Responses of MST neurons to plaid stimuli Journal Article In: Journal of Neurophysiology, vol. 110, no. 1, pp. 63–74, 2013. @article{Khawaja2013, The estimation of motion information from retinal input is a fundamental function of the primate dorsal visual pathway. Previous work has shown that this function involves multiple cortical areas, with each area integrating information from its predecessors. Compared with neurons in the primary visual cortex (V1), neurons in the middle temporal (MT) area more faithfully represent the velocity of plaid stimuli, and the observation of this pattern selectivity has led to two-stage models in which MT neurons integrate the outputs of component-selective V1 neurons. Motion integration in these models is generally complemented by motion opponency, which refines velocity selectivity. Area MT projects to a third stage of motion processing, the medial superior temporal (MST) area, but surprisingly little is known about MST responses to plaid stimuli. Here we show that increased pattern selectivity in MST is associated with greater prevalence of the mechanisms implemented by two-stage MT models: Compared with MT neurons, MST neurons integrate motion components to a greater degree and exhibit evidence of stronger motion opponency. Moreover, when tested with more challenging unikinetic plaid stimuli, an appreciable percentage of MST neurons are pattern selective, while such selectivity is rare in MT. Surprisingly, increased motion integration is found in MST even for transparent plaid stimuli, which are not typically integrated perceptually. Thus the relationship between MST and MT is qualitatively similar to that between MT and V1, as repeated application of basic motion mechanisms leads to novel selectivities at each stage along the pathway. |
Markku Kilpeläinen; Christian N. L. Olivers; Jan Theeuwes The eyes like their targets on a stable background Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–11, 2013. @article{Kilpelaeinen2013, In normal human visual behavior, our visual system is continuously exposed to abrupt changes in the local contrast and mean luminance in various parts of the visual field, as caused by actual changes in the environment, as well as by movements of our body, head, and eyes. Previous research has shown that both threshold and suprathreshold contrast percepts are attenuated by a co-occurring change in the mean luminance at the location of the target stimulus. In the current study, we tested the hypothesis that contrast targets presented with a co-occurring change in local mean luminance receive fewer fixations than targets presented in a region with a steady mean luminance. To that end we performed an eye-tracking experiment involving eight observers. On each trial, after a 4 s adaptation period, an observer's task was to make a saccade to one of two target gratings, presented simultaneously at 7° eccentricity, separated by 30° in polar angle. When both targets were presented with a steady mean luminance, saccades landed mostly in the area between the two targets, signifying the classic global effect. However, when one of the targets was presented with a change in luminance, the saccade distribution was biased towards the target with the steady luminance. The results show that the attenuation of contrast signals by co-occurring, ecologically typical changes in mean luminance affects fixation selection and is therefore likely to affect eye movements in natural visual behavior. |
Johann S. C. Kim; Gerhard Vossel; Matthias Gamer Effects of emotional context on memory for details: The role of attention Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e77405, 2013. @article{Kim2013, It was repeatedly demonstrated that a negative emotional context enhances memory for central details while impairing memory for peripheral information. This trade-off effect is assumed to result from attentional processes: a negative context seems to narrow attention to central information at the expense of more peripheral details, thus causing the differential effects in memory. However, this explanation has rarely been tested and previous findings were partly inconclusive. For the present experiment 13 negative and 13 neutral naturalistic, thematically driven picture stories were constructed to test the trade-off effect in an ecologically more valid setting as compared to previous studies. During an incidental encoding phase, eye movements were recorded as an index of overt attention. In a subsequent recognition phase, memory for central and peripheral details occurring in the picture stories was tested. Explicit affective ratings and autonomic responses validated the induction of emotion during encoding. Consistent with the emotional trade-off effect on memory, encoding context differentially affected recognition of central and peripheral details. However, contrary to the common assumption, the emotional trade-off effect on memory was not mediated by attentional processes. By contrast, results suggest that the relevance of attentional processing for later recognition memory depends on the centrality of information and the emotional context but not their interaction. Thus, central information was remembered well even when fixated very briefly whereas memory for peripheral information depended more on overt attention at encoding. 
Moreover, the influence of overt attention on memory for central and peripheral details seems to be much lower for an arousing as compared to a neutral context. |
Pilyoung Kim; Joseph Arizpe; Brooke H. Rosen; Varun Razdan; Catherine T. Haring; Sarah E. Jenkins; Christen M. Deveney; Melissa A. Brotman; R. James R. Blair; Daniel S. Pine; Chris I. Baker; Ellen Leibenluft Impaired fixation to eyes during facial emotion labelling in children with bipolar disorder or severe mood dysregulation Journal Article In: Journal of Psychiatry and Neuroscience, vol. 38, no. 6, pp. 407–416, 2013. @article{Kim2013a, Background: Children with bipolar disorder (BD) or severe mood dysregulation (SMD) show behavioural and neural deficits during facial emotion processing. In those with other psychiatric disorders, such deficits have been associated with reduced attention to eye regions while looking at faces. Methods: We examined gaze fixation patterns during a facial emotion labelling task among children with pediatric BD and SMD and among healthy controls. Participants viewed facial expressions with varying emotions (anger, fear, sadness, happiness, neutral) and emotional levels (60%, 80%, 100%) and labelled emotional expressions. Results: Our study included 22 children with BD, 28 with SMD and 22 controls. Across all facial emotions, children with BD and SMD made more labelling errors than controls. Compared with controls, children with BD spent less time looking at eyes and made fewer eye fixations across emotional expressions. Gaze patterns in children with SMD tended to fall between those of children with BD and controls, although they did not differ significantly from either of these groups on most measures. Decreased fixations to eyes correlated with lower labelling accuracy in children with BD, but not in those with SMD or in controls. Limitations: Most children with BD were medicated, which precluded our ability to evaluate medication effects on gaze patterns. Conclusion: Facial emotion labelling deficits in children with BD are associated with impaired attention to eyes. 
Future research should examine whether impaired attention to eyes is associated with neural dysfunction. Eye gaze deficits in children with BD during facial emotion labelling may also have treatment implications. Finally, children with SMD exhibited decreased attention to eyes to a lesser extent than those with BD, and these equivocal findings are worthy of further study. |
Hannah E. Kirk; Darren R. Hocking; Deborah M. Riby; Kim M. Cornish Linking social behaviour and anxiety to attention to emotional faces in Williams syndrome Journal Article In: Research in Developmental Disabilities, vol. 34, no. 12, pp. 4608–4616, 2013. @article{Kirk2013, The neurodevelopmental disorder Williams syndrome (WS) has been associated with a social phenotype of hypersociability, non-social anxiety and an unusual attraction to faces. The current study uses eye tracking to explore attention allocation to emotionally expressive faces. Eye gaze and behavioural measures of anxiety and social reciprocity were investigated in adolescents and adults with WS when compared to typically developing individuals of comparable verbal mental age (VMA) and chronological age (CA). Results showed significant associations between high levels of behavioural anxiety and attention allocation away from the eye regions of threatening facial expressions in WS. The results challenge early claims of a unique attraction to the eyes in WS and suggest that individual differences in anxiety may mediate the allocation of attention to faces in WS. |
Julie A. Kirkby; H. I. Blythe; Denis Drieghe; V. Benson; Simon P. Liversedge In: Behavior Research Methods, vol. 45, no. 3, pp. 664–678, 2013. @article{Kirkby2013, Previous studies examining binocular coordination during reading have reported conflicting results in terms of the nature of disparity (e.g. Kliegl, Nuthmann, & Engbert (Journal of Experimental Psychology General 135:12-35, 2006); Liversedge, White, Findlay, & Rayner (Vision Research 46:2363-2374, 2006)). One potential cause of this inconsistency is differences in acquisition devices and associated analysis technologies. We tested this by directly comparing binocular eye movement recordings made using the SR Research EyeLink 1000 and the Fourward Technologies Inc. DPI binocular eye-tracking systems. Participants read sentences or scanned horizontal rows of dot strings; for each participant, half the data were recorded with the EyeLink, and the other half with the DPIs. The viewing conditions in both testing laboratories were set to be very similar. Monocular calibrations were used. The majority of fixations recorded using either system were aligned, although data from the EyeLink system showed greater disparity magnitudes. Critically, for unaligned fixations, the data from both systems showed a majority of uncrossed fixations. These results suggest that variability in previous reports of binocular fixation alignment is attributable to the specific viewing conditions associated with a particular experiment (variables such as luminance and viewing distance), rather than acquisition and analysis software and hardware. |
Johannes Klackl; Michaela Pfundmair; Dmitrij Agroskin; Eva Jonas Who is to blame? Oxytocin promotes nonpersonalistic attributions in response to a trust betrayal Journal Article In: Biological Psychology, vol. 92, pp. 387–394, 2013. @article{Klackl2013, Recent research revealed that the neuropeptide oxytocin (OT) increases and maintains trustful behavior, even towards interaction partners that have proven to be untrustworthy. However, the cognitive mechanisms behind this effect are unclear. In the present paper, we propose that OT might boost trust through the link between angry rumination and the use of nonpersonalistic and personalistic attributions. Nonpersonalistic attributions put the blame for the betrayal on the perpetrator's situation, whereas personalistic attributions blame his dispositions for the event. We predict that OT changes attribution processes in favor of nonpersonalistic ones and thereby boosts subsequent trust. Participants played a classic trust game in which the opponent systematically betrayed their trust. As predicted, OT strengthened the relationship between angry rumination about the event and nonpersonalistic attribution of the opponents' behavior and weakened the link between angry rumination and personalistic attribution. Critically, nonpersonalistic attribution also mediated the interactive effect of OT and angry rumination on how strongly investments were reduced in the remaining rounds of the trust game. In summary, the present findings suggest that one underlying cognitive mechanism behind OT-induced trust might relate to how negative emotions evoked by a breach of trust influence the subsequent attributional analysis: OT seems to augment trust by fostering the interpretation of untrustworthy behavior as caused by non-personal factors. |
Jeffrey T. Klein; Michael L. Platt Social information signaling by neurons in primate striatum Journal Article In: Current Biology, vol. 23, pp. 691–696, 2013. @article{Klein2013, Social decisions depend on reliable information about others. Consequently, social primates are motivated to acquire information about the identity, social status, and reproductive quality of others [1]. Neurophysiological [2] and neuroimaging [3, 4] studies implicate the striatum in the motivational control of behavior. Neuroimaging studies specifically implicate the ventromedial striatum in signaling motivational aspects of social interaction [5]. Despite this evidence, precisely how striatal neurons encode social information remains unknown. Therefore, we probed the activity of single striatal neurons in monkeys choosing between visual social information at the potential expense of fluid reward. We show for the first time that a population of neurons located primarily in medial striatum selectively signals social information. Surprisingly, representation of social information was unrelated to simultaneously expressed social preferences. A largely nonoverlapping population of neurons that was not restricted to the medial striatum signaled information about fluid reward. Our findings demonstrate that information about social context and nutritive reward are maintained largely independently in striatum, even when both influence decisions to execute a single action. |
Reinhold Kliegl; Sven Hohenstein; Ming Yan; Scott A. McDonald How preview space/time translates into preview cost/benefit for fixation durations during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 581–600, 2013. @article{Kliegl2013, Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM, we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical preview) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) The shorter the saccade amplitude (i.e., the larger preview space), the shorter a subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern (increasing cost of a random-letter string in the parafovea associated with shorter saccade amplitudes) was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under "preview benefit", are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate, but also interfere with lexical access. |